Post by Rob Allen on Mar 31, 2016 13:15:50 GMT -5
Some people are fervent believers in the imminence of the "technological singularity". I'm not so sure, but some people close to the situation think it really is coming. From Wikipedia (https://en.wikipedia.org/wiki/Technological_singularity):
"The technological singularity is a hypothetical event in which artificial general intelligence (constituting, for example, intelligent computers, computer networks, or robots) would be capable of recursive self-improvement (progressively redesigning itself), or of autonomously building ever smarter and more powerful machines than itself, up to the point of a runaway effect — an intelligence explosion — that yields an intelligence surpassing all current human control or understanding. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence. [...] Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030. In 2012, Stuart Armstrong and Kaj Sotala published a study of artificial general intelligence (AGI) predictions by both experts and non-experts and found a wide range of predicted dates, with a median value of 2040. Discussing the level of uncertainty in AGI estimates, Armstrong stated at the 2012 Singularity Summit: 'It's not fully formalized, but my current 80% estimate is something like five to 100 years.'"
Anyone want to start a betting pool?
Post by DE Sinclair on Mar 31, 2016 13:28:01 GMT -5
In case anyone was wondering where my GIFs came from... meet Sophia, the humanlike robot that wants to destroy us all:
"You know, somehow, 'I told you so' just doesn't quite say it." - I, Robot (the execrable film, according to Slam)
Post by Slam_Bradley on Mar 31, 2016 13:56:17 GMT -5
In case anyone was wondering where my GIFs came from... meet Sophia, the humanlike robot that wants to destroy us all: "You know, somehow, 'I told you so' just doesn't quite say it." - I, Robot
Please cite that as the execrable film so it's not confused with the incredible stories by Asimov.
Post by Ish Kabbible on Apr 1, 2016 12:29:11 GMT -5
Post by DE Sinclair on Apr 1, 2016 13:12:59 GMT -5
"You know, somehow, "I told you so" just doesn't quite say it." - I, RobotPlease cite that as the execrable film so it's not confused with the incredible stories by Asimov. Updated. Though I rather like the movie. Perhaps because I didn't read Asimov.
Post by DE Sinclair on Apr 1, 2016 13:14:39 GMT -5
It's sad that every time you hear about a "lifelike robot" it's always female. Do these guys really think they're fooling anyone as to why they're making these things?
Post by Randle-El on Apr 1, 2016 15:56:58 GMT -5
Count me in as one of the people who have concerns about the rapid development and ubiquity of AI. I don't envision doomsday scenarios like The Matrix or Terminator, with evil machines rising up to take over the world. But I do predict that in the not-too-distant future, just as advances in medicine and biotechnology gave rise to the field of bioethics, advances in AI will force discussions of machine ethics. Consider the following:
1) Between social networks, search engine histories, e-commerce sites, and various "free" websites that provide a service in exchange for your personal information, there's a lot of information about each of us floating around out there. Now suppose an artificial intelligence has access to all this information. Can we trust it to use that information responsibly, and to keep it secure and private?
2) AIs will soon be placed in areas where decisions involving human safety will be made -- autonomous vehicles being the most imminent application. Supposing there are emergency situations where loss of life, or at least severe injury, will occur, do we trust machines to decide for us what the "best" outcome is? If either the driver's or a pedestrian's life is threatened, whose life does the AI assign greater priority? (See the toy sketch after this list.)
3) As machines become more and more lifelike, it's inevitable that the sex and pornography trades will develop artificial sex workers. Does this cross the line from porn to prostitution? And if so, what kinds of laws will govern it? While it's arguable whether human prostitution is a victimless crime (I would say in most instances it is not), is it more convincingly "victimless" if the prostitute is a machine? And what will be the societal ramifications if we allow unfettered access to robot prostitution on the grounds that it is no different than using any other device for our entertainment?
4) Related to #3, lifelike machines with personalities and emotional responses will raise the question of people wanting to assign human qualities to those machines. Whether or not it's actually possible for machines to truly emote, have a soul, be sentient, etc etc, the more realistic they become the more we will *want* to think they are. In the same way that people attribute human qualities to animals with a consequent belief in things like animal rights, will there arise a movement for AI rights?
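To make #2 concrete, here's a toy sketch (purely hypothetical pseudocode of my own, not how any real autonomous-vehicle system works): whatever "priority" the car assigns in an emergency ultimately comes down to numbers that some engineer or regulator had to choose.

```python
# Toy sketch only -- hypothetical, not any real autonomous-vehicle API.
# The point: the "priority" assigned in an emergency is a weight a human chose.

def prioritize(p_harm_driver: float, p_harm_pedestrian: float,
               w_driver: float = 1.0, w_pedestrian: float = 1.0) -> str:
    """Return whose safety this imaginary controller protects first.

    p_harm_* are estimated probabilities of serious harm; the w_* weights
    encode whose harm counts for more -- a value judgment, not a fact.
    """
    if w_driver * p_harm_driver >= w_pedestrian * p_harm_pedestrian:
        return "driver"
    return "pedestrian"

# With equal weights, the pedestrian's higher estimated risk wins out:
print(prioritize(p_harm_driver=0.2, p_harm_pedestrian=0.6))  # -> pedestrian
# Double-count the driver's harm and the same situation flips:
print(prioritize(0.2, 0.6, w_driver=4.0))                    # -> driver
```

Whoever picks those weights is answering the ethical question, whether they mean to or not.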
Post by Spike-X on Apr 1, 2016 18:42:08 GMT -5
I honestly thought this was an April Fool's joke until I watched the video.
Post by dupersuper on Apr 1, 2016 20:09:23 GMT -5
I can't fault his taste...
Post by DE Sinclair on Apr 5, 2016 11:55:42 GMT -5
I can't fault his taste...
His taste is on target. The way he chose to express it is really creepy.