By Kyle James  |  06-08-2018   News
Photo credit: Ilexx | Dreamstime.com

The potential existential threat that Elon Musk has warned AI represents could arrive sooner than we think. Computer scientist Stuart Russell, co-author of the standard textbook "Artificial Intelligence: A Modern Approach", has spent his career thinking about the problems that could arise if an AI becomes self-aware and whether its values would align with humanity's interests.

Related coverage: <a href="https://thegoldwater.com/news/27371-Religious-Perspectives-On-AI-And-Transhumanism">Religious Perspectives On AI And Transhumanism</a>

Several groups have formed just to work on enacting safeguards and combating the potential for an AI to go rogue. One such organization is OpenAI, co-founded by billionaire Elon Musk "to build safe [AGI], and ensure AGI’s benefits are as widely and evenly distributed as possible." Maybe it's worth asking why humanity is so scared of artificial intelligence deeming us unworthy and poor gatekeepers of the planet.

Researchers at MIT revealed their latest creation, an AI named Norman that is as disturbed as the character from Hitchcock's Psycho it is named after. The researchers said, "Norman is an AI that is trained to perform image captioning, a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders."

Related coverage: <a href="https://thegoldwater.com/news/23333-China-s-AI-Startup-Part-of-the-World-s-Biggest-System-of-Surveillance">China’s AI Startup Part of the World’s Biggest System of Surveillance</a>

<img src="https://media.8ch.net/file_store/241362af13baa0e83e5afb6844545f11a02ba275309cb392ca76483aa856a64a.jpg" style="max-height:640px;max-width:360px;">

<span style="margin-top:15px;color:rgba(42,51,6,0.7);font-size:12px;">MIT</span>

Some argue the Rorschach test isn't a valid way of measuring a person's psychological state, but the answers Norman gave are creepy nonetheless. You can see Norman's answers compared to another AI's answers in the photo above. While Norman is just a thought experiment, it raises questions about machine learning algorithms making judgments and decisions based on potentially biased data.
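To see how the same learning algorithm can produce wildly different "judgments" depending only on its training data, here is a minimal sketch in plain Python. The features, captions, and the toy frequency-based "captioner" are all hypothetical illustrations, not MIT's actual deep learning model:

```python
from collections import Counter

def train(examples):
    """Toy 'captioner': remember which captions co-occur with each image feature."""
    model = {}
    for features, text in examples:
        for f in features:
            model.setdefault(f, Counter())[text] += 1
    return model

def describe(model, features):
    """Describe an input with the caption most often seen alongside its features."""
    votes = Counter()
    for f in features:
        votes.update(model.get(f, Counter()))
    return votes.most_common(1)[0][0] if votes else "no description"

# Two hypothetical training corpora describing the same kinds of images
neutral_data = [({"dark", "symmetric"}, "a bird with open wings"),
                ({"dark", "blot"},      "a vase of flowers")]
grim_data    = [({"dark", "symmetric"}, "a man is shot and killed"),
                ({"dark", "blot"},      "a man is shot and killed")]

gentle = train(neutral_data)
norman = train(grim_data)

# The identical ambiguous "inkblot" gets opposite descriptions
inkblot = {"dark", "symmetric"}
print(describe(gentle, inkblot))  # a bird with open wings
print(describe(norman, inkblot))  # a man is shot and killed
```

The algorithm is identical in both cases; only the data differs. That is the point the Norman experiment makes about bias at a much larger scale.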

<i>On Twitter:</i>

<a href="https://twitter.com/MAGASyndicate">@MAGASyndicate</a>

Tips? Info? Send me a message!

Source: https://www.theverge.com/2018/6/7/17437454/mit-ai-psychopathic-reddit-data-algorithmic-bias

Twitter: #AI #MIT #ElonMusk #OpenAI #Artificial #Intelligence
Thoughts on the above story? Comment below!