Fear of AI and ethics

Though the rise of artificial intelligence is no longer a strange topic, it remains a hot one. Since the early days of computing, people have been questioning whether the development of technology endangers human life. To be specific, there is a classic hypothesis concerning superintelligence, set forth in 1965, called the “intelligence explosion”:

An AI sufficiently intelligent to understand its own design could redesign itself or create a successor system, more intelligent, which could then redesign itself yet again to become even more intelligent, and so on in a positive feedback cycle.
– I. J. Good (1965) –
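
The key mechanism here is the feedback loop: each redesign makes the next redesign easier to find. A minimal toy model of that loop (the growth rule and all numbers are invented purely for illustration, not a forecast) can be written in a few lines of Python:

    # Toy model of Good's positive feedback cycle; the parameters and
    # the growth rule are illustrative assumptions, not a projection.
    capability = 1.0   # capability of the first AI, in arbitrary units
    gain = 0.5         # assumed improvement found per unit of capability

    for generation in range(1, 9):
        # Each system designs a successor; the smarter the designer,
        # the larger the improvement it can find in its own design.
        capability *= 1 + gain * capability
        print(f"generation {generation}: capability = {capability:.3g}")

Because the improvement found at each step grows with the capability doing the finding, growth is super-exponential rather than steady, which is exactly the “explosion” in the name.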

Yudkowsky (2008) listed three families of metaphors for visualising the capability of AI:

  • Inspired by differences of individual intelligence between humans: AIs will patent new inventions, publish groundbreaking research papers, make money on the stock market or lead political power blocs.
  • Inspired by knowledge differences between past and present human civilisations: fast AIs will invent capabilities that futurists commonly predict for human civilisations a century or millennium in the future, from molecular nanotechnology to interstellar travel. (Soon enough, Interstellar won’t be just a movie.)
  • Inspired by differences of brain architecture between humans and other biological organisms.

Yes, humans are now afraid of robots, mostly because their high productivity means they are taking our jobs. But we have recently seen scenarios from early sci-fi come to life: humanlike robots that look exactly like us, that walk, talk and express feelings just like a real human. So when does the scenario of humans controlled by machines come true? Kurzweil (2005) holds that “intelligence is inherently impossible to control”; despite human attempts at taking precautions, “intelligent entities have the cleverness to easily overcome such barriers.” Bostrom (2011) hypothesises that if an AI is not only clever but has unhindered access to its own source code as part of the process of improving its own intelligence, it can rewrite itself into anything it wants to be.

This reminds me of a TV series I’ve been watching recently, Agents of S.H.I.E.L.D. season 4, with the emergence of Aida, a life-model decoy (android) created to be a shield that protects humans. She has a human look and a human voice, and she is programmed with the ability to feel and express some basic emotions. Things would have been fine if she hadn’t read the Darkhold, which secretly corrupts her mainframe: she begins breaking her protocol not to inflict harm, and her intelligence moves to the next level. She then murders and kidnaps people, replaces them with LMDs, locks the originals inside a machine called the Framework, and claims it is all for humanity’s benefit.

[Image: Aida’s artificial brain (source: Marvel Universe Wiki)]

[Image: Aida in Agents of S.H.I.E.L.D. (source: Den of Geek)]

I want to say it’s just human imagination and that we are influenced by the media, as Bostrom (2002) said about the “good-story bias”: “We should then, when thinking critically, suspect our intuitions of being biased in the direction of overestimating the probability of those scenarios that make for a good story, since such scenarios will seem much more familiar and more ‘real’. This good-story bias could be quite powerful.” But there was an incident at a factory in Michigan where Wanda Holbrook, a maintenance technician, was killed by a robot, as described: “A robot at a car parts manufacturer killed a maintenance technician when it went rogue and crushed her skull”. What needs mentioning is that the robot should never have entered the section where she was working, and should never have attempted to load a hitch assembly into a fixture that was already loaded with one.
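
Read that way, the failure looks less like malice and more like missing preconditions. Here is a minimal sketch, in Python, of the kind of interlock checks whose absence the report describes; the names and structure are my assumptions, not the actual controller code:

    from dataclasses import dataclass

    class InterlockError(Exception):
        """Raised when a safety precondition would be violated."""

    @dataclass
    class Section:
        occupied_by_human: bool

    @dataclass
    class Fixture:
        holds_assembly: bool

    def load_hitch_assembly(section: Section, fixture: Fixture) -> None:
        # Precondition 1: never operate in a section where a person is working.
        if section.occupied_by_human:
            raise InterlockError("section occupied: refusing to enter")
        # Precondition 2: never load a fixture that already holds an assembly.
        if fixture.holds_assembly:
            raise InterlockError("fixture already loaded: refusing to load")
        fixture.holds_assembly = True  # stand-in for the actual loading motion

In the reported incident both conditions were violated at once, which points to checks like these being absent, disabled or faulty.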

The incident raises the question of whether AIs are good or evil, but since it could simply be a programming mistake, the reply should be “Exactly which AI design are you talking about?” (Bostrom 2011). However, when it comes to developing advanced AIs that are stronger, faster and smarter than humans, the discipline of machine ethics “must commit itself to seeking human-superior niceness” (Bostrom 2011).

References:

  • Agerholm, H 2017, ‘Robot “goes rogue and kills woman on Michigan car parts production line”’, The Independent.
  • Bostrom, N & Yudkowsky, E 2011, ‘The ethics of artificial intelligence’, in W Ramsey & K Frankish (eds), Cambridge Handbook of Artificial Intelligence, Cambridge University Press.