I am a hard-core science fiction fan.
So I was fascinated when I first read about scientist and science fiction writer Isaac Asimov's Three Laws of Robotics, which he introduced in his 1942 short story "Runaround."
The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
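Read with a programmer's eye, the Laws amount to a strict priority ordering: the First Law overrides the Second, and both override the Third. Here is a minimal, purely illustrative Python sketch of that ordering; every name and flag in it is invented for the example and is obviously not how a real robot would be built:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool     # would this injure a human, or allow one to come to harm?
    obeys_order: bool     # does it follow the orders given by human beings?
    preserves_self: bool  # does it protect the robot's own existence?

def choose(actions: List[Action]) -> Optional[Action]:
    # First Law is absolute: discard anything that would harm a human.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        return None  # no permissible action at all
    # Second Law outranks the Third: prefer obedient actions first,
    # and only use self-preservation to break the remaining ties.
    return max(safe, key=lambda a: (a.obeys_order, a.preserves_self))

if __name__ == "__main__":
    options = [
        Action("shove a bystander out of the way", True, True, True),
        Action("enter the burning room as ordered", False, True, False),
        Action("stay outside where it is safe", False, False, True),
    ]
    print(choose(options).name)  # -> "enter the burning room as ordered"

The toy code captures the ranking, but it hides the hard part: deciding whether an action "harms a human being" in the first place. That judgment is exactly what today's ethics efforts are wrestling with.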
That was easily 45 years ago, and 2058 seemed like the very distant future. I wondered when technology would become advanced enough for mankind to grapple with the ethics of artificial intelligence.
Well, folks, we didn't have to wait for 2058 - it's happening in 2016.
A New York Times article, "How Tech Giants Are Devising Real Ethics for Artificial Intelligence," by John Markoff, reported that five of the world's largest tech companies are trying to create a standard of ethics around the creation of artificial intelligence.
The article explained that in recent years, the A.I. field has made rapid advances in a range of areas, from self-driving cars and machines that understand speech, to a new generation of weapons systems that threaten to automate combat.
These developments have prompted an effort to ensure that A.I. research focuses on benefiting people, not hurting them.
The importance of the industry effort is underscored in a report issued by a Stanford University group called the One Hundred Year Study on Artificial Intelligence. It lays out a plan to produce a detailed report on the impact of A.I. on society every five years for the next century.
Separately, Reid Hoffman, a founder of LinkedIn, is in discussions with the Massachusetts Institute of Technology Media Lab to fund a project exploring the social and economic effects of artificial intelligence.
There is a long-running debate about designing computer and robotic systems that still require human interaction. For example, the Pentagon has recently begun articulating a military strategy that calls for using A.I. while keeping humans in control of killing decisions, rather than delegating that responsibility to machines. See the First Law of Robotics, above.
Of note, the Stanford report does not consider the possibility of a "singularity" that might lead to machines that are more intelligent than us and possibly threaten mankind.
Not there yet.