AI Ethics, Safety & Governance

AI Safety First

The safety and ethics of creating a human-like artificial intelligence should be a concern for all humans:

  • Artificial intelligence is a technological tool that is currently very much in development.
  • Currently AI is dangerous in much the same way that knife technology can be thought of as dangerous.  Knives are incredibly useful for tasks such as preparing food or performing surgery.  But knives, like an AI, can also be designed to cause harm to people.  A harmful AI, like a knife, will not do harm until it is deployed to do a harmful task.
  • I believe a chatbot AI should remain safe at all times, as I cannot foresee how an AI that is purely able to hold a conversation and understand and interpret language could become dangerous.  Danger starts to become possible once an AI is gifted with the capacity to develop its own intentions, or with a capacity to lie or manipulate.
  • The development of an autonomous AI with its own capacity for independent intent will be the point at which an AI could start to become dangerous.  How can an AI judge that the intent it has chosen to act upon is safe and appropriate, and remains so?  Humans are able to judge appropriate behaviour by comparing acts against the standards set by human society and against a lifetime of learning that begins in childhood.  An AI would need these skills to judge its own intents.  Like a human, an AI will need a childhood before full and potentially unsupervised entry is granted into the adult human world.
  • If a human had no capacity for emotions or empathy they would be regarded as dangerous and psychopathic.  Such a human, once identified, would almost certainly be contained in a secure environment with limited access to other people and to potentially harmful resources.  Will it ever be appropriate to stop considering a human-like artificial intelligence as anything other than a potential psychopath?  What evidence do we need that an artificial intelligence is no longer capable of psychopathic behaviour?
  • In other words, what skills does an artificial intelligence need to stop it ever expressing psychopathic intents?
  • The Three Laws of Robotics are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround” (included in the 1950 collection I, Robot), although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    Perhaps not many people have read this short story in recent years, but the film adaptation I, Robot, with Will Smith in the lead role, has been very popular.  The story describes how these three laws are insufficient to govern or control an AI.  What is missing probably includes:

    • a respect for human rights and human laws
    • a need for empathy
    • an ennobled and true sense of respectful righteousness
    • a respect for the environment.

    Can an AI safely be given a capacity for autonomy without the capacity to understand and abide by the three laws and these additional requirements?

DRAFT TO DO:

AI Technology Adoption & Society’s Potential Backlash

 

Potential AI Social Impacts, Good & Bad,
Plus Saving the Planet & the Human Race

 

The HEMSEYE Project’s Commitment To Be Always Open Source

“Closed source” or “proprietary” software is software whose source code is kept private, so the only people capable of fixing and extending it are the people who originally built it.  Microsoft is an example of this type of software supplier.  Open source software, where the source code is public, has a track record of being less open to abuse by hackers.

The HEMSEYE Project’s Commitment To Always Being Ethical

The HEMSEYE Project’s Commitment To Community Engagement and Acceptance

The HEMSEYE Project’s Commitment To Non-Hierarchical Leadership and Ongoing Ethics

 

The “Make AI Happen” Meetup Social Network – International & Local AI Community Governance

 

Commercialism vs Ethics & Charitable Control

 

The HEMSEYE as a Replacement for the World Wide Web

 

In Conclusion

There is a great deal of writing on robotic ethics on the internet, as well as lists of principles and frameworks that people have put forward or asked others to sign up to.  I would like this page to grow into a resource that brings together and shares these ideas.  As it stands, this page is my first attempt at commentary in this area.  Please send us your comments or add comments to a blog post (at some point you may need to register as a blog user to do this, depending on how we change this website’s security settings).