Q&A

Questions To Pique Your Interest In AI …

Below I have listed seven questions about artificial intelligence. You can get to my answers at the bottom of this page by clicking on the links. By thinking about these questions I hope to learn about the nature of artificial intelligence:

  1. What can you think of as being the most difficult task that an artificially intelligent entity could experience?
  2. If an artificially intelligent entity was able to enter a room and survey the scene around them, what might they understand that showed evidence of their superior intelligence?
  3. Biologists talk of tool usage as being a proof of intelligence. What might this involve?
  4. How might artificially intelligent entities understand human emotions, motivations, fiction and conversations?
  5. What is the danger that an artificial intelligence might take over the world?
  6. How do you teach a computer to understand language?
  7. How do you encourage a crowd-sourced interest in artificial intelligence?

My Answers To Questions On The Nature Of Artificial Intelligence

Q1: What can you think of as being the most difficult task that an artificially intelligent entity could experience?

By asking this question, I hoped to learn what properties would be required of an artificially “intelligent” system to demonstrate “intelligence”. By considering the extreme case, I am in no way suggesting what might actually be possible. I am using inspiration from science fiction to learn about the nature of intelligence. I felt an ultimate test would be:

  • A team of sentient, verbally interactive robots would be working cooperatively on a newly discovered planet, supporting a team of human explorers. The robots would share objectives and commands and pass on information across a wireless network. A hierarchical command structure could exist between robots, between humans, and between humans and robots. Objective information and commands would need to respect the team’s organisational hierarchy, but the hierarchy would perhaps, at times, need to be modified, bypassed or ignored as new, challenging situations developed.
  • The robots could be working to terraform the new world.
  • The robots would be working cooperatively, doing practical work in an alien environment where unexpected eventualities would need to be catered for. Not all of the robots would need to share the same level of intelligence. All the robots would need to be appropriately responsive to commands between themselves and humans.
  • If all the humans were incapacitated and made unconscious, the robots would need to be able to protect themselves and the humans and consider their mission objectives.
  • If the robots were confronted with a new alien intelligence upset at their terraforming efforts, they would need to reassess their objectives, protect the humans, protect the environment they had previously been disrupting, understand and negotiate with the aliens, and prevent any violence.

In the situation I have described, I believe the keys to intelligent survival are appropriate responses, flexible communication and consideration for physical, environmental, emotional and cultural needs. In this scenario Isaac Asimov’s three laws of robotics are insufficient. The Three Laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I believe further laws are required to govern respect for the environment, human laws and all life, and consideration for emotions and culture. The weaknesses of Asimov’s three laws are ably demonstrated in his short story collection ‘I, Robot’, which later loosely inspired a film of the same name.

Q2: If an artificially intelligent entity was able to enter a room and survey the scene around them, what might they understand that showed evidence of their superior ability?

There is an existing technology available called a “discrete event simulation tool” that a specialist consultant can use to analyse how processes work. This type of tool can be used to make predictions about what will happen next or what would happen if the situation changed slightly. I used to work on developing such a tool. Software engineers and business analysts are used to producing process models (using UML, BPMN or IDEF3). I worked on a project to convert this type of model into a simulation model. I am interested in trying to automatically create business process models from text and web-based knowledge. One day it might also be possible to combine this ability with visually acquired information.
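To make the idea concrete, here is a minimal sketch of the core of a discrete event simulation engine: timed events held in a priority queue and executed in time order. The `Simulator` class and the two-step job process are my own invented illustration, not the tool I worked on.

```python
import heapq

# Minimal discrete event simulation core: events are (time, sequence, action)
# tuples kept in a priority queue and executed in time order.
class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker so simultaneous events keep their order

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action()

# Illustrative two-step process: jobs arrive, take 5 time units to process.
sim = Simulator()

def arrive(job_id):
    print(f"t={sim.now:.1f}: job {job_id} arrives")
    sim.schedule(5.0, lambda: finish(job_id))

def finish(job_id):
    print(f"t={sim.now:.1f}: job {job_id} completes")

sim.schedule(0.0, lambda: arrive(1))
sim.schedule(2.0, lambda: arrive(2))
sim.run(until=20.0)
```

Running a model like this repeatedly with slightly different inputs is what lets a consultant predict “what will happen next” or “what would happen if the situation changed slightly”.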

Q3: Biologists talk of tool usage as being a proof of intelligence. What might this involve?

I believe scientists would generally agree that tool making or appropriate tool use is a good demonstration of intelligence. There are a number of approaches to tasks and tool making, some of which are described in an article by Professor Murray Shanahan entitled “The Brain’s Connective Core and its Role in Animal Cognition”. The following list sets out the tool usage problems from Professor Shanahan’s article and describes how simulation technologies are important and relevant in each case:

Tool Problem 1. Looking at the task and the tools you have and working out how to achieve what you want.
This type of problem solving requires that an assessment is made of the capabilities of the available tools. A capability assessment is, in a sense, a recall of a memory. A memory could be thought of as a simulation in the mind’s eye of what the tool can achieve.

Tool Problem 2. Understanding that the tools available are not capable of the task.
If the tools available to achieve a task are not suitable, then possible responses would be assessing what alternative resources are available or whether tools could be modified or used in combination. Again this strategy involves memory recall and simulation in the mind’s eye.

Tool Problem 3. Looking at the task and working backwards or forwards
When a task that needs to be achieved is a long way from the current state, it is possible to work forwards or backwards and break the problem down. It is possible to identify the set of possible states that need to be achieved along the way to the required objective. This strategy aims to simplify the problem by defining the processes and resources needed to achieve a step-by-step change of state to achieve an objective.

This strategy involves memory recall and, again, process, state and simulation modelling (a toy search sketch follows below).
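As an illustration of working forwards from the current state, here is a toy state-space planner; the stick-and-fruit problem and the action names are invented for this sketch, and working backwards could be done in the same way by inverting the actions.

```python
from collections import deque

# Toy planner: work forwards from the current state, applying actions
# breadth-first until the goal state is reached, then return the action
# sequence. States are tuples of facts; actions map a state to a new state
# or to None when they do not apply.
def plan(start, goal, actions):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, effect in actions.items():
            nxt = effect(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Invented tool-use problem: a stick must be fetched before out-of-reach
# fruit can be knocked down.
actions = {
    "fetch stick": lambda s: ("has stick",) if s == () else None,
    "knock down fruit": lambda s: ("has stick", "has fruit")
                                  if s == ("has stick",) else None,
}
print(plan((), ("has stick", "has fruit"), actions))
# -> ['fetch stick', 'knock down fruit']
```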

Tool Problem 4. Applying previous knowledge in a new context
This problem could be supported by a simulation tool that could recall and build on parts of previously created simulation models.

Tool Problem 5. A combined approach
By combining all of the above approaches and comparing the alternatives, the most appropriate approach could be intelligently selected. This strategy involves using a variety of simulation experimentation technologies.

Q4: How might artificially intelligent entities understand human emotions, motivations, fiction and conversations?

There is perhaps really little point in getting an AI to understand emotions if you have not got it to understand language first. I think targeting empathy modelling will be more useful than modelling emotions directly. You could try to adjust an AI’s behaviour by giving it a configuration based on Myers-Briggs personality archetypes. You could try training your AI to spot non-verbal communication. You could try using transactional analysis to help it understand and respond appropriately to conversation. You could use simulation to consider alternative conversational responses and their possible effects on a target conversational outcome (a toy sketch follows below). Whilst I find these ideas interesting to consider, I see little point in pursuing them whilst the basics of language understanding are not yet worked out.
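As a toy sketch only: the candidate replies and the keyword-overlap scoring below are invented stand-ins for a real conversational model, but they show the shape of “simulate each response, score it against the target outcome, pick the best”.

```python
import string

# Score a candidate reply against a desired conversational outcome.
# Keyword overlap is a crude invented proxy for "effect on the conversation";
# a real system would need an actual model of the other speaker.
def score(reply, target_keywords):
    cleaned = reply.lower().translate(str.maketrans("", "", string.punctuation))
    return len(set(cleaned.split()) & target_keywords)

candidates = [
    "I understand that must be frustrating, shall we fix it together?",
    "That is not my problem.",
    "Let me check and fix the issue for you.",
]
target = {"understand", "fix", "together"}  # desired outcome, as keywords

best = max(candidates, key=lambda r: score(r, target))
print(best)  # picks the first, most empathetic reply
```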

Q5. What is the danger that an artificial intelligence might take over the world?

One of the celebrity academics who has been asked that question is Professor Andrew Ng. He has been a professor at Stanford University and a researcher at Google. He co-founded the online training site Coursera. He works on machine learning and big data (i.e. he is a professional AI researcher and a very bright and dynamic man). He says worrying about the threat of AI taking over the world is like worrying about overpopulation on Mars. One day this could be a problem that will need to be researched and solved; it is not a problem now or for the foreseeable future.

1) I believe software engineers developing artificial intelligence applications should think like a parent or guardian. First off, a psychopath is someone with no sense of right or wrong and no regrets. In everyday life, would it always be criminal to give a two-year-old child or a psychopath a sub-machine gun? YES! Artificially intelligent entities ought always to be regarded as less trustworthy than a two-year-old psychopath.

2) Artificial Intelligence can be perfectly safe in the same way that a knife is a perfectly safe invention for buttering toast or cutting potatoes. Knives become unsafe when someone chooses to stab people with them.

3) The USA and most other countries do not currently outlaw the development of “LAWS” or Lethal Autonomous Weapon Systems. In my view this is the problem. Humans are choosing to give power to technology that technology should not have.

4) I believe it should be possible to build a highly advanced AI with sufficient inbuilt safeguards to ensure that it is safe to do the tasks that it is allowed to do.

5) Some people have cautioned that an AI will become dangerous if it learns how to improve itself and hence becomes more intelligent than humans. The dangers from such a system might be seen to increase further if it were able to embody itself in a robot and then build better robots. The questions that should be asked then are:

  • Could we teach and get robots to adhere to a set of ethics?
    There is no reason why this should not be possible; it is an open area of research (a toy sketch of rule-based safeguards follows this list).
  • Could we always require that the building of robots requires human intervention?
    Within some science fiction, robots are allowed sole access to automated factories where they are allowed to build other, more intelligent robots. Common sense would suggest that humans should be able to prevent this from happening.
  • What needs to happen before this threat could even start to become a problem?
    We would need to develop an AI that understands cause and effect and has free will. This is still an open area of research, but one that the Hemseye project would like to address.
  • How far away are we from having this problem?
    One famous machine learning researcher suggested that we probably need to wait until Mars has an overpopulation problem before we are likely to need to worry about the threat from a superhuman intelligence.
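As a toy sketch of the first question above, rules could be applied as a priority-ordered filter over proposed actions; the rules and action attributes below are invented for illustration, and real machine ethics remains an open research problem.

```python
# Priority-ordered rule filter over proposed robot actions (invented rules
# loosely echoing Asimov; the action attributes are hypothetical).
RULES = [
    ("must not harm a human",  lambda a: not a.get("harms_human", False)),
    ("must obey human orders", lambda a: a.get("ordered_by_human", False)),
    ("must preserve itself",   lambda a: not a.get("self_destructive", False)),
]

def permitted(action):
    for name, check in RULES:
        if not check(action):
            return False, f"blocked by rule: {name}"
    return True, "permitted"

print(permitted({"ordered_by_human": True}))
print(permitted({"ordered_by_human": True, "harms_human": True}))
```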

Q6. How do you teach a computer to understand language?

By understanding I am not talking about pattern matching text and mechanically translating it to an alternative representation, which is the current target of much research. I mean a correct, artificially conscious understanding that would grant a computer the ability to explain issues or hold a conversation (like the Star Trek Computer or, later, something approaching a Commander Data). I have grave doubts whether this will ever even be possible. When you consider the millions of years of evolution that it has taken to give the human brain its abilities, the idea that a bunch of humans could replicate it within an individual’s lifetime seems silly (but this is what I am working on part-time). See my blog and the rest of this website for the progress I have made.

From what I have read so far, the most impressive progress on artificial consciousness has been made on the LIDA project in Memphis. However, their open source software framework is not cloud based.

Historically, linguists looked at clever algorithms to translate grammar constructs into an alternative computer-modelled understanding. A great deal of time and money has been spent on this with only limited success (see https://en.wikipedia.org/wiki/Natural_language_understanding). I believe the major problem with these linguistic approaches is the failure to recognise that language acquisition is grounded in an experiential understanding of the world. Language communicates about the world. Attempting to understand language without a grounded knowledge of the world and its physical states cannot work.

Potential targets for representing an understanding of the world, and how language relates to it, include the Schema.org model. I think a higher-level meta-model is required to bring the likes of Schema.org and OWL2 together under one representation. I need to look further at EMF, MOF and other standards.
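As a very rough sketch of what one node of such a meta-model might look like (the `Concept` class and its field names are entirely my own invention; neither Schema.org nor OWL2 defines such a structure):

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical meta-model node that wraps both a Schema.org type and an
# OWL class IRI under one representation, plus a simple parent hierarchy.
@dataclass
class Concept:
    name: str
    schema_org_type: Optional[str] = None  # e.g. "https://schema.org/Person"
    owl_class_iri: Optional[str] = None    # illustrative OWL2 class IRI
    parents: list = field(default_factory=list)

person = Concept(
    name="Person",
    schema_org_type="https://schema.org/Person",
    owl_class_iri="http://example.org/onto#Person",  # invented IRI
)
explorer = Concept(name="Explorer", parents=[person])
print(explorer.parents[0].schema_org_type)
```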

A great deal of work has been done on “neural networks” and more recently on what has been coined “deep learning”. Success in this area has been limited despite the millions spent and the great predictions of progress made.

I currently believe the most promising approach to computerised understanding will be to understand the skills that a human child uses in developing language, and then to model these. I am just reading the PhD thesis of Barend Beekhuizen on this topic; he recently completed his studies at Leiden University in the Netherlands (reference http://www.lotpublications.nl/Documents/401_fulltext.pdf). I think progress could potentially be made by extending the work described in Barend Beekhuizen’s PhD, implementing this approach using neural networks driven by a LIDA-like consciousness.
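One simple, well-studied ingredient of this child-inspired approach is cross-situational word learning; the toy learner below (my own illustration, not the model in the thesis) counts how often each word co-occurs with each candidate referent across situations and guesses the most frequent pairing.

```python
from collections import defaultdict

# Toy cross-situational word learner: associate each word with the
# candidate referents present when it was heard, accumulated over
# many situations. The example data is invented.
counts = defaultdict(lambda: defaultdict(int))

situations = [
    ("the red ball",   {"ball", "table"}),
    ("a red cup",      {"cup", "table"}),
    ("the ball rolls", {"ball", "floor"}),
]

for utterance, referents in situations:
    for word in utterance.split():
        for ref in referents:
            counts[word][ref] += 1

# "ball" is now most strongly associated with the ball referent.
print(max(counts["ball"], key=counts["ball"].get))  # -> 'ball'
```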

I would ideally like to work on this full-time; however, I do not have a PhD and I do not want to go back to earning student wages (I could not currently afford this). If I have impressed you with my website, my ideas and my abilities, maybe you would like to fund my project, or you may know someone who might; if so, contact me by email at james.kitching<at>hemseye.org.

Q7. How do you encourage a crowd sourced interest in artificial intelligence?

If you are going to produce an artificially intelligent entity of the type I am interested in, then it will need a great deal of training.

The amount of time a human child spends exposed to other humans, so that it is able to learn, is huge. The number of hours an artificially intelligent system is likely to need in order to gain an education is likely to be considerably longer than an individual human child receives, because the child benefits from millions of years of evolution. Bypassing this evolutionary head start will take a great deal of human ingenuity.

It makes sense to provide a system that has wide appeal, or better still is very useful in some way, to gain that essential real human training effort. The idea that it might one day be possible to produce a system that could automatically model processes could be potentially very useful (see What Why and How). If this system were available across the internet via a browser and was connected to a shared knowledge base, so that it grew in intellectual experience as it was used, then it could become a “Hive Mind”, which has until now always been just an idea in science fiction.

I have no idea whether it is possible to produce a human-like artificial intelligence. I am confident that trying will be very difficult and will be a learning experience for anyone who attempts it. For me this makes it a good thing to try to do.