Home

I want to solve the problem of true computerised intelligence. Could a computer ever:

  • Properly understand, discuss and explain a Jane Austen novel?
  • When you asked it something, quiz you as necessary about what it was you actually wanted?
  • Give you a ten-minute audio-visual presentation on any topic you wanted to know about?
  • Give you answers tailored to your level of understanding and learning style?

The aims of this project sound like science fiction, but perhaps they can be achieved in tiny incremental steps, with the assistance of the web, by creating a “Make AI Happen” social network of interested experts.

  1. What steps are needed to make this happen?
  2. How close can we get to making this happen?
  3. What should we do first?

The information on this website is subject to the Open Software License 3.0 (OSL-3.0). It will move to a Creative Commons licence, after which ethical restrictions will be able to be applied to how the information on this site can be used. You are free to share links to this website. If you use information from this website you must credit the HEMSEYE project and quote this website as the source of the information. Work based on this work must be open source and subject to the as yet undefined ethical restrictions. Open source licensing and restrictions are currently under review. This work should be used to help humanity and the world in general; it must not be used in any armament systems.

This is the website of the HEMSEYE Open Source Project founded by James Kitching.

I believe I can apply some existing but obscure linguistics research (*see below: Professor Cliff Goddard & Professor Anna Wierzbicka) to create a system that does almost the reverse of the TV game show “Catch Phrase”:

“two contestants, …, would have to identify the familiar phrase represented by a piece of animation”

In reverse, we would need to take some text and, as necessary, either draw a picture or create an animation to explain each word. These pictures and animations could be in either 2D or 3D. To try to do this with any possible text from the outset would be far too difficult a task. The important next questions are:

  1. How small a set of words would be needed to create an ability to understand and build all of a language’s dictionary definitions, starting from the most basic set of words?
  2. Could a language acquisition bootstrap be created in part by using all the words that describe a human child’s pre-verbal conscious sensory experience?
  3. Understanding other people and their different perspectives and viewpoints in social situations (also known as social cognition) has been shown to be important in acquiring language. Should these words be in our language acquisition bootstrap?

Words describing human conscious experience may well be difficult to define as dictionary definitions. Complex experiences such as TIME, I, YOU and ME are difficult to define verbally, but humans know intuitively what these word concepts mean and when to use them. This concept understanding happens even before an individual has the actual words to express these basic concepts. A word can be an everyday communication tool in a particular language, or one language’s label for a common concept that could exist pre-verbally in multiple or even all languages.

Human experience words such as knowing “up” or “this” (within arm’s length) could be represented as pictures or animations, or as links bound into some sensory information (such as gravity).
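As a concrete illustration of that binding idea, here is a minimal Python sketch; all class names, file paths and thresholds are hypothetical, invented purely for illustration, not part of any existing library:

```python
# A minimal sketch of grounding an "experience" word in both a depiction
# (picture/animation) and a live sensor check. Names and values are invented.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class GroundedConcept:
    word: str                             # everyday label, e.g. "up"
    depiction: str                        # path to a picture/animation explaining it
    sensor_check: Callable[[dict], bool]  # ties the word to live sensory data

# "up" is the direction opposite to gravity reported by an IMU reading.
def is_up(reading: dict) -> bool:
    return reading["direction_z"] * reading["gravity_z"] < 0

# "this" refers to something within arm's length of the robot.
def is_this(reading: dict) -> bool:
    return reading["distance_m"] <= 0.7   # rough adult arm's length, in metres

BOOTSTRAP: Dict[str, GroundedConcept] = {
    "up":   GroundedConcept("up",   "animations/up.gif",   is_up),
    "this": GroundedConcept("this", "animations/this.gif", is_this),
}

if __name__ == "__main__":
    sample = {"direction_z": 1.0, "gravity_z": -9.81, "distance_m": 0.4}
    for word, concept in BOOTSTRAP.items():
        print(word, "applies:", concept.sensor_check(sample))
```

The point is only that each bootstrap word would carry both a human-viewable depiction and a machine-checkable grounding in sensory data.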

In a robot’s mind you could choose to implement a human’s visual and sensory experience and understanding as a series of connected layers, a bit like the views shown in the Terminator’s vision in the Terminator films. As humans we could use this view approach, with multiple extra layers, to see into the mind of our robot and what it is thinking. For example, when as humans we recognise and observe things, we can choose to give the things we recognise names. Within a robot’s mind view these recognised word labels could pop up on a separate overlapping transparent layer and link themselves to the things being observed and displayed on other layers. We need this to happen with an artificial robot’s mind so that we can have insight into what it is experiencing.
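To make the layered view a little more concrete, here is a minimal sketch using my own invented classes (not any existing robotics API): raw observations sit on one layer, and recognised word labels pop up on a separate overlay layer that links back to them:

```python
# A minimal sketch of a layered "mind view": an observation layer plus a
# transparent label layer whose entries link back to observed things.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observation:
    object_id: int
    position: tuple            # (x, y) in the robot's current view

@dataclass
class Label:
    text: str
    target_id: int             # links the label to an observation on another layer

@dataclass
class MindView:
    observation_layer: List[Observation] = field(default_factory=list)
    label_layer: List[Label] = field(default_factory=list)

    def recognise(self, obs: Observation, name: str) -> None:
        """When something is recognised, add a word label linked to it."""
        self.label_layer.append(Label(text=name, target_id=obs.object_id))

    def render(self) -> None:
        """Print what the robot 'sees' so a human can inspect its mind."""
        for obs in self.observation_layer:
            names = [l.text for l in self.label_layer if l.target_id == obs.object_id]
            print(f"object {obs.object_id} at {obs.position}: {', '.join(names) or 'unlabelled'}")

view = MindView(observation_layer=[Observation(1, (120, 80)), Observation(2, (300, 45))])
view.recognise(view.observation_layer[0], "cup")
view.render()
```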

After searching with Google in the hope of answering the above questions, I came across the work of Professor Anna Wierzbicka and Professor Cliff Goddard. I first want to use the 65 words they have named the semantic primes. The semantic primes are 65 concepts (described with English words) that exist with the same concept word meaning in every human language. These special word concepts are also used in the same way in every human language.

It is possible to take any complex text (potentially expressed in any language and expressing any complex idea) and convert it into this small, language-universal “intermediate” core, common to all languages, that contains just 65 + 50 concept words and about 120 phrases. Research has shown that this tiny and obscure universal conceptual language is capable of building dictionary definitions of any more complex word and of expressing any idea. When you study this tiny language I believe you will see that what it actually describes is the minimum set of the most basic human conscious sensory experiences of the world. These “experience” type words, like “up”, “this”, “one” and “more”, can all be pictured or animated. Every linguistic concept could therefore potentially be built from these pictures and animations, or converted back to pictures from these basic linguistic primitives.
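To illustrate the idea, here is a minimal sketch: a rough, unofficial paraphrase of a complex word written only in prime-like vocabulary, with a check that nothing outside the allowed set sneaks in. The prime list shown is a small illustrative subset of the 65, and the paraphrase is my own, not an official NSM explication:

```python
# A minimal sketch of defining a complex word using only prime-like vocabulary.
# PRIMES is an illustrative subset; the explication is an unofficial paraphrase.
PRIMES = {
    "someone", "something", "this", "happen", "do", "feel", "good", "bad",
    "want", "not", "because", "when", "think", "know", "big", "small", "very",
}

def uses_only_primes(explication: str) -> bool:
    """Check that every word in the explication is drawn from the prime set."""
    words = explication.lower().replace(",", "").replace(".", "").split()
    return all(w in PRIMES for w in words)

# Rough illustrative explication of "sad":
sad = "someone feel something bad because something bad happen"
print(uses_only_primes(sad))  # True: built entirely from the illustrative primes
```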

Once we have the context of these concept words modelled, could they be recognised in a video stream of the real world? To do this we would need to generate a real-time 2D or 3D virtual-world representation from our video. Neither Alexa nor Siri is self-aware. They do not have a proper understanding of their own existence and are not really aware of the existence of others. The new software tools that I describe in the links below should ultimately gift an AI with a mind that allows imaginative and constructive visual representation, recognition and planning. The research modelling process I am proposing would take the concepts underlying textual understanding and visualise them in an artificial mind. These generic “text concept” (semantic) visualisations can then become the targets for mapping real-world visual observation instances onto their underlying cognitive concepts (a rough sketch of this mapping follows the links below). See:

A shorter description:
https://hemseye.org/wp/2018/08/26/hemseye-project-phase-1-shorter-presentation/

A more detailed description from first principles:
https://hemseye.org/wp/2018/08/07/make-ai-happen-brighton-meetup-launch-meeting/
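As a very rough illustration of the video-to-concept mapping described above, here is a minimal sketch in which every function is a hypothetical stub standing in for real detection and SLAM components; nothing here is a working perception system:

```python
# A minimal sketch of the proposed pipeline: video frame -> detected objects ->
# persistent virtual-world model -> underlying concept labels. All stubs.
from typing import List, Dict

def detect_objects(frame) -> List[Dict]:
    # Stand-in for a real detector: returns objects with 3D positions (metres).
    return [{"id": 1, "category": "cup", "xyz": (0.3, 0.1, 0.5)}]

def update_virtual_world(world: Dict, detections: List[Dict]) -> Dict:
    # Stand-in for SLAM-style fusion into a persistent 2D/3D world model.
    for det in detections:
        world[det["id"]] = det
    return world

def map_to_concepts(world: Dict) -> Dict[int, List[str]]:
    # Map each modelled object onto the concept visualisations it instantiates,
    # e.g. "this" if it lies within arm's length of the camera origin.
    concepts = {}
    for obj_id, det in world.items():
        distance = sum(c * c for c in det["xyz"]) ** 0.5
        labels = ["something"]
        if distance <= 0.7:
            labels.append("this")
        concepts[obj_id] = labels
    return concepts

world: Dict = {}
for frame in [None]:                       # placeholder for a real video stream
    world = update_virtual_world(world, detect_objects(frame))
print(map_to_concepts(world))              # {1: ['something', 'this']}
```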

I have applied to the IBM AI X-Prize competition to see whether publishing my ideas might help in getting funding. My skills are not in selling, or in explaining to non-technical people why an idea is a good one. I am therefore not expecting anything to come of this unless I am incredibly lucky and get significant help, see:
https://hemseye.org/wp/ibm-ai-x-prize/

Further research is required to extend and apply my ideas on computerising the representation of language context. This would require further work in the fields of object and location simultaneous localisation and mapping (also known as SLAM). This might be done with a DVS camera.
https://hemseye.org/wp/new-computer-vision-slam-research-target/

Whilst some software engineers and other technical people to whom I have explained this think the approach is highly doable, the people with control of large sums of money have so far wanted something with a short-term and confident payback. Whilst my design might well be thoroughly sensible and realistic, it will need long-term research to make it happen. Ultimately I believe my proposal should be a route towards creating a “Star Trek Computer” that would replace the world wide web, see https://hemseye.org/wp/the-hemseye-a-www-replacement/

There might be a few visionary people in the world with enough money to invest in this type of project, but the likelihood of me being able to meet, persuade or email any of them remains very low. I therefore plan to focus on trying to make my AI happen myself, and hope that others will want to join me in making my planned ethical AI.

I still have a list of people of influence who I think might truly understand, or have an interest in, pursuing these ideas with or without me. I have not yet approached all of them, but I will try to do so at some point (I do have a full-time job).

I am going to look at running regular sessions to learn ROS (the Robot Operating System). This study will allow me to integrate my plans with this very popular robotics framework, see:
https://hemseye.org/wp/learning-ros-the-robotic-operating-system/
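For anyone joining those sessions, the classic first exercise is a tiny ROS 1 publisher node like the sketch below; the node and topic names are placeholders I have chosen, and the API differs in ROS 2:

```python
# Minimal ROS 1 "talker" node: publishes a string message once per second.
# Node and topic names are arbitrary placeholders.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node("hemseye_talker")
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello from the HEMSEYE project"))
        rate.sleep()

if __name__ == "__main__":
    main()
```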

I run two community AI special interest groups, one in Brighton and one in Haywards Heath (both in the UK), using the meetup.com website, see:

https://www.meetup.com/MakeAIHappenBrighton
and
https://www.meetup.com/MakeAIHappenHaywardsHeath

I am hoping one day to launch an ethical AI research charity to help oversee and participate in developing global AI safety and AI development via the HEMSEYE Open Source Project. This is a very naive and beautiful idea that is unlikely ever to happen. It is very unlikely that any government, corporation or retired capitalist would choose to give up control of AI for the greater good and interest of the world and humanity in general.
https://hemseye.org/wp/ai-ethics-safety/

The aspirations of what I would like to achieve with this project are described in a mission statement.

*Developed and identified over more than three decades:
Professor Anna Wierzbicka, The Semantic Primes (and Natural Semantic Metalanguage, NSM), 1972–present
An impressive record on Google Scholar, including:
2,362 citations – 1996, book: Semantics: Primes and Universals
962 citations – 1972, book: Semantic Primitives
Professor Cliff Goddard & Professor Anna Wierzbicka, Natural Semantic Metalanguage & The Semantic Primes
