Home

I want to solve the problem of true computerised intelligence.
Could a computer ever:

  • Properly understand, discuss and explain a Jane Austen novel?
  • Quiz you, as necessary, on what it was you actually wanted when you ask it something?
  • Give you a ten-minute audio-visual presentation on any topic you want to know about?
  • Give you answers tailored to your level of understanding and learning style?

The aims of this project sound like science fiction, but perhaps they can be achieved in small incremental steps, using the World Wide Web to create a “Make AI Happen” social network of interested experts.

  1. What steps are needed to make this happen?
  2. How close can we get to making this happen?
  3. What should we do first?

The information on this website is subject to the Open Software License 3.0 (OSL-3.0). It will move to a Creative Commons copyright licence; ethical restrictions will then be able to be applied to how the information on this site can be used. You are free to share links to this website. In using the information on this website, you must credit the HEMSEYE project and quote this website as the source of the information. Work based on this work must be open source and subject to the as-yet-undefined ethical restrictions. Open-source licensing and restrictions are currently under review. This work should be used to help humanity and the world; it must not be used in any armament systems.

This is the website of the HEMSEYE Open Source Project, founded by James Kitching.

I believe I can apply some existing but obscure linguistics research (*see below: Professor Cliff Goddard and Professor Anna Wierzbicka) to create a system that does almost the reverse of the TV game show “Catch Phrase”:

“two contestants, …, would have to identify the familiar phrase represented by a piece of animation”

In reverse, we would need to take some text and, as necessary, either draw a picture or create an animation to explain each word (a rough sketch of this idea follows the questions below). These pictures and animations could be in either 2D or 3D. Trying to do this with any possible text from the outset would be far, far too difficult a task. The important next questions are:

  1. How small a set of words would be needed to understand and build all of a language’s dictionary definitions, starting from the most basic words?
  2. Could a language acquisition bootstrap be created in part by using all the words that describe a human child’s pre-verbal conscious sensory experience?
  3. Understanding other people and their different perspectives and viewpoints in social situations (also known as social cognition) has been shown to be important in acquiring language. Should these words be in our language acquisition bootstrap?
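
As a rough illustration of the reverse “Catch Phrase” idea mentioned above, here is a minimal Python sketch. The bootstrap vocabulary and the asset file names are hypothetical placeholders, not project assets; a real system would need a far larger library of pictures and animations.

```python
# A minimal sketch of the "reverse Catch Phrase" idea: given some text, look up
# a picture or animation that explains each word. The vocabulary and asset
# names below are hypothetical placeholders.

BOOTSTRAP_VISUALS = {
    # word -> a drawable/animatable explanation of the concept
    "up":   {"kind": "animation", "asset": "arrow_rising.gif"},
    "this": {"kind": "picture",   "asset": "hand_pointing_near.png"},
    "one":  {"kind": "picture",   "asset": "single_object.png"},
    "more": {"kind": "animation", "asset": "pile_growing.gif"},
}

def explain_visually(text: str) -> list[dict]:
    """Return a picture or animation reference for each known word in the text."""
    explanations = []
    for word in text.lower().split():
        visual = BOOTSTRAP_VISUALS.get(word)
        if visual is None:
            # Outside the bootstrap set: the word would first have to be defined
            # in terms of words that already have visual explanations.
            visual = {"kind": "undefined", "asset": None}
        explanations.append({"word": word, **visual})
    return explanations

if __name__ == "__main__":
    for item in explain_visually("one more up"):
        print(item)
```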

Words describing human conscious experience are likely to be difficult to define as dictionary definitions. Complex experiences such as TIME, I, YOU and ME are difficult to define verbally, but humans know intuitively what these word concepts mean and when to use them. This understanding of concepts happens even before an individual has the actual words to express them. Words can be the everyday communication tools of a particular language, or one language’s label for a common concept that could exist pre-verbally in multiple or even all languages.

Human experience words such as “up” or “this” (within arm’s length) could be represented as pictures or animations, or as links bound to some sensory information (such as gravity).
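
As a small, hedged example of what binding a word into sensory information might look like in software, here is a sketch in Python. The gravity convention, the 0.7 m “arm’s length” threshold and the observation format are illustrative assumptions only.

```python
# A minimal sketch of binding experience words to sensory signals:
# "up" is grounded in the sensed gravity direction, "this" in arm's-length
# proximity. All values and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class SensoryGrounding:
    word: str
    description: str

    def matches(self, observation: dict) -> bool:
        raise NotImplementedError


@dataclass
class UpGrounding(SensoryGrounding):
    """'up' as the direction opposite to the pull of gravity."""

    def matches(self, observation: dict) -> bool:
        gx, gy, gz = observation["gravity"]    # direction gravity pulls (sensor reading)
        dx, dy, dz = observation["direction"]  # direction being described
        # "up" if the direction points against gravity (negative dot product).
        return (gx * dx + gy * dy + gz * dz) < 0


@dataclass
class ThisGrounding(SensoryGrounding):
    """'this' as something within roughly arm's length (assumed ~0.7 m)."""

    def matches(self, observation: dict) -> bool:
        return observation["distance_m"] <= 0.7


groundings = [
    UpGrounding("up", "opposite to gravity"),
    ThisGrounding("this", "within arm's length"),
]

observation = {"gravity": (0.0, 0.0, -9.8), "direction": (0.0, 0.0, 1.0), "distance_m": 0.4}
print([g.word for g in groundings if g.matches(observation)])  # ['up', 'this']
```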

In a robot’s mind, you could choose to implement a human’s visual and sensory experience and understanding as a series of connected layers, a bit like the views shown through the Terminator’s eyes in the Terminator films. As humans, we could use this view approach, with multiple extra layers, to see what our robot is thinking. For example, when we recognise and observe things as humans, we can choose to give the things we recognise names. Within a robot’s mind view, these recognised word labels could pop up on a separate overlapping transparent layer and link themselves to the things being observed and displayed on other layers. We need this to happen in an artificial robot’s mind so that we can have insight into what it is experiencing.
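
Here is a minimal sketch of that layered “mind view” idea, assuming a simple two-layer structure: a perception layer of recognised things and a transparent label overlay. The class and field names are hypothetical, chosen only to illustrate how labels could link back to the things on other layers.

```python
# A minimal sketch of a layered robot "mind view": recognised things live on a
# perception layer, and word labels pop up on an overlay layer, each linked
# back to the thing it names so a human can inspect what the robot "sees".

from dataclasses import dataclass, field


@dataclass
class RecognisedThing:
    thing_id: int
    bbox: tuple[int, int, int, int]  # x, y, width, height in the camera image


@dataclass
class Label:
    text: str
    linked_thing_id: int  # link from the label layer to the perception layer


@dataclass
class MindView:
    perception_layer: list[RecognisedThing] = field(default_factory=list)
    label_layer: list[Label] = field(default_factory=list)

    def recognise(self, thing: RecognisedThing, name: str) -> None:
        """Add a recognised thing and pop up its word label on the overlay layer."""
        self.perception_layer.append(thing)
        self.label_layer.append(Label(text=name, linked_thing_id=thing.thing_id))

    def inspect(self) -> None:
        """Give a human observer insight into what the robot is currently labelling."""
        for label in self.label_layer:
            thing = next(t for t in self.perception_layer
                         if t.thing_id == label.linked_thing_id)
            print(f"'{label.text}' at {thing.bbox}")


view = MindView()
view.recognise(RecognisedThing(thing_id=1, bbox=(40, 60, 120, 200)), "person")
view.recognise(RecognisedThing(thing_id=2, bbox=(300, 180, 80, 80)), "cup")
view.inspect()
```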

After searching with Google, hoping to answer the above questions, I came across the work of Professor Anna Wierzbicka and Professor Cliff Goddard. First, I want to use the 65 words they named the semantic primes. The semantic primes are 65 concepts (described with English words) that exist with the same meaning in every human language. These special word concepts are also used in the same way in every human language.

It is possible to take any complex text (potentially expressed in any language and expressing any complex idea) and convert it into this small, language-universal “intermediate” core, common to all languages, which contains just 65 + 50 concept words and about 120 phrases. Research has shown that this tiny and obscure universal conceptual language can build dictionary definitions of any more complex word and express any idea.
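
To make the idea of a small core vocabulary concrete, here is a small Python sketch that checks whether a candidate definition is written only in primes. The word set below is a partial, illustrative subset of the NSM primes (written as single lowercase tokens), and the example definition is a toy paraphrase rather than an official NSM explication.

```python
# A minimal sketch: check whether a definition uses only semantic primes.
# SEMANTIC_PRIMES is a partial, illustrative subset of the 65 NSM primes,
# written as single lowercase tokens; it is not the official list.

SEMANTIC_PRIMES = {
    "i", "you", "someone", "something", "people", "body",
    "this", "same", "other", "one", "two", "some", "all", "much", "many",
    "good", "bad", "big", "small",
    "think", "know", "want", "feel", "see", "hear", "say", "words", "true",
    "do", "happen", "move", "live", "die",
    "when", "now", "before", "after", "where", "here", "above", "below",
    "far", "near", "side", "inside", "touch",
    "not", "maybe", "can", "because", "if", "very", "more", "like",
}


def non_prime_words(definition: str) -> set[str]:
    """Return the words in a definition that are not in the prime set."""
    words = definition.lower().replace(",", " ").replace(".", " ").split()
    return {w for w in words if w not in SEMANTIC_PRIMES}


# A toy paraphrase of "sad"; inflected forms such as "feels" and "happened"
# would need normalising back to their primes ("feel", "happen").
toy_definition = "someone feels something bad because something bad happened"
print(non_prime_words(toy_definition))  # {'feels', 'happened'}
```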

When you study this tiny language, you will, I believe, see that what it actually describes is the minimum set of the most basic human conscious sensory experiences of the world. These “experience” words, like “up”, “this”, “one” and “more”, can all be pictured or animated. Every linguistic concept could therefore potentially be built from these pictures and animations, or converted back into pictures from these basic linguistic primitives.

Once we have the context of these concepts and words modelled, could they be recognised in a video stream of the real world? To do this, we would need to generate a real-time 2D or 3D virtual-world representation from video. Neither Alexa nor Siri is self-aware: they do not have a proper understanding of their own existence, and they are not really aware of the existence of others. The new software tools I describe in the links below should ultimately gift an AI with a mind capable of imaginative and constructive visual representation, recognition and planning. The research modelling process I am proposing would take the concepts underlying textual understanding and visualise them in an artificial mind. These generic “text concept” (semantic) visualisations can then become the targets for mapping real-world visual observation instances onto their underlying cognitive concepts. See:

NOTE
This shorter presentation is now four years old:
https://hemseye.org/wp/hemseye-project-phase-1-shorter-presentation/
The presentation does not include a description of KBox modelling and still mentions using BPM modelling, which is now obsolete.

A more detailed description:
https://hemseye.org/wp/make-ai-happen-brighton-meetup-launch-meeting/

Further research is required to extend and apply my ideas on computerising the representation of language context. This would require further work in the fields of object and location simultaneous localisation and mapping (also known as SLAM). This might be done with a DVS camera.
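
As a hedged illustration of how mapped observations might be tied back to concept words, here is a Python sketch that labels SLAM-style landmarks with prime-like spatial concepts. The landmark positions, thresholds and coordinate convention (metres, z pointing up, robot at the origin) are assumptions for illustration, not the output of any particular SLAM library or DVS pipeline.

```python
# A minimal sketch of mapping SLAM-style landmark positions onto prime-like
# spatial concept words such as "near", "far", "above" and "below".
# Coordinates are metres relative to the robot, z pointing up (assumed).

import math


def spatial_concepts(landmark_xyz: tuple[float, float, float]) -> list[str]:
    """Label a mapped landmark with simple spatial concept words."""
    x, y, z = landmark_xyz
    distance = math.sqrt(x * x + y * y + z * z)
    concepts = ["near" if distance <= 1.0 else "far"]
    if z > 0.5:
        concepts.append("above")
    elif z < -0.5:
        concepts.append("below")
    return concepts


# A toy map: landmark id -> estimated 3D position relative to the robot.
landmark_map = {
    "cup_1":   (0.4, 0.1, 0.0),
    "light_3": (2.0, 0.5, 1.8),
}

for landmark_id, position in landmark_map.items():
    print(landmark_id, spatial_concepts(position))  # cup_1 ['near'], light_3 ['far', 'above']
```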

Ultimately, I believe my proposal offers a route towards creating a “Star Trek Computer” that would replace the World Wide Web: https://hemseye.org/wp/the-hemseye-a-www-replacement/

I hope one day to launch an ethical AI research charity to help oversee and participate in global AI safety and AI development via the HEMSEYE Open Source Project.
https://hemseye.org/wp/ai-ethics-safety/

The aspirations of this project are described in a mission statement.

*Developed and identified over more than three decades:
Professor Anna Wierzbicka, the semantic primes (plus Natural Semantic Metalanguage, NSM), 1972–present
An impressive record on Google Scholar, including:
2362 citations – 1996, book: Semantics: Primes and Universals
962 citations – 1972, book: Semantic Primitives
Professor Cliff Goddard & Professor Anna Wierzbicka, Natural Semantic Metalanguage & the semantic primes