I think I have figured out how to model language understanding in a computer by extending and modifying a discrete event simulation engine…
Could this one day replace the world wide web?
The world wide web does not understand cause and effect or everyday human experience. If it did, our "web browsers" would stand a better chance of modelling self-awareness, of actually understanding us, of searching for what we really want and of holding a proper conversation.
When artificially intelligent agents are trained on research projects that last perhaps two or three years, all the knowledge they have gained is often forgotten.
If you compare the life experience of an AI and a human child, the human gets sensory experience 24/7. Admittedly babies spend a lot of time asleep, but we do not know what their subconscious is doing. The amount of training data a human baby receives compared to an AI is absolutely huge. If an AI is ever going to become human-like, this learning disparity needs to be addressed.
When a bee is out exploring and comes across a new situation, it could be unsure of what to do next (if it were a particularly intelligent bee). The new situation the bee finds itself in could be life-threatening or perhaps advantageous, if only it knew what was best to do next. The bee would stand a far better chance of responding appropriately if it were wifi enabled. Such a bee could then call upon the past relevant experiences of other hive members that had been in similar situations. For this to work, the bees in the hive would need to be modelling their world experiences as they occurred and reporting these experiences back to the hive as they occurred. They would only need to report in detail when new experiences occurred. If a bee believed it was just experiencing everyday events that were already well known to the hive, then it would only need to keep reporting a summary of its experiences (doing a at location c, d happened, etc.). If a bee were to lose contact with the hive, then the hive would know to investigate, and to be careful of, the situation and location at which it last reported having life experience.
Bees, or individuals within such a society, would benefit from the learning of others and so learn at an advanced rate; far faster than a human child. By receiving the memories of other individuals in a given situation, an individual would benefit from knowing what was likely to happen. Through receiving this type of memory it could hold within its mind a simulation. The simulation could be used to monitor its current situation and make predictions about possible outcomes. I have named this type of memory "Hive Memory". Checking Google, this phrase has been used to describe this process in science fiction films:
Hive Memory: http://en.wikipedia.org/wiki/Group_mind_(science_fiction)
I would expect this to have been an area of academic study or at least speculation based on science fiction. A far more common term is “Hive Mind”, where individuals share a group consciousness. The term “Hive” is used in this context, as it refers to communities of animals such as bees, which live as a community in a hive. I believe a hive memory is a specific type of swarm intelligence (http://en.wikipedia.org/wiki/Swarm_intelligence).
If models were stored on the internet and shared by multiple intelligent agents, multiple agents could benefit from the previous learning experiences undertaken by earlier agents. This is a form of hive based memory.
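The hive-memory idea above can be sketched in code. This is a minimal illustration, not the HEMSEYE project's actual design: the class names, the feature-set representation of a "situation" and the overlap-counting recall are all my own assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    situation: frozenset  # features describing the situation, e.g. {"meadow", "rain"}
    outcome: str          # what happened next

class HiveMemory:
    """Shared store of experiences contributed by many agents."""
    def __init__(self):
        self.experiences = []

    def report(self, situation, outcome):
        # An agent reports what it experienced in a given situation.
        self.experiences.append(Experience(frozenset(situation), outcome))

    def recall(self, situation):
        """Return past outcomes ranked by how many situation features they share."""
        situation = frozenset(situation)
        scored = [(len(situation & e.situation), e.outcome) for e in self.experiences]
        return [outcome for score, outcome in sorted(scored, reverse=True) if score > 0]

hive = HiveMemory()
hive.report({"meadow", "flower", "sun"}, "found nectar")
hive.report({"meadow", "spider_web"}, "danger: avoid")
# A new bee facing a partly familiar situation queries the hive first:
print(hive.recall({"meadow", "spider_web", "rain"}))  # ['danger: avoid', 'found nectar']
```

The recall result is exactly the "knowing what was likely to happen" benefit described above: the closest past experience of any hive member ranks first.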
My open source project is named the HEMSEYE project as it is a contraction of the phrase HivE Mind’S EYE.
I am actually interested in understanding and exploring the nature of intelligence, and in figuring out how to get computers to properly understand language. As a professional commercial software engineer with many years of experience, I have been doing this since 2011, after I heard about the artificial consciousness work of Bernard Baars and Stan Franklin.
If you could create a HEMSEYE (hive mind's eye) that understood human language, then you would have a chance of creating a human-like intelligence. I would like to do this, but it MUST be safe and it MUST be ethical, well controlled and well governed.
With current AI research we are in roughly the same position as the teams gathering to climb Everest in the 1930s. We are not sure which path is right, some paths might be unsafe, and we are not sure what technology to use or whether we need entirely new technology.
I believe there are very significant, as yet unexplored, opportunities in combining the research work of Anna Wierzbicka and Cliff Goddard on semantic primes and Natural Semantic Metalanguage, developed and identified over more than three decades:
- Professor Anna Wierzbicka: The Semantic Primes (+ Natural Semantic Metalanguage, NSM), 1972-Present
An Impressive Record On Google Scholar
2362 Citations – 1996, Book: Semantics: Primes and Universals
962 Citations – 1972, Book: Semantic Primitives
- Professor Cliff Goddard & Professor Anna Wierzbicka
Natural Semantic Metalanguage & The Semantic Primes
- As a conscious, self-aware, pre-verbal human, how many "unlabelled" basic human experience concepts (that are later described by words) might you actually have understood?
- There are 65 “semantic primes” or hard to define basic word “atoms”.
- No meaningful dictionary definitions.
- Human sensory experiences & environmental experiences (e.g. This, Up, Move, One, Two, More)
- A common core mini language, "part of all human languages", identified as a "language of thought".
- The semantic primes can be used with 50 "semantic molecules" (e.g. "man", "woman", etc.). These are more complex everyday word "molecules" that can be defined using the semantic prime basic "atoms".
- These 65 + 50 words can be used to create valid dictionary definitions for ALL other words.
- A claim tested on 1000 random words picked from a dictionary and defined with this limited vocabulary.
- Some works of Plato have been translated into this mini language with no loss of meaning.
- It is claimed that this small set of words can be used to “define or paraphrase any concept”.
- Each word's meaning and contextual usage defines a mini "grammar" that governs how each word can be used with other words. Together these define a "universal grammar".
- This “grammar” is known as Natural Semantic Metalanguage.
- Could these core word skills be a set of skills, or clues, with which to bootstrap language acquisition?
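The central claim in the list above, that a restricted vocabulary of primes plus molecules can define all other words, can be illustrated mechanically. The word sets below are small illustrative subsets (the full NSM inventories are larger), and the checker itself is my own sketch, not an NSM tool:

```python
# Illustrative subsets only; the real NSM prime and molecule lists are larger.
SEMANTIC_PRIMES = {
    "i", "you", "someone", "something", "people", "body",
    "this", "other", "one", "two", "some", "all", "much",
    "good", "bad", "big", "small",
    "think", "know", "want", "feel", "see", "hear", "say",
    "do", "happen", "move", "live", "die",
    "when", "now", "before", "after", "where", "here",
    "not", "maybe", "can", "because", "if", "very", "like",
}
SEMANTIC_MOLECULES = {"man", "woman", "child", "hands", "water"}

ALLOWED = SEMANTIC_PRIMES | SEMANTIC_MOLECULES

def is_valid_definition(definition: str) -> bool:
    """True if every word of the definition comes from the restricted vocabulary."""
    words = definition.lower().replace(",", "").split()
    return all(w in ALLOWED for w in words)

# A rough paraphrase built only from the restricted vocabulary:
print(is_valid_definition("this woman do something good before now"))  # True
# An ordinary dictionary-style definition fails the check:
print(is_valid_definition("a female parent"))  # False
```

This is the shape of the "1000 random words" test mentioned above: take a candidate definition and verify that it bottoms out in the prime/molecule vocabulary.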
I think I have figured out how to model language understanding in a computer by extending and modifying a discrete event simulation engine; for a short introduction see
For a longer and more academic and detailed introduction see
I actually run a local AI community meetup group; see:
We plan to stream our meetings live on the web; see our YouTube channel and sign up to watch our next meeting live by setting a reminder:
It would be nice to be able to enter the IBM Watson AI XPRIZE, but we need the funding:
BUT – I am very particular about how the ethics and dangers of AI are addressed and controlled (although many of the thoughts at the following link are just draft headings). Will this lack of immediate commercialism and these high ethics put people off? It is not that I am against commercial exploitation; I just believe that, for the sake of safety, an AI project's high ideals need to follow a particular mission.
I recently came across an extremely useful academic article based on the Anna Wierzbicka & Cliff Goddard work:
Fähndrich J., Ahrndt S., Albayrak S. (2014) "Are There Semantic Primes in Formal Languages?" In: Omatu S., Bersini H., Corchado J., Rodríguez S., Pawlewski P., Bucciarelli E. (eds) Distributed Computing and Artificial Intelligence, 11th International Conference. Advances in Intelligent Systems and Computing, vol 290. Springer, Cham.
- DOI: https://doi.org/10.1007/978-3-319-07593-8_46
- Print ISBN: 978-3-319-07592-1
- Online ISBN: 978-3-319-07593-8
This work describes how some of the semantic primes are already described in formal languages. This is great, as it gives the open source HEMSEYE project less to do. These formal languages could be added to a context connected to our discrete event simulation model instance / 2D or 3D representation, animation or video, as an extra hideable or displayable visual layer. This will help us create our mind's eye view: a "Terminator"-style annotated heads-up world view of mixed, joined-up technologies that could lead to a model of understanding.
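The idea of attaching formal-language annotations to a simulation as a toggleable layer can be sketched as follows. This is a minimal illustration under my own assumptions: the class, the "formal" layer name and the annotation format are invented for the example, not taken from the HEMSEYE code base.

```python
import heapq

class SimulationModel:
    """Minimal discrete event simulation with toggleable annotation layers.

    The 'formal' layer stands in for annotations derived from formal-language
    descriptions of semantic primes; all names here are illustrative.
    """
    def __init__(self):
        self.clock = 0.0
        self.events = []      # priority queue of (time, description)
        self.layers = {}      # layer name -> list of (time, annotation)
        self.visible = set()  # which layers are currently displayed

    def schedule(self, time, description):
        heapq.heappush(self.events, (time, description))

    def annotate(self, layer, time, note):
        self.layers.setdefault(layer, []).append((time, note))

    def toggle(self, layer):
        self.visible ^= {layer}  # show or hide the layer

    def run(self):
        while self.events:
            self.clock, description = heapq.heappop(self.events)
            print(f"t={self.clock}: {description}")
            # Overlay only the currently visible annotation layers.
            for layer in self.visible:
                for t, note in self.layers.get(layer, []):
                    if t == self.clock:
                        print(f"  [{layer}] {note}")

sim = SimulationModel()
sim.schedule(1.0, "agent moves to location")
sim.annotate("formal", 1.0, "MOVE(agent, location)  -- semantic prime 'move'")
sim.toggle("formal")  # display the formal-language layer over the simulation
sim.run()
```

Hiding or showing a layer is just a toggle over which annotations get rendered alongside each simulated event, which is the "extra hideable or displayable visual layer" described above.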
See the video and presentation slides on this link for a longer and more academic and detailed introduction: