Response to

Partial response to

I am a hobbyist, non-academic, open source developer and the architect of a proposal that aims to create an artificial proxy of Chomsky’s Language Acquisition Device and Universal Grammar.

My computational theory of the mind is that we absorb experience and knowledge via a network of Markov-blanket-like component experiences.

I believe the brain’s behaviour could potentially be replicated using a layered architecture.

Bottom level: Gazebo-style physical and continuous robotic simulation, on top of a SLAM-plus-classification layer.

The next layer is a discrete event simulation (DES) layer, which is a means of constructing components and structures compliant with Friston’s free energy principle. These structures range from hidden Markov models to Markov processes, directed acyclic graphs, and Markov chains, up to Markov blankets.

There is a modelling formalism called DEVS (the Discrete Event System Specification) that allows DES models to be integrated with one another, and with continuous and physical simulation, by using an external clock. DEVS is Markov-property compliant, so complex problems can be controlled as Markov blankets across a distributed system.
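To make the DES layer concrete, here is a minimal sketch of the kind of event-driven component it would be built from (illustrative Python only; this is not taken from any existing DEVS implementation, and all the names are hypothetical):

```python
import heapq

class DiscreteEventSimulator:
    """A minimal discrete event simulator: a clock plus a priority
    queue of (time, sequence, action) events."""

    def __init__(self):
        self.now = 0.0
        self._seq = 0       # tie-breaker so equal-time events stay ordered
        self._queue = []

    def schedule(self, delay, action):
        """Schedule `action` to run `delay` time units from now."""
        self._seq += 1
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))

    def run(self, until):
        """Advance the clock event by event, never past `until`."""
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)

# A two-state Markov-chain-like component: each handler schedules the
# next event using only the current state (the Markov property the
# text refers to), never the history.
log = []
def ping(sim):
    log.append(("ping", sim.now))
    sim.schedule(1.0, pong)
def pong(sim):
    log.append(("pong", sim.now))
    sim.schedule(2.0, ping)

sim = DiscreteEventSimulator()
sim.schedule(0.0, ping)
sim.run(until=6.0)
# log now alternates ping/pong at times 0, 1, 3, 4, 6
```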

DES technology

Imagine an artificial intelligence that had a self-aware mind, an imagination, and a human-like body. From within its mind, this AI could experience its world just as we do when we play a video game. The AI’s mind could be represented in its world sitting on a chair; we could virtually sit next to it and play alongside it. We could both watch and share a video screen on which we shared the experience of driving a humanoid robot within another, separate virtual world.

With this design we can build a virtual world in which we sit inside the mind’s-eye experience of this artificial intelligence, and so share the perspective it has on its own world. From our point of view, we are then playing a video game in a virtual world from inside another virtual world. The experience we have in this second virtual world is potentially disconnectable from the first: we do not need to be playing the game in the first virtual world all the time. We could stop, pause, or replay the game while remaining sat next to the AI the whole time. We could load up multiple instances of the game and use them to run different multi-player perspectives on the same game, all running at the same time.

Being able to see the world from multiple perspectives, and to pause and replay it, would give us the opportunity to observe cause and effect, and the chance to see, understand, and show empathy. We could use our ability to run many virtual world simulations at once to manage and understand very complex tasks. In these virtual worlds we would want to be able to pause time, fast-forward, fast-reverse, and step forwards and backwards in single time increments.
Using these multiple world perspectives, we could ask what-if and critical-path planning questions.
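The pause/replay/step mechanics described above can be sketched very simply: if every event applied to a world is recorded, any past state can be revisited. This is a hypothetical illustration, not part of any existing engine:

```python
class ReplayableWorld:
    """Record the state after every event applied to a world, so a run
    can be paused, replayed, and stepped forwards or backwards."""

    def __init__(self, initial):
        self.states = [initial]   # state after each step (state 0 = start)
        self.cursor = 0           # which point in time we are viewing

    def apply(self, event):
        """Advance the live world by one event (here: a function old -> new)."""
        self.states.append(event(self.states[-1]))
        self.cursor = len(self.states) - 1

    def step_back(self):
        self.cursor = max(0, self.cursor - 1)
        return self.states[self.cursor]

    def step_forward(self):
        self.cursor = min(len(self.states) - 1, self.cursor + 1)
        return self.states[self.cursor]

world = ReplayableWorld({"score": 0})
world.apply(lambda s: {"score": s["score"] + 1})
world.apply(lambda s: {"score": s["score"] + 1})
past = world.step_back()      # rewind one step: {"score": 1}
```

Running several of these worlds side by side, each from the same recorded history, would give the multiple simultaneous perspectives described above.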

We need to wrap DES with a metacondition layer so that we can have a human-like AI drive the formation of DES models.

The metacondition layer needs to define the WHAT, WHY, HOW, WHEN, WHERE, MISSION, TASK, EVENTS and RESOURCES for the outside of a Markov blanket.
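As a sketch of what such a metacondition record might look like (the field names follow the list above; the structure itself and the example values are entirely hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Metacondition:
    """The external description of a Markov blanket: everything the
    rest of the system needs to know without seeing inside it."""
    what: str            # the capability the blanket offers
    why: str             # the goal it serves
    how: str             # reference to the internal DES model (opaque)
    when: str            # triggering condition
    where: str           # location / scope in the simulated world
    mission: str
    task: str
    events: list = field(default_factory=list)     # event types it emits/consumes
    resources: list = field(default_factory=list)  # resources it needs

grasp = Metacondition(
    what="pick up an object", why="fetch task", how="grasp_model_v1",
    when="object within reach", where="workbench",
    mission="tidy the bench", task="grasp",
    events=["object_seen", "grasp_done"], resources=["right_arm"])
```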

Thoughts ?

Linguistics – I have a compositional model for the preverbal language of thought that could be used to create a model of a sense of existence of self, using the work of Wierzbicka, Goddard and Bullock.

Thoughts ??
Comments ??

Can anyone help ??

ABSTRACT: A Human-Like AI Software Development Platform

Creating Human-Like Self-Awareness For Language & Conceptual Understanding In An AI/VR/AR/Cloud AI-Matrix R&D Test Platform

I have been working on a proposal to use simulation technology to simulate human-like conceptual self-awareness.

Java etc Code/UML Shared Augmented Reality For Humans and AI

I want to start coding what we have been talking about at the Make AI Happen meetings that I run, which have been mentioned on this blog.  I first thought of using UML, but I struggled with that until I thought of going back to an old Ada / HOOD software development technique used on Eurofighter firmware.  I explain this below…

What We Could Do:
If you are new to this project, I am preparing some WHAT / HOW Part 1 / HOW Part 2 / WHY info.  If you want to treat this as pre-reading and get up to speed on the project sooner, that would be great and very helpful, and would save time at the start of the meeting.

OO design by extracting verbs and nouns for objects and methods?
UML (& BPMN) design – using GenMyModel (are there better tools?):-

Coding Java / C++ etc – any coding platform you fancy.
Code/hack – It does not need to be pretty at this stage.
Create / design tiny explainable, human-like AI prototypes/components.
– But first some group discussions and introductions.

No rules – no prizes – (no funding just open source and always open source).
The plan is to be creative and share ideas.
+ Talk about other fun stuff if you want.
+ Talk about what to do next.

At our next monthly meeting:
Enjoy Barclays’ free refreshments, including wine and beer, plus tea, coffee, coke and fruit juices.
Wednesday, October 10, 2018 6:00 PM

Barclays Eagle Labs
1 Preston Road, BN1 4QU · Brighton

Location image of event venue

I had been considering trying to keep AI safe by limiting it to being an intelligent conversational agent with the ability to do what it is told.  To have a proper understanding of language, it would need to emulate human intelligence and understanding by having a capacity to “read between the lines”. Linguistics researchers call this field of study pragmatics: what can be understood when you add context, intent, social cognition, prior knowledge and so on to a written phrase.  The study of the written phrase itself is the field of lexical semantics (apparently!).

Looking at this problem as a visual software-engineering thinker, I realise that we need an AI designed to understand the intent of others, as well as a capacity to develop its own intents in response to the actions of others. This implies that before we get a computer to acquire language as words (the lexical semantics bit), we need a model for artificial self-awareness.  This self-awareness model is needed to do the more difficult pragmatics bit that gifts humans with the ability to acquire a language. I have become more confident in this conclusion after reading about work identifying the intellectual differences between great apes and humans, and between great ape and human societies. Not being an academic or a university student, I almost certainly came across this on Wikipedia.  (Wikipedia! – academics heard sighing in horror across the internet… Could someone give me institutional online access? I would sign up for a partially taught research MPhil if you could teach me relevant stuff.)  There is a good summary here:….  What is written there makes a lot of sense when you think about it from a social-behaviour and evolutionary perspective.

So how do you create such a thing in software? I have been trying to extract nouns and verbs from text descriptions of models of self-awareness. This is the approach taken when designing jet-fighter control systems (Hierarchical Object-Oriented Design, or HOOD). See…/…/013390816X

This does not work for me on this problem: the verbs imply too complex a set of contextually dependent actions. No problem – I just define them as interfaces that I will describe more exactly at a later date.
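A toy version of the HOOD-style extraction step might look like this (a real pipeline would use a proper part-of-speech tagger; the tiny hand-coded lexicon here is purely for illustration):

```python
# HOOD-style noun/verb extraction sketch. A real pipeline would use a
# part-of-speech tagger; a tiny hand-coded lexicon stands in for one.
NOUNS = {"robot", "object", "world", "intent"}
VERBS = {"recognise", "grasp", "predict", "communicate"}

def extract(text):
    """Split a requirements sentence into candidate objects (nouns)
    and candidate operations (verbs)."""
    words = [w.strip(".,").lower() for w in text.split()]
    return (sorted(set(words) & NOUNS),   # -> candidate classes
            sorted(set(words) & VERBS))   # -> candidate methods

def as_interfaces(verbs):
    """Context-dependent verbs become interfaces to be specified later."""
    return ["I" + v.capitalize() for v in verbs]

nouns, verbs = extract("The robot must recognise an object in the world "
                       "and predict the intent of others.")
# nouns -> ['intent', 'object', 'robot', 'world']
# as_interfaces(verbs) -> ['IPredict', 'IRecognise']
```

Deferring each verb behind an interface is exactly the move described above: the noun structure is committed to now, while the contextually dependent behaviour is specified later.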

Here is an earlier, more detailed draft that some might have originally seen posted:

What Is The Route To Coding Language Understanding In a Computer?  (Can We Do This Next?)
We need to emulate (imitate) on a computer the human capacity for “self-awareness”, which I believe is the prerequisite for what linguists call “first language acquisition”. We develop our capacity to communicate from our ability to recognise the sense of self that we have, as well as the sense of self that others have.

In 1988, after hearing about the linguistic research of Professor Anna Wierzbicka and Professor Cliff Goddard, Professor Gerald James Whitrow (a British mathematician, cosmologist and science historian working at Imperial College London) published a textbook in which he wrote, on page 11:

….despite the great diversity of existing languages and dialect, the capacity for language appears to be identical in all races. Consequently, we can conclude that man’s linguistic ability existed before racial diversification occurred.

G.J.Whitrow: Time in History: The evolution of our general awareness of time and temporal perspective.
Oxford University Press 1988.

I have been studying linguistics for a short while. I am a computer scientist (an academically and professionally qualified commercial software engineer – MSc plus two MCPs) and a one-time research biochemist and applied genetic-engineering PhD student at Cambridge University.

I believe it should be possible to go much further with this statement:

The potential capacity for human language understanding exists in each human’s pre-verbal conscious understanding of themselves, and in what they understand about how they can relate to, co-operate with and appreciate others (see Tomasello 1986–2009). These practical, universal human capabilities are embodied in a pre-verbal capacity to understand the semantic primes and Natural Semantic Metalanguage, and other similar related research ideas, described in the work of Professor Anna Wierzbicka and Professor Cliff Goddard.

AI Augmented Reality

What Do We Need?

A Quick Glossary Of Terms: (Draft)

  • Bayesian Network
  • Monte Carlo Method
  • Discrete Event Simulation
  • BPM
    Business Process Modelling. A modelling diagram standard used in business to describe business processes. BPMN = Business Process Modelling Notation


  • BPSIM (Business Process Simulation) pools and swim lanes

Create an AI-driven 2D/3D animated discrete event simulation engine capable of running heavily enhanced BPMN and BPSIM pools and swim lanes. Enhance the simulator to decorate an augmented reality shared with humans, connected to DVS-driven SLAM and cloud-hive-shared cause-and-effect experience. This would grant us explainable open source AI in a solution far better than DARPA’s currently publicised plans.

The words I want to use are from some linguistics research I found on Google that describes a set of 65 words known as the semantic primes. These are words found in all languages, to which are added around 50 other words (the semantic molecules) and 120 grammar usage phrases, together forming the Natural Semantic Metalanguage (NSM). This is the work of Professor Anna Wierzbicka and Professor Cliff Goddard. I believe this linguistic research has not been used within artificial intelligence research. There is more description of this in other blog posts and also on YouTube.

I want to download and hack/experiment with the open source discrete event simulation engines listed on Wikipedia. A promising candidate could be JavaSim. I believe simulation engine models could be recalled or created at run-time by an AI, based on cloud-shared past experience. Such a simulation engine, connected to a live video feed, could be used to create a shared human/AI perspective: an augmented reality of what the AI/robot has predicted, interpreted, suggested or “understood” using its simulation engine.

I want to animate these words as both 2D and 3D animations. Think of this effort as like working as an animator creating the animation “questions” for an episode of the TV game show Catchphrase.

Rather than use the kind of animation engine they use for the TV show, I want to use something called a discrete event simulation engine. I used to work on developing one of these, so I know more about them than most; this is very commonly used and highly developed existing technology. With this technology we can share with a computer a human’s ability to understand cause and effect, a knowledge of time, making predictions based on previous experience, testing what-if scenarios, problem solving and optimisation. Ultimately we would need to develop augmented reality between a human user and our AI.
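One of those capabilities, testing what-if scenarios, can be illustrated with a tiny single-server queue model: run the same simulation twice with a single assumption changed, and compare the outcomes (an illustrative sketch, not drawn from any particular DES product):

```python
import random

def run_queue_model(service_time, seed=0, jobs=100):
    """A single-server queue as a tiny discrete event model: jobs
    arrive at random, wait if the server is busy, then get served.
    Returns the average waiting time per job."""
    rng = random.Random(seed)              # fixed seed = reproducible run
    arrival = free_at = total_wait = 0.0
    for _ in range(jobs):
        arrival += rng.expovariate(1.0)    # time of the next arrival
        start = max(arrival, free_at)      # wait if the server is busy
        total_wait += start - arrival
        free_at = start + service_time     # server busy until here
    return total_wait / jobs

# What-if question: how much does a faster server reduce waiting?
slow = run_queue_model(service_time=0.9)
fast = run_queue_model(service_time=0.5)
# Same seed means the same arrival stream, so the comparison
# isolates the single change we made.
```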

I then hope that these animated representations can be used as targets in 2D and 3D video recognition (or simultaneous localisation and mapping – SLAM). If we could develop algorithms to identify these animated patterns in video, then we would have a means of acquiring or relating knowledge from video, text and audio back into language animation and simulated understanding.

The simulation technologies I have described are based on existing industrial and business technology. I used to work as a software developer and designer for a discrete event simulation software tool supplier, so I am very well aware of what can be done with this technology: it used to be my job to extend it.

Other Ideas Out There On This Topic (“Phase 2 AI ??”):

Geoff Hinton, known as the father of deep learning, is the man originally behind the resurgence in the popularity of neural networks in AI research. He has stated that he is now “deeply suspicious” of it: “My view is throw it all away and start again. I don’t think it’s how the brain works. We clearly don’t need all the labeled data.”

DARPA (The USA’s Military Research Arm)
They appear to want to take neural networks / deep learning (the things Geoff Hinton is losing faith in) and make them understandable. But no one knows how to do this yet, and no one yet knows how to properly express understanding. Currently it appears that DARPA want to use hierarchical “visual understanding tools” as an afterthought/add-on to visualise what machine learning does.

The people who do machine learning and deep learning are often very focused on the detailed bottom-up solution, and are perhaps missing the opportunity to understand and emulate how a human child experiences sense and thought, structures understanding, and uses these skills to acquire language and understanding. They also have a lot of investment money to do what they believe is the next step.

Have you any new ideas you would like to share with the group?
I believe that for AI to become viable, trusted and safe, it needs to be open source and shared. Whoever makes the significant contributions needed for true AI, and makes them free for everyone to use, will still do well out of it and stay employed (Tim Berners-Lee has done okay). Hopefully anyone inventing true AI will not keep it proprietary and create a monopoly with their invention. Whilst this might ultimately make them/you ridiculously rich, it is not likely to be that good for the rest of us when it comes to our own wealth and user experience.

Does anyone know much / anything about the following (for possible future work / sessions):

  • Open Source Licensing Tips?
    Any tips on open source licensing, and on constraining usage to follow the project’s mission and ethics constraints?
  • Augmented Reality:

Make AI Happen Brighton – Live Webcast Planned on Youtube. Wednesday, September 12, 2018 (UK time BST – GMT+1) 6:00 PM to 9:30 PM

Live Youtube Broadcast & Discussion
+ Post Meeting Youtube Channel
Watch Again Arrangements:

This is the second kickoff meeting of the Make AI Happen Brighton meetup group; it will hopefully be live on YouTube from my laptop webcam as I present to my AI interest group. Click here for the YouTube link and to set a reminder.

The signup for people who will physically attend is here.

  1. We will be talking about trying to make AI happen.
  2. Could we bring ourselves, Brighton and the world some benefit from this new technology?
  3. Who Are You and Why Are You Here?
  4. What Else Do You Want To Talk About?
    Check out the headings I have listed below.
    Do we want to add delete or change this agenda?
    We could try to emulate the management of the Homebrew Computer Club
    – which brought the world the Apple I and Silicon Valley.
  5. Can we do a show and tell of stuff ? We could all contribute on our own interests or share our own AI design experiences and problems?
  6. What about some fun stuff?
  7. Oh and there is free drink (and beer)
    – thanks Barclays.
    We do have to order our own food.

Let us talk for five minutes or less on each of the following topics…
Also, anyone can ask for five minutes of the group’s time. Any ideas or volunteers?

There are too many topics below to cover properly. We could time me speaking on each for five minutes or less, which would fill half the meeting time. We could then discuss how to prioritise the rest of this meeting and the next meeting.  In fact, we could take a consensus or vote on whether to delay, drop or add to this list of things to address, and expand the time we choose to allocate to any topic (another 5 or 10 minutes, an hour or more…).

This is how the pre-Silicon Valley Homebrew Computer Club operated
– an approach which helped produce the Apple I.

Many of the topics on this list are already mentioned somewhere on this website (sometimes only in draft).
So our efforts could help in extending this website and open source project.

  1. What is …
    the safest,
    most altruistic,
    most idealistic way to research control and maintain an AI?
  2. We will be discussing what DARPA (the USA’s military research funding arm) are planning. What DARPA has published currently appears less ambitious and less well worked out than our HEMSEYE project plans, though I have only skim-read their proposal so far. It should be remembered that what DARPA has released may be more of a PR exercise than a detailed explanation of their real plans.
  3. How is self-awareness essential to understanding and consciousness?
    Is self awareness dangerous?
    What is intent?
    What about all the other human-like abilities and intelligences – Peter Voss’s post?
  4. What is the HEMSEYE project? (The very quick version)
    The hive mind’s eye, or HEMSEYE, comes from the words HivE Mind’S EYE:
    The HEMSEYE Cloud
    Known experience reporting
    Unusual experience reporting
    Cause and effect knowledge
    What are HEMSEYE intelligent clients?
    Discrete event simulation (vs calculus & machine learning)
    Language understanding
    – a tiny linguistic acquisition bootstrap
    – simulation model designers.
    Time understanding.
    Explainable cause and effect and prediction understanding.

    • Playing a reverse of the TV show catch phrase.
    • Playing the TV show catch phrase.
      The DVS Event camera for light weight SLAM
      New semantic SLAM targets for a background SLAM service.
      Has computer vision research thrown away 3D data for too long?
      Is using Algebraic topology an opportunity for creating 3D high speed HEMSEYE lookup hash functions?
    • Using a universal language grammar boot strap
    • Learning and language or all words
  5. Why might the HEMSEYE project be better than what DARPA are planning?
  6. Could a HEMSEYE replace the world wide web?
  7. The IBM AI X-Prize – But We Would Need Funding
    I doubt it will happen.
  8. Funding Update From The Cabinet Office
  9. Coast 2 Coast
  10. Local Chamber Of Commerce
  11. Academics and Industry Not Approached
  12. Using Social Media & The Internet
    You might also want to join / follow the social media groups: our Facebook page / group / Twitter account:

A video and the presentation slides of the first meeting can be found at:

BUT! This is quite long, detailed and a bit academic. If you want something a little easier going and quicker, try the shorter presentation I mentioned earlier:
You might want to catch up with these links before the 11th if you are feeling highly motivated.

The Agenda / Things To Discuss For Meeting 2:

What are LAWS (Lethal Autonomous Weapon Systems)?
Perhaps a device instructed by its programming to go to GPS coordinates and kill everyone there, without further human oversight.
There is a campaign to have these banned.

DARPA (the USA’s military arm controlling AI research) are planning to spend billions on explainable AI.
Does that mean they want to make lethal autonomous weapon systems?
It is not clear.

Apparently Google have backed off working for the CIA on picture recognition after a number of employees protested and left the company. Picture recognition sounds innocent, but when it is being done as part of a drone weapon system, some people could become cautious about participating in the work.
Again allegedly (as reported on the internet), it has been suggested that DARPA / the CIA have now called a conference of other AI companies, hoping to fill the research role Google chose not to pursue (after pressure from their employees).

If this is true, well done to the Google employees for resigning, and well done to Google for backing down.
Google was formed with a hippie-friendly attitude of “Don’t be evil”.
But when Google was restructured, the new mission statement of the new holding company was set to “Do the right thing” rather than “Don’t be evil”. How happy would the employees of Google be if they could control or vote on this? When it came to doing the right thing in helping the USA develop better drones, some employees felt this was not the right thing.

Google, like DARPA, has a hierarchical management structure. Is this a safe way to manage AI?

I am in no way suggesting that joining this group will make you any money, as I am committed to trying to create a charity-funded open source project. But if we work together, perhaps we can help make AI happen and bring some benefits and jobs to Brighton and Sussex.
Are there enough generous (or potentially naive) and rich enough people willing to give money to a potentially pie-in-the-sky idea (one that might be world-saving and really, really good)? If you are interested, sign up to watch.

Make AI Happen Brighton Meetup – Launch Meeting

Make AI Happen Brighton – AI Interest Group Kick Off Meeting Wednesday 08/08/2018

Development Plans For Implementing AI/AGI

Barclays will provide food for this first meeting. At every meeting they will provide liquid refreshments, including free beer. The meetup will open at 18:00, with the first talk starting at 18:15–18:30 (I have to travel from Southwater, near Horsham).

– What Should We Do As A Group ?
– Who Are You?
– Why Are You Here ?

Development Plans For Implementing AI/AGI

Creating a means for a robot to become self-aware in order to acquire any human language using artificial consciousness (Is It Safe?).

Just The Presentation Slides – No Audio
A 5 Video Youtube Playlist Click this link for details.
Video One Is Below (intro missing)

Using a model of Wierzbicka & Goddard’s 65 semantic primes and their Natural Semantic Metalanguage / minimal language as a bootstrap for Chomsky’s Language Acquisition Device (LAD) and Universal Grammar, and for Tomasello’s (2005, 2009) social, tool, self and other awareness, all in an extensible discrete event simulation framework and virtual world simulation.
Modelling cause and effect and making predictions using Discrete Event Simulation (DES).
Using semantic prime context / model identification (like a Skinner box) in a visual background service, for a language acquisition and understanding bootstrap in a virtual real-world real-time simulation.
Cloud-shared learning from a real-time virtual world: the HivE Mind’S EYE (the HEMSEYE, or www mk2?).

What Next…
See the next posts, including the one about our next meeting, which should be live on YouTube.

Some Fun / Chat / Networking – What Should We Do Next?

If We Had Time :
AI Safety / Ownership / Control / Governance / Acceptance / Society & Social Issues.
Raspberry PI projects ?

ROS – Robotic Operating System
A youtube channel.
The Construct’s “ROS Ambassador Scheme”, via the Robot Ignite Academy, is allowing me access to free online training.
They also do paid for training.
We could have regular free online training available here.

DARPA – Some BRIEF COMMENTS On Their Plans For Making Explainable AI
The next generation of US military led AI research. See

IBM AI X-Prize
Competition application news.

A Review Of Peter Voss’s Medium Article On Advanced General Intelligence
Can we aim at just a subset of what he suggests with a Human-Like Intelligence (HLI) implementation? I think we can, by using and extending the operation of a discrete event simulation engine (see

Review The RSA Report On AI

The HEMSEYE Project Youtube Channel
I have a HEMSEYE project youtube channel which I will be able to brand with the HEMSEYE name once I get 100 subscribers. 

What next ???

More Q&A
Free ROS training sessions?
Next meetings ?
Academic involvement?
iCub humanoid robot open source project collaboration – I spoke to them last year about this…?
Panel discussions?

Project History: LIDA Artificial Consciousness and my 2013 PhD Project Proposal Accepted at Birmingham University

In creating and founding a research project, it is important to try to impress people who might be motivated to help or to take an interest. So here is some history to demonstrate this project’s longevity and its development, from an interest in artificial consciousness to modelling language understanding.

I think in about 2011 I read an article in New Scientist about Global Workspace Theory as a theory to explain consciousness. The article went on to describe the work being done at the University of Memphis on artificial cognition, with the IDA and LIDA projects.

In 2013 I managed to get a PhD project proposal accepted at Birmingham University. My original project proposal is available for download here: Download 2013 PhD Project Original Proposal Sent to Birmingham University.

I originally applied to Dr Behzad Bordbar as my supervisor after I came across some of his work on the translation of free text into UML and OCL. The proposal I have made available for download is my original proposal to Dr Bordbar. I re-wrote this initial proposal into a PhD project application, which included some input from Dr Bordbar and from Dr Mark Lee, who agreed to be a joint supervisor on the project with Dr Bordbar. In looking to do this PhD, I hoped to find an industrial partner who would fund me sufficiently to maintain a commercial developer’s salary. This effort was not successful. In frustration, I looked at raising funds via Kickstarter to create my own research company. Dr Bordbar was reluctant for me to proceed on this basis, I think mainly because having to raise funds and do a PhD would be too much to do all at once. We therefore paused our efforts at pursuing this project.

Since making this PhD project application in 2013, I have:

  • Continued to study Artificial Intelligence in my spare-time.
  • Created this website and blog to document the development of my research ideas.
  • Learned a great deal more about many topics peripheral to what would be considered mainstream and accepted in the field of artificial intelligence research. I have taken this approach because I am unlikely to be able to compete with full-time academics by following the mainstream.
  • Created two meetup groups to try to encourage local hobby interest in Artificial Intelligence
  • My original group was Haywards Heath based.

Bootstrapping Computerised Language Understanding With Semantic Primes

I believe that a child’s acquisition of language is driven by the intent, or need, to communicate ideas. The ideas therefore come before the language.  By first developing an AI’s self-awareness of concepts, you get a mechanism for bootstrapping language acquisition, and can eventually start to simulate an artificial consciousness.

To behave like a human I believe an AI will need to learn like a child. If this is the case then it would need to learn about the world and then learn language by speaking and listening. Learning to read is then a secondary activity.

There is a set of universal, indivisible language concepts (known as semantic primes – see later) that can be thought of as an understanding of the world that cannot be defined with language. These pre/non-verbal concepts need to be learnt from the environment, from the experience of having a consciousness, and from social interaction.  If you could define a model for each semantic prime using physical observation or some mechanistic knowledge of the world, it should be possible to define these concepts to yourself, and to an AI, before adding any language to describe them.

Semantic primes are grouped by category.  The first category is the “substantives” (I, YOU, SOMEONE/PERSON, PEOPLE).  To give a robot an understanding of these particular words, you would need to identify the underlying process and data that give rise to their definitions.  In this case the process is recognition: these substantive words are all the outcome of a recognition process, so to understand the substantives conceptually, a robot will need to implement a recognition process that correctly identifies a substantive.  Another group of semantic primes are the relational substantives: SOMETHING/THING, BODY, KIND, PART.  These are less specialised versions of the first group that relate to objects rather than people.  I believe all the semantic primes could be understood by identifying their underlying processing and data storage, and by building them into a learning hierarchy; the relational substantive primes would therefore be learnt before the substantive primes.
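A minimal sketch of this idea, with the recognition process returning primes from the most general (relational) to the most specific (the rules here are entirely hypothetical, chosen only to illustrate the learning hierarchy):

```python
# "Substantives as the outcome of a recognition process": relational
# substantives (SOMETHING/THING, BODY) are recognised first, then
# specialised into SOMEONE/PERSON, YOU and I.

def recognise(entity, self_id):
    """Classify an observed entity into semantic-prime categories,
    from the most general (SOMETHING/THING) to the most specific (I)."""
    primes = ["SOMETHING/THING"]                 # every entity is a thing
    if entity.get("animate"):
        primes.append("BODY")
        if entity.get("uses_language"):
            primes.append("SOMEONE/PERSON")
            if entity["id"] == self_id:
                primes.append("I")               # self-recognition
            else:
                primes.append("YOU")
    return primes

me = {"id": 1, "animate": True, "uses_language": True}
rock = {"id": 7, "animate": False}
# recognise(me, self_id=1)   -> ['SOMETHING/THING', 'BODY', 'SOMEONE/PERSON', 'I']
# recognise(rock, self_id=1) -> ['SOMETHING/THING']
```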

(The following is from Wikipedia)

Semantic primes or semantic primitives are semantic concepts that are innately understood, but cannot be expressed in simpler terms. They represent words or phrases that are learned through practice, but cannot be defined concretely. For example, although the meaning of “touching” is readily understood, a dictionary might define “touch” as “to make contact” and “contact” as “touching”, providing no information if neither of these words are understood.

The concept of innate semantic primes was largely introduced by Anna Wierzbicka’s book, Semantics: Primes and Universals.

Semantic primes represent universally meaningful concepts, but to have meaningful messages, or statements, such concepts must combine in a way that they themselves convey meaning. Such meaningful combinations, in their simplest form as sentences, constitute the syntax of the language.

Wierzbicka provides evidence that just as all languages use the same set of semantic primes, they also use the same, or very similar, syntax. She states: “I am also positing certain innate and universal rules of syntax – not in the sense of some intuitively unverifiable formal syntax à la Chomsky, but in the sense of intuitively verifiable patterns determining possible combinations of primitive concepts” (Wierzbicka, 1996). She gives one example comparing the English sentence “I want to do this” with its equivalent in Russian. Although she notes certain formal differences between the two sentence structures, their semantic equivalence emerges from the “equivalence of the primitives themselves and of the rules for their combination”.
This work [of Wierzbicka and colleagues] has led to a set of a highly concrete proposals about a hypothesised irreducible core of all human languages. This universal core is believed to have a fully ‘language-like’ character in the sense that it consists of a lexicon of semantic primitives together with a syntax governing how the primitives can be combined (Goddard, 1998).
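A toy illustration of that last point, combining primes under a single canonical sentence frame (the combination rule here is invented for illustration and is far cruder than real NSM syntax):

```python
# A toy combination rule in the spirit of NSM syntax: a substantive
# plus a mental predicate plus a complement combine into a canonical
# sentence frame such as "I want to do this". Purely illustrative.
SUBSTANTIVES = {"I", "YOU", "SOMEONE"}
MENTAL_PREDICATES = {"WANT", "KNOW", "THINK", "FEEL"}

def combine(subject, predicate, complement):
    """Reject combinations that are not built from the prime sets,
    then render the frame as an English-like sentence."""
    if subject not in SUBSTANTIVES or predicate not in MENTAL_PREDICATES:
        raise ValueError("not a valid prime combination")
    return f"{subject} {predicate} {complement}".lower().capitalize()

sentence = combine("I", "WANT", "to do this")
# -> "I want to do this"
```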

The semantic primes by category (each category was shown as a link to its definition) are:

  • Substantives
  • Relational Substantives
  • Determiners
  • Quantifiers
  • Evaluators
  • Descriptors
  • Mental predicates
  • Speech
  • Actions, Events, Movement, Contact
  • Location, Existence, Possession
  • Life and Death
  • Time
  • Space
  • Logical Concepts
  • Intensifier, Augmenter
  • Similarity
We have been looking at process modelling and at producing a world view on which to hang an understanding of our semantic prime definitions.  We will be looking further at this work, and at how we can build a software object model / AI that describes the world using conceptual awareness.

Could you help with co-operation, input or funding?  See elsewhere on this website for further details.