Java etc Code/UML Shared Augmented Reality For Humans and AI

I want to start coding what we have been talking about at the Make AI Happen meetings that I run (mentioned previously on this blog).  I first thought of using UML, but I struggled with that until I thought of going back to an old Ada / HOOD software development technique used for Eurofighter firmware.  I explain this below…

What We Could Do:
If you are new to this project, I am preparing some WHAT / HOW Part 1 / HOW Part 2 / WHY information.  If you treat this as pre-reading and get up to speed on the project before the meeting, that would be great and very helpful, saving time at the start of the meeting.

OO design by extracting nouns and verbs for objects and methods?
UML (& BPMN) design – using GenMyModel (are there better tools?):-

Coding Java / C++ etc – any coding platform you fancy.
Code/hack – It does not need to be pretty at this stage.
Create / design tiny explainable / human-like AI prototypes/components.
– But first some group discussions and introductions.

No rules – no prizes – no funding (just open source, and always open source).
The plan is to be creative and share ideas.
+ Talk about other fun stuff if you want.
+ Talk about what to do next.

At our next monthly meeting:
Enjoy Barclays’ free refreshments, including wine and beer, plus tea, coffee, coke and fruit juices.
https://www.meetup.com/MakeAIHappenBrighton/events/251653980/
Wednesday, October 10, 2018 6:00 PM

Barclays Eagle Labs
1 Preston Road, BN1 4QU · Brighton


I had been considering trying to keep AI safe by limiting it to being an intelligent conversational agent that does what it is told.  To have a proper understanding of language it would need to emulate human intelligence and understanding by having a capacity to “read between the lines”. Linguistics researchers call this field of study pragmatics: what can be understood when you add context, intent, social cognition, prior knowledge and so on to a written phrase.  The study of the written phrase itself is the field of lexical semantics (apparently!).

Looking at this problem as a visually-minded software engineer, I realise that we need an AI designed to understand the intent of others, as well as a capacity to develop its own intents in response to the actions of others.  This implies that before we get a computer to acquire language as words (the lexical semantics bit), we need a model for artificial self-awareness.  This self-awareness model is needed to do the more difficult pragmatics bit that gifts humans with the ability to acquire a language. I have become more confident in this conclusion after reading about work identifying the intellectual differences between great apes and humans, and between great ape and human society. Not being an academic or a university student, I almost certainly came across this on Wikipedia.  (Wikipedia! – academics heard sighing in horror across the internet… Could someone give me institutional online access? I would sign up for a partially taught research MPhil if you could teach me relevant stuff.)  There is a good summary here: https://en.wikipedia.org/wiki/Michael_Tomasello. Reading it, what is written makes a lot of sense when you think about it from a social behaviour and evolutionary perspective.

So how do you create such a thing in software? I have been trying to extract nouns and verbs from text descriptions of models of self-awareness. This is the approach taken when designing jet fighter control systems (Hierarchical Object Oriented Design, or HOOD). See https://www.amazon.com/Hierarchical-Object-Ori…/…/013390816X

This does not work cleanly for me on this problem: the verbs imply too complex a set of contextually dependent actions. No problem – I just define them as interfaces that I can describe more exactly at a later date.
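To make this concrete, here is a minimal sketch of the idea in Java (my own illustration, not code from the HOOD book): nouns extracted from a description of self-awareness become classes, while the verbs that imply too much contextual complexity are deferred behind interfaces. All of the names here are hypothetical placeholders.

```java
// Hypothetical sketch of the noun/verb extraction approach described above.
// Nouns from a self-awareness description become classes; verbs that are too
// contextually dependent to implement yet become interfaces to define later.

// Verb "intend" – too context-dependent to implement now, so it is an interface.
interface Intending {
    Intent formIntent(Situation context);
}

// Verb "recognise" – likewise deferred behind an interface.
interface Recognising {
    boolean recognisesSelf();
    boolean recognisesIntentOf(Agent other);
}

// Noun "intent" – a placeholder value object for now.
class Intent {
    final String description;
    Intent(String description) { this.description = description; }
}

// Noun "situation" – a placeholder for context, prior knowledge, etc.
class Situation { }

// Noun "agent" – the would-be self-aware entity, composed from the verb interfaces.
abstract class Agent implements Intending, Recognising { }
```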

————————————————
Here is an earlier, more detailed draft that some might have originally seen posted:

What Is The Route To Coding Language Understanding In a Computer?  (Can We Do This Next?)
We need to emulate (imitate) on a computer the human capacity for “self-awareness”, which I believe is the prerequisite for what linguists call “first language acquisition”. We develop our capacity to communicate from our ability to recognise the sense of self that we have, as well as the sense of self that others have.

In 1988, after hearing about the linguistic research work of Professor Anna Wierzbicka and Professor Cliff Goddard, Professor Gerald James Whitrow (a British mathematician, cosmologist and science historian working at Imperial College London) published a textbook in which he wrote, on page 11:

….despite the great diversity of existing languages and dialect, the capacity for language appears to be identical in all races. Consequently, we can conclude that man’s linguistic ability existed before racial diversification occurred.

G. J. Whitrow: Time in History: The evolution of our general awareness of time and temporal perspective.
Oxford University Press, 1988.

I have been studying linguistics for a short while. I am a computer scientist (an academically and professionally qualified commercial software engineer – MSc + MCP x2) and a one-time research biochemist and applied genetic engineering PhD student at Cambridge University.

I believe it should be possible to go much further with this statement:

The potential capacity for human language understanding exists in each human’s pre-verbal conscious understanding of themselves, and also of how they can relate to, co-operate with and appreciate others (see Tomasello 1986–2009). These practical universal human capabilities are embodied in a pre-verbal capacity to understand the semantic primes and natural semantic metalanguage, and other similar related research ideas described in the work of Professor Anna Wierzbicka and Professor Cliff Goddard.

AI Augmented Reality

What Do We Need?

A Quick Glossary Of Terms: (Draft)

  • Bayesian Network
    A probabilistic graphical model representing a set of variables and their conditional dependencies as a directed acyclic graph.
  • Monte Carlo Method
    Estimating a numerical result by repeated random sampling; useful when a direct calculation is impractical.
  • Discrete Event Simulation
    Modelling a system as a time-ordered sequence of events, with the simulation clock jumping from one event to the next.
  • BPM
    Business Process Modelling. A modelling diagram standard used in business to describe business processes. BPMN = Business Process Modelling Notation.


  • BPSim (Business Process Simulation) pools and swim lanes
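As a minimal illustration of one glossary term above, here is a toy Monte Carlo example in Java (my own, not project code): estimating π by random sampling, the same estimate-by-repeated-trials idea the method brings to simulation.

```java
import java.util.Random;

// Toy Monte Carlo method: estimate pi by sampling random points in the unit
// square and counting how many land inside the quarter circle of radius 1.
public class MonteCarloPi {
    public static void main(String[] args) {
        Random rng = new Random(42);   // fixed seed for a repeatable run
        int samples = 1_000_000;
        int inside = 0;
        for (int i = 0; i < samples; i++) {
            double x = rng.nextDouble();
            double y = rng.nextDouble();
            if (x * x + y * y <= 1.0) inside++;
        }
        // (quarter circle area) / (square area) = pi / 4
        System.out.println("pi is approximately " + 4.0 * inside / samples);
    }
}
```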

Create an AI-driven 2D/3D animated discrete event simulation engine capable of running heavily enhanced BPMN and BPSim pools and swim lanes. Enhance the simulator to decorate an augmented reality shared with humans, connected to DVS-driven SLAM and a cloud hive of shared cause-and-effect experience. This would grant us explainable open source AI in a solution far better than DARPA’s currently publicised plans.

The words I want to use come from some linguistics research I found on Google that describes a set of 65 words known as the semantic primes. These are words found in all languages; adding around 50 further words (the semantic molecules) and 120 grammar usage phrases gives what is known as the natural semantic metalanguage (NSM). This is the work of Professor Anna Wierzbicka and Professor Cliff Goddard. I believe this linguistic research has not been used within artificial intelligence research; see https://hemseye.org/wp/2018/08/26/hemseye-project-phase-1-shorter-presentation/. There is more description of this in other blog posts and on YouTube.
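A machine-readable starting point could be as simple as a data structure holding the primes. Here is a hypothetical Java sketch; the grouping and the small subset of words below only loosely follow Goddard and Wierzbicka’s published tables, so treat it as illustrative rather than the definitive 65-word inventory.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a machine-readable subset of the semantic primes,
// grouped by category. Illustrative only – not the full official inventory.
public class SemanticPrimes {
    static final Map<String, List<String>> PRIMES = Map.of(
        "substantives", List.of("I", "you", "someone", "something", "people", "body"),
        "mental predicates", List.of("think", "know", "want", "feel", "see", "hear"),
        "actions and events", List.of("do", "happen", "move"),
        "time", List.of("when", "now", "before", "after"),
        "space", List.of("where", "here", "above", "below"),
        "logical concepts", List.of("not", "maybe", "can", "because", "if")
    );

    public static void main(String[] args) {
        PRIMES.forEach((category, words) ->
            System.out.println(category + ": " + words));
    }
}
```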

I want to download and hack/experiment with the open source discrete event simulation engines listed on Wikipedia. A promising candidate could be JavaSim. I believe that simulation engine models could be recalled or created at run-time by an AI, based on cloud-based shared past experience (see the sketch below). Such a simulation engine connected to a live video feed could be used to create a shared human/AI perspective: an augmented reality of what the AI/robot has predicted / interpreted / “understood” using its simulation engine.
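As a sketch of the “recall a simulation model at run-time from shared experience” idea (entirely hypothetical names and structure, with an in-memory map standing in for the cloud store):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: an in-memory stand-in for a cloud store of shared
// past experience, from which an AI recalls simulation models at run-time.
public class ModelStore {
    // A simulation model is just a named placeholder in this sketch.
    public record SimulationModel(String name, String description) {}

    private final Map<String, SimulationModel> sharedExperience = new ConcurrentHashMap<>();

    // One client contributes a model learned from its own experience.
    public void share(String situationKey, SimulationModel model) {
        sharedExperience.put(situationKey, model);
    }

    // Another client facing a situation recalls a matching model, if any
    // client of the shared store has encountered something similar before.
    public Optional<SimulationModel> recall(String situationKey) {
        return Optional.ofNullable(sharedExperience.get(situationKey));
    }
}
```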

I want to animate these words as both 2D and 3D animations. Think of this effort as working like an animator creating the animation “questions” for an episode of the TV game show “Catchphrase”.

Rather than use the kind of animation engine they use for the TV show, I want to use something called a “discrete event simulation engine”. I used to work on developing one of these, so I know more about them than most. This is very commonly used and highly developed existing technology. With it we can share with a computer a human’s ability to understand: cause and effect, a knowledge of time, making predictions based on previous experience, testing what-if scenarios, problem solving and optimisation. Ultimately we would need to develop augmented reality between a human user and our AI.
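To show what a discrete event simulation engine does mechanically, here is a minimal sketch of an engine core (a toy of my own, not JavaSim’s API or the commercial tool I worked on): a simulation clock plus a priority queue of timestamped events, where executing one event can schedule further events – which is how cause-and-effect chains and what-if runs are modelled.

```java
import java.util.PriorityQueue;

// Minimal discrete event simulation core: the clock jumps from one
// scheduled event to the next rather than ticking in fixed steps.
public class MiniDes {
    interface Event { void execute(MiniDes sim); }

    private record Scheduled(double time, Event event) {}

    private final PriorityQueue<Scheduled> queue =
        new PriorityQueue<>((a, b) -> Double.compare(a.time, b.time));
    private double clock = 0.0;

    public void schedule(double delay, Event event) {
        queue.add(new Scheduled(clock + delay, event));
    }

    public void run() {
        while (!queue.isEmpty()) {
            Scheduled next = queue.poll();
            clock = next.time;            // jump straight to the event's time
            next.event.execute(this);     // an event may schedule more events
        }
    }

    public double now() { return clock; }

    public static void main(String[] args) {
        MiniDes sim = new MiniDes();
        // A cause (an arrival) schedules an effect (a service completion).
        sim.schedule(1.0, s -> {
            System.out.println("t=" + s.now() + ": customer arrives");
            s.schedule(2.5, s2 ->
                System.out.println("t=" + s2.now() + ": service complete"));
        });
        sim.run();
    }
}
```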

I then hope that these animated representations can be used as targets in 2D and 3D video recognition (or simultaneous localisation and mapping – SLAM). If we could develop algorithms to identify these animated patterns in video, then we would have a means of acquiring or relating knowledge from video, text and audio back into language animation and simulated understanding.

The simulation technologies I have described are based on existing industrial and business technology. I used to work as a software developer and designer for a discrete event simulation software tool supplier, so I am very well aware of what can be done with this technology; it used to be my job to extend it.

Other Ideas Out There On This Topic (“Phase 2 AI ??”):

Geoffrey Hinton, known as the father of deep learning, is the man originally behind the resurgence in the popularity of neural networks in AI research. He has stated that he is now “deeply suspicious” of the approach: “My view is throw it all away and start again,” and “I don’t think it’s how the brain works. We clearly don’t need all the labeled data.”

DARPA (The USA’s Military Research Arm)
They appear to want to take neural networks / deep learning (the things Geoffrey Hinton is losing faith in) and make them understandable. But no one knows how to do this yet, and no one yet knows how to properly express understanding. Currently it appears that DARPA want to use “hierarchical” “visual understanding tools” as an afterthought / add-on to visualise what machine learning does.

The people who do machine learning and deep learning are often very focused on the detailed bottom-up solution, and are perhaps missing the opportunity to understand and emulate how a human child experiences sense and thought, structures understanding, and uses these skills to acquire language and understanding. They also have a lot of investment money to do what they believe is the next step.

Have you any new ideas you would like to share with the group?
I believe that for AI to become viable, trusted and safe, it needs to be open source and shared. Whoever makes the significant contributions needed for true AI, and makes them free for everyone to use, will still do well out of it and stay employed (Tim Berners-Lee has done okay). Hopefully anyone inventing true AI will not keep it proprietary and create a monopoly with their invention. Whilst this might ultimately make them ridiculously rich, it is not likely to be that good for the rest of us when it comes to our own wealth and user experience.

Does anyone know much / anything about the following (for possible future work / sessions)?

  • Open Source Licensing Tips?
    Any tips on open source licensing and constraining usage to follow the project mission and ethics constraints?
  • Augmented Reality:

Make AI Happen Brighton – Live Webcast Planned on YouTube. Wednesday, September 12, 2018 (UK time, BST – GMT+1), 6:00 PM to 9:30 PM

Live YouTube Broadcast & Discussion
+ Post-Meeting YouTube Channel
Watch-Again Arrangements:

This second kickoff meeting of the Make AI Happen Brighton meetup group will hopefully be live on YouTube from my laptop webcam as I present to my AI interest group. Click here for the YouTube link and to set a reminder.

The signup for people who will physically attend is here: https://www.meetup.com/MakeAIHappenBrighton/events/251653940/

  1. We will be talking about trying to make AI happen.
  2. Could we bring Brighton, ourselves and the world some benefit from this new technology?
  3. Who Are You and Why Are You Here?
  4. What Else Do You Want To Talk About?
    Check out the headings I have listed below.
    Do we want to add to, delete or change this agenda?
    We could try to emulate the management of the Homebrew Computer Club
    – which brought the world the Apple I and Silicon Valley.
  5. Can we do a show and tell? We could all contribute on our own interests or share our own AI design experiences and problems.
  6. What about some fun stuff?
  7. Oh and there is free drink (and beer)
    – thanks Barclays.
    We do have to order our own food.

Let us talk for five minutes or less on each of the following topics…
Also, anyone could ask for five minutes of the group’s time. Any ideas or volunteers?

There are too many topics below to cover properly. We could time me speaking on each for 5 minutes or less; this would fill half the meeting time. We could then discuss how to prioritise the rest of this meeting and the next meeting.  In fact, we could take a consensus or vote on whether to delay, drop or add to this list of stuff to address, and expand the time we choose to allocate to any topic (another 5 or 10 minutes, an hour or more…).

This is how the pre-Silicon Valley Homebrew Computer Club operated
– an approach which helped produce the Apple I.

Many of the topics on this list are already mentioned somewhere on this website (sometimes only in draft).
So our efforts could help in extending this website and open source project.


  1. What is …
    the safest,
    most altruistic,
    most idealistic way to research, control and maintain an AI?
    See:
    https://hemseye.org/wp/ai-ethics-safety/
    https://hemseye.org/wp/mission/
  2. We will be discussing what DARPA (the USA military funding arm) are planning.
    https://www.youtube.com/watch?v=-O01G3tSYpU
    https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
    https://www.darpa.mil/program/explainable-artificial-intelligence
    What DARPA has published currently appears less ambitious and less well worked out than our HEMSEYE project plans. I have only skim-read their proposal so far. It should, however, be remembered that what DARPA has released may currently be more of a PR exercise than a detailed explanation of their real plans.
  3. How is self-awareness essential to understanding and consciousness?
    Is self-awareness dangerous?
    What is intent?
    What about all the other human-like abilities and intelligences – Peter Voss’s post?
  4. What is the HEMSEYE project? (The very quick version)
    The hive mind’s eye or HEMSEYE comes from the words HivE Mind’S EYE:
    The HEMSEYE Cloud
    Known experience reporting
    Unusual experience reporting
    Cause and effect knowledge
    What are HEMSEYE intelligent clients?
    Discrete event simulation (vs calculus & machine learning)
    Language understanding
    – a tiny linguistic acquisition bootstrap
    – simulation model designers.
    Time understanding.
    Explainable cause and effect and prediction understanding.

    • Playing a reverse of the TV show Catchphrase.
    • Playing the TV show Catchphrase.
    • The DVS event camera for lightweight SLAM.
    • New semantic SLAM targets for a background SLAM service.
    • Has computer vision research thrown away 3D data for too long?
    • Is using algebraic topology an opportunity for creating 3D high-speed HEMSEYE lookup hash functions?
    • Using a universal language grammar bootstrap.
    • Learning and language or all words.
  5. Why might the HEMSEYE project be better than what DARPA are planning?
  6. Could a HEMSEYE replace the world wide web?
  7. The IBM AI X-Prize – But We Would Need Funding
    I doubt it will happen.
  8. Funding Update From The Cabinet Office
  9. Coast 2 Coast
  10. Local Chamber Of Commerce
  11. Academics and Industry Not Approached
  12. Using Social Media & The Internet
    You might also want to join / follow the social media groups.
    Our Facebook page / group:
    https://www.facebook.com/makeaihappen
    https://www.facebook.com/groups/makeaihappen/
    Our Twitter account:
    https://twitter.com/HEMSEYE

A video and presentation slides of the first meeting can be found at: https://hemseye.org/wp/2018/08/07/make-ai-happen-brighton-meetup-launch-meeting/

NOTE: But this is quite long and detailed and a bit academic. If you want something a little easier going and quicker, try the shorter presentation I mentioned earlier: https://hemseye.org/wp/2018/08/26/hemseye-project-phase-1-shorter-presentation/
You might want to catch up with these links before the 11th if you are feeling highly motivated.

The Agenda / Things To Discuss For Meeting 2:

What are LAWS (Lethal Autonomous Weapon Systems)?
What could this be about? Perhaps a device sent to GPS coordinates to kill everyone there as instructed by its programming, without further human oversight.
There is a campaign to have these banned.

DARPA (the USA’s military research arm) is planning to spend billions on explainable AI.
Does that mean they want to make lethal autonomous weapon systems?
It is not clear.

Apparently Google has backed off working for the CIA on picture recognition after a number of employees protested and left the company. Picture recognition sounds innocent, but when it is being done as part of a drone weapon system, some people could become cautious about participating in the work.
Again, allegedly (as reported on the internet), it has been suggested that DARPA / the CIA have now called a conference of other AI companies, hoping to fill the research role Google chose not to pursue (after pressure from its employees).

If this is true, well done to the Google employees for resigning, and well done to Google for backing down.
Google was formed with a hippie-friendly attitude of “don’t be evil”.
But when Google was restructured, the mission statement of its new holding company was set to “Do the right thing” rather than “Don’t be evil”. How happy would the employees of Google be if they could control or vote on this? When it came to doing the right thing in helping the USA develop better drones, some employees felt this was not the right thing.

Google, like DARPA, has a hierarchical management structure. Is this a safe way to manage AI?

I am in no way suggesting that joining this group will make you any money, as I am committed to trying to create a charity-funded open source project. But if we work together, perhaps we can help make AI happen and bring some benefits and jobs to Brighton and Sussex.
Are there enough generous (or potentially naive) and rich enough people willing to give money to a pie-in-the-sky idea (that might be world-saving and really, really good)? If you are interested, sign up to watch.