Java etc Code/UML Shared Augmented Reality For Humans and AI

I want to start coding what we have been talking about at the Make AI Happen meetings that I run, which have been mentioned on this blog.  My first thought was to use UML, but I struggled with that until I went back to an old Ada / HOOD technique used for Eurofighter firmware development.  I explain this below…

What We Could Do:
If you are new to this project, I am preparing some WHAT / HOW Part 1 / HOW Part 2 / WHY information.  If you treat it as pre-reading and get up to speed on the project sooner, that would be great and very helpful, and would save time at the start of the meeting.

OO design by extracting nouns and verbs for objects and methods? (There is a little Java sketch of this just after this list.)
UML (& BPMN) design – using GenMyModel (are there better tools??).

Coding Java / C++ etc – any coding platform you fancy.
Code/hack – It does not need to be pretty at this stage.
Create / design tiny explainable / human-like AI prototypes/components.
– But first some group discussions and introductions.

No rules – no prizes – (no funding just open source and always open source).
The plan is to be creative and share ideas.
+ Talk about other fun stuff if you want.
+ Talk about what to do next.
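As a hedged illustration of the nouns-and-verbs idea (all names invented here, nothing agreed by the group): take a sentence like “the robot watches the person and predicts what they will do next”; the nouns become candidate classes and the verbs become candidate methods.

```java
// Sentence: "The robot watches the person and predicts what they will do next."
// Nouns -> candidate classes, verbs -> candidate methods.
// Purely illustrative names; nothing here is an agreed design.
class Person { String name; }

class Prediction { String expectedNextAction; }

class Robot {
    void watch(Person person) {
        // record observations about the person (details to be designed)
    }

    Prediction predictNextAction(Person person) {
        return new Prediction(); // placeholder until we know how prediction works
    }
}
```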

At our next monthly meeting:
Enjoy Barclays’ free refreshments, including wine and beer, plus tea, coffee, coke and fruit juices.
https://www.meetup.com/MakeAIHappenBrighton/events/251653980/
Wednesday, October 10, 2018 6:00 PM

Barclays Eagle Labs
1 Preston Road, BN1 4QU · Brighton


I had been considering trying to keep AI safe by limiting it to being an intelligent conversational agent that does what it is told.  To have a proper understanding of language it would need to emulate human intelligence and understanding by having a capacity to “read between the lines”. Linguistics researchers call this field of study pragmatics: what can be understood when you add context, intent, social cognition, prior knowledge and so on to a written phrase.  The study of the written phrase itself is the field of lexical semantics (apparently!).

Looking at this problem as a visual, software-engineering kind of thinker, I realise that we need an AI designed to have an understanding of the intent of others, as well as a capacity to develop its own intents in response to the actions of others.  This implies that before we get on to getting a computer to acquire language as words, the lexical semantics bit, we need a model for artificial self-awareness.  This self-awareness model is needed to do the more difficult pragmatics bit that gifts humans with the ability to acquire a language. I have become more confident in this conclusion after reading about work identifying the intellectual differences between great apes and humans, and between great ape and human society. Not being an academic or a university student, I almost certainly came across this on Wikipedia.  (Wikipedia! – academics heard sighing in horror across the internet… Could someone give me institutional online access?? – I would sign up for a partially taught research MPhil if you could teach me relevant stuff!)  There is a good summary here: https://en.wikipedia.org/wiki/Michael_Tomasello.  Reading it, what is written makes a lot of sense when you think about it from a social behaviour and evolutionary perspective.

So how do you create such a thing in software? I have just been trying to extract nouns and verbs from text descriptions of models of self-awareness. This is the approach taken when designing jet fighter control systems (Hierarchical Object-Oriented Design, or HOOD). See https://www.amazon.com/Hierarchical-Object-Ori…/…/013390816X

This does not quite work for me on this problem: the verbs imply too complex a set of contextually dependent actions. No problem – I just define them as interfaces that I can describe more exactly at a later date.
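As a minimal sketch of what I mean (my own placeholder names, not from the HOOD book): the nouns from the self-awareness text become types, and a verb that is still too context-dependent to implement, such as inferring someone else’s intent, is captured as an interface to be filled in later.

```java
import java.util.List;

// Nouns from the self-awareness description become types; the verb
// "infer intent" is too context-dependent to implement yet, so it is
// only declared as an interface. All names are illustrative placeholders.
interface Agent { String name(); }
record Action(Agent actor, String description) {}
record Intent(Agent who, String goal) {}

interface IntentRecogniser {
    /** Work out what another agent is trying to achieve from its observed actions. */
    Intent inferIntent(Agent other, List<Action> observedActions);
}
```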

————————————————
Here is an earlier, more detailed draft that some might have originally seen posted:

What Is The Route To Coding Language Understanding In a Computer?  (Can We Do This Next?)
We need to emulate (imitate) on a computer the human capacity for “self-awareness”, which I believe is the prerequisite for what linguists call “first language acquisition”. We develop our capacity to communicate from our ability to recognise the sense of self that we have, as well as the sense of self that others have.

In 1988, after hearing about the linguistic research work of Professor Anna Wierzbicka and Professor Cliff Goddard, Professor Gerald James Whitrow (a British mathematician, cosmologist and science historian working at Imperial College London) published a textbook in which he wrote, on page 11:

….despite the great diversity of existing languages and dialect, the capacity for language appears to be identical in all races. Consequently, we can conclude that man’s linguistic ability existed before racial diversification occurred.

G. J. Whitrow, Time in History: The Evolution of Our General Awareness of Time and Temporal Perspective, Oxford University Press, 1988.

I have been studying linguistics for a short while. I am a computer scientist (an academically and professionally qualified commercial software engineer – MSc + MCP x2) and a one-time research biochemist and applied genetic engineering PhD student at Cambridge University.

I believe it should be possible to go much further with this statement:

The potential capacity for human language understanding exists in each human’s pre-verbal conscious understanding of themselves, and also in how they can relate to, co-operate with and appreciate others (see Tomasello 1986–2009). These practical universal human capabilities are embodied in a pre-verbal capacity to understand the semantic primes and natural semantic metalanguage, and other similar related research ideas, described in the work of Professor Anna Wierzbicka and Professor Cliff Goddard.

AI Augmented Reality

What Do We Need?

A Quick Glossary Of Terms: (Draft)

  • Bayesian Network
    A probabilistic graphical model representing variables and their conditional dependencies.
  • Monte Carlo Method
    Estimating a result by repeated random sampling.
  • Discrete Event Simulation
    Modelling a system as a time-ordered sequence of events, each of which changes the system’s state.
  • BPM
    Business Process Modelling. A modelling diagram standard used in business to describe business processes. BPMN = Business Process Modelling Notation.
  • BPSIM () pools and swim lanes

Create an AI-driven 2D/3D animated discrete event simulation engine capable of running heavily enhanced BPMN and BPSIM () pools and swim lanes. Enhance the simulator to decorate an augmented reality shared with humans, connected to DVS-driven SLAM and a cloud “hive” of shared cause-and-effect experience, etc. This would give us explainable, open-source AI in a solution far better than DARPA’s currently publicised plans.
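A minimal sketch of how those pieces might hang together, assuming invented interface names throughout (none of these are real BPMN, BPSim or SLAM library APIs):

```java
import java.util.List;

// Hypothetical component boundaries for the idea above. Every name here is
// an assumption made for illustration, not an existing API.
interface ProcessModel { String describe(); }              // a BPMN-style process model

record PredictedEvent(long timeMillis, String description) {}

interface SimulationEngine {                               // the discrete event simulator
    List<PredictedEvent> run(ProcessModel model, long horizonMillis);
}

interface WorldMapper {                                    // e.g. a SLAM-style front end
    ProcessModel interpretScene(byte[] videoFrame);
}

interface AugmentedRealityView {                           // the shared human/AI overlay
    void decorate(List<PredictedEvent> predictions);
}
```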

The words I want to use come from some linguistics research I found on Google that describes a set of 65 words known as the semantic primes. These are words found in all languages which, together with 50 other words (the semantic molecules) and 120 grammar usage phrases, make up the natural semantic metalanguage (NSM). This is the work of Professor Anna Wierzbicka and Professor Cliff Goddard. I believe this linguistic research has not been used within artificial intelligence research; see: https://hemseye.org/wp/2018/08/26/hemseye-project-phase-1-shorter-presentation/. There is more on this in other blog posts and also on YouTube.
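As a very rough sketch of how the primes might first appear in code (only a hand-picked fragment of the published list of around 65 primes, grouped my own way):

```java
// A hand-picked subset of Wierzbicka / Goddard's semantic primes expressed as
// a Java enum. The full published list has around 65 primes; this fragment is
// illustrative only.
enum SemanticPrime {
    I, YOU, SOMEONE, SOMETHING, PEOPLE, BODY,       // substantives
    THINK, KNOW, WANT, FEEL, SEE, HEAR,             // mental predicates
    SAY, DO, HAPPEN, MOVE,                          // speech, actions, events
    GOOD, BAD, BIG, SMALL,                          // evaluators and descriptors
    BEFORE, AFTER, NOW, WHERE, BECAUSE, IF, NOT     // time, place, logic
}
```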

I want to download and hack/experiment with the open-source discrete event simulation engines listed on Wikipedia. A promising candidate could be JavaSim. I believe that simulation engine models could be recalled or created at run-time by an AI, based on cloud-based shared past experience. Such a simulation engine connected to a live video feed could be used to create a shared human/AI perspective and an augmented reality of what the AI/robot has predicted / interpreted / suggested / “understood” using its simulation engine.
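The “recall a model from shared experience” idea could start as something as small as the sketch below (I have not checked JavaSim’s real API, so none of this is tied to it; the names are my own placeholders):

```java
import java.util.Optional;

// Hypothetical interface for recalling previously learned simulation models
// from a shared (cloud) store of past experience. Names are illustrative only.
interface ProcessModel { String describe(); }   // same placeholder as in the earlier sketch

interface ExperienceStore {
    /** Look up a simulation model that matched a similar situation before. */
    Optional<ProcessModel> recall(String situationDescription);

    /** Save a model, and how well its predictions worked out, for others to reuse. */
    void share(ProcessModel model, double predictionAccuracy);
}
```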

I want to animate these words as both 2D and 3D animations. Think of this effort as being like working as an animator creating the animated “questions” for an episode of the TV game show “Catch Phrase”.

Rather than use the kind of animation engine they use for the TV show, I want to use something called a “discrete event simulation engine”. I used to work on developing one of these, so I know more about them than most. This is very commonly used and highly developed existing technology. With it we can share with a computer a human’s ability to understand cause and effect, a knowledge of time, making predictions based on previous experience, testing what-if scenarios, problem solving and optimisation. Ultimately we would need to develop augmented reality between a human user and our AI.
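For anyone who has not met one before, a discrete event simulation engine is, at heart, just a clock plus a time-ordered event queue. A stripped-down toy sketch (my own code for illustration, nothing to do with any particular commercial tool):

```java
import java.util.PriorityQueue;

// A toy discrete event simulator: events sit in a priority queue ordered by
// their timestamps and the clock jumps from one event to the next.
public class TinyDiscreteEventSim {
    record Event(long time, String description) {}

    private final PriorityQueue<Event> queue =
            new PriorityQueue<>((a, b) -> Long.compare(a.time(), b.time()));
    private long clock = 0;

    public void schedule(long time, String description) {
        queue.add(new Event(time, description));
    }

    public void run() {
        while (!queue.isEmpty()) {
            Event next = queue.poll();
            clock = next.time();                        // time advances event by event
            System.out.println("t=" + clock + ": " + next.description());
        }
    }

    public static void main(String[] args) {
        TinyDiscreteEventSim sim = new TinyDiscreteEventSim();
        sim.schedule(5, "person reaches the door");     // a predicted cause...
        sim.schedule(7, "door opens");                  // ...and its later effect
        sim.run();
    }
}
```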

I then hope that these animated representations can be used as targets in 2D and 3D video recognition (or simultaneous localisation and mapping – SLAM). If we could develop algorithms to identify these animated patterns in video, then we would have a means of acquiring or relating knowledge from video, text and audio back into language animation and simulated understanding.

The simulation technology I have described is based on existing industrial and business technology. I used to work as a software developer and designer for a discrete event simulation software tool supplier, so I am very well aware of what can be done with this technology, as it used to be my job to extend it.

Other Ideas Out There On This Topic (“Phase 2 AI ??”):

Geoffrey Hinton, known as the father of deep learning, is the man originally behind the resurgence in the popularity of neural networks in AI research. He has stated that he is now “deeply suspicious”: “My view is throw it all away and start again,” and “I don’t think it’s how the brain works. We clearly don’t need all the labeled data.”

DARPA (The USA’s Military Research Arm)
They appear to want to take neural networks / deep learning (the things Geoffrey Hinton has lost, or is losing, faith in) and make them understandable. But no one knows how to do this yet, and no one yet knows how to properly express understanding. Currently it appears that DARPA want to use “hierarchical” “visual understanding tools” as an afterthought / add-on to visualise what machine learning does.

The people who do machine learning and deep learning are often very focused on the detailed bottom-up solution, and are perhaps missing the opportunity to try to understand and emulate how a human child experiences sense and thought, structures understanding, and uses these skills to acquire language and understanding. They also have a lot of investment money to do what they believe is the next step.

Have you any new ideas you would like to share with the group?
I believe that for AI to become viable, trusted and safe it needs to be open source and shared. Whoever makes the significant contributions needed for true AI, and leaves them free for everyone to use, will still do well out of it and stay employed (Tim Berners-Lee has done okay). Hopefully anyone inventing true AI will not keep it proprietary and create a monopoly with their invention. Whilst this might ultimately make them/you ridiculously rich, it is not likely to be that good for the rest of us when it comes to our own wealth and user experience.

Does anyone know much / anything about the following (for possible future work / sessions)?

  • Open Source Licensing Tips?
Any tips on open source licensing and constraining usage to follow the project mission and ethics constraints?
  • Augmented Reality:
