Java etc Code/UML Shared Augmented Reality For Humans and AI

I want to start coding what we have been talking about at the Make AI Happen meetings that I run, which have been mentioned on this blog.  Then I thought of using UML, but I struggled with that until I thought of going back to an old Ada / HOOD (Eurofighter firmware) software development technique.  I explain this below…

What We Could Do:
If you are new to this project I am preparing some WHAT / HOW Part 1 / HOW Part 2 / WHY info.  If you want to treat this as pre-reading and get up to speed on the project all the sooner, that would be great and very helpful, and would save time at the start of the meeting.

OO design by extracting verbs and nouns for objects and methods?
UML (& BPMN) design – using GenMyModel (are there better tools??):

Coding Java / C++ etc – any coding platform you fancy.
Code/hack – It does not need to be pretty at this stage.
Create / design tiny explainable / human-like AI prototypes/components.
– But first some group discussions and introductions.

No rules – no prizes – (no funding just open source and always open source).
The plan is to be creative and share ideas.
+ Talk about other fun stuff if you want.
+ Talk about what to do next.

At our next monthly meeting:
Enjoy Barclays' free refreshments, including wine and beer plus tea, coffee, coke and fruit juices.
https://www.meetup.com/MakeAIHappenBrighton/events/251653980/
Wednesday, October 10, 2018 6:00 PM

Barclays Eagle Labs
1 Preston Road, BN1 4QU · Brighton


I had been considering trying to keep AI safe by limiting it to being an intelligent conversational agent with the ability to do what it is told.  To have a proper understanding of language it would need to emulate human intelligence and understanding by having a capacity to “read between the lines”. Linguistics researchers call this field of study pragmatics: what can be understood when you add context, intent, social cognition, prior knowledge and so on to a written phrase.  The study of the written phrase itself is the field of lexical semantics (apparently!).

Looking at this problem as a visual software engineering thinker, I realise that we need an AI designed to have an understanding of the intent of others as well as a capacity to develop its own intents in response to the actions of others.  This implies that before we get on to getting a computer to acquire language as words (the lexical semantics bit) we need a model for artificial self-awareness.  This self-awareness model is needed to do the more difficult pragmatics bit that gifts humans with the ability to acquire a language. I have become more confident in this conclusion after reading about work on identifying the intellectual differences between great apes and humans, and between great ape and human society. Not being an academic or a university student, I almost certainly came across this on Wikipedia.  (Wikipedia! – academics heard sighing in horror across the internet… Could someone give me institutional online access?? I would sign up for a partially taught research MPhil if you could teach me relevant stuff.)  There is a good summary here: https://en.wikipedia.org/wiki/Michael_Tomasello.  It is obvious in reading this that what is written makes a lot of sense when you think about it from a social behaviour and evolutionary perspective.

So how do you create such a thing in software? I have just been trying to extract nouns and verbs from text descriptions of models of self-awareness. This is the approach taken when designing jet fighter control systems (Hierarchical Object-Oriented Design, or HOOD). See https://www.amazon.com/Hierarchical-Object-Ori…/…/013390816X

This does not work for me on this problem: the verbs imply too complex a set of contextually dependent actions. No problem – I just define them as interfaces that I can describe more exactly at a later date.
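As a minimal sketch of what I mean (all the class and method names here are just illustrative, not a settled design), the nouns extracted from the description become classes and the awkward verbs become interface methods whose exact behaviour can be pinned down later:

    // Illustrative sketch only: nouns from a self-awareness description become
    // classes, verbs become interface methods to be defined properly later.

    // Noun: an agent that has intents of its own.
    class Agent {
        String name;
        Agent(String name) { this.name = name; }
    }

    // Noun: a placeholder for an intent; its structure is a later design decision.
    class Intent {
        String description;
        Intent(String description) { this.description = description; }
    }

    // Verbs whose context-dependent behaviour is deferred behind an interface.
    interface SelfAwareness {
        void recognise(Agent other);            // "recognise" another agent
        Intent inferIntent(Agent other);        // "infer" what the other agent wants
        Intent formIntent(Agent inResponseTo);  // "form" an intent of our own
    }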

————————————————
Here is an earlier, more detailed draft that some might have originally seen posted:

What Is The Route To Coding Language Understanding In a Computer?  (Can We Do This Next?)
We need to emulate (imitate) on a computer the human capacity for “self-awareness”, which I believe is the prerequisite for what linguists call “First Language Acquisition”. We develop our capacity to communicate from our ability to recognise the sense of self that we have, as well as the sense of self that others have.

In 1988, after hearing about the linguistic research work of Professor Anna Wierzbicka and Professor Cliff Goddard, Professor Gerald James Whitrow (a British mathematician, cosmologist and science historian working at Imperial College London) published a textbook in which he wrote, on page 11:

….despite the great diversity of existing languages and dialect, the capacity for language appears to be identical in all races. Consequently, we can conclude that man’s linguistic ability existed before racial diversification occurred.

G.J.Whitrow: Time in History: The evolution of our general awareness of time and temporal perspective.
Oxford University Press 1988.

I have been studying linguistics for a short while. I am a computer scientist (an academically and professionally qualified commercial software engineer – MSc + MCP x2) and a one-time research biochemist and applied genetic engineering PhD student at Cambridge University.

I believe it should be possible to go much further with this statement:

The potential capacity for human language understanding exists in each human’s pre-verbal conscious understanding of themselves, and also in how they can relate to, co-operate with and appreciate others (see Tomasello 1986-2009). These practical universal human capabilities are embodied by a pre-verbal capacity to understand the semantic primes and natural semantic metalanguage, and other similar related research ideas, described in the work of Professor Anna Wierzbicka and Professor Cliff Goddard.

AI Augmented Reality

What Do We Need?

A Quick Glossary Of Terms: (Draft)

  • Bayesian Network
  • Monte Carlo Method
  • Discrete Event Simulation
  • BPM
    Business Process Modelling. A modelling diagram standard used in business to describe business processes. BPMN = Business Process Modelling Notation


  • BPSIM () pools and swim lanes

Create an AI-driven 2D/3D animated discrete event simulation engine capable of running heavily enhanced BPMN and BPSIM () pools and swim lanes. Enhance the simulator to decorate an augmented reality shared with humans, connected to DVS-driven SLAM and cloud hive shared cause-and-effect experience. This would grant us explainable open source AI in a solution far better than DARPA’s currently publicised plans.

The words I want to use are from some linguistics research I found on Google that describes a set of 65 words known as the semantic primes. These are words found in all languages; adding 50 other words (the semantic molecules) and 120 grammar usage phrases gives what is known as the natural semantic metalanguage (NSM). This is the work of Professor Anna Wierzbicka and Professor Cliff Goddard. I believe this linguistic research has not been used within artificial intelligence research; see: https://hemseye.org/wp/2018/08/26/hemseye-project-phase-1-shorter-presentation/. There is more description of this in other blog posts and also on YouTube.

I want to download and hack/experiment with the open source discrete event simulation software engines listed on Wikipedia. A promising candidate could be JavaSim. I believe that simulation engine models could be recalled or created at run-time by an AI based on cloud-based shared past experience. Such a simulation engine, connected to a live video feed, could be used to create a shared human / AI perspective and an augmented reality of what the AI/robot has predicted / interpreted / suggested / “understood” using its simulation engine.

I want to animate these words as both 2D and 3D animations. Think of this effort as like working as an animator creating the animation “questions” for an episode of the TV game show “Catch Phrase”.

Rather than use the kind of animation engine they use for the TV show, I want to use something called a “discrete event simulation engine”. I used to work on developing one of these, so I know more about them than most. This is very commonly used and highly developed existing technology. With this technology we can share with a computer a human’s ability to understand cause and effect, a knowledge of time, making predictions based on previous experience, testing what-if scenarios, problem solving and optimisation. Ultimately we would need to develop augmented reality between a human user and our AI.
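To make “discrete event simulation engine” a little more concrete, here is a minimal sketch of the core idea: an event queue ordered by simulated time, with the clock jumping from one event to the next. All of the names here are illustrative and are not taken from JavaSim or any other particular tool.

    import java.util.PriorityQueue;

    // Minimal discrete event simulation core: events are scheduled at future
    // simulated times and processed in time order, advancing a simulation clock.
    public class TinyDes {
        // An event is simply an action that happens at a given simulated time.
        static class Event implements Comparable<Event> {
            final double time;
            final Runnable action;
            Event(double time, Runnable action) { this.time = time; this.action = action; }
            public int compareTo(Event other) { return Double.compare(this.time, other.time); }
        }

        private final PriorityQueue<Event> queue = new PriorityQueue<>();
        private double clock = 0.0;

        public void schedule(double delay, Runnable action) {
            queue.add(new Event(clock + delay, action));
        }

        public void run() {
            while (!queue.isEmpty()) {
                Event next = queue.poll();
                clock = next.time;      // jump straight to the next event: "discrete" time
                next.action.run();      // cause -> effect at this instant
            }
        }

        public double now() { return clock; }

        public static void main(String[] args) {
            TinyDes sim = new TinyDes();
            // A toy cause-and-effect chain: something moves, then something happens.
            sim.schedule(1.0, () -> System.out.println("t=" + sim.now() + " someone MOVEs"));
            sim.schedule(2.5, () -> System.out.println("t=" + sim.now() + " something HAPPENs"));
            sim.run();
        }
    }

Real engines add entities, resources, statistics and animation on top, but this event-queue-plus-clock loop is what gives the cause-and-effect and time understanding described above.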

I then hope that these animated representations can be used as targets in 2D and 3D video recognition (or simultaneous localisation and mapping – SLAM). If we could develop algorithms to identify these animated patterns in video then we would have a means of acquiring or relating knowledge from video, text and audio back into language animation and simulated understanding.

The simulation technology I have described is based on existing industrial and business technology. I used to work as a software developer and designer for a discrete event simulation software tool supplier. I am therefore very well aware of what can be done with this technology, as it used to be my job to extend it.

Other Ideas Out There On This Topic (“Phase 2 AI ??”):

Geoffrey Hinton, known as the father of deep learning, is the man originally behind the resurgence in the popularity of neural networks in AI research. He has stated that he is now “deeply suspicious” of the approach: “My view is throw it all away and start again,” “I don’t think it’s how the brain works. We clearly don’t need all the labeled data.”

DARPA (The USA’s Military Research Arm)
They appear to want to take neural networks / deep learning (the things Geoffrey Hinton has lost / is losing faith in) and make them understandable. But no one knows how to do this yet, and no one yet knows how to properly express understanding. Currently it appears that DARPA want to use “hierarchical” “visual understanding tools” as an afterthought / add-on to visualise what machine learning does.

The people who do machine learning and deep learning are often very focused on the detailed bottom-up solution, and are perhaps missing the opportunity to try to understand and emulate how a human child experiences sense and thought, structures understanding and uses these skills to acquire language and understanding. They also have a lot of investment money to do what they believe is the next step.

Have you any new ideas you would like to share with the group?
I believe that for AI to become viable, trusted and safe it needs to be open source and shared. Whoever makes the significant contributions needed for true AI, and makes them free for everyone to use, should still do well out of it and stay employed (Tim Berners-Lee has done okay). Hopefully anyone inventing true AI will not keep it proprietary and create a monopoly with their invention. Whilst this might ultimately make them/you ridiculously rich, it is not likely to be that good for the rest of us when it comes to our own wealth and user experience.

Does anyone know much / anything about the following (for possible future work / sessions)?

  • Open Source Licensing Tips?
    Any tips on open source licensing and constraining usage to follow the project mission and ethics constraints?
  • Augmented Reality:

Make AI Happen Brighton – Live Webcast Planned on Youtube. Wednesday, September 12, 2018 (UK time BST – GMT+1) 6:00 PM to 9:30 PM

Live Youtube Broadcast & Discussion
+ Post Meeting Youtube Channel
Watch Again Arrangements:

This is the second kickoff meeting of the Make AI Happen Brighton meetup group. It will hopefully be live on YouTube from my laptop webcam as I present to my AI interest group. Click here for the YouTube link and to set a reminder.

The signup for people who will physically attend is here: https://www.meetup.com/MakeAIHappenBrighton/events/251653940/

  1. We will be talking about trying to make AI happen.
  2. Could we bring Brighton, ourselves and the world some benefit from this new technology?
  3. Who Are You and Why Are You Here?
  4. What Else Do You Want To Talk About?
    Check out the headings I have listed below.
    Do we want to add, delete or change this agenda?
    We could try to emulate the management of the Homebrew Computer Club
    – which brought the world the Apple I and Silicon Valley.
  5. Can we do a show and tell of stuff? We could all contribute on our own interests or share our own AI design experiences and problems.
  6. What about some fun stuff?
  7. Oh and there is free drink (and beer)
    – thanks Barclays.
    We do have to order our own food.

Let us talk for five minutes or less on each of the following topics…
Also, anyone can ask for five minutes of the group's time. Any ideas or volunteers?

There are too many topics to cover properly below. We could time me speaking on each for 5 minutes or less; this would fill half the meeting time. We could then discuss how to prioritise the rest of the meeting and the next meeting.  In fact, we could take a consensus or vote on whether to delay, drop or add to this list of stuff to address, and expand the time we choose to allocate to any topic (another 5 or 10 minutes, an hour or more…).

This is how the pre-Silicon Valley Homebrew Computer Club operated
– which helped produce the Apple I with this approach.

Many of the topics on this list are already mentioned somewhere on this website (sometimes only in draft).
So our efforts could help in extending this website and open source project.


  1. What is …
    the safest,
    most altruistic,
    most idealistic way to research, control and maintain an AI?
    See:
    https://hemseye.org/wp/ai-ethics-safety/
    https://hemseye.org/wp/mission/
  2. We will be discussing what DARPA (the USA military funding arm) are planning.
    https://www.youtube.com/watch?v=-O01G3tSYpU
    https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
    https://www.darpa.mil/program/explainable-artificial-intelligence
    What DARPA has published currently appears less ambitious and less well worked out than our HEMSEYE project plans. I have only skim-read their proposal so far. It should, however, be remembered that what DARPA has released may currently be more of a PR exercise than a detailed explanation of their real plans.
  3. How is self-awareness essential to understanding and consciousness?
    Is self awareness dangerous?
    What is intent?
    What about all the other human-like abilities and intelligences – Peter Voss's post?
  4. What is the HEMSEYE project? (The very quick version)
    The hive mind's eye or HEMSEYE comes from the words HivE Mind’S EYE:
    The HEMSEYE Cloud
    Known experience reporting
    Unusual experience reporting
    Cause and effect knowledge
    What are HEMSEYE intelligent clients?
    Discrete event simulation (vs calculus & machine learning)
    Language understanding
    – a tiny linguistic acquisition bootstrap
    – simulation model designers.
    Time understanding.
    Explainable cause and effect and prediction understanding.

    • Playing a reverse of the TV show Catch Phrase.
    • Playing the TV show Catch Phrase.
    • The DVS event camera for lightweight SLAM
    • New semantic SLAM targets for a background SLAM service.
    • Has computer vision research thrown away 3D data for too long?
    • Is using algebraic topology an opportunity for creating 3D high-speed HEMSEYE lookup hash functions?
    • Using a universal language grammar bootstrap
    • Learning and language or all words
  5. Why might the HEMSEYE project be better than what DARPA are planning?
  6. Could a HEMSEYE replace the world wide web?
  7. The IBM AI X-Prize – But We Would Need Funding
    I doubt it will happen.
  8. Funding Update From The Cabinet Office
  9. Coast 2 Coast
  10. Local Chamber Of Commerce
  11. Academics and Industry Not Approached
  12. Using Social Media & The Internet
    You might also want to join / follow the social media groups.
    Our Facebook page / group:
    https://www.facebook.com/makeaihappen
    https://www.facebook.com/groups/makeaihappen/
    Our Twitter account:
    https://twitter.com/HEMSEYE

A video and presentation slides of the first meeting can be found at: https://hemseye.org/wp/2018/08/07/make-ai-happen-brighton-meetup-launch-meeting/

NOTE: But this is quite long and detailed and a bit academic. If you want something a little easier going and quicker, try the shorter presentation I mentioned earlier: https://hemseye.org/wp/2018/08/26/hemseye-project-phase-1-shorter-presentation/
You might want to catch up with these links before the 11th if you are feeling highly motivated.

The Agenda / Things To Discuss For Meeting 2:

What are LAWS (Lethal Autonomous Weapons)?
What could this be about? Perhaps a device that goes to GPS coordinates and kills everyone, as instructed by its programming, without further human oversight.
There is a campaign to have these banned.

DARPA (The USA Military Arm Controlling AI research) are planning to spend billions on explainable AI.
Does that mean they want to make lethal autonomous weapon systems?
It is not clear.

Apparently Google have backed off from working for the CIA on picture recognition after a number of employees protested and left the company. Picture recognition sounds innocent, but when it is being done as part of a drone weapon system some people become cautious about participating in this work.
Again allegedly (reported on the internet), it has been suggested that DARPA / the CIA have now called a conference for other AI companies hoping to fill the research role Google chose not to pursue (after pressure from their employees).

If this is true, well done to the Google employees for resigning, and good for Google for backing down.
Google was formed with a hippie-friendly attitude to do no evil.
But when Google was restructured, the new mission statement of their new holding company was set to “Do the right thing” rather than “Do no evil”. How happy would the employees of Google be if they could control or vote on this? When it came to doing the right thing in helping the USA develop better drones, some employees felt this was not the right thing.

Google, like DARPA, has a hierarchical management structure. Is this a safe way to manage AI?

I am in no way suggesting that joining this group will make you any money, as I am committed to trying to create a charity-funded open source project. But if we work together perhaps we can help make AI happen and bring some benefits and jobs to Brighton and Sussex.
Are there enough generous, or potentially naive and rich enough, people willing to give money to a potentially pie-in-the-sky idea (that might be world-saving and really, really good)? If you are interested, sign up to watch.

HEMSEYE Project Phase 1 Shorter Presentation

Below is a shorter summary presentation of the first phase of the HEMSEYE Project

This is an incomplete draft presentation.

Please send me feedback so that I can improve it:

Make AI Happen Brighton Meetup – Launch Meeting

Make AI Happen Brighton – AI Interest Group Kick Off Meeting Wednesday 08/08/2018

Development Plans For Implementing AI/AGI

Barclays will provide food for this first meeting. At every meeting they will provide liquid refreshments, including free beer. The meetup will open at 18:00 with a start on the first talk at 18:15-18:30 (I have to travel from Southwater near Horsham).

FIRST BRIEFLY
– What Should We Do As A Group ?
– Who Are You?
– Why Are You Here ?

Development Plans For Implementing AI/AGI

Creating a means for a robot to become self-aware in order to acquire any human language using artificial consciousness (Is It Safe?).

Just The Presentation Slides – No Audio
A 5 Video Youtube Playlist Click this link for details.
Video One Is Below (intro missing)

Summary
Using a model of Wierzbicka & Goddard's 65 semantic primes and their semantic metalanguage / minimal language as a bootstrap for Chomsky's Language Acquisition Device (LAD) and Universal Grammar, plus Tomasello's (2005, 2009) social, tool, self and other awareness, in an extensible discrete event simulation framework and virtual world simulation.
Modelling cause and effect and making predictions using Discrete Event Simulation (DES).
Using semantic prime context / model identification (like a Skinner box) in a visual background service as a language acquisition and understanding bootstrap in a virtual real-world real-time simulation.
Cloud-shared learning from a real-time virtual world: the “HivE Mind’S EYE” (the HEMSEYE, or www mk2?).

What Next…
See the next posts, including details about our next meeting, which should be live on YouTube.


Some Fun / Chat / Networking – What Should We Do Next?

If We Had Time :
AI Safety / Ownership / Control / Governance / Acceptance / Society & Social Issues.
Raspberry PI projects ?

ROS – Robot Operating System
A YouTube channel.
The Construct's “ROS Ambassador Scheme”, via the Robot Ignite Academy, is allowing me access to free online training.
They also do paid for training.
We could have regular free online training available here.

DARPA – Some BRIEF COMMENTS On Their Plans For Making Explainable AI
The next generation of US military led AI research. See
https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
https://www.darpa.mil/program/explainable-artificial-intelligence
or https://www.youtube.com/watch?v=-O01G3tSYpU

IBM AI X-Prize
Competition application news.

A Review Of Peter Voss’s Medium Article On Advanced General Intelligence
Can we just aim at a subset of what he suggests with a Human-Like Intelligence (HLI) implementation? I think we can do this by using and extending the operation of a discrete event simulation engine (see https://hemseye.org/wp/2018/08/07/make-ai-happen-brighton-meetup-launch-meeting/)

Review The RSA Report On AI
See https://www.thersa.org/globalassets/pdfs/reports/rsa_the-age-of-automation-report.pdf

The HEMSEYE Project Youtube Channel
I have a HEMSEYE project youtube channel which I will be able to brand with the HEMSEYE name once I get 100 subscribers. 


What next ???

More Q&A
Free ROS training sessions?
Next meetings ?
Hackathon?
Sponsorship?
Academic involvement?
iCub humanoid robot open source project collaboration – I spoke to them last year about this…?
Suggestions?
Panel discussions?

Project History Since: LIDA Artificial Consciousness and my 2013 PhD Project Proposal Accepted at Birmingham University

In creating and founding a research project effort it is important to try to impress people who might be motivated to help or take an interest in the project. So here is some history to demonstrate this project's longevity and its development from an interest in artificial consciousness to modelling language understanding.

I think in about 2011 I read an article in New Scientist about “Global Workspace Theory” as a theory to explain consciousness. The article in question went on to describe the work being done at the University of Memphis on artificial cognition with the IDA and LIDA projects.

In 2013 I managed to get a PhD project proposal accepted at Birmingham University. My original project proposal is available for download here: Download 2013 PhD Project Original Proposal Sent to Birmingham University.

I originally applied to Dr Behzad Bordbar as my supervisor after I had come across some of his work regarding the translation of free text into UML and OCL. The proposal I have made available for download is my original proposal to Dr Bordbar. I actually re-wrote this initial proposal into a PhD project application which included some input from Dr Bordbar and from Dr Mark Lee, who agreed to be a joint supervisor on the project with Dr Bordbar. In looking to do this PhD I was hoping to find an industrial partner who would fund me sufficiently so that I could maintain a commercial developer's salary. This effort was not successful. In frustration I looked at raising funds via Kickstarter to create my own research company. Dr Bordbar was reluctant for me to proceed on this basis, I think mainly because having to raise funds and do a PhD would be too much to do all at once. We therefore paused our efforts at pursuing this project.

Since making this PhD project application in 2013 I have:

  • Continued to study Artificial Intelligence in my spare-time.
  • Created this website and blog to document the development of my research ideas.
  • Learned a great deal more about many topics peripheral to what would be considered mainstream and accepted in the field of artificial intelligence research. I have attempted to do this as I am unlikely to be able to compete with full-time academics by following the mainstream.
  • Created two meetup groups to try to encourage local hobby interest in Artificial Intelligence
  • My original group was Haywards Heath based.

Bootstrapping Computerised Language Understanding With Semantic Primes

I believe that a child’s need to acquire language is about the intent or a need to communicate ideas. Therefore the ideas come first before language.  By first developing an AI’s self awareness of concepts you can get a mechanism for bootstrapping language acquisition and eventually start to simulate an artificial consciousness.

To behave like a human I believe an AI will need to learn like a child. If this is the case then it would need to learn about the world and then learn language by speaking and listening. Learning to read is then a secondary activity.

There are a set of universal, indivisible language concepts (known as semantic primes – see later) that can be thought of as an understanding of the world that cannot be defined with language. These pre/non-verbal concepts need to be learnt from the environment, from the experience of having a consciousness and from social interaction.  If you could define a model for each semantic prime using physical observation or some mechanistic knowledge of the world, it should be possible to define these concepts for yourself and for an AI before any language had been added to describe them.

Semantic primes are grouped by category.  The first category are the “Substantives” (I, YOU, SOMEONE/PERSON, PEOPLE).  To give a robot understanding of these particular words you would need to identify the underlying process and data that gives rise to these word definitions.  In this case the process is “Recognition”.  These “substantive” words are all the outcome of a recognition process.  So to understand the substantives conceptually a robot will need to implement a recognition process that correctly identifies a substantive.  Another group of semantic primes are the relational substantives; these are SOMETHING/THING, BODY, KIND, PART.   These are actually less specialised versions of the first “Substantives” group that just relate to objects rather than people.  I believe all the semantic primes could be understood by identifying their underlying processing and data storage and also by building them into a learning hierarchy.  Therefore “relational substantive primes” would be learnt before the “substantive primes”.
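As a rough sketch of what I mean by identifying the underlying process (all of the names here are hypothetical, just to make the idea concrete), the substantives could be modelled as the possible outcomes of a recognition process:

    // Hypothetical sketch: the substantive semantic primes modelled as the
    // outcomes of a recognition process, as described in the text above.
    enum Substantive { I, YOU, SOMEONE, PEOPLE }
    enum RelationalSubstantive { SOMETHING, BODY, KIND, PART }

    // Placeholder for whatever sensor data the robot actually has.
    class Observation {
        double[] features;
    }

    // The "underlying process" for the substantives: something observed is
    // recognised either as an object (relational substantive) or, with more
    // specialised processing, as a person or the self (substantive).
    interface Recognition {
        RelationalSubstantive recogniseThing(Observation obs);
        Substantive recognisePerson(Observation obs);
    }

The learning hierarchy idea then amounts to implementing recogniseThing before recognisePerson.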

(The following is from Wikipedia)

Semantic primes or semantic primitives are semantic concepts that are innately understood, but cannot be expressed in simpler terms. They represent words or phrases that are learned through practice, but cannot be defined concretely. For example, although the meaning of “touching” is readily understood, a dictionary might define “touch” as “to make contact” and “contact” as “touching”, providing no information if neither of these words are understood.

The concept of innate semantic primes was largely introduced by Anna Wierzbicka‘s book, Semantics: Primes and Universals.

Semantic primes represent universally meaningful concepts, but to have meaningful messages, or statements, such concepts must combine in a way that they themselves convey meaning. Such meaningful combinations, in their simplest form as sentences, constitute the syntax of the language.

Wierzbicka provides evidence that just as all languages use the same set of semantic primes, they also use the same, or very similar syntax. She states: “I am also positing certain innate and universal rules of syntax-not in the sense of some intuitively unverifiable formal syntax a la Chomsky, but in the sense of intuitively verifiable patterns determining possible combinations of primitive concepts(Wierzbicka, 1996).” She gives one example comparing the English sentence, “I want to do this”, with its equivalent in Russian. Although she notes certain formal differences between the two sentence structures, their semantic equivalence emerges from the “….equivalence of the primitives themselves and of the rules for their combination.
This work [of Wierzbicka and colleagues] has led to a set of a highly concrete proposals about a hypothesised irreducible core of all human languages. This universal core is believed to have a fully ‘language-like’ character in the sense that it consists of a lexicon of semantic primitives together with a syntax governing how the primitives can be combined (Goddard, 1998).

The semantic primes by category are:

Substantives

I, YOU, SOMEONE/PERSON, PEOPLE

Relational Substantives

SOMETHING/THING, BODY, KIND, PART

Determiners

THIS, THE SAME, OTHER

Quantifiers

ONE, TWO, SOME, ALL, MANY/MUCH

Evaluators

GOOD, BAD

Descriptors

BIG, SMALL

Mental predicates

THINK, KNOW, WANT, FEEL, SEE, HEAR

Speech

SAY, WORDS, TRUE

Actions, Events, Movement, contact

DO, HAPPEN, MOVE

Existence, Possession

THERE IS/EXIST, HAVE

Life and Death

LIVE, DIE

Time

WHEN/TIME, NOW, BEFORE, AFTER, A LONG TIME, A SHORT TIME, FOR SOME TIME, MOMENT

Space

WHERE/PLACE, HERE, ABOVE, BELOW, FAR, NEAR, SIDE, INSIDE, TOUCH (CONTACT)

Logical Concepts

NOT, MAYBE, CAN, BECAUSE, IF

Intensifier, Augmenter

VERY, MORE

Similarity

LIKE/WAY

We have been looking at process modelling and at producing a world view on which to hang an understanding of our semantic prime definitions.  We will be looking further at this work and at how we can build a software object model / AI that describes the world using conceptual awareness.
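As one possible, purely illustrative starting point for such an object model, the primes listed above could be captured as plain data grouped by category, ready to be wired up to processes like the recognition sketch earlier:

    import java.util.List;
    import java.util.Map;

    // Illustrative only: the semantic primes listed above captured as plain data,
    // grouped by category, as a starting point for a conceptual object model.
    public class SemanticPrimes {
        public static final Map<String, List<String>> BY_CATEGORY = Map.ofEntries(
            Map.entry("Substantives", List.of("I", "YOU", "SOMEONE/PERSON", "PEOPLE")),
            Map.entry("Relational substantives", List.of("SOMETHING/THING", "BODY", "KIND", "PART")),
            Map.entry("Determiners", List.of("THIS", "THE SAME", "OTHER")),
            Map.entry("Quantifiers", List.of("ONE", "TWO", "SOME", "ALL", "MANY/MUCH")),
            Map.entry("Evaluators", List.of("GOOD", "BAD")),
            Map.entry("Descriptors", List.of("BIG", "SMALL")),
            Map.entry("Mental predicates", List.of("THINK", "KNOW", "WANT", "FEEL", "SEE", "HEAR")),
            Map.entry("Speech", List.of("SAY", "WORDS", "TRUE")),
            Map.entry("Actions, events, movement, contact", List.of("DO", "HAPPEN", "MOVE")),
            Map.entry("Existence, possession", List.of("THERE IS/EXIST", "HAVE")),
            Map.entry("Life and death", List.of("LIVE", "DIE")),
            Map.entry("Time", List.of("WHEN/TIME", "NOW", "BEFORE", "AFTER", "A LONG TIME",
                                      "A SHORT TIME", "FOR SOME TIME", "MOMENT")),
            Map.entry("Space", List.of("WHERE/PLACE", "HERE", "ABOVE", "BELOW", "FAR", "NEAR",
                                       "SIDE", "INSIDE", "TOUCH (CONTACT)")),
            Map.entry("Logical concepts", List.of("NOT", "MAYBE", "CAN", "BECAUSE", "IF")),
            Map.entry("Intensifier, augmenter", List.of("VERY", "MORE")),
            Map.entry("Similarity", List.of("LIKE/WAY"))
        );
    }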

Could you help with co-operation input or funding?  See elsewhere on this website for further details.

Experiences Of Installing Ubuntu ROS and Gazebo on A Raspberry PI 2 (Updated July 2016)

I am writing these notes to summarise my experiences of installing Ubuntu on a Raspberry PI and getting ROS (the robot operating system) and Gazebo (the robot simulator) running mainly for the benefit of my AI and Robotics meetup group.

When I last tried I had not managed to get ROS running on a PI.  There is however a new ROS version out, so I will need to try again.  The new version, rather than requiring you to update the OS on your PI, actually just supports Raspbian directly (see http://wiki.ros.org/ROSberryPi/Setting%20up%20ROS%20on%20RaspberryPi).

I have managed to install ROS on my laptop using the latest Ubuntu 16.04 LTS (LTS = Long Term Support), and this time got both ROS and Gazebo to install.

The old instructions involved first installing Ubuntu 14.04.03, but Ubuntu updates concerning HTTPS security had caused this to break.  DO NOT USE the Raspberry PI 2 Ubuntu install at http://wiki.ros.org/indigo/Installation/UbuntuARM. DO NOT USE http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Indigo%20on%20Raspberry%20Pi (although of course these links might get fixed at some point).  Historically I also got this same HTTPS problem when trying to install ROS on the PI with Raspbian, and also on a laptop running Ubuntu 14.04.03.  The errors I saw are described on the web and had not been fixed for over a year.  Issue details: running the command sudo rosdep init gave the error "Website may be down."  I got a bit further by taking the manual copy / setup step described at https://github.com/ros/rosdistro/issues/9721, but then hit a later problem described at http://answers.ros.org/question/234425/how-to-fix-error-when-running-rosdep-update.  You can avoid all these issues, on a laptop at least, by using the latest Ubuntu LTS and the new ROS version I described above.  BUT I had found that my working version of Eclipse / Papyrus on Ubuntu 14.04.03 had problems with 16.04 LTS, so I currently have both Ubuntu versions installed as a dual boot: 14.04.03 for Papyrus / UML development and 16.04 for running ROS / Gazebo.  Not very satisfactory, but no doubt the issues will get fixed.

You could run NOOBS (the new out-of-the-box OS installer) or buy / make an SD card for Raspbian, BUT you would get a lot of extras that are designed to teach kids programming.  With the latest ROS updates I will be sticking with Raspbian for the time being.  Maybe someone can suggest tips for stripping the OS down to leave just the techie, non-kiddie stuff.

———————

Installing Ubuntu ROS And Gazebo (The  Robot Operating System and Simulator) On A Raspberry PI 2

(<- Learn some basic Linux first???  I did a course on Unix years ago when I used it professionally, so did not bother.  Tips anyone? – You could try http://www.ee.surrey.ac.uk/Teaching/Unix/)

Quick Tips
– I found I needed to reboot my home hub to get wireless + ethernet networking going.
– I have a Bluetooth Logitech keyboard with an integrated mouse pad that operates with my PI, for about £30.
I tried a smaller, cheaper, game-console-like one but when it arrived it was broken.
You could spend about £10 on such a thing from Amazon but the keys are tiny.
NOTE! Apparently not all USB keyboards work so check it out first or buy one that is for the PI (I got mine at Maplin)
– I got a PI wifi USB dongle off Amazon for next to nothing.  It just plugs in and works.
– I got a 3.2 inch PI touch screen for £16!  It works with Raspbian (the NOOBS-installed version) but so far not with the recommended Ubuntu install.
I have followed some instructions but it has not worked.  I will be putting in a support request.
– Creating an SD Card with a new PI operating system on it https://www.raspberrypi.org/documentation/installation/installing-images/windows.md
– I missed a step during the install about resizing the second partition after reboot and I soon got an out of disk space error.  Be careful.
– I currently use an HDMI cable from my television to my PI to allow me to find out my PI's IP address, but if my PI auto-emailed me its IP address when it booted up I could simply connect to it from my laptop using PuTTY (terminal emulator software you can download) or VNC (to get a connection to a GUI running on the PI).  I initially chose to install Kubuntu but later backed off on this as my laptop is old and Kubuntu was a bit slow.

For a text editor, I always used to use vi when I used Unix before – it is always guaranteed to be available.  There is now something called nano (but do not forget to run it as sudo nano, as unless you are in your default $HOME folder you will not be able to save; cd $HOME will return you home if you have strayed into /etc/network etc.).

Laptop Access https://www.raspberrypi.org/documentation/remote-access/vnc/

Networking ???
– ??? Go anywhere laptop integration with no screen?

  1. Auto email the PI IP address?? (See the sketch after this list.)  I looked at using WhatsApp but the ports this app uses were blocked in a local pub.
  2. Get support for two wireless network adapters on your laptop and get your PI to access the web over wireless to a fixed IP address on your laptop, then access the web via the second wireless shared internet connection on your laptop.
  3. URL access using dynamic DNS for remote internet access.  But you need to configure your wireless router to allow the dynamic IP address configuration through, which is not going to work in the pub???
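As a minimal sketch of the first option (purely illustrative: actually emailing the address would also need a mail library such as JavaMail, which is not shown), a small program run at boot could find the PI's non-loopback IPv4 address to report:

    import java.net.InetAddress;
    import java.net.NetworkInterface;
    import java.util.Collections;

    // Illustrative sketch: print the PI's non-loopback IPv4 addresses so that a
    // boot-time script could report them (e.g. by email) instead of needing a screen.
    public class ReportIp {
        public static void main(String[] args) throws Exception {
            for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                if (!nic.isUp() || nic.isLoopback()) continue;
                for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                    if (!addr.isLoopbackAddress() && addr.getAddress().length == 4) {
                        System.out.println(nic.getName() + ": " + addr.getHostAddress());
                    }
                }
            }
        }
    }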

Implementing Autostart Commands ???

Further documentation – More news and comment Soon

http://wiki.ros.org/ROS/Tutorials

Gazebo The Robot Simulator

I am not yet sure it is even a good idea to put Gazebo on a PI, so I have tried and succeeded in getting it running on my laptop instead.  You can run Gazebo without ROS on Ubuntu 14.04.03:
http://wiki.ros.org/gazebo_ros_pkgs
http://gazebosim.org/tutorials
http://gazebosim.org/tutorials/?tut=ros_wrapper_versions#GazeboversionsandROSintegration

Also, if you want to run the binaries on the PI, only up to version 2 is possible, although version 7 is available if you build from source.  I failed to do this on my laptop when I last tried.  I think I will stick to running Gazebo on my Ubuntu laptop instead of messing with it on my PI.  I am seeing the PI as a delivery target rather than a development environment.

Actually what I am really after is being able to understand the object models available from the world / object mapping implementations within ROS:

http://roboearth.org
http://robohow.eu/
http://lider-project.eu/?q=what-is-lider
http://www.knowrob.org/

We have had quite a few meetups on this knowledge modelling / understanding topic, which I will have to write up.  More soon…

Making The Semantic Web Work

From a LinkedIn Artificial Intelligence group I am in.
I commented on the post:

Paul Houle
Applying Schemas for Natural Language Processing, Distributed Systems, Classification and Text Mining and Data Lakes

Making The Semantic Web Work

http://www.slideshare.net/paulahoule/making-the-semantic-web-work

James Kitching

Hi Paul

I have been working my way through your slide presentation which I am very much enjoying. I have so far got to slide 38 and have a number of comments (Although perhaps I should have waited till I got to the end!):

1) You mentioned the maturity of BPM in understanding and modelling processes. But you have (so far) not mentioned the far greater maturity of the discrete event simulation application area (see https://en.wikipedia.org/wiki/Discrete_event_simulation). This is an area similar to the Allen Algebra you did mention (https://en.wikipedia.org/wiki/Allen’s_interval_algebra), where you comment “A complete theory is not fully developed but their are some pretty good tools available”. I think it is likely that discrete event simulation tools will indirectly (after a descriptive transformation) fully support Allen Algebra. Discrete event simulation has grown out of manufacturing industry’s need for business process optimisation. BPM’s heritage has grown more from the administration and financial services industries and is far less mature. I believe discrete event simulation is closer to what you are implying as a possible eventual target application for the semantic web.

Discrete event simulation is quite a complex topic to understand. You can get a flavour of what it involves from the Wikipedia article I have quoted. You can get an impression of what and why this would be applicable to AI research at http://www.hemseye.org/wp/what and http://www.hemseye.org/wp/why (the how is far more difficult – but see this website and blog for more). How and why it works is described in this lengthy link: https://www.youtube.com/watch?v=zycpLaeunuY. You can go further than this: from a simulation you can apply machine learning or genetic algorithms to optimise, or to experiment on, the simulation using what-if scenarios. This kind of technology could be used to develop tool usage, application or capability strategies (see Professor Murray Shanahan’s article “The Brain’s Connective Core and its Role in Animal Cognition”, Philosophical Transactions of the Royal Society B, vol. 367(1603) (2012), pp. 2704-2714).

I have this knowledge as I used to work for the Lanner Group. They are a simulation tool company that grew from a branch of the IT department of British Leyland (originally the Austin Motor Company). They originally developed software to simulate and optimise the production activities within their car factory. Whilst at Lanner I worked on a project writing the specification to integrate Lanner’s Witness simulation engine into Popkin Software’s System Architect CASE tool. This application automated the translation of process models (in IDEF3 / BPM) within System Architect into animated 2D discrete event simulation models driven by the Lanner software.

For my part I am interested in developing an understanding and communication interface that has the capacity to merge and combine bootstrapped understanding (traditional linguistics / conceptual semantics) with an organic-style / deep learning system modelled more on the skills of a language-learning child.

Concerning how humans learn language, and therefore what skills a computer might need, I am just reading the PhD thesis of Barend Beekhuizen, who recently completed his studies at Leiden University in the Netherlands (reference http://www.lotpublications.nl/Documents/401_fulltext.pdf). I think so far that this is a really excellent, common-sense work in tackling the whole scope of the computerised language understanding problem, which so many have missed in the past.

As you implied elsewhere when you look across various areas of independent research the sum of understanding we actually have is far closer to achieving what is required than most people realise.

2) There appear to be a lot of very bright people still working on OWL. Yet you seem to have a very negative view of this technology for your area of research interest. Are these OWL researchers wrong or do they have other interests in applying OWL that do not currently appeal to you? If these people are interested in OWL for reasons that are not of interest to you what are these interests?

3) I believe that linguistic understanding needs to be grounded in association with a real world understanding and experience of the world and the objects and processes within it. Interesting / useful links:

http://ros.org
http://roboearth.org
http://robohow.eu/
http://lider-project.eu/?q=what-is-lider
http://www.knowrob.org/
http://ccrg.cs.memphis.edu/projects.html
http://lanner.com

As you will see if you read my website (http://hemseye.org) I am not currently an academic AI researcher, I am a commercial software engineer. You will also see that I would like to obtain a PhD in this area and ideally work in academia. I have been offered a PhD position based on a submission I made to Dr Behzad Bordbar and Dr Mark Lee from Birmingham University in the UK. Note that the interests I have expressed above are beyond the scope of this original project. I would be very interested in seeking funding to be able to undertake this work full-time. I ideally want to work in an open source manner (see http://hemseye.org/wp/open-source/). I am considering applying for EU Seventh Framework funding. It would assist my chances of obtaining funding to partner with other researchers in seeking this funding. I am very grateful for the support and interest Behzad and Mark have given me to try to develop and contribute in this research area. Depending on the terms of any finance I manage to obtain and the scope of the work I seek to pursue, they may be interested in continuing to support my research interests. I would further be interested in hearing from any other academics who might be interested in collaborating with me or assisting me in pursuing an academic career.

dkpro-wsd Word Sense Disambiguation Framework

Ah ! I have just discovered https://github.com/dkpro/dkpro-wsd/ which came out in 2013.  I have been wasting my time designing something similar (re-inventing the wheel!). Bah!  I really could do with some academic input 🙁

Linguistics Computing and Natural Language Understanding – Learning The History

I have today received a copy of “Using Computers in Linguistics: A Practical Guide”.  I got this book after seeing it referenced on Wikipedia (https://en.wikipedia.org/wiki/Natural_language_understanding) against the phrase:

The interpretation capabilities of a language understanding system depend on the semantic theory it uses. Competing semantic theories of language have specific trade offs in their suitability as the basis of computer automated semantic interpretation [21]”.

These range from naive semantics or stochastic semantic analysis to the use of pragmatics to derive meaning from context.[22][23][24].

References:
21: Using computers in linguistics: a practical guide by John Lawler, Helen Aristar Dry 1998 ISBN 0-415-16792-2 page 209
22: Naive semantics for natural language understanding by Kathleen Dahlgren 1988 ISBN 0-89838-287-4
23: Stochastically-based semantic analysis by Wolfgang Minker, Alex Waibel, Joseph Mariani 1999 ISBN 0-7923-8571-3
24: Pragmatics and natural language understanding by Georgia M. Green 1996 ISBN 0-8058-2166-X

This looks to be an excellent introductory book on historical approaches to language understanding. I need to learn my history, so as not to repeat the mistakes of the past, if I am going to contribute to developing a computer system that understands natural language.

I am also waiting on a book on dependency grammar which was used in an early but unsuccessful venture into the field of language understanding.  Interest in this particular field is however now growing http://depling.org/depling2015/ (also https://en.wikipedia.org/wiki/Dependency_grammar).

– Ok I am a bit of a geek but this is my train set…