Make AI Happen Brighton – Live Webcast Planned on YouTube. Wednesday, September 12, 2018 (UK time, BST = GMT+1), 6:00 PM to 9:30 PM

Live YouTube Broadcast & Discussion
+ Post-Meeting YouTube Channel
Watch-Again Arrangements:

This, the second meeting of the Make AI Happen Brighton meetup group, will hopefully be live on YouTube from my laptop webcam as I present to my AI interest group. Click here for the YouTube link and to set a reminder.

The signup page for people who will physically attend is here: https://www.meetup.com/MakeAIHappenBrighton/events/251653940/

  1. We will be talking about trying to make AI happen.
  2. Could we bring Brighton, ourselves, and the world some benefit from this new technology?
  3. Who Are You and Why Are You Here?
  4. What Else Do You Want To Talk About?
    Check out the headings I have listed below.
    Do we want to add to, delete, or change this agenda?
    We could try to emulate the management of the Homebrew Computer Club
    – which brought the world the Apple I and Silicon Valley.
  5. Can we do a show and tell? We could all contribute on our own interests or share our own AI design experiences and problems.
  6. What about some fun stuff?
  7. Oh, and there are free drinks (including beer)
    – thanks, Barclays.
    We do have to order our own food.

Let us talk for five minutes or less on each of the following topics…
Also, anyone can ask for five minutes of the group's time. Any ideas or volunteers?

There are too many topics below to cover properly. We could time me speaking on each for five minutes or less, which would fill half the meeting. We could then discuss how to prioritise the rest of this meeting and the next meeting. In fact, we could take a consensus or vote on whether to delay, drop, or add to this list of stuff to address, and expand the time we choose to allocate to any topic (another 5 or 10 minutes, an hour, or more…).

This is how the pre-Silicon Valley Homebrew Computer Club operated – an approach which helped produce the Apple I.

Many of the topics on this list are already mentioned somewhere on this website (sometimes only in draft form),
so our efforts could help extend this website and open source project.


  1. What is …
    the safest,
    most altruistic,
    most idealistic way to research, control, and maintain an AI?
    See:
    https://hemseye.org/wp/ai-ethics-safety/
    https://hemseye.org/wp/mission/
  2. We will be discussing what DARPA (the USA military research funding arm) is planning.
    https://www.youtube.com/watch?v=-O01G3tSYpU
    https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
    https://www.darpa.mil/program/explainable-artificial-intelligence
    What DARPA has published currently appears less ambitious and less well worked out than our HEMSEYE project plans. I have only skim-read their proposal so far. It should be remembered, however, that what DARPA has released may be more of a PR exercise than a detailed explanation of their real plans.
  3. How is self-awareness essential to understanding and consciousness?
    Is self-awareness dangerous?
    What is intent?
    What about all the other human-like abilities and intelligences – Peter Voss's post?
  4. What is the HEMSEYE project? (The very quick version)
    The hive mind's eye, or HEMSEYE, comes from the words HivE Mind'S EYE:
    The HEMSEYE Cloud
    Known experience reporting
    Unusual experience reporting
    Cause and effect knowledge
    What are HEMSEYE intelligent clients?
    Discrete event simulation (vs calculus & machine learning)
    Language understanding
    – a tiny linguistic acquisition bootstrap
    – simulation model designers.
    Time understanding.
    Explainable cause and effect and prediction understanding.

    • Playing a reverse of the TV show Catchphrase.
    • Playing the TV show Catchphrase.
      The DVS event camera for lightweight SLAM
      New semantic SLAM targets for a background SLAM service.
      Has computer vision research thrown away 3D data for too long?
      Is using algebraic topology an opportunity for creating high-speed 3D HEMSEYE lookup hash functions?
    • Using a universal language grammar bootstrap
    • Learning and language for all words
  5. Why might the HEMSEYE project be better than what DARPA are planning?
  6. Could a HEMSEYE replace the world wide web?
  7. The IBM AI X-Prize – But We Would Need Funding
    I doubt it will happen.
  8. Funding Update From The Cabinet Office
  9. Coast 2 Coast
  10. Local Chamber Of Commerce
  11. Academics and Industry Not Approached
  12. Using Social Media & The Internet
    You might also want to join / follow the social media groups.
    Our Facebook page / group:
    https://www.facebook.com/makeaihappen
    https://www.facebook.com/groups/makeaihappen/
    Our Twitter account:
    https://twitter.com/HEMSEYE

A video and the presentation slides of the first meeting can be found at: https://hemseye.org/wp/2018/08/07/make-ai-happen-brighton-meetup-launch-meeting/

NOTE: this is quite long and detailed and a bit academic. If you want something a little easier-going and quicker, try the shorter presentation I mentioned earlier: https://hemseye.org/wp/2018/08/26/hemseye-project-phase-1-shorter-presentation/
You might want to catch up on these links before the meeting if you are feeling highly motivated.

The Agenda / Things To Discuss For Meeting 2:

What are LAWS (Lethal Autonomous Weapon Systems)?
What could this be about? Perhaps a device that goes to GPS coordinates and kills everyone, as instructed by its programming, without further human oversight.
There is a campaign to have these banned.

DARPA (the arm controlling USA military AI research funding) is planning to spend billions on explainable AI.
Does that mean they want to make lethal autonomous weapon systems?
It is not clear.

Apparently Google has backed off working for the CIA on picture recognition after a number of employees protested and left the company. Picture recognition sounds innocent, but when it is being done as part of a drone weapon system, some people could become cautious about participating in the work.
Again allegedly (as reported on the internet), it has been suggested that DARPA / the CIA have now called a conference for other AI companies, hoping to fill the research role Google chose not to pursue (after pressure from its employees).

If this is true, well done to the Google employees for resigning, and well done to Google for backing down.
Google was formed with a hippie-friendly attitude to do no evil.
But when Google was restructured, the mission statement of its new holding company was set to "Do the right thing" rather than "Do no evil". How happy would the employees of Google be if they could control or vote on this? When it came to doing the right thing by helping the USA develop better drones, some employees felt this was not the right thing.

Google, like DARPA, has a hierarchical management structure. Is this a safe way to manage AI?

I am in no way suggesting that joining this group will make you any money, as I am committed to trying to create a charity-funded open source project. But if we work together, perhaps we can help make AI happen and bring some benefits and jobs to Brighton and Sussex.
Are there enough generous (or potentially naive) and rich enough people willing to give money to a potentially pie-in-the-sky idea (that might be world-saving and really, really good)? If you are interested, sign up to watch.

HEMSEYE Project Phase 1 Shorter Presentation

Below is a shorter summary presentation of the first phase of the HEMSEYE Project.

This is an incomplete draft presentation.

Please send me feedback so that I can improve it:

Make AI Happen Brighton Meetup – Launch Meeting

Make AI Happen Brighton – AI Interest Group Kick Off Meeting Wednesday 08/08/2018

Development Plans For Implementing AI/AGI

Barclays will provide food for this first meeting. At every meeting they will provide liquid refreshments, including free beer. The meetup will open at 18:00, with the first talk starting at 18:15–18:30 (I have to travel from Southwater, near Horsham).

FIRST BRIEFLY
– What Should We Do As A Group?
– Who Are You?
– Why Are You Here?

Development Plans For Implementing AI/AGI

Creating a means for a robot to become self-aware in order to acquire any human language using artificial consciousness (Is It Safe?).

Just The Presentation Slides – No Audio
A 5-Video YouTube Playlist – Click this link for details.
Video One Is Below (intro missing)

Summary
Using a model of Wierzbicka & Goddard's 65 semantic primes and their semantic metalanguage / minimal language as a bootstrap for Chomsky's Language Acquisition Device (LAD) and Universal Grammar, together with Tomasello's (2005, 2009) social, tool, self, and other awareness, in an extensible discrete event simulation framework and virtual world simulation.
Modelling cause and effect and making predictions using Discrete Event Simulation (DES) – a minimal sketch follows below.
Using semantic prime context / model identification (like a Skinner box) in a visual background service for a language acquisition and understanding bootstrap in a virtual real-world, real-time simulation.
Cloud-shared learning from a real-time virtual world: the "HivE Mind'S EYE" (the HEMSEYE, or WWW mk2?).
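
To give a flavour of the DES idea mentioned above, here is a minimal sketch of an event-driven simulation loop. All of the names and the kettle example are illustrative only – this is not HEMSEYE code:

import heapq

class Simulation:
    def __init__(self):
        self.now = 0.0
        self._queue = []   # (time, sequence, action) tuples
        self._seq = 0      # tie-breaker so heapq never compares actions

    def schedule(self, delay, action):
        # Schedule a zero-argument callable to fire `delay` time units from now.
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        # Advance simulated time, firing events in order, until `until`.
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action()

# A toy cause-and-effect chain: switching a kettle on (cause) schedules
# a predicted boiling event (effect) two simulated minutes later.
sim = Simulation()

def kettle_boils():
    print(f"t={sim.now}: kettle boils (predicted effect)")

def switch_on_kettle():
    print(f"t={sim.now}: kettle switched on (cause)")
    sim.schedule(120, kettle_boils)

sim.schedule(0, switch_on_kettle)
sim.run(until=300)

The key point is that effects are scheduled ahead of time as a consequence of causes, which is what makes the predictions of such a model explainable.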

What Next…
See the next posts, including one about our next meeting, which should be live on YouTube.


Some Fun / Chat / Networking – What Should We Do Next?

If We Had Time:
AI Safety / Ownership / Control / Governance / Acceptance / Society & Social Issues.
Raspberry Pi projects?

ROS – Robot Operating System
A YouTube channel.
The Construct's "ROS Ambassador Scheme", via the Robot Ignite Academy, is allowing me access to free online training.
They also do paid-for training.
We could have regular free online training available here.

DARPA – Some BRIEF COMMENTS On Their Plans For Making Explainable AI
The next generation of US military led AI research. See
https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
https://www.darpa.mil/program/explainable-artificial-intelligence
or https://www.youtube.com/watch?v=-O01G3tSYpU

IBM AI X-Prize
Competition application news.

A Review Of Peter Voss’s Medium Article On Advanced General Intelligence
Could we aim at just a subset of what he suggests with a Human-Like Intelligence (HLI) implementation? I think we can, by using and extending the operation of a discrete event simulation engine (see https://hemseye.org/wp/2018/08/07/make-ai-happen-brighton-meetup-launch-meeting/).

Review The RSA Report On AI
See https://www.thersa.org/globalassets/pdfs/reports/rsa_the-age-of-automation-report.pdf

The HEMSEYE Project Youtube Channel
I have a HEMSEYE project YouTube channel, which I will be able to brand with the HEMSEYE name once I get 100 subscribers.


What next ???

More Q&A
Free ROS training sessions?
Next meetings ?
Hackathon?
Sponsorship?
Academic involvement?
iCub humanoid robot open source project collaboration – I spoke to them last year about this…?
Suggestions?
Panel discussions?

Project History: LIDA Artificial Consciousness and my 2013 PhD Project Proposal Accepted at Birmingham University

In creating and founding a research project, it is important to try to impress people who might be motivated to help or take an interest. So here is some history to demonstrate this project's longevity and its development from an interest in artificial consciousness to modelling language understanding.

I think in about 2011 I read an article in New Scientist about "Global Workspace Theory" as a theory to explain consciousness. The article went on to describe the work being done at the University of Memphis on artificial cognition with the IDA and LIDA projects.

In 2013 I managed to get a PhD project proposal accepted at Birmingham University. My original project proposal is available for download here: Download 2013 PhD Project Original Proposal Sent to Birmingham University.

I originally applied to Dr Behzad Bordbar as my supervisor after I had come across some of his work on the translation of free text into UML and OCL. The proposal I have made available for download is my original proposal to Dr Bordbar. I later re-wrote this initial proposal into a PhD project application, which included some input from Dr Bordbar and from Dr Mark Lee, who agreed to be a joint supervisor on the project. In looking to do this PhD, I was hoping to find an industrial partner who would fund me sufficiently to maintain a commercial developer's salary. This effort was not successful. In frustration, I looked at raising funds via Kickstarter to create my own research company. Dr Bordbar was reluctant for me to proceed on this basis, I think mainly because having to raise funds and do a PhD at the same time would be too much. We therefore paused our efforts at pursuing this project.

Since making this PhD project application in 2013, I have:

  • Continued to study Artificial Intelligence in my spare-time.
  • Created this website and blog to document the development of my research ideas.
  • Learned a great deal more about many topics peripheral to what would be considered mainstream and accepted in the field of artificial intelligence research. I have attempted to do this as I am unlikely to be able to compete with full-time academics by following the mainstream.
  • Created two meetup groups to try to encourage local hobby interest in Artificial Intelligence
  • My original group was Haywards Heath based.

Bootstrapping Computerised Language Understanding With Semantic Primes

I believe that a child's acquisition of language is driven by the intent or need to communicate ideas; the ideas therefore come before the language. By first developing an AI's self-awareness of concepts, you get a mechanism for bootstrapping language acquisition and can eventually start to simulate an artificial consciousness.

To behave like a human I believe an AI will need to learn like a child. If this is the case then it would need to learn about the world and then learn language by speaking and listening. Learning to read is then a secondary activity.

There is a set of universal, indivisible language concepts (known as semantic primes – see later) that can be thought of as an understanding of the world that cannot be defined with language. These pre/non-verbal concepts need to be learnt from the environment, from the experience of having a consciousness, and from social interaction. If you could define a model for each semantic prime using physical observation or some mechanistic knowledge of the world, it should be possible to define these concepts to yourself, and to an AI, before adding any language to describe them.

Semantic primes are grouped by category. The first category is the "Substantives" (I, YOU, SOMEONE/PERSON, PEOPLE). To give a robot an understanding of these particular words, you would need to identify the underlying process and data that give rise to the word definitions. In this case the process is "recognition": these "substantive" words are all the outcome of a recognition process. So, to understand the substantives conceptually, a robot will need to implement a recognition process that correctly identifies a substantive. Another group of semantic primes are the relational substantives: SOMETHING/THING, BODY, KIND, PART. These are less specialised versions of the first "substantives" group that relate to objects rather than people. I believe all the semantic primes could be understood by identifying their underlying processing and data storage, and by building them into a learning hierarchy, so that the "relational substantive" primes would be learnt before the "substantive" primes. A minimal sketch of this idea follows.
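
As an illustration only, here is a rough sketch of how the substantive primes might be tied to an underlying recognition process. The class, field, and function names are hypothetical and not taken from any published semantic-prime implementation:

from dataclasses import dataclass
from typing import Callable

@dataclass
class SemanticPrime:
    name: str                             # e.g. "I", "SOMEONE/PERSON"
    category: str                         # e.g. "Substantives"
    recognise: Callable[[object], bool]   # the grounding process

def is_self(observation) -> bool:
    # Placeholder: a robot would compare the observation against its own
    # body model / proprioceptive state.
    return getattr(observation, "is_own_body", False)

def is_person(observation) -> bool:
    # Placeholder: a person-recognition classifier would sit here.
    return getattr(observation, "looks_like_person", False)

SUBSTANTIVES = [
    SemanticPrime("I", "Substantives", is_self),
    SemanticPrime("SOMEONE/PERSON", "Substantives", is_person),
]

@dataclass
class Observation:
    is_own_body: bool = False
    looks_like_person: bool = False

# A prime "fires" when its recognition process succeeds on an observation.
seen = Observation(looks_like_person=True)
print([p.name for p in SUBSTANTIVES if p.recognise(seen)])
# -> ['SOMEONE/PERSON']

The learning hierarchy described above would mean the more general object recognisers (for the relational substantives) are trained first, with the person-specific recognisers specialised from them.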

(The following is from Wikipedia)

Semantic primes or semantic primitives are semantic concepts that are innately understood, but cannot be expressed in simpler terms. They represent words or phrases that are learned through practice, but cannot be defined concretely. For example, although the meaning of “touching” is readily understood, a dictionary might define “touch” as “to make contact” and “contact” as “touching”, providing no information if neither of these words are understood.

The concept of innate semantic primes was largely introduced by Anna Wierzbicka's book, Semantics: Primes and Universals.

Semantic primes represent universally meaningful concepts, but to have meaningful messages, or statements, such concepts must combine in a way that they themselves convey meaning. Such meaningful combinations, in their simplest form as sentences, constitute the syntax of the language.

Wierzbicka provides evidence that just as all languages use the same set of semantic primes, they also use the same, or very similar, syntax. She states: "I am also positing certain innate and universal rules of syntax – not in the sense of some intuitively unverifiable formal syntax a la Chomsky, but in the sense of intuitively verifiable patterns determining possible combinations of primitive concepts" (Wierzbicka, 1996). She gives one example comparing the English sentence "I want to do this" with its equivalent in Russian. Although she notes certain formal differences between the two sentence structures, their semantic equivalence emerges from the "…equivalence of the primitives themselves and of the rules for their combination."
This work [of Wierzbicka and colleagues] has led to a set of highly concrete proposals about a hypothesised irreducible core of all human languages. This universal core is believed to have a fully 'language-like' character in the sense that it consists of a lexicon of semantic primitives together with a syntax governing how the primitives can be combined (Goddard, 1998).

The semantic primes by category (categories shown as a link to their definition) are:

Substantives

I, YOU, SOMEONE/PERSON, PEOPLE

Relational Substantives

SOMETHING/THING, BODY, KIND, PART

Determiners

THIS, THE SAME, OTHER

Quantifiers

ONE, TWO, SOME, ALL, MANY/MUCH

Evaluators

GOOD, BAD

Descriptors

BIG, SMALL

Mental predicates

THINK, KNOW, WANT, FEEL, SEE, HEAR

Speech

SAY, WORDS, TRUE

Actions, Events, Movement, Contact

DO, HAPPEN, MOVE

Existence, Possession

THERE IS/EXIST, HAVE

Life and Death

LIVE, DIE

Time

WHEN/TIME, NOW, BEFORE, AFTER, A LONG TIME, A SHORT TIME, FOR SOME TIME, MOMENT

Space

WHERE/PLACE, HERE, ABOVE, BELOW, FAR, NEAR, SIDE, INSIDE, TOUCH (CONTACT)

Logical Concepts

NOT, MAYBE, CAN, BECAUSE, IF

Intensifier, Augmenter

VERY, MORE

Similarity

LIKE/WAY

We have been looking at process modelling and producing a world view on which to hang an understanding of our semantic prime definitions. We will be looking further at this work and at how we can build a software object model / AI that describes the world using conceptual awareness.

Could you help with co-operation, input, or funding? See elsewhere on this website for further details.

Experiences Of Installing Ubuntu, ROS and Gazebo on a Raspberry Pi 2 (Updated July 2016)

I am writing these notes to summarise my experiences of installing Ubuntu on a Raspberry Pi and getting ROS (the Robot Operating System) and Gazebo (the robot simulator) running, mainly for the benefit of my AI and robotics meetup group.

When I last tried, I had not managed to get ROS running on a Pi. There is, however, a new ROS version out, so I will need to try again. Rather than requiring you to update the OS on your Pi, the new version supports Raspbian directly (see http://wiki.ros.org/ROSberryPi/Setting%20up%20ROS%20on%20RaspberryPi).

I have managed to install ROS on my laptop using the latest Ubuntu 16.04 LTS (LTS = Long Term Support), and this time got both ROS and Gazebo to install.

The old instructions involved first installing Ubuntu 14.04.03, but Ubuntu updates to HTTPS security had caused this to break. DO NOT USE the Raspberry Pi 2 Ubuntu install at http://wiki.ros.org/indigo/Installation/UbuntuARM, and DO NOT USE http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Indigo%20on%20Raspberry%20Pi (although of course these links might get fixed at some point). Historically I also hit this same HTTPS problem when trying to install ROS on the Pi with Raspbian, and on a laptop running Ubuntu 14.04.03. The errors I saw are described on the web and had not been fixed for over a year.

Issue details: rosdep init gave the error "Website may be down." when I ran the command sudo rosdep init. I got a bit further by taking the manual copy / setup step at https://github.com/ros/rosdistro/issues/9721, but then hit a later problem described at http://answers.ros.org/question/234425/how-to-fix-error-when-running-rosdep-update.

You can avoid all these issues, on a laptop at least, by using the latest Ubuntu LTS and new ROS versions described above. BUT I found that my working version of Eclipse / Papyrus on Ubuntu 14.04.03 had problems with 16.04 LTS, so I currently have both Ubuntu versions installed as a dual boot: 14.04.03 for Papyrus / UML development and 16.04 for running ROS / Gazebo. Not very satisfactory, but no doubt the issues will get fixed.

You could run NOOBS (the new out-of-the-box OS installer) or buy / make an SD card for Raspbian, BUT you would get a lot of extras that are designed to teach kids programming. With the latest ROS updates I will be sticking with Raspbian for the time being. Maybe someone can suggest tips for stripping the OS down to just the techie, non-kiddie stuff.

———————

Installing Ubuntu, ROS And Gazebo (The Robot Operating System and Simulator) On A Raspberry Pi 2

(Learn some basic Linux first? I did a course on Unix years ago when I used it professionally, so did not bother. Tips, anyone? You could try http://www.ee.surrey.ac.uk/Teaching/Unix/)

Quick Tips
– I found I needed to reboot my home hub to get wireless + ethernet networking going.
– I have a Bluetooth Logitech keyboard with an integrated mouse pad that works with my Pi, for about £30.
I tried a smaller, cheaper, games-console-like one, but when it arrived it was broken.
You could spend about £10 on such a thing from Amazon, but the keys are tiny.
NOTE! Apparently not all USB keyboards work, so check first or buy one that is made for the Pi (I got mine at Maplin).
– I got a Pi wifi USB dongle off Amazon for next to nothing. It just plugs in and works.
– I got a 3.2-inch touch screen for the Pi for £16! It works with Raspbian (the NOOBS-installed version) but so far not with the recommended Ubuntu install.
I have followed some instructions but it has not worked. I will be putting in a support request.
– Creating an SD card with a new Pi operating system on it: https://www.raspberrypi.org/documentation/installation/installing-images/windows.md
– I missed a step during the install about resizing the second partition after reboot, and soon got an out-of-disk-space error. Be careful.
– I currently use an HDMI cable from my television to my Pi to find out the Pi's IP address, but if my Pi auto-emailed me its IP address when it booted up, I could simply connect from my laptop using PuTTY (terminal emulator software you can download) or VNC (to get a connection to a GUI running on the Pi). I initially chose to install Kubuntu, but later backed off as my laptop is old and Kubuntu was a bit slow.

For a text editor, I always used to use vi when I used Unix before – it is always guaranteed to be available. There is now something called nano (but do not forget to run it as sudo nano: unless you are in your default $HOME folder you will not be able to save; cd $HOME will return you to home if you have strayed into /etc/network etc.).

Laptop Access https://www.raspberrypi.org/documentation/remote-access/vnc/

Networking ???
– ??? Go anywhere laptop integration with no screen?

  1. Auto-email the Pi its IP address (see the sketch after this list)? I looked at using WhatsApp, but the ports this app uses were blocked in a local pub.
  2. Get support for two wireless network adapters on your laptop, have your Pi access a fixed IP address on the laptop over wireless, and then reach the web via the laptop's second shared wireless internet connection.
  3. URL access using dynamic DNS for remote internet access. But you need to configure your wireless router to allow the dynamic IP address configuration through, which is not going to work in the pub???
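
As a rough sketch of option 1, something like the following could email the Pi its IP address at boot (run it from /etc/rc.local or a systemd unit). The SMTP host, addresses, and credentials are placeholders you would replace for your own mail provider:

#!/usr/bin/env python3
# Sketch: email the Pi its own IP address at boot.
import smtplib
import socket
from email.message import EmailMessage

def current_ip() -> str:
    # "Connect" a UDP socket towards a public address; no packets are
    # actually sent, but the OS picks the outgoing interface, which
    # reveals the Pi's local IP address.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]

msg = EmailMessage()
msg["Subject"] = "Raspberry Pi booted"
msg["From"] = "pi@example.com"     # placeholder address
msg["To"] = "me@example.com"       # placeholder address
msg.set_content(f"The Pi is up at {current_ip()}")

with smtplib.SMTP_SSL("smtp.example.com", 465) as smtp:  # placeholder host
    smtp.login("pi@example.com", "app-password-here")    # placeholder creds
    smtp.send_message(msg)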

Implementing Autostart Commands ???

Further documentation – More news and comment Soon

http://wiki.ros.org/ROS/Tutorials

Gazebo The Robot Simulator

I am not yet sure it is even a good idea to put Gazebo on a Pi, so I have tried and succeeded in getting it running on my laptop instead. You can run Gazebo without ROS on Ubuntu 14.04.03:
http://wiki.ros.org/gazebo_ros_pkgs
http://gazebosim.org/tutorials
http://gazebosim.org/tutorials/?tut=ros_wrapper_versions#GazeboversionsandROSintegration

Also, if you want to run the binaries on the Pi, only up to version 2 is possible, although version 7 is available if you build from source. I failed to build from source on my laptop when I last tried. I think I will stick to running Gazebo on my Ubuntu laptop instead of messing with it on my Pi. I am seeing the Pi as a delivery target rather than a development environment.

Actually, what I am really after is being able to understand the object models available from the world / object mapping implementations within ROS:

http://roboearth.org
http://robohow.eu/
http://lider-project.eu/?q=what-is-lider
http://www.knowrob.org/

We have had quite a few meetups on this knowledge modelling / understanding topic, which I will have to write up. More soon…

Making The Semantic Web Work

From a LinkedIn Artificial Intelligence group I am in.
I commented on the post:

Paul Houle
Applying Schemas for Natural Language Processing, Distributed Systems, Classification and Text Mining and Data Lakes

Making The Semantic Web Work

http://www.slideshare.net/paulahoule/making-the-semantic-web-work

James Kitching

Hi Paul

I have been working my way through your slide presentation, which I am very much enjoying. I have so far got to slide 38 and have a number of comments (although perhaps I should have waited till I got to the end!):

1) You mentioned the maturity of BPM in understanding and modelling processes, but you have (so far) not mentioned the far greater maturity of the discrete event simulation application area (see https://en.wikipedia.org/wiki/Discrete_event_simulation). This is an area similar to the Allen interval algebra you did mention (https://en.wikipedia.org/wiki/Allen’s_interval_algebra), where you comment "A complete theory is not fully developed but there are some pretty good tools available". I think it is likely that discrete event simulation tools will indirectly (after a descriptive transformation) fully support Allen's algebra. Discrete event simulation has grown out of the manufacturing industry's need for business process optimisation; BPM's heritage has grown more from the administration and financial services industries, and is far less mature. I believe discrete event simulation is closer to what you are implying as a possible eventual target application for the semantic web.

Discrete event simulation is quite a complex topic to understand. You can get a flavour of what it involves from the Wikipedia article I have quoted, and an impression of what and why this would be applicable to AI research at http://www.hemseye.org/wp/what and http://www.hemseye.org/wp/why (the how is far more difficult – but see this website and blog for more). How and why it works is described in this lengthy link: https://www.youtube.com/watch?v=zycpLaeunuY. You can go further, in that you can apply machine learning or genetic algorithms to optimise or experiment on a simulation using what-if scenarios. This kind of technology could be used to develop tool usage application or capability strategies (see Professor Murray Shanahan's article "The Brain's Connective Core and its Role in Animal Cognition", Philosophical Transactions of the Royal Society B, vol. 367(1603) (2012), pp. 2704-2714).

I have this knowledge as I used to work for the Lanner Group, a simulation tool company that grew from a branch of the IT department of British Leyland (originally the Austin Motor Company), where software was originally developed to simulate and optimise the production activities within their car factory. Whilst at Lanner, I worked on a project writing the specification to integrate Lanner's Witness simulation engine into Popkin Software's System Architect CASE tool. This application automated the translation of process models (in IDEF3 / BPM) within System Architect into animated 2D discrete event simulation models driven by the Lanner software.

For my part, I am interested in developing an understanding and communication interface that has the capacity to merge and combine bootstrapped understanding (traditional linguistics / conceptual semantics) with an organic-style / deep learning system, modelled more on the skills of a language-learning child.

Concerning how humans learn language, and therefore what skills a computer might need, I am just reading the PhD thesis of Barend Beekhuizen, who recently completed his studies at Leiden University in the Netherlands (http://www.lotpublications.nl/Documents/401_fulltext.pdf). I think so far that this is a really excellent, common-sense work tackling the whole scope of the computerised language understanding problem, which so many have missed in the past.

As you implied elsewhere, when you look across various areas of independent research, the sum of understanding we actually have is far closer to what is required than most people realise.

2) There appear to be a lot of very bright people still working on OWL, yet you seem to have a very negative view of this technology for your area of research interest. Are these OWL researchers wrong, or do they have other interests in applying OWL that do not currently appeal to you? If so, what are those interests?

3) I believe that linguistic understanding needs to be grounded in association with a real-world understanding and experience of the world and the objects and processes within it. Interesting / useful links:

http://ros.org
http://roboearth.org
http://robohow.eu/
http://lider-project.eu/?q=what-is-lider
http://www.knowrob.org/
http://ccrg.cs.memphis.edu/projects.html
http://lanner.com

As you will see if you read my website (http://hemseye.org), I am not currently an academic AI researcher; I am a commercial software engineer. You will also see that I would like to obtain a PhD in this area and ideally work in academia. I have been offered a PhD position based on a submission I made to Dr Behzad Bordbar and Dr Mark Lee of Birmingham University in the UK. Note that the interests I have expressed above are beyond the scope of that original project. I would be very interested in seeking funding to be able to undertake this work full-time, and I ideally want to work in an open source manner (see http://hemseye.org/wp/open-source/). I am considering applying for EU Seventh Framework funding, and it would assist my chances of obtaining funding to partner with other researchers. I am very grateful for the support and interest Behzad and Mark have given me in trying to develop and contribute in this research area. Depending on the terms of any finance I manage to obtain and the scope of the work I pursue, they may be interested in continuing to support my research interests. I would also be interested in hearing from any other academics who might be interested in collaborating with me or assisting me in pursuing an academic career.

dkpro-wsd Word Sense Disambiguation Framework

Ah! I have just discovered https://github.com/dkpro/dkpro-wsd/, which came out in 2013. I have been wasting my time designing something similar (re-inventing the wheel!). Bah! I really could do with some academic input 🙁

Linguistics Computing and Natural Language Understanding – Learning The History

I have today received a copy of "Using Computers In Linguistics: A Practical Guide". I got this book after seeing it referenced on Wikipedia (https://en.wikipedia.org/wiki/Natural_language_understanding) against the phrase:

"The interpretation capabilities of a language understanding system depend on the semantic theory it uses. Competing semantic theories of language have specific trade-offs in their suitability as the basis of computer-automated semantic interpretation [21]. These range from naive semantics or stochastic semantic analysis to the use of pragmatics to derive meaning from context [22][23][24]."

References:
21: Using Computers in Linguistics: A Practical Guide by John Lawler and Helen Aristar Dry, 1998, ISBN 0-415-16792-2, page 209
22: Naive Semantics for Natural Language Understanding by Kathleen Dahlgren, 1988, ISBN 0-89838-287-4
23: Stochastically-Based Semantic Analysis by Wolfgang Minker, Alex Waibel and Joseph Mariani, 1999, ISBN 0-7923-8571-3
24: Pragmatics and Natural Language Understanding by Georgia M. Green, 1996, ISBN 0-8058-2166-X

This looks to be an excellent introductory book on historical approaches to language understanding. I need to learn my history so as not to repeat the mistakes of the past if I am going to contribute to developing a computer system that understands natural language.

I am also waiting on a book on dependency grammar, which was used in an early but unsuccessful venture into the field of language understanding. Interest in this particular field is, however, now growing: http://depling.org/depling2015/ (also https://en.wikipedia.org/wiki/Dependency_grammar).

– OK, I am a bit of a geek, but this is my train set…

Understanding Syntax and Conceptual Text Modelling – A Journey

I am not certain that it will be possible to reliably create BPMN automatically from text, but it will be fun trying. This task will require prior object knowledge; knowledge of object properties, actions, and responses; understanding of the meaning of words (a dictionary, or as it is known in this trade, a lexicon); access to ontologies (for how things are related and for logically deriving further knowledge); a model of how words relate to one another (linguistic theories); a system (or systems) for word sense disambiguation; a mechanism for classifying words and sentence types into parts of speech; and a mechanism for classifying, or better still viewing, objects in a real-world context in relation to other objects and the environment. Thinking about the basics of word understanding, you need a visual-spatial context and a sense of number to appreciate "this", "that" and "those" before understanding the spatially abstract "the". As far as real-world object representation is concerned, I think you could integrate and dynamically build or load a view of an object in a virtual 3D space using Web3D (see http://www.web3d.org/standards).

I have started building a UML model for the software components necessary to achieve this task. After studying some of the work of Senseval, I can see that there is no one-size-fits-all solution to word sense disambiguation. I think it therefore makes sense to implement multiple solutions and associate the most applicable with particular words (this could be done automatically against a marked-up corpus). I feel a natural implementation will be to use a service locator to find the most relevant word sense disambiguation provider, implemented via a provider interface. A sketch of this design follows.
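
To illustrate the service locator / provider interface design I have in mind, here is a rough sketch. The provider names and their stub behaviour are invented purely for illustration:

from abc import ABC, abstractmethod

class WsdProvider(ABC):
    @abstractmethod
    def disambiguate(self, word: str, context: list) -> str:
        """Return a sense label for `word` given its sentence context."""

class MostFrequentSenseProvider(WsdProvider):
    # Baseline: always pick the most frequent sense (stub).
    def disambiguate(self, word, context):
        return f"{word}#most-frequent-sense"

class ContextOverlapProvider(WsdProvider):
    # Lesk-style: pick the sense whose gloss best overlaps the context (stub).
    def disambiguate(self, word, context):
        return f"{word}#sense-by-context-overlap"

class WsdServiceLocator:
    # Maps each word to the provider that scored best for it, falling
    # back to a default provider for unregistered words.
    def __init__(self, default):
        self._default = default
        self._per_word = {}

    def register(self, word, provider):
        self._per_word[word] = provider

    def provider_for(self, word):
        return self._per_word.get(word, self._default)

locator = WsdServiceLocator(default=MostFrequentSenseProvider())
locator.register("bank", ContextOverlapProvider())

sentence = ["I", "sat", "on", "the", "river", "bank"]
for w in ("bank", "river"):
    print(w, "->", locator.provider_for(w).disambiguate(w, sentence))

The per-word registration table could be populated automatically by scoring each provider against a marked-up corpus, as suggested above.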

As previously described, the dynamic creation of BPMN in part involves the classification of textual information into the following categories (a sketch of a matching data model follows the list):

  1. Activity
    Activities will be associated with verbs and represent processes. Processes can be associated with additional information such as set-up time, minimum or maximum batch size, a processing rate, and pre- and post-process queues of a defined capacity.
  2. Entity
    Entities are the things or information that get transformed by processes and travel through a process model. When an entity is transformed by a process it could be renamed, e.g. fleece to yarn in wool processing.
  3. Resource
    Resources are additional things that are needed to support the processing of entities.
  4. Event
    Events are things that happen and are created by a trigger. They may pass information and cause an action. Events can elicit a response and be either synchronous or asynchronous.
  5. Actor
    Actors are the sources of system inputs and destinations for outputs, or the source or destination of external events. Actors can be the source or destination of "entities".
  6. Goal
    Goals are difficult to define, but are likely to be identified by the fact that they involve systems that create added value.
  7. System
    A system is a group of things that has a definable boundary and probably has a goal.
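
As an illustration, here is a sketch of how these seven categories might be represented as a target data model before mapping to BPMN or schema.org. The field choices follow the descriptions above, but are otherwise my own guesses:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Activity:                    # verb-like: a process
    name: str
    setup_time: Optional[float] = None
    batch_min: Optional[int] = None
    batch_max: Optional[int] = None
    rate: Optional[float] = None

@dataclass
class Entity:                      # thing/information transformed by processes
    name: str                      # may be renamed, e.g. fleece -> yarn

@dataclass
class Resource:                    # supports the processing of entities
    name: str

@dataclass
class Event:                       # happens via a trigger, may carry data
    name: str
    synchronous: bool = True

@dataclass
class Actor:                       # source/sink of inputs, outputs, events
    name: str

@dataclass
class Goal:                        # identified by added value being created
    description: str

@dataclass
class System:                      # bounded group of things, probably has a goal
    name: str
    goal: Optional[Goal] = None
    members: list = field(default_factory=list)

# e.g. wool processing: the "spin" activity transforms fleece into yarn
spin = Activity("spin", rate=12.0)
fleece, yarn = Entity("fleece"), Entity("yarn")
mill = System("wool mill", Goal("produce yarn"), [spin, fleece, yarn])
print(mill)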

Looking at these categories, they are a subset of the data you can find described in schema.org. I have been thinking that the schema.org XML schema might be a better initial target mapping than BPMN.

An obvious implementation for this problem would be a deep learning classification engine. Before this can be considered, I need a better understanding of word and sentence meaning (semantics, pragmatics, and conceptual meaning).

There are multiple theories of grammar available. I started with generative grammar and am now reading about dependency grammar. I have again hit the frustration of not being able to read references, as I am not a member of a university library.

I am often getting the basic story of a topic off Wikipedia and then trying to find peer-reviewed journal references.

I finally found some good references about deep learning. Some people have been telling me I should give up my study of linguistics, forget these procedural approaches to solving the problem of language understanding, and focus on understanding deep learning. From what I have read so far in academic papers (i.e. explanation rather than hype), deep learning is about classifying and understanding things through a hierarchical chain. Each neural layer currently tends to need training before it can be used to feed into the next layer. Deep learning is not a means of stirring a pot of neuron soup and letting it settle out into a brain; from what I have read, it represents an advanced pattern-matching tool. I have seen articles about how to build a brain which I have not yet read, so it may be that my understanding of what you can do with deep learning is out of date, although I have also read 2015 articles. I have found http://deeplearning.net, which does appear to be an excellent source for finding out the state of the art.