In today's last video (November 16, 2013) we explore the use of pronouns in conversation in more detail, introducing the tracking mechanism described by Role and Reference Grammar (RRG).
We will be back soon with new videos showcasing our exploration of conversation between us and the computer.
Here we expand on our exploration of reflexive versus non-reflexive pronouns. There is a lot to learn, at least for the Thinking Solutions Project Turing team, in implementing an effective context-tracking engine. As you can see in this video, the use of reflexives and the teachings of RRG allow us to simply follow the RRG recipe.
We have one final video coming up later today to show how this all comes together in a conversation. See you there!
Reflexive pronouns play an integral part in conversation. Their presence helps identify whether we are talking about someone already introduced or someone within the current clause. Although this is a reasonably technical area, today's videos explore the benefits of having both reflexive and non-reflexive pronouns in English.
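To make the idea concrete, here is a minimal sketch of the reflexive/non-reflexive split: a reflexive ("himself") must find its referent inside the current clause, while a plain pronoun ("him") must look outside it. The function name and the toy clause representation are our own illustration, not Project Turing's internals.

```python
def resolve_pronoun(pronoun, clause_actor, discourse_referents):
    """Resolve a third-person pronoun using the reflexive split:
    a reflexive co-refers within its own clause; a plain pronoun
    refers to someone outside it."""
    if pronoun.endswith("self"):
        # Reflexive: co-referent with the actor of the current clause.
        return clause_actor
    # Non-reflexive: most recently mentioned referent other than the actor.
    for referent in reversed(discourse_referents):
        if referent != clause_actor:
            return referent
    return None

# "John met Bill. John shaved himself." -> himself = John
print(resolve_pronoun("himself", "John", ["John", "Bill"]))
# "John met Bill. John shaved him."     -> him = Bill
print(resolve_pronoun("him", "John", ["John", "Bill"]))
```

The contrast is the useful part: the same sentence frame yields different referents purely from the pronoun's form.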
We leverage the teachings of Role and Reference Grammar (RRG) on this subject. RRG is a linguistic model formulated using evidence from a wide range of human languages. In conjunction with Patom theory, it lets us begin to deploy a brain-based conversational machine.
The next video will expand on the information here.
What is conversation? At one level, it comprises two or more participants learning new information about each other. That information is needed to validate and respond to statements and questions. And in a conversation, ambiguous information can be resolved with a clarifying question. Let's explore this in the video.
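A rough sketch of that last point: when a reference is ambiguous, a conversational system can ask a clarifying question rather than guess. The wording and structure below are purely illustrative assumptions of ours.

```python
def respond(candidate_referents):
    """Accept a reference if it has exactly one candidate;
    otherwise answer with a clarifying question instead of guessing."""
    if len(candidate_referents) == 1:
        return candidate_referents[0]
    return "Do you mean " + " or ".join(candidate_referents) + "?"

# "She called" with one woman in context: unambiguous.
print(respond(["Mary"]))
# "She called" with two women in context: ask back.
print(respond(["Mary", "Susan"]))
```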
There is a big month ahead as we develop and explore language further. See you back here soon.
Today's second video shows how knowing when something happens is useful in conversational tracking.
That's it for now, gotta run! So sorry for the rambling today... Too many things to think about this afternoon, I'm afraid.
Today's first video shows how knowing who did what to whom is useful in conversational tracking.
The second video today will look at the tracking of time.
To finish today's explanations, we include some examples of German statements queried in both German and English, and we also show querying of the actor position.
There is a bit more to come in our build-up before we summarise the development work showcasing Project Turing and RRG.
This video shows our progress in handling context. Today, we show how sentences entered with time and location information are available to the question answering element of our prototype.
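One way to picture this: keep the time and location alongside each event so that "when" and "where" questions become lookups. The field names and event structure below are assumptions for illustration only.

```python
events = [
    {"actor": "Anna", "action": "arrive", "time": "Monday", "place": "Berlin"},
    {"actor": "Anna", "action": "leave",  "time": "Friday", "place": "Berlin"},
]

def when(actor, action):
    """Answer 'When did <actor> <action>?' from stored events."""
    return [e["time"] for e in events
            if e["actor"] == actor and e["action"] == action]

def where(actor, action):
    """Answer 'Where did <actor> <action>?' from stored events."""
    return [e["place"] for e in events
            if e["actor"] == actor and e["action"] == action]

print(when("Anna", "arrive"))   # when did Anna arrive?
print(where("Anna", "leave"))   # where did Anna leave from?
```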
For those of you who don't know, we have dedicated the Project Turing prototype to the memory of Alan Turing, the English computing pioneer famous for his test for machine intelligence: a computer that can win the imitation game can be considered intelligent.
There are a few new features we will bring to you and explain in relation to a question-answer system in the coming days. See you then!
Today, before the Melbourne Cup (horse race), we explore Thinking Solutions' development of question handling.
Leveraging our previous development, the question-and-answer system uses its wealth of information and disambiguated text to determine answers to questions asked in a conversation.
Interestingly, word-boundary discrimination, RRG categorisation and word-sense disambiguation alone provide a number of new capabilities for products. An upcoming video will discuss the range of products we see coming from the conversational engine.
Project Turing provides a language-independent, multilingual pattern-matching engine. Here we look at how context tracking works with different languages at once. The German and English examples are only indications of the types of patterns to be retained and disambiguated by the context-tracking engine.
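The key idea can be sketched as follows: sentences in either language map to one shared semantic triple, so a fact stated in German can be queried in English and vice versa. The toy pattern table and function names are our own assumptions, purely for illustration, not the engine itself.

```python
# Toy surface-pattern lexicon: each sentence maps to a language-neutral
# (actor, action, undergoer) triple.
patterns = {
    ("de", "Hans sieht Maria"): ("Hans", "see", "Maria"),
    ("en", "Hans sees Maria"):  ("Hans", "see", "Maria"),
}

store = set()

def hear(lang, sentence):
    """Store the language-neutral meaning of a recognised sentence."""
    store.add(patterns[(lang, sentence)])

def ask_actor(action, undergoer):
    """'Who sees Maria?' / 'Wer sieht Maria?' -> the actor(s)."""
    return [a for (a, act, u) in store if act == action and u == undergoer]

hear("de", "Hans sieht Maria")   # fact stated in German...
print(ask_actor("see", "Maria"))  # ...answerable from either language
```

Because both surface forms land on the same triple, the store itself never needs to know which language a fact arrived in.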
These are very early days in our two-month initiative, so feedback to us now is essential to ensure our R&D effort is maximised by year-end.