Here we are in 2013 prior to starting our 2-year development plan to commercialise the technology that has been demonstrated using Project Turing.
In this interview by Beth Carey, Thinking Solutions' director of business development, I explain the scientific challenges we have solved to produce the Project Turing prototype, and give my view of the next 5-10 years, which I believe will see the computer industry dominated by voice and text-based machines that comprehend what we want and allow for accurate translation and communication.
Although predicting the future requires a few things to come together, there is no reason this isn't a likely scenario in the near future.
Questions in a language enable a vast variety of answers, but consider how many ways there are to get the same result. Why? We believe, not surprisingly, that our brain's pattern-matching capability means we match these linguistic patterns the same way we match other ambiguous sensory patterns to see things and to move around in the world.
This is an interesting area because so many hours of programming time can be lost in trying to identify all these different ways to say the same thing. Thanks for joining us. See you again soon.
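To picture why so many programming hours get lost here, consider a naive rule-based approach (our own illustrative sketch, not the Project Turing engine): every new phrasing of the same request needs its own hand-written pattern.

```python
# Naive sketch (hypothetical, not the Project Turing implementation):
# reduce several surface phrasings of one request to a single canonical
# meaning by matching hand-written patterns.
import re

PATTERNS = [
    r"what time is it",
    r"what is the time",
    r"do you know the time",
    r"could you tell me the time",
    r"have you got the time",
]

def canonicalise(utterance):
    """Return a canonical meaning tag if any pattern matches, else None."""
    text = utterance.lower().strip(" ?!.")
    for pattern in PATTERNS:
        if re.fullmatch(pattern, text):
            return "ASK(current_time)"
    return None
```

Five patterns already, and a sixth phrasing ("got the time on you?") would still slip through, which is exactly the maintenance burden described above.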
Why does it seem like, in language, if you can't think of 100 ways to say the same thing, you probably aren't trying?
At Thinking Solutions, our focus has been on providing the tools needed to see the meaning within a storm of possible meanings and the many ways of saying it. Today, we look at the start of how our machine will respond to questions - using a more natural way of interacting by adding information to a simple 'yes'.
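One simple way to picture an elaborated 'yes' (a hypothetical sketch with made-up names, not the prototype's code) is to reuse the question's auxiliary verb and a pronoun for its subject:

```python
# Hypothetical sketch: answer a polar (yes/no) question with more than a
# bare "yes" by echoing the question's auxiliary and pronominalising the
# subject, e.g. "Does John like Mary?" -> "Yes, he does."

PRONOUNS = {"john": "he", "mary": "she", "the dog": "it"}

def answer_yes(aux, subject):
    """Build an elaborated positive answer from the question's parts."""
    pronoun = PRONOUNS.get(subject.lower(), subject)
    return f"Yes, {pronoun} {aux.lower()}."
```

For example, `answer_yes("does", "John")` yields the natural-sounding "Yes, he does." rather than a flat "yes".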
We hope you enjoyed today's videos and look forward to seeing you again shortly with our latest updates.
Questions and their answers cover a myriad of aspects in context. Here we explore the context of a sentence in which the undergoer is in fact a full clause. By questioning the information in the embedded clause and using pronouns, we see a typical English exchange that looks quite natural.
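As a rough illustration (our own sketch, not the prototype's representation), a clausal undergoer can be stored as a nested structure, and questions about the embedded clause answered by searching inside it:

```python
# Hypothetical sketch: a clause whose undergoer is itself a full clause.
# "Mary knows that John left." -> know(actor=Mary, undergoer=leave(actor=John))

embedded = {"predicate": "leave", "actor": "John"}
matrix = {"predicate": "know", "actor": "Mary", "undergoer": embedded}

def who_did(clause, predicate):
    """Find the actor of `predicate` anywhere in the nested clause."""
    if clause.get("predicate") == predicate:
        return clause["actor"]
    undergoer = clause.get("undergoer")
    if isinstance(undergoer, dict):
        return who_did(undergoer, predicate)
    return None
```

Asking "Who left?" against this structure descends into the embedded clause and returns "John", while "Who knows?" is answered from the matrix clause.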
Coming up: how the variety in our answers allows for natural interchange even with simple yes/no (polar) questions.
Today, we look at some of the basic elements in sentence recognition: understanding possessives and sentences that embed clauses.
The good news is that these structures are handled the same way by the Project Turing prototype. There is a long way to go before our year-end target, so keep watching for our next updates.
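One way to picture the uniform treatment (an assumption on our part, not the prototype's internals) is that a possessive, like an embedded clause, is just another nested structure attached to a participant:

```python
# Hypothetical sketch: possessives stored as nested attribute structures,
# the same shape used for embedded clauses.
# "John's dog barked." -> bark(actor=dog(owner=John))

possessive = {"predicate": "bark",
              "actor": {"head": "dog", "owner": "John"}}

def owner_of_actor(clause):
    """Return the owner of the clause's actor, if it has one."""
    actor = clause.get("actor")
    if isinstance(actor, dict):
        return actor.get("owner")
    return None
```

Because both constructions nest in the same way, one traversal routine can serve both.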
Languages use embedding to be more succinct. Why? Because our brains can handle it easily. Here we break down sentences composed of more complex clauses to see how we can still work with the stored information.
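A simple way to see this breakdown (an illustrative sketch, not the prototype's code) is to unpack a relative clause into two simple clauses that share one referent:

```python
# Hypothetical sketch: decompose an embedded relative clause into two
# simple clauses sharing a referent.
# "The man who owns the shop sold the car."

clauses = [
    {"predicate": "own",  "actor": "man#1", "undergoer": "shop#1"},
    {"predicate": "sell", "actor": "man#1", "undergoer": "car#1"},
]

def facts_about(referent, clause_list):
    """Collect every clause in which the referent participates."""
    return [c for c in clause_list
            if referent in (c.get("actor"), c.get("undergoer"))]
```

Both facts about `man#1` remain available for later questions, even though the original sentence packed them into one clause.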
We are now about a quarter of the way through our 2-month development allocation for conversation, so progress is interesting to follow. There will be more on this topic shortly. See you then.
In today's last video (November 16, 2013) we explore the use of pronouns in a conversation in more detail. We are introducing the tracking mechanism described by Role and Reference Grammar (RRG).
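One simple way to picture such a tracking mechanism (our own sketch, not the RRG formalism itself) is a running list of mentioned referents, with a pronoun resolving to the most recent referent whose features match:

```python
# Hypothetical sketch of discourse tracking: resolve a pronoun to the
# most recently mentioned referent with matching gender features.

class Tracker:
    def __init__(self):
        self.referents = []          # most recent mention last

    def mention(self, name, gender):
        self.referents.append((name, gender))

    def resolve(self, pronoun):
        gender = {"he": "m", "she": "f", "it": "n"}[pronoun]
        for name, g in reversed(self.referents):
            if g == gender:
                return name
        return None
```

After mentioning John and then Mary, "he" resolves to John and "she" to Mary; a real engine must of course track far richer features than gender.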
We will be back soon with new videos showcasing our exploration of conversation between us and the computer.
Here we expand on the exploration of reflexive versus non-reflexive pronouns. There is a lot to learn, at least for the Thinking Solutions Project Turing team, in implementing an effective context-tracking engine. RRG's treatment of reflexives allows us to simply follow its recipe, as you can see in this video.
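The recipe can be caricatured in a few lines (a toy sketch under a deliberately simplified binding rule, not the full RRG treatment): a reflexive finds its antecedent inside the current clause, while a plain pronoun must look outside it.

```python
# Toy sketch (simplified binding rule, for illustration only): a
# reflexive pronoun binds to a co-argument within its own clause; a
# plain pronoun finds its antecedent outside the current clause.

def resolve(pronoun, clause_args, discourse):
    """clause_args: referents in the current clause (subject first);
    discourse: earlier referents, most recent last."""
    if pronoun.endswith("self"):
        # "John saw himself." -> himself binds to the clause subject
        return clause_args[0] if clause_args else None
    # "John saw him." -> him must refer to someone outside the clause
    for ref in reversed(discourse):
        if ref not in clause_args:
            return ref
    return None
```

So in "John saw himself", himself is John, while in "John saw him" (with Bill mentioned earlier) him must be Bill — the contrast that makes reflexives so useful for context tracking.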
We have one final video coming up later today to show how this all comes together in a conversation. See you there!
Reflexive pronouns play an integral part in conversation. Their presence helps identify whether we are talking about someone already introduced or someone within the current clause. Although this is a reasonably technical area, today's videos explore the benefits of having reflexive and non-reflexive pronouns in English.
We leverage Role and Reference Grammar's (RRG) teachings on this subject. RRG is a linguistic model formulated using evidence from a wide range of human languages. In conjunction with Patom theory, it allows us to start deploying a brain-based conversational machine.
The next video will expand on the information here.
What is conversation? At one level, it consists of two or more participants learning new information about each other. This information is required to validate and respond to statements and questions. And in a conversation, ambiguous information can easily be handled with a clarifying question. Let's explore this in the video.
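A clarifying question can fall out naturally from the tracking idea (a hypothetical sketch, not the prototype's dialogue logic): when a pronoun could refer to more than one known participant, ask rather than guess.

```python
# Hypothetical sketch: when a pronoun is ambiguous between stored
# participants, respond with a clarifying question instead of guessing.

def respond(pronoun_gender, known):
    """known: list of (name, gender) participants learned so far."""
    matches = [name for name, g in known if g == pronoun_gender]
    if len(matches) == 1:
        return f"OK, you mean {matches[0]}."
    if len(matches) > 1:
        return "Which one do you mean: " + " or ".join(matches) + "?"
    return "Who do you mean?"
```

With both Mary and Sue in context, "she" triggers "Which one do you mean: Mary or Sue?", while an unambiguous "he" is simply confirmed.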
There is a big month ahead as we develop and explore language further. See you back here soon.