When You Don’t Know Something, You Ask Questions
“fundamentally, you can’t create answers to all possible questions that any human might ever ask by hand, and we have no way to do it by machine. If we did, we would have general AI, pretty much by definition, and that’s decades away.”—Benedict Evans 
One of the great debates raging in the tech community in early 2017 is whether Voice First systems such as Alexa, Siri, Cortana, and Google are too "dumb" because "they can never answer every question". The truth is, of course they are. These generation-one systems fail outside very constrained domains. Yet by the same criteria, we humans all look pretty "dumb" too. The inventions of writing, the printing press, the book, the floppy disk, and the Internet allow any properly equipped human to answer any question, or at least get a fairly good "feeling" for the potential answers.
It turns out humans think in a "fuzzy" way. Our answers, as logical as they may sometimes seem, may have foundations in logic, but they are fuzzy in the way they are conveyed to other humans. Most humans choose not to speak in strings of facts connected together, and those who do suffer a life of loneliness. Humans use analogy and reference to express things. Many researchers and observers assume this is exformation (information to be discarded). However, the things we use to present concepts, ideas, even commands have a multiplex quality in how they are said.
For computers to solve what I call the log2(n) – n paradox (or the Evans paradox), they need to deal with the fuzziness of humans, and that is not the "unsolvable" problem that even learned experts suggest.
The simple answer to the log2(n) – n paradox is that no system needs to know every answer, just as no human does. There are simple and effective ways to solve the paradox: actively learning and understanding the intent behind questions. This forms a basic function that can be applied to any problem. The other side is finding the ontological resources on the Internet. The assumption that an API is needed is invalid: if a human can see it, Voice-mediated AI can find it and interact with it the same way a human would. Thus one does not need to wait for an API to be built; it is already available.
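The "no API needed" point can be illustrated with a minimal sketch: extract the text a human would see on a web page and answer from it directly. The HTML snippet and the `answer_from_page` helper below are my own illustrative assumptions, not the author's implementation; in practice the page would be fetched with `urllib.request` from a live URL.

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collect the text a human would see on a page, skipping script/style."""
    def __init__(self):
        super().__init__()
        self._skip = 0      # depth inside <script>/<style> tags
        self.chunks = []    # visible text fragments, in page order
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

# In practice this would come from urllib.request.urlopen(url).read();
# the snippet stands in for any page a human could already read.
page = """<html><head><style>body{color:red}</style></head>
<body><h1>Store hours</h1><p>Open 9am-5pm, Monday to Friday.</p></body></html>"""

def answer_from_page(html, keyword):
    """Return the first visible text chunk that mentions the keyword."""
    parser = VisibleTextExtractor()
    parser.feed(html)
    for chunk in parser.chunks:
        if keyword.lower() in chunk.lower():
            return chunk
    return None

print(answer_from_page(page, "open"))  # Open 9am-5pm, Monday to Friday.
```

Nothing here depends on the site publishing an API; the agent consumes exactly the rendered text a human consumes.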
I have been asked by over 100 people how these real problems are solved and whether they are truly decades away. Many have witnessed the product of my research in demonstrations I have presented. Some were shocked that this was still being debated after they saw the active learning systems I have crudely cobbled together. I am an ugly coder, driven by solutions rather than programming art, but it works. Once established programmers discover these solutions, they will be an order of magnitude better.
In this Special issue of Multiplex Magazine I explore how ultimately there are no limits to the questions one could ask a Voice First device, and no limits to the answers, using active learning and doing what humans do when they do not understand a question: asking for more detail.
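The "ask for more detail" behavior can be sketched as a toy clarification loop: score each known intent by keyword overlap with the utterance, answer when one intent clearly wins, and ask a follow-up question otherwise. The intent names and keyword sets below are hypothetical examples, not the author's system.

```python
# Hypothetical intent inventory: each intent is a set of trigger keywords.
INTENTS = {
    "weather": {"weather", "rain", "forecast", "temperature"},
    "timer":   {"timer", "minutes", "alarm", "remind"},
}

def understand(utterance):
    """Return ('answer', intent) when one intent clearly wins,
    or ('clarify', question) when the utterance is ambiguous."""
    words = set(utterance.lower().split())
    scores = {name: len(words & kws) for name, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    ranked = sorted(scores.values(), reverse=True)
    # Ambiguous: no keywords matched, or two intents tied at the top.
    if ranked[0] == 0 or (len(ranked) > 1 and ranked[0] == ranked[1]):
        return ("clarify", "Could you tell me more about what you need?")
    return ("answer", best)

print(understand("will it rain tomorrow"))  # ('answer', 'weather')
print(understand("set something for me"))   # asks for more detail
```

A real system would replace keyword overlap with a learned intent classifier, but the control flow is the same: when confidence is low, the device does what a human does and asks.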