15 comments

  1. I would expect an ontology-building tool to emerge using social factors to allow anyone in the world to contribute, much like a wiki. In time, such a resource might grow large enough to provide computers with an information base so broad and deep that it would become difficult to stump them in a Turing test.

    Which makes me really tempted to create a public ontology where we use Turing tests to identify gaps in the knowledge. I wonder if there’s already an open-source Turing-test discussion engine in PHP floating around on the net…

    ugh, too many ideas, not enough time.

  2. Anne,

    I’ve been thinking along similar lines. I think there could be various word games that serve as a frontend for a public ontology database. Luis von Ahn’s “Verbosity” certainly fits the bill (play it at http://www.gwap.org), as could a Family Feud-style game (“Name something you keep in a medicine cabinet”). There’s a rough sketch of how answers like those might be collected at the end of this comment.

    Thanks for the comments,

    John
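
    In case it helps to picture it, here’s a minimal sketch (in Python, with invented names and prompts; this isn’t Verbosity’s actual mechanism) of how answers from a game like that could feed a public ontology as simple subject-relation-object assertions:

    ```python
    # Rough sketch: turn answers from a "name something..." prompt into
    # countable subject-relation-object assertions. All names are made up.
    from collections import defaultdict

    # (subject, relation, object) -> number of players who asserted it
    assertions = defaultdict(int)

    def record_answer(prompt_subject, relation, answer):
        """Record one player's game answer as an ontology assertion."""
        assertions[(prompt_subject, relation, answer.strip().lower())] += 1

    # "Name something you keep in a medicine cabinet"
    for answer in ["aspirin", "band-aids", "Aspirin", "toothpaste"]:
        record_answer("medicine cabinet", "contains", answer)

    # The most popular answers are the most trustworthy assertions.
    print(sorted(assertions.items(), key=lambda kv: -kv[1]))
    ```

    The counts matter: an assertion that many players agree on is far more trustworthy than a one-off answer.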

  3. Hi John,

    You’ve hit the nail on the head with ontologies, almost!

    Have you read ‘On Intelligence’ by Jeff Hawkins of Palm fame? (http://en.wikipedia.org/wiki/On_Intelligence)

    He outlines a Memory Prediction Framework as a way to develop intelligent machines. In the context of Hawkins’s framework, a computer should only need to learn a minimum set of relationships before it can begin predicting things (like search requests) on its own.

    A publicly generated list of ontologies is simply a massive ‘cheat sheet’, and there is nothing intelligent about that. If the computer were truly intelligent, we wouldn’t have to put in any (public) effort at all.

    James

  4. James,

    Thanks so much for the comment; I’ve put Hawkins’s book on order through my library. From the Wikipedia article the theory sounds fascinating, though I have to admit that I’m skeptical the memory-prediction framework could provide a complete answer. Hawkins seems to be describing an intelligence that isn’t “artificial” in the conventional sense, but that emulates a theoretical model of how animal brains get the job done. That’s admirably ambitious, but do we have a sense of how far it can get us? By contrast, it’s reasonably clear that a semantic/ontological approach could get us very far indeed, and provide great raw material for any number of AI applications.

    A really interesting discussion, I appreciate the post.

    John

  5. If there are search solutions for websites that understand human language, why is it so hard to make a search engine act the same way?

  6. Casey,

    It is a confusing situation, but it all comes down to what we mean when we say “understand”. Yes, some search solutions are programmed to parse human language. They can recognize that a particular word is a transitive verb, and that the noun preceding it is its subject while the noun after it is its object. They can recognize tense, conjugation, number, and gender in words. But that’s very different from saying that they understand what you’re talking about. They don’t, and getting them to would be a much more difficult challenge.

    Let’s say that I gave you a sentence in another language: “Blek felk floop”. I could then tell you that “felk” is a verb, “blek” is its subject, and “floop” is its object. From that, you could conclude that blek has felked floop, or is felking floop, or plans to felk floop at some point in the future. You could count how many times the word “floop” appears in a document. You have a basis for processing the sentence’s structure, yet you still have no idea what I’m actually talking about; you don’t understand it. It’s the same for a search engine.

    But if I translated it into “Tigers eat apples”, all of a sudden you understand me. The word “apple” alone brings with it an image, a shape, a feeling in your hand, a taste and smell, colors, varieties, ciders, pies, strudels. You already know that it’s a fruit, it’s a living thing, it’s something you eat, that it contains seeds and is covered in skin. You can recognize that the sentence is untrue (tigers are carnivores, which don’t eat apples). You understand all of that, while a computer understands none of it.

    So there’s a vast gulf between the ways human beings experience the world and talk about it, and the way computers do. But people like you and me nonetheless expect search engines to understand us as well as another human being would, and that makes the human standard the correct benchmark for assessing search quality.

    Thanks so much for the great question; I hope the explanation, along with the rough sketch after my sign-off, clears it up.

    John
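
    To make that concrete, here’s a toy sketch (Python, entirely invented; not how any real engine works) of the gap between processing a sentence’s structure and knowing anything about the things it names:

    ```python
    # Toy illustration of parsing/counting vs. world knowledge. Everything
    # here is made up for the sake of the example.
    from collections import Counter

    # 1) "Syntactic" processing: tag parts of speech and count terms without
    #    having any idea what the words mean.
    pos_tags = {"blek": "NOUN (subject)", "felk": "VERB", "floop": "NOUN (object)"}
    term_counts = Counter(["floop", "blek", "floop", "felk", "floop"])
    print(pos_tags["felk"])        # 'VERB' -- structure, but no meaning
    print(term_counts["floop"])    # 3 -- countable, yet still meaningless to the program

    # 2) A tiny hand-built ontology: subject-relation-object facts that let
    #    the program draw a very simple conclusion.
    facts = {
        ("apple", "is_a", "fruit"),
        ("apple", "has_part", "seeds"),
        ("tiger", "is_a", "carnivore"),
    }

    def plausible(subject, verb, obj):
        """Crude check: a carnivore eating a fruit is flagged as implausible."""
        if verb == "eats" and (subject, "is_a", "carnivore") in facts \
                and (obj, "is_a", "fruit") in facts:
            return False
        return True

    print(plausible("tiger", "eats", "apple"))   # False -- requires world knowledge
    print(plausible("blek", "felk", "floop"))    # True by default -- we know nothing
    ```

    The first half can tag and count “floop” all day without knowing what it is; only the second half, the hand-entered facts, lets the program notice that tigers eating apples is implausible.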

  7. Thank you for the explanation; your article is a very interesting take on search engines. The question is: is it possible to enable search engines to understand humans?

  8. Casey,

    I would say that true understanding is an impossibility, though philosophers and computer scientists may differ on that point. But I think that there’s a great deal of work that can be done to make search engines behave as though they do understand human language. And the closer we can get to that, the more satisfying the search experience will be.

    John

  9. John,
    I also enjoyed your article. I am not an expert in this field but what you say seems to make absolute sense. Is the way forward then to get humans to understand search engines more?
    A lot of people still haven’t grasped how metadata works. I work in the media, and I recently overheard a conversation where someone made this point. The speaker bemoaned the fact that he couldn’t find a programme on a famous poet, only to discover that the programme’s metadata didn’t include the poet’s name!
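
    To put that anecdote in concrete terms, here’s a minimal sketch (field names and titles invented) of a metadata-only search missing a record because an obvious term was never entered:

    ```python
    # Minimal sketch of metadata-only retrieval; records and names are made up.
    catalogue = [
        {"title": "A Life in Verse", "keywords": ["poetry", "documentary"]},  # poet's name never entered
        {"title": "Gardening Hour",  "keywords": ["gardening"]},
    ]

    def search(query):
        """Return programmes whose metadata mentions the query term."""
        q = query.lower()
        return [rec for rec in catalogue
                if q in rec["title"].lower()
                or any(q in kw.lower() for kw in rec["keywords"])]

    print(search("Kavanagh"))   # [] -- stand-in for the poet's name; nothing to match
    print(search("poetry"))     # finds the programme, because that word was entered
    ```

    The programme is right there; the search simply has nothing to match against.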

  10. John,

    Thank you for the article. Very thought-provoking. I couldn’t help but think that a lot of design today is actually flipping the Turing test on its head. Rather than trying to build a computer to sound like a human, we’re trying to encourage humans to sound like computers. Boolean search is a prime example of that. Suggest functions are a more sophisticated form of that.

    Another thought as well: with any human interaction, we typically allow for give and take. This seems to go against expectations when dealing with computers. I might ask a human, “So what can you tell me about X?” and my interlocutor would respond, “I’m not sure what you mean.” To this I would try alternate ways of getting my idea across (using other words, explaining the context, etc.). With computers there seems to be a higher expectation of “give me an answer right away.”

  11. Patrick & Anthony,

    I want to be careful to say that people hold search engines to a human standard, but that standard is ultimately unattainable. A function like search suggest eases the burden on people to formulate more effective queries, and best bets directly introduce a human hand into the results. Neither one is making the machine any smarter, just patching its deficiencies.

    In some cases, I do think that it’s a good idea to also get people working in more machine-friendly ways. Jared Spool had a great presentation years ago comparing the handwriting recognition systems of the Apple Newton and the Palm Pilot. The Newton allowed people to write in their own longhand, and the machine did its best to try to make sense of it. It often got it wrong, but that’s entirely to be expected since it can often be difficult for one human being to understand another’s handwriting. How could we possibly expect a stupid machine to do better?

    The Palm Pilot, though, asked people to modify their writing just a little bit and follow a more machine-readable standard. The result was much more reliable handwriting recognition. People can be asked to adapt a bit if it’s really going to result in major improvements in the quality of the results.

    I would emphasize that both human beings and machines have their burdens in a search transaction. The person must submit a query that’s at least good enough that a reasonable person could understand what they’re trying to find. That could mean defining the scope of the query, or correctly saying that they want this AND that, this OR that, this but NOT that. It’s okay to require that the person express their needs precisely and logically. But if the user has fulfilled that minimum requirement, then the burden shifts to the system to provide a response that’s as good as what a reasonable person would provide. It’s then okay to hold the search engine to the human standard.

    Hope that makes sense (there’s a small Boolean sketch after my sign-off that makes the AND/OR/NOT point concrete). I really appreciate your thoughtful comments.

    John
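
    Here’s the small Boolean sketch mentioned above (toy data in Python; not any particular engine’s implementation), showing that AND, OR, and NOT are just set operations over the documents that contain each term:

    ```python
    # Toy Boolean retrieval: each term maps to the set of documents containing it.
    docs = {
        1: "apple pie recipe",
        2: "apple cider press",
        3: "tiger conservation",
    }

    def containing(term):
        """Set of document ids whose text contains the term."""
        return {doc_id for doc_id, text in docs.items() if term in text}

    print(containing("apple") & containing("pie"))     # AND -> {1}
    print(containing("pie")   | containing("tiger"))   # OR  -> {1, 3}
    print(containing("apple") - containing("cider"))   # NOT -> {1}
    ```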

  12. Reading all of this certainly gives us a great deal of enthusiasm for a practical walkthrough of Turing’s ideas. Quantifying search, and in turn framing it in terms of relationships, is really a great way to go about it. Specifically, when we talk along the lines of ontological relationships, there is also a wide range of analysis of how the brain associates its thinking when we carry out a physical search. I would suggest reading the book “Phantoms in the Brain” by Ramachandran, V. S. & S. Blakeslee (1998), which discusses the behavioural patterns of human search. Along the same lines, AI-based algorithms also follow similar patterns to arrive at a more focused, pointed search.

    I strongly agree with your article where you spoke about the future of search, including ontologies and natural-language parsers. This is of course being researched worldwide, including by me 🙂, to bring algorithmic search toward the next level of human thinking, which could one day match user intelligence.

    Thanks for such a great article…

  13. Hi John,
    I really liked your article. Another angle I wanted to bring attention to (from a UI perspective) is the problem of interaction modality, and how it changes expectations and makes the problem of user expectations worse. I used to work with voice user interfaces a lot, and I have seen this very same problem arise there. If a system sounded very human-like and offered some search, lookup, or shortcut functions, people just assumed that search commands, or any other interaction commands, would be executed with high competence. The modality strongly affected their expectations. The fact that a system spoke back to them with replies to their queries made people assign intelligence to it, and when the system did not perform as expected they felt let down, much as if the system had failed a Turing test. So I think the modality of interaction will have a huge impact on user expectations as well, which is another thing the designer should account for when designing an interactive system.
