AI & Natural Language Processing… On the Other Hand
Both before and since the NRF 2017 Big Show Expo, I’ve been commenting on the arrival of new AI (artificial intelligence), natural language processing, VR (virtual reality), and AR (augmented reality) technologies. There’s some really exciting stuff out there, and it’s not all just theoretical – practical use cases are driving adoption. Most importantly, all of these technologies are finding their way into consumers’ daily routines, and most consumers don’t even know or particularly care about the deep science that went into making the technologies real for them.
But on the other hand, I’m reminded that behind all these exciting new capabilities are humans – or more specifically, code written by humans. (And don’t get complacent about that, because “self-generating” and “self-correcting” AI engines that can write their own code – supposedly to either correct flaws in the code itself or to create new capabilities – are making their way into the mainstream. None other than Elon Musk considers this “our greatest existential threat”, and that’s saying a lot, considering things like global climate change and nuclear devices in the hands of rogue states.) And the thing about humans, which can be funny, annoying, and occasionally even catastrophic, is that they make mistakes. That’s why there are still armies of coders across the globe.
What reminded me of the frailties of human-generated code was a piece in this morning’s paper entitled “Drawls a problem for voice devices” (Benny Evangelista, San Francisco Chronicle, 01/30/17), which is either hilarious or sad, depending on one’s point of view. The gist of the article is that systems that can understand natural language often struggle with regional accents. In other words, to get the most utility out of Apple Siri or Amazon Alexa or any of their ilk, we all have to sound like newscasters on the evening news, at least for now. The article stated that “despite all the hype about the rise of voice-assisted devices using Alexa and Siri, linguistics researcher Rachael Tatman found people complaining on social media that the technology still doesn’t understand them… ‘The South is the largest demographic region in the United States,’ she said. ‘If you’re using a voice-based virtual assistant and you can’t deal with Southern speech, you’re going to have problems reaching this market.’”
Oops! It appears Artificial Intelligence (AI) with Natural Language Processing (NLP) is still very much a work-in-progress, whether or not it was “all over” the NRF 2017 expo floor. In fact, the current state and future of AI/NLP was the main topic of discussion at the Virtual Assistant Summit last week in San Francisco, a conference attended by Apple, Google, Microsoft, Samsung, and Panasonic (just to name a few). The SF Chronicle article quoted one speaker, Stephen Scarr, CEO of search services Info.com and eContext, as saying that “as an example of the challenge, a recent YouTube video showed Amazon’s Alexa misunderstanding a young boy’s request to play a song, and instead offering to play an audio porn channel.” Double oops!
In looking at the program I noticed that there wasn’t a single retailer among the list of speakers. That’s worrisome, because the focus of a lot of development in AI/NLP has consumers specifically in mind. If there’s any one thing we should have learned since 2010, it’s that consumers are driving technology adoption, not businesses. But retailers in particular continue to be caught between the pillar and the post when it comes to new technology adoption. They are challenged to update their core systems while at the same time interacting with consumers the way consumers want to interact with each other – in other words, via the latest and greatest in digital capabilities.
Why Is NLP So Hard?
While NLP technologies are exciting to think about, it may be many years before they are a “done deal”. I nosed around the Internet to get a sense of the complexity of the problem, and came upon a blog written by Ian Mercer, a serial entrepreneur and creator of Abodit, a home automation system powering the “World’s Smartest House”. Mercer encapsulated the issues perfectly, and so I share his observations without modification:
Challenges in NLP
Sheer complexity of sentence structure
- Indicative (facts) vs. subjunctive (e.g. “what if …”)
- Sentence boundary detection (e.g. “… in the U.S. Govt. …”)
Ambiguity in Language
- Syntactic ambiguity and polysemous words
Meaning is context sensitive
- Depends on the people present, e.g. “How far is it?” (miles, km?)
- Depends on the social context: “That was bad!”
- Depends on the location, e.g. “Play <song> upstairs”
- Depends on the time of day, e.g. “Let’s go eat”
- Depends on prior sentences: “The third one”
Recognizing named entities (people/places/…)
Slang, jargon, humor, sarcasm, spelling mistakes, grammar mistakes, and abbreviations, …
English is highly ambiguous
The ambiguity present in the English language is such that even humans cannot decipher sentences with 100% reliability. The ambiguity takes many forms including:
- Syntactic ambiguity: “I saw the Grand Canyon flying to New York.”
- Preposition ambiguity: “I saw a man on the hill with a telescope.”
- Anaphora resolution: “John and Mary took two trips around France. They were both wonderful.”
- Ambiguity as a result of polysemy: “He left the bank five minutes ago. He left the bank five years ago. He caught a fish at the bank.”
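That last point – polysemy – looks trivial until you try to code it. Here’s a minimal, purely illustrative Python sketch of a bag-of-words disambiguator for “bank” (the sense names and cue words are invented for this example, not taken from any real NLP library):

```python
# Toy word-sense disambiguator for "bank" (illustrative only).
# The two senses and their cue words below are invented for this sketch.
SENSES = {
    "financial": {"money", "deposit", "loan", "teller", "account"},
    "river": {"fish", "water", "shore", "caught", "fishing"},
}

def disambiguate_bank(sentence):
    """Pick the sense whose cue words overlap the sentence the most."""
    words = set(sentence.lower().replace(".", "").split())
    scores = {sense: len(words & cues) for sense, cues in SENSES.items()}
    return max(scores, key=scores.get)

print(disambiguate_bank("He caught a fish at the bank"))       # river
print(disambiguate_bank("She made a deposit at the bank"))     # financial
print(disambiguate_bank("He left the bank five minutes ago"))  # no cues present
```

Notice the third sentence: with no cue words present at all, the “winner” is arbitrary – which is exactly Mercer’s point that “He left the bank five minutes ago” can’t be resolved without wider context.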
That’s just for the English language, and notice that Mercer doesn’t even mention regional accents!
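To see how quickly even the “easy” problems bite, consider sentence boundary detection from Mercer’s first list. The naive rule – a sentence ends at a period followed by whitespace – seems reasonable until abbreviations show up. A deliberately naive Python sketch (not how any production tokenizer works):

```python
import re

def naive_split(text):
    # Naive rule: split wherever a period is followed by whitespace.
    return re.split(r"(?<=\.)\s+", text)

pieces = naive_split("She works in the U.S. Govt. budget office. It is in D.C.")
print(pieces)
# Two real sentences come back as four fragments, because "U.S." and
# "Govt." each end with a period followed by a space.
```

Real tokenizers have to learn or enumerate abbreviations, and even then the “U.S. Govt.” case stays genuinely ambiguous.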
Wait? No!
Retailers are famous for holding back from experimenting with technologies even when the technologies themselves are proven and mature. But in this case, as with so many other “consumer-facing” capabilities, retailers can’t afford to wait. The pace of technology adoption is accelerating, and consumers continue to demonstrate that if a technology is easier to use than to ignore, they’ll adopt it into their lives.
So as RSR has recommended consistently, retailers need to push the “go faster” button. That means they have to (1) get involved in the dialogue about emerging technologies, and (2) experiment. Go to the conferences, get on advisory boards, establish a “lab”, and learn how to “fail fast” just as the tech companies and consumers do!