JDA Labs: Machine Learning Is Coming – And Don’t Be Surprised If It Has A French Accent
Last week I had the opportunity to travel to Montreal to visit JDA Labs, the company’s real-life playground where dozens of young PhDs dream up their wildest ideas for retail, manufacturing and wholesale and then set about the trial-and-error task of getting some of them to work. Think robots, HoloLens headsets, and all kinds of scanners everywhere you look in an airy, largely glassed-in play area. Some work. Some don’t. And that’s the whole point.
The Labs’ purpose is fairly straightforward: conjure up the “What if we could…” ideas of some of the world’s most brilliant scientific and engineering minds, give them free rein (in both budget and creative license) to bring those ideas to life, and then – and this is the really interesting part – when a project seems “close,” partner with local universities, retailers and manufacturers to find real-world business applications for these tools.
The company had never opened its doors to analysts before. Before we got started, JDA’s Wayne Usie addressed the handful of press and analysts in attendance to give us the big picture.
- JDA views the supply chain as a bigger problem than ever, citing that efficiency has actually declined in recent years. This is due to several factors: faster-moving inventory, lower visibility into that inventory, slower (and less accurate) data, siloed channels, latency caused by disparate systems – the list goes on. The result, in JDA’s view, is an inversion problem it estimates at $1.1 trillion.
- The company is serious about pivoting to a fully software-as-a-service model – for real. It could not have been clearer about that throughout the day. The company has enjoyed consistent growth, yet it is still learning new things about the retail and manufacturing businesses every day to help it predict the future better. For example, did you know that one of the biggest indicators of when someone is going to buy tires is the day they first registered their car? Makes sense, but it took a lot of technology to figure that out.
- The company knows there are things it doesn’t know, and it’s too late now to try to do everything itself. As a result, it is starting to lean on partners far more than in recent years.
With that as the contextual backdrop, it was time to hear from Suresh Acharya, the man tasked with overseeing the Labs and its output. Some of the more interesting things I learned:
- The Labs anticipates that a substantial portion of the projects it is currently working on will make it into real life. That’s a big ask. A lot of what the team demoed to us over the course of the day didn’t quite work – but that’s the point: this is tech that inherently shouldn’t work yet, which makes getting a substantial portion of it into production a tall order indeed.
- Many of the projects are based in machine learning, and I would imagine one of the most difficult parts of Suresh’s job is getting business-minded people to understand exactly what that can and will mean for their businesses. For example, one of the most important aspects of machine learning is that there are very few pre-set rules. You give the technology both the input AND the output – its job is to figure out the patterns and correlations that drove the output. This is a massively different way of thinking than what retailers are used to, and it takes some time to wrap your head around.
- Within the machine learning field, there are two very different types of solutions: those that are supervised and those that are unsupervised. An example of each:
- You deposit a check into an ATM. Despite the fact that many of us have varied ways of writing a “7,” the machine has been given every detectable version of a 7 against which to compare your particular way of writing the numeral. When it can pick your 7 from the set it’s been given in advance, it has employed a supervised form of machine learning.
- An unsupervised instance, however, is able to find patterns in unlabeled data. This is far more complicated, and it is represented by the research being done on autonomous vehicles today. There is simply no way someone could program a car with every possible scenario it might encounter on the road, so the machine must “think” for itself. (A minimal sketch contrasting the two approaches appears after this list.)
- As a result of all of this, much of what JDA is working on so far involves supervised machine learning. The input might be weather, the output might be sales, and it’s up to the machine to determine how weather affected sales. A large tire manufacturer is already using a predictive analytics tool to figure out when someone needs new tires – before they do. And on the supply side, the experiment includes stocking individual stores appropriately according to 20 different streams of data the manufacturer uses to predict retail sales (weather, past sales, etc.), so that when a customer walks in needing a new set, the store already knew that was going to happen and has them in stock. (A hedged sketch of this input/output approach also follows the list.)
- Part of the problem with all of these efforts can be summed up as the “intangibles.” To wit, Suresh noted that if we took the best point guard in the NBA today, the best center in the game, and so on until we’d assembled the best five-man basketball team on earth, they probably still wouldn’t win. It’s not about how great they are individually; it’s about the relationships between them. In his experience, the same holds for product attributes: the relationships between them are what make these deep learning efforts so incredibly challenging.
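To make the supervised/unsupervised distinction concrete, here is a minimal sketch in Python using scikit-learn’s bundled handwritten-digit images. The dataset, model choices and train/test split are my own illustrative assumptions – this has nothing to do with JDA’s actual tooling, but it shows the difference between learning from labeled examples (the ATM’s “7”) and finding structure in unlabeled data.

```python
# Minimal sketch: supervised vs. unsupervised learning on the same data.
# Illustrative assumptions only -- not JDA's stack.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

digits = load_digits()  # 8x8 images of handwritten digits, with their true labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

# Supervised: the model sees both the input (pixels) AND the output (the true digit),
# then learns to recognize your particular way of writing a "7".
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the same pixels with no labels at all; the algorithm has to
# discover structure (clusters of similar-looking digits) entirely on its own.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(digits.data)
print("cluster assigned to the first image:", clusters[0])
```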
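And here is a similarly hedged sketch of the weather-to-sales idea: inputs and outputs are both supplied, and the model infers how the inputs drove the output. The file name, column names and model are hypothetical stand-ins, not the tire manufacturer’s actual 20 data streams.

```python
# Hedged sketch of supervised demand prediction: weather and history in, sales out.
# "store_history.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("store_history.csv")  # hypothetical: one row per store per week
features = ["avg_temp_f", "rainfall_in", "last_year_units", "weeks_to_holiday"]
X = df[features]
y = df["tire_units_sold"]  # the known output the model learns from

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Which input streams mattered most for sales? This is the pattern-finding step
# described above: no hand-written rules, just learned correlations.
for name, importance in zip(features, model.feature_importances_):
    print(f"{name}: {importance:.2f}")

# Forecast demand for one store so the tires are on the shelf before the customer walks in.
print("forecast units:", model.predict(X_test[:1])[0])
```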
So then we saw some examples. I’m not allowed to get into them here, and I completely understand why. But like I said: some of them worked, some of them didn’t, and I was totally okay with that. What was fascinating was getting a peek into the world of deep learning in a very real way, and I genuinely hope to do more visits like this in the future. I found it incredibly valuable.
One last thing: I’d heard about the Labs quite a bit in briefings before, and I’d met with some of its representatives at the company’s Focus user group. But I always kind of wondered why it was located in Montreal. The answer? The University of Montreal is home to arguably the greatest mind in the deep learning world, a man by the name of Yoshua Bengio.
In November 2016, Google announced it was creating a new AI research group in its Montreal office and would invest $4.5 million over three years in the Montreal Institute for Learning Algorithms, an AI research lab that is part of the University of Montreal. Part of Google’s investment will fund Bengio’s research projects as head of the institute. Bengio, his colleagues and other deep learning experts are no doubt the reason Google has decided to maintain a strong presence in the Montreal area.
As you can see, big things are happening in the province of Quebec. Bonne chance.