Artificial Intelligence (AI) has now been around for several decades and promises to change the way you live your life. In fact, most technology you use every day already operates using AI concepts like machine learning and natural language processing. Although we’re still quite far from a universal AI machine that can learn and understand as broadly as humans do, the increased availability of computing resources and advanced algorithms has made AI concepts practical for data analysis and insight identification. A recent IBM-sponsored Harvard Business Review report outlines the trend of many organizations toward an increasingly data-driven decision-making process that uses both internal and external data and advanced analytics to compete in an “insight economy”.
Pattern recognition is the main objective of most machine learning applications. Analyzing the relationship between observed inputs and outputs (e.g., pricing and sales, shipping strategy and inventory, intelligence and lead conversion) leads to an objective prediction of what outputs you should expect if you tweak those same inputs. For example, say you want to analyze your pricing models over time to determine which model works best. Sure, this has been done for years by pulling up Excel and manually working through the data, but that approach relies on human judgement, which introduces an extra source of uncertainty and subjectivity. Additionally, if you never happened to land on the ideal pricing model in practice, a better undiscovered model will remain hidden if you simply look at sales vs. pricing. Machine learning and pattern recognition offer an objective alternative that uses your past pricing strategies and resulting sales to predict which model should be used moving forward, even if your data doesn’t explicitly show it.
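To make the pricing example concrete, here is a minimal sketch of the input-output idea: fit a simple linear model of sales against price and use it to predict sales at a price you haven’t tried. All numbers (and the helper name `predict_sales`) are invented for illustration; a real pricing analysis would use far richer models and data.

```python
# Toy illustration: learn the relationship between price (input) and
# sales (output), then predict sales at an untested price.
# All numbers are invented.
prices = [10.0, 12.0, 14.0, 16.0, 18.0]
sales = [200.0, 185.0, 168.0, 150.0, 131.0]  # units sold at each price

n = len(prices)
mean_p = sum(prices) / n
mean_s = sum(sales) / n

# Ordinary least squares for slope and intercept.
slope = sum((p - mean_p) * (s - mean_s) for p, s in zip(prices, sales)) \
        / sum((p - mean_p) ** 2 for p in prices)
intercept = mean_s - slope * mean_p

def predict_sales(price):
    """Predicted units sold at a given price, from the fitted line."""
    return intercept + slope * price

# Predict sales at a price we never actually charged.
print(predict_sales(15.0))
```

The point is not the arithmetic but the workflow: past inputs and outputs drive an objective prediction for an input you have never observed.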
Computers are certainly not perfect. Algorithms can provide false insights. This is clear if you consider the obvious fact that computers and algorithms are designed by imperfect humans who can inject their own biases. We believe the union of computers (first) and humans (second) is the ideal sweet spot of impact and efficiency. Most people (us included) certainly don’t aim to replace human decision-making with computers, but it’s naive to ignore the emerging cognitive and predictive power of modern computing capabilities.
AlphaGo and Machine Learning
At this point it’s old news that intelligently designed algorithms with massive computing power can beat some of the best humans at intellectual competitions. Deep Blue beat the chess Grandmaster Garry Kasparov, Watson defeated the Jeopardy! champion Ken Jennings, and last week Google DeepMind’s AlphaGo took down the world champion Go player Lee Sedol. As the tasks become increasingly difficult due to both scale and nuance, experts in the AI community step up to the plate and devise new approaches that combine established techniques with the cutting edge to solve both real-world and academic problems.
If you’ve never heard of the game Go, it’s an ancient Chinese board game about 2,500 years older than Chess with 297 more piece positions, 32,090 more possible first move configurations, and an estimated 50 orders of magnitude more possible board configurations. If you didn’t catch that last fact, it’s worth repeating:
Go has approximately 50 orders of magnitude more possible board configurations than Chess! That’s a factor of 10^50, which is conveniently the square root of a googol (1 googol = 10^100 = Google’s namesake).
As a point of reference, Go has approximately 10^170 total possible board configurations, which is between a googol and a googol squared, and more than (the number of atoms in the universe) squared. We recognize that these comparisons may seem arbitrary, but the key consequence is that when a computer plays a human in a Go match, it can’t simply search the space of possible moves and choose the best one. There are too many! If that kind of exhaustive search were possible, many things about the way we model physical phenomena would be different, including better weather predictions and more accurate modeling of complex biological systems. This is the critical differentiator between this task and previous machine learning accomplishments.
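A quick back-of-the-envelope check makes the scale tangible. The figures below come straight from the article (Go at roughly 10^170 configurations, 50 orders of magnitude beyond Chess); published complexity estimates for both games vary by source, so treat these as order-of-magnitude illustrations only.

```python
# Sanity-check the magnitudes quoted above using Python's exact big integers.
go_configs = 10 ** 170                 # approximate Go board configurations
chess_configs = go_configs // 10 ** 50  # "50 orders of magnitude" fewer
googol = 10 ** 100

print(go_configs // chess_configs == 10 ** 50)  # the quoted factor
print(googol < go_configs < googol ** 2)        # between googol and googol^2

# Why exhaustive search is hopeless: even at a wildly generous billion
# positions evaluated per second, count the orders of magnitude of seconds
# needed to touch every configuration once.
seconds_needed = go_configs // 10 ** 9
print(len(str(seconds_needed)) - 1)  # orders of magnitude of seconds
```

For comparison, the age of the universe is on the order of 10^17 seconds, so no amount of hardware closes that gap by brute force.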
As described in Google’s blog, the technique used to teach a computer how to play Go is to do just the opposite of exhaustive search: let the computer learn how to play the game by trial and error. After each round of practice, the AI machine learns how to play the game better. Using what’s known as reinforcement learning, it intrinsically learns when a move contributed to its ultimate success or failure. As a result, AlphaGo developed the equivalent of nuanced planning and strategy. This is similar to how a human learns, except in the case of modern machine learning with massive computing power, computers are often more effective learners.
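The trial-and-error idea can be sketched in a few lines. This is emphatically not AlphaGo’s actual algorithm (which combines deep neural networks with tree search); it is the simplest possible reinforcement-learning loop, with an invented one-move “game” whose win probabilities are hidden from the learner.

```python
import random

# Minimal trial-and-error learning: the agent repeatedly plays a one-move
# game and nudges its value estimate for each move toward the observed
# win/loss outcome. Over many practice rounds, the better move emerges.
random.seed(0)

WIN_PROB = {"a": 0.2, "b": 0.8}  # hidden from the agent: "b" is better
values = {"a": 0.5, "b": 0.5}    # agent's learned estimate of each move
alpha = 0.1                      # learning rate

for episode in range(2000):
    move = random.choice(["a", "b"])  # explore randomly during practice
    reward = 1.0 if random.random() < WIN_PROB[move] else 0.0
    # Credit assignment: move the estimate toward this round's outcome.
    values[move] += alpha * (reward - values[move])

best = max(values, key=values.get)
print(best, values)
```

Nothing tells the agent which move is better; the preference for the stronger move falls out of accumulated wins and losses, which is the essence of the trial-and-error learning described above.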
For comparison, so-called supervised machine learning tasks start with a set of known inputs and outputs. In the case of chess, the inputs could be a sequence of moves made in response to an opponent’s, and the outputs could be the evolving probability of victory as those moves are made. The goal is to use this measured input-output relationship to predict future outputs from yet-to-be-seen inputs in order to maximize the overall probability of victory. For tractable problems where the space of possible choices is “manageable”, like Chess or Jeopardy!, something close to a brute-force solution can be achieved: the algorithm lets the computer choose the best option from a library of choices because the possibilities can be enumerated and evaluated. In Go, however, the space of possibilities is far too large to pick the optimal move out of a library. The machine learning algorithm is instead trained to act in a more intuitive way, mimicking how the best humans play the game with intuition and short- to long-term planning. It’s notable that the best Go players in the world were both baffled and impressed by AlphaGo’s strategy because it was so far outside the traditional framework. Thanks to its optimized and objective learning process, the algorithm found ways to solve a problem that were not obvious to those who designed and inspired it.
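To show what “choose the best option from a library of choices” looks like when the game is small enough, here is an exhaustive game-tree search for the classic game of Nim (players alternately take 1–3 stones; whoever takes the last stone wins). The whole tree fits in memory, so the computer can literally try every line of play; with Go’s ~10^170 configurations, this exact approach is hopeless.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # previous player took the last stone and already won
    # Exhaustive search: a position is winning if SOME legal move
    # leaves the opponent in a losing position.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick the first move that leaves the opponent in a losing position."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # no winning move exists; take one stone and hope

print(can_win(8))    # False: multiples of 4 are losing positions
print(best_move(7))  # 3: leaves the opponent with 4 stones
```

This is the “library of choices” style of play: every possibility is known and evaluated. AlphaGo’s contribution is precisely that it performs well where this enumeration is impossible.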
Despite the excitement around AlphaGo, several experts have noted that the field is still quite far from a universal AI machine as it’s traditionally defined. A universal AI machine may never be achieved, and in practice this is likely a matter of semantics. It’s possible that the specific approach used for AlphaGo is particularly well-suited for some tasks but poorly suited for others. However, this doesn’t detract from it being another significant step in an exciting direction of digital cognitive function, and it offers motivation for the rest of us to evaluate all the available technologies that can improve how we tackle our everyday tasks in life and business.
So how about something useful?
Yet another computer beating a human at a seemingly trivial competition may be newsworthy the first or second time, but it certainly leads us to wonder where, or whether, these technologies are actually used. Practicality should always trump novelty when setting priorities, but for demonstrating a principle, fanfare and hoopla are easily justified. The short answer is that real-world applications have already arrived (e.g., driverless cars, image recognition, Amazon’s product shipment strategies, language processing). However, AlphaGo’s achievement deserves extra attention due to the subtleties of the task it solved and the way it solved it compared with the current industrial state of the art.
Although machine learning is now ubiquitous in many software and hardware applications, some areas and industries have yet to benefit from these advancements. For example, there’s a myriad of intelligence-related applications that still require human judgement. Whether it’s deciding which product to buy, which pricing model to use, or how to control your inventory, the influx of new machine learning approaches helping business leaders make more informed decisions is remarkable. It’s becoming clearer as time passes that we should optimize how lessons are learned and allow an objective agent to modify the way decisions are made. Instead of relying solely on our inferences and assumptions to determine how we operate, we should leverage information from our previous successes and failures and allow the data to speak for itself.
A significant challenge of introducing an automated decision maker to your enterprise is identifying where improvement is needed and what types of data are available. Data can be segmented into two categories whose principal difference is ownership. First, you have your internal data: sales, operational models, metrics of effort, wins and losses, and so on. Second, there is a critical category of data disseminated out in the world for anyone to acquire and analyze, ranging from documented conversations, reviews, and pricing trends to hiring trends, discrete events, and other movements at both the organizational and individual levels. Combining your internal data with external indicators puts you in a position to devise the powerful system of inputs and outputs needed for a reliable prediction model, and to understand where your company fits among competitors and the market at large. Machine learning applications like AlphaGo can spot trends and identify paths that the human element may miss. From a business and marketing intelligence perspective, a holistic understanding of hundreds of data sources, ranging from internal activities to external market movements, is a daunting manual task as the volume and variety of data continue to increase.
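As a hypothetical sketch of the first step, joining internal records with external signals into one row of model inputs per period, consider the snippet below. Every field name and number is invented for illustration; real pipelines would pull from databases and third-party feeds rather than inline dictionaries.

```python
# Combine internal data (your own sales and pricing models) with external
# indicators (e.g., a competitor's observed price) into one feature row
# per month, ready to feed a prediction model. All values are invented.
internal = {
    "2016-01": {"sales": 120, "price_model": 1},
    "2016-02": {"sales": 135, "price_model": 2},
}
external = {
    "2016-01": {"competitor_price": 9.99},
    "2016-02": {"competitor_price": 11.49},
}

def build_rows(internal, external):
    """Merge the two sources on the month key into flat feature rows."""
    rows = []
    for month in sorted(internal):
        row = {"month": month}
        row.update(internal[month])
        row.update(external.get(month, {}))  # tolerate missing external data
        rows.append(row)
    return rows

rows = build_rows(internal, external)
print(rows[0])
```

The merged rows are exactly the “system of inputs and outputs” described above: internal outcomes paired with the external context in which they occurred.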
It seems that machine learning and automated pattern recognition in marketing and business intelligence will continue to be a cornerstone of extracting insights from big data for any organization competing in an “insight economy”.