As the fields of machine learning and AI become more entwined, it is important that we have a clear understanding of their differences.
Machine learning comprises a set of algorithms that enable software to be informed by historical datasets and past events, ideally without the need for human intervention. Whilst machine learning is very valuable and has numerous useful applications, it is not as straightforward as it may appear. Questions around how the system will learn (whether by supervised, unsupervised, or reinforcement learning), or how bias will affect outcomes, are only the beginning of the myriad of complexities when working with machine learning technologies.
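The idea of software being "informed by historical datasets" can be made concrete with a minimal supervised-learning sketch. The data points and the prediction target below are made up purely for illustration: the program fits a straight line to past (x, y) observations by least squares, then uses it to predict an unseen value.

```python
# Minimal supervised-learning sketch: fit a line to historical data,
# then predict an unseen point. The data here is invented for illustration.

history = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # made-up past observations

n = len(history)
sx = sum(x for x, _ in history)
sy = sum(y for _, y in history)
sxx = sum(x * x for x, _ in history)
sxy = sum(x * y for x, y in history)

# Ordinary least-squares estimates for slope and intercept.
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# The "learned" model generalises to inputs it has never seen.
prediction = slope * 5 + intercept
print(f"predicted y at x=5: {prediction:.1f}")
```

This is supervised learning at its simplest: labelled historical pairs in, a predictive model out, with no human intervention in the fitting itself.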
Due to the flood of data now available, scientists are using algorithms to find all manner of surprising correlations. One company found that deals closed during a new moon are, on average, worth 43% more than those closed when the moon is full. Other odd findings include people answering the phone more often when it is snowy, cold, or very humid, and responding more to emails when it is sunny or less humid. Yet another study showed that taller people are better at repaying loans; sensibly, the company dismissed this correlation as spurious. Part of the problem is that most machine learning systems do not combine reasoning with their calculations. They simply ‘spit out’ correlations, whether they make sense or not, and because no rationale accompanies an answer, it falls to expensive experts at the end of the process to interpret the results. By adding automated or semi-automated reasoning as a layer on top of machine learning systems, these correlations and insights become far more useful.
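How easily such spurious correlations arise can be demonstrated with a small simulation. The sketch below (invented for illustration; the feature counts and sample size are arbitrary) screens hundreds of purely random "features" against a purely random "target", exactly as a naive correlation-mining pipeline would, and reliably finds a strong-looking correlation in pure noise.

```python
import random

random.seed(0)

n_samples = 24    # e.g. two years of monthly figures
n_features = 500  # moon phase, weather, height... all random here

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sdx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sdy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sdx * sdy)

# A random target and many random, entirely unrelated features.
target = [random.gauss(0, 1) for _ in range(n_samples)]
features = [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]

# Screen every feature against the target and keep the best match.
best = max(abs(pearson(f, target)) for f in features)
print(f"strongest correlation found in pure noise: {best:.2f}")
```

With enough features and few enough samples, an impressive correlation is virtually guaranteed to appear by chance, which is why a reasoning layer on top of the numbers matters.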
Machine learning is not AI. We define AI as the process of enabling computers to complete tasks and processes that were previously thought to be uniquely human.
The ability to understand and explain the reasoning behind a decision is one of the many differences between artificial and human intelligence. Whilst we may not fully understand ourselves, as humans we can usually offer some sort of rationale for our decisions. Many machine learning algorithms are ‘black boxes’: they provide answers based on the data they have learned from, but we see only their conclusions, not how they arrived at them, which limits our ability to improve a machine learning solution when something goes wrong. Furthermore, a black-box system may fail to meet regulatory obligations to demonstrate precisely how automated decisions are made.
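The contrast between an opaque answer and an explainable one can be sketched in a few lines. Everything below is hypothetical: the weights, the loan-style feature names, and the thresholds are invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch: a black-box scorer versus a transparent rule
# that can explain its own decision. All names and thresholds invented.

def black_box(features):
    # Stand-in for an opaque learned model: a verdict, no rationale.
    return sum(w * x for w, x in zip([0.3, -1.2, 0.8], features)) > 0

def transparent_rule(features):
    income, debt_ratio, years_employed = features
    reasons = []
    if income < 20_000:
        reasons.append("income below £20k")
    if debt_ratio > 0.4:
        reasons.append("debt ratio above 40%")
    if years_employed < 1:
        reasons.append("less than one year in current job")
    approved = not reasons          # approve only if no rule fired
    return approved, reasons        # verdict *and* rationale

approved, reasons = transparent_rule([52_000, 0.55, 3])
print(approved, reasons)
```

When the transparent rule declines an applicant, it can say why, which is precisely what both debugging and regulatory compliance require, and what the black-box scorer cannot provide.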
So, whilst it is true that algorithms can make our systems smarter, they can also produce strange results that we must first identify as spurious and then work to refine. By adding a model of human-like reasoning, we can create AI solutions that are far more usable and beneficial for businesses and their customers than machine learning alone.