To achieve this, most systems rely on models – representations built from the analysis of real-world data, sometimes combining two or more sets of data.
Machines are now being built that can learn and identify information by analyzing and drawing on such data.
One good example of this is the image recognition and analysis system you’ll find in your iPhone. Apple has analyzed millions of images to create models that can identify hundreds of items in photos, right on the phone itself. That core data is the model. Like AI technology itself, the models used to create and manage this core data are also evolving fast.
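To make that concrete, here’s a minimal sketch of the same idea using a publicly available pretrained classifier – not Apple’s actual model, and the image file name is a placeholder – just to show what “a model that identifies items in images” looks like from a developer’s keyboard:

```python
# A minimal sketch, not Apple's pipeline: label a photo with a pretrained classifier.
# Assumes a recent torchvision; "photo.jpg" is a hypothetical image file.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()  # small pretrained model

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    probabilities = model(image).softmax(dim=1)
print(probabilities.topk(5))  # the five most likely ImageNet classes and their scores
```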
Some of the biggest technology companies, including IBM, Amazon, Apple, Microsoft, Google and Alibaba, have also created Artificial Intelligence-as-a-Service (AIaaS) solutions that make AI models available to developers. What’s available varies, but most provide models for image recognition/computer vision, speech or video-related services. Google also offers translation APIs.
The availability of these AIaaS models means developers can begin to build innovative solutions on the back of those models, accelerating development and bringing significant project cost savings.
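To give a flavour of what that looks like in practice, here’s a minimal sketch that sends an image to one of those hosted models – Google’s Cloud Vision, via its Python client library. The file name is a placeholder, billing and credentials are assumed to be set up, and other providers’ APIs differ in the details:

```python
# A minimal sketch: ask a cloud-hosted AIaaS vision model to label an image.
# Assumes the google-cloud-vision library is installed and credentials are configured.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:                 # hypothetical local image file
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)     # the pretrained model runs in the cloud
for label in response.label_annotations:
    print(label.description, round(label.score, 2))
```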
Only as good as the data
Data is everything. Something every form of AI requires is accurate and objective data. AI is only as good as the intelligence it gets, and flaws in the kind of information provided to these systems (through omission or design) can have a significant impact on the results generated by these intelligent machines.
Ensuring good data requires highly efficient and accurate information gathering and analysis systems. That creates another challenge: roles for AI professionals are being created faster than people with those skills are entering the recruitment market. Alongside this talent shortage, members of the public and government regulators are beginning to recognize the employment and ethical challenges that deploying AI in everyday life may create.
Types of learning models
That’s not to say that every form of AI we interact with is equally smart. Most systems rely on one or more of several different technologies.
It’s important to think about the nature of the questions we are asking AI to resolve.
These questions are complex. We ask AI to predict correlations as precisely as possible from the values it learns – if X is one thing and Y is another, how strongly are the two related? And how can the accuracy of that correlation be improved, to the point of predicting how someone looks based on how they sound?
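Stripped right down, that question looks something like the sketch below, which measures and models the relationship between two invented variables (the data and numbers here are made up purely for illustration):

```python
# A minimal sketch: measure and model the correlation between two invented variables.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)                          # e.g. a voice-derived feature (invented)
y = 2.0 * x + rng.normal(scale=0.5, size=200)     # a second, related measurement (invented)

correlation = np.corrcoef(x, y)[0, 1]             # how strongly X and Y move together
slope, intercept = np.polyfit(x, y, 1)            # a simple model that predicts Y from X

print(f"correlation between X and Y: {correlation:.2f}")
print(f"predicted Y when X is 1.0: {slope * 1.0 + intercept:.2f}")
```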
In general terms, AI is moving from basic pattern-matching into applied deep learning capable of analyzing complex scenarios to deliver solutions. Some terms you’ll encounter:
- Pattern matching: Machines are equipped with a library of patterns from the real world and given appropriate reactions to such patterns. You might see this in an image recognition AI that can tell the difference between pictures of trees and pictures of plants
- Neural networks: These are the AI equivalent of the way neurons connect in the human brain. These systems consist of connected networks of algorithms (neurons) that learn and act together to approximate logical thought. There are many different forms of neural networks. Recurrent neural networks are often used in speech recognition, while convolutional neural networks handle most image recognition
- Deep learning: These are neural networks built from many layers of decision-making machine intelligence, capable of ingesting and being trained on huge amounts of data before working out the most appropriate action. These systems drive most speech recognition and computer vision solutions
- Generative adversarial networks (GANs): At its simplest, a GAN consists of two neural networks (a generator and a discriminator) that compete against each other. The idea is that one network creates data it thinks might be seen as authentic by a human, while the other criticizes it until the GAN comes up with something that both more or less agree on. You might feed this machine images of human faces, and it will attempt to create fake faces – the two adversarial networks will argue until they are convinced they have created something that seems real (see the sketch after this list)
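Here’s a minimal PyTorch sketch of that adversarial set-up – two tiny neural networks of the kind described above, with layer sizes chosen arbitrarily for illustration, not a production GAN:

```python
# A minimal sketch of a GAN, assuming PyTorch: two small neural networks compete.
# Layer sizes and data here are arbitrary choices for illustration only.
import torch
from torch import nn

# Generator: turns random noise into a fake "sample" (think of a flattened image).
generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 784), nn.Tanh(),
)

# Discriminator: scores how likely a sample is to be real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

noise = torch.randn(8, 16)                     # a batch of random starting points
fake_samples = generator(noise)                # the generator's attempt at "real" data
realism_scores = discriminator(fake_samples)   # the discriminator's verdict, between 0 and 1

# During training, the generator is rewarded for fooling the discriminator and the
# discriminator for spotting fakes, so the two networks push each other to improve.
print(realism_scores.squeeze())
```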
These are not the only types of machine learning models. Investment of time and money in the sector is intense, so multiple approaches are being explored.
One such approach is a recurrent neural network architecture called long short-term memory (LSTM), which supports near-instant tasks such as on-demand language translation.
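From a developer’s point of view, that looks roughly like the sketch below – a PyTorch LSTM reading a short made-up sequence; the dimensions are arbitrary, and a real translation system wraps this in far more machinery:

```python
# A minimal sketch, assuming PyTorch: an LSTM reads a sequence one step at a time
# and keeps a memory (hidden state) that carries context between steps.
import torch
from torch import nn

lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

# A made-up batch of 4 sequences, each 10 steps long with 32 features per step
# (in translation, each step would be an embedded word).
sequence = torch.randn(4, 10, 32)

outputs, (hidden, cell) = lstm(sequence)
print(outputs.shape)   # torch.Size([4, 10, 64]) – one output per step
print(hidden.shape)    # torch.Size([1, 4, 64]) – the final memory for each sequence
```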
Building on Darwin’s theory of natural selection, evolutionary computation (also known as neuro-evolution) is a little GAN-like: algorithms act like genes and mutate and combine randomly as they try to evolve to deliver the best solution. Researchers think this approach may help build AI models that can train and build new models on their own. Uber AI Labs is exploring this sector.
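In toy form, the evolutionary idea can be sketched in a few lines of Python – candidate solutions mutate, the fittest survive, and the population slowly homes in on an answer. Everything here (the target, population size, mutation rate) is invented for illustration:

```python
# A toy sketch of evolutionary computation: candidates mutate and the fittest survive.
import random

TARGET = 42.0                                        # the "best solution" we want to evolve
population = [random.uniform(-100, 100) for _ in range(20)]

def fitness(candidate):
    return -abs(candidate - TARGET)                  # closer to the target = fitter

for generation in range(100):
    # Keep the fittest half, then refill the population with mutated copies of the survivors.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    population = survivors + [s + random.gauss(0, 1.0) for s in survivors]

best = max(population, key=fitness)
print(f"best candidate after evolving: {best:.2f}")  # should end up close to 42.0
```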
As you explore different forms of machine learning technologies, you’ll encounter multiple additional terms: deep neural networks (DNN), stochastic gradient descent (SGD), genetic algorithms, reinforcement learning (RL), deep reinforcement learning (deep RL), and more.
Hardware also matters
As you can imagine, all these computations take time. While models may be trained on bigger computers, advances in processor technology mean they can run on smaller devices, including smartphones.
The evolution of new processor and GPU technologies has helped speed up this development. Google, Microsoft and Apple all offer chips to handle machine learning functions, such as Google’s Tensor Processing Unit and Apple’s Neural Engine.
Big players, such as IBM Watson, Amazon Web Services and Microsoft Azure, also offer on-demand access to GPU arrays for use in training and creating machine learning models.
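Whether the hardware is a rented cloud GPU array or a laptop, training code typically just asks for an accelerator and falls back to the CPU if none is found – a minimal sketch, assuming PyTorch, with a toy model and made-up input:

```python
# A minimal sketch, assuming PyTorch: pick a GPU if one is available, otherwise the CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(128, 10).to(device)     # toy model, for illustration only
batch = torch.randn(32, 128, device=device)     # hypothetical input batch
print(f"running on: {device}, output shape: {model(batch).shape}")
```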
These cloud services also include storage platforms. Google, Apple and others all offer APIs and frameworks that developers can use when creating or deploying AI – CoreML or Cloud AutoML, for example.
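One illustrative route from a trained model to an on-device app – an assumption for the sake of example, not the only path – is converting it with Apple’s coremltools so it can be shipped inside an iOS app and run through CoreML:

```python
# A minimal sketch, assuming coremltools and PyTorch: convert a trained model
# so it can be bundled into an app and run on-device through CoreML.
import torch
import coremltools as ct
from torchvision import models

model = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()   # stand-in trained model
example_input = torch.rand(1, 3, 224, 224)                    # the input shape the model expects
traced = torch.jit.trace(model, example_input)                # freeze the model for conversion

mlmodel = ct.convert(traced,
                     inputs=[ct.TensorType(shape=example_input.shape)],
                     convert_to="mlprogram")
mlmodel.save("Classifier.mlpackage")                          # ready to drop into an Xcode project
```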
Rapidly evolving processors, GPUs and operating systems, combined with fast-paced machine learning developments, mean the AI systems we use this year will inevitably be superseded by those we see next year.
Ultimately it comes down to how rapidly machines can process the complex data sets we provide to them and how effectively they can learn from the vast data collections we train them with.
Gartner believes the market for AI chips will grow annually by 52%, from $4.27 billion in 2018 to $34 billion by 2023.
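As a quick sanity check of that forecast (using only the figures quoted above), 52% annual growth compounds from $4.27 billion to roughly $34 billion over five years:

```python
# Quick arithmetic check of the quoted forecast: 52% annual growth, 2018 to 2023.
market = 4.27                    # $ billion in 2018, per the Gartner figure above
for year in range(2019, 2024):
    market *= 1.52               # grow by 52% each year
print(f"implied 2023 market: ${market:.1f} billion")   # roughly $34.6 billion
```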
This is the second blog in a four-part series about how AI works, what data it needs and what happens when AI goes wrong. The other articles are: Everything you always wanted to know about AI (but were afraid to ask), The secret life of algorithms, and From supercomputers to smartphones: where is AI today?
Jon Evans is a highly experienced technology journalist and editor. He has been writing for a living since 1994. These days you might read his regular Computerworld AppleHolic blog and opinion columns. Jon is also technology editor for men's interest magazine, Calibre Quarterly, and news editor for MacFormat magazine, the biggest UK Mac title. He's really interested in the impact of technology on the creative spark at the heart of the human experience. In 2010 he won an American Society of Business Publication Editors (Azbee) Award for his work at Computerworld.