AI, it’s smarter than you
By Steve Burrows, 02 November 2016
When I were a lad, AI stood for artificial insemination; when I was eleven years old we were taught about the AI of cattle in graphic detail - I can’t remember in which subject the topic came up - knowledge which was received with many juvenile blushes. As I recall we were taught about AI before we learned about human reproduction. Today, however, AI has a very different meaning for most of us.
Artificial Intelligence (“AI”): according to some it will be our saviour, and to others it is the greatest threat mankind has faced. We’ll come back to that, but first off - what is it? There are many definitions of AI, which is not surprising because there are many perspectives on intelligence, but the common theme is that AI is a man-made mechanism that can perceive its environment, make choices and learn to improve for a specific purpose.
For example, a simple AI mechanism might see the shapes of letters (perception), decide which letter a shape represents (choice), and learn from its errors to improve its character-recognition accuracy. Pretty benign stuff. Combine our Character Recognition AI with a Language AI and a Translation AI and we can build a mechanism which can read handwriting in one language on a sheet of paper and translate it into another - a common task which is beyond the education of many humans. Every aspect of this process is based upon decisions: taking multiple inputs, such as the shapes which make up a letter of the alphabet, in order to decide which letter it is; or the possible meanings of individual words in a sentence, in order to determine the context of the sentence and therefore the specific meaning of each word.
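For the technically curious, here is a deliberately tiny Python sketch of the perception and choice steps: a two-letter alphabet of four-pixel shapes, recognised by picking the nearest match. Everything in it - the alphabet, the shapes, the nearest-match rule - is invented for illustration; real character recognition uses trained networks of the kind described next, and the learning step is sketched a little further on.

```python
# A toy alphabet: each "letter" is a 2x2 grid of pixels, flattened to four values.
KNOWN_LETTERS = {
    "I": (1, 0,
          1, 0),  # a vertical bar
    "-": (0, 0,
          1, 1),  # a horizontal bar
}

def recognise(pixels):
    """Perception and choice: pick the known letter with the fewest differing pixels."""
    return min(KNOWN_LETTERS, key=lambda letter:
               sum(p != q for p, q in zip(pixels, KNOWN_LETTERS[letter])))

print(recognise((1, 0, 0, 0)))  # a smudged vertical bar -> "I"
```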
In many modern AI mechanisms the decision-making engine is based upon our very limited understanding of how a brain works, which is to say it is constructed as a neural network. Basically a neural network is a mechanism whereby multiple decision inputs, or factors, are received by multiple nodes (“neurons”). Each neuron makes a decision about the simple input factor it has received, and these decisions are then passed as inputs to further neurons, which combine them to produce a logical output. The layers and combinations of neurons form a decision-making network which is able to reduce many simple decision factors into fewer and fewer more complex factors until there is only one output - the decision. Each input at each stage of reduction has a weighting according to its importance - so, for example, when making a domestic decision, pleasing one’s spouse might be weighted as more important than pleasing one’s dog. The neural network is, crudely, a sophisticated filter. Join several such filters together, each processing the outputs of its predecessors, and we can create decision-making mechanisms capable of handling very complex problems with many inputs of differing significance.
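To make the filter idea concrete, here is a minimal Python sketch of a two-layer network deciding our domestic question. The factors, weightings and pub scenario are all invented for illustration; in a real network there would be thousands of neurons, and the weightings would be learned rather than hand-picked.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One neuron: weight each input by its importance, sum, then squash to a 0..1 'vote'."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashing function

# The domestic decision: each factor scored 0..1.
spouse_approves = 0.2
dog_approves = 0.9
friends_are_going = 0.8

# First layer: reduce three simple factors to two intermediate judgements.
# Pleasing one's spouse is weighted far more heavily than pleasing the dog.
household_harmony = neuron([spouse_approves, dog_approves], weights=[3.0, 0.5])
social_pull = neuron([friends_are_going], weights=[2.0])

# Output layer: one neuron combines the intermediate judgements into the decision.
go_to_the_pub = neuron([household_harmony, social_pull], weights=[2.5, 1.5], bias=-3.0)
print(f"confidence that I should go: {go_to_the_pub:.2f}")
```

Each layer filters many inputs down to fewer, more abstract ones; stack enough of them and you have the deep networks described next.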
Neural networks are not the only tool used in Artificial Intelligence, but they are one of the most common and powerful mechanisms for reducing complex problems with many input considerations down to a simple decision. A big multi-layered (“deep learning”) neural network can rapidly process decisions which people would find difficult or slow because of the complexity of the input factors, but most importantly it can learn to improve itself by iteratively adjusting the weightings of its inputs to produce more reliable decisions.
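And here, continuing the toy, is what that learning looks like: a single neuron taught to recognise one crude two-pixel “letter” by repeatedly nudging its weightings in whatever direction reduces its error. This is a drastic simplification of the backpropagation used in real deep-learning systems - the data and learning rate are invented for illustration - but the principle is the same.

```python
import math

def neuron(inputs, weights):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 / (1 + math.exp(-total))

def train(samples, weights, rate=0.5, epochs=2000):
    """Nudge each weighting towards whatever reduces the decision error."""
    for _ in range(epochs):
        for inputs, target in samples:
            prediction = neuron(inputs, weights)   # decide
            error = target - prediction            # how wrong was it?
            # each weight takes blame in proportion to its input's contribution
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
    return weights

# Two-pixel "letters"; the third input is a constant 1 acting as a bias.
# The neuron should answer 1 only for the pattern (first pixel on, second off).
samples = [([1, 0, 1], 1), ([0, 1, 1], 0), ([1, 1, 1], 0), ([0, 0, 1], 0)]
learned = train(samples, weights=[0.0, 0.0, 0.0])
print([round(neuron(inputs, learned)) for inputs, _ in samples])  # -> [1, 0, 0, 0]
```

After training, the neuron answers correctly for all four patterns - and nobody wrote a rule telling it which pixels mattered; it found the weightings itself, which is the point of the next paragraph.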
This is the really important bit: because the neural network can learn to improve its problem solving, it can produce solutions which have not been invented by its programmers.
Artificial intelligence, then, can make complex decisions faster than people, and can “invent” new, more reliable solutions to problems. A couple of very recent examples:
Researchers at the Google Brain project set out to see whether they could create a pair of neural networks which would develop encryption for the communications between them that another neural network could not decrypt. It took the networks some 15,000 iterations to get there, but eventually the cooperating pair developed encryption between them which the adversarial network could not break. The researchers do not understand the cryptography being used between the networks; it was invented by the networks themselves.
Researchers at the world-renowned Massachusetts Institute of Technology (MIT) have recently produced an artificial intelligence system built from two neural networks. One network generates faces: it assembles the common components of faces - shape, eyes, mouth, nose and so on - to form new “human” faces which may never have been created by nature. In essence it is an artist, creating new original images. The other network applies scariness to images by extracting the stylistic elements which seem to characterise horror and applying them to otherwise ordinary images, again creating new original artworks. You can pop over to MIT’s “Nightmare Machine” website and judge the results for yourself.
These two examples demonstrate that artificial intelligence is capable of creating original work, which poses a new conundrum for lawyers - who is the creator of an invention or work produced by an artificial intelligence? It could be the author of the AI system, or the company that has bought the software and trained it for a particular application, or potentially even the computer program itself - the law may yet have to recognise artificial intelligence systems as entities in their own right. All very sci-fi, but the reality is here - if MIT’s face-creating AI produces a new Mona Lisa, who will own the rights to the image? Similarly, if an artificial intelligence makes a decision which affects someone, such as a medical diagnosis, who will be liable for that decision and its consequences?
Artificial Intelligence is not new - the first concept for the design of an AI computer system was developed in 1956 - but with the low cost and high power of modern computers, AI systems are now finding real-world applications in many activities, including translating languages, predicting court judgements (they’re apparently pretty good at it), and driving cars. It will not be long before they become common in business: deciding which services to offer a customer, making investment choices, designing improvements to products, and myriad other complex tasks which currently employ intelligent people.
Aside from the legal issues that are emerging with the practical application of AI, we business people will have to start deciding how we employ it in our enterprises. AI will bring competitive advantage; we can expect to see AI systems becoming relevant to our businesses within the next five years, and from then on they will be yet another element in the technological arms race for competitiveness.
As I said early on, some people claim that AI may be the greatest threat mankind has ever faced. These doom-mongers are not Luddite cranks; they include global leaders such as the physicist Stephen Hawking, the entrepreneur Elon Musk, and numerous other luminaries. AI is a threat in two ways: one is that it can make human intelligence redundant, which would be a major social disruptor; the other is that we may not be able to control the decisions AI chooses to make - ultimately it may evolve to make decisions which do not suit humanity. Both of these threats are a good few years away, but wholesale job displacement is likely within the lifetimes of those of us who have not yet retired, and a certainty for those in the early stages of their careers.
In the meantime AI systems will continue to develop: writing original texts, creating original images, inventing original solutions to problems and becoming better at driving cars. More immediately, I suspect that one of the first mass-market applications of real artificial intelligence will be cyber-war, with the development of AI systems both to penetrate computer systems and to defend against cyber-attacks. Earlier this year the US Defense Advanced Research Projects Agency (DARPA) ran its first Cyber Grand Challenge, a competition for autonomous cyber-attack systems that detect weaknesses in software and autonomous cyber-defence systems that automatically fix vulnerabilities. Also this year the brains at MIT, who gave us the Nightmare Machine referenced earlier, developed a new artificial intelligence system, named AI2, to detect cyber-attacks in progress. First results showed AI2 to be three times more effective than the automated cyber-attack detection systems on sale today.