The problem with defining AI

Tim Olsen, Intelligent Automation Director at Hays Technology

We probably all think we know what AI is. We have images of chess-playing robots, perhaps even Arnold Schwarzenegger’s Terminator at the most extreme. But think again: where do you draw the line between established technologies and AI?

Perhaps we assume it is any technology that allows computers to think like humans, to interact with their environment, to understand, to predict, to make decisions.

How to define AI?

The problem is that, once a computer succeeds at one of these activities, the achievement becomes normalised by the technical community. AI is perceived as the ‘magic’ that is always just out of reach, so when the audience gets to see behind the wizard’s curtain and understand the code and the rules, the illusion is broken, and it becomes just another application in the tech space.

At one stage, chatbots were considered AI, but these are now deeply entrenched and assimilated into the domain, so we look to the next stage, conversational AI – and when that also becomes commonplace, we will move the goalposts again.

In her book Machines Who Think, Pamela McCorduck wrote: “It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something – play good checkers, solve simple but relatively informal problems – there was a chorus of critics to say, ‘that's not thinking’.”

Larry Tesler recognised this when he coined what is popularly known as Tesler’s Theorem: “Artificial intelligence is whatever hasn’t been done yet.”

New AI applications tend to be renamed and classified once they are delivered, and no longer fall under the AI moniker. Is ‘real’ AI an impossible target, or one that is destined always to remain tantalisingly out of reach?

Artificial intelligence, or just algorithms?

The fundamental problem is that we don’t agree on what ‘intelligence’ actually is, and the closer we place ‘AI’ applications under the microscope, the more we realise that they are not truly intelligent at all, but ‘just’ algorithms.

AI was originally defined as machines displaying ‘human-like’ cognitive skills such as learning or problem solving. A more modern definition is based on rationality: the ability to ‘reason’, which implies that one’s beliefs are formed from established facts and one’s actions conform to those beliefs. This suggests a broader scope than, say, ‘computer vision’ alone – an ability to take in information, analyse it, judge it, and act upon it.

Perhaps autonomous driving is one such end-to-end application of AI – yes, at the most micro level it is still just lines of code, but the breadth of cognition and action it combines is unrivalled in the domain.

No doubt when we’re all sitting in queues of self-driving Teslas we will move the goalposts once more.


Author

Tim Olsen
Intelligent Automation Director, Hays Technology

Tim has worked in digital transformation for 20 years, developing solutions to improve user journeys and experience for blue-chip clients. More recently he grew the UK’s largest RPA CoE, going on to specialise in helping organisations overcome their barriers to scaling automation. He is a thought leader and evangelist for Intelligent Automation, and leads the IA Consulting specialism for Hays.
