Why we should embrace AI, not fear it
Those who are scared of a tech revolution they don’t understand need to get clued up
The media has not got a clue about artificial intelligence (AI). Or technology. 'Robots are coming for your job' is a popular cry, but the next day it's fears about AI starting World War III.
Not only do robots and AI have very little to do with each other, but AI is at a very early stage. What's more, it can be split into several separate technologies.
The masses are being misled into fearing automation and a nebulous super-intelligence, but it's those with a working knowledge of how AI works – and how it can be exploited – who will be best prepared for the future of work.
What is AI?
There is no precise answer to this question, but it's got nothing to do with robot overlords. AI is a field of computer science that examines whether we can teach a computer to 'think'.
The phrase dates from 1956, when it was coined by American computer scientist John McCarthy, six years after English mathematician Alan Turing published his 1950 paper 'Computing Machinery and Intelligence'.
AI is generally split into various subsets that try to emulate specific things that humans do. Speech recognition mimics hearing, natural language processing mimics writing and speaking, image recognition and face scanning mimic sight, and machine learning mimics thinking.
That’s a lot of different, often unrelated technologies; AI is an umbrella term, and certainly not a general purpose technology.
Why is AI so hyped up?
Research into AI is currently riding the wave of increased computing power and big data. Together they make AI both possible and imperative: as a society we now produce far too much data to process ourselves, let alone draw insight from. Collected data is growing by 40% a year, and most of it is going to waste.
The existence of all this data also means that AI software has enough information not only to work with, but to learn from. Is this AI’s big moment? Venture capitalists and technology giants such as Amazon, Google, Facebook, Microsoft and Apple think so, and are investing heavily in research.
It’s these companies that have unimaginably huge data sets collected in the last few decades, and a vested interest in automating tasks on that data. Together they’re becoming the arbiters of AI know-how, so it’s AI techniques developed by Google et al. that are being used by scientists to trawl through data to get new insights.
There’s about to be an AI-powered knowledge explosion.
Supervised machine learning
Machine learning is the practice of computer scientists training a computer to do something. It's about automating repetitive tasks: essentially, training a computer to recognize patterns and categorize data.
The classic example is image recognition, or 'AI vision': give a computer a large number of images containing labeled objects, and it can learn to identify those objects automatically. To do this, the computer builds what AI researchers call a neural network – a web of virtual connections loosely modeled on a basic process in the human brain.
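To make that concrete, here's a minimal sketch of supervised learning in Python using the scikit-learn library. The dataset (scanned handwritten digits) and the tiny network are illustrative choices, not what Google or anyone else actually uses; the point is simply that labeled examples go in and a trained classifier comes out.

```python
# A minimal supervised-learning sketch: labeled images in, a trained classifier out.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Each image of a handwritten digit comes with a label (0-9) supplied by humans.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small neural network learns to map pixel values to the correct label.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Accuracy on images the network has never seen before.
print("accuracy:", model.score(X_test, y_test))
```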
However, creating a neural network like this takes a lot of human labor, and a lot of processing power. Google AI and the University of Texas recently used the technique on a labeled dataset of signals from the Kepler space telescope and discovered two exoplanets that astronomers had previously missed.
It's also being used to identify cracks in reactors, and even to help engineers at the UK's Joint European Torus facility in their efforts to harness nuclear fusion.
This is supervised machine learning, and while the techniques are improving all the time, its usefulness at predicting patterns is hamstrung by the quality and quantity of the labeled data it's fed.
Unsupervised machine learning
What if a computer system could self-teach, building algorithms guided not by humans, but by data?
Unsupervised machine learning (also called 'true AI' by some) is really what AI researchers want to achieve. It's where you only have unlabeled data, and you ask the computer to learn things without specifically telling it what the right answers are.
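As a rough sketch of the idea – again in Python with scikit-learn, and again using a toy dataset rather than anything a research lab would actually use – here is unsupervised learning in code: the same handwritten-digit images as before, but with the labels thrown away, handed to a clustering algorithm that groups similar images together on its own.

```python
# A minimal unsupervised-learning sketch: unlabeled data in, discovered groupings out.
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

# Load the digit images but deliberately ignore the labels.
digits = load_digits()
X = digits.data  # pixel values only; no 'right answers' are provided

# Ask the algorithm to organize the images into 10 clusters by similarity.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
kmeans.fit(X)

# Each image now belongs to a cluster the computer came up with itself;
# similar digits end up grouped together without it ever being told what a digit is.
print(kmeans.labels_[:20])
```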
For example, Google developed an image recognition neural network and then gave it YouTube for a week to see if it could recognize common objects. It found cats – even though it didn't know what a cat was. For AI, that’s impressive, but it also shows the current limits of what AI is capable of.
However, that same neural network – now called DeepVariant – is being used to accurately identify mutations in DNA sequences, which are presented to the computer as images. The AI is essentially spotting the mistakes made by DNA-sequencing machines; it's gaining insight from data where there would have been none. AI is also being used to spot fake paintings.
This is what AI is being used for: to make computers better at their job.
Neural networks
This is just one of many machine learning techniques. Neural networks mimic what happens in the human brain, but don't think for a moment that AI is on the verge of replicating humans. A neural network in AI can handle hundreds, thousands, and sometimes millions of inputs, with data flowing one way through the network.
It's clever stuff, but the human brain has billions of interconnected neurons; we are all several orders of magnitude more complex than AI. So when you hear the phrase 'deep learning', keep it in context; true computer intelligence and artificial general intelligence (AGI) are some way away.
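For the curious, here's a toy sketch of that one-way flow of data, written in Python with made-up random weights (a real network would be trained, and far larger): a set of inputs passes through two layers of weighted connections to produce an output, which is all a basic feedforward neural network really is.

```python
import numpy as np

# A toy feedforward network: data flows one way, from inputs to hidden layer to output.
rng = np.random.default_rng(0)

inputs = rng.random(100)            # 100 input values (a real network may take millions)
w_hidden = rng.random((100, 16))    # weights connecting the inputs to 16 hidden units
w_output = rng.random((16, 1))      # weights connecting the hidden units to 1 output

hidden = np.tanh(inputs @ w_hidden)  # each hidden unit combines all the inputs, then 'fires'
output = np.tanh(hidden @ w_output)  # the output combines all the hidden units

print(output)  # training would adjust the weights until this output is actually useful
```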
Will AI 'take our jobs'?
There is a lot of fear about AI taking people's jobs. It's made worse by the fact that many economies are experiencing slow growth and job insecurity. AI is about making computers more capable, which will have a significant impact on how society runs. A lot of routine work will be automated, reducing administrative workload.
It means people will be able to concentrate on higher-value work without the soul-destroying report-writing duties. It means scientists will make more discoveries, doctors will have access to more cutting-edge knowledge and save more lives, and police will be able to do more policing.
AI is about boosting productivity, and it may spawn a thousand startups that carve out new markets and industries.
The future for AI
AI is a way for computer scientists to get computers to catch up with the reality of big data, and to have them perform the tedious manual tasks that are now far beyond us, given the deluge of data we're surrounded by.
It’s a basket of techniques, not a general purpose technology, and it's not about to automate everything.
Although it will have an effect on many industries, all businesses will need a persuasive business case for AI – most probably to solve a really specific, narrow problem – as well as the services of data scientists who specialize in AI, and a lot of well-ordered data for the AI to learn from.
Will AI change everything? Perhaps, or maybe the hype – and the funding for research – will dry up as researchers hit a wall. After all, AI is already on the verge of becoming a bland marketing term to sell phones. Even if these early days of AI do prove a significant milestone for humanity, it's probably going to be a slow-burner.
However, what we do know for sure is that having an understanding of AI is going to become more important for more professions. For all of us living through the data explosion era, AI is the missing piece of the jigsaw.