What is AI capable of, really?

A profile of a human brain against a digital background.
(Image credit: Pixabay)

The possibility of artificial intelligence (AI) has captured our collective imagination ever since the concept of computers entered public consciousness. Now, however, interest is reaching new highs as some of the incredible feats promised by its advocates are starting to be realized. 

OpenAI’s ChatGPT has been one such platform gaining significant attention in recent months, but there are plenty of other AI platforms being developed for all kinds of purposes, from predicting financial markets to driving cars and even creating works of art.

AI is often seen as the next frontier in technology, and a new leap forward for humanity - but how intelligent is it really, and how much can it actually help to improve our lives and advance human progress?

Giving us answers

Since its launch in November 2022, the ChatGPT chatbot has generated numerous stories about its outputs, some escalating in absurdity. Some are in awe of its abilities, while others are more skeptical - but both reactions are understandable, given its ability to explain Einstein’s theory of relativity through a rhyming poem on the one hand, and its apparent IQ score of just 83 on the other.

Owned and operated by OpenAI, a company co-founded by Elon Musk and backed by the likes of Microsoft, ChatGPT is built on a Generative Pre-trained Transformer (GPT), a Large Language Model (LLM) trained on vast amounts of text to respond to all sorts of queries and instructions in a way akin to human intelligence.

Fundamental to the inner workings of ChatGPT is its training technique, reinforcement learning from human feedback (RLHF). In essence, the model is fine-tuned based on how its responses are rated, learning and adjusting from the feedback it receives to optimize results. However, even with this impressive ability, issues can arise. 
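The feedback loop described above can be sketched in miniature. The toy class below is purely illustrative - OpenAI's actual training pipeline involves neural networks and reward models, not a lookup table - but it captures the core idea: ratings nudge the system toward preferred responses.

```python
# Toy illustration of learning from feedback: the "model" keeps a score
# for each candidate reply and nudges it up or down based on ratings.
# A drastically simplified sketch, not OpenAI's actual RLHF pipeline.

class ToyFeedbackLearner:
    def __init__(self, candidates):
        # Start with no preference between candidate replies.
        self.scores = {c: 0.0 for c in candidates}

    def respond(self):
        # Prefer the highest-scoring reply seen so far.
        return max(self.scores, key=self.scores.get)

    def feedback(self, reply, rating, lr=0.5):
        # Positive ratings reinforce a reply; negative ones suppress it.
        self.scores[reply] += lr * rating

learner = ToyFeedbackLearner(["helpful answer", "rude answer"])
learner.feedback("rude answer", -1)
learner.feedback("helpful answer", +1)
print(learner.respond())  # "helpful answer"
```

Even in this tiny sketch, the limitation Norvig describes is visible: the system only ever optimizes toward the feedback it happens to receive, so gaps in that feedback become blind spots in its behavior.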

As AI expert and former Google research director Peter Norvig points out, this can lead AI to make new and unpredictable mistakes, as it comes up with its own solutions independent of human guidance. Such errors still need to be corrected, with the design tweaked continually to rectify them.

We have already seen countless mistakes from ChatGPT, such as erroneous coding advice. There will always be blind spots - things that programmers hadn’t accounted for, and things the AI will miss in trying to teach itself. And the more it adapts, the more people will find new ways to trip it up.

ChatGPT software and logo on phone screen

(Image credit: Shutterstock / Ascannio)

What’s more, as many commentators have already noted, the danger of such errors is that people are more inclined to believe answers when they come from an AI, especially given the exalted press some models have received. The seductiveness of chatbots like ChatGPT - their human-like responses delivered with sheer confidence, regardless of factual accuracy - is a genuine worry.

Even when ChatGPT gets things right, and its error rate comes down to acceptable levels, what exactly is it doing for us? It is ultimately a repository of knowledge, regurgitating whatever is fed into it, admittedly in sometimes surprising and impressive ways. But humans decide what goes into it. It is somewhat circular - it can’t tell us more than we already knew and fed into it to start with; it just repeats what we ‘told’ it to learn.

As Google explained after it fired one of its engineers for believing LaMDA was sentient, the AI was only telling him things he wanted to hear, based on the millions of books it was trained on. It was simply predicting the most likely words to respond to his prompt with.
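That notion of "predicting the most likely words" can be made concrete with a toy model. Real LLMs use neural networks over subword tokens and enormous corpora, but the sketch below - which simply counts which word follows which in a tiny training text and emits the most frequent successor - illustrates the same underlying principle: continuation by statistical likelihood, not understanding.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in the
# training text, then always emit the most frequent successor.
# Illustrative only - real LLMs are vastly more sophisticated.

training_text = "the cat sat on the mat the cat ate the fish".split()

# Map each word to a tally of the words observed immediately after it.
successors = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word` in the training data.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - seen twice after "the"
```

Notice that the predictor can only ever reproduce patterns present in its training data - which is precisely the circularity described above.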

If we’re looking for an AI that can find us answers to questions beyond our scope of understanding, then ChatGPT isn’t there yet. No matter how compelling the illusion, it doesn’t understand anything in the sense that a human does - it can’t grasp the deeper meanings and implications of what it knows, or generate new ideas and insights from that knowledge. That remains a human trait.

Predicting the future


(Image credit: Shutterstock / Rawpixel)

A natural place for AI to flex its muscles is the field of predictive modeling and simulation. The advantages are clear: if AI can intelligently sort and analyze masses of data, learning and adapting on the fly as new data comes in to give probable outcomes, then it could save a great deal of human time and effort, and perhaps achieve what humans never could.

Axyon AI is a tool designed for the finance industry that utilizes machine learning to predict how financial assets will perform, and to create strategies for investors to follow. 

Of course, the number of factors and the volume of data associated with markets are seemingly infinite, as Daniele Grassi, the co-founder and CEO of Axyon AI, concedes to TechRadar Pro. He believes, though, that Axyon AI can get around this by “looking at asset performance prediction from a learning perspective rather than a knowledge one.” 

Another issue related to financial data, aside from its sheer scale, is what is known in the industry as its ‘point-in-time-ness’. Grassi explains: 

“Reinstating data that refers to past events, such as fixing an incorrect fundamental indicator for a certain stock in 2012 and ‘pretending’ that the correct value was known, can lead to false discoveries.”
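The problem Grassi describes - sometimes called lookahead bias - can be illustrated with a toy point-in-time lookup. The example below is a hypothetical sketch, not Axyon AI's actual data model: each indicator value carries the date on which it became known, so a backtest "as of" some date only sees the original (wrong) figure, not its later correction.

```python
from datetime import date

# Toy point-in-time data handling. A 2012 indicator is restated in 2014;
# a backtest run "as of" 2013 must see the original figure, because the
# correction wasn't known yet. (Illustrative sketch only.)

# Each entry is (value, date_on_which_it_became_known).
history = [
    (1.8, date(2012, 3, 1)),   # originally reported value
    (2.3, date(2014, 6, 1)),   # corrected value, known only from 2014
]

def as_of(history, when):
    # Return the latest value that was actually known on `when`.
    known = [(v, d) for v, d in history if d <= when]
    return max(known, key=lambda vd: vd[1])[0] if known else None

print(as_of(history, date(2013, 1, 1)))  # 1.8 - correction not yet known
print(as_of(history, date(2015, 1, 1)))  # 2.3
```

Using the corrected 2.3 in a 2013-era backtest would make the model look more prescient than it could ever have been in practice - the "false discoveries" Grassi warns about.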

The auto-machine learning technology in Axyon AI, Grassi claims, can adapt to changing market conditions. However, one side effect of the adaptability of AI systems is their sensitivity. Grassi gives the example of big changes in world events that could lead to “spectacular failures” if the AI modeling isn’t sophisticated enough to cope with them, echoing Norvig’s aforementioned caution. 

And despite the issues with feeding data, Grassi still emphasizes the importance of having the right kind: “A large contribution to the recent success of AI in many fields is the availability of abundant, high-quality, cost-effective, diverse and representative data”.

Summing up the scope and abilities of predictive technologies, Grassi states that “AI is not a silver bullet for all problems involving predictions: its success heavily depends on the nuances of the problem being modeled and the size and quality of the available data describing that problem.”

He also notes that AI can be very accurate when predicting patterns in large datasets with complex relationships, but where there is a lack of data, or explanation and understanding are required, “traditional statistical approaches might be preferable.”

Driving our cars

Autonomous driving

(Image credit: Shutterstock)

Another popular application for AI systems is self-driving cars. Big tech is keen to get involved in this industry, and many modern electric cars have systems that offer a degree of driving automation, such as Tesla’s Autopilot. 

One of the main advantages is their purported increase in safety over human drivers. However, drivers - in the US at least - are already pretty safe, with roughly one fatality per 100 million miles driven on average. We can only wait until self-driving cars become as prevalent as manually operated vehicles to see if they can beat that figure.

Even putting aside how safe they supposedly are, there are various social issues surrounding driverless vehicles that are potential obstacles to them becoming the norm.

Having a mix of normal and driverless cars together on the road will be tricky from a legal perspective. For example, if there is an accident between them, who is to blame? Will the human be at fault as they aren’t programmed to follow traffic laws unerringly like an AI? Or will the AI be blamed for not being intelligent enough to predict the driver’s behavior and avoid the collision? 

And when the inevitable lawsuits happen, it won’t be human drivers being sued, but the manufacturers - so perhaps the payouts will be costly enough to empty even the cavernous pockets of the likes of Google and Apple. No wonder the latter has postponed its plans to enter the market.

When it comes to driverless trains, at least they are fixed to preordained tracks that aren’t shared with the public. It is much easier and more practical for an AI to start and stop at the right times, with far fewer variables and unpredictable events to contend with. But driverless cars on public roads throw up many more sticky societal and human issues that are more than just computational problems.  

Where we are and where we’re going

machine learning and AI

(Image credit: Shutterstock.com / Jirsak)

To what extent AI will be able to improve our lives in a substantial manner is anyone's guess. Perhaps it will take truly astonishing leaps in certain areas, but remain resolutely hopeless in others. Perhaps there will be serious social and cultural barriers, and maybe other unforeseen obstacles that will get in the way of its development.

Regardless, there will always be limitations, and humans will still be better equipped for certain tasks than AI will. Great human achievements result from innovation, ingenuity, instinct, intuition and creativity, not just the kind of left-brain thinking that AI is capable of. 

Since we are still very much in the dark about how these faculties work, trying to instill them in an artificial intelligence seems an impossibility, and perhaps always will be. So while AI can be an exceptionally useful tool, the prospect of it surpassing us altogether remains some way off.

At either end of AI, and oftentimes in between, there will be human beings, so the same human errors and mistakes will always be there, while the human faculties key to understanding and innovation that are so vital to progress look set to remain in our minds alone.

Lewis Maddison
Staff Writer

Lewis Maddison is a Staff Writer at TechRadar Pro. His area of expertise is online security and protection, which includes tools and software such as password managers. 

His coverage also focuses on the usage habits of technology in both personal and professional settings - particularly its relation to social and cultural issues - and revels in uncovering stories that might not otherwise see the light of day.

He has a BA in Philosophy from the University of London, with a year spent studying abroad in the sunny climes of Malta.