Are we exaggerating AI capabilities?

An abstract representation of artificial intelligence
(Image credit: Shutterstock / vs148)

2023 saw AI tools such as ChatGPT and Bard change the world, although, frankly, a lot of what we’re being fed is exaggerated. The problem is that since generative AI tools burst onto the scene, vendors have been keen to capitalise on the AI hype. As a result, we are currently seeing the term overused, misrepresented, and exaggerated.

Currently, there is some level of intelligence to be found in ‘AI-powered’ technologies, but there is certainly no sentience. Nor is it a new innovation – machine learning has been around since the 1950s, when Alan Turing formulated some of the key concepts associated with the field.

Sentience versus intelligence

To grasp the full potential of AI, it is vital to recognize the difference between ‘intelligence’ and ‘sentience’. Although AI displays a level of intelligence, especially when it comes to data processing and analysis, it utterly lacks self-awareness and consciousness. Undoubtedly, generative AI has ushered in considerable advancements in specific crucial applications, particularly in revolutionizing how we engage with data. No longer restricted to structured datasets, AI can now extract valuable insights from a diverse range of unstructured inputs. On a broader scale, though, the appeal of aligning with AI has led to its overuse in marketing, with companies often overstating what their products can actually do.

Egon Kando

Vice President EMEA Sales at Exabeam.

This can pay off in the medium term, but in the longer term it risks putting off customers and even impeding further AI-linked innovation as disillusionment sets in. That’s why organizations would do well to avoid getting caught up in AI hype. It is instead critical to comprehend what AI genuinely offers and then align this with specific business needs. The mission should be to automate consistent, auditable and repeatable processes rather than blindly pursuing AI solutions without having first evaluated their impact and effectiveness.

The question then remains: what next? Certainly, the application of AI spans a practically endless list of use cases, with one of the most crucial being its integration into the cybersecurity sector. Organizations and their adversaries are actively vying to incorporate AI into their defensive and malicious endeavors. Within this framework, several pertinent issues come into play, providing valuable insights into the present real-world capabilities of AI.

Cybersecurity powered by AI

AI will transform cybersecurity by enabling the processing of huge volumes of data to spot patterns and anomalies that might otherwise evade human identification. In this capacity, the sophisticated predictive analytics of AI assume a pivotal role in augmenting threat detection, facilitating a more proactive and preemptive cybersecurity approach. Through the integration of AI, organizations can markedly improve their security posture by automating intricate and time-consuming security workflows. This not only modernizes security operations but also mitigates the potential for human error, a common vulnerability in cybersecurity. As the cyber environment develops, the inclusion of AI in cybersecurity strategies is not merely advantageous but imperative to stay ahead of complex cyber threats.
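To make the anomaly-spotting idea above concrete, here is a minimal, hypothetical sketch of statistical outlier detection over security event counts – not a product implementation, just an illustration of flagging values that deviate sharply from the norm. It uses a median-based deviation score, which stays robust even when the outlier itself distorts the average; all data and thresholds are invented for the example.

```python
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    """Return indices of values that deviate sharply from the median.

    Uses the median absolute deviation (MAD), which, unlike the mean
    and standard deviation, is not skewed by the outliers themselves.
    """
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        return []  # no variation in the data, nothing to flag
    # 0.6745 scales MAD to be comparable with a standard deviation
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Hypothetical daily failed-login counts; day 6 spikes sharply.
counts = [12, 15, 11, 14, 13, 12, 250, 14]
print(flag_anomalies(counts))  # flags index 6, the 250-login day
```

Real AI-driven detection is, of course, far more sophisticated – learning baselines per user and per asset rather than applying a single global rule – but the principle of automating the search for deviations from normal behavior is the same.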

Securing AI-based systems

AI is becoming an integral component of our digital landscape, which makes its own security a priority. One of the most important elements of securing AI-based systems is identifying the internal vulnerabilities that might be exploited by malicious actors. This includes addressing risks such as the creation of misinformation by AI, which can have wide-ranging and damaging impacts. Moreover, the challenge of model deterioration during successive iterations requires constant oversight to ensure consistent performance and stability.

A further issue is the privacy and security of data used by publicly-accessible Large Language Models (LLMs). This data, vast and varied in scope, is vulnerable to breaches, compromising both the AI system’s integrity and the security of sensitive information. Therefore, effective strategies must be developed to protect AI solutions, concentrating on strong encryption, continuous monitoring and robust access control.

Facing AI-enabled threats

The integration of AI into cyber threats brings more sophisticated challenges which traditional cybersecurity solutions are not equipped to handle. Perhaps the most concerning development is the creation of ‘deepfakes’, which challenge the validity of digital content, leading to a flood of misinformation and ‘fake news’. To address these AI-enabled risks, it is critical to develop equally advanced strategies, such as the implementation of AI-powered tools which can identify and counteract these emerging threats.

Now is the time for companies to invest in AI research to stay one step ahead of emergent AI-enabled threats, including AI literacy training for security professionals, helping them to better anticipate and neutralize these risks. In addition, the cybersecurity community should collaborate to agree on rules and guidelines for accountable AI use.

This is just the starting line and, in the years to come, the hype and exaggeration will be replaced by more concrete progress towards human-adjacent sentience and intelligence. As this evolution continues, companies which concentrate on how AI can genuinely deliver today will be best placed to enjoy the benefits tomorrow.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Egon Kando, Vice President EMEA Sales at Exabeam.