The shift to digital business is driving a huge expansion in the volume of data that organizations produce, use, and manage, and accelerating the velocity at which it moves. While more data usually provides a more complete and holistic perspective for solving business problems, in AI this does not always hold true: when it comes to AI and data, more is not necessarily better.
AI has a peculiar use case for data: 'training' the AI itself. Faced with huge volumes of data streams, businesses can struggle to understand and classify which pieces of data matter and which do not. Large amounts of unfiltered data can be catastrophic for AI algorithms, introducing bias and unpredictable outcomes. The challenge, therefore, lies in proactively identifying risks and threats in such large pools of data and focusing on quality rather than quantity alone. When AI is used in cybersecurity, the problem is amplified: any 'unwanted' data can lead to false negatives, leaving a gaping hole in an organization's cybersecurity risk posture.
With ML models evolving rapidly, businesses are wise to govern the use of data and put strict controls in place, so they fully understand which data sets should be used to train AI systems and what the risk impact is. One input vector being used more often now is behavioral data, with behavioral AI applied for correlation in cybersecurity. A caveat: while it has great potential to protect, it has equal if not greater potential to disrupt if not implemented and governed thoughtfully.
Hitesh Bansal is Country Head for UK and Ireland of Cybersecurity and Risk Services at Wipro.
So, how does behavioral AI work?
Ultimately, AI-driven cyber threats must be countered with equally sophisticated AI-driven defenses. The role AI plays in automating the detection of, and response to, malware in real time is well known. An area now showing greater success in cybersecurity, however, is behavioral AI. Behavioral AI draws on behavioral data from usage sensors, locations, biometrics, demographics and more sophisticated sources to create risk profiles, technically termed 'behaviors'. Since this domain is itself subject to governance in various regulatory forms, the data generated is high fidelity. This data, combined with other data sets, is fed into AI systems to analyze and spot correlations that would not previously have been possible through human analysis alone. It is therefore the combination of data sets that provides actionable insights, rather than the sheer volume of data itself.
Over time, we are seeing more and more clients embracing behavioral AI in a bid to drive more effective decision making and cybersecurity response. Early adoption was in financial services, where nearly all identity authentication on digital banking is multi-vector, enhanced with the 'trust' derived from behavioral AI. For instance, if a banking customer were to try to access their account from a location different from the normally expected one, at an odd hour, the attempt would be challenged in real time. More advanced analytics can even go as far as validating identity by analyzing the pattern or speed of entering credentials, or the movement of the mouse cursor itself.
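The real-time challenge described above can be pictured as a simple risk-scoring rule. The sketch below is purely illustrative, not any vendor's implementation: the feature names, weights, and threshold are all assumptions, and production behavioral AI systems learn far richer profiles than country and hour.

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    country: str   # hypothetical: ISO country code of the login attempt
    hour: int      # hypothetical: hour of the attempt, 0-23

def risk_score(event: LoginEvent, profile: dict) -> float:
    """Score a login attempt against a user's learned behavioral profile."""
    score = 0.0
    if event.country not in profile["usual_countries"]:
        score += 0.6  # unfamiliar location: strong anomaly signal (assumed weight)
    if event.hour not in profile["usual_hours"]:
        score += 0.3  # odd-hour access: weaker anomaly signal (assumed weight)
    return score

def should_challenge(event: LoginEvent, profile: dict, threshold: float = 0.5) -> bool:
    """Trigger a real-time step-up challenge when combined risk crosses the threshold."""
    return risk_score(event, profile) >= threshold

# Example: a customer who normally logs in from the UK during the day
profile = {"usual_countries": {"GB"}, "usual_hours": set(range(7, 23))}
print(should_challenge(LoginEvent("FR", 3), profile))   # unfamiliar country at 3am → True
print(should_challenge(LoginEvent("GB", 12), profile))  # familiar pattern → False
```

The point of the sketch is the design, not the numbers: individually weak signals (location, time, typing cadence, cursor movement) are combined into a single trust decision, which is why behavioral AI's value comes from correlation across data sets.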
What are some of the security risks associated with behavioral AI?
AI clearly has a lot to offer cybersecurity. However, like any data, behavioral data needs to be protected as well. The risk impact of a breach is significant and warrants controls similar to those applied to privacy data, starting at the metadata level.
One of the primary risks of AI is bias. In behavioral AI especially, any bias can cascade into anomalies in decision making. This is because behavioral data sets are mostly used in combination with other data sets, so any integrity problem in this core data will induce bias into the correlated data as well. There is therefore a need to govern and authorize the use of these data sets. To say they should be protected like crown jewels would not be an overstatement.
Erroneous AI tools built on inaccurate data sets are another risk to business growth, one that CISOs must combat from the onset of data collection. First, organizations must have the knowledge and education to accurately collect, categorize and label data. This allows teams to proactively sort data sets into groups based on risk and threat levels, then work selectively on the required sets, producing AI tools that enhance business operations seamlessly. While more research-led learning is being fed into these data sets, the risk still remains.
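The triage step described above, sorting data sets into risk tiers before any model trains on them, can be sketched as a small policy function. The tier names and rules here are hypothetical, a minimal illustration of the principle rather than a prescribed governance scheme.

```python
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    labeled: bool              # hypothetical flag: reviewed and labeled by a data owner?
    contains_behavioral: bool  # hypothetical flag: includes behavioral signals?

def triage(datasets: list[DataSet]) -> dict[str, list[str]]:
    """Partition candidate training data into risk tiers before any model sees it."""
    tiers: dict[str, list[str]] = {"quarantine": [], "high_risk": [], "standard": []}
    for ds in datasets:
        if not ds.labeled:
            tiers["quarantine"].append(ds.name)   # never train on unreviewed data
        elif ds.contains_behavioral:
            tiers["high_risk"].append(ds.name)    # requires explicit authorization
        else:
            tiers["standard"].append(ds.name)
    return tiers

catalog = [
    DataSet("web_logs", labeled=True, contains_behavioral=False),
    DataSet("keystroke_timings", labeled=True, contains_behavioral=True),
    DataSet("vendor_feed", labeled=False, contains_behavioral=False),
]
print(triage(catalog))
```

Note the ordering of the rules: unlabeled data is quarantined before anything else is considered, which encodes the article's point that accurate labeling must precede any risk-based selection of training data.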
Education and training are paramount
To counter this, the main priority for CISOs and security teams should be embedding a responsible culture around AI, data privacy and security, with an emphasis on education and training. A culture of responsible use of data means creating an approach focused on privacy and security first and foremost. To achieve this, companies need to establish governance and rules around privacy by design, and ensure they use tools which enhance privacy, transparency, and security. Technology can be a great asset, automating processes, detecting and flagging potential flaws and threats in the system and reducing the element of human error. However, a culture of responsibility in data handling starts at the top. Leaders have a duty to be trailblazers and demonstrate the benefits of privacy to employees as well as customers and society itself. This is the only way to ensure a culture of responsibility runs through your whole organization.
These conversations matter now because the amount of data available to us is only going to grow as digitization continues to evolve. AI and automation introduce a new dynamic to a business's competitive advantage, and leaders need to prepare and plan if they are to capture its benefits. In the case of behavioral AI, there are clear security benefits to allowing computer systems to proactively identify and neutralize threats before they can inflict harm. But to achieve this, CISOs and leaders must focus on building a robust set of technical and cultural frameworks to guide them. Accurate data classification is vital to ensure that AI does not produce unexpected results, and even more importantly, businesses must embed a responsible culture around AI. Leaders should make education and training the number-one priority at all levels of the organization. Employees need to stay informed, whether through training exercises, clear communication, or cybersecurity tabletop exercises. Neglecting to do so will only expose you and your organization to significant risks.