AI accountability: building secure software in the age of automation
AI boosts coding efficiency but heightens software security challenges

Artificial intelligence is reshaping software development by increasing productivity and efficiency.
Developers, constantly under pressure to write substantial amounts of code and ship faster in the race to innovate, are increasingly integrating AI tools into their workflows to help write code and lighten heavy workloads.
However, the increased adoption of AI is rapidly escalating cybersecurity complexity. According to global studies, a third of organizations report that network traffic has more than doubled in the last two years, and breach rates are up 17% year on year.
The same research finds that 58% of organizations are seeing more AI-powered attacks, and half say their large language models have been targeted.
Given this challenging AI threat landscape, developers need to be accountable for the software they build with AI-generated code.
Secure by design starts with developers truly understanding their craft: challenging the code they implement, questioning what insecure code looks like, and knowing how to avoid it.
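As a simple illustration of that questioning, the sketch below contrasts one of the most common insecure patterns in generated code, SQL injection via string interpolation, with its safer parameterized form. It is a minimal Python example; the table and column names are hypothetical.

    import sqlite3

    # Insecure pattern sometimes seen in generated code: interpolating
    # untrusted input directly into the SQL string allows injection.
    def find_user_insecure(conn: sqlite3.Connection, username: str):
        return conn.execute(
            f"SELECT id, email FROM users WHERE name = '{username}'"
        ).fetchall()

    # Safer equivalent: a parameterized query keeps data and SQL
    # separate, so the driver handles escaping.
    def find_user(conn: sqlite3.Connection, username: str):
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()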
Staying ahead of the dangers from AI
AI is increasingly transforming the day-to-day work of developers, with 42% reporting that at least half of their codebase is AI-generated.
From code completion and automated generation to vulnerability detection, prevention, and secure refactoring, the benefits of AI in software development are undeniable.
However, recent studies show that 80% of development teams are concerned about security threats stemming from developers using AI in code generation.
Without sufficient knowledge and expertise to critically assess AI outputs, developers risk overlooking issues such as outdated or insecure third-party libraries, potentially exposing applications and their users to unnecessary risks.
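As a minimal sketch of the kind of check this implies, a dependency pin can be queried against the public OSV.dev vulnerability database before it ships; the package and version below are purely illustrative.

    import json
    import urllib.request

    # Query the public OSV.dev database for known advisories against a
    # single dependency pin. The package and version are illustrative.
    def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI"):
        payload = json.dumps({
            "package": {"name": name, "ecosystem": ecosystem},
            "version": version,
        }).encode()
        request = urllib.request.Request(
            "https://api.osv.dev/v1/query",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response).get("vulns", [])

    # Example: an older urllib3 release with published advisories.
    for vuln in known_vulnerabilities("urllib3", "1.26.0"):
        print(vuln["id"], vuln.get("summary", ""))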
The lure of efficiency has also led to growing reliance on sophisticated AI tools. Yet this convenience can come at a cost: an overdependence on AI-generated code without a strong grasp of its underlying logic or architecture. In such cases, errors can propagate unchecked, and critical thinking may take a back seat.
To responsibly navigate this evolving landscape, developers must remain vigilant against risks including algorithmic bias, misinformation, and misuse.
The key to secure, trustworthy AI development lies in a balanced approach, one grounded in technical knowledge and backed by robust organizational policies.
Embracing AI with discernment and accountability is not just good practice, it is essential for building resilient software in the age of intelligent automation.
Knowledge and education
Too often, security gets pushed to the final stages of development, leaving critical blind spots just as applications are about to launch. But with 67% of organizations already adopting or planning to adopt AI, the stakes are higher than ever. Addressing the risks tied to AI technologies isn't optional; it's crucial.
What’s needed is a mindset shift: security must be baked into every phase of development. This requires comprehensive education and continuous, context-driven learning focused on secure-by-design principles, common vulnerabilities, and best practices for secure coding.
As AI continues to transform the software development ecosystem at an unprecedented pace, staying ahead of the curve is essential. Below are five key takeaways for developers navigating an AI-enabled future:
Stick to the fundamentals – AI is a tool, not a substitute for foundational security practices. Core principles such as input validation (see the sketch after this list), least privilege access, and threat modeling remain critical.
Understand the tools – AI-assisted coding tools can accelerate development, but without a strong security foundation, they can introduce hidden vulnerabilities. Know how the tools work and understand their potential risks.
Always validate output – AI can deliver answers with confidence, but not always with accuracy. Especially in high-stakes applications, it's vital to rigorously validate AI-generated code and recommendations.
Stay adaptable – The AI threat landscape is constantly evolving. New model behaviors and attack vectors will continue to emerge. Continuous learning and adaptability are key.
Take control of data – Data privacy and security should drive decisions about how and where AI models are deployed. Hosting models locally can offer greater control, especially as providers’ terms and data practices change.
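To make the first takeaway concrete, here is a minimal sketch of allow-list input validation in Python; the field and the rules are illustrative, and the principle is to reject anything not explicitly permitted.

    import re

    # Allow-list validation: accept only an explicitly defined character
    # set and length range, and reject everything else by default.
    USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_]{3,32}")

    def validate_username(raw: str) -> str:
        """Return the username if it passes the allow-list, else raise."""
        if not USERNAME_PATTERN.fullmatch(raw):
            raise ValueError("invalid username")
        return raw

    print(validate_username("dev_user42"))  # accepted
    # validate_username("x'; DROP TABLE users;--")  # raises ValueError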
Clear governance and policy
To ensure the safe and responsible use of AI, organizations should establish clear and robust policies. A well-defined AI policy that the whole company is aware of can help mitigate potential risks and promote consistent practices across the organization.
Along with rolling out clear policies around the use of AI, companies must also consider their developers' desire to use new AI tools to help them write code.
In practice, this means ensuring that security teams have tested the prospective AI tool, that a clear policy governs its use, and that developers are trained to write code securely and continuously upskill themselves.
Policies and robust security measures mustn't disrupt company workflows or add unnecessary complexity, particularly for developers.
The more seamless the security policies are, the less likely those within a company will try to bypass them to leverage AI innovation – thereby reducing the likelihood of insider threats and unintended misuse of AI tools.
According to Gartner, a significant number of GenAI projects will most likely be abandoned after proof of concept by the end of 2025, due in part to inadequate security controls.
However, by fostering fundamental security principles through continuous security training and education, and by adhering to robust policies, developers can navigate the dangers of AI and play a pivotal role in designing and maintaining systems that are secure, ethical, and resilient.
Director of Application Advocacy at Security Journey and Co-founder of Katilyst.