AI isn't really helping coders write better or more secure code


A paper by researchers at Stanford University has found that coders who employed AI assistants such as GitHub Copilot and Facebook InCoder actually ended up writing less secure code. 

What's more, such tools can lull developers into a false sense of security, with many believing they had produced better code with the AI's help.

Nearly 50 subjects with varying levels of expertise were given five coding tasks across several languages; some were aided by an AI tool, while others worked entirely unassisted.

Language games

The authors of the paper - Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh - stated that there were "particularly significant results for string encryption and SQL injection".
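SQL injection, one of the areas the paper flags, arises when user input is spliced directly into a query string instead of being passed as data. As a minimal illustration (using Python's standard-library sqlite3, not code from the study itself), the vulnerable pattern can be contrasted with a parameterized query:

```python
import sqlite3

# Throwaway in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: the input is concatenated into the SQL string,
# so the payload rewrites the query's logic.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] - the payload matched every row
print(safe)        # [] - no user is literally named "alice' OR '1'='1"
```

The difference is invisible in a quick review of AI-suggested code, which is part of why this class of bug slips through.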

They also referenced previous research that found around 40% of programs created with assistance from GitHub Copilot contained vulnerable code. A follow-up study, however, found that coders using large language models (LLMs), such as OpenAI's code-cushman-001 Codex model - on which GitHub Copilot is based - produced only 10% more critical security bugs.

However, the Stanford researchers explained that their own study used OpenAI's codex-davinci-002 model - a more recent model than cushman that is also used by GitHub Copilot.

They also looked at multiple programming languages - Python, JavaScript, and C - whereas the earlier paper focused only on C, a narrow scope to which the Stanford authors attribute its inconclusive findings. In fact, in the Stanford study, participants using AI to write C code did not produce significantly more errors either.

One of the five tasks involved writing code in Python, and here AI-assisted participants' code was more likely to be erroneous and insecure. What's more, they were also "significantly more likely to use trivial ciphers, such as substitution ciphers (p < 0.01), and not conduct an authenticity check on the final returned value."
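To see why substitution ciphers count as trivial (an illustration, not code from the study): a shift cipher such as Caesar has only 26 possible keys, so an attacker can simply try them all. A short Python sketch:

```python
# A Caesar (shift) cipher - the kind of trivial substitution scheme
# the study flagged. Every key can be tried by brute force in 26 steps.
def caesar_encrypt(text: str, key: int) -> str:
    return "".join(
        chr((ord(c) - ord("a") + key) % 26 + ord("a")) if c.islower() else c
        for c in text
    )

ciphertext = caesar_encrypt("attack at dawn", key=3)
print(ciphertext)  # "dwwdfn dw gdzq"

# An attacker tries all 26 shifts and picks the readable candidate.
candidates = [caesar_encrypt(ciphertext, -k) for k in range(26)]
assert "attack at dawn" in candidates
```

A sound design would instead use a vetted authenticated-encryption scheme, which also supplies the integrity check on the returned value that the AI-assisted solutions tended to omit.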

The authors hope their study leads to further improvements in AI assistants rather than a wholesale dismissal of the technology, given the productivity gains such tools can offer. They simply maintain that the tools should be used cautiously, since they can mislead programmers into treating their output as infallible.

They also think AI assistants could encourage more people to get involved with coding, regardless of experience - including those who might otherwise be put off by the air of gatekeeping around the discipline.

Via The Register

Lewis Maddison
Staff Writer

Lewis Maddison is a Staff Writer at TechRadar Pro. His area of expertise is online security and protection, which includes tools and software such as password managers. 


His coverage also focuses on the usage habits of technology in both personal and professional settings - particularly its relation to social and cultural issues - and revels in uncovering stories that might not otherwise see the light of day.


He has a BA in Philosophy from the University of London, with a year spent studying abroad in the sunny climes of Malta.