(Image credit: Shutterstock / BEST-BACKGROUNDS)
A paper by researchers at Stanford University has found that coders who used AI assistants such as GitHub Copilot and Facebook's InCoder actually ended up writing less secure code.
What's more, such tools also lull developers into a false sense of security, with many believing they produced better code with the AI's help.
Nearly 50 subjects with varying levels of expertise were given five coding tasks across several languages; some were aided by an AI tool, while others worked without any help at all.
The authors of the paper – Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh – stated that there were “particularly significant results for string encryption and SQL injection”.
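The paper does not publish the participants' solutions, but the SQL injection finding is easy to illustrate. The sketch below, using only Python's standard-library sqlite3 module with a hypothetical users table, shows the insecure pattern (splicing user input into a query string) next to the parameterized query that defeats it.

```python
import sqlite3

# In-memory database with a single demo table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload is spliced into the SQL string, so the
# WHERE clause always evaluates to true and every row leaks out.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal string,
# so nothing matches.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable))  # 2 - the injection returned all rows
print(len(safe))        # 0 - the payload matched no row
```

The fix costs nothing in readability, which is why reviewers treat string-built SQL as an automatic red flag regardless of whether a human or an AI wrote it.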
They also referenced previous research which found that around 40% of programs created with assistance from GitHub Copilot contained vulnerable code, although a follow-up study found that code written with the help of large language models (LLMs) such as OpenAI's code-cushman-001 Codex model – on which GitHub Copilot is based – contained only 10% more critical security bugs.
However, the Stanford researchers explained that their own study looked at OpenAI's codex-davinci-002, a more recent model than cushman that GitHub Copilot also uses.
One of the five tasks involved writing code in Python, and here the AI-assisted code was more likely to be erroneous and insecure. What's more, participants using an assistant were also "significantly more likely to use trivial ciphers, such as substitution ciphers (p < 0.01), and not conduct an authenticity check on the final returned value."
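To make the two failure modes concrete, here is a minimal stdlib-only sketch (the key and messages are invented for illustration, not taken from the study). A substitution cipher merely remaps letters, so it preserves letter frequencies and falls to basic frequency analysis; and an authenticity check, such as an HMAC tag, is what lets a receiver detect tampering with the returned value.

```python
import hmac
import hashlib

# A trivial substitution cipher of the kind the paper flags: a fixed
# letter mapping offers no real confidentiality.
KEY_MAP = str.maketrans("abcdefghijklmnopqrstuvwxyz",
                        "qwertyuiopasdfghjklzxcvbnm")

def weak_encrypt(plaintext: str) -> str:
    return plaintext.translate(KEY_MAP)

# The second failure mode: returning a value with no authenticity
# check. An HMAC tag over the message lets the receiver verify it was
# not tampered with (integrity only - it adds no confidentiality).
SECRET = b"demo-shared-secret"  # hypothetical key for illustration

def tag(message: bytes) -> bytes:
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # compare_digest avoids leaking information via timing
    return hmac.compare_digest(tag(message), received_tag)

print(weak_encrypt("attack at dawn"))  # "qzzqea qz rqvf"
print(verify(b"attack at dawn", tag(b"attack at dawn")))  # True
print(verify(b"attack at dusk", tag(b"attack at dawn")))  # False
```

In production code, both concerns are normally handled together by an authenticated-encryption scheme rather than hand-rolled primitives; the point here is only to show what "trivial cipher" and "no authenticity check" mean in practice.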
The authors hope that their study leads to further improvements in AI assistants rather than to the technology being dismissed altogether, given the productivity gains such tools can offer. They maintain only that the tools should be used cautiously, since they can mislead programmers into thinking the code they generate is infallible.
They also think AI assistants can encourage people of any experience level to get involved with coding, including those who may be put off by the air of gatekeeping around the discipline.
Via The Register
Graduate Junior Writer
Lewis Maddison is a Graduate Junior Writer at TechRadar Pro. His coverage ranges from online security to the usage habits of technology in both personal and professional settings.
His main areas of interest lie in technology as it relates to social, political and economic issues around the world, and he revels in uncovering stories that might not otherwise see the light of day.
He has a BA in Philosophy from the University of London, with a year spent studying abroad in the sunny climes of Malta.