Microsoft's top new security tools wants to help keep your shiny new generative AI systems safe for good

generative ai business use
(Image credit: Shutterstock / thanmano)

Microsoft has unveiled a new security tool aimed at keeping generative AI tools secure, and safe to use.

PyRIT, short for Python Risk Identification Toolkit for generative AI, will help developers respond to growing threats facing businesses of all sizes from criminals looking to take advantage of new tactics.

As most of you already know by now, generative AI tools such as ChatGPT are being used by cybercriminals to quickly create code for malware, to generate (and proofread) phishing emails, and more.

Manual work still needed

Developers responded by changing how the tool responds to different prompts, and somewhat limiting its capabilities, and Microsoft has now decided to take it a step further. 

Over the past year, the company red teamed “several high-value generative AI systems” before they hit the market, and during that time, it started building one-off scripts. “As we red teamed different varieties of generative AI systems and probed for different risks, we added features that we found useful,” Microsoft explained. “Today, PyRIT is a reliable tool in the Microsoft AI Red Team’s arsenal.”

The Redmond software giant also stresses that PyRIT is by no means a replacement for manual red teaming of generative AI systems. Instead, the company hopes other red teaming teams can use the tool to eliminate tedious tasks and speed things up. 

“PyRIT shines light on the hot spots of where the risk could be, which the security professional than can incisively explore,” Microsoft further explains. “The security professional is always in control of the strategy and execution of the AI red team operation, and PyRIT provides the automation code to take the initial dataset of harmful prompts provided by the security professional, then uses the LLM endpoint to generate more harmful prompts.”

The tool is also adaptable, Microsoft stresses, as it’s capable of changing its tactics depending on the generative AI system’s response to previous queries. It then generates the next input, and continues the loop until the red team members are happy with the results.

More from TechRadar Pro

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.

TOPICS