'The risk for SMBs is not reckless use of AI, but invisible workflow change': Legal firms are falling behind when it comes to setting rules for AI use
Get your AI policy in place before anything else, report warns
- 43% of organizations still have no plans for AI policies, report finds
- At the moment, workers are adopting AI more quickly than companies are writing policies
- Nexos.ai calls for SMBs to get basic policies in place – they can evolve from there
Even though 70% of legal workers are already using general-purpose AI for work, 43% of organizations say they still don't have formal AI policies in place (and no plans to create them).
New research from Nexos.ai has revealed the biggest risk relating to AI tools could actually be coming from a lack of visibility and governance.
And SMBs are generally the most at risk, simply because they have fewer resources – both in terms of people and procedures.
AI is mostly going unmanaged
Nexos.ai found workers regularly pasting contracts, NDAs, and legal correspondence into public chatbots to save time, putting sensitive information at risk. While enterprise-grade AI products promise strong data security and no training on customer data, public versions offer no such guarantees.
Data security (46%) was cited as legal teams' biggest concern, ahead of ethical issues (42%) and legal privilege (39%), but how workers actually interact with public chatbots doesn't match those concerns.
Nexos.ai also noted that SMBs may already rely on legal AI workflows that have never been formally established or recognized. Because AI adoption happens gradually and without governance, companies end up playing catch-up, trying to define the correct and safe use of AI after employees have already started using the tools.
"The risk for SMBs is not reckless use of AI, but invisible workflow change," Head of Product Zilvinas Girenas wrote.
But it doesn't need to be difficult – the report explains that a basic AI policy doesn't need to be complex. Defining approved tools, banned use cases, and restrictions on sensitive data could suffice – or at least improve on the current state of governance.
Looking ahead, Nexos.ai suggests companies start with a simple AI policy that keeps sensitive data out of unapproved tools. The report calls for companies to approve tools before teams adopt them, and even once a tool is in place, Nexos.ai recommends that humans review AI-generated content before it is used in legal work.
"If those tools get embedded before the company has defined approved use, data boundaries, and review steps, efficiency arrives faster than governance," Girenas concluded.
With several years’ experience freelancing in tech and automotive circles, Craig’s specific interests lie in technology that is designed to better our lives, including AI and ML, productivity aids, and smart fitness. He is also passionate about cars and the decarbonisation of personal transportation. As an avid bargain-hunter, you can be sure that any deal Craig finds is top value!
