Stop telling AI it's an expert programmer, you're making it worse at its job — new research shows the best results need specific prompts


  • Telling AI it's an expert in something can push it down a very different processing route
  • Introducing a persona can stop the AI from reasoning freely, reducing output quality
  • The best prompts explain the task to the AI and give it all the context and tools it needs

New research claims that asking AI to 'act as an expert' doesn't actually improve the reliability of its results, despite being a widely used prompt enhancer.

More specifically, it might help with alignment-style tasks such as writing, tone and structure guidance, but it likely hurts knowledge tasks like maths and coding.

Per the data, these so-called expert personas underperform base models on benchmarks, likely because they trigger the AI to shift into instruction-following mode rather than fact recall.


Stop over-engineering your AI prompts

"We specifically discourage crafting (system) prompt for maximum performance by exploiting biases, as this may have unexpected side effects, reinforce societal biases and poison training data obtained with such prompts," reads the paper, written by researchers affiliated with the University of Southern California (USC).

Separate research along the same lines found that while persona prompting can help shape tone and style, it does nothing to add factual capability to a model.

Instead, prompt length and accuracy matter. A comprehensively designed prompt will ultimately give the AI as much context as it needs to act autonomously and generate higher-quality output.
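To make the contrast concrete, here is a hedged illustration of the difference between a persona prompt and a context-rich prompt. Neither prompt comes from the paper; both are invented examples of the style of prompting the research discusses.

```python
# Invented examples: a persona-led prompt versus a task-and-context prompt.
# The wording is illustrative only, not taken from the research.

# Persona-style prompt: dictates a role but supplies no real context.
persona_prompt = "You are an expert Python programmer. Fix my code."

# Context-rich prompt: states the task, the expected behaviour, and the
# actual code, leaving the model free to decide how to respond.
context_rich_prompt = """Task: fix the IndexError in the function below.
Context: the function should return the last element of a list,
and must return None for an empty list.

def last(items):
    return items[len(items)]
"""
```

The second prompt gives the model everything it needs to verify its own answer, which is the kind of specificity the researchers found more effective than role-play.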

The paper introduces a new solution called PRISM (Persona Routing via Intent-based Self-Modeling), whereby the AI generates answers both with and without a persona and compares which answer is best. The AI then learns when to apply personas in future, falling back on the base model's behaviour when personas hurt output quality.
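The routing idea described above can be sketched in a few lines. This is a minimal, hypothetical sketch of the generate-both-and-compare step only, not the authors' implementation: `generate` and `judge` are stand-ins for a language model call and a quality scorer, and the length-based heuristic is a placeholder.

```python
# Hypothetical sketch of persona routing as the article describes it:
# answer the task with and without a persona, then keep the better answer.
# Function names and the scoring heuristic are invented stand-ins.

def generate(prompt: str) -> str:
    """Stand-in for a call to a language model."""
    return f"answer to: {prompt}"

def judge(answer: str) -> float:
    """Stand-in quality score (in practice, a verifier model or benchmark)."""
    return float(len(answer))  # placeholder heuristic, not a real metric

def route(task: str, persona: str = "You are an expert.") -> str:
    """Generate with and without the persona; return whichever scores higher."""
    base_answer = generate(task)
    persona_answer = generate(f"{persona}\n{task}")
    if judge(base_answer) >= judge(persona_answer):
        return base_answer  # persona added nothing, fall back to the base model
    return persona_answer
```

In the paper's framing, the comparison results would then be used to train the model to decide up front when a persona helps, rather than running both generations every time.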

Adding to the complexity of prompt engineering, the researchers also uncovered differences between model types, noting that reasoning models benefit more from added context length, while instruction-tuned models are more sensitive to personas.

In short, it seems model developers are already doing the work needed to ensure generative AI gives us the best output, and that we should simply give chatbots tasks and share relevant context without dictating how they should go about creating a response.



With several years’ experience freelancing in tech and automotive circles, Craig’s specific interests lie in technology that is designed to better our lives, including AI and ML, productivity aids, and smart fitness. He is also passionate about cars and the decarbonisation of personal transportation. As an avid bargain-hunter, you can be sure that any deal Craig finds is top value!
