Here’s what happens when you ask ChatGPT to make a car based only on efficiency – and why a good prompt is so important

ChatGPT Car
(Image credit: Reddit)

I've enjoyed seeing what ChatGPT thinks about my personality based on my favorite characters, and what my life would look like as an image, so it was a little surprising to see a Reddit post showing what the AI chatbot believed to be a perfectly efficient car. The post featured an image based on a request for the "ultimate family car design based on pure efficiency."

No room for frivolities like comfort, cost, safety, or the laws of physics. Just the cold, wind-slicing mathematics of going farther with less. ChatGPT obliged with an image of a vehicle committed to minimizing drag above all else. The front is long and needle-nosed, the body narrow and exaggeratedly teardrop-shaped. It looks more like a weapon than a family car.

The comment section agreed, with comments wishing any driver good luck in a crosswind, or pointing out that any pedestrian unfortunate enough to cross the road ahead of the vehicle would be skewered through.

Notably, the problem wasn’t that ChatGPT hallucinated technical nonsense. It was that the prompt itself presented a one-dimensional request, with no detours, no seatbelts, and no airbag warnings.

ChatGPT's ultimate family car design based on pure efficiency from r/ChatGPT

You can think of an AI prompt as a kind of instruction manual for the machine. In this case, the manual said one thing: make an efficient car, and nothing else matters. It neatly encapsulates the problem with poor prompt writing.

ChatGPT doesn’t read between the lines. It reads the lines. And unless you’re crystal clear about what matters, it will focus on what you give it relentlessly, no matter what else you might have intended. If you didn't write it out, it doesn't count.

Prompt improvement

The issues with the prompt can be broken down into a few categories. First, it treated efficiency as an end goal with no trade-offs. Second, it never defined efficiency. Real-world efficiency is a balance between factors: getting people somewhere quickly and safely is efficient. It doesn't matter how fast you get there if you aren't in one piece at the end.

Finally, there was no mention of real-world constraints. No safety regulations, no cost targets, no expectations around materials or buildability. You wouldn't design a real car this way. You'd start by juggling half a dozen competing priorities and looking for a compromise that didn't kill people or bankrupt them. But the prompt effectively said, "Do one thing perfectly." So ChatGPT did.

If you don't want an aerodynamic death dart, you need to specify more than efficiency: the size, the budget, and any other details that matter to you. You might ask ChatGPT to "Design an energy-efficient family car that seats five adults comfortably, complies with U.S. safety regulations, and costs under $35,000 to produce. It should handle well in crosswinds, carry at least 15 cubic feet of cargo, and use a hybrid or electric drivetrain."
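If you build prompts programmatically, the same principle applies: every constraint you care about has to be spelled out, because nothing implicit survives. Here's a minimal sketch of that idea; the helper function and constraint wording are hypothetical, just a way of making the "write it all out" rule concrete.

```python
# A minimal sketch: turn design requirements into an explicit prompt.
# The function name and constraint list are illustrative, not any official API.

def build_car_prompt(goal: str, constraints: list[str]) -> str:
    """Combine a goal with every constraint that matters, spelled out."""
    lines = [f"Design {goal}.", "Hard requirements:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_car_prompt(
    "an energy-efficient family car",
    [
        "seats five adults comfortably",
        "complies with U.S. safety regulations",
        "costs under $35,000 to produce",
        "handles well in crosswinds",
        "carries at least 15 cubic feet of cargo",
        "uses a hybrid or electric drivetrain",
    ],
)
print(prompt)
```

The point isn't the code itself. It's that a checklist you maintain explicitly is a checklist the model actually sees, which is exactly what the original Reddit prompt lacked.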

The improved prompt defines success clearly. "Efficient" is vague, but achieving 50 mpg while remaining street-legal is not. It also identifies the audience for a family car and names hard constraints on budget and safety. It helps to follow up with a request to surface trade-offs and blind spots, too. You might add a prompt asking ChatGPT to "Tell me where this design might fail, and suggest alternatives that could improve resilience or safety."

That last one’s underrated. When you ask ChatGPT to self-critique, it often reveals assumptions you didn’t know it was making. And sometimes, that’s where the real ideas start. Otherwise, it's like making a wish to a genie without thinking about the wording. That gets you a car that turns squirrels into shish-kebab.
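If you talk to a model through an API rather than the chat window, that follow-up critique is simply a second user turn in the conversation. This sketch uses the role/content message convention common to most chat APIs; no actual API call is made, and the prompt text is the example from above.

```python
# A sketch of the design-then-critique pattern as two conversation turns.
# This builds the message list only; no request is sent to any service.

design_prompt = (
    "Design an energy-efficient family car that seats five adults comfortably, "
    "complies with U.S. safety regulations, and costs under $35,000 to produce."
)
critique_prompt = (
    "Tell me where this design might fail, and suggest alternatives "
    "that could improve resilience or safety."
)

conversation = [
    {"role": "user", "content": design_prompt},
    # The model's design reply would be appended here as an "assistant" turn
    # before the critique is sent.
    {"role": "user", "content": critique_prompt},
]
```

Keeping the critique as a separate turn, rather than cramming it into the first prompt, lets the model commit to a design first and then attack its own assumptions.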

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
