Sure, keep training Bard and ChatGPT until they no longer need us
The greatest trick Bard pulled is convincing us to train it
It's happening again. Scores of us are rushing to test another AI, this time Google Bard, which Google released as a public beta on Tuesday.
We do this ostensibly to understand the implications of generative AIs, but what we're really doing is improving a third-party company's product. Without us, Bard, ChatGPT, and AI-infused Bing Chat mess up, say awful things, and generally prove less useful for us and, without a doubt, for the companies that built them.
The hours, days, and weeks we spend essentially training these AIs do make them better and move them closer to seeming sentient. As they get smarter and more accurate, we trust them more, and in a virtuous circle, use them more because we trust them more.
It's this act of training, using, training some more, and using even more that has me worried. No one really knows where all this is going. Is it a world where, for every task, there's an AI partner? Most of the AIs, including the recently unveiled Bard, are pitched as project assistants and idea generators.
Unlike with an impersonal search engine (though Bard is backed by Google's massive Knowledge Graph), we tend to develop relationships with these AIs. Every prompt leads to a conversation. It's like we all have a new co-worker, one who seems to know a little bit about everything.
Bard, like ChatGPT before it, is cheerful and helpful, but always apologetic when it gets things wrong. That last bit leads to an even deeper relationship with the AI because it makes these chatbots seem vulnerable and more human.
The more you interrogate Bard, ChatGPT, or Bing's chat, the more they get out of you. When you give Bard's response a thumbs up, you're feeding valuable feedback into its large language model. You are not deriving information from them as much as they are silently sucking your information soul away to make themselves smarter and more like a real human assistant, or, better yet, an AI that can effectively take on your tasks and do your job.
None of this is to say, stop using these AI chatbots. They're not nefarious any more than they are truly generous or kind (aside from the programming that makes them seem that way).
We do get some real utility out of them. Still, it's smart to approach these conversations with the understanding that this is not a one-way relationship where you take and take while Bard and ChatGPT give and give. Think of it instead as a mutually beneficial but asymmetrical relationship in which, for instance, Bard is getting far more than it gives. Not AI smart, but smart enough for the average human.