Anyone seduced by AI-powered chatbots like ChatGPT and Bard (wow, they can write essays and recipes!) will eventually encounter what are known as hallucinations, the tendency of artificial intelligence to fabricate information.

Chatbots, which guess what to say based on information gathered from all over the internet, can't help but get things wrong. And when they fail (by publishing a cake recipe with wildly inaccurate flour measurements, for example) it can be a real buzzkill.

However, as mainstream tech tools continue to integrate AI, it’s important to understand how to use it to serve us. After testing dozens of AI products over the past couple of months, I’ve come to the conclusion that most of us are using the technology in a suboptimal way, mostly because the tech companies have given us bad directions.

Chatbots are at their least useful when we ask them questions and then hope that the answers they come up with on their own are true, yet that is exactly how they were designed to be used. But when instructed to draw on information from trusted sources, such as credible websites and research papers, AI can perform assistive tasks with high accuracy.

“If you give them the right information, they can do interesting things with it,” said Sam Heutmaker, the founder of Context, an AI startup. “But on its own, 70 percent of what you get is not going to be accurate.”

With the simple adjustment of telling the chatbots to work with specific data, they generated understandable answers and useful advice. Over the last few months, that has transformed me from a hardened AI skeptic into an enthusiastic power user. When I traveled using an itinerary planned by ChatGPT, the trip went well because the recommendations came from my favorite travel sites.

Directing the chatbots to specific high-quality sources such as websites of well-established media and academic publications can also help reduce the production and spread of misinformation. Let me share some of the approaches I’ve used to get help with cooking, research, and travel planning.
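The grounding technique described above can be sketched in code. This is a generic illustration, not any product's actual API: the source names, excerpts, and prompt wording are all placeholders. The idea is simply to hand the model your trusted material and tell it to answer from that material alone.

```python
# A minimal sketch of "grounding" a chatbot in trusted sources:
# pack excerpts from credible sites into the prompt and instruct
# the model to answer only from them, citing each source.
# All names and excerpts below are hypothetical examples.

def build_grounded_prompt(question: str, sources: dict[str, str]) -> str:
    """Assemble a prompt that restricts the model to the given excerpts."""
    excerpt_block = "\n\n".join(
        f"[{name}]\n{text}" for name, text in sources.items()
    )
    return (
        "Answer the question using ONLY the excerpts below. "
        "Cite the bracketed source name for each claim, and reply "
        "'not in the sources' if the excerpts do not cover it.\n\n"
        f"{excerpt_block}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "What flour-to-butter ratio does the pie crust use?",
    {"Serious Eats": "The crust uses 225 g flour to 150 g butter."},
)
print(prompt)
```

A string like this can then be sent to any chatbot; the instruction and citations make it easy to spot when the model strays from the supplied material.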

Chatbots like ChatGPT and Bard can write recipes that look good in theory but don't work in practice. In an experiment by The New York Times' food section in November, an early AI model created recipes for a Thanksgiving menu that included an extremely dry turkey and a dense cake.

I've also encountered terrible results with AI-generated seafood recipes. But that changed when I experimented with ChatGPT plugins, which are essentially third-party programs that work with the chatbot. (Only subscribers who pay $20 a month for access to GPT-4, the latest version of the chatbot, can use plugins, which can be activated in the settings menu.)

In the ChatGPT plugins menu, I selected Tasty Recipes, which pulls data from the Tasty recipe site owned by BuzzFeed, the digital media company. I then asked the chatbot to come up with a meal plan including seafood, ground pork and vegetable sides using recipes from the site. The bot presented an inspired meal plan, including lemongrass pork banh mi, grilled tofu tacos and everything-in-the-fridge pasta; each meal suggestion included a link to a recipe on Tasty.

For recipes from other publications, I used Link Reader, a plugin that let me paste in a web link to generate meal plans using recipes from other credible sites like Serious Eats. The chatbot pulled data from the sites to create meal plans and told me to visit the sites to read the recipes. That took extra work, but it beat a meal plan the AI had invented on its own.

When I was doing research for an article about a popular video game series, I turned to ChatGPT and Bard to refresh my memory of past games by recapping their plots. They messed up on important details about the games’ stories and characters.

After testing many other AI tools, I concluded that for research, it was important to pin down reliable sources and quickly double-check the data for accuracy. I finally found a tool that delivers that: Humata.AI, a free web application that has become popular among academic researchers and lawyers.

The program allows you to upload a document as a PDF; from there, a chatbot answers your questions about the material alongside a copy of the document, highlighting the relevant passages.

In one test, I uploaded a research article I found on PubMed, a government search engine for scientific literature. The tool produced a relevant summary of the long document in minutes, a process that would have taken me hours, and I reviewed the highlights to double-check that the summaries were accurate.
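Humata's internals aren't public, so the following is only a generic sketch of how a document-Q&A tool can point a reader at the passage worth double-checking: split the text into chunks and rank them by how many words they share with the question. (Real tools use far more sophisticated semantic matching; the sample document here is invented.)

```python
# Hypothetical sketch: find the paragraph of a document most
# relevant to a question by simple word overlap, mimicking the
# "highlight the relevant part" feature of document-Q&A tools.

def most_relevant_passage(document: str, question: str) -> str:
    """Return the paragraph sharing the most words with the question."""
    q_words = set(question.lower().split())
    paragraphs = [p for p in document.split("\n\n") if p.strip()]
    # Score each paragraph by the size of its word overlap with the question.
    return max(paragraphs, key=lambda p: len(q_words & set(p.lower().split())))

doc = (
    "Methods: we enrolled 120 patients in a randomized trial.\n\n"
    "Results: the treatment group slept 45 minutes longer on average."
)
print(most_relevant_passage(doc, "How much longer did the treatment group sleep?"))
```

The returned paragraph plays the role of the highlight: the spot in the source where a human can verify the chatbot's summary.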

Cyrus Khajvandi, founder of Humata, which is based in Austin, Texas, developed the app when he was a researcher at Stanford and needed help reading dense scientific papers, he said. The problem with chatbots like ChatGPT, he said, is that they rely on outdated models of the web, so the data may lack relevant context.

When a Times travel writer recently asked ChatGPT to write a travel itinerary for Milan, the bot guided her to visit a central part of the city that was deserted because it was an Italian holiday, among other snafus.

I had better luck when I requested a vacation itinerary for me, my wife and our dogs in Mendocino County, Calif. As I did when planning a meal, I asked ChatGPT to incorporate suggestions from some of my favorite travel sites, such as Thrillist, which is owned by Vox, and the travel section of The Times.

Within minutes, the chatbot generated an itinerary that included dog-friendly restaurants and activities, including a farm with wine and cheese pairings and a train to a popular hiking trail. This saved me several hours of planning, and most importantly, the dogs had a wonderful time.

Google and OpenAI, which works closely with Microsoft, say they are working to reduce hallucinations in their chatbots, but we can already reap the benefits of AI by taking control of the data the bots rely on to come up with answers.

To put it another way: The main advantage of training machines with enormous data sets is that they can now use language to simulate human reasoning, said Nathan Benaich, a venture capitalist who invests in AI companies. The important step for us, he said, is to pair that ability with high-quality information.
