Understand the limits (and consequences).
First, it helps to understand how the technology works, so that you know exactly what you are doing with it.
ChatGPT is essentially a more powerful version of the predictive text system on your phone: it suggests words to complete a sentence as you type, based on what it has learned from vast amounts of data scraped from the internet.
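That predictive-text idea can be sketched in a few lines. The toy corpus and bigram model below are illustrative stand-ins of my own, of course; ChatGPT's neural network is vastly more sophisticated, but the underlying statistical intuition, guessing a likely next word from past text, is the same:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast amounts of text scraped from the web.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which: a bigram model, the simplest
# possible version of statistical next-word prediction.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Suggest the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — it followed "the" twice; "mat" and "fish" once each
```

Notice that the model suggests whatever was most frequent in its training text; it has no notion of whether the suggestion is correct.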
Because it is only predicting which words are likely to come next, it cannot check whether what it says is true.
When you use a chatbot to write a program, it draws on how similar code was put together in the past. And because code is constantly updated to patch security vulnerabilities, code written with a chatbot may reproduce bugs or flaws that have since been fixed, Mr. Christian said.
Similarly, if you use ChatGPT to write an essay on a classic book, chances are the bot will construct seemingly plausible arguments. But if others have published a faulty analysis of the book online, that error may show up in your essay, too. If your essay were then posted online, you would be contributing to the spread of misinformation.
“They can fool us into thinking they understand more than they do, and that can cause problems,” said Melanie Mitchell, an AI researcher at the Santa Fe Institute.
In other words, the bot does not think independently. It can’t even count.
Case in point: I was stunned when I asked ChatGPT to write a haiku about the cold weather in San Francisco. Instead of the traditional 5-7-5 pattern, it spat out lines of six, six and five syllables:
Mist covers the city,
Strong wind chills to the bone,
Winter in San Fran.
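You can check the mismatch yourself with a rough vowel-counting heuristic. This is a sketch, not a real syllabifier; English syllables properly require a pronunciation dictionary, and words like "bone," with its silent "e," already need a special case:

```python
import re

def syllables(word):
    """Naive syllable estimate: count runs of vowels, drop a trailing silent 'e'."""
    w = word.lower()
    groups = len(re.findall(r"[aeiouy]+", w))
    if w.endswith("e") and groups > 1:  # "bone" -> 1, but keep "the" at 1
        groups -= 1
    return max(groups, 1)

haiku = [
    "Mist covers the city,",
    "Strong wind chills to the bone,",
    "Winter in San Fran.",
]

# Sum the per-word estimates on each line.
counts = [sum(syllables(w) for w in re.findall(r"[A-Za-z]+", line))
          for line in haiku]
print(counts)  # [6, 6, 5] — a haiku should be [5, 7, 5]
```

Even this crude counter spots what the bot missed: the first line is one syllable over, and the second is one short.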
OpenAI, the company behind ChatGPT, declined to comment on this column.
Similarly, AI-powered image editing tools like Lensa train their algorithms on existing images from the web. So if women appear in more sexualized contexts in that data, the machines will reproduce that bias, Ms. Mitchell said.