A Human Writer’s Take on ChatGPT


Artificial intelligence and advanced language models like ChatGPT have been in the news lately. Students can use the technology to write papers, which raises the question of how universities will ensure academic integrity. Some suggest we can turn AI against itself with programs like GPTZero, which claim to detect AI-generated writing.

Assistant Professor Markus Eger cautions, “GPTZero can be wrong. The student may have actually written the paper. Ultimately, students should be writing because they want to learn something, not just to get a grade.” A student’s goal, Eger says, shouldn’t be the degree itself, but rather the knowledge and abilities the degree represents.

School papers are one concern. What about professional writing? Will AI language models like ChatGPT replace writers?

One of the biggest advantages of human-generated writing is the ability to add a personal touch to the content. Whether it's a personal anecdote or a unique perspective, human writers are able to infuse their own personality into their writing in a way that machines simply cannot replicate. This personal touch can make writing more engaging and memorable, helping to build stronger connections with readers.

Another advantage of human-generated writing is the ability to understand and effectively convey complex concepts and emotions. While AI algorithms have made impressive strides in the ability to generate text, they still lack the depth of understanding and empathy that humans possess. This can result in writing that is less nuanced and less effective in conveying the intended message.

The last two paragraphs you just read were written by ChatGPT. That’s right, I enlisted AI to champion the benefits of human writers.

ChatGPT isn’t flawless, though. As a test, I asked it to write an article based on one of our professors’ research. It incorrectly placed him at a university in England, a fabrication that AI researchers call a hallucination. The writing, though basically accurate, was also redundant, lacked depth, and included no quotes or differing perspectives.

Another concern with using AI to write content is that it can’t recognize misinformation, public relations spin, or outright propaganda. Whether it’s a person writing code or a media gatekeeper, it ultimately comes down to who decides what information to include, what to omit, what is true, and what is misinformation.

Humans can do something AI can’t: consider motives and bias. A press release issued by a company will present that company in the best possible light, because that’s why it was written. We understand that considering the source of information is often crucial to determining its veracity.

Eger said, “ChatGPT is biased, and what’s more, we don’t know what was used to train it because the company isn’t sharing that information.” That’s why he and one of his students are conducting research to examine bias in ChatGPT.

Previous models for discerning truth involved humans seeking the widest range of facts and viewpoints, then deciding for themselves. AI doesn’t necessarily work that way. ChatGPT draws from the information used to “train” it, and someone decided which sources went into that training. Though ChatGPT doesn’t currently search the internet, other AI applications do.

AI faces the same challenges humans face when searching available information online: the most recent and most frequently mentioned takes on a topic are the ones that show up. But more digital records of a statement, or more recent ones, don’t make it true. Furthermore, with AI’s ability to generate huge amounts of content, the prospect of AI itself perpetuating false or misleading information is very real, limiting people’s access to the widest range of facts and viewpoints.

It’s easy to imagine a future where AI-generated content, which may be biased, overwhelms our channels of communication to the point that you can’t find the alternate opinions or additional facts needed to make up your own mind.

Many stories about AI suggest doomsday scenarios, leading us to believe that AI is conscious and can act independently. “That’s a lot of hype. It’s just clickbait. This type of AI has no intentions or plans. It doesn’t do anything it isn’t asked to do,” Eger said. “We shouldn’t be distracted by that. There are very real, immediate risks, such as misuse by bad actors.”

“It’s a tool, and it can be misused. It can also be wrong and make mistakes,” Eger said. He points to spam filters and self-driving cars as beneficial uses of AI. “The important thing is being able to recover before something goes too wrong.” Sometimes an email you need gets put in the spam folder, but you can retrieve it. Likewise, you may sometimes need to take control of a self-driving car.

Some skills, like driving and writing, are fun to use, and they carry other benefits as well. Writing teaches us how to analyze information, organize our thoughts, and communicate with and persuade others.

Eger reminds us that AI can generate text, but it doesn’t know what that text means. Humans create meaning. Our ability to connect with other human beings through words is far more important than the ability to generate text.

Human communication allows us to empathize with and understand each other. Understanding and empathy are needed now more than ever, and that’s why human writers remain relevant and important.