ChatGPT, Thinking Chamber or Thought Stopper?

By Sabrina Dunn | May 24, 2023 | Student Essay Awards, Student Writings

As someone with moderate experience with ChatGPT, I think that it has both benefits and drawbacks. ChatGPT, as I personally have used it, has been successful in helping me when I get stuck on something. When writing papers, scripts, and the like, I sometimes get stuck, with no idea what direction the piece could go in and no way to end it. In those moments I can paste in my work and ask ChatGPT to write an ending. Usually it produces a number of different endings, showing me all the ways I could go with my paper. From there my mind is sparked, and I might take pieces from the suggestions to form my own unique ending.

In this way I find ChatGPT immensely useful and not harmful. While it could be used directly to plagiarize, using it in this way to spark ideas is no different than speaking to a well-educated professor or peer about the paper — except that it saves time since ChatGPT can both read your paper and also reveal suggestions much faster, if not instantaneously.

Concerns come when people ask ChatGPT to write an entire essay/script/work, copy/paste that essay into their own document, and edit some parts to their liking. Within minutes they have “their own” essay. This is problematic because it’s plagiarism. And if you want to debate that you can’t plagiarize an AI, then it’s at least not a paper that you have written. It gets an assignment done earlier, but the student hasn’t learned anything, hasn’t really written anything. They lose the essential writing skills that will benefit them in the future.

But then the question becomes: with an AI like ChatGPT existing now, will these essential writing skills even be necessary? Yes and no. I think there are ideas that the AI will not think of, that it is incapable of thinking of because some ideas have not been thought of yet or are based on personal experience. ChatGPT can try to emulate these ideas but has obviously not ever experienced them. This is why humans need to write, converse, and discuss ideas that a site could not come up with.

As far as the ability to write a paper in general, I think such things might become obsolete in the future due to sites like ChatGPT, Grammarly, and others. The formality of writing a paper, with its form, structure, etc., is not necessary to learn in my opinion. There are so many different “standards” of writing, such as MLA, APA, etc. If an AI could be used to transform your paper between formats, I don’t see that being an issue, because I don’t think that skill is necessary or useful to humanity and writers.

I think ChatGPT increases the ability to come up with alternate paths for the works we have. This isn’t inherently a bad thing, as stated before, but it does raise a question about what happens to our minds if we rely on something else to think for us. When I’ve used ChatGPT, I’ve often reacted, “I would have never thought of that.”

With overuse of the site, is it possible we’ll stop thinking at all? Stop coming up with any ideas because the ideas can be formed for us? Maybe. I think this is the greater danger of the site (as well as a potential Terminator situation, but I won’t get into that). Having an AI to discuss, to bounce ideas off of, to enhance your work, to have high-level conversations with about your works, is extremely useful. It allows people who don’t normally have access to minds in the higher, scholarly conversational field to attain this same level of conversation from their homes.

To ensure learning, it must be used only in limited ways, and even then there are still errors with it. When asking ChatGPT to complete a bigger work, it often misses certain key moments or ideas. It may misinterpret conversations in a story to mean something else, resulting in a goofy ending. It may take things in a direction that the work certainly could go in, but that nobody would ever take. AI is not some foolproof thing that will overtake us (…yet).

While ChatGPT still makes these seemingly silly mistakes, I think it is safe, or at the very least its use is detectable; it’s obvious when something was written by an AI rather than a person. When AI becomes so aware that there is no discernible difference between a real, all-knowing human (if such a person exists) and ChatGPT, that is when we should really all become worried. As far as its use in academia, it should be monitored if possible. Current software can’t detect plagiarism from ChatGPT, and I would encourage us to create a program that can.