In 2022, when ChatGPT was released to the public, educators everywhere asked themselves the same question: What are we going to do about this?

The possibility that we would no longer be able to tell whether a student wrote their own essay or response was disheartening and terrifying. If students could simply let a machine do the work for them, how would they hone the skills they need for their futures in college, business or the trades? Two years later, the answer for some schools is not to fight AI but to accept it into the classroom, as we did with Google and Microsoft Word. This is the wrong path for education to take, for a host of ethical reasons.

ChatGPT and its competitors are not, strictly speaking, artificial intelligence, but rather large language models (LLMs). Trained in part by scraping the internet for writing and programmed to mimic what they collect, their algorithms produce highly advanced predictive text (think autocomplete, but better). The makers of these LLMs, however, do not ask writers for permission to use their copyrighted works, nor do they give those authors attribution or compensation. The mere existence of ChatGPT and other LLMs is unethical; they are built on copyright infringement, and using them in the classroom condones that behavior. If these companies can plagiarize, why can’t students?

In a school setting, no one is arguing that responses created by LLMs should be copied word for word and turned in; the argument is that LLMs can be used for brainstorming or for organizing a student’s thoughts. Even that use goes against the very core of education. One of education’s goals is to teach students critical thinking, organization and analysis. How will students build those skills if they rely on AI to do the work for them? The answer is they won’t. AI will become the reading and writing equivalent of following your GPS the second or third time you go to a restaurant rather than memorizing the route. It’s a machine that rewards laziness instead of incentivizing growth.

Using AI for brainstorming, editing or organization also removes humanity from the creative process. Asking AI for suggestions on how to fix a story or essay robs others of the chance to engage closely with someone’s writing. Students miss the opportunity for teachers and peers to share their experiences, make connections and offer expertise. Including others in the creative process builds relationships – another benefit of school that doesn’t get enough attention. Connection to others reduces loneliness and its insidious effects on mental and physical health.

Some educators may ask what the difference is between bringing AI into the classroom and the earlier introduction of search engines like Google or of word processors. The difference is that neither of those tools replaced the human in the creative process. You still had to type the whole essay; you still had to use the search engine to find sources and read them to get the information you needed. Those tools expanded human knowledge. ChatGPT and other LLMs do the opposite: by taking out the work, they take out the learning.

There simply is no ethical argument for the use of AI in the classroom.

Not only were these machines created by unethical means, but they also discourage critical thinking, short-circuit the learning process and disconnect students from one another. AI is a new technology. While it may seem to be everywhere already, we as a society still have a choice about where this technology is used – and whether we should be using it at all. Given its unethical nature, the question of whether it belongs in the classroom is a no-brainer.
