A few posts back, I mentioned I would weigh in on artificial intelligence prose generators–“bots”–specifically the much-reported-upon ChatGPT. The media coverage has included everything: hand-wringing, speculation on the extinction of critical thought, predictions about the death of the high school essay or the re-institution of oral and handwritten exams, not to mention worries about spurious content and job loss among educated citizens, as well as wild enthusiasm for automating tedious writing tasks and excitement about where the technology is taking us.
It’s not as new as most people think. AI has been providing customer-service responses and generating basic summary content for news websites for a while; but OpenAI’s platform, which is currently free for anyone to use (and which learns from the prompts and questions each user inputs), has gained adopters so rapidly that those of us who teach writing cannot ignore it. Nor should we ignore it–but neither should we throw up our hands in surrender and predict the end of the art of writing as we know it.
A recent New York Times article reflects the kind of discourse taking place at the institution where I work. It’s fascinating to me to see how quickly the conversations have evolved in the usually slow-moving environment of academia. I find that at my college, my years of laboring with students who lack strong backgrounds in written expression or confidence in their writing have suddenly attracted the attention of full-time faculty members–they want to know how they can tell whether students are using AI assistance to write essays (when said profs have no pedagogical experience in teaching writing) and how to reword their assignments to “fool” the generators, among other pressing questions. These inquiries tend to arrive tinged with slippery-slope anxiety: does this mean academia will go to hell in a handbasket?
I refuse to issue a firm forecast, though my intuition says no; instead, academia, and society, will change.
And despite the daily proven, scientifically accurate, anecdotally obvious FACT that change is normal and indeed necessary, most people (and their societies and institutions) fear change. Hence the media and institutional brouhaha.
Let’s face it, writing can be hard. There will always be people who do not want to do the work of writing from the soul, brain, heart, emotion, experience, dread, you-name-it. Painting is hard, too. But people who don’t want to practice and experiment with visual art can use paint by numbers, clip art, or AI. There will always be a few folks who learn to play an instrument for the joy of it and for the challenge of continually learning new approaches to the process of music making; the rest of us can be audiences, if we like. People who write because they can’t not write? They won’t use bots unless they want to experiment with them: make perverse use of the programs, play with them to see what the human’s skills can do in concert with algorithms, bits, bytes, and data. I know artists who are already collaging with AI-generated art to create new, human-mediated visuals.
I recognize the fear factor here, but I don’t buy into it, because I am so curious about what will happen next. I’m interested to see how changes will occur, which changes will make a difference and which ones will simply vanish, and whether pedagogy will develop toward, away from, or parallel to AI developments in numerous spheres–to name just three of many possibilities. Change is exciting, but it’s also hard. I can’t say I’m excited about adapting my fall semester syllabus to reflect whatever the university decides to do in light of ChatGPT, but since I’ll have to adapt to a new “learning management system” anyway, I may as well accept that “a change is gonna come.”