Automatic writing*

A few posts back, I mentioned I would weigh in on artificial intelligence prose generators–“bots”–specifically the much-reported-upon ChatGPT. The media coverage has included everything: hand-wringing, speculation on the extinction of critical thought, predictions about the death of the high school essay or the re-institution of oral and handwritten exams, not to mention worries about spurious content and job loss among educated citizens, as well as wild enthusiasm for automating tedious writing tasks and excitement about where tech is taking us.

It’s not as new as most people think. AI has been providing customer-service responses and generating basic summary content for news-related websites for a while; but OpenAI’s platform, which is currently free for anyone to use (not to mention educating itself as each user inputs prompts and questions), has so rapidly gained “tech adopters” that those of us who teach writing cannot ignore it completely. And we shouldn’t ignore it, but neither should we throw our hands up in surrender and predict the end of the art of writing as we know it.

A recent New York Times article reflects the kind of discourse taking place at the institution where I work. It’s fascinating to me to see how quickly the conversations have evolved in the usually slow-moving environment of academia. I find that at my college, my years of laboring with students who lack strong backgrounds in written expression or confidence in their writing have suddenly attracted the attention of full-time faculty members–they want to know how they can tell whether students are using AI assistance to write essays (when said profs have no pedagogical experience in writing) and how to change the wording of their assignments to “fool” the programmed generators, among other pressing questions. These inquiries tend to come tinged with a sense of the slippery-slope fallacy: does this mean academia will go to hell in a handbasket?

I refuse to offer a firm forecast, though my intuition says no; instead, academia, and society, will change.

And despite the daily-proven, scientifically-accurate, anecdotally-obvious FACT that change is normal and indeed necessary, most people (and their societies and institutions) fear change. Hence, the media and institutional brouhaha.

Let’s face it, writing can be hard. There will always be people who do not want to do the work of writing from the soul, brain, heart, emotion, experience, dread, you-name-it. Painting is hard, too. But people who don’t want to practice and experiment with visual art can use paint by numbers, clip art, or AI. There will always be a few folks who learn to play an instrument for the joy of it and for the challenge of continually learning new approaches to the process of music making; the rest of us can be audiences, if we like. People who write because they can’t not write? They won’t use bots unless they want to experiment with them: make perverse use of the programs, play with them to see what the human’s skills can do in concert with algorithms, bits, bytes, and data. I know artists who are already collaging with AI-generated art to create new, human-mediated visuals.

I recognize the fear factor here, but I don’t buy into it because I am so curious about what will happen next. I’m interested to see how changes will occur, which changes will make a difference and which ones will just vanish, and whether pedagogy will develop toward, away from, or parallel to AI developments in numerous spheres–to name just three of numerous possibilities. Change is exciting, but it’s also hard. I can’t say I am as excited about adapting my fall semester syllabus to reflect whatever the university decides to do in light of ChatGPT, but since I’ll have to adapt to a new “learning management system” anyway, I may as well accept that “a change is gonna come.”


*For the definition of automatic writing, Wikipedia has a fairly complete page.


8 comments on “Automatic writing*”

  1. I asked ChatGPT to compose a post on the opportunities and concerns its use may have for society. Here is what it wrote:

    ‘While this technology has the potential to be incredibly useful, it also presents some challenges and opportunities for society.

    On the one hand, AI text generators can help to reduce the human effort required for a variety of tasks. For example, automated customer service responses can save businesses time and money. However, this technology also raises some important ethical questions. How should AI-generated text be regulated? Should it be subject to the same laws and standards as human-generated text?

    On the other hand, AI text generators also bring with them some exciting new opportunities. For example, they can be used to create new forms of entertainment, such as interactive stories and virtual worlds. They can also be used to generate new insights from data, such as providing automated reports on complex topics.

    Ultimately, AI text generators present both risks and opportunities for society. It’s important that we consider both the potential benefits and potential risks of this technology, and create regulations and standards that ensure its responsible use.’


  2. I love this take, as someone skeptical of the doom-saying!


  3. […] Ann E. Michael, Automatic writing […]


  4. KM Huber says:

    I’m not sure whether or not this is germane but increasingly, I must rely on Apple dictation, if I am to write at all. Apple dictation is really quite good and works pretty well to comment on a blog post or straightforward conversation in a chat. However, in writing fiction or a blog post, predictive text doesn’t always appreciate my sentences or word choice. Some of that is enunciation when I talk too fast but in fiction, especially, there seems to be little appreciation for metaphor, at least the way I choose to make one.

    I had not heard of ChatGPT. Quite interesting.

