Artificial Intelligence, Real Grieving

March 29, 2024

Education

As I work with teachers across the country on methods for teaching and assessing writing, the AI question looms larger with each passing week. I find it fascinating that understanding what this means for us as instructors and writers is a genuine grieving process, and everyone is at their own point in it.

The process of comprehending the impact of AI on writing, critical thinking, and the teaching of writing feels too big to contemplate at first glance. This is not just because generative AI is still new enough to feel like magic (honestly, if someone said magic was how it worked, a lot of it would make more sense), but also because trying to understand its implications for the future is overwhelming. It isn’t just a new wrinkle in writing and the teaching of writing; the very nature of both is changing.

Many people are still stuck in the denial stage, which is understandable. It’s hard not to feel like, “Stop it! Keep kids from using it! You know what, just turn it off!” These thoughts will seem charmingly naïve in a few years. I remember when I first started teaching and the heated debates about getting kids to stop using online citation makers. Now, the challenge is getting them to even use online citation makers instead of just plugging in links (which, let’s be honest, is far more logical than adhering to MLA formatting). 

I also remember, a year and a half ago, when I first realized why I was seeing these weirdly well-written essays that didn’t sound like their writers and couldn’t be found in my normal plagiarism searches. First came such earnest denial (“We’ll block it! That’ll do it! We’ll catch all the cheaters who use it!”), and then anger (“Damn it, how many Paradigm Shifts / Sea Changes do I have to do in my career? What if I just coast until I retire?”). And like all grief, it now seems like such a waste of energy.

It’s important to remember that you can’t rush the grieving process. But it’s becoming increasingly futile to try to stop students from using AI, and we’re actually doing them a disservice when we try. The challenge and opportunity lie in guiding students on how to use AI ethically, effectively, and in a way that can lead to an explosion of critical understanding. The urgency is immense. Because a lot of people are about to make a huge leap in critical agency. And a lot of others are about to take a big ol’ step down.

And even writers and educators who have reached the “acceptance” stage will have to combat the slow-moving, reactive bureaucracy of systems whose response is to block generative AI or veeeerrrrry slowly craft policies about it. They will get there eventually. The grieving process is painful, but there is a strange comfort in knowing it is a process. It’s funny to me how much of my seminars now focus on exploring the possibilities and pitfalls of AI, while also trying to counsel people in balancing their current stage of the grieving process and guiding them toward acceptance. All while going through it myself. But hey, that’s how it works, right? That’s how we get there.

About The Byronic Man

Recently voted "The Best Humor Site in America That I, Personally, Write," The Byronic Man is sometimes fiction, but sometimes autobiography. And sometimes cultural criticism. Oh, and occasionally reviews. Okay, it's all those different things, but always humorous. Except on the occasions that it's not. Ah, geez. Look, it's a lot of things, okay? You might like it, is the point.

13 Comments on “Artificial Intelligence, Real Grieving”

  1. angeliquejamail Says:

    Oof. Yeah, this one is a tough one. It’s hard for me to reconcile the usually critical thinking superheroes that my colleagues are with (for a few of them) the headlong rush into a technology we cannot even begin to fathom the real consequences of. It’s queasy-making. Codifying what some of us are going through as grief makes so much more sense. Thank you for this.

    • The Byronic Man Says:

      Where I teach I’m on a leadership team and we were discussing what to focus on next year – which concepts to continue with, add, drop, etc – and I really was hammering that AI is basically the only thing we should be talking about. Most of the staff believes it’s only an issue for Language Arts and Art, others are sort of huddled in a ball saying “someone make a policy that deals with this”. Which I get – forgetting the terrifying global implications, it’s like someone just said to teachers, (“Office Space” manager voice): “Yyyyyeah, if you could just change everything about how you teach… and how you grade… that’d be great. And, if you could do it right now? As you’re teaching and grading? Even though we don’t understand it yet? Thaaaaanks…”

      I’ve sort of grabbed onto the only thing I can grasp that seems like a constant: ethical vs unethical. Where’s the line? How do we use it ethically? But I’m also being really transparent with my students that we’re all fumbling along.

      • angeliquejamail Says:

        I think you’ve hit the nail on the head with all of that. Especially the Office Space part. ;)

At our school we’re focusing significantly on the ethical considerations, teaching the students how to use it in the least-bad, least-unethical ways possible. Are they going to get it right every time? Not a chance. Is it better than nothing? Absolutely.

GenAI freaks me out completely. I focus in my classes on what it robs them of if they use it in CW for idea generation, outlining, drafting, and editing, since those are the skills I’m teaching them, and we also have healthy discussions about how it has impacted the publishing and entertainment industries in particular. I’d say the conversation has been fruitful, but I’m still about as far from feeling sanguine about any of it as it is possible to be.

        We just don’t know how this will all end up.

  2. BrainRants Says:

    I just had a similar conversation with my evil genius wife. Long and short of it was, AI – just like double-edged steel swords and nuclear weapons – is here now. My only impressive contribution to the conversation was this: “Well, I’d say the most deadly and dangerous thing mankind ever invented was religion, so there’s that.”

    • The Byronic Man Says:

Yeah, I can vacillate wildly – my wife describes me as a “perpetually wounded idealist” – which is not wrong – and I can be super excited about new technologies for the potential good they can unlock (I have a sweetly/sadly naive PowerPoint somewhere from around 2006 all about how the internet will create a renaissance for ethical, independent businesses and social connection). And that’s true here, but there’s this constant sense of “Oh… oh no… this is going to be bad, isn’t it…” hanging over it. Like, people who know such things have been saying for 15 years, “AI is coming, and we want to set up safeguards now, because once it’s here it’s too late,” and… now we see why (even as we’re at a stage that will certainly seem almost quaint in 5 years).

  3. valenciartist Says:

In a sense I am all in favour of technology and of its development and implementation. It has made our life much easier and more enjoyable (in many ways, not that our life was hard and boring back in the ’80s), but where I think AI becomes a big problem is when it is used by children who don’t know the “real” way of doing things and can come to think that things are done through AI. I mean things like painting, writing, composing music… Many artists I know are totally against AI while others are taking advantage of it as a tool. In some ways it reminds me of the synthesiser and how many musicians claimed it would be the end of the actual band. The band continued… This is much bigger, so we will have to wait and see.

    • The Byronic Man Says:

      I agree about the danger to kids – it feels almost frantically urgent to me because they need to start learning these skills now. And, of course, everyone’s just trying to figure it out as we go, figuring out what it is, how it works, and the implications. A friend of mine who’s a professional artist works only in AI right now and gets a LOT of heat from fellow artists, saying that she’s not creating anything. Her response is that all art draws from influences and established techniques (I mean, consider collage); that she’s crafting things through the specificity of how she articulates them. It’s an interesting debate that feels like a microcosm of this really really huge issue

      • valenciartist Says:

With reference to your artist friend and her arguments, I don’t think she’s going to get much support from traditional, professional artists; perhaps other artists like her might support her reasoning. But to me it does not make sense. Borrowing, or stealing, from the past or from others is done, but you must do it with your skills with the brush, the pencil or with your modality, not by typing a description into a programme. That can be done by non-artists as well, and they will achieve the same results. I know artists that use AI, but only as a tool, or to illustrate some text, or as a teaching tool if they are teaching, but not to say “here is my artwork” to the world… It is a huge issue, I agree. I would suggest that “artists” who only work with AI label themselves as such, to not confuse their public (if they’ve any)…

  4. Endless Weekend Says:

Isn’t the act of considering the implications of AI on children’s critical thinking (forgive the pun 🙃) critical to addressing the issue?
    Like the introduction of social media, there are significant implications to children (and adults…) that we are still struggling to understand…

  5. Michael Wegner Says:

It used to be slide rules. Then calculators. Now we put computers in the classrooms, and kids bring smartphones to school. Technology can’t be stopped, but it can be steered. Teachers and parents, especially parents, have to teach ethical behavior.

    • The Byronic Man Says:

      I agree – and the challenge (as it ever has been) is trying to teach people who are acclimated to carrot/stick motivators to learn ethical behaviors and disciplines for their own sake – for what is gained – rather than purely for some external reward. Huge opportunity/challenge

  6. Ms. G Says:

    Fascinating. I hadn’t thought of this as a grieving process. YUP. It is.

  7. Jakob Ryce Says:

Very true. As an English tutor and writer I can definitely see the impacts of AI in all facets of writing and learning. It has made me enjoy my teaching more, however, as I value the academic connections I make with my students, since there are a lot of conversations to be had around these topics.

    However, as a copywriter… my work has dried up completely. Without blowing my self promotion trumpet too much, I did write an article all about how AI has infiltrated the copywriting industry and how we might combat this as our jobs are slowly replaced. You can read this here: https://jakobryce.com/2024/04/16/the-pen-vs-the-algorithm/

I think the challenge is to accept that these technologies are here to stay and are only going to get smarter. On a positive note, each AI tool I have tried (Gemini, ChatGPT, etc.) seems to be programmed purely to assist and encourage us to improve our human productivity, which is good; and so I feel the responsibility for job replacement lies more on the shoulders of agencies and companies who feel it’s OK to replace writers with machines. The saying “Don’t shit in your own nest” comes to mind.
