Rethinking Wellness

Why I Don’t Use A.I. in My Writing, Research, or Illustration

Plus, the links: How to deal with a mom teaching anti-fat bias to her daughter, sleep-tracker woes, RFK Jr.'s vaccine-committee overhaul, and more

Christy Harrison, MPH, RD
Jun 18, 2025

Photo by Tamanna Rumee on Unsplash

Welcome to another installment of the Rethinking Wellness link roundup! Twice a month I share a small selection of links from around the internet that are relevant to the conversations we have here, along with some quick takes and occasional deeper dives for paid subscribers.

This time the take/dive is about why I don’t use A.I. in my writing or research, or in the illustrations accompanying my posts, with reference to a recent piece of A.I. bullishness in the New York Times.

Links

Here are some pieces that got me thinking in the past few weeks. I found value in all of these, but a link is not an endorsement of every detail in the piece or of everything else the writer has written.

Help! My Niece Is Learning [Anti-Fat Bias] From Her Mother. I Can’t Let This Happen. (Jenée Desmond-Harris for Slate, with quotes from me)

RFK Jr. guts the U.S. vaccine policy committee (Katelyn Jetelina)

  • Related: RFK Jr. taps allies and COVID vaccine critics among picks for CDC advisory panel. Here's who's on the list. (CBS)

  • Related: Kennedy Announces Eight New Members of C.D.C. Vaccine Advisory Panel (NYT)

RFK Jr.’s Letter to Congress (Paul Offit)

Kennedy Says ‘Charlatans’ Are No Reason to Block Unproven Stem Cell Treatments (NYT)

Are Sleep Trackers Making Us Ontologically Insecure? (NY Mag)

Black plastic spatulas, anti-vaccine fears, and the illusion of control (Mara Gordon, MD)

In Case You Missed It

Why Rigid Food Rules Sometimes Lead to Secret Binges (It's Not Just You) (Christy Harrison, MPH, RD · Jun 4)

Menopause Misinformation, and How to Set Boundaries on Wellness Talk (Christy Harrison, MPH, RD · Jun 9)

Why Wellness Misinformation Is Rampant, and How to Avoid It - with Matthew Facciani (Christy Harrison, MPH, RD and Matthew Facciani · Jun 16)

Take/Dive: Why I Don’t Use A.I. in My Writing or Research

Recently someone I was working with brought me a beautiful piece of writing crafted with the help of generative A.I.1 It totally nailed the voice I was going for, and it captured sentiments I was too busy or distracted or just plain lazy to put into words myself. I was tempted to publish it but ultimately decided not to: Over the years I’ve made it a personal policy not to use generative A.I. in my writing or research, or in the images accompanying my posts (though I do use a headline analyzer “powered by A.I.,” and my team uses an A.I. transcription tool to create rough transcripts of the podcast, which a human editor then cleans up).

I was trying to think through all the reasons why I don’t feel comfortable using the technology in those aspects of my work, when I came across this week’s NYT piece about A.I. by Kevin Roose and Casey Newton—two tech writers whose thoughtful approach I generally appreciate, even if I don’t always agree with them about everything. In this piece (which was a transcript of a conversation between the two writers, who also co-host the podcast “Hard Fork”), I thought they seemed overly bullish on A.I. The conversation was full of cheery anecdotes about how useful and integral A.I. has become to them and the people they know—they use it for interior decorating, for recipes, for journalistic research. A friend of Roose’s uses ChatGPT “voice mode” to replace podcasts. And Newton describes his boyfriend, a software engineer at Anthropic, as “probably the biggest power user I know…he will give his A.I. assistant various tasks, and then step away for long stretches while it writes and rewrites code. A significant portion of his job is essentially just supervising an A.I.” To me, those stories felt kind of sad and dystopian—not to mention concerning from a misinformation standpoint, given how unreliable A.I. often is at research. But to Roose and Newton, these anecdotes were evidence of how ubiquitous and “genuinely useful” the technology has become.

There was one section that resonated with me, though: As Roose said, “I think that yes, people will lose opportunities and jobs because of this technology, but I wonder if it’s going to catalyze some counterreaction…a kind of creative renaissance for things that feel real and human and aren’t just outputs from some A.I. company’s slop machine.” (In between those ellipses, he compared this situation to the emergence of the slow-food and farm-to-table movements as a reaction to fast food, which I didn’t love: though I get the comparison, I’ve come to see how those movements and their moralization of food helped pave the way for the orthorexic MAHA moment we’re in now.) Roose’s 2021 book, Futureproof: 9 Rules for Surviving in the Age of A.I., is essentially a manifesto for this “creative renaissance for things that feel real and human,” so I found it a little odd in this piece to see him speak approvingly of an acquaintance who had “started using ChatGPT as her therapist after her regular human therapist doubled her rates.” But I do appreciate his acknowledging the fact that some people aren’t as happy to integrate A.I. into their daily lives as he and Newton seem to be, and that for some of us the human approach will always carry greater value.

This piece gave me the push I needed to try to articulate my many misgivings about this technology. So today, at the risk of coming across as moralizing about A.I., I want to explain why I’m against using it for writing, research, or illustrating my posts. I’ll also share some ways you might consider protecting yourself when using A.I. in your own life.

This post is for paid subscribers
