Why I Don’t Use A.I. in My Writing, Research, or Illustration
Plus, the links: How to deal with a mom teaching anti-fat bias to her daughter, sleep-tracker woes, RFK Jr.'s vaccine-committee overhaul, and more

Welcome to another installment of the Rethinking Wellness link roundup! Twice a month I share a small selection of links from around the internet that are relevant to the conversations we have here, along with some quick takes and occasional deeper dives for paid subscribers.
This time the take/dive is about why I don’t use A.I. in my writing or research, or in the illustrations accompanying my posts, with reference to a recent piece of A.I. bullishness in the New York Times.
Links
Here are some pieces that got me thinking in the past few weeks. I found value in all of these, but links are not endorsements of every single detail in the piece or everything the writer ever wrote.
Help! My Niece Is Learning [Anti-Fat Bias] From Her Mother. I Can’t Let This Happen. (Jenée Desmond-Harris for Slate, with quotes from me)
RFK Jr. guts the U.S. vaccine policy committee
Related: RFK Jr. taps allies and COVID vaccine critics among picks for CDC advisory panel. Here's who's on the list. (CBS)
Related: Kennedy Announces Eight New Members of C.D.C. Vaccine Advisory Panel (NYT)
RFK Jr.’s Letter to Congress
Kennedy Says ‘Charlatans’ Are No Reason to Block Unproven Stem Cell Treatments (NYT)
Are Sleep Trackers Making Us Ontologically Insecure? (NY Mag)
Black plastic spatulas, anti-vaccine fears, and the illusion of control
In Case You Missed It
Take/Dive: Why I Don’t Use A.I. in My Writing or Research
Recently someone I was working with brought me a beautiful piece of writing crafted with the help of generative A.I.1 It totally nailed the voice I was going for, and it captured sentiments I was too busy or distracted or just plain lazy to put into words myself. I was tempted to publish it but ultimately decided not to: Over the years I’ve made it a personal policy not to use generative A.I. in my writing or research, or in the images accompanying my posts (though I do use a headline analyzer “powered by A.I.,” and my team uses an A.I. transcription tool to create rough transcripts of the podcast, which a human editor then cleans up).
I was trying to think through all the reasons why I don’t feel comfortable using the technology in those aspects of my work, when I came across this week’s NYT piece about A.I. by Kevin Roose and Casey Newton—two tech writers whose thoughtful approach I generally appreciate, even if I don’t always agree with them about everything. In this piece (which was a transcript of a conversation between the two writers, who also co-host the podcast “Hard Fork”), I thought they seemed overly bullish on A.I. The conversation was full of cheery anecdotes about how useful and integral A.I. has become to them and the people they know—they use it for interior decorating, for recipes, for journalistic research. A friend of Roose’s uses ChatGPT “voice mode” to replace podcasts. And Newton describes his boyfriend, a software engineer at Anthropic, as “probably the biggest power user I know…he will give his A.I. assistant various tasks, and then step away for long stretches while it writes and rewrites code. A significant portion of his job is essentially just supervising an A.I.” To me, those stories felt kind of sad and dystopian—not to mention concerning from a misinformation standpoint, given how unreliable A.I. often is at research. But to Roose and Newton, these anecdotes were evidence of how ubiquitous and “genuinely useful” the technology has become.
There was one section that resonated with me, though: As Roose said, “I think that yes, people will lose opportunities and jobs because of this technology, but I wonder if it’s going to catalyze some counterreaction…a kind of creative renaissance for things that feel real and human and aren’t just outputs from some A.I. company’s slop machine.” (In between those ellipses, he compared this situation to the emergence of the slow-food and farm-to-table movements as a reaction to fast food, which I didn’t love: though I get the comparison, I’ve come to see how those movements and their moralization of food helped pave the way for the orthorexic MAHA moment we’re in now.) Roose’s 2021 book, Futureproof: 9 Rules for Surviving in the Age of A.I., is essentially a manifesto for this “creative renaissance for things that feel real and human,” so I found it a little odd in this piece to see him speak approvingly of an acquaintance who had “started using ChatGPT as her therapist after her regular human therapist doubled her rates.” But I do appreciate his acknowledging the fact that some people aren’t as happy to integrate A.I. into their daily lives as he and Newton seem to be, and that for some of us the human approach will always carry greater value.
This piece gave me the push I needed to try to articulate my many misgivings about this technology. So today, at the risk of coming across as moralizing about A.I., I want to explain why I’m against using it for writing, research, or illustrating my posts. I’ll also share some ways you might consider protecting yourself when using A.I. in your own life.
1. A.I. can short-circuit critical thinking.
One of my central aims with Rethinking Wellness is to encourage readers/listeners to think critically about wellness and diet claims, and to help you acquire skills that you can use to parse any new claims or trends you come across in the future. I want to support you in thinking for yourself—weighing evidence, considering your own needs and goals, and coming to your own conclusions based on accurate information and a true understanding of risks and benefits. And to achieve that goal, I need to model how to do those things, too. I need to practice what I preach by digging deep into the evidence and doing the sometimes-boring slog through scientific research to see what I actually think of it.
If I were to use A.I. to summarize the research for me, it would bypass this kind of critical engagement. (It could also lead to a world of mis- and disinformation, which we’ll come back to shortly.) Even when I’ve tried just using A.I. as a jumping-off point for further research on my own, I’ve found that it often ends up being more work to try to figure out what’s accurate and what’s missing from the A.I. overview. It wastes time I could be using to find the best available evidence, understand the nuances, and develop my own analysis of the topic at hand.
The same is true for writing: If I used A.I. to turn my research into a cohesive written piece (which it could do in a matter of seconds, given the right prompts), I would miss out on all the critical thinking that the writing process requires. Shaping a mountain of quotes and notes into an outline and then a rough draft, grappling with what to keep or cut, tinkering with word choices, developing or condensing ideas: these are all essential aspects of the act of writing that require careful reflection and analysis, which helps me understand the issue in a much deeper way. Writing helps me learn and make sense of the facts.
I know we’re all short on time, and we can’t be experts in everything. In my non-professional life, I often just want to find answers to everyday questions as quickly as possible. But I still think it’s worth putting in a few extra minutes to find quality sources of information and cross-reference them with each other, rather than just going with whatever A.I. spits out. That’s especially true when it comes to health information, which A.I. can get wildly wrong.
2. A.I. is still making sh*t up.
Probably the biggest reason I don’t use A.I. in my work is that I don’t trust it to get the facts right.
In 2023, I wrote about how A.I. spreads health misinformation, including by co-signing dubious diagnoses like “adrenal fatigue” and “chronic candida” and giving recommendations for equally spurious cures. Just yesterday, I asked Google’s A.I. how to cure chronic candida, thinking maybe it would’ve gotten wise by now. Instead, it recommended that I “consider a candida cleanse diet” and limit or eliminate sugar and refined carbs. Nowhere did it mention the fact that the concept of “chronic candida” as popularized by wellness culture is largely pseudoscience—or that cutting out sugar to “starve the yeast” is not evidence-based.
In my personal life I sometimes look at the A.I. summary when searching for health content, partly to see if there are any good jumping-off points and partly just out of curiosity. But more often than not, I find that the summaries are wrong or unverifiable. Sometimes they rely on outdated information even when the science has evolved. Frequently when I click the links to the A.I.’s stated sources, I can’t find any actual support for its claims. The general background may be correct, but the specific details often appear made up.
Some A.I. enthusiasts argue that misinformation isn’t so much of a problem on the platforms anymore; in the Times piece, Newton quotes Anthropic’s CEO as saying that humans are now more likely to “hallucinate” false facts than A.I. (a claim Newton treats with appropriate skepticism). But in my experience, health misinformation is alive and well on A.I. platforms—which is why I don’t trust them to help me with my reporting here (or with managing my personal health).
If you’re looking for trustworthy information about health and wellness, there are far better places to look—check out this piece for more on how to find reliable sources.
3. I want humanity to win.
I value my humanity and that of other writers, artists, and people in general. I believe that my quirks, my flaws, my uniquely weird turns of phrase, the ways I often fall short and might occasionally nail an idea, are part of what makes me me, makes me human. A.I. erases those things. A couple years ago I played around with using ChatGPT to do some final editing and polishing on a few articles, and I thought it made me sound “better” and clearer but ultimately less like myself. I tried doing another pass or two to re-inject more of my own voice, but I quickly realized I was having to edit almost every line and often reverting to my original draft, one sentence at a time. I fired ChatGPT as my editor within a matter of days.
The biggest thing I took from Roose’s book Futureproof is that humans will never be able to outcompete the robots by becoming more like robots ourselves; instead, we need to focus on what makes us human, and do more of that. One of those things is to “leave handprints” in our work—put in personal touches, attention to craftsmanship, human effort and detail that no machine or bot could match. I want to preserve the handprints in my work, so I keep A.I.’s robotic paws out of it.
It’s possible A.I. will end up making my job obsolete anyway, who knows. But I don’t want to hasten my own demise by removing everything in my work that’s unique to me, by making myself sound like a quippy amalgam of everyone else on the internet (which is what the ChatGPT edits felt like).
I know it’s not possible in every profession to avoid using A.I., especially when everyone else is doing it and productivity goals have shifted as a result. I know younger journalists and content creators are facing pressures I don’t at this point, 23 years into my career. But to the extent that we’re able to, I wonder what it would be like if more of us resisted A.I. creep in our jobs and deliberately emphasized our humanity. How can we put more handprints in our work? How can we show up as uniquely ourselves—and where can we be valued for it? How can we create more spaces where this is possible for more people?
4. I signed up to be a journalist and dietitian, not an A.I. manager.
There are annoying things about every job, but on the whole I’m very lucky to be able to do what I love and love what I do. I studied and trained to be a journalist, and later a registered dietitian, and now I do both—and that’s what I hope to continue doing for as long as I’m able. I don’t want my job to morph into what Newton’s boyfriend does, “essentially just supervising an A.I.” For some people that might be fun—and I guess if you’re a software engineer at Anthropic, you knew what you were getting into. But that’s not my idea of a good time, and I’m going to do everything in my power to make sure it doesn’t become my job.
5. A.I. has ethical problems.
Generative A.I. models were trained on millions of pirated books (including mine), scientific papers, and works of art, all without any form of compensation to the creators. And now these A.I. platforms are competing with and undercutting the very artists and authors whose work they stole. I hope the many class-action lawsuits making their way through the courts ultimately come out in favor of the creators (myself included), but regardless: I don’t want to use tools that were developed in this way, nor do I want to feed them any more of my data. I know there’s no ethical consumption under capitalism, but I don’t want to participate in this economy if I don’t have to.
I also have ethical qualms about the role of A.I. in journalism and media. Although generative A.I. is so new that these ethical considerations are still being ironed out, I think at the very least it’s reasonable to expect some guardrails and transparency about whether and how media companies are using A.I., and in my view they’d gain a lot more credibility if they eschewed it completely for writing, illustration, and most forms of research (though this piece does make a good case for some limited exceptions in certain types of research). Given all the issues discussed above with misinformation—not to mention the fact that A.I. platforms can sometimes inadvertently plagiarize existing work—using them to generate journalistic text is extremely risky.
There are many other ethical issues with A.I., but I’ll end with this: I feel that if I have an audience who’s paying to read/listen to my content (i.e. you), I owe you my best, most original work—and that means no A.I. Clearly some popular Substackers feel differently, and I understand that some audiences don’t necessarily mind. Maybe you don’t even care if I use A.I. or not! (Though if you’ve read this far, I have a feeling you might.) I don’t expect everyone to feel as strongly about this issue as I do, and I’m open to the fact that I might sound like a backwards scold to some people—or even that I might be wrong about all of this. Still, here’s my pledge to you: I’ll always strive to make high-quality and trustworthy work that helps you in some way, without the help—or harms—of A.I.
*
I also recognize that I may not always feel this way about A.I.—another aspect of my humanity is that I have the right to change my mind. In the future maybe there will be some form of generative A.I. that doesn’t have all these problems and will be too good to pass up. (If that ever happens, I’ll let you know.) Or maybe one day our A.I. overlords will compel everyone to use it, and we’ll have no choice!
But from where I sit now, and as long as I’m able to make my own decisions based on my values, I don’t see generative A.I. having any place in my writing, research, or illustration. Whatever flaws and fingerprints you see in Rethinking Wellness, you can be sure that it’s made by (and for) humans.
*
I hope this roundup gave you some food for thought, and I’d love to hear from you. Also, please let me know if there are any recent pieces (published within the last few weeks) you’d like me to consider for the next installment! Feel free to comment below, or submit them here.
1. In this piece, when I say “A.I.” I’m talking about generative artificial intelligence—A.I. models like ChatGPT and other apps that can generate text, images, or videos in response to user prompts.