I Struggled to Find a Job After College. To Pay Rent, I Started Doing Something Highly Controversial.
March 20, 2026
When I graduated from UC–Berkeley with my “useless” comparative literature degree, into one of the bleakest job markets in recent American memory, I thought to myself, There must be a loophole somewhere. That was what brought me to marketing myself as an “academic editor” and an “admissions essay advisor” on various freelancing websites last fall. I figured I had done my fair share of editing for friends throughout the years, and I needed another gig to supplement my inconsistent substitute-teaching paychecks. But I soon realized that one job description could help pay my rent more than most: “A.I. humanizer.”
It’s an A.I. world, and we’re all just living in it. But somehow, I have managed to defy the odds, becoming the rare outlier to not only protect my job from the A.I. monster’s bite but profit from its terrors. While I maintain my fair share of ethical quandaries regarding the specifics of this hushed-up day job, I do not have the luxury of abandoning the morally murky role until another, more dignified way of paying my rent becomes available. So, to reconcile my disgust for the A.I. monster, and the way I feed it, I give you my confession: I am an A.I. humanizer. This is how I turn chatbot-generated personal statements into shining portraits of undeserving applicants, for a price.
I didn’t plan on this career path. I imagined that my literature degree would catapult me into the offices of literary magazines, publishing houses, or graduate classrooms, where people pore over words with admiration the way I do—not into meetings with clients who take every shortcut possible to avoid writing their own. These fantasies, however, were quickly dispelled. I became familiar with the LinkedIn void, a phenomenon that haunts all my fellow graduates in the form of hourly spam emails and “entry-level” job postings (positions that actually require two to four years of experience and already have 100-plus applicants). After one too many summer months of hopeless applications, all I had was a job at a substitute-teaching agency, where I would be lucky to get work three days a week. I created a profile on the freelancing website Upwork. I knew that college application season was beginning, and I hoped that while substitute teaching, I could spam through enough 600-word personal statements a week to cover my half of the rent, at least for a couple of months, until something better came along.
One fateful September afternoon, I received a message on Upwork. This would-be client had “written” (i.e., prompted) their way through a rough draft of their first college application essay. The client requested that I rewrite the essay to have a more personal voice—to be more “authentic.” The initial conversation and contract negotiation between a client and me is a delicate dance—one I have now mastered. On Upwork, there are strict regulations surrounding academic dishonesty that restrict the types of services I can advertise and what clients can officially propose. “A.I. humanization” is a growing profession, and for industries beyond academia, these words can be said out in the open. But if the work will be submitted for a grade, or to a university for admissions decisions, I cannot market any form of ghostwriting or rewriting. I cannot even state that I will “polish” individual sentences without my contract proposal being disqualified. A client also cannot ask for those services in a contract without being blocked. But there are loopholes. While the official contract cannot explicitly mention the practice of ghostwriting, rewriting, or in-line editing, it can acknowledge revision, commentary, and any form of feedback. In direct messages between client and freelancer, matters of A.I. usage and ghostwriting can be discussed.
I wish I could say that when I received my first A.I.-humanization request, I felt more apprehensive about taking the job. I wish I could tell you that my staunch hatred of this technology, especially in academics, made me turn my head in disgust. My financial reality, however, left less space for such moral dilemmas. I needed the cash. I figured, at 60 bucks for every 600 words (which I could rewrite in an hour with my eyes closed), I could make rent in a week—even a few days if I typed fast enough. In situations when clients wanted more-substantial rewrites, I could charge a few hundred dollars for an essay. Most of my clients needed not one essay rewritten, but 15. By the height of the editing season, I was working with upward of 20 clients a week.
Some of those clients are middlemen, running their own application counseling services overseas and asking me to rewrite hundreds of essays that were translated into English using A.I. My first month, with no client history or experience on Upwork, I made about $2,000. That number only snowballed, and I nearly paused all my substitute teaching to keep up with demand. By the last month of application season, I made nearly $7,000—more money than my friends who had sold their souls to corporate America in a postgrad panic. Of course, the financial gains required the selling of my soul too.
The task is simple: rewrite sentences one by one until the essay passes various A.I. checkers like Originality.ai, GPTZero, or ZeroGPT. While none of my formal education prepared me for this type of editing, the largely one-dimensional style of bot writing is always easy to detect. The death by em dash. The constant delving into critical issues in today’s modern landscapes. Every essay I receive comes littered with sentences following the structure “It’s not X; it’s Y.” Or, when the bot feels sassy: “Not X. Not Y. But Z.”
I find it incredibly telling that A.I.’s favorite way to describe any phenomenon is via evasion, or telling us what something is not. This, to me, represents a bot’s incapacity to actually create (despite all it generates), because creation requires a unique and autonomous relationship with the world. To create, one must act within the world. The process of creation is therefore one of reflection. A bot, however, relies on a body of (unconsented) data collection, meaning all it can do by way of describing the personal experience of a prompter is fill an essay with anecdotes or clichés that do not represent the user’s experience but can pretend their way through it.
The bot’s final product is exactly that: an essay that pretends to divulge, to confess, to promise, and to portray. The essay reads more like an idea of an essay, the skeleton of reflection with no meat. This writing style works just fine for a corporate slide deck that is equally disconnected from the lived world. But for the admissions essay, the dry and uninspired robot voice turns one teenager after the next into only the archetype of a teenager, writing like a grown-up. I imagine the A.I. bot like a child playing dress-up, donning an oversized blazer and glasses for a game of “businessman.” The bot and the baby alike know nothing of the world they describe, besides a handful of overused jargon that, like anything, loses its meaning if repeated enough times.
Here, the true tragedy hides: Applicants today would rather sound like that bot, who knows nothing of the world but can produce 600 typo-free words, than sound like themselves: young, dramatic, messy, and mistake-ridden. A.I. can be sassy, but it cannot write the tenderness of a high school drama club. It can know the words for mourning, but it cannot describe the empty rooms of past loved ones. Those pictures require patience, time, and pain to conjure on the part of the applicant. They require friction, in a world that grows increasingly slack and unrequiring of its inhabitants. This, I believe, is one of the main motivators for college applicants’ overreliance on A.I.
Not only do the words pop out in a matter of seconds; you also have a bot telling you that this is a “captivating, passionate essay that is sure to impress the [insert university here] admissions board.” This validation that A.I. gives its user—or rather itself—is another reason students are so magnetized to these programs. In the process of applying to schools, an entire future, an entire lifetime, feels on the line. A teenager insecure in their academics, social standing, or identity might see A.I. writing as a savior, a way to avoid unwanted labor and protect themselves against their perceived shortcomings. The bot boom in academia writ large puts on display the insecurity of students just as much as it does their laziness.
So, despite the seemingly simple nature of my task (switching out synonyms, cutting clichéd metaphors), I often find myself reaching the end of my edits and confronting this larger problem. What I am wrestling with is an essay not just written by A.I. but poorly imagined by A.I. It is these clients, who rely on this technology for not only words but ideas themselves, who turn my job from trivial to impossible. Oftentimes, I will rewrite an entire essay from scratch, but if I do not change it “enough” (just how much is enough I have never been able to calculate), the A.I. checker—which is, of course, itself a bot—will tell me that the essay is still 100 percent A.I.-generated. Sometimes, my revisions end up with an even higher score for A.I. generation than the original, simply because I have already run the essay through an A.I. checker multiple times by the time I reach my final draft, making the technology more familiar with the material. In these situations, I am left with no other option than to rip apart the structure of an essay to tell a new story that the A.I. doesn’t own. To these students, who perceive A.I.’s banal flatness as a hallmark of good style, my new essays are not acceptable. They “sound too weird.” In these moments, all I want to say is, “It should.” Instead, I find myself fighting to get paid at all.
All this brings me to the ultimate conclusion that what I am doing is meaningless. Fundamentally, my client wants not authenticity but innocence, the ability to get away with something. As a writer, I dedicate myself daily to the delicate nature of words: the ways they move us and influence us. As somebody interested in a teaching career, I firmly believe that the literacy problem in this country is, at its core, a threat to social justice. I mourn for all the children who lost years of critical education during COVID-19. Now that college application season is over, I am substitute teaching again. I am watching these students, kindergartners during the pandemic, fail to read basic sentences or spell words like want. The contrast between these students and those whose essays I write is heartbreaking. But, more than their differences, I fear their ultimate connection: that students will continue growing overreliant on this technology as it targets younger and younger audiences with the promise of efficiency and convenience.
This convenience is more than laziness. It is submission: Unlike original writing, which shows us what we can do, A.I.-generated words show us what we refuse to do. Donning my academic editor title, I imagined myself tasked with the act of tidying, turning teenage madness, drama, beauty into writing that is still dramatic and beautiful—just grammatically correct. But today I am tasked with the seemingly simpler but hopeless job of putting life back into writing. With each essay, I peel off the layer of idle ease that is the A.I. generation and see what remains: only hints of a life, of a story, of a human. And with each 600-word essay I try to revitalize, I am reminded of our daily cultural choice: either to lean back and let technology entertain us, work for us, be us—or to live.