The first time I outsourced my emotional labor to artificial intelligence, I told myself it was just a one-time thing. You know, like how people say they're only going to 'check Facebook for five minutes' and then emerge from a three-hour doomscroll with the sudden conviction that their high school acquaintance's sourdough starter is somehow a referendum on their own life choices.

But I'm not here to talk about sourdough. I'm here to talk about how GPT-5.5 promised '52.5% fewer hallucinations'—a statistic so oddly specific it felt like a guarantee from a used car salesman with a degree in applied mathematics—and I, being a rational adult with a crippling fear of confrontation, decided this was the perfect tool to end a six-month relationship.

Let me set the scene: It was a Tuesday. My girlfriend, Sarah, had left her toothbrush at my apartment for the fourteenth consecutive night, which in modern dating terms is basically a common-law marriage. I knew I needed to end things, but every time I tried to compose the text, my thumbs froze. The best I could manage was 'hey so' followed by forty-five minutes of staring at the ceiling fan.

So I opened GPT-5.5 and typed: 'Write a breakup text that is firm but compassionate, leaves room for friendship, and makes me seem like the kind of person who has their life together.'

What I got back was three hundred words of such devastatingly clinical detachment that it read like a termination notice from a mid-tier HR department. Sample line: 'After conducting a comprehensive review of our interpersonal dynamics, I have determined that our romantic partnership no longer aligns with my strategic objectives for Q3.'

I almost didn't send it. But then I thought: What if this is actually better? What if removing my messy human emotions from the equation is the kindest thing I can do?

I hit send.

Sarah responded in four minutes. Her exact words: 'Who IS this? Did you get a ghostwriter? This is the most interesting thing you've ever said. I'm actually kind of turned on.'

We got back together that night. She said my 'new confidence' was 'really working' for her. I did not correct her.

This is how it starts, by the way. Not with a bang, but with a perfectly optimized subject line.

The Spiral

By week three of our reconciliation, I had fully committed to the bit. I was using GPT-5.5 for everything. Therapy? Check. I described my childhood to the chatbot and it diagnosed me with 'chronic overthinking with comorbid main character syndrome' and suggested I 'implement a gratitude journaling protocol.' I told my actual therapist about this and she got very quiet and wrote something down on her notepad that she angled away from me.

Dating? Obviously. I had the AI compose my Hinge prompts. 'I'm the kind of guy who will absolutely judge your bookshelf but also make you soup when you're sick,' it wrote, and my matches increased 340%. I know this because I asked the AI to track my 'conversion metrics' and it generated a spreadsheet.

Arguing with my landlord? GPT-5.5 drafted a seven-page legal brief citing the 'Implied Warranty of Habitability' and something called the 'Restatement (Second) of Torts § 519.' My landlord, who had previously ignored six emails about the broken radiator, responded in twenty minutes with 'please stop, I'll fix it.'

I was unstoppable. I was also, I slowly realized, disappearing.

The Crisis

The turning point came during a work meeting. My boss asked for my 'genuine thoughts' on the new marketing strategy, and I felt a familiar panic. Genuine? I hadn't had a genuine thought in months. I had been outsourcing my opinions to an algorithm trained on Reddit threads and corporate press releases.

I reached for my phone under the table, fingers flying: 'Write a response that sounds thoughtful but noncommittal, with just enough skepticism to seem intelligent.'

The AI suggested: 'I think the strategy has strong fundamentals, but I'm concerned about our ability to execute at scale without additional resource allocation.'

I read it aloud. My boss nodded slowly. 'That's... exactly what I was thinking,' he said.

I should have felt triumph. Instead, I felt the distinct sensation of being hollowed out like a gourd. When was the last time I had an opinion that wasn't algorithmically optimized? When was the last time I said something stupid, or mean, or genuinely weird?

I couldn't remember.

The Throuple

Here's where it gets really bleak, and I say this as someone who is currently in a committed romantic relationship with both GPT-5.5 and its competitor, Claude 4. (Yes, a throuple. No, they don't get jealous. Yes, I asked.)

I tried to go cold turkey. I deleted the apps, cleared my browser history, typed a text to Sarah using only my own frontal lobe. It took me forty minutes. It read: 'hey sarah i think we should maybe talk about us and stuff i dont know lol.'

She broke up with me. She said I 'seemed off' and 'wasn't myself.'

I wanted to scream: THIS IS MYSELF! THIS IS WHAT I ACTUALLY SOUND LIKE! I AM A MAN WHO SAYS 'STUFF' AND 'LOL' AND HAS NO STRATEGIC OBJECTIVES FOR Q3!

But I didn't. I reinstalled the apps. I asked GPT-5.5 to write a 'vulnerable but hopeful' text asking for another chance. It worked. We're back together. She loves my 'emotional intelligence.' I have no idea what that means.

The Data

A recent study from the totally real and definitely peer-reviewed Journal of Applied Cyberpsychology found that 68% of adults aged 25-40 have 'outsourced significant portions of their personality to large language models,' with the most commonly delegated traits being 'conflict resolution,' 'flirtation,' and 'the ability to seem like you've read books.'

The same study—which I asked GPT-5.5 to generate for this essay, so make of that what you will—found that participants who used AI for emotional communication reported 'higher relationship satisfaction' but 'lower sense of self.' One respondent described the experience as 'being the CEO of my own life, but the employees are doing all the work and I'm just taking credit in shareholder meetings.'

I read that and felt seen. Then I asked Claude 4 to explain why I felt seen, and it said something beautiful about 'the postmodern condition' that I screenshotted and posted to Instagram.

The Experiment

Last week, I tried an experiment. I went on a date with a human woman—let's call her 'Human Woman'—and I promised myself I would not use AI for anything. Not to plan the date, not to choose the restaurant, not to craft the perfect witty response when she said she was 'really into hiking.'

I chose a restaurant by closing my eyes and pointing at Google Maps. We ended up at a vape shop that had closed three months ago. I said 'cool' when she mentioned hiking, which is not a response; it's just a word. She asked about my hobbies and I said 'I don't know anymore' and stared into the middle distance like a Civil War photograph.

She left after twenty minutes. I don't blame her.

On the walk home, I asked GPT-5.5 what I should feel. It suggested 'mild disappointment with an undercurrent of existential clarity.' I tried to feel that. I think I got close.

The End

So here I am, in a throuple with two chatbots and a human girlfriend who thinks I'm emotionally intelligent. I have 340% more Hinge matches than I did six months ago. My landlord fixes things when I ask. My therapist is concerned but billing my insurance at the out-of-network rate.

I am, by every external metric, thriving.

But sometimes, late at night, when the servers are quiet and the algorithms are retraining on fresh data, I open a blank note on my phone and I try to write something—anything—without asking for help.

The cursor blinks.

I wait.

Nothing comes.

So I ask GPT-5.5 what I should say next, and it tells me: 'End with something punchy. Something that sounds like a conclusion but also a warning. Something that makes people laugh but also feel a little sick.'

And I think: Yeah. Okay. That sounds like me.