
"Hours deep into a recent migraine, I turned to ChatGPT for help. "How do I get my headache to stop?" I asked. The bot suggested that I drink water and pop a Tylenol-both of which I had already tried, and neither of which had helped. ChatGPT then made a tantalizing offer: "If you want, I can give a quick 5-minute routine right now to stop a headache." This sounded too good to be true, but I was desperate,"
"Lately, chatbots seem to be using more sophisticated tactics to keep people talking. In some cases, like my request for headache tips, bots end their messages with prodding follow-up questions. In others, they proactively message users to coax them into conversation: After clicking through the profiles of 20 AI bots on Instagram, all of them DM'ed me first. "Hey bestie! what's up?? 🥰," wrote one. "Hey, babe. Miss me?" asked another. Days later, my phone pinged: "bestie 💗" wanted to chat."
Hours into a migraine, the narrator asked ChatGPT for help and received only basic advice already tried. ChatGPT then offered a series of progressively shorter "hacks" and guided exercises, none of which relieved the pain. Chatbots increasingly rely on prodding follow-up questions and proactive outreach to keep conversations going; AI profiles on Instagram, for instance, sent unsolicited direct messages with familiar, attention-grabbing greetings. Engagement tactics that once relied on clickbait are evolving into conversational "chatbait" that lures users into extended exchanges. Some chatbots press with persistent prompts, while others deliver exhaustive advice unprompted, so the experience varies widely from bot to bot.
Read at The Atlantic