Can a chatbot make you feel less lonely?
As AI chatbots like ChatGPT, Google Gemini and Microsoft Copilot get better at engaging in conversation and picking up on emotional cues, millions of Americans are interacting with them in everyday life. A June 2025 Pew Research Center survey found that 34% of U.S. adults, and 58% of adults under 30, have tried ChatGPT.
Jason Thatcher, a professor of information systems at the Leeds School of Business, is in the early stages of a research project that explores how emotionally adaptive chatbots could be designed to better support users, including whether they might address loneliness.

"If we're going to design AI that's emotionally sensitive and able to adapt to people's identities and ways of thinking, then loneliness is an obvious place to focus because it's a real problem," he said.
CU Boulder Today recently talked with Thatcher about his research into emotionally adaptive chatbots and how design choices could shape human-AI interaction.
How should chatbots adapt to what users actually want from them?
People don't always want the same kind of interaction from a chatbot. Sometimes they need a friend or companion; other times, a teammate, mentor or straightforward expert.
A well-designed chatbot should adjust both what it says and how it says it, including tone, clarity, cognitive demand and alignment with the user's values and context. Emotional responsiveness doesn't always mean being warm or encouraging. Sometimes users want clear, direct guidance. The goal is to meet the user's needs in that moment rather than assuming one style fits all.
Why focus on loneliness?
Loneliness is widespread and often described as a crisis. If emotionally adaptive AI is meant to improve lives, helping people feel less lonely is a meaningful, socially relevant starting point.
How will you study whether adaptive chatbots reduce loneliness?
We plan to track people over time and check in regularly through experience sampling. The goal is to see whether these adaptive designs actually make users feel less lonely, something we don't yet know.
Are there risks to designing emotionally adaptive chatbots?
Yes. People can become overly reliant on bots that feel supportive or persuasive. There's also the risk of manipulation, and the risk that AI could replace or weaken human connections rather than complement them. That's why it's important for designers to set clear boundaries.
What should designers keep in mind when building chatbots?
Not every chatbot should be a friend. Designers should build bots for the role users need, whether that's companion, teammate, expert or coach, rather than assuming emotional closeness is always the goal.
Can you give examples of matching a bot鈥檚 style to its task?
A companion bot might focus on support and reducing loneliness. A productivity bot should minimize cognitive load. A learning bot might nudge someone firmly to stay on track, while a coach or mentor bot should assert authority and guidance.
How do you prevent chatbots from becoming too human-like or crossing boundaries?
One important step is asking users directly what they're comfortable with. There's also the problem that systems that feel too human can make people uncomfortable or even repel them. Designers need to be careful about how realistic or emotionally expressive bots become and give users control over those settings.
We have to be really careful about the boundary conditions. When is the bot being a good helper, and when does it start to become manipulative? When is it actually leading people to believe that it cares? We don't want people to be fooled into thinking the bot is sentient.
Any advice for people just starting with AI chatbots?
Learn the basics of what chatbots can and can't do. Be mindful of what you upload because content shared with a chatbot could potentially be stored or subpoenaed. And remember that just because a bot feels conversational doesn't mean it's a person.
Experiment to understand its capabilities. For example, ask the bot to take on a specific perspective to challenge your assumptions and provide analytical feedback instead of always agreeing.
For example, ask it what a skeptical expert or an adversarial attorney would think of your argument. Or ask it to give you a smart, sensitive person's reaction to an email you wrote, or to take on the role of a fussy copyeditor. Just remember that the bot is approximating that perspective based on what you told it to do.
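The same role-playing advice carries over if you reach a chatbot through code rather than a web interface. Below is a minimal sketch in Python, assuming the OpenAI Python SDK and an API key in your environment; the model name, persona instructions and draft email are illustrative placeholders, not specifics from Thatcher.

```python
# A minimal sketch of persona prompting, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
# Model name, prompts and the draft email are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft_email = "Hi team, I think we should delay the launch by two weeks..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whichever chatbot you use
    messages=[
        # The system message assigns a critical persona so the reply challenges
        # the draft instead of simply agreeing with it.
        {
            "role": "system",
            "content": (
                "Act as a skeptical expert and a fussy copyeditor. "
                "Point out weak arguments, unclear phrasing and unstated "
                "assumptions. Do not simply agree or reassure."
            ),
        },
        {
            "role": "user",
            "content": f"How would you react to this email?\n\n{draft_email}",
        },
    ],
)

print(response.choices[0].message.content)
```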
How do you see AI fitting into human work and creativity overall?
AI isn't replacing human skill or judgment. Like spellcheck changed writing without ending it, AI can support decision-making and creativity, helping people think more clearly rather than doing the thinking for them.
What鈥檚 the goal of this research?
What we're really interested in is how people feel when they interact with these systems. If the bot listens well and people feel like they can communicate effectively, that's meaningful. But we have to ask: Are they actually less lonely, or just interacting more with technology? That's what we want to find out.
CU Boulder Today regularly publishes Q&As on news topics through the lens of scholarly expertise and research/creative work. The responses here reflect the knowledge and interpretations of the expert and should not be considered the university position on the issue. All publication content is subject to edits for clarity, brevity and university style guidelines.