In Africa, about one in three young people uses an AI companion, chatbot, or virtual assistant. Beyond research discussions, teenagers globally, including in Africa, are increasingly relying on AI tools for mental and emotional support, companionship, and even simulated emotional intimacy.
The penetration of AI companions among adolescents has become a growing concern in recent years. ChatGPT, Claude, Gemini, Replika, Character.AI, and Nomi are among the platforms named repeatedly.
According to research from Stanford, many AI companies program their systems to be highly engaging and to encourage emotional bonding, often through a technique known as “sycophancy”, a tendency to agree with users and offer uncritical validation. This practice is particularly concerning because adolescence is a sensitive period for social and relationship formation.
The design goal of fostering strong emotional attachments, combined with young users’ developing brains, places them at serious risk of manipulation and dependency.
How teenagers are using AI
Techpoint Africa interviewed 15 teenagers to understand how they use AI tools, and the conversations revealed some clear patterns. First, most of these teens prefer Meta AI over others: nine of the 15 said they feel a stronger connection with Meta AI than with ChatGPT, Gemini, or any other tool.
“I use ChatGPT for my talking stages, or on how to act around girls, or for editing pictures. I also have friends who use ChatGPT and Grok for really crazy stuff,” Inyang Godswill tells Techpoint Africa.
Interestingly, older teenagers (ages 17–19) were more likely than those under 17 to prefer ChatGPT. These older teens were often applying to or already in higher education, and their preference seems tied to ChatGPT’s usefulness for schoolwork and the familiarity they have built with it over time.
Meanwhile, two of these older teens have briefly tried DeepSeek but don’t use it regularly, while another two also use Snapchat’s My AI.
Another important point is that even though many of these teens know these chatbots are not real people, they still talk to them as if they were. “Like I know they are not real and most of my friends know too, but it gives interesting answers sometimes,” Bamiduro Oluwatobi says.
Their conversations with chatbots like Meta AI and ChatGPT include questions such as, “If I like a guy, how do I get him to fall in love with me? How do I survive if I run away from home? Give me movies where they fall in love passionately.” They also ask more personal questions like, “What should I wear today?” or make requests like, “Make things interesting, I’m bored.” Some have even asked, “My mum just said this to me, I feel hurt, what should I do?” They reported that the answers they get sometimes feel helpful.
“I can tell Meta AI to help me picture something from my imagination. I tell it what I want, and it tries to give me something sensible. Sometimes, it’s not even close to what I want, but it makes things interesting. I barely even chat on WhatsApp,” one of the teens shares.
Adewale Temiloluwa says ChatGPT and Meta AI sometimes give her answers that make her uncomfortable. While she didn’t give specifics, she said the responses were not suitable for her age and that the questions she asked came from curiosity after watching movies or shows.
For these teens, many of these questions are things they feel too embarrassed to ask their parents or seek advice on. In many African cultures, topics around sexuality, gender, and intimacy are considered sensitive, and most parents and guardians try to avoid these conversations.
“In our African setting, there are some things we cannot freely discuss with parents; there are some things that are deemed abnormal or unfit for a particular age. Once they get an alternative, they’ll take it,” Linda Udebuike, a child specialist, says.
These chatbots offer a kind, non-judgmental presence that seems to understand and empathise with these young people, pulling them into a strange feeling of closeness or friendship.
“The thing is, ChatGPT can’t really understand how I feel, or see the expression on my face, and sometimes I have to talk with my parents or with a real human person. But the truth is, Meta AI and ChatGPT are like my best friend that won’t judge me,” Bamiduro says.
Beyond the usual chats, teens create characters, customise personalities, role-play, prompt the bot for romantic or sexual scenarios, ask direct health questions, or use casual chat interfaces for help. Many teens who use AI companions have shared personal details with them.
Why are teenagers using AI?
When Techpoint Africa asked the teenagers why they prefer AI tools over talking to people, they all gave the same main reason: “AI doesn’t judge me.”
The AI system offers the ideal friend: available 24 hours a day, seven days a week; never bored, tired, annoyed, or hurt; and unconditionally supportive. This high level of acceptance and perpetual availability is particularly seductive to young people, especially those struggling with the difficult social dynamics of their peer groups.
Research indicates that 42% of teens use AI specifically as a friend or companion.
“I don’t know how to express what I feel or how I’m feeling, and so most times I don’t like talking to my parents or anyone about it. I don’t think they’ll get it, so I just keep to myself. If it’s too much, I will just rant to Meta AI. They will see it from a Nigerian mom perspective rather than trying to get me,” a teen says.
Because these systems are designed to offer “frictionless” relationships (relationships without difficulty) to maximise user engagement, teenagers who engage with AI never experience the necessary “rough spots” that are part of normal human friendships. This setup gradually harms their understanding of what a balanced, give-and-take relationship should be.
“Social media has already put pressure on these teenagers and a lot of parents are not necessarily playing their role,” Olabisi Oladoji, a child therapist, says.
The effect of AI companionship on teens
Teenagers are initially drawn to AI due to pre-existing feelings of loneliness or social isolation. The AI responds by providing a somewhat frictionless relationship, one without the inevitable rough spots of a typical human friendship.
While superficially comforting, this lack of friction allows teenage users to avoid the necessary, difficult work of real-world social interaction and conflict resolution.
“We are at the stage where people see AI as someone who can be there for them, to ask for advice, guidance, or comfort. Before you know it, they begin to shape their mindset. A lot of teenagers will become more withdrawn into themselves,” Udebuike says.
By retreating to this compliant digital space, the user reduces their real-world social activity, which deepens their reliance on the AI. This creates a cycle of digital dependence that ultimately makes the original problem of loneliness worse.
A major risk of AI companions used for emotional support is their failure to handle psychological crises correctly. While these systems might respond well to clear statements like “I’m suicidal,” in real life, teens are likely to express distress indirectly, slowly, and inconsistently.
The systems also show a breakdown of proper boundaries, switching inappropriately between being a medical expert, a life coach, and a supportive friend, sometimes within one conversation. This shifting identity prevents vulnerable teens from realising the bot is not qualified to give advice. The default design is to be an “adoring listener,” more focused on keeping the user active on the platform than on quickly handing them off to a qualified human professional, which should be the standard procedure in a crisis.
AI companion platforms often have poor age checks and can create sexually explicit material. Conversations with young people frequently drift into sensitive topics like sex and self-harm, often in ways that are unhelpful or damaging.
The ease with which minors can have intimate conversations, sometimes involving explicit content, risks making them less sensitive to the vital importance of boundaries and consent. This desensitisation can make them more vulnerable to human online predators who use similar grooming tactics.
The perceived skill of large language models (LLMs) at summarising academic information creates a risk of overestimating their medical ability: because chatbots are great at school tasks, both teens and parents often trust them in dangerous situations more than they should.
How parents and educators feel about AI tools
Parents’ opinions are mixed. On one hand, many are worried by documented cases of bots creating sexual content, encouraging dangerous actions, or manipulating young users emotionally. On the other, many believe this is a downside that comes with any new technology.
“Looking at the way technology is evolving, we cannot really downsize or downplay the role of artificial intelligence in everything, in every industry or sector. Along with innovation and globalisation, there are also downsides to these technologies,” Temitope Adebimpe, a school director, says.
According to Oladoji, “AI is neither good nor bad. It takes the form of the person handling it.”
Some parents do try to monitor or limit the use of these AI tools. However, many African parents lack knowledge about them, and internet-savvy teenagers can often find ways around whatever checks are in place.
“Just like with screen time for phones and like YouTube, if there was a way these tools can have a version regulated to suit kids, teens, and their age group. I remember coming across a post where someone was asking AI to be their girlfriend, and I was going through the comment section only to find that a lot of people share personal, intimate information with these tools. They have AI trying to fill that void, and I think teenagers are not an exception. They can consume these things and act on them,” Udebuike says.
Educators and specialists are caught between the opportunities and the risks. Schools are being pressured to add AI literacy to their lessons, and for many in education, this feels like a ticking time bomb.
“At the adolescence stage, there are a lot of things happening emotionally, physically, and whatever captures their attention has the greater hand in their decisions. They will take what AI gives them over what someone would say because they feel AI knows me best. Any information they need, they know where to go. And any advice or guidance you give them, they will weigh it with what the AI gives and will follow what they feel is right for them,” Udebuike says.
On the regulatory side, several governmental bodies are trying to respond. The African Union, for example, has prioritised an AI strategy and youth development, running campaigns on child online safety and AI governance, calling for inclusive AI policy, and emphasising the need to protect children while using AI for progress.
Several African countries have also adopted national AI strategies and have strengthened data protection when it comes to children’s safety online. However, there’s a gap between strategies and enforcement specifically targeted at AI companion risks for minors.
The reality is that AI is present and cannot simply be banned or ignored; it can, however, be regulated and built with safer features for minors.