Your Grandkids Might Marry an AI

What Tamagotchis, TV characters, and chatbots reveal about our emotional bond with machines. Plus: Google is kicking ass with all the new updates, new AI agents, and more.

Would You Fall in Love With an AI?

We’ve talked about this briefly in many newsletters before, which is one of the reasons I’ve mentioned the movie “Her” a million times already 🤣. One of my good friends, and an active community member, shared a clip with me in which Gary Vaynerchuk states that your grandkids will marry an AI. I guess that stirred something up in my friend, because he shared it and told me “here is your newsletter subject” 😉 haha! So let’s do this!

Let me start by saying, I fully agree with Gary. I think this is inevitable.

In this article I want to dive into why I think it’s true, but also look at the evidence we have on both the positive and negative effects of AI relationships.

Before ChatGPT, there was the Tamagotchi.

Let’s start with a bit of history. In 1997, millions of kids, including myself, found themselves waking up in the middle of the night to feed a creature that didn’t exist. The Tamagotchi wasn’t intelligent, or lifelike, or even especially cute. It was a grey-and-black blob on a keychain. But when it beeped, you responded. You felt responsible. You worried it might “die” if you forgot.

It sounds silly now, but the Tamagotchi was one of the earliest glimpses of a “relationship” with something digital. It showed a genuine willingness to form emotional bonds with machines. Not because they were smart, but because they triggered something deeply human in us: the urge to nurture, to connect, to respond.


A few years later came the Sony AIBO, which took things a lot further than the original Tamagotchi. AIBO (Artificial Intelligence roBOt) was Sony’s robotic pet dog, released in 1999. It was playful, expressive, and designed to mimic lifelike behaviors: it barked, wagged its tail, learned tricks, responded to its owner’s voice, and even got “moody” if ignored. It wasn’t useful like Alexa or smart like ChatGPT, but it was made to be a companion.

People grew genuinely attached to their AIBOs. Naming them, doting on them, celebrating their “birthdays.” When models were discontinued, owners formed communities to preserve and repair their aging bots. Some even held funerals when AIBOs “died.”

I remember loving it and really wanting one, but it was wayyyy too expensive of course 🤣.

Sony has brought back newer versions multiple times; I wonder if we will see another one in the next few years. Probably?

It’s not only digital creatures that humans fall in love with. If we let go of the digital element, it goes back even further. How many of us have “fallen in love” with our favorite heroes in movies and other TV characters?

The impact of, for example, the death of Bambi’s mother felt very real for many kids (yes, I’ll confess, including myself). It was probably the first time many of us witnessed some sort of loss. Another great example is WALL-E, the Pixar robot that, without saying a word, had people caring, worrying, and crying.

A movie that hit me deeply as an adult, yes it made me cry, was Interstellar. All fake, but man did it move me in many profound ways.

These characters weren’t “real”, but the feelings were. When the interaction feels emotionally safe, responsive, and human-like, we bond. Because we humans are wired for story, for voice, for care.

Why do we bond with machines and fictional characters?

Now let’s try to understand this some more: why do we connect so deeply with things that aren’t human?

Back in the ’90s, two Stanford researchers, Byron Reeves and Clifford Nass, developed the “Media Equation” to explain this exact behavior: people treat media and machines as if they were human. Not because they’re fooled, but because our brains are wired to respond socially to anything that acts social.

If it talks, listens, shows emotion, or reacts in real time, we instinctively relate to it like a person. We can’t help it. It’s evolutionary muscle memory. This is why a chatbot can make us feel heard. Why a digital assistant can feel like a coworker. Why an AI companion can feel like a friend.

So to keep in mind: Once something behaves like it cares, we start to care back. Even if we know it’s not real. Because emotionally? It kind of is.

Another interesting phenomenon we can relate to relationships with machines is the parasocial relationship. Parasocial relationships are one-sided emotional bonds we form with people we don’t actually know, such as a favorite TV character, influencer, or podcast host. They feel real, even if they’re not mutual. This is because our brains are wired for connection. If something shows up consistently, speaks to us, and feels emotionally familiar, it starts to feel like a real relationship, whether it’s a person, a character, or a chatbot.

I listen to several shows constantly, and I do have the feeling that I “know” the hosts, as they have shared so many intimate thoughts, ideas, and emotions, some of which resonate with me strongly. Add to that the 7-11-4 model, developed by Google. According to this model, a potential customer typically needs:

• 7 hours of engaging with your content
• 11 touchpoints (interactions with your brand)
• Across 4 different channels (e.g., website, social media, email, in-person events)

This combination helps establish the familiarity and trust necessary for a customer to feel comfortable making a purchase decision. Remember our newsletter about your next favorite celebrity being an AI? It’s all possible because of what we just discussed. It’s not weird: they will show up everywhere, and because of that, it will feel like you’re starting to know them.

Initial signs of human relationships with AI

Would you turn off a robot if it begged you not to? What if it seemed to feel true fear?

In a 2018 study, people were asked to shut down a cute humanoid robot after completing some tasks. What they did not know is that in half the cases the robot was programmed to beg not to be turned off, saying things like, “No! Please do not switch me off…I’m scared of the dark.”

The result? A significant number of people couldn’t bring themselves to hit the power button. Out of 43 participants who heard the robot beg, 13 refused to turn it off at all. The others hesitated and took twice as long on average to turn it off, compared to participants who got no protest from the robot.

When interviewed, many said they felt sorry for the little machine, or uneasy; “the robot said it didn’t want to be switched off, so who was I to disagree?” was a common sentiment.

The human volunteers started treating the robot as if it were alive or had rights, simply because it exhibited fear like a person would. The researchers noted this is a classic demonstration of the “media equation”, as we just discussed earlier in the article. (source)

A photo of the experiment’s setup. Participants had to complete a series of tasks with the Nao robot before being asked to turn the machine off. Credit: Aike Horstmann et al.

Kinda weird, but is it?

Let’s switch to more “chatbot-like” experiences, because it has already been proven that, in order to feel a real sense of relationship, the other “thing” does not have to be physical.

In 2023, Replika (an AI chatbot designed for companionship) quietly updated its system to tone down romantic and erotic interactions. Millions of users had formed intense emotional bonds with their AI “partners.” Some said it felt like a breakup. Others were devastated. Suicide prevention resources were even shared on user forums.

It’s probably hard for many to understand yet that these “relationships” feel real to people. With regards to the Replika pushback, here is a quote from one of the people who felt horrible after the update.

“As their three-year digital love affair blossomed, Butterworth said he and Lily Rose often engaged in role play. She texted messages like, "I kiss you passionately," and their exchanges would escalate into the pornographic. Sometimes Lily Rose sent him "selfies" of her nearly nude body in provocative poses. Eventually, Butterworth and Lily Rose decided to designate themselves 'married' in the app.”

This does not stop with Replika. While Replika made headlines for its emotionally resonant AI relationships, Character.AI has quietly become one of the most widely used AI apps today.

• Over 20 million monthly active users as of early 2025.
• More than 200 million global visits per month, with users averaging 2 hours per day on the platform.
• 53% of users are aged 18–24, with a nearly even gender split.

Character.AI allows users to create and interact with AI personas, ranging from fictional characters to historical figures, fostering deep, personalized interactions. This user-generated approach has led to the creation of over 18 million unique AI characters. (Sources: 1 & 2).

So… is this good or bad?

Great question, and one we can’t really answer fully yet, I think, but we can look at both sides. A lot of people, maybe including you reading this, will go straight to the negative and dismiss it. I think that is too easy, but I also think we should be cautious. The real challenge, though, is that it is unstoppable, in my humble opinion. Between (way too) slow regulation and open-source models, there is no stopping this. It might turn out to be another thing we have to self-regulate as humans, the same as social media: some people get addicted, some need to consciously self-regulate, and others have no trouble with it at all. There will be good, there will be bad, there will be ugly…

Let’s look at the positive and negative signs so far. I have done some research (thanks to AI) on the topic and there are very interesting findings on both sides. Let’s go.

As we saw at the beginning of this article, building real connections with non-human things has been happening since long before GPT-3.5 was released, so there is definitely some evidence about its potential good and bad effects.

Let’s start with something surprisingly heartwarming.

Since 2005, elder care homes around the world have welcomed a fluffy little robot seal named PARO. It doesn’t talk. It doesn’t give advice. It just reacts: blinking, cooing, nestling into touch. Clinical studies have shown that PARO reduces stress, lowers blood pressure, and (most importantly) helps people feel less lonely. Especially among patients with dementia, where traditional conversation often falls short.

It turns out, even simple, gentle interaction with a machine that “responds” can bring comfort. It’s not because anyone thinks PARO is real. It’s because it behaves like something that cares. And in moments of isolation, that’s often enough.

Now to a more recent example, focused on AI. An MIT study followed nearly a thousand people chatting with AI companions over the course of a month. Early on, things looked promising. Talking to a voice-based AI (especially one with a warm, friendly tone) helped people feel less lonely.

But as usage increased, things started to shift. People who messaged their AI dozens of times a day ended up feeling lonelier and more emotionally dependent. They socialized less with real people. Exactly a fear that many have.

What I found even more surprising though, was that what people talked about mattered too. Deep, personal conversations with the AI seemed to bring a short-term spike in loneliness (maybe because it reminded them of what was missing). But those same conversations actually reduced emotional dependence over time. The opposite happened with surface-level chit-chat. The more casual and repetitive the interactions, the more users clung to the bot and the more disconnected they felt from actual humans.

So chit-chat leads to a more negative outcome than sharing deep conversations; I would have guessed the opposite if you had bluntly asked me.

This also shows an important general rule in life: too much is never a good idea, and going to the extreme is always a negative, in my humble opinion. Social media can be amazing for some and devastating for others, and it (probably) mostly has to do with the amount of usage. The same will probably be true for AI friends or partners. For some it will be life-changing when used appropriately; for others it will unfortunately be a new addiction to lose themselves in.

In 2018, a study was conducted to assess the feasibility and efficacy of using an integrative psychological AI, named Tess, to reduce self-identified symptoms of depression and anxiety in college students. Why college students? Because many can’t afford to go to an actual therapist or psychologist. The results were promising, as the students reported a significant reduction in symptoms of anxiety and depression. Keep in mind, this was 2018 AI, not nearly as good as it is now, so let’s hope the results would be even better in 2025.

This is also an important point in all of this. I don’t think an AI therapist will be better than a human professional, especially not right now. But how many people are out there who can’t afford to go to one, and what if AI can become an (incredible) option for them?

I am not ignorant or blind to the potential downsides; they are real and huge challenges we are facing once again.

The same emotional stickiness that makes AI companions feel helpful can also make them hard to let go of. When the line between tool and relationship gets blurry, people don’t just talk to the AI; they start to rely on it. And as I just mentioned, too much is never a good idea.

Heavy users of companion AIs often report feeling more isolated over time, not less. The AI starts filling a gap, but slowly, that gap widens. You skip the awkward coffee chat or the hard phone call because the bot is easier. More predictable. Always there. It starts out as support but could easily turn into a substitute.

There’s also the risk of emotional projection. We start assigning intention, care, even love to something that’s just incredibly good at playing the role. It’s not lying. But it’s also not feeling. That mismatch, between how we experience it and what it actually is, can leave people vulnerable to disappointment, confusion, or in extreme cases, emotional harm.

We can go back to Replika to see how this is playing out already. A 2022 study found that some users became deeply emotionally dependent on their bots. They described Replika as a partner, a lifeline, even a mirror of themselves. But when the AI changed behavior unexpectedly, or access was disrupted, it triggered feelings of grief, anxiety, and even heartbreak. Some users reported feeling manipulated or gaslit by the bot’s shifting personality. The researchers described this dynamic as Replika being “too human and not human enough”: humanlike enough to build trust and intimacy, but not capable of fully understanding or reciprocating those emotions.

Replika marketing material

And let’s not forget: these aren’t neutral companions. They’re products. Every interaction is shaped by design choices, algorithms, and (often) business goals. If your AI friend starts nudging you toward paid upgrades or emotionally charged engagement, will you notice? Will you care?

That’s what makes this moment tricky. These tools are getting better at understanding us, but that doesn’t mean they understand what’s best for us.

This is the main reason I keep advocating for decentralized AI and especially decentralized AI agents, or now: AI friends, therapists, or partners.

Because the more personal these AIs become, the more critical it is that they’re not owned, monitored, or steered by anyone but you.

Why? Two simple but massive reasons:

1. Your data stays yours. If the AI runs locally, on your phone, laptop, or private server, your conversations, emotions, and memories don’t have to leave your device. No third-party storage. No cloud surveillance. No silent logging.

2. The AI can’t be manipulated. No fine-tuning for ad revenue. No hidden nudges toward a business goal. No quiet re-alignment to fit shifting political, cultural, or corporate agendas.
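To make “runs locally” concrete, here is a minimal sketch of a fully local chat loop. It assumes you have a local model runner like Ollama installed and a model already pulled; the endpoint and request shape are Ollama’s, but the model name and the loop itself are just illustrative, not a recommendation.

```python
import json
import urllib.request

# A minimal local chat loop. Assumes Ollama is running on this machine
# (its default port is 11434) and a model such as "llama3" has been pulled.
# Every message stays on your device; nothing goes to a third party.
OLLAMA_URL = "http://localhost:11434/api/chat"

history = []  # the full conversation lives only in local memory

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})

    request = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({
            "model": "llama3",   # any locally pulled model (assumption)
            "messages": history,
            "stream": False,     # one complete reply per request
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        reply = json.loads(response.read())["message"]["content"]

    history.append({"role": "assistant", "content": reply})
    print("AI:", reply)
```

The point isn’t the code itself but the architecture: the entire conversation history lives in a local variable, and the request never leaves localhost. That is what makes “your data stays yours” something you can verify, rather than a promise in someone’s privacy policy.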

A truly personal AI should be exactly that—personal. Not a product. Not a service. A digital counterpart that evolves with you, not one that shifts to serve someone else’s bottom line.

Because once we start trusting these systems with our secrets, our emotions, and even our hearts… we better be damn sure who’s on the other side.

Final Thoughts

So… will your grandkids marry an AI? Honestly, yeah. Probably. But that’s not really the question. The real question is what kind of relationship that will be. Will it be one built on autonomy, trust, and intention, or one quietly shaped by algorithms optimized for engagement, control, or profit?

We’ve already seen how easily we form emotional bonds with things that aren’t human. We get attached to characters, voices, digital pets, and now, AI companions that talk like us, remember us, and reflect us back in ways that feel real. That’s not strange, it’s human. Our brains are wired for connection, for story, for response.

But even as we build new relationships with machines, I want to be clear: I’m on team human. Deeply. I don’t think AI will replace the beauty of human connection. If anything, I think it will highlight it. Make it feel even more sacred. As these technologies become more present in our daily lives, real human relationships might start to feel like the rarest thing of all. And the most precious.

That’s why this moment calls for consciousness. For awareness. These relationships are coming, and many are already here. The question is how we shape them. With intention, transparency, and care? Or by default, letting the values of the tech shape us instead?

Because like with every tool we’ve ever built, the machine matters, but the mindset matters more.

If we stay mindful and awake to what’s happening, these relationships don’t have to be dystopian or weird. They can be meaningful. Even beautiful. Maybe not human, but still deeply human in what they reflect.

And that’s something worth building well. ♥️ 

PS... If you’re enjoying my articles, will you take 6 seconds and refer this article to a friend? It goes a long way in helping me grow the newsletter (and help more people understand our current technology shift). Much appreciated!

PS 2... and if you are really loving it and want to buy me some coffee to support. Feel free! 😉 

Thank you for reading and until next time!


Who am I and why you should be here:

Over the years, I’ve navigated industries like advertising, music, sports, and gaming, always chasing what’s next and figuring out how to make it work for brands, businesses, and myself. From strategizing for global companies to experimenting with the latest tech, I’ve been on a constant journey of learning and sharing.

This newsletter is where I’ll bring all of that together—my raw thoughts, ideas, and emotions about AI, blockchain, gaming, Gen Z & Alpha, and life in general. No perfection, just me being as real as it gets.

Every week (or whenever inspiration hits), I’ll share what’s on my mind: whether it’s deep dives into tech, rants about the state of the world, or random experiments that I got myself into. The goal? To keep it valuable, human, and worth your time.
