Meet the AI App That Lets You Talk to the Dead (and Why It’s Complicated)

Black Mirror, “Be Right Back” (Season 2, Episode 1)

There’s always that one tech product that makes you go, “Wow, that’s incredible,” and immediately follow it with, “Wait… should we really be doing this?”

This is how I felt about 2wai, the latest AI platform promising fast, ultra-realistic HoloAvatars that can talk, move, and interact like the person they’re modeled after. It markets itself as a tool for connection, memory, creativity, and digital presence. But depending on who you ask, it’s either a miracle of modern AI or Black Mirror’s “Be Right Back” leaking into the real world.

So… What Exactly Is 2wai?

Think of 2wai as a “digital twin generator.” It lets you create a highly realistic avatar of someone (yourself, a celebrity, a brand persona, or even a deceased loved one) using just a short video clip and audio sample.

The result?

A conversational, interactive holographic version of that person sitting on your phone screen, like a FaceTime call with someone who technically isn’t there.

It’s marketed with two main emotional hooks:

  • Legacy: preserve someone’s voice, personality, and stories

  • Presence at scale: creators, influencers, and brands can be “available” 24/7 without… you know… actually being available.

This is all built on their proprietary “FedBrain” engine, which stitches together likeness, movement, and conversational AI in close to real time. It’s insanely impressive, but however interesting the tech, my initial question remains: should we really be doing this?

Where It Gets Morally “Interesting”

Let’s get into the ethical plot twists, because 2wai opens Pandora’s box and then casually asks if you want to subscribe to the premium plan.

1. Consent: The Big One

Creating an avatar of someone who didn’t explicitly agree (or can’t agree) already raises serious red flags. Consent isn’t just “do I have permission to use this video?”
It’s also:

  • Did the person understand what the avatar could be used for?

  • Do their family members get a say?

  • Should someone’s likeness live digitally after they’re gone? And who controls it?

2. Grief vs. Digital Coping Mechanisms

This is where reactions get the most emotional. 2wai markets the idea that you can “keep a loved one alive” digitally. For some people, that sounds comforting. For others, myself included, it feels… wrong, especially when it’s used to “revive” dead relatives or loved ones. AI’s potential in marketing, design, and other professional work is what excites me, but the second it starts to reshape innately human things, like how we grieve, I start to worry.

This raises plenty of potential issues, but here are a few I find particularly important:

  • It may delay emotional processing rather than support it.

  • It could create dependency on an AI version of someone who can no longer grow or change.

  • Most scarily, it can warp memories or rewrite personal narratives through AI responses.

If the goal is to speak to a deceased relative, the human brain can’t help but associate the AI version of, say, your grandmother with your real grandmother, inadvertently blending the two and creating an altered version of who that person used to be. As someone who has grieved, as someone who has lost a loved one, I couldn’t imagine the strange, negative impact a platform like 2wai would have had on me, especially as a teenager at the time. There’s a thin line between comfort and distortion.

3. Identity Ownership and Digital Rights

Who owns you once your digital twin exists? These questions have no clean answers yet:

  • Can your avatar outlive you legally?

  • Can someone monetize your likeness after you die?

  • Who is liable if your avatar says something harmful, false, or defamatory?

We’re heading into “AI estate planning,” a phrase nobody asked for but that we apparently need.

4. The Uncanny Valley and Psychological Impact

2wai avatars look just real enough to feel familiar, but just artificial enough to make your brain go, “I don’t trust this at all.” That emotional dissonance matters because humans don’t react neutrally to almost-human beings. Trust, empathy, and memory can get weird when the avatar isn’t actually the person, and “hearing” a loved one say something they never said can be jarring or even traumatic. AI that plays with human attachment must tread very, very carefully. 

5. The Commercialization of Human Likeness

Brands and creators are already excited about scaling themselves with avatars, but imagine the next step: influencer avatars used for sponsored ads, customer service reps replaced by holographic replicas (yikes), digital models for fashion and beauty campaigns, and, inevitably, a celebrity avatar licensing industry. At what point does a person stop being a person and start being an IP portfolio? At what point do we, more than we already are, become walking, talking brands?

And so a question arises: Is 2wai bad? Not inherently. The technology itself is, I believe, neutral. It’s the human motivations, business models, and lack of guardrails that turn it into a morally grey situation. Ironically, it’s not the AI itself that’s “bad”; it’s the potential for human greed that is. Best-case scenario: 2wai helps families preserve memories, lets creators scale ethically, and builds meaningful connections. Worst-case scenario? It becomes a deepfake Disneyland with a sprinkle of grief-bait monetization.

The Bottom Line

2wai is one of the most fascinating (and controversial) AI products to debut in years. It taps directly into identity, mortality, memory, and presence: the things we care about most. And when tech gets this personal, the stakes skyrocket. Before society adopts avatars of our loved ones (or of ourselves), we need clear consent laws, digital likeness protections, ethical grief-tech guidelines, transparency requirements, and limits on commercialization.

So, once again, back to that ethical question: Just because we have the means to achieve something, should we?
