Assembling Eternity

The footage, shot on an old Hi8 camcorder back in 1996, is grainy and dull. The camera pans right to left and back, as if the director (me) has no idea what he’s doing. (He didn’t.) We’re in my childhood home, sitting in the kitchen, as my grandmother prepares to fly back to her home in Florida. My mother, my high-school girlfriend, and I watch as my grandmother applies lipstick and answers questions about her early life.

This is some of the only footage I have of my grandmother, who died in 2009. I want to strangle that 17-year-old kid shooting the video. I want him to stop focusing on his girlfriend, to stop making dumb remarks, and to ask his last living grandparent more questions. To not stop rolling tape. To capture it all.

The video stops, my grandmother flies off to Florida, and those brief minutes are all that will remain of her for eternity, assuming the now-digitized copy we have doesn’t decay or get corrupted.

But what if death wasn’t the end of the story?



The GENESIS of an idea

“Artificial intelligence” is a bad name. What it should be called is “generative patterns.” AI doesn’t think, at least not in the way that intelligent beings think. What it does is look for patterns, assemble them into likely outcomes, and present them as answers. For instance, I can ask AI who designed Fallingwater, and it’ll give me a treatise on Frank Lloyd Wright and what led to his masterpiece. But I can’t ask it to tell me what it sounds like in the building. What it smells like. What it feels like. Current AI tools such as Bard and ChatGPT rely on billions of data points to come up with cogent solutions - and they do a lovely job of putting those answers into easily digestible nuggets. But what they don’t do - what they can’t do - is exist.

I’ve been consulting with a start-up, and we’re diving deep into AI. Not only are we using it to streamline processes and operations, but we’re pushing it to create content for us. And while there remains an uncanny valley between a human creation and an AI-assisted creation, the line is getting very, very blurry. It was during one of these experiments that I wondered if we could simply replicate a person. We could capture their voice, their likeness, and their personality, assemble them, and power this Frankenstein monster through an AI engine. And, taking it to the next level: could we make digital copies of ourselves?

Imagine, hundreds of years from now, your descendants would be able to interact with the digital version of you. They’d see your face. They’d hear your voice. They’d ask you questions and receive responses based on your personality. It seems like the plot of a bad science-fiction B-movie, but it’s insanely close to being a reality. And here’s how I’d do it.


Your voice

The person in that recording isn’t me. It’s an AI-generated voice that learned my speech patterns, vocal tics, cadence, tones, and inflections. It then used those to read the previous paragraph in a close approximation of my voice, based on an unrelated one-minute recording it had of me.

Pretty scary, huh?

There are many different AI-voice synthesis tools, but my favorite is Eleven Labs. They manage to create pretty realistic-sounding recordings and give users the option to add emotion and personality to the recordings.

In another few months, you won’t be able to tell the difference between an actual human’s voice and its AI-derived counterpart.

YOUR LIKENESS

I recently watched the newest Indiana Jones and marveled at the de-aging process they put Harrison Ford through in the movie's opening scenes. It’s getting terrific. No, it’s not 100% accurate, but it’s much better than what’s come before. For reference, look at the de-aged Carrie Fisher from Rogue One only seven years ago, and just imagine where we’ll be seven years from now.

Ford’s younger self was resurrected through machine learning: specifically, analyzing old footage of the screen actor and using AI to assemble it into a reasonable facsimile. They then had Ford’s face scanned while he performed and used that to help animate the final product.

The average person doesn’t have years and years of film archives to sort through - but with everyone’s lives now appearing on social media feeds, they may not need a few decades’ worth of superstar film roles to make it work. In fact, a simple high-definition 3D photoscan will do the trick.

A company called Pixel Light Effects, based in Vancouver, is at the tip of the spear on full-body photo scanning. So now, after a short session in one of their multi-camera photogrammetry stations, every wrinkle, pore, and smirk can be saved for posterity in 1s and 0s.

YOUR THOUGHTS

As I mentioned above, AI doesn’t exist. It can’t reason. But it can make the best possible guess, and, more often than not, it’s correct. When replicating someone’s personality - no doubt the hardest task in this “make a digital copy of yourself” scenario - we must take a multi-pronged approach.

I’d initially assumed a long and detailed questionnaire would be the answer. But when I spoke with my friend Miles Spencer, he said not to waste anyone’s time: “We’ve got years and years of their personality inside their social media channels.” I agree, though there’s a large margin of error in scraping social media channels, especially given most of society’s propensity to present a heightened version of their lives on Instagram, TikTok, and Facebook.

I’d scrape their social feeds and emails, have them answer a short(er) questionnaire, and look to companies like Storyworth to help round out the edges of their digital personality. Using a combination of all these data points and some highly educated guesses based on location and education, I firmly believe we can capture someone’s personality and history with 99% accuracy.
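To make that multi-pronged approach concrete, here’s a minimal sketch of how those sources might be merged into one crude personality profile. Everything here is hypothetical - the sample text, the source names, and the `build_persona` function are mine, not any real scraping tool’s API - and the weighting (questionnaire over email over social) simply reflects the margin-of-error caveat above.

```python
from collections import Counter

# Hypothetical, pre-scraped text from each source. In practice these
# would come from social-media exports, email archives, and a
# Storyworth-style questionnaire; here they're inline strings.
SOURCES = {
    "social": "love hate love flying cars love dessert",
    "email": "meeting notes dessert recipe flying cars",
    "questionnaire": "favorite dessert is pie love teddy roosevelt",
}

# Social feeds are curated, so weight them lowest; direct
# questionnaire answers are weighted highest.
WEIGHTS = {"social": 1, "email": 2, "questionnaire": 3}

def build_persona(sources, weights):
    """Merge all text sources into one weighted word-frequency profile."""
    profile = Counter()
    for name, text in sources.items():
        for word in text.lower().split():
            profile[word] += weights[name]
    return profile

persona = build_persona(SOURCES, WEIGHTS)
print(persona.most_common(3))
```

A word-frequency counter is obviously nowhere near a personality, but the shape of the pipeline - many uneven sources, each weighted by trustworthiness, folded into one profile - is the part that would carry over to a real system.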

Now, if only someone could create a brain scan that allowed complete copies of their personality to be digitized. Oh, right, Mr. Musk seems to be already contemplating this.

ASSEMBLY

The hard part is over. We have your voice. We have your physical form. We have your personality. Now, to tie them all together. My assumption is this will live on a screen. You won’t be able to hug a virtual Adam. You won’t be able to high-five me - at least not physically. But you can ask me questions as quickly as you’d ask Siri or Alexa, and instead of getting their search-engine-based responses, you’d get mine. (The actual intelligence behind these answers, as my digital clone will proudly display, will be up for debate.)

Want to know what I thought about Teddy Roosevelt? Eager to ask me what my favorite dessert was? Want to know what I think of how you’re parenting my great-great-great grandchild? Ask away. And the more you ask, the more my digital clone will learn.
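The assembly described above - voice, likeness, and personality tied together behind a question-and-answer loop that learns from what it’s asked - can be sketched in a few lines. This is a toy, not an implementation: the `DigitalClone` class and its fields are stand-ins I made up, where a real version would hold handles into a voice-synthesis engine and a photoscan asset rather than placeholder strings.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalClone:
    """Ties the three captured pieces together behind one ask() loop."""
    name: str
    voice_model: str      # stand-in for a voice-synthesis model ID
    likeness: str         # stand-in for a path to the 3D photoscan
    persona: dict = field(default_factory=dict)

    def ask(self, question):
        # Answer from what the clone already "knows"; unknown questions
        # get logged so the persona can grow with each interaction.
        key = question.lower()
        answer = self.persona.get(key)
        if answer is None:
            self.persona[key] = "(still learning)"
            return "Ask me again once I've learned more."
        return answer

adam = DigitalClone(
    name="Adam",
    voice_model="voice-placeholder",
    likeness="scan-placeholder",
    persona={"what's your favorite dessert?": "Pie. Always pie."},
)
print(adam.ask("What's your favorite dessert?"))
```

The learning step here is deliberately dumb - it just records the unanswered question - but it mirrors the point above: the more you ask, the more the clone accumulates.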

What’s also interesting is the idea of an age baseline - whatever age you become “digitized.” It’s not hard to imagine we’ll be capable of shifting years forward and back to see and hear how our digital clones change. I’d certainly have fewer wrinkles and a different opinion of Elon Musk if you’d asked me questions ten years ago. Future Adam will undoubtedly have different ideas and personality quirks, too.

INTO ETERNITY

Digital cloning won’t mean you’ll get to live forever. You’ll still die. People will still mourn the loss. But what if we could leave a little bit of ourselves for future generations? What if we could create a tangible legacy? What if centuries from now, a digital Adam is still making tasteless jokes and wondering when he’ll get his flying car?
