Building Robot Charles (and Discovering What I Actually Do)

This weekend, I was waylaid by a stomach bug so mean that it must have been sent by Lucifer himself. I could barely stand for more than a minute at a time and spent most of the day lying down. 

The only upside was that it created a great parenting experiment. I told the kids we were starting bedtime early because I needed to rest, and that they'd have to get ready on their own. I begged them not to make too much noise in the hallway. 

The result? The quietest bedtime in weeks. They didn't push back on taking a shower, there were no squabbles, and they didn’t need to go back downstairs for anything they “forgot” to bring up to their room. I'm now seriously considering faking an illness every night to produce the same result!

The real lesson, though, was that work I thought was adding value at bedtime was actually unnecessary. I had been acting like a compliance officer, when things would go more smoothly if I reframed my role as a motivator.

That thinking turned out to parallel my AI building experiment this week. Inspired by a colleague, I decided to build an AI version of myself as an executive coach — Robot Charles. I’d show it to you, but it's not yet complete. Technically, the build was straightforward. But as with my previous experiments, the hardest part was curating the knowledge base, since it’s hard for AI, at least for now, to determine whether the relevant point in a piece of source text resides in a sentence, a paragraph, or several paragraphs. Hence, getting Robot Charles to the finish line requires tedious manual curation of excerpts from my writing; a toy sketch of the problem follows below.
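To make the curation problem concrete, here's a minimal sketch in Python of the granularity tradeoff. It is illustrative only, not Robot Charles's actual build: the sample text and function names are mine, and it assumes a simple retrieval setup where the bot answers from whichever stored excerpt best matches a query. Chunk the writing by sentence and each excerpt is precise but ideas get fragmented; chunk by paragraph and each excerpt is coherent but the key point can be buried.

```python
import re

# Toy source text standing in for a coach's writing. In a retrieval-based
# bot, excerpts like these are stored and matched against client questions.
SOURCE = """\
Good coaching questions open a door without pushing the client through it.
The best ones are short. They leave silence for the client to fill.

Trust is built in small moments, not grand gestures.
A well-timed joke in a serious conversation is one of those moments."""


def sentence_chunks(text: str) -> list[str]:
    """One chunk per sentence: precise, but a multi-sentence idea fragments."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def paragraph_chunks(text: str) -> list[str]:
    """One chunk per paragraph: coherent, but the key point may be diluted."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]


if __name__ == "__main__":
    # Neither granularity is right for every passage, which is why a human
    # ends up hand-curating the excerpts.
    for label, chunks in [("sentence", sentence_chunks(SOURCE)),
                          ("paragraph", paragraph_chunks(SOURCE))]:
        print(f"--- {label}-level chunks ---")
        for chunk in chunks:
            print(repr(chunk))
```

Run it and you can see the dilemma: the sentence-level split separates "The best ones are short" from the idea it completes, while the paragraph-level split lumps the point about jokes in with everything around it. No fixed rule picks the right excerpt every time, which is exactly the tedium I ran into.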

The most valuable part of the experiment, however, was that it forced me to reckon with two related questions: What do I do? And of the things I do, what's really valuable? 

The first question only requires looking inward. The second is harder, because it requires us to see the world from others’ perspectives. It was a surprisingly useful thought exercise in an era of emerging AI capabilities.

For example, it was relatively easy to train Robot Charles to ask good questions, and it will always be able to hold more information than I ever could. But asking the right question given a client's mood or context, being a human witness to their experience, and building enough trust to provide space for a well-timed joke in the middle of a serious topic are how I genuinely add value. Currently, it's hard for AI to replace those contributions.

I’m not worried about AI taking all our jobs, but I am frequently worried about people who underestimate the potential for AI to change what makes them valuable (or not). I usually hear the view as, “I get how AI could do analysis or some other lower-level skill, but it can’t replace my experience.” Of course, that logic fails to recognize that much of our “experience” is just an accumulation of information and analyses, and any judgment of AI capabilities that doesn't include the word “yet” is wildly short-sighted. 

What worries me most is that this kind of thinking is often a self-involved assessment in which we risk overvaluing what we do—the implicit logic being that it must be valuable because I do it. 

The good news is that even as AI reaches parity with some of our skills and makes other skills commodities, we don’t have to be lost. After all, Four Seasons probably uses the same standard laundry machines as Holiday Inn, and high-end restaurants use the same granulated sugar you can buy at the grocery store. Superior value doesn’t always come from having superior inputs. Instead, it’s the creative combination of inputs, and knowing which of them others actually value, that matters.
