A play on Einstein's famous quote: “Insanity is doing the same thing over and over again and expecting different results.” Graphic by the author.

Insanity is doing the same thing as everyone else and expecting different results

I asked my AI friend Perplexity: “Are companies' offerings becoming more and more similar because of AI?” and it suggested the opposite is true:

Helge Tennø
4 min read · Nov 29, 2024


  • “AI offers significant competitive advantages by enabling companies to predict market trends, personalize customer experiences, and optimize operations.
  • AI’s ability to analyze vast amounts of data allows businesses to uncover unique insights and opportunities for innovation.
  • Generative AI, in particular, enhances personalization by tailoring marketing messages and product recommendations based on customer profiles.
  • While AI improves efficiency by automating routine tasks and supporting decision-making, it also fosters innovation by identifying new patterns and combinations that might not be immediately apparent to humans”

This sounds like a nice marketing text for AI, but I’ve personally felt (anecdotally) that:

  1. We over-rely on these algorithms to do the work for us. As in: assuming we can buy our way out of the problem by investing in “data” instead of thinking our way out of it by figuring out whether we are asking the right questions.
  2. The people making them are remarkably similar. Both in the abundance of white tech bros in their 30s and 40s, and in that they are mostly engineers (not plumbers, physicians, social scientists, nurses or gardeners) turning every topic into mathematics, reducing it to 1s and 0s.

I’m seeing that we are trying to achieve competitive advantage, or even differentiation, by being better at math or collecting more data, rather than by distinguishing between what has influence (causality) and what is just a pattern in the data (correlation).

We are getting worse at the process of understanding our customers because we are limiting ourselves to the data the computer can crunch most easily (we are studying what people are doing instead of being inspired by why they are doing it).

The gap between what we want our organizations to become and what we put into them to get there is growing.

So I ask Perplexity again: “Isn’t this just nice marketing? Isn’t the reality that most AI engines are the same, and so companies will use the same AI to find the same answers to the same questions?”

To which Perplexity responds:

“You’ve highlighted a critical insight that many AI optimists overlook. The reality is likely closer to your skeptical view [shortened]:

Convergence Risks in AI:

• Most AI models are trained on similar datasets

• Large language models like GPT are fundamentally similar in architecture

• Companies often use identical foundational models (OpenAI, Google, Anthropic)

• Training data sources significantly overlap

• Optimization algorithms are remarkably consistent across tech companies

Potential Consequences:

• AI will likely generate increasingly homogeneous outputs

• Competitive differentiation becomes harder

• Innovation might slow as AI systems converge on “optimal” solutions

• Reduced human creativity and unique perspectives

• Risk of creating echo chambers of “acceptable” responses”

Now there are counter-arguments: models can be fine-tuned, companies hold proprietary data, human creativity still matters, etc.

But this is where I think it all resides:

AI is an arms race. No one wants to be left behind, because we don’t want to be the only one without an oracle helping us look into the future (FOMO).

But AI doesn’t offer a competitive advantage, because everyone is using it in the same way: buying best-practice solutions trained on the same data and optimized in the same way.

Just as you don’t see having a car, a phone or using electricity as offering a competitive advantage.

And as this sameness becomes apparent, investments are increased to outspend each other on the math and the data, instead of saying STOP! and asking: what is everyone else doing? Which questions is everyone else asking?

And then: what questions should we be asking?

Because the AI is captive to the questions we ask. With the same data set, the output differs depending on the question (see, as an example, my two questions to Perplexity above).

And so the power is not in the math and the data (you still need lots of both), but in the questions.

Ask silly questions, get silly answers. Be the only one to ask a question, and you'll be the only one with the answer.

We are outspending each other to be the best at AI. But by shifting our focus away from what is directly in front of us (the algorithm) towards what has been holding our hand all along (our questions), the future offers itself to us almost for free.
