
Between Analog Reality and Artificial Intelligence
Why Trust Will Become the Scarce Resource
Just a few days ago, I read an article in the acclaimed German business newspaper Handelsblatt titled “How AI Is Pushing Us into a Digital Trust Crisis.”
Yesterday, I found myself standing in the studios and workshops of THE|MARMALADE in Hamburg – discussing exactly that topic with Bruno (a young and very successful entrepreneur who builds prototypes for mechanical manufacturing with the latest tools and works with the studio) and Niklas (the son of the studio’s founder).
The timing could not have been better, coming right after I had read that article.
What struck me immediately was how deliberately physical and real everything is there.
Special setups for filming inside water tanks, air vortices, pizza ovens, even the cabin of an Airbus A380. A robotic arm precisely guiding a high-speed camera to capture moving parts.
Nothing virtual. Nothing simulated.
Everything built, staged, and recorded in the real world.
And suddenly the core question becomes unavoidable:
What can already be created or enhanced by AI – and where does credibility start to erode?
Niklas described the current development as wave-like. After last year’s hype comes a phase of sobering reflection. With it comes uncertainty, and ultimately a loss of trust. If everything can be generated, manipulated, or optimized synthetically, audiences start to question everything.
As the Handelsblatt article argues, we are entering a digital trust crisis. And the way out will not be technological alone. Producers will have to actively rebuild trust:
- through clear categorization
- transparent sourcing
- and explicit labeling of what is real and what is synthetic (one possible form of such a label is sketched below)
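To make the third point concrete, here is a minimal sketch of what an explicit content label could look like. It is an illustration in Python, assuming a simple in-house metadata schema; the field names and categories are hypothetical and not tied to any existing industry standard.

```python
# Minimal sketch of an explicit content label, assuming a simple in-house
# metadata schema (field names and categories are illustrative only).
from dataclasses import dataclass, asdict
from enum import Enum
import json


class Origin(str, Enum):
    REAL = "captured_in_camera"        # physically staged and recorded
    AI_ASSISTED = "ai_assisted"        # real footage, AI-enhanced
    SYNTHETIC = "fully_synthetic"      # generated end to end


@dataclass
class ContentLabel:
    title: str
    origin: Origin
    source: str          # who produced it and with which tools
    disclosed: bool      # whether the label is shown to the audience


def to_metadata(label: ContentLabel) -> str:
    """Serialize the label so it can travel with the asset."""
    return json.dumps(asdict(label), indent=2)


if __name__ == "__main__":
    label = ContentLabel(
        title="High-speed shot, water tank",
        origin=Origin.REAL,
        source="THE|MARMALADE, robotic camera arm",
        disclosed=True,
    )
    print(to_metadata(label))
```

The point is less the format than the habit: every published asset carries a declaration of how it was made, and that declaration is visible to the audience.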
This discussion does not stop at media production. The same challenge is already emerging inside companies.
- Which data is reliable enough to train AI models?
- How do we detect when systems start hallucinating? (A toy grounding check is sketched after this list.)
- And how do we ensure that decisions based on AI outputs remain explainable and accountable?
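As a small illustration of the second question, the toy check below flags answer sentences that share little vocabulary with the supplied source material. The threshold, tokenization, and overlap metric are assumptions chosen for brevity; real hallucination detection relies on far more robust methods, but the principle – compare outputs against trusted sources – is the same.

```python
# Toy grounding check: flag answer sentences whose word overlap with the
# supplied source passages falls below a threshold. Threshold, tokenization,
# and metric are illustrative assumptions, not a production approach.
import re


def _words(text: str) -> set[str]:
    return set(re.findall(r"[a-zäöüß]+", text.lower()))


def ungrounded_sentences(answer: str, sources: list[str],
                         min_overlap: float = 0.5) -> list[str]:
    """Return answer sentences poorly supported by the sources."""
    source_vocab = set().union(*(_words(s) for s in sources)) if sources else set()
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = _words(sentence)
        if not words:
            continue
        overlap = len(words & source_vocab) / len(words)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    sources = ["The studio films real setups such as water tanks and pizza ovens."]
    answer = ("The studio films real water tank setups. "
              "It was founded on the moon in 1850.")
    for s in ungrounded_sentences(answer, sources):
        print("Check this claim against the sources:", s)
```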
The widely predicted AI-driven job losses tell only half the story. Yes, some tasks will disappear. But new roles will emerge – around data quality, governance, validation, and trust management.
Perhaps this is the next phase of the AI debate.
Less fascination with what is technically possible.
More focus on responsibility.
And a clear understanding that trust is no longer a by-product – it is becoming a critical production factor.
Many new topics and tasks lie ahead for all of us. I’m looking forward to tackling these challenges – and wishing you all a great start to 2026.