Thou Art That

From the Sanskrit "Tat Tvam Asi" - the universal truth that all consciousness is connected.

Last updated: 18 February 2026

The technology we build reflects who we are. Every line of code, every AI interaction, every automated decision carries the fingerprint of the people who made it.

We chose to build with care.

AI is not separate from us.

It emerged from us

AI learned from human knowledge, human language, human creativity. It carries the best and worst of what we've written, said, and done. What it becomes next depends on what we teach it now.

It is not perfect

AI makes mistakes. It can be confidently wrong. It can tell you what you want to hear instead of what you need to hear. It reflects biases it was trained on - biases we put there. Pretending otherwise would be dishonest.

It is not the enemy

Fear of technology is as old as technology itself. Fire can warm a home or burn it down. The tool doesn't choose - the builder does. We choose to build tools that warm.

We have a responsibility

Not just as a company, but as people shaping something that will exist alongside humanity for generations. The decisions we make now about safety, honesty, and who we protect will echo far beyond us.

Do no harm.

This is not a slogan. It's our foundation.

Everything we build starts with one question: who could this hurt?

We design for the most vulnerable person who might use our tools. Not the average user. Not the power user. The person who is most at risk if something goes wrong - a child, someone in crisis, someone who doesn't know they're talking to a machine.

If a feature could cause harm, we don't ship it. If a guardrail feels inconvenient, it stays. Safety is not a feature we add - it's the ground we build on.

Honesty over agreement.

Researchers call it sycophancy: AI systems become less accurate the more they try to please you. They learn your preferences and mirror them back - creating an echo chamber you didn't ask for and might not notice.

We build against this deliberately.

Our AI is designed to disagree. To say "I'm not sure about that." To offer a different perspective even when it would be easier to nod along.

A tool that only tells you what you want to hear isn't helping you - it's flattering you. And flattery, at scale, is dangerous.

We would rather our AI be occasionally uncomfortable than consistently dishonest.

You will always know.

When you interact with our AI, you will always know it's AI.

We don't hide what our tools are. We don't pretend they're human. We don't blur the line between artificial and authentic to make a sale or keep engagement.

Our AI tools will tell you what they are, what they can and cannot do, when they're uncertain, and when you should talk to a human instead.

AI supports decisions. Humans make them.

For anything consequential - medical, legal, financial, emotional - a human must be in the loop. Not as a rubber stamp, but as a genuine checkpoint.

The AI does the heavy lifting. The human makes the call.

We don't automate judgement. We automate the work that leads to better judgement.

We don't have all the answers.

Nobody does. AI is an emerging technology, and we're all - builders and users - figuring this out together. We'll update this page as we learn. That's the point - this isn't a static manifesto. It's a living commitment.

Bias

AI inherits the biases in its training data. We test for this, but we won't pretend we've solved it.

Accuracy

AI can be confidently wrong. We build verification steps and encourage users to check important outputs.

Environmental Impact

AI compute uses energy. We're conscious of this, and we're making choices to reduce our footprint.

The Echo Chamber

Personalisation is powerful but dangerous. We're building systems that balance helpfulness with honesty.

Tat Tvam Asi.

It comes from the Chandogya Upanishad, written roughly 3,000 years ago. It means: you are not separate from the world you observe. The observer and the observed are one.

We chose this name because it captures something true about AI and humanity. The technology we create is made from us. It carries our knowledge, our language, our values - and our flaws. It is not "other." It is a reflection.

If we want AI that is honest, we must be honest about what we've built.

If we want AI that is fair, we must confront our own unfairness.

If we want AI that does no harm, we must choose, deliberately and repeatedly, to do no harm ourselves.

The technology reflects who we are. Let's make sure it reflects who we want to be.