Making information traceable.

We build provenance tools for journalism and deliver AI training for newsrooms — helping media organisations build trust with their audiences through transparency and technology.

Information Provenance

Content Credentials let audiences verify where digital content comes from. They already exist for images and video. We're building the first implementation of the C2PA standards for text, developed with the BBC and Stanford University.

1. Extract — AI identifies claims from canonical sources such as government press releases.

2. Match — Statements in an article are linked to the original claim, even when the wording differs.

3. Sign — A tamper-evident manifest is created, cryptographically binding the statement and the source claim together.

4. Embed — A verifiable credential sits inside the article for anyone to inspect.
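The four steps above can be sketched in miniature. This is an illustration only, with made-up function names: it uses a sentence split for claim extraction, a similarity ratio for matching, and an HMAC as a stand-in for the real C2PA signature (the actual standard uses X.509 certificates and CBOR-encoded manifests, not a shared key).

```python
import difflib
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate


def extract_claims(source_text):
    """1. Extract: treat each sentence of the canonical source as a claim."""
    return [s.strip() for s in source_text.split(".") if s.strip()]


def match_claim(statement, claims):
    """2. Match: link an article statement to the closest source claim,
    tolerating wording differences via a character-level similarity ratio."""
    return max(
        claims,
        key=lambda c: difflib.SequenceMatcher(None, statement.lower(), c.lower()).ratio(),
    )


def sign_manifest(statement, claim):
    """3. Sign: bind the statement and claim in a tamper-evident manifest."""
    manifest = {
        "statement_hash": hashlib.sha256(statement.encode()).hexdigest(),
        "claim_hash": hashlib.sha256(claim.encode()).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify(manifest):
    """Tamper check: recompute the signature over the hashes and compare."""
    payload = json.dumps(
        {k: manifest[k] for k in ("statement_hash", "claim_hash")}, sort_keys=True
    ).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest["signature"], expected)


def embed(article_html, manifest):
    """4. Embed: place the credential inside the article for anyone to inspect."""
    tag = (
        '<script type="application/json" id="content-credential">'
        + json.dumps(manifest)
        + "</script>"
    )
    return article_html + "\n" + tag


source = "The department allocated 2bn to flood defences. Spending rises next year."
statement = "Flood defences will receive 2bn from the department."
claim = match_claim(statement, extract_claims(source))
credential = embed("<p>...</p>", sign_manifest(statement, claim))
```

Because the manifest stores hashes rather than raw text and the signature covers both hashes, any edit to either the article statement or the source claim causes verification to fail, which is what makes the binding tamper-evident.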

AI Training for Newsrooms

We work with media organisations to build real AI capability across their teams, from editorial leadership to technical staff.

AI Fundamentals

How AI works, what it can and can't do, and what it means for journalism. No hype, no jargon.

Workflow Mapping & Test Beds

Map existing processes, identify where AI adds value, and design focused experiments to test it.

Newsroom Leadership

For editors and leaders: understanding risk, setting guidelines, and making informed decisions about AI adoption.

Technical Innovation

For teams ready to build: responsible vibe coding, prototyping with Python, and developing working test beds using AI tools.

We've worked with professionals from

BBC, Stanford University, Carbon Brief, Washington Post, NPR, London School of Economics, CBC, Correctiv, Poynter, Al Jazeera, Media Capital, Reuters Institute