For readers outside the Netherlands, a little context helps. NRC is one of the main Dutch newspapers. Peter Van der Meersch is a well-known senior media figure who previously led NRC and later held senior roles within Mediahuis in Ireland. That is one reason this story travelled beyond the Dutch press and into English-language coverage as well.
The facts of the case are straightforward. NRC investigated Van der Meersch’s use of AI in his own newsletter work and reported that fabricated quotations had been published. The Guardian then reported on 20 March 2026 that Mediahuis had suspended him from his fellowship role after NRC’s findings, and that several quoted people said they had not made the statements attributed to them.
In his own response, Van der Meersch wrote:
“I summarised reports using AI tools and worked from those summaries, trusting they were accurate.”
and:
“I wrongly put words into people’s mouths”
Source: Columbia Journalism Review and NL Times.
Van der Meersch’s apology matters because he is acknowledging a real editorial failure. At the same time, the mistake was not only that he trusted the output too much at the end. Our reading is that the workflow itself was too loosely instructed and too weakly verified. He was clearly using tools such as ChatGPT, but there is no sign here of a more structured workflow that would automate parts of the checking around quotes, claims, and source references before publication.
That gap is not unique to one editor. It is common across much of the news industry and across white-collar work more broadly. Software engineers have moved faster into agentic workflows and have spent more time getting used to delegating meaningful work to AI systems under review, with logs, tests, and explicit approval points. Many other professions are still earlier in that transition.
The issue is operational. There was no reliable loop that forced the draft back to source evidence before publication. That is exactly why incidents like this are useful to study. They show where an agentic workflow could have helped by automating parts of the verification work that were apparently left undone.
If a team wants to use AI responsibly, it cannot rely on a vague instruction to “check the output carefully.” That is not a system. A system needs explicit stages. It needs rules for what AI may do, what it may suggest, what it may never invent, and what must always be tied back to primary material. It needs structured handoffs between drafting and verification. It also needs a hard stop when evidence is missing or weak.
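To make "explicit stages" concrete, here is a minimal sketch in Python of what such rules could look like once they are written down as procedure rather than habit. The stage names and rule fields are our own illustrative assumptions, not a description of any existing tool:

```python
# A minimal sketch of explicit workflow stages and rules, not a product.
# All names (stages, fields) are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class StageRules:
    may_draft: bool                 # AI may produce text at this stage
    may_suggest: bool               # AI may propose edits for human review
    may_invent_quotes: bool         # never True in a responsible setup
    requires_primary_source: bool   # output must tie back to primary material

PIPELINE = {
    "collect": StageRules(may_draft=False, may_suggest=True,  may_invent_quotes=False, requires_primary_source=True),
    "draft":   StageRules(may_draft=True,  may_suggest=True,  may_invent_quotes=False, requires_primary_source=False),
    "verify":  StageRules(may_draft=False, may_suggest=True,  may_invent_quotes=False, requires_primary_source=True),
    "publish": StageRules(may_draft=False, may_suggest=False, may_invent_quotes=False, requires_primary_source=True),
}

def hard_stop(stage: str, evidence_missing: bool) -> None:
    """Refuse to continue when evidence is missing at a stage that requires it."""
    if evidence_missing and PIPELINE[stage].requires_primary_source:
        raise RuntimeError(f"Hard stop at '{stage}': evidence missing or weak.")
```

The point is not this particular code, but that "what AI may do" and "when the process must stop" become inspectable data instead of an unwritten norm.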
Skills and agentic workflows matter here because they turn that kind of control into written procedure. The useful systems are not just drafting tools. They are loops with checks, corrections, and repeatable control points.
In a responsible editorial or knowledge workflow, “please verify” is not enough. AI can help collect source material, compare versions, draft working notes, and prepare a first pass. But any direct quote has to carry its source with it: transcript or recording reference, speaker name, date, and the exact passage it came from. If a generated quote does not match the source wording exactly, it cannot be kept as a quote. It either becomes a paraphrase with attribution, or it gets deleted.
Every factual claim about dates, roles, events, numbers, and allegations needs the same treatment. The model can draft the sentence, but the workflow must attach the evidence before the sentence survives. A verification step checks whether every quote and claim has evidence attached, and anything unsupported is blocked rather than left hanging for later.
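As a rough sketch, and assuming hypothetical field and function names, the rule that every quote carries its source and every unsupported claim is blocked can be expressed in a few lines:

```python
# Illustrative sketch: quotes and claims carry evidence, and verification
# blocks anything unsupported. Field and function names are assumptions.
from dataclasses import dataclass

@dataclass
class Quote:
    text: str            # the quoted wording as it appears in the draft
    speaker: str
    date: str
    source_ref: str      # transcript/recording reference or paragraph locator
    source_passage: str  # the exact passage the quote is taken from

def check_quote(q: Quote) -> str:
    """Keep only quotes whose wording matches the source exactly."""
    if q.text.strip() == q.source_passage.strip():
        return "keep"
    # Mismatch: downgrade to an attributed paraphrase, or delete.
    return "paraphrase_or_delete"

@dataclass
class Claim:
    sentence: str
    evidence_ref: str | None  # None means no evidence is attached yet

def verify(quotes: list[Quote], claims: list[Claim]) -> list[str]:
    """Return blocking problems; an empty list means the draft may proceed."""
    problems = []
    for q in quotes:
        if check_quote(q) != "keep":
            problems.append(f"Quote by {q.speaker} does not match source wording.")
    for c in claims:
        if c.evidence_ref is None:
            problems.append(f"Unsupported claim: {c.sentence!r}")
    return problems
```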
A final human reviewer should see not only the polished draft, but also the evidence trail and any unresolved exceptions. The output is published only after the workflow has either cleared those checks or explicitly escalated unresolved issues.
That can be implemented without much ceremony. An editorial-verification skill can require the model to extract every direct quote, attach the source document, speaker, and timestamp or paragraph reference, and flag any wording that does not match exactly. The same skill can require every non-trivial factual sentence to carry a source note. A publication step can refuse to proceed if any quote or claim still lacks evidence.
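A publication step of that kind can be as simple as a function that refuses to return a publishable result while problems remain, and escalates them to the human reviewer instead. Again, the names here are illustrative assumptions, not a reference implementation:

```python
# Hypothetical publication step: it either clears the checks or escalates,
# and it never publishes with open problems. Names are illustrative.
def publication_gate(draft: str, problems: list[str]) -> dict:
    """Refuse to publish while any quote or claim still lacks evidence."""
    if problems:
        return {
            "status": "blocked",
            "escalate_to": "human_reviewer",  # the reviewer sees the exceptions, not just the draft
            "unresolved": problems,
        }
    return {"status": "cleared_for_review", "draft": draft, "unresolved": []}

if __name__ == "__main__":
    # Example: one unverified quote blocks publication outright.
    result = publication_gate("polished draft text", ["Quote by J. Doe not found in transcript."])
    print(result["status"])  # -> "blocked"
```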
The same logic applies in article workflows more broadly. A publishing skill can treat critical review as a gate, not a courtesy pass, and pair that review with an explicit verification pass for quotes and claims. The review should stay hostile but fair, focused on weak claims, unsupported statements, structural confusion, SEO vagueness, and lines that sound polished without being grounded. For this kind of article, that review should also ask two simple questions: which claims are still too weak for publication, and which quoted lines have not been verified against the source.
That kind of setup is not limited to journalism. The same pattern matters in research, policy, legal review, compliance, investor communications, and internal reporting. Anywhere an organisation wants AI to help with high-trust material, the question is straightforward: where is the loop that catches bad output before it becomes public or operationally binding?
AI can make that work faster and more efficient, but it does not take over accountability. The final responsibility still sits with a person, and in practice that means the reputation on the line is still human as well.
Our view is that responsible AI adoption starts there. Not with hype or blanket bans, but with shaping the work into a process that can be guided, verified, and improved over time. That is what we mean by an agentic workflow.
If your team is trying to use AI responsibly in real work processes, the practical questions are usually the same: where the evidence sits, who verifies what, and which step can block publication when the draft outruns the source material. That is where skills, agentic workflows, and review flows start to matter. Helping teams make that shift is part of our work, and it is also why we thought this case was worth using as a concrete example. If that is the kind of approach you are looking for, talk to us.
