Turning an LLM prompt into a multi-agent discussion
LLMs are missing a sense of trust. Not because they’re broken, but because we haven’t built that in yet. Remember how Google Search earned its reputation? We’ve got so used to LLMs that PageRank now sounds like a dusty book no one’s opened in decades. But it’s still one of the reasons Google feels more reliable. PageRank measures the importance of a webpage by how many other important pages link to it, like votes of confidence. It’s not perfect, but it gives more weight to trustworthy sources.
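As a refresher, here’s a minimal sketch of that idea in Python. The link graph, damping factor and iteration count are illustrative toy values, not anything Google actually uses:

```python
# Toy PageRank: a page's score is the sum of the scores of pages linking to it,
# spread across their outgoing links, plus a small damping term.
links = {
    "bbc.example": ["blog.example", "wiki.example"],
    "wiki.example": ["bbc.example"],
    "blog.example": ["bbc.example", "wiki.example"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {}
        for page in pages:
            # Votes of confidence: inbound links, weighted by the linker's own rank.
            inbound = sum(
                rank[other] / len(targets)
                for other, targets in links.items()
                if page in targets
            )
            new_rank[page] = (1 - damping) / len(pages) + damping * inbound
        rank = new_rank
    return rank

print(pagerank(links))  # pages with more (and better-ranked) inbound links score higher
```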
Trust is the missing layer
Rumours, conspiracy theories and misinformation spread easily because LLMs don’t reason like humans. They don’t judge credibility or stop to ask whether something adds up. They never say: “Wait, that makes no sense.” They can’t tell the difference between a crypto scammer’s blog and a BBC journalist’s reporting; they just see patterns.
What’s worrying is that it only takes 5 to 8 fake articles and a handful of dodgy tweets to convince a large language model that a lie is true. Start a rumour, and you’ll see Gemini, Copilot, and ChatGPT go along with it.
LLMs are trained on massive amounts of text scraped from the internet. If a site repeats a lie and no one steps in to set the record straight, the model will keep repeating the lie until someone publishes something that changes the pattern.
Something I learned from working with BBC journalists is that if you’re a media organisation, trust is your product. You’re not just publishing stories, you’re asking people to believe you. Before something gets published, it goes through a review process where editors or peer reviewers check that the information is accurate, the sources are credible, and the claims are backed by solid evidence.
A practical solution to the trust problem
What if, instead of having one LLM interpreting different sources, we had multiple “agents of trust” working as a decentralised reasoning engine? Let me explain what it could look like.
You deploy LLM agents, each with a clear worldview, ideology, or domain speciality. Think of them like human experts sitting around a table. Each agent has:
A persona (e.g. journalist, teacher, lawyer, musician, scientist, humanist, optimist)
Access to different source sets
A reasoning style (e.g. strict logic, gut feeling, evidence-based)
What they do:
Read a claim or question
Interpret it based on their worldview
Consult their LLM
Debate or discuss with other agents
Reach a conclusion or consensus (or acknowledge disagreement)
It’s like a jury of minds reasoning together, with transparency about why they believe something.
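To make that concrete, here’s a minimal sketch of what one of these agents could look like in Python. The `TrustAgent` class and the `consult_llm` helper are illustrative names of my own, not an existing API; you’d swap in whichever chat-completion client you actually use.

```python
from dataclasses import dataclass, field

def consult_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion call; any LLM client would do here.
    return f"[model response to: {prompt[:60]}...]"

@dataclass
class TrustAgent:
    """One 'agent of trust': a persona, its own sources, and a reasoning style."""
    persona: str                                       # e.g. "journalist", "scientist", "sceptic"
    sources: list[str] = field(default_factory=list)   # the source set this agent consults
    reasoning_style: str = "evidence-based"            # e.g. "strict logic", "gut feeling"

    def interpret(self, claim: str) -> str:
        # The agent reads the claim through its own worldview before consulting the LLM.
        prompt = (
            f"You are a {self.persona} who reasons in a {self.reasoning_style} way. "
            f"Relying only on these sources: {self.sources}, assess this claim: {claim}"
        )
        return consult_llm(prompt)
```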
Group reasoning
This sets up the shift from a one-on-one chat to a simulated multi-agent discussion. It makes the interaction more social and offers a more interesting user experience.
One way to activate “group reasoning” could be by asking ChatGPT to invite some friends. At that point, the model acknowledges the request and asks for a prompt.
Prompt:
The user enters a prompt, and ChatGPT selects a panel of agents with distinct viewpoints and areas of expertise.
Analysis:
ChatGPT adopts a moderator role and allows each agent to share their perspective, focusing on evidence, logic, social context, or scientific principles.
Consensus:
Finally, a brief discussion between the agents takes place. This shows how real experts might reason together, agree, disagree, and refine their views. Then, ChatGPT summarises the group’s consensus.
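Putting those steps together, here’s a rough sketch of the moderator loop, reusing the `TrustAgent` and `consult_llm` placeholders from the earlier snippet; the panel, prompts and example claim are assumptions for illustration only.

```python
def group_reasoning(claim: str, panel: list[TrustAgent]) -> str:
    """Moderator loop: gather each agent's view, stage a short discussion,
    then summarise the group's consensus (or its disagreement)."""
    # 1. Each agent interprets the claim from its own worldview.
    perspectives = {agent.persona: agent.interpret(claim) for agent in panel}

    # 2. A brief discussion round: each agent reacts to the others' views.
    discussion = []
    for agent in panel:
        others = {p: view for p, view in perspectives.items() if p != agent.persona}
        discussion.append(
            consult_llm(f"As a {agent.persona}, respond to these views on '{claim}': {others}")
        )

    # 3. The moderator summarises agreement and flags any remaining disagreement.
    return consult_llm(
        f"Summarise the panel's consensus on '{claim}'. "
        f"Initial perspectives: {perspectives}. Discussion: {discussion}. "
        "State clearly where the panel still disagrees."
    )

panel = [
    TrustAgent("journalist", sources=["bbc.co.uk"]),
    TrustAgent("scientist", sources=["peer-reviewed papers"]),
    TrustAgent("sceptic", reasoning_style="strict logic"),
]
print(group_reasoning("Drinking seawater is good for you", panel))
```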
The psychology of group reasoning
Talking to one model feels like a one-on-one debate. But when, instead of a single voice, there’s a room full of “people”, each with their own view, it’s no longer about being right or wrong. It’s about listening, comparing, and thinking more clearly.
There’s a reason this feels better:
You naturally compare your views with others to understand what’s reasonable. When there are several people in the conversation, your brain doesn’t just react defensively. It starts comparing: Who makes the most sense? Which reasoning feels right? This makes space for reflection, not just reaction.
In group decision-making, when multiple experts weigh in independently, their collective judgement is often better than any one individual. That’s what the “decentralised engine” is doing, reaching better conclusions through structured disagreement.
Seeing a claim through the eyes of a sceptic, humanist, journalist, and physicist helps you mentally simulate different ways of reasoning.
It feels safer to change your mind when others are thinking aloud too. You’re not being told you’re wrong, you’re watching a conversation unfold and choosing who to align with. That’s powerful.
Collaboration models
You could define different ways agents interact:
Majority vote: useful when high-trust consensus matters.
Weighted score: some agents get more say depending on context.
Contrarian flag: highlight when one agent strongly disagrees.
Chain of trust: agents pass claims down a line of verification, like peer review.
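As a rough illustration of how three of these modes could be scored, here’s a small Python sketch, assuming each agent ends up with a simple verdict such as “supported” or “refuted”; the labels and weights are made up for the example.

```python
from collections import Counter

def majority_vote(verdicts: dict[str, str]) -> str:
    """Majority vote: the most common verdict across agents wins."""
    return Counter(verdicts.values()).most_common(1)[0][0]

def weighted_score(verdicts: dict[str, str], weights: dict[str, float]) -> str:
    """Weighted score: some agents get more say, e.g. the scientist on a medical claim."""
    totals: dict[str, float] = {}
    for persona, verdict in verdicts.items():
        totals[verdict] = totals.get(verdict, 0.0) + weights.get(persona, 1.0)
    return max(totals, key=totals.get)

def contrarian_flags(verdicts: dict[str, str]) -> list[str]:
    """Contrarian flag: surface any agent whose verdict departs from the majority."""
    majority = majority_vote(verdicts)
    return [persona for persona, verdict in verdicts.items() if verdict != majority]

verdicts = {"journalist": "supported", "scientist": "supported", "sceptic": "refuted"}
print(majority_vote(verdicts))                        # -> "supported"
print(weighted_score(verdicts, {"scientist": 2.0}))   # -> "supported"
print(contrarian_flags(verdicts))                     # -> ["sceptic"]
```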
Conclusion
ChatGPT and other models aren’t media organisations, but people sometimes treat them like media sources. They ask for news, facts, opinions. And that’s the biggest problem users, and LLMs, are facing right now. These models can sound confident, even when they’re wrong.
The proposal here isn’t to build a smarter model, but a more transparent, social experience. One where multiple agents, each with a defined identity and point of view, respond to the same prompt. A scientist, a sceptic, a historian, an entrepreneur, a teacher, a journalist, an investor. They don’t need to agree. In fact, disagreement makes the experience more human.
This turns a one-on-one chat into something more interesting. It feels more social, and more interactive. And it doesn’t require a new UI, just a new way of thinking about conversation.
Links
This post was inspired by the conversation @kevinweil, Chief Product Officer at OpenAI, had with @lennysan
Salesforce AI Research Delivers New Models to Make Future Agents More Intelligent, and Trusted






