Misinformation as a CS Problem

Turning Misinformation into a Numbers Game: How We Simplify to Solve.


Introduction

We've always known that misinformation is not just tough to tackle—it's a loaded gun, constantly evolving and notoriously slippery. So, what do engineers do when faced with a complex problem? We simplify. We break it down into manageable, measurable components. At Critique, we approached misinformation as a classic engineering challenge, reducing it to a numbers game.

Okay, that's the ChatGPT answer out of the way. For the purposes of determining whether or not a fact can be verified, we have determined that it can exist in one of three states (a minimal sketch of these states in code follows the list):

  1. Verified - This is a recursive definition of sorts, but for practical purposes it serves quite well. This is a fact that appears in multiple reliable sources, each source being deemed reliable itself by virtue of having a history of producing facts verified by multiple other sources.
  2. Unverified - There exists no supporting reliable source for this fact, nor a reliable source that contradicts it. This is the state that most often corresponds to "new" information (breaking news articles).
  3. Misinformation - A reliable source supports the negation of this fact ("reliable" as per definition 1).
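
To make the three states concrete, here is a minimal sketch of how they might be represented, with the classification rule expressed over counts of reliable sources. The names (`FactStatus`, `classify`) and the exact thresholds are illustrative assumptions, not our production code.

```python
from enum import Enum, auto


class FactStatus(Enum):
    VERIFIED = auto()        # backed by multiple reliable sources
    UNVERIFIED = auto()      # no reliable support or contradiction yet
    MISINFORMATION = auto()  # a reliable source supports the negation


def classify(supporting: int, contradicting: int) -> FactStatus:
    """Map counts of reliable sources (per definition 1) found for the
    fact and for its negation onto the three states."""
    if supporting >= 2 and contradicting == 0:
        return FactStatus.VERIFIED
    if contradicting > 0 and supporting == 0:
        return FactStatus.MISINFORMATION
    # Conflicting evidence, or no reliable evidence either way (the
    # typical breaking-news case): the fact stays unsettled.
    return FactStatus.UNVERIFIED
```

Note that conflicting evidence deliberately falls through to `UNVERIFIED`, matching the conflict rule described further below.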

The entire graph kicks off when a new piece of content is received, either from sites users visit or from content they manually submit for critique. It begins with the semantic parsing of the facts the content contains. This step is a single call to an LLM agent, referred to herein as the Initializer, whose very first task on receiving new content is to dissect it into an array of facts. Once the facts are extracted, they are sent to the remaining agents for verification. Concurrently, the Initializer is responsible for updating the source database with the source these facts (pending classification) were extracted from. This new information then kicks off an update of the active-sources cache, where we check which of the newly seen facts are already in the cache, to determine whether there is a trend or spike in a particular fact.
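
A rough sketch of the Initializer's responsibilities, assuming hypothetical interfaces (`llm.complete`, `source_db.upsert`, and a `dispatch` callable) in place of our internal components:

```python
from collections import Counter
from typing import Callable

EXTRACTION_PROMPT = (
    "Extract every discrete factual claim from the following content. "
    "Return one claim per line."
)


class Initializer:
    def __init__(self, llm, source_db, dispatch: Callable[[list[str]], None]):
        self.llm = llm                 # LLM client used for semantic parsing
        self.source_db = source_db     # store of sources pending classification
        self.dispatch = dispatch       # hands facts to the research agents
        self.active_cache = Counter()  # rolling counts of recently seen facts

    def handle(self, content: str, source_url: str) -> None:
        # 1. Semantic parsing: a single LLM call dissects the content
        #    into an array of facts.
        raw = self.llm.complete(f"{EXTRACTION_PROMPT}\n\n{content}")
        facts = [line.strip() for line in raw.splitlines() if line.strip()]

        # 2. Send the extracted facts on for verification.
        self.dispatch(facts)

        # 3. Concurrently, record the source they came from; its own
        #    classification is still pending.
        self.source_db.upsert(source_url, facts)

        # 4. Update the active-sources cache: a fact that is already in
        #    the cache and climbing in count signals a trend or spike.
        self.active_cache.update(facts)
```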

The Initializer then passes the parsed facts to a research agent, whose task is, for each fact, to search for supporting sources. Retrieving more than one reliable source for a fact classifies it as verified. In parallel, the fact is converted into its negation and searched for identically. In the unlikely event that both searches return reliable sources, the fact is classified as unverified, and every source behind those results takes a hit to its reliability rating. Most of the time, only one of the two searches returns a result, and the fact is classified accordingly. For breaking news articles, typically neither search returns a result, since the fact has not yet propagated to other sources, so the fact is classified as unverified. In this instance, however, the originating source does not get a reliability update, since the fact itself is merely unverified rather than contradicted.
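
As a sketch of the research agent's loop under the same assumptions, reusing `FactStatus` and `classify` from the earlier snippet; `search`, `negate`, and `reliability.penalize` are hypothetical stand-ins for the real search and rating machinery:

```python
def verify_fact(fact: str, search, negate, reliability) -> FactStatus:
    """Classify one fact and apply the reliability updates described above.

    Assumed interfaces: `search(claim)` returns a list of reliable
    sources supporting `claim`, `negate(fact)` rewrites the fact as its
    negation (in practice another LLM call), and `reliability.penalize`
    docks a source's rating.
    """
    supporting = search(fact)             # sources backing the fact
    contradicting = search(negate(fact))  # sources backing its negation

    if supporting and contradicting:
        # Conflicting reliable evidence: every source involved takes a
        # hit to its reliability rating.
        for src in supporting + contradicting:
            reliability.penalize(src)

    # When neither search returns a result (breaking news), the
    # originating source's reliability is deliberately left untouched.
    return classify(len(supporting), len(contradicting))
```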

So, we've glossed over some details and a lot of implementation, and at the time of writing some of this logic has not yet been pushed to production. Nevertheless, we hope this insight provides transparency into the process and gives an intuition for why you can rely on the neutrality of the solution we are building.