# Parliament
Parliament is an open-source news project that organizes news by Confidence, Bias, Newsworthiness, and Contextualization to tell maximally truthful news in a clear and measurable way.
## Install / Use

`/learn @kornha/ParliamentREADME`
## Key Concepts
- Parliament is the open-source repo that anyone can contribute to.
- See the hosted website (in pre-release) at www.parliament.foundation.
- Download the iOS app here.
- Parliament is based on the whitepaper Consensus News.
- Anyone can host a Parliament, provided there is compliance with the LICENSE.
This project is in pre-release and, as such, is a preview. Many features are in development.
<div align="center">
  <img src="https://github.com/user-attachments/assets/fc4219d8-0ad8-4adb-b3e1-d8201f6e5255" alt="01f62b98-cedc-4730-b564-d71607a066b3" height="300">
  <img src="https://github.com/kornha/political_think/assets/5386694/512b8b18-edec-4b15-9af2-67d70388def0" alt="01f62b98-cedc-4730-b564-d71607a066b3" height="300">
  <img src="https://github.com/kornha/political_think/assets/5386694/e8d7932f-1f84-4f1b-ae1a-5553aa15759e" alt="01f62b98-cedc-4730-b564-d71607a066b3" height="300">
  <img src="https://github.com/kornha/political_think/assets/5386694/f52bb89e-18f4-4a8a-819b-acc629cb79f8" alt="01f62b98-cedc-4730-b564-d71607a066b3" height="300">
</div>

## Table of Contents

- Whitepaper
- Mission
- Confidence
  - Confidence Algorithm
## Whitepaper

*Consensus News*
## Mission
Parliament's mission is telling maximally truthful news in a clear and measurable way.
Think about how you understand news. You may consume one or multiple news sources. It may be social media content, cable news, online or print news, your favorite Telegram group, or another news provider. Ultimately, you will infer the claims these sources are making, and when you are sufficiently satisfied, you will either arrive at a conclusion about what you think has happened or consider yourself unsure. Implicitly, you are assigning a unique judgment value to various news outlets, favoring the credibility of some over others. Once you have enough credible confirmations for your own personal threshold, you may consider something to be effectively true.
For people, the credibility we perceive in an outlet is weighted by many factors, including our Confidence in the accuracy of the outlet (this X account has been right many times, so it is probably correct now) and our Bias toward the outlet (this news provider conforms to my beliefs about the world, so I will trust its interpretation).
We also perceive some news events to be more important than others. This may be due to various factors, such as ramifications that impact our lives, significant geopolitical events, events that confirm or reject our beliefs, local concerns, unique occurrences (man bites dog), or other considerations. For example, on August 9, 1974, when the New York Times published NIXON RESIGNS in capital letters on their front page, they were expressing the significant urgency of the event. While the importance of the Story differs for each person, i.e., it may have been more interesting to an American autoworker than to a Nepalese farmer, the New York Times felt it was newsworthy enough for their audience to warrant an all-caps title. This judgment we call Newsworthiness.
News, when told, also carries a varying contextualization around the event that may significantly alter our understanding and perception of it. In the simplest case, an image can be powerful on its own, only for its meaning to reverse once it is zoomed out. The fairness and completeness with which a Story is told we call Contextualization.
Each human consumes news differently due to their unique experiences and circumstances. This is partly why the same event can be perceived so differently by different individuals. Parliament's mission is to tell maximally truthful news in a clear and measurable way.
Let's dive into how the system works.
## Confidence

> Confidence: The likelihood of something being true based on how true it has been in the past.
How do we know that something is true? This is a philosophical question, and there are several product approaches to addressing it. X, for instance, relies on community notes. FactCheck.org focuses on fact-checking. Both approaches work by direct content validation, a strategy that, when implemented correctly, can be extremely accurate.
Parliament adds a level of abstraction on top of these approaches. In the view of Parliament, FactCheck.org, @elonmusk, X's fact checker, the NYT, a random account on IG, etc. are each what we call an Entity. Put differently, Parliament uses fact checkers too; it just tries to use all of them. And, based on whether they have been right or wrong in the data we have collected, we know how much to trust them going forward. In this mode of thinking, we can assign news Confidence scores between 0.0 and 1.0, where 0.0 represents maximal certainty that something is false and 1.0 represents maximal certainty that something is true. Total certainty is unknowable, so neither 0.0 nor 1.0 can ever be reached.
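To make the score concrete, here is a minimal sketch, not taken from the Parliament codebase, of how a raw score could be kept strictly inside the open interval (0.0, 1.0) and read back as a plain-language label. The helper names, the epsilon margin, and the label cutoffs are all assumptions for illustration:

```javascript
// Hypothetical helpers, not part of Parliament's actual code.
// EPSILON keeps scores strictly inside (0, 1): total certainty is unknowable.
const EPSILON = 1e-6; // assumed margin

// Clamp any raw score into the open interval (EPSILON, 1 - EPSILON).
function clampConfidence(raw) {
  return Math.min(1 - EPSILON, Math.max(EPSILON, raw));
}

// Assumed cutoffs purely for illustration.
function describeConfidence(score) {
  if (score >= 0.75) return "likely true";
  if (score <= 0.25) return "likely false";
  return "uncertain";
}
```

Any real implementation would tune these cutoffs; the point is only that a score near either pole is a strong signal, while the middle of the range means the system is still unsure.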
### Confidence Algorithm
We cover the confidence algorithm only at a high level here; the specifics are written in the code.
Suppose that, as a human, I am introduced to a new account on X/IG/TikTok, etc. Depending on who introduced me, perhaps the platform's algorithm, perhaps a friend, I may view the content differently. Regardless, I will have some measure of doubt about how much to trust the author (aka the Entity), and that trust may change over time.
At Parliament we assume new sources carry even uncertainty: currently, we treat a new source as 50% likely to be honest.
We then look at the Claims a given Entity has made. If a claim is mutually agreed upon by many parties, old and new, we can start to build a Consensus on it, much as in a blockchain consensus algorithm (a possible future implementation). Once claims reach a level of consensus, we can punish the Entities that have been wrong and reward those that have been right. Of course, once we punish or reward Entities, the consensus on other claims may change, creating a finite ripple effect.
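The loop above can be sketched in a few lines. This is an illustration only, not Parliament's implementation: entity trust scores weight the votes on a claim, and decided claims then feed back into entity trust. All names, thresholds, and the learning rate here are assumptions:

```javascript
// Illustrative sketch of one round of the consensus loop. Not real code
// from the repo; thresholds (0.75 / 0.25) and rate are assumed values.
const NEW_ENTITY_TRUST = 0.5; // new sources start at even uncertainty

// Trust-weighted share of support for a claim, in [0, 1].
function claimConsensus(claim, trust) {
  const pro = claim.pro.reduce((s, e) => s + (trust[e] ?? NEW_ENTITY_TRUST), 0);
  const con = claim.against.reduce((s, e) => s + (trust[e] ?? NEW_ENTITY_TRUST), 0);
  return pro + con === 0 ? 0.5 : pro / (pro + con);
}

// One pass of reward/punishment based on which claims are decided.
function updateTrust(claims, trust, rate = 0.1) {
  const next = {...trust};
  const bump = (e, delta) => {
    next[e] = Math.min(0.99, Math.max(0.01, (next[e] ?? NEW_ENTITY_TRUST) + delta));
  };
  for (const claim of claims) {
    const c = claimConsensus(claim, trust);
    if (c > 0.75) {
      // Claim decided true: reward supporters, punish opponents.
      claim.pro.forEach((e) => bump(e, rate));
      claim.against.forEach((e) => bump(e, -rate));
    } else if (c < 0.25) {
      // Claim decided false: the reverse.
      claim.pro.forEach((e) => bump(e, -rate));
      claim.against.forEach((e) => bump(e, rate));
    }
    // Otherwise the claim is still undecided; no update yet.
  }
  return next;
}
```

Running `updateTrust` repeatedly until the scores stop moving would model the finite ripple effect: each adjustment can flip the consensus on other claims, which triggers further adjustments.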
Currently we track confidence in an Entity by:
```js
/**
 * Calculate the confidence of an entity based on its past statements.
 * @param {Entity} entity The entity object.
 * @param {Statement[]} statements The array of statement objects.
 * @return {number} The confidence of the entity.
 */
function calculateEntityConfidence(entity, statements) {
  let totalScore = BASE_CONFIDENCE;
  let count = 0;
  // Loop through statements, most recent first
  for (let i = 0; i < statements.length; i++) {
    const statement = statements[i];
    if (statement.confidence == null) {
      continue;
    }
    count++;
    // More recent statements are penalized/rewarded more
    const decay = Math.pow(DECAY_FACTOR, i);
    const decidedPro = statement.confidence > DECIDED_THRESHOLD;
    const decidedAgainst = statement.confidence < 1 - DECIDED_THRESHOLD;
    if (!decidedPro && !decidedAgainst) {
      continue;
    }
    let isCorrect = false;
    let isIncorrect = false;
    if (decidedPro) {
      isCorrect = entity.pids.some((pid) => statement.pro.includes(pid));
      isIncorrect = entity.pids.some((pid) => statement.against.includes(pid));
    } else if (decidedAgainst) {
      isCorrect = entity.pids.some((pid) => statement.against.includes(pid));
      isIncorrect = entity.pids.some((pid) => statement.pro.includes(pid));
    }
    // This allows us to weight the confidence equally for distance to 0 or 1
    const adjustedConfidence = Math.abs(statement.confidence - 0.5) * 2;
    if (isCorrect) {
      totalScore +=
        CORRECT_REWARD * (1 - totalScore) * decay * adjustedConfidence;
    } else if (isIncorrect) {
      // Penalize incorrect statements symmetrically
      totalScore -=
        INCORRECT_PENALTY * totalScore * decay * adjustedConfidence;
    }
  }
  return count === 0 ? BASE_CONFIDENCE : totalScore;
}
```
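Two of the weighting terms in the algorithm are worth unpacking. With a decay factor below 1, `Math.pow(DECAY_FACTOR, i)` makes older statements (larger `i`) count progressively less, and `Math.abs(confidence - 0.5) * 2` rewards decisively decided statements equally whether the claim settled near 0.0 or near 1.0. A standalone illustration, where the constant value is an assumption rather than the one used in the codebase:

```javascript
// Assumed value for illustration; the real constant lives in the codebase.
const DECAY_FACTOR = 0.9;

// i = 0 is the most recent statement; its weight is exactly 1,
// and older statements shrink geometrically.
const weights = [0, 1, 5].map((i) => Math.pow(DECAY_FACTOR, i));

// Distance-from-0.5 weighting: a claim decided at 0.9 and one decided
// at 0.1 contribute equally, since both are equally far from uncertainty.
const adjusted = (c) => Math.abs(c - 0.5) * 2;
```

This symmetry is what lets the algorithm treat "confidently true" and "confidently false" claims as equally strong evidence about an Entity's track record.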