Programming note: I am currently finishing up my Master's, graduating in December, and have less time available to write here than I would like. I will get back to my regular posting schedule by the end of this year.1
Who can I trust to give me accurate information?
This question is essential for anyone trying to understand the world in 2022. While I don’t have a definitive answer, here are a few strategies that I find useful.
1. Minimal-trust Investigations
Holden Karnofsky’s idea of minimal-trust investigations has been such a valuable tool for me in navigating the world. On his excellent blog, Cold Takes, he writes:
Most of what I believe is mostly based on trusting other people.
For example:
I brush my teeth twice a day, even though I've never read a study on the effects of brushing one's teeth, never tried to see what happens when I don't brush my teeth, and have no idea what's in toothpaste. It seems like most reasonable-seeming people think it's worth brushing your teeth, and that's about the only reason I do it.
I believe climate change is real and important, and that official forecasts of it are probably reasonably close to the best one can do. I have read a bunch of arguments and counterarguments about this, but ultimately I couldn't tell you much about how the climatologists' models actually work, or specifically what is wrong with the various skeptical points people raise. Most of my belief in climate change comes from noticing who is on each side of the argument and how they argue, not what they say. So it comes mostly from deciding whom to trust.
I think it's completely reasonable to form the vast majority of one's beliefs based on trust like this. I don't really think there's any alternative.
But I also think it's a good idea to occasionally do a minimal-trust investigation: to suspend my trust in others and dig as deeply into a question as I can. This is not the same as taking a class, or even reading and thinking about both sides of a debate; it is always enormously more work than that. I think the vast majority of people (even within communities that have rationality and critical inquiry as central parts of their identity) have never done one.
The idea is, every once in a while, get all the way down to bedrock truth by yourself. Before reading his post, I had a practice that was somewhere between a rigorous minimal-trust investigation and a wandering rabbit hole, often driven more by curiosity and attention span than by a systematic attempt to find the truth. Even this wandering approach was incredibly useful to me, in that I often accidentally found out whose sources checked out and whose did not, or who consistently and confidently mispredicted past events. However, Holden gave me a language, a framework, and a clear motivation for that process, and I have found it invaluable. He writes:
Minimal-trust investigation is probably the single activity that's been most formative for the way I think. I think its value is twofold:
It helps me develop intuitions for what/whom/when/why to trust, in order to approximate the views I would hold if I could understand things myself.
It is a demonstration and reminder of just how much work minimal-trust investigations take, and just how much I have to rely on trust to get by in the world. Without this kind of reminder, it's easy to casually feel as though I "understand" things based on a few memes or talking points. But the occasional minimal-trust investigation reminds me that memes and talking points are never enough to understand an issue, so my views are necessarily either based on a huge amount of work, or on trusting someone.
Let’s tackle those two benefits one by one. First, more directly related to the topic of this post:
It helps me develop intuitions for what/whom/when/why to trust, in order to approximate the views I would hold if I could understand things myself.
Given unlimited resources, time, and curiosity, with no cognitive biases, we would each individually do a minimal-trust investigation on everything relevant to the decisions we need to make. In the real world, of course, none of those assumptions hold true, and we need to rely on the wisdom and knowledge gained by our fellow humans countless times every day. A few minimal-trust investigations can help narrow down the voices that are calling for your trust.
An extreme example: a minimal-trust investigation into the supposed link between vaccines and autism would lead you to the fraudulent paper Andrew Wakefield published in The Lancet. That would let you disregard Wakefield’s future work, and at least treat with serious skepticism anyone citing Wakefield or claiming a causal link between the MMR vaccine and autism. However, lots of claims are not as easily discredited as this one, and certainly not immediately: Brian Deer’s investigation was not published until 2011, over a decade after Wakefield’s now-retracted and discredited paper originally appeared.
Often, I find that some important piece of a mental model is wrong or based on shaky assumptions, but has been repeated so much that everyone believes it. A classic example during the COVID-19 pandemic was the idea that SARS-CoV-2 spread through droplets rather than aerosols; this turned out to be wrong, and 60 years of science was based on the same errors. In his earlier work at GiveWell, which he co-founded, Karnofsky tried to find the data backing up charities’ claims about how effective their programs were:
Some made claims like "LLINs are extremely proven - it's not just the experimental studies, it's that we see drops in malaria in every context where they're handed out." We looked for data and studies on that point, put a lot of work into understanding them, and came away unconvinced. Among other things, there was at least one case in which people were using malaria "data" that was actually estimates of malaria cases - based on the assumption that malaria would be lower where more LLINs had been distributed. (This means that they were assuming LLINs reduce malaria, then using that assumption to generate numbers, then using those numbers as evidence that LLINs reduce malaria. GiveWell: "So using this model to show that malaria control had an impact may be circular.")
Other times, you may not find any obvious fraud or error, but simply exaggeration, cherry-picking, or other reasons to doubt the reasoning. Be careful, though, not to fall for the fallacist's fallacy: finding a flaw in an argument does not necessarily mean that the argument's conclusion is false. People make bad arguments for true things all the time, and a poorly argued paper full of misclassification bias showing that smoking causes lung cancer is not evidence against the fact that smoking does in fact cause lung cancer. However, such a paper should make you very skeptical of claims from the researchers who published it, and cautious about reporters or newspapers who uncritically repeated its findings.
I will talk later in this post about how to apply Bayesian updating to this problem, but here is the short version: remember which voices or institutions your minimal-trust investigations show to be the most honest and rigorous, and trust them on similar issues in the future. Remember which actors are fraudulent, hyperbolic, using motivated reasoning, or otherwise untrustworthy, and act accordingly. For actors in the middle of those extremes, who did not intentionally peddle misinformation but who also did not sufficiently check their sources, take their future claims with a healthy dose of salt, check their sources yourself, and if you notice a pattern, stop listening to them.
Holden’s second major benefit of minimal-trust investigations:
It is a demonstration and reminder of just how much work minimal-trust investigations take, and just how much I have to rely on trust to get by in the world. Without this kind of reminder, it's easy to casually feel as though I "understand" things based on a few memes or talking points. But the occasional minimal-trust investigation reminds me that memes and talking points are never enough to understand an issue, so my views are necessarily either based on a huge amount of work, or on trusting someone.
Getting to ground truth is hard. It takes a long time, and sometimes I come away with something as unsatisfying as “people smarter than me who have studied this their entire lives disagree completely, and I can’t make heads or tails of who is right”. This practice helps me to remember what I do not truly know, and helps me act with the appropriate amount of intellectual humility.
2. Bayesian Updating
How much should I change my belief based on a new fact? At the extremes, this is an easy question: if I am certain the fact is true, my belief should simply incorporate it; if I am certain it is false, I should not change my belief at all. Bayesian updating helps with the middle ground.
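For reference, the rule itself fits on one line (notation mine, not taken from the explainers linked below). In its odds form, which is the version easiest to run in your head, the odds you put on a hypothesis $H$ after seeing evidence $E$ are your prior odds multiplied by the likelihood ratio:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}$$

Evidence that is much more likely if $H$ is true than if it is false should move you a lot; evidence that is nearly as likely either way should barely move you at all.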
There are lots of really great explanations of how to use Bayesian updating that explain the concept better than I could, so I will simply link to a few of my favorites here and then explain how this helps find good information and information sources.
This 3Blue1Brown video is probably the best explainer I've found:
And this talk is a great way to take it from math formulas to something you can easily do in your head on the fly:
So, how does this apply to finding good information and sources? I use it to apply what I learn about a given source during a minimal-trust investigation. If, during my investigation, I find that a scientist has made a common error, for example using an inappropriate statistical test, I will adjust how much I expect evidence from that scientist to be wrong, based on how easy the error is to make. Similarly, if a journalist makes an error that the simplest follow-up or fact-checking would have caught, I will weight evidence from that journalist much lower in the future.
Sometimes, someone I have learned to trust a lot in one specific area makes a claim outside their area of expertise. The easy road here is to believe them if it confirms your worldview, and tell them to "stay in their lane" if it doesn't. I've found Bayesian updating to be a better approach. The simple question "how likely would it be that this person made this claim if it were in fact true?" can help you update your beliefs appropriately: not flipping back and forth with every new factual claim, but adjusting how much you believe something depending on how strong the evidence is.
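To make that concrete, here is a minimal sketch in Python, with toy numbers of my own (not anyone's real calibration), showing how the same claim should move my belief by very different amounts depending on the source's track record:

```python
def update(prior: float, p_given_true: float, p_given_false: float) -> float:
    """Posterior probability that a claim is true, given that this source asserted it.

    prior          -- my credence in the claim before hearing from the source
    p_given_true   -- how likely this source is to assert the claim if it is true
    p_given_false  -- how likely this source is to assert the claim if it is false
    """
    numerator = p_given_true * prior
    return numerator / (numerator + p_given_false * (1 - prior))


# A claim I initially find unlikely.
prior = 0.2

# A source my minimal-trust investigations showed to be careful:
# likely to assert the claim if it is true, unlikely to assert it if it is false.
print(update(prior, p_given_true=0.9, p_given_false=0.1))  # ~0.69

# A source that has repeatedly failed basic fact-checking:
# nearly as likely to assert the claim whether or not it is true.
print(update(prior, p_given_true=0.6, p_given_false=0.5))  # ~0.23
```

The careful source moves me from 20% to roughly 69%; the sloppy one barely moves me at all, because its assertions carry almost no information about what is actually true.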
3. Reverse Gell-Mann Amnesia
Michael Crichton coined the term Gell-Mann Amnesia, describing it like this:
Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
That is the Gell-Mann Amnesia effect. I’d point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all. But when it comes to the media, we believe against evidence that it is probably worth our time to read other parts of the paper. When, in fact, it almost certainly isn’t. The only possible explanation for our behavior is amnesia.
If Gell-Mann Amnesia is forgetting how fallible the news is even when you have just been reminded of its fallibility, Reverse Gell-Mann Amnesia is intentionally remembering what you learn about the credibility of information sources. If a given newspaper covers an event you personally witnessed, or a subject in which you have expertise, and gets obvious things wrong, it has saved you the time of doing a minimal-trust investigation! Go ahead and update the weight you give to evidence from that newspaper in the future.
This comes with some obvious pitfalls to avoid. First, if you discount evidence from any source that says things you disagree with, you will never learn new things or have your worldview challenged; you will simply build an ever more heavily insulated echo chamber for yourself. Don't use this tool on things you merely think are true, but on things you are highly certain are true. Hopefully the practice of minimal-trust investigations can help you distinguish between what you really know, all the way down to the foundations, and what you just think you know.
Second, it can be easy to overgeneralize. I think the quote above describing Gell-Mann Amnesia is actually a good example:
But when it comes to the media, we believe against evidence that it is probably worth our time to read other parts of the paper. When, in fact, it almost certainly isn’t. The only possible explanation for our behavior is amnesia.
Crichton is dismissing the entire profession of journalism as worthless because the articles he reads on subjects where he has direct knowledge or expertise are often wrong. Fortunately, there are other options. You can discard the information sources that are consistently wrong in favor of ones that prove correct. You can find where a specific source has a comparative advantage, and trust them in that domain while being more skeptical outside it. You can track which sections of the newspaper are consistently proven correct, and which are not. You can read multiple sources with differing perspectives, and focus on the areas where there is consensus.
Like almost every other unconscious bias, knowing what Gell-Mann Amnesia is is not sufficient to avoid falling for it. The practice of Reverse Gell-Mann Amnesia should be intentional. One antidote for amnesia is working hard to remember better; an easier solution is often simply to write it down. Personally, I keep a bookmark folder of examples where an information source has proven untrustworthy. This moves the burden from my fallible brain onto something that won't forget.
A recent example is the Boston University gain-of-function controversy. Because I have some expertise and experience in this subject matter, I paid close attention to how people were reporting the facts. I also read the original sources myself, including the preprint paper describing the experiment in question. Some news outlets led with the most salacious headline possible, with articles full of factual errors or omissions.
[Embedded image: NY Post headline on the Boston University study]
While I don’t think many of us need reminding, this shows that we should not take a NY Post headline at face value.
Other, more careful journalists did their due diligence, put the numbers in perspective, and looked into the claims made by motivated actors. Kelsey Piper has consistently written articles that look good in hindsight, and on the subjects I have knowledge about, she gets it right. She has proven herself to be a very good journalist whom I trust across multiple domains, and she proved it again this time.
Domain experts like Marc Lipsitch also weighed in, and showed their domain expertise as well as their reasoning:
[Embedded tweet thread from Marc Lipsitch]
Marc did a great job of science communication in that thread, noting both the overreaction and the scientific value instead of doubling down on a righteous tirade. I won’t yet trust Marc near-unquestioningly on any topic the way I do Kelsey, but I can be pretty confident in what he shares about biosecurity.
4. Thinking in Index Funds
Sam Atis writes about his approach to finding truth in a sea of conflicting information. Read his post; it’s short and well written, and I won’t reword it here. The basic idea is to take advantage of the vast amount of human capital that goes into getting things right by trusting the people who have done the research. As he puts it:
…the basic idea that ‘other people are less lazy than me and have probably come up with some aggregation of their views that I can steal’ is something that applies to pretty much everything I want to learn about. Here are some examples of quasi-index funds:
Thinking about which charities to donate to is long and difficult, and I’m probably not smart enough to do it properly anyway. But GiveWell does a load of research, so I should just donate to whatever charities they recommend as the most effective.
Thinking about who is going to win the next UK election is hard, so I’m just gonna defer to the betting odds and Britain Predicts rather than come up with some long explanation as to who is going to win and why.
I view this as a direct antidote to the near-nihilism of Crichton above, a way to leverage the wisdom of the crowd by listening to the right crowd, and listening only when the incentive structures mean truth is likely to win out. It is also a counterbalance to the immense amount of time and effort needed for a good minimal-trust investigation. It certainly cannot replace drilling all the way down to bedrock truth, but it is a great time saver for issues that are low stakes.
Conclusion
These tools used in concert can be powerful in separating the wheat from the chaff in our world of information overload. Every once in a while, do a minimal-trust investigation and find out for yourself what is the ground truth. While you do it, remember who comes out looking good, and who doesn't. Believe those people accordingly in the future. Leverage the wisdom of the crowd. Rinse, and repeat.
Apparently, the physicist Richard Feynman did a minimal-trust exercise on brushing teeth and concluded it was bunk.
Relevant Gell-Mann video: https://youtu.be/rnMsgxIIQEE?t=166