Thursday, April 25, 2024

Review of Superforecasting: The Art and Science of Prediction by Philip E. Tetlock and Dan Gardner

This book review was written by Eugene Kernes   

Book can be found in: 
Book Club Event = Book List (06/29/2024)



Excerpts

“Every day, the news media deliver forecasts without reporting, or even asking, how good the forecasters who made the forecasts really are.  Every day, corporations and governments pay for forecasts that may be prescient or worthless or something in between.  And every day, all of us – leaders of nations, corporate executives, investors, and voters – make critical decisions on the basis of forecasts whose quality is unknown.” – Philip E. Tetlock, and Dan Gardner, Chapter 1: An Optimistic Skeptic, Page 10

“Unpack the question into components.  Distinguish as sharply as you can between the known and unknown and leave no assumptions unscrutinized.  Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena.  Then adopt the inside view that plays up the uniqueness of the problem.  Also explore the similarities and differences between your views and those of others – and pay special attention to prediction markets and other methods of extracting wisdom from crowds.  Synthesize all these different views into a single vision as acute as that of a dragonfly.  Finally, express your judgement as precisely as you can, using a finely grained scale of probability.” – Philip E. Tetlock, and Dan Gardner, Chapter 7: Supernewsjunkies?, Page 141

“On the one hand, we warned, groupthink is a danger.  Be cooperative but not deferential.  Consensus is not always good; disagreement not always bad.  If you do happen to agree, don’t take that agreement – in itself – as proof that you are right.  Never stop doubting.  Pointed questions are as essential to a team as vitamins are to a human body.  |  On the other hand, the opposite of groupthink - rancor and dysfunction – is also a danger.  Team members must disagree without being disagreeable, we advised.” – Philip E. Tetlock, and Dan Gardner, Chapter 9: Superteams, Page 184


Review

Is This An Overview?

Forecasting is a skill that everyone uses every day to predict the effects of potential changes.  Like any skill, forecasting can be improved.  Experts are often sought out to interpret events, guide decisions, and forecast what will come about.  Although the forecasts provided appear valuable, their quality is rarely measured.  The public tends to favor those who make the future appear more certain, even though that overconfidence is a source of lower-quality forecasts.  On average, experts can provide a better narrative of events, but their forecasts are about as accurate as random guesses. 

Part of the reason forecasts perform poorly is that reality is complex and dynamic, which makes prediction difficult.  Society might have more knowledge and computational power, but less confidence in predictability.  There may be limits on predictability, but people can still become better at making forecasts.  To find out how people can make better forecasts, and which methods to avoid, a large and diverse group of people participated in a forecasting research project. 

What made some people better at making forecasts, what made them superforecasters, was how they thought about and used information, not intelligence, ideology, or numeracy skills.  These forecasters doubted their own claims and sought to improve them.  Complex problems that seemed impossible to forecast were reconsidered through a variety of questions looking for ways the event could occur, or not occur.  They looked first for the base rate, the general probability of an event of that kind happening, before turning to the unique case, anchoring their views in the outside view rather than the inside view.  They improved their forecasts by seeking out what others thought about the event and looking for alternative forecasts.  They adapted to new information, updating their forecasts while trying neither to underreact nor to overreact to it. 
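The anchor-on-the-base-rate-then-update habit can be sketched with Bayes' rule.  The numbers below are hypothetical, chosen only for illustration, and are not an example from the book:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of an event after observing evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Outside view first: a hypothetical base rate for events of this kind.
base_rate = 0.15  # e.g., 15% of comparable past cases ended this way

# New information arrives; update rather than underreact or overreact.
posterior = bayes_update(base_rate,
                         p_evidence_if_true=0.7,   # hypothetical likelihoods
                         p_evidence_if_false=0.2)
print(round(posterior, 3))
```

The update moves the forecast away from the base rate only as far as the strength of the evidence warrants, which is the balance the superforecasters aim for.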

These ways of thinking, these guidelines, can improve decision making, but it is better to deviate from a guideline than to follow it into a terrible forecast.  People can become better at forecasting, but teams get better results than an individual superforecaster, as each member can help the others refine ideas, and no individual can do everything.  Teams take effort to make productive, however, and can develop processes that exacerbate bad decisions.

 

How To Get Better At Forecasting?

To become better at forecasting, people need to practice.  There is a lot of tacit knowledge that cannot be learned from how others describe forecasting.  Feedback is needed to train any skill, including forecasting.  But the feedback on forecasts usually lacks quality: it is neither immediate nor clear.  Without appropriate feedback, people can become overconfident in their forecasts, and can gain an illusion of control from seemingly favorable random outcomes.  Judging forecasters depends on scoring many forecasts, as is done with weather.  But some forecasts cannot be rerun, such as those about history.  Claims need to be verified by running experiments. 
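One way many forecasts can be scored against what actually happened is the Brier score, the measure used in the research the book describes: the mean squared difference between the forecast probabilities and the 0/1 outcomes, where 0 is a perfect score and lower is better.  A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: stated probabilities, then what actually happened.
forecasts = [0.8, 0.3, 0.9, 0.4]
outcomes = [1, 0, 1, 0]   # 1 = event occurred, 0 = it did not
print(brier_score(forecasts, outcomes))
```

Because the score needs many resolved questions to be meaningful, it gives exactly the kind of clear, repeated feedback the paragraph above says everyday forecasting lacks.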

The language people use for possibilities needs to be specific rather than ambiguous.  People can mean drastically different probabilities by the same words, which can create dangerous misunderstandings.  Teams can use a chart that numerically defines possibility claims to reduce confusion.  The numbers are an opinion, but they reduce confusion.  Forecasts also need timelines.  Without timelines, forecasts remain perpetually in dispute over what they meant. 
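Such a chart can be sketched as a simple lookup that translates vague words into numeric ranges.  The phrases and ranges below are illustrative, loosely in the spirit of Sherman Kent's proposal discussed in the book, not the exact chart:

```python
# Illustrative mapping of estimative words to probability ranges
# (hypothetical numbers, for demonstration only).
ESTIMATIVE_WORDS = {
    "almost certain":       (0.87, 0.99),
    "probable":             (0.63, 0.87),
    "chances about even":   (0.40, 0.60),
    "probably not":         (0.13, 0.37),
    "almost certainly not": (0.01, 0.13),
}

def to_range(phrase):
    """Translate a vague possibility claim into an explicit numeric range."""
    low, high = ESTIMATIVE_WORDS[phrase]
    return f"{phrase!r} means roughly {low:.0%} to {high:.0%}"

print(to_range("probable"))
```

Once the team agrees on a table like this, "probable" can no longer mean 60% to one analyst and 90% to another.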

 

Caveats?

Forecasting will always involve uncertainty.  As referenced in the book, no matter how good the decision making, there will be uncertainty and wrong decisions.  The process of decision making matters more than the outcome: a better decision-making process creates more opportunities for good decisions than a randomly favorable outcome under a worse one. 


Questions to Consider while Reading the Book

•What is the raison d’etre of the book?  For what purpose did the author write the book?  Why do people read this book?
•What are some limitations of the book?
•To whom would you suggest this book?
•What is forecasting?
•Who are superforecasters?
•How to know which forecasts are quality forecasts?
•How does uncertainty affect forecasts?
•What is Laplace’s demon? 
•How does ideology affect forecasts?
•Is intelligence needed for quality forecasts?
•Is math needed?  How to use math in forecasts?
•What is the base rate?
•How do teams compare to individual forecasters?
•How to Fermi-tize a question?
•How to be part of a team?
•What is groupthink? 
•How to update to new information? 
•What is Bayes' theorem? 
•What do consumers/public want of forecasts? 
•How did doubt, and the absence of doubt, affect medicine?
•What is the illusion of control?
•How should a leader make decisions? 
•What is regression to the mean?
•Who are the foxes and hedgehogs? 
•How are confidence and competence correlated? 
•What is the dilution effect? 
•What is the difference between a fixed mindset and a growth mindset? 
•What is perpetual beta?
•What was the quality of the forecast that was sent to the central bank in November 2010?
•What happened to the CIA possibility chart?  
•How can an economic crisis be predicted? 
•What happened at the Bay of Pigs invasion? 
•What decision did Seydlitz make?  What was the outcome?
•What is Auftragstaktik? 
•How have military decisions changed? 

Book Details
Publisher:               Crown Publishers [Penguin Random House]
Edition ISBN:         9780804136709
Pages to read:          253
Publication:             2015
1st Edition:              2015
Format:                    eBook 

Ratings out of 5:
Readability    5
Content          5
Overall          5