How can we design a room-temperature superconductor? How can a government stimulate economic growth? How can an NGO help reduce domestic violence? Why do communities become segregated on racial lines? How can I improve my mental health?
These important questions may seem unrelated. However, I propose that there is often a common issue with the strategies that people (whether individuals, professions, NGOs, funding agencies, governments, ...) propose for finding answers, and with the definite answers that are sometimes offered.
Many strategies and answers involve a heavy dose of implicit beliefs. These are assumptions that are never stated. They may be elements of a worldview, which according to one definition, is
a commitment, a fundamental orientation of the heart, that can be expressed as a story or in a set of presuppositions (assumptions which may be true, partially true, or entirely false) which we hold (consciously or subconsciously, consistently or inconsistently) about the basic construction of reality, and that provides the foundation on which we live and move and have our being.
James W. Sire, The Universe Next Door: A Basic Worldview Catalog
These implicit beliefs may relate to values and morality. But I want to focus more on implicit beliefs that are related to academic disciplines such as philosophy of science, psychology, political science, theology, economics, anthropology, sociology, ... Most of us have never studied these disciplines and some of us may be skeptical about some of them. But, my point is that everyone has implicit ideas about what is true with regard to the objects these disciplines study. Everyone has a philosophy of science. Everyone has ideas about how minds work and how to change societies. It is just that these beliefs are rarely stated.
Why does this matter? If implicit beliefs are never stated, they can never be tested, evaluated, critiqued, refined, or rejected. I believe that implicit beliefs are too often based on intuition, prejudice, common sense, or culture (social pressure to conform to accepted wisdom). This is not necessarily bad. Sometimes intuition, common sense, and culture are helpful and correct. We could not get through life without them; we simply don't have the time, energy, and resources to constantly question and validate everything. On the other hand, if there is a vacuum, it will get filled with something. A major lesson from the history of science is that intuition, prejudice, and common sense are sometimes wrong.
I now give three concrete examples of implicit beliefs. They cover computational materials science, public policy, and social activism.
Understanding materials using computers
Among other things, there are two tasks we would like more computational power to perform. One is reliable ab initio calculation of the properties of complex molecules and solids, from proteins to crystals with unit cells containing large numbers of atoms. The other is exact diagonalisation (or some alternative reliable method) of many-body Hamiltonians, such as the Hubbard model, on lattices large enough that finite-size effects are minimal or can be reliably accounted for.
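To make the exact-diagonalisation side concrete, here is a minimal sketch in Python of building and diagonalising the two-site Hubbard model in its half-filled sector. (The parameter values t = 1 and U = 4 are illustrative choices, not taken from any particular study.) Even this toy case hints at the bottleneck: the Hilbert space grows exponentially with lattice size.

```python
import numpy as np

# Two-site Hubbard model. Spin-orbital ordering (bit index in the basis states):
# 0 = site 0 up, 1 = site 1 up, 2 = site 0 down, 3 = site 1 down.
t, U = 1.0, 4.0   # hopping and on-site repulsion (illustrative values)
N_ORB = 4
dim = 1 << N_ORB  # full Fock space: 2^4 = 16 states

def parity(state, orb):
    """Fermionic sign (-1)^(number of occupied orbitals with index < orb)."""
    return (-1) ** bin(state & ((1 << orb) - 1)).count("1")

def apply_hop(state, i, j):
    """Apply c†_i c_j to a basis state; return (new_state, sign) or None."""
    if not (state >> j) & 1:          # c_j annihilates the state
        return None
    s1 = parity(state, j)
    state ^= 1 << j
    if (state >> i) & 1:              # c†_i annihilates the state
        return None
    s2 = parity(state, i)
    state ^= 1 << i
    return state, s1 * s2

# Restrict to the half-filled sector: 2 electrons -> 6 basis states.
basis = [s for s in range(dim) if bin(s).count("1") == 2]
index = {s: k for k, s in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for s in basis:
    k = index[s]
    # Interaction: U for each doubly occupied site.
    for site in range(2):
        if (s >> site) & 1 and (s >> (site + 2)) & 1:
            H[k, k] += U
    # Hopping: -t (c†_i c_j + h.c.) between the two sites, for each spin.
    for (i, j) in [(0, 1), (1, 0), (2, 3), (3, 2)]:
        res = apply_hop(s, i, j)
        if res is not None:
            s2, sign = res
            H[index[s2], k] += -t * sign

E = np.linalg.eigvalsh(H)
# The half-filled ground-state energy is known analytically for two sites:
exact = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))
print(E[0], exact)  # these agree
```

For L sites the Fock space has 4^L states, so this brute-force approach fails quickly, which is why so much effort goes into alternative methods. But, as argued above, raw computational power alone does not settle which model, or which questions, are worth computing.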
Over the past decade, there has been a lot of hype about how quantum computers and/or machine learning techniques will solve these problems and thus initiate a new era of materials understanding, discovery, and design with significant technological and economic benefits. My problem is that these claims usually seem to have the implicit belief that the only obstacle to progress is one of computational power. This fallacy has recently been deconstructed and critiqued in detail in three beautiful essays by Roald Hoffmann and Jean-Paul Malrieu, Simulation vs. Understanding: A Tension, in Quantum Chemistry and Beyond.
Public policy
National economies around the world have been battered by the COVID-19 pandemic. In response, governments of prosperous countries are spending big on stimulus packages. This involves taking on massive amounts of debt and significant government intervention in "free" market economies. Will these initiatives achieve their goals, particularly in the long term? Could they actually make things worse? Responses from pundits, both for and against, are laden with implicit beliefs. Unfortunately, economists cannot agree on the answer to the basic question, "Does government stimulus spending actually produce economic growth?" This issue is nicely discussed in a pre-pandemic podcast at EconTalk.
NGOs and social activism
Many NGOs are about change. They aim to build a better world, addressing problems such as domestic violence, poverty, climate change, corruption, racism, ... They aim to promote education, human rights, good governance, democracy, health, transparency, ... I love NGOs. I support many: philosophically, financially, and practically. To survive, most NGOs have to raise funds, whether from many small donors or large philanthropies. This requires a well-honed pitch that aims to inspire potential donors to give. Furthermore, the whole operation of most NGOs is laden with implicit beliefs, whether those of the founders, staff, or donors.
Consider a hypothetical NGO whose goal is to reduce the number of murders in a country. I chose this example because it may at first appear less controversial and contentious than some. Almost everyone thinks murder is wrong (always) and that societies should try to reduce it. But why do murders occur? Revenge, passion, drugs, alcohol, money, politics, racism, ... Will making the purchase of guns more difficult reduce the murder rate? Gun lobbyists will claim, "Guns don't kill people. Criminals do! Law-abiding citizens need guns for self-defense." (cringe). There are many other alternative strategies: increasing penalties (longer jail terms or even the death penalty), increasing the number of police, weapons for police, community policing, drug rehabilitation, breaking up gangs, ... Wow! It's complicated. My main point is that the hypothetical NGO will probably have an implicit belief that one particular strategy is the best one. Furthermore, if you identify and question this belief, reasonable debate may not follow; it may even be claimed that you don't care about stopping murder.
Some NGOs and philanthropies have become mindful of these issues. In response to a grant application that I helped an NGO write, we were asked what our "theory of change" was. I discovered that there is a whole associated "industry". According to theoryofchange.org (!)
One organisation which began to focus on these issues was the US-based Aspen Institute and its Roundtable on Community Change. ... [leading] to the publication in 1995 of New Approaches to Evaluating Comprehensive Community Initiatives. In that book, Carol Weiss, ... hypothesized that a key reason complex programs are so difficult to evaluate is that the assumptions that inspire them are poorly articulated. She argued that stakeholders of complex community initiatives typically are unclear about how the change process will unfold and therefore give little attention to the early and mid-term changes that need to happen in order for a longer term goal to be reached.

This led to software designed to help organisations plan initiatives, with a particular emphasis on teasing out the assumptions embedded in plans. A related method is the construction of a logframe matrix [logical framework].
All models are wrong but some are useful. I first learnt this aphorism from Scott Page, in his wonderful Coursera course Model Thinking. Models help us think more clearly. Simple quantitative models in the social sciences, such as agent-based models, have the virtue that their assumptions can be clearly stated, and the consequences of those assumptions can then be investigated in a rigorous manner.
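As a concrete illustration, here is a minimal sketch of Schelling's segregation model, the classic agent-based model that Page discusses and that speaks to the opening question about why communities become segregated. (This is my own toy implementation, not tied to any published code; the grid size, vacancy rate, and 30% similarity threshold are illustrative choices.) The explicit assumption is striking: agents with only a mild preference for similar neighbours, who would happily live in a mixed neighbourhood, can still collectively produce strong segregation.

```python
import random

random.seed(0)
SIZE, THRESHOLD, SWEEPS = 20, 0.3, 50  # 20x20 torus; agents want >= 30% similar neighbours

# 180 agents of each type plus 40 empty cells, placed at random.
cells = ['A'] * 180 + ['B'] * 180 + [None] * 40
random.shuffle(cells)
grid = [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def neighbours(r, c):
    """The 8 surrounding cells, with wraparound at the edges."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                yield grid[(r + dr) % SIZE][(c + dc) % SIZE]

def unhappy(r, c):
    """An agent is unhappy if too few of its non-empty neighbours match its type."""
    me = grid[r][c]
    nbrs = [n for n in neighbours(r, c) if n is not None]
    return bool(nbrs) and sum(n == me for n in nbrs) / len(nbrs) < THRESHOLD

def similarity():
    """Average fraction of same-type pairs among occupied neighbouring cells."""
    same = tot = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is None:
                continue
            for n in neighbours(r, c):
                if n is not None:
                    tot += 1
                    same += (n == grid[r][c])
    return same / tot

before = similarity()
for _ in range(SWEEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(r, c)]
    if not movers:
        break
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(movers)
    for (r, c) in movers:  # each unhappy agent moves to a random empty cell
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))
after = similarity()
print(before, after)
```

Running this, the average same-type neighbour fraction rises well above its initial value of roughly one half. The point, in the spirit of this post, is that every assumption (the threshold, the movement rule, the grid) is on the table and can be varied, which is exactly what implicit beliefs do not allow.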
What do you think? Are there examples that you think involve implicit beliefs that need to be stated explicitly?