You are now the keeper of your very own Human Society™. Congratulations!
Your Human Society™ is a powerful and flexible network of sentient biological life forms. There are many ways to configure and optimize your Human Society™ to suit your aims. See your accompanying manuals for complete details.
This Quick Start Guide to Human Society™ will help you launch your Human Society™ using our Best Practice Model.
The Best Practice Model is a seven-step plan for organizing your Human Society™. This plan is proven to be highly effective, yet simple enough for beginners to use.
However, as it is human nature to skip reading any detailed instructions, we have also condensed the seven steps into one.
The One-Step Guide to Human Society™
Here is a simple, effective formula for human prosperity:
Give people freedom in an environment of trust.
Are you the keeper of a malfunctioning Human Society™? Don't panic!
This Troubleshooting Guide to Human Society™ is here to help you deal with your existing malfunctions.
Your Human Society™ is a network of hundreds, thousands or millions of human beings. As a biological entity, a human being will not behave as predictably and consistently as a technological device. A certain level of malfunctioning is normal, and to be expected.
However, you will at times want to track down an error and fix it. This Troubleshooting Guide will help you learn how to:
- track down the source of a malfunction in your Human Society™
- identify what kind of error it is
- take steps to correct the malfunction.
Of course, the best way to deal with malfunctions is to prevent them from happening in the first place.
Prevention is not the focus of this Troubleshooting Guide. To learn more about prevention, please refer to the Quick Start Guide to Human Society™, the Optimization Guide to Human Society™, the Human Society™ Strategy Guide, and the Human Society™ Complete Manual of Human Nature.
We're sorry. An error has occurred.
Diagnostic information follows:
The requested file, "Human Society™ Complete Manual of Human Nature", has been partially corrupted. The complete file is not accessible. The data at the specified index could not be read.
Please try again later, or contact the Manufacturer for assistance.
Welcome to the Human Society™ Strategy Guide. This guide will help you understand the many possible strategies for managing your Human Society™.
Should you run an empire? A theocracy? A dictatorship? A democracy? This guide is here to help you understand your choices.
Human beings are not simple machines, like a toaster, that exist for a single purpose. They are flexible creatures that can be dedicated towards many different aims.
In order to achieve such flexibility, human beings have a complex nature. This means that they aren't by default optimized towards any one particular purpose. This Human Society™ Strategy Guide will give you strategies for optimizing your Human Society™ for your own goals and values.
Human beings are sentient biological creatures. As such, they are more vulnerable to damage than mechanical or electronic creatures. They have evolved certain feelings and behaviors to avoid harm. Some of those behaviors can be counterproductive to the goals and values you have for your Human Society™.
This Human Society™ Strategy Guide will give you strategies to employ that can turn those feelings and behaviors from liabilities into assets.
The prosperity of your Human Society™ depends on how well you can predict human behavior. If you have a poor understanding of human nature, you will make inaccurate predictions, which will lead to ineffective strategies for managing your Human Society™.
Therefore, as the keeper of a Human Society™, you will need to have an understanding of human nature.
This Quick Start Guide to Human Society™ presents a seven-step model of human nature, called the Best Practice Model. This model yields a simple, but effective strategy for getting started with your Human Society™.
Of course, human nature is more complicated than the seven steps of a Quick Start Guide. For a comprehensive catalog of human nature, please refer to the Human Society™ Complete Manual of Human Nature. For alternative models and strategies, please refer to the Human Society™ Strategy Guide.
Each strategy is based on its own model of human nature.
What are human beings? How do they behave? How do they act when alone, and how do they act together in a society?
Each model of human nature answers such questions differently. Each different answer implies a different strategy for managing the humans in your Human Society™.
Human beings have a very complex nature. The Human Society™ Complete Manual of Human Nature is a very large book. A strategy, by necessity, must simplify this complexity into a smaller, workable model.
This Human Society™ Strategy Guide will explain the differences between the various strategies you may employ, the model of human nature that underpins each of them, the flaws of those models, and tradeoffs you make with each choice.
Models exist to make complex systems simple enough to understand and manage. But simplifying a system means you have to leave some parts of that system out. All models, therefore, are flawed in some way.
Human Nature is very, very complex. Modeling human nature accurately, therefore, is very, very difficult. Some models are better than others, but every model of human nature will be flawed in some way.
Still, you have to choose some model or other to operate your Human Society™ under. This means that at some point while operating your Human Society™, you will get human nature wrong. You will need to troubleshoot the malfunctions that are caused by the flaws in the model you choose.
When you choose a model of human nature to operate your Human Society™ under, you are also choosing the flaws of that model. The strategies you choose based on that model will likewise be flawed.
You may not be aware of what those flaws are. The flaws may not show up immediately. But over time, your Human Society™ will systematically err in the direction of the flaws in your model. Those errors will add up until they eventually become noticeable malfunctions.
Tip: When the malfunctions from your model begin to pile up too high, consider temporarily switching models in order to correct the systematic error of your favored model. The alternate model may not be better in the long run than your current model, but alternating can be a simple way to troubleshoot and correct any accumulated malfunctions in your Human Society™.
Note: alternating between models is itself a model, and it has its own flaws. Even if you alternate between two, five or seventeen different models, you will eventually run into model fatigue, where none of your favored models seem to work very well anymore.
A sure sign of model fatigue is when the people in your Human Society™ start to drift toward the default model of human nature. The default model is an ineffective, selfish, low-trust model with a stagnant, short-term, zero-sum mentality that humans fall into when they lose faith in other models.
Tip: If the default model starts becoming popular in your Human Society™, you probably have model fatigue. The best way to troubleshoot model fatigue is with a paradigm shift. At this point, you will need to develop a new model of human nature that addresses the flaws in all your old models. This new paradigm should imply a fresh strategy that can push your Human Society™ forward in a new direction.
You use your Human Society™ solely at your own risk. There are no guarantees of consistency or understandability regarding Human Beings and Human Nature. Your Human Society™ is introduced to you as is and as available and without warranty of any kind, express or implied.
The Human Beings included in your Human Society™ are biological entities vulnerable to damage from many sources, including, but not limited to, war, violence, pestilence, disease, heat, cold, famine and drought. There are no express or implied remedies offered for any damage to Human Beings or any other biological creatures that are included with your Human Society™, either as supplied, or as the result of any operational use of your Human Society™.
THE ENTIRE RISK ARISING OUT OF YOUR ACCESS TO AND USE OF YOUR HUMAN SOCIETY™, AND ANY SUBSEQUENT CONTACT OR COMMUNICATION YOU HAVE WITH HUMAN BEINGS OR OTHER SOCIETIES, REMAINS WITH YOU.
HUMAN SOCIETY™ SPECIFICALLY DISCLAIMS ANY AND ALL WARRANTIES PERTAINING TO HUMAN NATURE AND HUMAN BEHAVIOR, AND SUBSEQUENTLY, ANY FITNESS FOR A PARTICULAR PURPOSE. THERE ARE NO WARRANTIES IMPLIED BY ANY USAGE, TRADE, OR OPERATION OF YOUR HUMAN SOCIETY™. NO HELP OR ADVICE OR INFORMATION (ORAL OR WRITTEN) OBTAINED BY YOU FROM ANYONE REGARDING ANY HUMAN SOCIETY™ SHALL CREATE ANY WARRANTY.
Every neuron in the human brain takes a set of inputs and produces a set of outputs based on those inputs.
Suppose you began to replace those biological brain cells one by one, with some sort of technological hardware and software. Suppose that technological replacement took the exact same inputs and produced the exact same outputs as the biological ones. At what point would that human being cease to be a human being, and become a machine, instead?
This is the Cyborg Paradox.
The Cyborg Paradox is a newer version of an ancient philosophical puzzle called Theseus's Paradox. That story supposes that a ship once sailed by the Ancient Greek hero Theseus is kept in a museum. Over time, pieces of the ship rot and are replaced with identical pieces. When all the original pieces of the ship are gone, is it the same ship?
Theseus's Paradox forces you to answer the question: what makes an object an object? The Cyborg Paradox asks a narrower question: what makes a human being a human being?
Does a human being cease to be human as soon as any biological part is replaced by technology? Is a person who wears glasses not a human being? What about someone who has had a knee replaced, or a hip? Or is it the brain that defines a human being, such that as soon as you start replacing brain cells, you cease to be human? What are the most essential elements of human nature?
A technological neuron may have the same inputs and outputs as a biological neuron, but it would differ in one key way: it would most likely be vulnerable to damage in a different way from a biological neuron.
A technological neuron isn't vulnerable to the same things a biological neuron would be. Perhaps a technological neuron would last longer and be more difficult to break. Or perhaps it would break more often but be cheaper and easier to replace.
A technological brain would also have different energy requirements. A biological human brain requires so much energy that humans have to eat multiple times per day. A technological brain would have a different kind of vulnerability if its energy requirements aren't met.
In this scenario, none of the vulnerability differences would matter if the human beings weren't aware at some level that their neurons had changed. They would simply go on behaving the way they would behave with their biological neurons, because their inputs and outputs would be identical.
However, it's different once the humans become aware that their vulnerability has been altered. A change in perceived human vulnerability changes the perceived risk/reward ratios of their behavior. That, in turn, can affect their behavioral choices.
Human beings, like many biological creatures, react to vulnerability with, among other things, a fight or flight response. The human fight or flight response is simply a change, in one direction or another, in the willingness to risk harm to themselves. The fight response makes someone more willing to get hurt, and the flight response makes them less willing.
Making an individual human being knowingly more or less vulnerable changes their perceived odds of success or failure. This in turn can change how they respond to stimuli.
This is where the Cyborg Paradox kicks in. Technology added to human beings changes their odds of success or failure. This, in turn, changes their behavior in response to those odds. At what point is the change in behavior so large that it is no longer recognizably human behavior?
There is another paradox at work here, too. Changing the vulnerability of an individual in one direction can change the vulnerability of other individuals in the other direction.
For example, if a new technology makes individuals feel less vulnerable, those individuals may become more willing to fight. The increase in the willingness to fight, however, makes other individuals feel more vulnerable. The meek, who would likely lose such fights, are forced to adapt to this change in the environment by hiding or fleeing.
So a technology that makes some individuals feel less vulnerable may have the paradoxical effect of making your Human Society™ as a whole feel more vulnerable.
The Cyborg Paradox asks the question, "What makes a human being a human being?" It is a difficult question to answer precisely. But at some level, human beings are defined by the risks they are able and willing to take in their environment.
Therefore, utmost care should be taken when applying a new technology to your Human Society™. Most new technologies enable people to take risks that increase their productivity. But some may have the opposite effect.
A new weapon, or shield, or even a change in communication tools, by changing the odds of success or failure, by altering the perception of vulnerability, can push your Human Society™ out of a functioning equilibrium and into a mess of conflict and flight.
It is not always easy to predict which technologies will be damaging and which will be beneficial. This is one of the complex challenges of managing your Human Society™ as it grows and changes. To manage human vulnerability in your Human Society™ requires all your vigilance and wisdom.
Human beings do not function like computers or robots or androids. You do not communicate with them by simply issuing a series of commands through an interface. They will not immediately and flawlessly execute any commands you give them.
Human beings are independent biological creatures, with their own intentions and desires. In order to get human beings to behave in the way you want, you need to get to know those intentions and desires.
The best way to get to know human beings is through our Best Practice Troubleshooting Tip:
Listen to stories, deeply.
Stories are an integral part of human communication. If you want to interface with human beings, you need to be able to both listen to stories, and to tell them.
You can try to give commands to a human being, or to present logical information to a human being, but unless those commands and information are presented within a story, the commands and arguments are unlikely to be effective.
Understanding how human communication works is vital to troubleshooting your Human Society™.
Unlike cyborgs, robots, androids, or other technologies you may be familiar with, human beings are purely biological entities. As such, they do not output direct diagnostic information the way technological devices do. They lack measuring tools, gauges, or dashboards of any sort from which to directly monitor their status.
Fortunately, there are indirect ways to monitor your Human Society™ for malfunctions. For example, here is our Best Practice Troubleshooting Tip:
Listen to stories, deeply.
Stories are the primary diagnostic output of human beings.
However, only some of the information in a story is relevant to the health of your Human Society™. Human stories operate on multiple levels, and some of these levels are more likely to provide diagnostic information than others.
At one particular level of human stories, there is a message about human nature. What are humans like? What kinds of problems do they encounter? How do they need to behave to overcome these obstacles?
It is at this human nature level that your diagnostic information will usually be found.
This is why we say to listen deeply. You must look beyond the surface level of a story to find the information in the story that is relevant to troubleshooting.
Stories are essential to human communication. The reason for this is the 1-2-1 architecture of the human brain. You can read more about the 1-2-1 architecture of the human brain in the Human Society™ Complete Manual of Human Nature.
Every human story operates on three levels:
Mechanical Level
This level contains the nuts and bolts of storytelling, the mechanisms of drama. Each story has a plot and a point of view. It has protagonists and antagonists and supporting characters. It has a conflict and a resolution. All of these elements are placed into a structure, arranged to hold attention from beginning to end.
Universe Level
On a higher level, a story inhabits a universe with a particular set of rules. In religious stories, there are heavens and underworlds, and guidelines for reaching each. Fairy tales are set in an enchanted past, science fiction is set in a technological future. The most compelling of these universes can be used over and over again to tell multiple stories.
Human Nature Level
An effective human story has a message about human nature. It tells humans about themselves, about who they are, about how they behave in certain circumstances, about what kinds of problems they encounter, and how they should act in order to avoid or overcome these problems.
In order to communicate effectively with the human beings in your Human Society™, you need to optimize your storytelling. Your stories need to work well on all three levels:
Your stories must be mechanically efficient, so that they hold the audience's attention from beginning to end.
Your stories should be set in a compelling universe, and remain consistent with the rules and history of that universe.
And finally, your stories must reveal something about human nature that rings true. If your stories are wrong about human nature, about how humans should and shouldn't behave to overcome the obstructions they face, your stories will not connect with your human audience with optimal effectiveness.
Human communication differs from robot communication. Human beings do not respond to data and commands with consistent output.
Human communication is a complex challenge for the keeper of a Human Society™.
The recommended model for beginners is to think of the human brain as having a 1-2-1 architecture.
Human behavior is the outcome of the interaction between two separate brain systems, called System 1 and System 2. These systems were so named by the human behavioral economist Daniel Kahneman.
System 1 is designed for speed.
System 1 functions subconsciously, emotionally, and automatically.
System 1 quickly processes vast amounts of data in parallel, so that it can make instant decisions.
To enable speedy choices, System 1 builds large, complex models of the world, and stores these models subconsciously.
System 2 is designed for accuracy.
System 2 is slower than System 1, but makes fewer mistakes.
System 2 is conscious, rational, and deliberate.
System 2 processes data serially, step-by-step, to avoid errors.
Human nature gives preference to speed over accuracy.
System 1 is in charge, and has the first and last say on any decision. Any decision begins (always) with a System 1 model, proceeds (optionally) to a System 2 analysis, and then ends (always) with a System 1 decision.
1-2-1 architecture. 1-2-1. 1-2-1 architecture. 1-2-1 architecture. 1-2-1. 1-2-1. 1-2-1 architecture. 1-2-1. 1-2-1 architecture. 1-2-1. 1-2-1 architecture. 1-2-1 architecture. 1-2-1. 1-2-1. 1-2-1 architecture. 1-2-1.
1-2-1 architecture. 1-2-1. 1-2-1 architecture. 1-2-1 architecture. 1-2-1. 1-2-1. 1-2-1 architecture. 1-2-1.
The logic and rationality available to System 2 only has a minor role in human decision making. To communicate effectively with your Human Society™, you must master the methods of System 1 information transfer.
System 1 is the more ancient system, having evolved in animals hundreds of millions of years before human beings appeared on the earth. System 2 is a more recent development in human evolution. Because it is a more recent development, it does not have as much control of the human mind as humans themselves like to believe. They think of themselves as rational only because System 2, the rational part of the brain, is also the conscious part of the brain.
critical system alert: possible virus detected
file output error (177CC)
Managing a Human Society™ is complex and fascinating. Much of that complexity arises from a puzzling bit of human nature: human beings have two different brain systems, each with a different purpose.
One system, System 1, is optimized for speed. The other, System 2, is designed for accuracy.
To communicate with human beings, it is vital to understand how these two different systems work.
System 1 is optimized for making quick decisions. To do so, it operates subconsciously and automatically, processing vast amounts of information in parallel, in order to make near-instant choices.
The primary task of System 1 is to manage the motor skills of the body. Each motor skill is a pattern of movement, so System 1 is also the part of the brain where patterns are recognized and stored.
As a pattern recognition system, System 1 learns primarily by repetition. As a pattern is experienced more and more, System 1 builds a model of that pattern so that it can recognize and react to it even faster the next time. As patterns and reactions become reinforced with repetition, humans acquire habitual behaviors.
In urgent cases, the repetition requirement for learning can be overridden. This is the role of emotions. Emotions are hard-coded, quick reactions to certain patterns. When a pattern is accompanied by a strong emotion, that pattern is imprinted into System 1 memory much more quickly than otherwise.
By emphasizing speed over accuracy, and by allowing emotions to inject themselves in the process of learning, System 1 is prone to mistakes. Enter System 2.
In contrast to its sibling, System 2 is conscious, slow, and deliberate. Instead of patterns, System 2 handles facts and events. System 2 allows humans to learn and figure things out rationally and methodically, serially, step by step. It's a mechanism that allows humans to avoid the kind of mistakes that System 1 typically makes.
You may think that the optimal setup for human nature would be to have the human use System 1 whenever speed is the highest priority, and use System 2 whenever accuracy is the highest priority. Unfortunately, that is not the way human nature works.
The issue is that System 1 is highly energy efficient, but System 2 is not. If you ask humans to do too much System 2 work, like following a bunch of detailed instructions for too long, their brains will get tired.
Therefore, System 1 is the default. Moreover, System 2 never even gets to operate on its own, really. Any System 2 process is actually a 1-2-1 process, where you start with a System 1 model, then you do some System 2 reasoning based on that model, and then you conclude with a decision, which is made by System 1.
In essence, System 1 has a veto on every decision. You can lead humans through a rational process leading to a rational conclusion, and they will freely admit, "yes, that's a rational process with a rational conclusion," but if their System 1 produces a different conclusion, they will find some excuse to forego the rational choice.
The 1-2-1 architecture makes understanding human decisions difficult. System 1 processes are subconscious and automatic, so they are a bit of a black box, even to the human being who is experiencing them. This lack of transparency makes it impossible for anyone to peek directly in and see exactly what model a human is using as a premise to reason from. No one, not even the humans themselves, can really see exactly why human beings make the final decisions they make. You can only sort of guess what's going on in System 1 by watching what goes in and what comes out.
Human beings have a separate communications system for each of these two systems. For transferring information into System 1, human beings use art. For transferring information into System 2, human beings use language.
So what if you want to present an idea to a human being, and you want that information to stick in both the System 1 and System 2 parts of that human being's brain?
Because of the 1-2-1 architecture of the human mind, you cannot simply present a rational argument. If that System 2 rational argument does not agree with the human's internal System 1 model at the beginning of the process, the argument will get rejected by the System 1 decision at the end of the process.
Therefore, the first step in a comprehensive human communication plan is to change the internal System 1 model that humans are reasoning from. To accomplish this, you need to present the idea artistically, using the kinds of inputs that System 1 recognizes: using repetition, patterns, emotions, and repetition. And patterns. Isn't this interesting? And exciting?
Once you see that the idea is getting established, you need to repeat this process, and repeat it, until the new model gets firmly embedded into the human brain you are targeting. Then, and only then, can you begin to use reasoning to change their minds enough to influence their decisions.
Getting human beings to make rational decisions is therefore a bit of a paradox. Because every decision is an emotional decision out of System 1, rational decisions require making an emotional commitment to rational decisions. Getting a human being to make rational decisions requires the tools of System 1 learning: training and practice, with a dash of emotional storytelling.
So to repeat, effective human communication functions with a 1-2-1 process:
1. Use artistic communication with patterns, emotions and repetition to establish a new model in System 1
2. From that new model, present a rational argument in System 2
1. Get the desired decision from System 1
Once you master this form of communication, you can begin to optimize your Human Society™ towards your goals and values.
You probably have some goals and values you wish to implement in your Human Society™.
But if your society is not prosperous, your goals and values will inevitably fail. This is true whether you are operating your Human Society™ in Isolation Mode or in Networked Mode. Human nature is such that there is a natural tension between prosperity and any other values you may want to introduce into your Human Society™.
Human prosperity is imperative to the success of any venture with your Human Society™. This Optimization Guide to Human Society™ will help you keep your Human Society™ as prosperous as possible while remaining consistent with your goals and values.
In Isolation Mode, you keep the population of your Human Society™ disconnected from other societies. While this limits external competition, you can still face internal resistance to your goals and values. Human nature is creative. Even in a simple and limited Human Society™, you will always have to compete against the human imagination.
If people can imagine a more effective Human Society™ than the one they are in, they will begin to resist your goals and values and work towards a better alternative. Even in Isolation Mode, it is wise to give some effort to optimizing your Human Society™.
In Networked Mode, your own Human Society™ can interact with those of others. The flow of ideas from one Human Society™ to another makes optimization particularly urgent. If your Human Society™ looks unprosperous in competition with its neighbors, the goals and values of your stronger, wealthier neighbor will quickly replace yours, either by conquest or by imitation.
For example, if you have pacifist values and do not want a military, you may find your Human Society™ invaded by a neighbor who does have a strong military. Your pacifist values will become useless if you are conquered.
Another example: if your values conflict with scientific and/or technological progress, a neighboring society is likely to become more technologically advanced than yours. The people in your Human Society™ who see the wealth in the neighboring society will likely migrate away from your values and towards those of your wealthier neighbor. In addition, your technological deficit will leave your military weaker, and again leave you vulnerable to conquest.
Whatever your goals and values, it is a good idea to optimize the performance of your Human Society™. The more wealth and power that your Human Society™ produces, the longer it can hang on to your goals and values in the face of competition.
In this Optimization Guide to Human Society™, you will learn how to optimize the performance of your Human Society™ as a whole, and how to optimize the output of each individual human being within it.
There are two primary directions towards which you can optimize: towards the individual human being, and towards your Human Society™ as a whole. It is not necessarily the case that if you focus on optimizing one, you will also optimize the other.
In fact, a pure optimization at either end results in very similar problems: a kind of theft that disincentivizes effort and risk. If you try to optimize one without considering the other, you can end up counterproductively making both perform suboptimally.
If each individual attempts to optimize their own personal well-being without any regard for the well-being of the Human Society™ as a whole, each individual will try to obtain as much wealth as possible for as little effort as possible.
The least amount of effort, in the purest sense, is simply to take the wealth or labor of others without giving anything in return.
In isolation, a single theft such as this creates no net change in the wealth of the Human Society™ as a whole. It only shifts the wealth of the individuals in it.
Of course, this becomes a problem when everybody in your Human Society™ does the same thing. If everybody is optimizing their individual wealth by stealing everyone else's wealth, you end up with a disincentive to bother to create wealth in the first place. Why create wealth if it's only going to be taken anyway?
If each Human Society™ attempts to optimize its own collective wealth without any regard for the well-being of the individuals as a whole within it, then the Human Society™ will try to get the most amount of effort out of each individual regardless of the well-being of that individual.
In a pure sense, this form of optimization steals human motivation from the individual. Their individual desires are considered irrelevant, so what they want to do doesn't matter. So the individual only gets to pursue efforts that are approved by the Human Society™ as useful. If the rewards the individual wants don't match the rewards the Human Society™ wants, there is no reason for the individual to pursue those rewards.
Just as in the pure individualism scenario, the efforts of the individual in this collectivism scenario end up getting exploited, too. It's just a different exploiter.
In both scenarios, you end up with a disincentive to bother to take any risks. Why do anything above and beyond the minimum required effort, if the rewards of that risk don't belong to you?
As the examples above show, neither pure Individualism nor pure Collectivism produces optimal output. To achieve optimal performance in your Human Society™, you need to find a balance between Individualism and Collectivism.
However, both pure Individualism and pure Collectivism have the advantage of being models of human organization that are easy to explain. The simplicity of their stories is attractive, particularly during times that are unstable, confusing, and changing rapidly.
Optimizing your Human Society™ is therefore a communications problem. You need to provide a framework which directs your humans towards an optimal balance of Individualism and Collectivism. This framework needs to be packaged in a set of stories simple and attractive enough to compete with the simplicity of pure Individualism or pure Collectivism.
There are several examples of successful moral frameworks in human history which have promoted balance over the extremes:
"Do unto others as you would have them do unto you."
Also called "The Golden Rule", this maxim allows for Individualism, but only to the extent that it doesn't harm others.
"Love thy neighbor as thyself."
This famous commandment doesn't call for putting oneself ahead of others, nor does it call for placing others ahead of oneself. Instead, it promotes a balance between the two.
"Give people freedom in an environment of trust."
Freedom is an individual right. Trust is a function of the collective environment. Optimal output in a Human Society™ depends on people trusting each other enough to grant each other freedom.
Of course, every Human Society™ needs to tell its own stories in its own way, in order to resonate with the population in its own particular time and place. If you fail to find a way to tell the story of balance in your Human Society™, you may find it drifting towards pure Individualism or pure Collectivism, or worse, a big unnecessary conflict between the two.
Marshmallow Economics is not a true economic discipline like Macroeconomics or Microeconomics. It is a metaphor, a moral framework that uses stories from psychology and economics to guide you in optimizing your Human Society™.
The metaphor is based on the famous Marshmallow Experiment, where pre-school children were given a marshmallow, and a choice. They could eat their one marshmallow now, or, if they waited to eat the marshmallow, they would get a second marshmallow later.
The initial results of the experiment showed that the children who waited grew up to have better outcomes later in life. At first, researchers thought that these children somehow had better character traits, hypothesizing that their ability to delay gratification followed them throughout their lives and led to their success.
Follow-up experiments showed, however, that how much the children trusted the person offering the choice affected the decisions they made. If the presenter said something untrue before offering the marshmallow choice, the child would almost always decide not to wait.
The children's choices were more a function of how much trust they had in their environment than of their character. If you trust your environment, you assume that the offer is genuine, and that your choices are between 1 and 2 marshmallows. The logical choice in that case is to wait for 2 marshmallows.
And even if you considered the possibility that you might somehow lose that marshmallow while you waited, in a trustworthy environment, you expect that you will get more opportunities for marshmallows in the future. You are not so worried about what would happen if you lost the marshmallow, so you go ahead and take the risk.
However, if you can't trust your environment, you have to consider that the offer is not genuine. While you wait, your 1 marshmallow may be as likely to become 0 marshmallows as it is to become 2 marshmallows.
In an untrustworthy environment, opportunities are rare. You never know when the next opportunity will come. You could lose that marshmallow if you don't eat it now. So settling for 1 marshmallow in that case is actually a quite rational choice.
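The trust calculus above can be expressed as a simple expected-value sketch. The Python below is illustrative only: the function names are ours, and it assumes the worst case that a broken promise leaves you with 0 marshmallows while waiting.

```python
def expected_marshmallows_if_waiting(p_trust: float) -> float:
    """Expected payoff of waiting, if the promised second marshmallow
    arrives with probability p_trust and you get nothing otherwise."""
    return p_trust * 2 + (1 - p_trust) * 0

def rational_to_wait(p_trust: float) -> bool:
    """Waiting beats the certain single marshmallow only when its
    expected payoff exceeds 1, i.e. when p_trust > 0.5."""
    return expected_marshmallows_if_waiting(p_trust) > 1

# Trustworthy environment: waiting is the rational choice.
print(rational_to_wait(0.9))   # True
# Untrustworthy environment: eating the one marshmallow now is rational.
print(rational_to_wait(0.3))   # False
```

The break-even point at a trust level of 0.5 is exactly the flip described above: below it, settling for 1 marshmallow is the rational choice.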
Most successful economic models are based on the assumption that human beings make rational decisions. The 1-2-1 model of the human brain asserts the opposite: that humans never make rational decisions. How do we reconcile these two things? Enter Marshmallow Economics.
Trust is a human emotion, evolved over millions of years to guide human decisions by evaluating risks. This feature of human nature, shown by the Marshmallow Experiment, is the key to Marshmallow Economics:
In untrustworthy environments, human beings take short-term risks with lower payoffs.
In trustworthy environments, human beings take longer-term risks with higher payoffs.
Trust, or the lack thereof, is often the emotional kick needed to allow or disallow a rational decision to proceed. If the decisions human beings make resemble the rational decisions of economic models, it's because the emotions of trust and distrust guided them in that direction.
In other words, in a prosperous Human Society™, a trustworthy environment helps bring System 1 and System 2 into alignment. In a trustworthy environment, human decisions made from emotions resemble more closely the decisions of a rational model.
Marshmallow Economics is about the role that trust plays in getting the human beings in your Human Society™ to go for that second marshmallow. In other words, you want the people in your Human Society™ to have a trustworthy environment which guides them to take the risks that have the highest average payoffs over time.
There's a challenge for the keeper of a Human Society™: individuals face a different kind of risk from a Human Society™ as a whole.
Suppose, for example, 100 humans are offered a double-or-nothing risk with a marshmallow. 90% of the people taking this risk will get 2 marshmallows, but 10% will end up with nothing.
If nobody takes that double-or-nothing bet, every person in your Human Society™ will have exactly 1 marshmallow. Your 100-person Human Society™ will have a total wealth of 100 marshmallows.
If everyone takes that double-or-nothing bet, your Human Society™ as a whole will have 180 marshmallows. However, 10 people in that Human Society™ will end up with nothing.
This is an important point. Some individual people end up with nothing. Your Human Society™ as a whole, however, never ends up with nothing. This is why the behavior of individual humans may not align with the optimal behavior for your Human Society™ as a whole.
Suppose the consequences of ending up with nothing lead half the humans to decline the bet. You end up with 50 people with 1 marshmallow, 45 people with 2 marshmallows, and 5 people with 0 marshmallows. Your Human Society™ as a whole now has 140 marshmallows, instead of the optimal 180.
In this example, we get a 40-marshmallow gap between what the Human Society™ could produce and what it actually produced. How can we change the conditions to reduce that gap?
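The arithmetic in this example can be checked with a short sketch. The Python below uses expected values; the function name `societal_wealth` is ours, but the 90/10 odds and participation rates come from the scenario above.

```python
def societal_wealth(population: int, p_take_bet: float, p_win: float) -> float:
    """Expected total marshmallows when a fraction p_take_bet of the
    population takes a double-or-nothing bet won with probability p_win.
    Everyone else keeps their single marshmallow."""
    bettors = population * p_take_bet
    non_bettors = population - bettors
    return non_bettors * 1 + bettors * (p_win * 2)

print(societal_wealth(100, 0.0, 0.9))  # 100.0 -- nobody bets
print(societal_wealth(100, 1.0, 0.9))  # 180.0 -- everybody bets
print(societal_wealth(100, 0.5, 0.9))  # 140.0 -- half bet
# The gap between actual and optimal output:
print(societal_wealth(100, 1.0, 0.9) - societal_wealth(100, 0.5, 0.9))  # 40.0
```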
Answering that question by understanding how the environment affects human risk taking is what Marshmallow Economics is all about.
Human beings who live in unstable, low-trust environments, such as those caused by poverty, crime, abuse, and/or oppression, will underperform relative to humans who live in high-trust environments.
In a low-trust environment, it is more rational to choose the certainty of a small, short-term payoff over the uncertainty of a large, long-term payoff. Some of these short-term decisions may have devastating long-term consequences.
And so the people who live in these low-trust environments rarely get the large payoffs of longer-term risks. This can send them into a cycle of suboptimal risk taking which can be difficult to escape. Poverty can beget poverty, crime can beget crime, abuse can beget abuse, all the result of short-term decisions having long-term negative consequences.
Other models of human behavior may simply blame the character of the people who behave in this suboptimal way, and say it's their own fault. Marshmallow Economics, on the other hand, looks at the risk profile of the environment itself, and tries to change that risk profile, so that it becomes more logical to pursue long-term risks over short-term risks.
Therefore, Marshmallow Economics tries to optimize the output of your Human Society™ by creating and maintaining an environment of trust in which long-term, high-payoff risks will be taken.
For more information on how to use Marshmallow Economics to optimize your Human Society™, check your Human Society™ documentation.
With this worksheet, you can play with various trustworthiness levels and see how Marshmallow Economics works. When do you optimize risks in your Human Society™ by playing a short-term game, and when is it better to play a long-term game?
Suppose you had a Human Society™ of 100 people, who earn marshmallows as they work. They can get their pay by playing either a short-term game, or a long-term game.
In the short-term game, each worker will immediately get paid with 1 marshmallow. In the long-term game, each worker will get paid later, and will randomly end up either with 0, 1, or 2 marshmallows.
Of those who play the long-term game, what percentage are expected to end up with:
- 0 marshmallows: ____%
- 1 marshmallow: ____%
- 2 marshmallows: ____%
Given those odds, what percentage of the population will take the risk to play the long game?
Trying for 2 marshmallows: ____%

If 100% played the short game, your total societal wealth would be: 100 marshmallows
If 100% played the long game, your total societal wealth would be: ____ marshmallows
With ____% playing the long game, your total societal wealth would be: ____ marshmallows
Gap from actual to optimal: ____ marshmallows
Under what conditions in this scenario do you get better results for your Human Society™ as a whole if everybody plays the short-term game?
How do you think this simple theoretical scenario differs from the real thing? How do real human beings assess their short-term vs. long-term risks? When does a real Human Society™ flip from focusing on the short term to growing for the long term, and vice versa?
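The worksheet above can be sketched as a small calculator. In the Python below, the outcome probabilities `p0`, `p1`, `p2` and the participation rate are the blanks you would fill in; the values used are purely illustrative.

```python
def worksheet(population, p_long, p0, p1, p2):
    """Expected total societal wealth for the worksheet scenario.
    Short-game players earn exactly 1 marshmallow; long-game players
    earn 0, 1, or 2 marshmallows with probabilities p0, p1, p2.
    Returns (actual wealth, gap from the optimal strategy)."""
    assert abs(p0 + p1 + p2 - 1.0) < 1e-9
    long_players = population * p_long
    short_players = population - long_players
    expected_long_payoff = 0 * p0 + 1 * p1 + 2 * p2
    actual = short_players * 1 + long_players * expected_long_payoff
    # Optimal: everyone plays whichever game pays more on average.
    optimal = max(population * 1, population * expected_long_payoff)
    return actual, optimal - actual

# Illustrative fill-in: 60% play the long game; long-game odds are 20/30/50.
actual, gap = worksheet(100, 0.6, 0.2, 0.3, 0.5)
print(actual, gap)  # 118.0 12.0
```

Note that whenever the expected long-game payoff drops below 1 marshmallow, the "optimal" strategy flips: the whole society does better if everybody plays the short game, as the question above asks.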
Marshmallow Economics aims to optimize the risks in your Human Society™. It aims to create environments of trust where it is logical to take long-term risks with large payoffs. If you find that suboptimal risks are being taken in your Human Society™, there may be obstacles in the environment of your Human Society™ that reduce trust and lead to conflict and suboptimal risk decisions. In this chapter, you will learn how to use Marshmallow Economics to help find and remove the obstacles to good risk taking.
Using the Marshmallow Economics metaphor, you can think of a malfunction in your society as an obstruction that prevents someone from trying for and obtaining their second marshmallow. Troubleshooting using Marshmallow Economics involves (1) identifying the level of obstruction, and (2) removing the obstruction accordingly.
As you monitor the human communication within your Human Society™, you may find stories about people being stopped at the 0-, 1-, or 2-marshmallow level. Each of these levels has a particular kind of language associated with it. Monitor the stories in your society for this language to help you locate the obstructions that are happening in your Human Society™.
Once located, fixing such a malfunction is not as simple as removing the obstruction, because it is usually not the obstruction itself that is the root of the problem. Obstructions are usually put in place because of a lack of trust. Removing the obstruction without addressing the underlying distrust can counterproductively lead to more distrust, more obstructions, and/or more conflicts. This is addressed in the next chapter, Removing Obstructions using Marshmallow Economics.
With a Level-0 Obstruction, someone is told, in essence, "No matter what you do, you will get 0 marshmallows."
This isn't where someone takes a risk and it doesn't work out. This is where someone is completely denied the opportunity to try, or if they do try, the game is rigged so that they will always lose.
Some example Level-0 Obstructions:
- Slavery or forced captivity. "You aren't allowed freedom."
- Denial of the right to vote. "You don't get a vote."
- Denial of property rights. "You aren't allowed to possess this kind of thing."
- Occupational restrictions. "You can't have this kind of job."
- Movement restrictions. "You don't get to live or travel in this place."
Level-0 Obstructions are difficult to justify without an environment of fear and distrust. Therefore, Level-0 Obstructions are nearly always preceded by stories about how some kinds of people cannot be trusted for one reason or another. If you hear this kind of "don't trust them" language in the stories you monitor, you can be confident a Level-0 Obstruction is not far away.
With a Level-1 Obstruction, someone is told, in essence, "You can have 1 marshmallow, but not 2."
A Level-1 Obstruction usually forces someone to defend the one marshmallow they have, or to prove that they are worthy of it. To hold people at 1 marshmallow and stop them from going for 2, doubts are raised, new hurdles are introduced, and goalposts are moved as soon as the first marshmallow is secured.
When people are forced to defend their first marshmallow, it takes away energy and resources that could be used to pursue the second marshmallow. It makes obtaining the first marshmallow so difficult that the odds of also obtaining the second become too low to be worth trying.
When a Level-1 Obstruction happens, the language around it focuses on two things. First, whether the person deserves the first marshmallow:
- "How did that person get a marshmallow?"
- "I don't think that person is good enough for a marshmallow."
- "That person must have cheated to get that marshmallow."
- "That person only got a marshmallow because of [insert reason here]."
Second, whether the person should simply be grateful for the marshmallow they have:
- "You should be grateful you have a marshmallow at all."
- "You are lucky to have that marshmallow."
- "Other people don't have marshmallows."
On the one hand a Level-1 Obstruction is not as bad as a Level-0 Obstruction, because at least people get 1 marshmallow. But on the other hand, Level-1 Obstructions are more common, and more difficult to get rid of, because they often aren't recognized as obstructions at all.
The perpetrators of Level-1 Obstructions will think of it as "letting you have 1 marshmallow", and not "preventing you from getting 2 marshmallows."
It is typical of a Level-1 Obstruction that the marshmallow is assumed to have been given to the obstructed person, rather than earned or deserved. This assumption is rooted in distrust.
Level-1 Obstructions are the most difficult kind of obstacles to get rid of, because the people creating the obstruction view themselves as heroes for allowing the first marshmallow, and therefore resist being blamed for preventing the second.
With a Level-2 Obstruction, someone tells someone else, in essence, "You can have 2 marshmallows," but then fails to follow through on necessary steps to make sure it happens.
A Level-2 Obstruction is basically a broken promise. It's agreeing with someone that some obstacle exists, but then failing to remove the obstacle. It's not necessarily an act of distrust, like the other two levels, but it is an act of irresponsibility.
The language of a Level-2 Obstruction involves promises that get repeated, time and time again. They are repeated because, if the promises had been kept, there would be no need to make them again.
The Best Practice System is a simple model of human nature that lays out a path to human prosperity in seven steps.
For a beginner, it is often best to explain this path in reverse order:
This is the goal you are trying to reach with your Human Society™. You want your Human Society™ to be full of wealthy and healthy human beings, with a minimum of poverty and suffering.
In order to achieve maximum prosperity for your Human Society™ as a whole, you need your individual human beings taking risks with the highest average payoffs over time.
People will not take long-term risks with the optimal payoffs if they are not allowed to take that risk in the first place. Freedom is a necessary prerequisite for risk taking.
A Human Society™ with freedom is, to the extent of that freedom, a Human Society™ where nobody is in control of everything.
So in order for a Human Society™ to have freedom, the people in that society have to be willing to let go of the desire to control everything.
For Step 4, in order for a human being to be willing to release control, they have to trust their environment enough that they feel they'll be OK if they do.
For Steps 5 & 6, in order for human beings to take good, long-term risks, they need two things:
- to be trusted by the environment to take the risk, and
- to trust the environment enough to believe the payoff could happen.
Therefore, a bidirectional environment of trust is needed for the Best Practice System to work. Your Human Society™ needs to be both trusting AND trustworthy.
Human beings are flawed. Inevitably, even in a trusting and trustworthy environment, people will do things that other people don't like.
When that trust is betrayed, people need to be able to forgive the betrayal. Without forgiveness, betrayal turns into distrust, which turns into control, which reduces freedom, which leads to poor risk taking.
This is the key to everything in the Best Practice System.
Here is the most basic fact about human nature, the core part of the human condition: human beings are vulnerable. And they don't like it.
Human nature is a big jumble of feelings and emotions and heuristics and habits, all evolved to deal with all the various kinds of vulnerabilities human beings are subject to.
To make the Best Practice System work, you need to get the human beings in your Human Society™ to accept, at some level, their vulnerability.
Forgiveness is simply the willingness to trust despite evidence that one should not. People will not be willing to trust against the evidence if they are unwilling to accept being vulnerable, because forgiveness, by definition, makes the forgiver vulnerable.
Without that acceptance, people will never fully forgive, never fully trust, never fully release control, never grant full freedom. They will throw roadblocks in front of risks, and thereby prevent your Human Society™ from reaching its potential.
But if you can teach the human beings in your Human Society™ to accept their vulnerability, they can proceed on to Steps 2-7, and you can optimize the performance of your Human Society™.
When troubleshooting your implementation of the Best Practice System, it is helpful to monitor the stories in your Human Society™ for its opposite.
No human being likes mistakes and loss and failure. However, the ones who accept their vulnerability don't spend a lot of energy fighting against failure, or running away from loss. They are willing to err and fail and lose, because those things are steps on the path to improvement and success and victory.
Human beings who fight their vulnerability tend to lash out against even the smallest of threats. They strike first before anyone can strike them. If that doesn't work and they are harmed, they respond by escalating the harm in retaliation.
Human beings who flee from their vulnerability build walls around themselves, figuratively, and even sometimes literally. They try to avoid getting hurt by keeping a distance or a barrier, physically or psychologically, between themselves and anything with any risk of hurting them.
Perfectionism, defensiveness, a refusal to admit mistakes, and an insistence on certainty are personality traits to watch for in the stories you monitor in your Human Society™. These are all classic signs of a denial of vulnerability.
Human beings who do not accept their vulnerability end up distrusting people and things that might harm them.
Often, the fear of harm is projected outward onto other human beings. You will hear stories of people scapegoating other people, blaming them for anything that actually or potentially goes wrong.
Occasionally, human beings will blame themselves for their own vulnerability. When this happens, you will hear stories about self-defeating behaviors.
When people are unaccepting of their vulnerability, when they fear harm from every direction, people develop a low-trust mindset.
People come to believe that they can't trust anyone to be kind or fair to them. Instead, they believe that everybody acts selfishly. As a result, people feel they have to act selfishly, too, because that's how the game is played. Morality devolves from a question of right and wrong into a question of what you can get away with. Cheating and corruption become rampant. The stories you hear from a low-trust Human Society™ reflect this mindset.
If people feel that everyone is out to cheat them, they want to control other people so they can't get cheated.
The stories with malfunctions at this level usually contain a lot of discussion of rules, and the insistence on enforcing these rules upon the scapegoats.
People want freedom, and don't want to be controlled. In a malfunctioning Human Society™, you can't rely on that being the case.
Stories about these types of malfunctions involve people trying to do something they want to do, but running into two types of roadblocks:
- An oppressive insistence on strictly following every rule, and/or
- An unequal, unjust, and inconsistent enforcement of those rules. Often, this means that the powerful get away with breaking the rules, but the weak do not. At the same time, the strong are protected from harm by the rules, but the weak are not protected by those same rules.
People may have big dreams, but in a low-trust Human Society™, there is no opportunity to realize them. In the absence of such opportunities, you hear stories about people making impulsive decisions that turn out badly.
In a low-trust Human Society™, people don't take the kind of risks that have big payoffs in the end. You hear stories about people getting stuck in bad situations that they can't lift themselves out of. Whole communities, towns, cities, and countries stagnate or regress, and fail to get close to reaching their potential.
The Best Practice System aims to create environments of trust so that human beings in a Human Society™ can take optimal long-term risks with high payoffs.
Human beings will unavoidably disagree with each other. Resolving those differences without reducing trust is essential to making the Best Practice System work.
Left alone, a Human Society™ will usually evolve a ruleset for conflict resolution based on some combination of hierarchies and taboos. These rulesets, however, are usually suboptimal, and often counterproductive towards the aims of the Best Practice System.
A hierarchy is a system within a Human Society™ in which some people or groups are ranked above others, and that ranking system is used to make decisions.
A taboo is an expectation within the culture of a Human Society™ that a certain thing will not be said or a certain behavior will not be allowed.
Human beings disagree with each other. They clash. Without some mechanism to resolve disputes, these clashes will result in violence. In a distrustful environment with no clear mechanism for conflict resolution, the strong always win, and the weak always lose.
A hierarchy, therefore, is the natural output of an environment of distrust. The strong impose their will on those slightly weaker than them, who in turn impose their will on those slightly weaker still, and so forth, until an equilibrium is reached. People find their level, and to avoid further conflict, avoid taking risks that would put them in conflict with higher levels of the hierarchy, because they would likely lose. People also place obstructions against those lower in the hierarchy, in order to minimize the amount of conflict needed to preserve their own place in it.
Such a history-driven hierarchy in a Human Society™, while extremely common, is suboptimal. The hierarchy is intended as a means of creating an environment of trust: by making clear in advance who would win a conflict, it theoretically makes the conflict unnecessary. But by preventing potentially harmful conflicts, it also prevents potentially beneficial risks from being taken. It prevents better ideas from taking root in a Human Society™, because the winner is not necessarily the side with the better idea.
An alternative to hierarchies is taboos. With taboos, the rank of the person in the conflict doesn't matter. The winner is determined by who is on the side with the predetermined preferred outcome.
At first, a taboo may seem justified, because the best outcome at the time may seem obvious. It may seem that its predetermined outcome is better than a hierarchy, because initially, the outcome is probably at least somewhat correlated with the idea at hand. However, just as with a hierarchy, predetermining the outcome freezes the issue at hand in time. A taboo makes an idea lose its ability to evolve and grow. Over time, the correlation of the taboo with the quality of the idea erodes away.
Both of these methods of conflict avoidance have similar flaws. They are trying to create an environment of trust by avoiding conflicts. However, in the long run, because the winner of a conflict gets decoupled from the quality of the idea in question, it may have just the opposite effect.
The losing side of either of these methods can come to feel that their defeat is unjustified. They are losing for unrelated arbitrary reasons, instead of on their merits. This can lead them to distrust their environment, and to avoid taking risks that have arbitrary outcomes. This is not the effect the Best Practice System is aiming for.
The people on the losing side of a hierarchy begin to prefer taboos, and the people on the losing side of a taboo begin to prefer hierarchies. This sets up a natural conflict between one group of people who are pro-hierarchy and anti-taboo, and another group of people who are anti-hierarchy and pro-taboo. If your Human Society™ has a two-party system, the two parties are likely to devolve over time into those two camps. Each camp will use the advantage of their side to oppose the other side.
A movement in your Human Society™ to remove hierarchies usually begins with a drive to place a taboo on the hierarchy itself. It becomes frowned on for people to express a preference for or to defend the hierarchy, or to act as if they deserve to get their way or to win a conflict just because they are ranked higher in the hierarchy.
A hierarchy will respond to this movement by using the strength and power inherent to their place in the hierarchy to suppress the taboo, and defend the hierarchy.
This, however, is a false dichotomy. Each side thinks they are defending the most deserving outcome. Each side thinks their method is the best way to avoid conflict. Each side is wrong. This is an unnecessary fight. The Best Practice System offers an alternative.
Hierarchies and taboos both aim to avoid conflict. The Best Practice System, on the other hand, doesn't care whether conflict is avoided or not. The Best Practice System only cares if trust exists or not.
Under the Best Practice System, a hierarchy or a taboo can exist, provided it is useful in preserving trust. If it fails to preserve trust, it ceases to be a useful construct. Similarly, if avoiding a conflict preserves trust, the Best Practice System deems that a good thing. But if avoiding a conflict reduces trust, the Best Practice System would rather let the conflict happen.
Therefore, the Best Practice System does not necessarily seek to avoid conflict. It seeks instead to manage the conflict in such a way that the outcome of that conflict creates the most trust.
Therefore, in the aim of preserving trust, the Best Practice System offers some general principles of conflict management:
The winner of a conflict should correlate as strongly as possible with the merits of what the conflict is about. Avoid arbitrary methods of conflict resolution that do not correlate with the topic at hand. People will trust the outcome of conflicts far more readily if the winner is chosen on the merits.

The consequences of losing a conflict should not discourage future conflicts. Physical harm, and other consequences that are not proportional to the risk being taken, prevent good new ideas from emerging in your Human Society™.

Outcomes should be proportional to the risks being taken in the conflict. Too large or too small a reward for a victory, or too large or too small a punishment for a loss, creates distrust in the conflict resolution system, which distorts how risks are taken in your Human Society™.
The Best Practice System does not want to avoid conflicts, or to predetermine their outcomes. Instead, the Best Practice System aims to manage the method of conflict. It wants the outcomes of those conflicts to be as trusted and trustworthy as possible, so that the best ideas emerge to make your Human Society™ grow more prosperous.
A taboo that predetermines the outcome of a conflict is not desired under the Best Practice System. However, a taboo can still be useful, provided that it is used to manage the method of conflict and to optimize the risk-taking in a Human Society™.
Similarly, a hierarchy that merely exists to preserve itself, as the vestiges of an irrelevant ancient conflict, is not desired under the Best Practice System. However, a hierarchy that emerges under a fair and trustworthy system of conflict, can be the sign of a healthy and prosperous Human Society™.
Under the Best Practice System, therefore, hierarchies and taboos are neither good nor bad, but trust makes them so.
Every model is flawed in some way. The Best Practice Model promoted in the Quick Start Guide to Human Society™ is no different. The primary flaw of the Best Practice Model is that it is extremely difficult to get human beings to perform the prerequisites to freedom.
It is no simple task to get people to accept their vulnerability, to be willing to trust and forgive, and to relinquish their desire for control. Some sort of strong religious, philosophical or moral tradition which clearly and effectively communicates these values needs to be embedded into the daily lives of the people in your Human Society™ in order to make the Best Practice System work.
If communication of those values fails or breaks down, you will have to turn to some other model of human nature with some other strategy for maintaining your Human Society™.
If you do not promote any particular model, your Human Society™ will fall into the Default Model. The Default Model assumes all human beings act selfishly all the time. Kindness and altruism are assumed to appear only when there is some selfish benefit to being kind or altruistic. Otherwise, everybody cheats if they can get away with it.
In the Default Model, therefore, strength is the primary virtue. Only through strength can people be forced to behave properly, through the threat of punishment from the strongest people. If you are kind or altruistic or obedient without cause, you are either weak, stupid or foolish.
A Human Society™ operating under the Default Model tends to be highly hierarchical. The strong rise to the top and impose their will upon the weak, stupid and foolish. The model holds that the strong deserve their success, and the weak, stupid and foolish deserve their failures.
A big flaw of the Default Model, of course, is that it suppresses all sorts of risk taking. Why take any big, long-term risks when your winnings will likely be cheated away from you, or when some stronger person may arbitrarily decide to punish you for daring to challenge them? Hierarchies under the Default Model reinforce themselves. The meek have no chance of inheriting such a Human Society™. The further down the hierarchy you go, the less long-term risk-taking you get. The net result is that your Human Society™ as a whole ends up in stagnation or regression.
Another big flaw of the Default Model is that if there comes a time of emergency where the survival of the Human Society™ depends on the cooperative, altruistic behavior of the population, it can be difficult to get a population immersed in selfishness to respond in a timely and effective manner.
Because the Default Model is the default, every single religious, philosophical, moral, or political system that arises in a Human Society™ exists to some extent in order to provide opposition to that model. If they weren't opposing that model, such systems wouldn't need to exist at all. The fact that such opposing models tend to arise spontaneously in every Human Society™ should tell you all you need to know about the ineffectiveness of the Default Model.
You have many other alternative models for your Human Society™ besides the Best Practice Model and the Default Model. To help you sort through all your choices, we can divide these models and strategies into some basic categories:
Optimistic models of human nature assert that human beings are essentially good and generous, but there are external obstacles in your Human Society™ that hold them back from reaching their potential.
These obstacles, such as a lack of freedom, a lack of equality, power structures, or bad incentives, prevent human beings from reaching the full potential of their good and generous natures. The strategies based on these models focus on removing such obstacles from your Human Society™.
Pessimistic models of human nature assert that human beings are essentially bad and selfish, and they need to be guided by your Human Society™ away from their destructive nature. The strategies based on these models focus on controlling bad behavior through discipline and strength.
Some pessimistic models focus on reining in the bad and selfish behavior by forming collective institutions, such as families or tribes or religions or constitutional rights or rule of law. Other pessimistic models, such as the Default Model, focus on individual strength as the best way to overcome the selfishness of others.
There are other models which assert that there is nothing "essential" about human nature. Instead, these models assert that human behavior is dependent on the environment at any given time and place.
The strategies based on context-dependent models focus on manipulating the environment to achieve desired outcomes.
The Best Practice Model is such a context-dependent model of human nature, one which focuses on trust in the environment. There are other contexts to consider, too, like the size of the Human Society™, the amount of scarcity, and the extent to which basic needs are met.
Pragmatic models make no claim whatsoever about human nature. Instead, pragmatic strategies simply proceed by trial and error to see what works and what doesn't. They keep what works and throw out the rest. Why something works or doesn't isn't considered an important question.
The problem with pragmatic models is that without a theory of human nature behind them, they lack a moral foundation. Without a good moral story to tell, it is difficult for pragmatists to establish and maintain trust within their Human Society™.
This is because pragmatism is more of a System 2 idea than a System 1 idea. A pragmatic decision may be logical from a System 2 standpoint, but by abstaining from a belief about human nature, pragmatists deprive themselves of a good System 1 model from which to tell stories in support of their decisions.
People may be willing to try a pragmatic approach for a while, but because there's no story about human nature to identify with, people are less attached to pragmatic models than other models. Pragmatists can appear inconsistent, because they may promote one idea one day, but if the data changes, they may promote its opposite the next. Their motives may come into question, as their amorality can be easily mistaken for immorality.
Therefore, it is difficult to make a pragmatic model stick compared to other types of models. In the long run, people prefer models that tell a story about human nature over those that don't, no matter how flawed those models may be. The ability to tell a story to support their model is why both the Default Model and its opposing religious systems can persist for millennia, while pragmatist philosophies come and go like the weather.
In general, the models of human nature that permeate your Human Society™ will usually look something like the ones listed above. But that's only in general.
Specifically, however, individual humans won't always have such clear, coherent and rational views of human nature. Most humans build their System 1 models of the world not through a coherent rational analysis that is then trained into their brains through deliberate study and practice. Instead, most humans build their models of human nature via the messy process of living their individual lives.
Yes, humans may grow up in a culture that tries to teach them a coherent worldview through some kind of religious, philosophical or moral cultural tradition. But humans also grow up in a real Human Society™ with other real human beings and live through all sorts of experiences that can influence how those models get formed in their brains.
Are the people around them kind and generous, or cruel and selfish? Do they experience love, or neglect? Loyalty, or betrayal? Do they grow up in a time of plenty, or a time of scarcity? Do they feel safe and secure, or are they under constant threat of violence and danger? Are they surrounded by health, or by sickness and death? Are their lives predictable, or is one day never the same as the next? Do they feel free to pursue their dreams, or are they stifled with limited choices?
The story of an individual human life isn't the kind of story formed by an artist painting a deliberate picture in order to make a point. It's a mess of random life events that doesn't necessarily form a cohesive narrative. As a result, the model of human nature that any specific individual human being lives by usually ends up being an incoherent hodge-podge.
It may be tempting to tear down these incoherent models by pointing out their contradictions, but this will be an ineffective strategy. These models live in System 1; they are not the rational output of System 2. System 1 doesn't change in response to a rational argument. System 1 changes through experience, through repetition, through emotion.
A human being, and a Human Society™, is not a static entity that builds one model and sticks with it forever. Human beings have experiences that form their model, but those experiences don't stop just because they've formed a model. They continue to have more experiences that change their models even further. Some people's lives change slowly, and their models change slowly with them. Other people have dramatic events happen to them, and those events can trigger large changes to their models.
Models are always changing, so at any given moment, those changes may not make sense together as a whole unit. A System 1 model in the mind of a human being is just a snapshot in time. Coherency and incoherency, over the lifespan of a human being or a Human Society™, can ebb and flow, like waves. An incoherent hodge-podge of a model is a normal and expected part of the process.
Incoherent models have their flaws, obviously. Those flaws will often become problems. But an incoherent model with those flaws can also serve as a stepping stone to a better, more coherent model that works better. Coherency is like an equilibrium that people reach, and settle at for a time, until something happens that jolts them out of that equilibrium. Then they're off on a journey, through incoherency, in search of a new equilibrium.
So an incoherent model, for a time, may not necessarily be a bad thing. A human being, and your Human Society™, may need an incoherent model now and then, in order to move on from a broken model to a better one. So rather than trying to destroy any incoherent models you may encounter in your Human Society™, it is usually a better strategy to be more forgiving of these flaws, to try to understand how someone got to the place they are now, and through that understanding, guide them gently and wisely toward a better, more coherent model.