Dynamic remodeling of in-group bias during the 2008 presidential election. Edited by Richard E.
Nisbett, University of Michigan, Ann Arbor, MI, and approved January 29, 2009 (received for review November 12, 2008). D.G.R. and T.P. contributed equally to this work. Abstract: People often favor members of their own group while discriminating against members of other groups. In-group favoritism, or solidarity, is a well-documented aspect of human behavior (1–5, 49). Several explanations for the evolution of in-group favoritism have been proposed; one possibility involves reciprocity heuristics (3, 19). Whatever the mechanism behind its evolution may have been, a flexible sense of group identity is essential.
In this field study, 395 Democrats were recruited from public spaces in Cambridge, MA, to act as dictators in a modified dictator game. Results. Fig. 1. Significant in-group favoritism exists among men in June (A) and August (B), but not in September (C). Dynamic social networks promote cooperation in experiments with humans. Edited by Douglas S.
Massey, Princeton University, Princeton, NJ, and approved October 18, 2011 (received for review May 23, 2011). Abstract: Human populations are both highly cooperative and highly organized. Human interactions are not random but rather are structured in social networks. Individual versus systemic risk and the Regulator's Dilemma. Edited by Jose A.
Scheinkman, Princeton University, Princeton, NJ, and approved June 15, 2011 (received for review April 15, 2011). Abstract: The global financial crisis of 2007–2009 exposed critical weaknesses in the financial system. Many proposals for financial reform address the need for systemic regulation, that is, regulation focused on the soundness of the whole financial system and not just that of individual institutions. These crises have led to worldwide efforts to analyze and reform banking regulation. In this context, we use a deliberately oversimplified toy model to illuminate the tensions between what is best for individual banks and what is best for the system as a whole. Our work complements an existing theoretical literature on externalities (or spillovers) across financial institutions that affect systemic risk (15–32).
Model. Consider a highly stylized world with N banks and M assets. We define X_ij as the allocation of bank i to asset j. Direct reciprocity in structured populations.
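The stylized setup of N banks and M assets, with X_ij the allocation of bank i to asset j, can be sketched in code. The allocation rule below (each bank spreading a unit balance sheet evenly over a few randomly chosen assets) is purely illustrative and is not the paper's calibration:

```python
import random

def allocate(n_banks, m_assets, picks, seed=0):
    """Build an N x M allocation matrix X where X[i][j] is the
    fraction of bank i's portfolio held in asset j.
    Illustrative rule (an assumption, not from the paper): each bank
    spreads its balance sheet evenly over `picks` random assets."""
    rng = random.Random(seed)
    X = [[0.0] * m_assets for _ in range(n_banks)]
    for i in range(n_banks):
        for j in rng.sample(range(m_assets), picks):
            X[i][j] = 1.0 / picks
    return X

X = allocate(n_banks=4, m_assets=6, picks=3)
# Each row sums to 1: bank i's full portfolio is allocated.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in X)
```

In a toy model of this kind, systemic risk enters through the overlap between rows: the more banks hold the same assets, the more a shock to one asset propagates to all of them.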
Divine Intuition: Cognitive Style Influences Belief in God. Powering up with indirect reciprocity in a large-scale field experiment. Edited by John C.
Avise, University of California, Irvine, CA, and approved April 3, 2013 (received for review February 14, 2013). Abstract: A defining aspect of human cooperation is the use of sophisticated indirect reciprocity. We observe others, talk about others, and act accordingly. Cooperation occurs when we take on costs to benefit the greater good. Direct and indirect reciprocity involve repeated interactions, creating future consequences for one's actions: it can pay to cooperate today in order to receive cooperation from others tomorrow.
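The observe-and-act logic of indirect reciprocity can be illustrated with a minimal image-scoring sketch. The rules below (binary good/bad standing, helping a recipient in good standing earns standing, refusing loses it) are a textbook simplification and an assumption of this sketch, not the design of the field experiment above:

```python
def image_scoring(interactions, n_players, defectors=()):
    """Minimal image-scoring sketch of indirect reciprocity
    (illustrative rules, not the paper's experimental design).
    Each interaction is a (donor, recipient) pair. Unconditional
    defectors never help; everyone else helps recipients currently
    in good standing. Helping earns a good reputation; refusing
    loses it, even when the refusal is justified (a well-known
    limitation of plain image scoring)."""
    good = [True] * n_players   # everyone starts in good standing
    helps = 0
    for donor, recipient in interactions:
        if donor not in defectors and good[recipient]:
            helps += 1
            good[donor] = True
        else:
            good[donor] = False
    return helps, good

# Player 2 always defects, loses standing, and is then refused:
helps, good = image_scoring([(2, 0), (0, 1), (1, 2)], 3, defectors={2})
```

Reputation thus carries the future consequences: defecting today costs you help from third parties tomorrow, even from people you have never met.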
Most of the literature on the evolution of cooperation uses the Prisoner's Dilemma and related frameworks: players can pay a cost to give a greater benefit to one or more others. All of these mechanisms are relevant for the evolution of human cooperation, but direct reciprocity and indirect reciprocity occupy a central place: most of our key interactions are repeated, and reputation is usually at stake. Fig. 1. Results. From the Cover: Evolution of fairness in the one-shot anonymous Ultimatum Game. Human Cooperation. David Rand: "How Do You Change People's Minds About What Is Right And Wrong?" The question that has preoccupied people for a long time is "How do you stop that from happening?
" There are a lot of good answers. For example, if you interact repeatedly with the same person, then that changes things. If the other person has a strategy where they'll only cooperate with you tomorrow if you cooperate with them today, it becomes in your self-interest to cooperate. Or, if people can observe what you're doing, you'll get a reputation for being a cooperator or a non-cooperator. And if people are more inclined to cooperate with people that have cooperated in the past, then that also creates an incentive to cooperate. What all these different mechanisms boil down to is the idea that there are often future consequences for your current behavior. For example, we did an experiment with a utility company in California.
In order for any of that to work, people have to care about your being cooperative; they have to care that you do the right thing.