Standardizing Giving Research with The Giving Game

Note: This is a very technical (and somewhat “meta”) post intended for a narrow audience: people highly engaged with charitable giving research, and particularly those interested in producing research that can and will be applied to improve real-world giving decisions and outcomes. It is part of a broader series on giving research, most of which is more accessible.


Using a common experimental format makes it easier for researchers to contextualize their findings and leverage each other’s work. For instance, researchers studying generosity have run hundreds of “Dictator Games”. This practice enables meta-analysis of those experiments, and increases our understanding of their individual and collective findings.

The Dictator Game’s structure (the first player decides how to split an endowment, often cash, between themselves and the second player) makes it highly useful for studying dynamics related to generosity. In short, it studies how much people give. In this post, I argue that researchers who want to improve real-world giving outcomes should adopt a common experimental framework structured to study who people choose to give to. As I’ve previously argued, there are much larger wins to be had from improving the quality of giving than from improving the quantity of giving.

Specifically, I propose researchers adopt the “Giving Game” as a standardized research framework. A Giving Game is an experimental paradigm that can be used to study which charities people support and why.  One or more players are given an endowment—typically cash—and a defined charitable choice set, and must decide where to give. Thus the Giving Game is structured to study which beneficiaries players choose to support and how they make those choices. Below, I’ll discuss other aspects of the Giving Game’s design that make it an attractive tool for learning about giving behavior, and more importantly, how to improve it.


The benefits of adopting a common experimental approach are obviously maximized if the “right” approach becomes the standard; adopting the “wrong” standard could easily be worse than no standard at all. Different approaches will be better or worse suited to different uses. This post aims to generate research that will ultimately improve real-world giving behavior as much and as quickly as possible, and it judges different research frameworks accordingly.

The following example illustrates how critical it is to match a research framework to the questions it’s meant to answer. Learning about people’s preferences across charities is clearly an important part of giving research, and multiple frameworks can be used to study those preferences. The table below compares two such frameworks along dimensions relevant to producing actionable research. The first is a “Utilitarian” framework, a standard way for economists to express preferences across charities. The second is a “Decision Tree” framework, which the Giving Game method utilizes.


How are charitable preferences modeled?

Utilitarian: Models preference relations using indifference curves/surfaces, or alternatively a “utility function.”

Decision Tree: Models preferences as the output of a series of choices, i.e. as a flow chart where forks correspond to decisions and branches to the resulting choices.

Is this model easy for a layperson (e.g. a typical nonprofit practitioner) to understand?

Utilitarian: No. As the number of charities being modeled increases, the mathematical representations become more complex. Indifference curves (which can be depicted with a straightforward graph) quickly become multi-dimensional indifference surfaces. So a model of someone’s preferences over even a handful of charities is quite difficult for laypeople to understand without significant translation.

Decision Tree: Yes. Modeling complex giving decisions may require a complex decision tree, but that structure is still fundamentally easy to understand. No matter how complex a decision tree gets, tracking the decisions people make and the outcomes they lead to is as simple as following a flow chart.

How much flexibility does this model give researchers?

Utilitarian: Not much. To make the math work out, the Utilitarian framework makes important, but not necessarily realistic, assumptions. For instance, it implies a stable set of preferences over all alternatives, and that preferences are transitive and consistent. This “restricts what sort of patterns we could see, and imposes certain consistencies in behavior” that might not be realistic.

Decision Tree: A lot. “A decision tree approach could allow less restricted patterns of behavior, and allow the possibility that more things could influence giving choices.” For instance, Decision Trees easily model highly context-dependent behavior by expressing different contexts as separate branches of the same tree.

Does the model produce results in an actionable format?

Utilitarian: No. The Utilitarian framework can provide an excellent model of charitable preferences (both absolute and relative) as they are, and of the intensity of those preferences, but it gives relatively little insight into how those preferences were arrived at, or how to change them.

Decision Tree: Yes. Decision Trees tell us how people arrive at different giving decisions, and the factors that can change those decisions. These models convey prescriptive information in a way that laypeople can easily understand (e.g. you need to get people to do giving research to improve outcomes).

While the Utilitarian and Decision Tree frameworks are both perfectly reasonable approaches to modeling charitable preferences, they produce information that will be useful in different contexts. For producing actionable giving research, the Decision Tree framework has clear advantages.


Another advantage of the Decision Tree framework is that it naturally breaks giving decisions down into component decisions. This imposes a form that lends itself to efficient problem-solving techniques.

The simplified example diagram below illustrates this. It provides a basic model in which a donor decides where to give based on a series of three binary choices, producing 8 possible outcomes. Structuring the problem in this way allows researchers to focus on understanding how donors make the most important decisions: those that lead to, or away from, the best outcome(s). Branches of the tree that only lead to suboptimal outcomes can be ignored.

By focusing only on the decisions that lead to the best outcome(s), we can learn how to produce those decisions (and therefore outcomes) in an efficient manner. In the example above, there are 7 decisions that donors make, but only 3 of them are relevant to producing the desired outcome. Those 3 decisions are the ones we should spend our scarce resources learning how to influence.
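To make this concrete, here is a minimal sketch (not from the post) of how such a three-level binary decision tree could be represented in code, and how the decisions on the path to the best outcome can be isolated. The node labels and outcome values are illustrative assumptions; the point is that of the 7 internal decisions, only the 3 on the best path need to be studied.

```python
# A hypothetical depth-3 binary giving decision tree: 7 decision
# nodes, 8 leaf outcomes. Labels and values are illustrative.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Node:
    label: str
    left: Optional["Node"] = None   # e.g. the "no"/narrow branch
    right: Optional["Node"] = None  # e.g. the "yes"/broad branch
    value: float = 0.0              # outcome quality (leaves only)


def best_path(node: Node) -> Tuple[float, List[str]]:
    """Return (value of the best reachable leaf, decisions along the way)."""
    if node.left is None and node.right is None:
        return node.value, []
    best_val, decisions = -float("inf"), []
    for branch, child in (("left", node.left), ("right", node.right)):
        val, subpath = best_path(child)
        if val > best_val:
            best_val = val
            decisions = [f"{node.label} -> {branch}"] + subpath
    return best_val, decisions


# Build the full tree: 8 outcomes, one clearly best (value 10).
leaves = [Node(f"outcome {i}", value=v)
          for i, v in enumerate([1, 2, 1, 3, 2, 1, 10, 2])]
d3 = [Node(f"decision 3{c}", left=leaves[2 * i], right=leaves[2 * i + 1])
      for i, c in enumerate("abcd")]
d2 = [Node(f"decision 2{c}", left=d3[2 * i], right=d3[2 * i + 1])
      for i, c in enumerate("ab")]
root = Node("decision 1", left=d2[0], right=d2[1])

value, decisions = best_path(root)
print(value)      # 10 (the single best outcome)
print(decisions)  # only 3 of the 7 decisions matter for reaching it
```

Running this prunes away the four decision nodes whose branches lead only to suboptimal outcomes, leaving just the three decisions worth spending scarce research resources on.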

Once the most important decisions have been identified, it’s easy to construct a Giving Game that focuses on them. Returning to the example above, let’s consider a researcher studying how widely donors define their circle of compassion (Decision 2b in the diagram above). The best outcome is only possible if donors define their circle of compassion broadly. Researchers can test their hypotheses about this decision by running Giving Game experiments that make the choice highly salient to subjects.

For example, they can configure a Giving Game where players choose between supporting a nonprofit that performs cataract surgeries in the developing world (restoring sight for roughly $50-100 per surgery) and a nonprofit that trains guide dogs in the developed world (at a cost of about $48,000 per dog). Since the first charity provides a greater benefit at a lower cost, players in the developed world who choose the second charity have presumably defined their circle of compassion narrowly. This basic structure could be used to test multiple interventions, with the goal of finding ones that cause subjects to support the first charity in significantly greater numbers than a control group.
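The cost gap driving this design can be made explicit with back-of-the-envelope arithmetic, using the figures quoted above. The donation size is an illustrative assumption.

```python
# Rough cost-effectiveness comparison of the two choice-set charities.
# Cost figures come from the text above; the donation size is assumed.
CATARACT_SURGERY_COST = 75.0   # midpoint of the ~$50-100 range
GUIDE_DOG_COST = 48_000.0      # quoted training cost per dog

donation = 48_000.0            # enough to train exactly one guide dog

surgeries_funded = donation / CATARACT_SURGERY_COST
dogs_funded = donation / GUIDE_DOG_COST

print(f"{surgeries_funded:.0f} sight-restoring surgeries vs "
      f"{dogs_funded:.0f} trained guide dog")
# 640 surgeries vs 1 guide dog for the same donation
```

The same donation buys several hundred sight restorations or a single guide dog, which is what makes the choice a clean test of how widely a player has drawn their circle of compassion.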


I’ve argued for standardizing an approach to studying giving behavior, and that the Giving Game has several important structural advantages that make it well suited for use as that standard. If researchers adopt this standard, the whole of their output will be greater than the sum of its parts.

The next post in this series will discuss how this synergistic process can be facilitated if researchers using the Giving Game framework aggregate their data in a shared resource, supplemented by Giving Game data from non-research contexts.

Jon Behar
As COO, Jon helps coordinate The Life You Can Save’s various projects and set the organization’s overall strategic direction. He founded and continues to run our Giving Game project, a global philanthropy education initiative that teaches people skills to give more effectively and makes these lessons tangible by providing workshop participants with real money to donate to the charities of their choice.

Prior to joining The Life You Can Save, Jon spent ten years at a prominent hedge fund, working primarily in the areas of risk management, portfolio optimization, and algorithm development. He has also served on the board of directors for GiveWell, a widely-respected charity evaluator.

Jon now lives on Bainbridge Island, WA with his wife Meghann Riepenhoff (an acclaimed artist) and their dog Oso.
The views expressed in blog posts are those of the author, and not necessarily those of Peter Singer or The Life You Can Save.
