Obesity

Introduction

Tara-Lyn Camilleri-Carter
Monash University, Melbourne, VIC, Australia

Conservative estimates suggest that over one
billion people worldwide are either overweight
or obese (Wang and Speakman 2016). Obesity
levels have more than doubled since the 1980s,
with the estimate of overweight and obese adults
climbing to ~38% of the global population
(NCD-RisC 2016; Ng et al. 2014). Obesity is
responsible for approximately three to four million deaths per year worldwide, predisposing
sufferers to a plethora of related diseases such as
type 2 diabetes, cardiovascular diseases, metabolic syndrome, chronic inflammatory diseases,
and 13 types of cancer (Buescher et al. 2013;
González-Muniesa et al. 2017; Ng et al. 2014;
Pearson-Stuttard et al. 2018; Whiteman et al.
2015; Wilson et al. 2018). Despite the costly
implementation of public health policies, obesity
levels continue to rise around the globe due to the
complicated epidemiology and etiology of obesity (González-Muniesa et al. 2017). This entry
concisely considers some of the possible evolutionary reasons why obesity has arisen, beginning with the evolution of our diet. Next, it will
cover the evidence for the longstanding thrifty
genotype hypothesis as a possible origin of obesity, and finally, consideration will be given to
the role protein targets may play in the rise of
obesity in humans.



Synonyms
Adiposity; Body mass index; Diet; Nutrition;
Overweight

Definition
Clinicians define the state of being overweight or
obese by body mass index (BMI), calculated as body mass in kilograms divided by the
square of the person's height in meters. A BMI from
18.5 to 25 is considered within a healthy range,
i.e., less likely to be associated with increased risk
of certain weight-related diseases (González-Muniesa et al. 2017). The categories of underweight (BMI <18.5), overweight (BMI 25–30),
and obese (BMI >30) are associated with increased
risks of certain diseases, costs to reproductive
performance, and reduced life expectancy (GBD 2015
Obesity Collaborators et al. 2017; Global BMI
Mortality Collaboration et al. 2016; Hammiche
et al. 2012; van der Steeg et al. 2008; Zain and
Norman 2008).
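The BMI arithmetic and category cutoffs above can be sketched in a few lines of code; the function names and the example values (85 kg, 1.70 m) are illustrative, not taken from the cited studies:

```python
def bmi(mass_kg: float, height_m: float) -> float:
    """Body mass index: mass in kilograms divided by the square of height in meters."""
    return mass_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Map a BMI value onto the clinical categories described in the Definition."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy"
    if value < 30:
        return "overweight"
    return "obese"

print(round(bmi(85, 1.70), 1))        # 29.4
print(bmi_category(bmi(85, 1.70)))    # overweight
```

A hypothetical person of 85 kg and 1.70 m therefore falls in the overweight (BMI 25–30) category.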

© Springer Nature Switzerland AG 2021
T. K. Shackelford, V. A. Weekes-Shackelford (eds.), Encyclopedia of Evolutionary Psychological Science,


The Evolution of Our Diet
The diet of an individual consists of many nutrients that are required in various proportions relative to other nutrients. These nutrients interact
with each other and with the individual’s life
stage, genome, microbiota, and condition. It is
generally true that a single nutrient needs to be
consumed in balance, and that over- or underconsumption is harmful to an individual's health
(Raubenheimer et al. 2005). Therefore, in order to
maximize fitness, an individual must balance
ingestion and absorption of nutrients in appropriate
proportions. Precisely which foods an organism
ingests to achieve this balance is determined by
how each nutrient affects appetite and satiety –
which is specific to the evolutionary history of
each organism (Simpson et al. 2004). The three
major energy-contributing nutrients in any diet
are carbohydrate, fat, and protein, also termed macronutrients. These macronutrients can explain a
great deal of the physiological and behavioral variation between organisms and thus play an important role in determining evolutionary fitness
(Raubenheimer and Simpson 2016).
Over the ~40,000 years since the Paleolithic period, the human diet has changed drastically,
and this change has accelerated even further
over the past ~50 years. Fossil studies of past
humans show that they were limited for energy
because sources of simple fats, starches, and
sugars were much less abundant than they are
today (Eaton et al. 1996), whereas complex
sources of carbohydrate such as roots or tubers
and sources of protein from fish or lean game meat
were the main staples of a hunter-gatherer diet.
Hunter-gatherer populations today, such as the
Hadza, have very low levels of obesity (<5%)
and rarely experience metabolic or cardiovascular
diseases (Pontzer et al. 2018). Importantly, the
Hadza people spend far longer engaging in
physical activity than most of their Western counterparts, accumulating over 120 min each day
(Pontzer et al. 2018). When many societies transitioned from a hunter-gatherer to an agricultural lifestyle, diets increased in carbohydrate – mainly
starchy grains. Although the shift to agriculture
occurred at differing times across societies, evidence from multiple regions indicates this rise in
starch coincided with a reduction in protein. The
agricultural revolution also produced an increased
disease burden, due to closer living quarters, and
increased the risks of famine due to reliance on
crops (Prentice 2005; Wells 2012). Due to the
availability of mass refining and improved transport of sugar and grains, the carbohydrate content
in human diet increased further during the industrial revolution, although still at this time, obesity
prevalence was low and considered a luxury of the
wealthy (Simpson and Raubenheimer 2012).
In approximately the last 50 years, the macronutrient composition of our diet has changed
even more, with most in developed nations having access to an abundance of food. We now have
access to an unprecedented amount of simple
sugars and fats, and highly processed foods are
cheap and plentiful. In tandem with this nutritional shift, we are required to expend much less
energy on subsistence activities than did our
forebears (Simpson and Raubenheimer 2012).
Despite this change in lifestyle, our physiology
remains very similar to that of our ancestors. This
led many researchers to hypothesize about evolutionary discordance, which suggests that significant departures from our ancestral diet, but not
from our ancestral physiology, have greatly contributed to noncommunicable diseases such as obesity (Konner and Eaton 2010). However,
research investigating the interaction between
our culture, genes, and environment (termed
gene-culture coevolution) has shown recent positive selection for mutations such as lactose tolerance, which arose in societies that moved
toward dairy farming for subsistence
~5000–10,000 years ago (Swallow 2003;
Tishkoff et al. 2007). Moreover, the genes that
confer some resistance to diabetes may well be
under positive selection (Helgason et al. 2007),
indicating that an adaptive lag between our genes
and diet is not present for all nutritional changes
in recent evolutionary history. Such evidence
from gene-culture co-evolution studies may
lend credence to another possible explanation
for obesity: the thrifty gene hypothesis, because
these selection signatures are what one would
expect to find under this hypothesis, and this is
considered in the next section.

Thrifty Genes?
The thrifty gene hypothesis postulates that obesity
may be the result of positive selection for genes
that promote fat storage – these so-called thrifty
genes may have come under selection during
times of food scarcity in our evolutionary history
(Gibson 2007; Wang and Speakman 2016). Now
that most people in developed, and even many
developing nations, have access to an abundance
of food – these once thrifty genes – may now be
maladaptive and thus lead to obesity. There are
several reasons why this hypothesis is alluring to
researchers. First, there is variation in obesity and
related disease susceptibility in humans, and these
traits are both genetically correlated and heritable
(Gibson 2007). Second, when we look into our
own evolutionary past, we can see shifts in diet,
beginning with an energy limited environment
changing to an energy abundant one (Simpson
and Raubenheimer 2012). Third, when we put
humans in context with other extant primates,
we see many have an annual body mass cycle
where they consume in caloric excess when fruit
is abundant and experience the lean season
when fruit is more scarce (Irwin et al. 2015). In
fact, this type of endogenous cycle where animals
eat to a positive energy balance in preparation for
energy scarcity is found in other taxa such as
barnacle geese, ground squirrels, alpine marmots,
bears, and hairy-nosed wombats (Atkinson et al.
1996; Finlayson et al. 2010; John 2005; Körtner
and Heldmaier 1995; Portugal et al. 2007). It may
appear plausible then that humans are perhaps
accumulating weight for a winter that will
never come.
However, studies that have investigated positive selection in alleles that may confer a fitness
advantage to efficient fat storage have failed to
demonstrate any selection signature that would
be congruent with the thrifty genotype hypothesis (Ayub et al. 2014; Wang and Speakman
2016). Additionally, there are several other limitations to this hypothesis; they all relate to famine as a selective pressure. It seems that while
Paleolithic people were in an energy-scarce environment (compared to now), famines were
more likely during the agricultural and industrial
revolutions. To confer a great enough selective
pressure, famines would likely need to be quite
frequent and quite devastating – the mortality
rate from the famine would need to be high
enough to exert sufficient selective pressure
(Gibson 2007; Speakman 2006). There is much
debate in the literature about the frequency and
severity of famines throughout the past
~50,000 years; some researchers describe famines
as occurring as frequently as every decade, others
as only once every century, all with differing
mortality rates. This makes determining whether
famine is a sufficiently strong and regular selective pressure difficult (Prentice 2005; Speakman
2006). It is also more likely during these times of
scarcity that people died from infectious diseases, rather than starvation itself, although interestingly a BMI >30 is associated with an
increased chance of survival from some infections such as community-acquired bacterial
pneumonia (Corrales-Medina et al. 2011). It is
therefore possible that a thrifty gene confers
advantages different from those previously investigated, although a BMI >30 is also
associated with an increased risk of mortality
from other acute infections (Dhurandhar et al.
2015). Nevertheless, little direct evidence exists
for the thrifty genotype hypothesis.

Are We Eating to a Protein Target?
Rather than thrifty genotypes, perhaps the more
recent changes in our diet have led to obesity,
because many organisms, including humans, are
eating to specific nutrient targets (Raubenheimer
and Simpson 2016). Organisms possess nutrient-specific appetites that maintain homeostasis and
alter the consumption of different nutrients
(Corrales-Carvajal et al. 2016). If the correct proportion of nutrients required for maintaining
health is not available, however, organisms
attempt to remedy this by ingesting a range of
foods with differing but complementary nutrients
(Raubenheimer and Simpson 1997; Waldbauer
and Friedman 1991).
Much data across taxa (including primates,
flies, and murids) indicates that protein is the
primary determinant of appetite and satiety
(Felton et al. 2009; Gosby et al. 2011, 2014; Lee
et al. 2008; Simpson et al. 2004; Solon-Biet et al.
2014; Sørensen et al. 2008). Therefore, food is
generally ingested in a manner that ensures protein
consumption is within a strict range of values, but
a wider range of carbohydrate and lipid intake can
be tolerated. This may be because protein overconsumption can carry physiological and metabolic costs associated with excretion, whereas
underconsumption leads to a decrease or cessation
in reproductive output (Piper et al. 2005; Solon-Biet et al. 2015). If excess carbohydrate or fat is
consumed however, energy stored as body fat can
serve as an important buffer against seasonal variations in energy supply (Prentice 2005; Walker
et al. 2017).
Because protein consumption is so closely
linked to satiety, it has the capacity to shape
feeding behavior, and can be exploited to facilitate the loss of body fat by curtailing appetite
through consumption of high protein diets. By
contrast, consuming low protein foods to meet a
protein intake target can result in overconsumption of sugar and fat, which in the long
term can lead to obesity (Gosby et al. 2014;
Raubenheimer et al. 2005). Together, this has
led to the hypothesis that the current obesity
epidemic is fuelled (at least in part) by the
unprecedented abundance of low protein, energy
dense foods that are highly palatable, yet have
little satiety value (Martínez Steele et al. 2018;
Raubenheimer et al. 2005).
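The protein-leverage logic in this paragraph can be expressed as simple arithmetic: if intake continues until a roughly fixed protein target is met, total energy consumed scales inversely with the protein fraction of the diet. The target value below is a hypothetical illustration, not a figure from the cited studies:

```python
def energy_to_reach_protein_target(protein_target_kcal: float,
                                   protein_fraction: float) -> float:
    """Total energy (kcal) eaten if feeding stops only once a fixed protein
    target is met, on a diet whose energy is the given fraction protein.
    A toy model of protein leverage; inputs are illustrative."""
    return protein_target_kcal / protein_fraction

# Hypothetical daily protein target of 300 kcal (~15% of a 2000 kcal diet):
target = 300.0
for fraction in (0.15, 0.125, 0.10):
    total = energy_to_reach_protein_target(target, fraction)
    print(f"{fraction:.1%} protein diet -> {total:.0f} kcal/day")
```

Under this toy model, diluting dietary protein from 15% to 10% of energy drives a 50% increase in total energy intake (2000 to 3000 kcal/day), which is the leverage effect the hypothesis invokes.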
Direct evidence suggesting that humans are
eating to reach a protein target comes from an
experiment where participants volunteered to
stay in a chalet in the Swiss Alps. They stayed
in the chalet for 6 days in total, during the first
2 days they were free to select their own food for
each meal from a buffet, which varied in macronutrient proportions. The experimenters were
interested only in protein consumption relative
to fat plus carbohydrate consumption. For the
next 2 days, participants were split into groups,
one group received higher protein food (but
lower fat and carbohydrate) than they had self-selected in days 1 and 2, and the other group
lower protein food (but higher in fat and carbohydrate) than they had self-selected in days
1 and 2. The high and low protein food was
designed to seem identical to the participants.
For the final 2 days of the experiment, all participants ate as they chose – the same as on days
1 and 2 of the experiment. The results showed
that participants ate to a protein target at the
expense of carbohydrate and fat, thus, those put
on a higher protein diet (than they self-selected in
days 1 and 2) on days 3 and 4 underconsumed
carbohydrate and fat, whereas the group on the
lower protein diet overconsumed carbohydrate and
fat in an attempt to consume more protein. The
researchers concluded that when humans are
forced to make choices about food, protein intake
is prioritized over carbohydrate and fat (Simpson
et al. 2003). This is consistent with many other
species studied, including other primates,
murids, fruit flies, and Mormon crickets,
suggesting that leveraging protein intake over
carbohydrate and fat is evolutionarily conserved
(Gosby et al. 2014; Simpson et al. 2003; Simpson
and Raubenheimer 2012).

Conclusion
The origins and increased prevalence of obesity
are multifaceted, and evolutionary explanations
are paramount to understanding the obesity
epidemic. The rapid change in our diet in recent
years, combined with a shift to a more sedentary
lifestyle, interacts with our nutritional balancing
act, the drive to consume an optimal quantity of protein, and perhaps leads to overconsumption of
carbohydrate and fats. Although an overabundance of highly processed, sugar- and fat-dense foods undoubtedly plays a role, the rise
in the prevalence of obesity is unlikely to be
caused by positive selection for a thriftier
genotype.



Cross-References
▶ Body Fat Percent and Distribution
▶ Body Reserves and Food Storage
▶ Diet
▶ Food Preferences

References
Atkinson, S. N., Nelson, R. A., & Ramsay, M. A. (1996). Changes in the body composition of fasting polar bears (Ursus maritimus): The effect of relative fatness on protein conservation. Physiological Zoology, 69(2), 304–316.
Ayub, Q., Moutsianas, L., Chen, Y., Panoutsopoulou, K., Colonna, V., Pagani, L., . . . Xue, Y. (2014). Revisiting the thrifty gene hypothesis via 65 loci associated with susceptibility to type 2 diabetes. The American Journal of Human Genetics, 94(2), 176–185.
Buescher, J. L., Musselman, L. P., Wilson, C. A., Lang, T., Keleher, M., Baranski, T. J., & Duncan, J. G. (2013). Evidence for transgenerational metabolic programming in Drosophila. Disease Models & Mechanisms, 6(5), 1123–1132.
Corrales-Carvajal, V. M., Faisal, A. A., & Ribeiro, C. (2016). Internal states drive nutrient homeostasis by modulating exploration-exploitation trade-off. eLife, 5, e19920.
Corrales-Medina, V. F., Valayam, J., Serpa, J. A., Rueda, A. M., & Musher, D. M. (2011). The obesity paradox in community-acquired bacterial pneumonia. International Journal of Infectious Diseases, 15(1), e54–e57.
Dhurandhar, N. V., Bailey, D., & Thomas, D. (2015). Interaction of obesity and infections. Obesity Reviews, 16(12), 1017–1029.
Eaton, S. B., Eaton, S. B., 3rd, Konner, M. J., & Shostak, M. (1996). An evolutionary perspective enhances understanding of human nutritional requirements. The Journal of Nutrition, 126(6), 1732–1740.
Felton, A. M., Felton, A., Raubenheimer, D., Simpson, S. J., Foley, W. J., Wood, J. T., . . . Lindenmayer, D. B. (2009). Protein content of diets dictates the daily energy intake of a free-ranging primate. Behavioral Ecology, 20(4), 685–690.
Finlayson, G. R., White, C. R., Dibben, R., Shimmin, G. A., & Taggart, D. A. (2010). Activity patterns of the southern hairy-nosed wombat (Lasiorhinus latifrons) (Marsupialia: Vombatidae) in the South Australian Murraylands. Australian Mammalogy, 32(1), 39.
GBD 2015 Obesity Collaborators, Afshin, A., Forouzanfar, M. H., Reitsma, M. B., Sur, P., Estep, K., . . . Murray, C. J. L. (2017). Health effects of overweight and obesity in 195 countries over 25 years. The New England Journal of Medicine, 377(1), 13–27.

Gibson, G. (2007). Human evolution: Thrifty genes and the dairy queen. Current Biology, 17(8), R295–R296.
Global BMI Mortality Collaboration, Di Angelantonio, E., Bhupathiraju, S., Wormser, D., Gao, P., Kaptoge, S., . . . Hu, F. (2016). Body-mass index and all-cause mortality: Individual-participant-data meta-analysis of 239 prospective studies in four continents. Lancet, 388(10046), 776–786.
González-Muniesa, P., Mártinez-González, M.-A., Hu, F. B., Després, J.-P., Matsuzawa, Y., Loos, R. J. F., . . . Martinez, J. A. (2017). Obesity. Nature Reviews Disease Primers, 3, 17034.
Gosby, A. K., Conigrave, A. D., Lau, N. S., Iglesias, M. A., Hall, R. M., Jebb, S. A., . . . Simpson, S. J. (2011). Testing protein leverage in lean humans: A randomised controlled experimental study. PLoS One, 6(10), e25929.
Gosby, A. K., Conigrave, A. D., Raubenheimer, D., & Simpson, S. J. (2014). Protein leverage and energy intake. Obesity Reviews, 15(3), 183–191.
Hammiche, F., Laven, J. S. E., Twigt, J. M., Boellaard, W. P. A., Steegers, E. A. P., & Steegers-Theunissen, R. P. (2012). Body mass index and central adiposity are associated with sperm quality in men of subfertile couples. Human Reproduction, 27(8), 2365–2372.
Helgason, A., Pálsson, S., Thorleifsson, G., Grant, S. F. A., Emilsson, V., Gunnarsdottir, S., . . . Stefánsson, K. (2007). Refining the impact of TCF7L2 gene variants on type 2 diabetes and adaptive evolution. Nature Genetics, 39(2), 218–225.
Irwin, M. T., Raharison, J.-L., Raubenheimer, D. R., Chapman, C. A., & Rothman, J. M. (2015). The nutritional geometry of resource scarcity: Effects of lean seasons and habitat disturbance on nutrient intakes and balancing in wild sifakas. PLoS One, 10(6), e0128046.
John, D. (2005). Annual lipid cycles in hibernators: Integration of physiology and behavior. Annual Review of Nutrition, 25(1), 469–497.
Konner, M., & Eaton, S. B. (2010). Paleolithic nutrition. Nutrition in Clinical Practice, 25(6), 594–602.
Körtner, G., & Heldmaier, G. (1995). Body weight cycles and energy balance in the Alpine Marmot (Marmota marmota). Physiological Zoology, 68(1), 149–163.
Lee, K. P., Simpson, S. J., Clissold, F. J., Brooks, R., Ballard, J. W. O., Taylor, P. W., . . . Raubenheimer, D. (2008). Lifespan and reproduction in Drosophila: New insights from nutritional geometry. Proceedings of the National Academy of Sciences of the United States of America, 105(7), 2498–2503.

Martínez Steele, E., Raubenheimer, D., Simpson, S. J., Baraldi, L. G., & Monteiro, C. A. (2018). Ultra-processed foods, protein leverage and energy intake in the USA. Public Health Nutrition, 21(1), 114–124.
NCD Risk Factor Collaboration (NCD-RisC). (2016). Trends in adult body-mass index in 200 countries from 1975 to 2014: A pooled analysis of 1698 population-based measurement studies with 19·2 million participants. Lancet, 387(10026), 1377–1396.
Ng, M., Fleming, T., Robinson, M., Thomson, B., Graetz, N., Margono, C., . . . Gakidou, E. (2014). Global, regional, and national prevalence of overweight and obesity in children and adults during 1980–2013: A systematic analysis for the Global Burden of Disease Study 2013. The Lancet, 384(9945), 766–781.
Pearson-Stuttard, J., Zhou, B., Kontis, V., Bentham, J., Gunter, M. J., & Ezzati, M. (2018). Worldwide burden of cancer attributable to diabetes and high body-mass index: A comparative risk assessment. The Lancet Diabetes & Endocrinology, 6(6), e6–e15.
Piper, M. D. W., Skorupa, D., & Partridge, L. (2005). Diet, metabolism and lifespan in Drosophila. Experimental Gerontology, 40(11), 857–862.
Pontzer, H., Wood, B. M., & Raichlen, D. A. (2018). Hunter-gatherers as models in public health. Obesity Reviews, 19, 24–35.
Portugal, S. J., Green, J. A., & Butler, P. J. (2007). Annual changes in body mass and resting metabolism in captive barnacle geese (Branta leucopsis): The importance of wing moult. Journal of Experimental Biology, 210(8), 1391–1397.
Prentice, A. (2005). Early influences on human energy regulation: Thrifty genotypes and thrifty phenotypes. Physiology & Behavior, 86(5), 640–645.
Raubenheimer, D., & Simpson, S. J. (1997). Integrative models of nutrient balancing: Application to insects and vertebrates. Nutrition Research Reviews, 10(1), 151–179.
Raubenheimer, D., & Simpson, S. J. (2016). Nutritional ecology and human health. Annual Review of Nutrition, 36(1), 603–626.
Raubenheimer, D., Lee, K. P., & Simpson, S. J. (2005). Does Bertrand's rule apply to macronutrients? Proceedings of the Royal Society B: Biological Sciences, 272(1579), 2429–2434.
Simpson, S. J., & Raubenheimer, D. (2012). The nature of nutrition: A unifying framework from animal adaptation to human obesity. Princeton: Princeton University Press.
Simpson, S. J., Batley, R., & Raubenheimer, D. (2003). Geometric analysis of macronutrient intake in humans: The power of protein? Appetite, 41(2), 123–140.
Simpson, S. J., Sibly, R. M., Lee, K. P., Behmer, S. T., & Raubenheimer, D. (2004). Optimal foraging when regulating intake of multiple nutrients. Animal Behaviour, 68(6), 1299–1311.
Solon-Biet, S. M., McMahon, A. C., Ballard, J. W. O., Ruohonen, K., Wu, L. E., Cogger, V. C., . . . Simpson, S. J. (2014). The ratio of macronutrients, not caloric intake, dictates cardiometabolic health, aging, and longevity in ad libitum-fed mice. Cell Metabolism, 19(3), 418–430.
Solon-Biet, S. M., Walters, K. A., Simanainen, U. K., McMahon, A. C., Ruohonen, K., Ballard, J. W. O., . . . Simpson, S. J. (2015). Macronutrient balance, reproductive function, and lifespan in aging mice. Proceedings of the National Academy of Sciences of the United States of America, 112(11), 3481–3486.
Sørensen, A., Mayntz, D., Raubenheimer, D., & Simpson, S. J. (2008). Protein-leverage in mice: The geometry of macronutrient balancing and consequences for fat deposition. Obesity, 16(3), 566–571.
Speakman, J. R. (2006). Thrifty genes for obesity and the metabolic syndrome – Time to call off the search? Diabetes and Vascular Disease Research, 3(1), 7–11.
Swallow, D. M. (2003). Genetics of lactase persistence and lactose intolerance. Annual Review of Genetics, 37(1), 197–219.
Tishkoff, S. A., Reed, F. A., Ranciaro, A., Voight, B. F., Babbitt, C. C., Silverman, J. S., . . . Deloukas, P. (2007). Convergent adaptation of human lactase persistence in Africa and Europe. Nature Genetics, 39(1), 31–40.
van der Steeg, J. W., Steures, P., Eijkemans, M. J. C., Habbema, J. D. F., Hompes, P. G. A., Burggraaff, J. M., . . . Mol, B. W. J. (2008). Obesity affects spontaneous pregnancy chances in subfertile, ovulatory women. Human Reproduction, 23(2), 324–328.
Waldbauer, G. P., & Friedman, S. (1991). Self-selection of optimal diets by insects. Annual Review of Entomology, 36(1), 43–63.
Walker, S. J., Goldschmidt, D., & Ribeiro, C. (2017). Craving for the future: The brain as a nutritional prediction system. Current Opinion in Insect Science, 23, 96–103.
Wang, G., & Speakman, J. R. (2016). Analysis of positive selection at single nucleotide polymorphisms associated with body mass index does not support the "Thrifty Gene" hypothesis. Cell Metabolism, 24(4), 531–541.
Wells, J. C. K. (2012). The evolution of human adiposity and obesity: Where did it all go wrong? Disease Models & Mechanisms, 5(5), 595–607.
Whiteman, D. C., Webb, P. M., Green, A. C., Neale, R. E., Fritschi, L., Bain, C. J., . . . Carey, R. N. (2015). Cancers in Australia in 2010 attributable to modifiable factors: Summary and conclusions. Australian and New Zealand Journal of Public Health, 39(5), 477–484.
Wilson, L. F., Antonsson, A., Green, A. C., Jordan, S. J., Kendall, B. J., Nagle, C. M., . . . Whiteman, D. C. (2018). How many cancer cases and deaths are potentially preventable? Estimates for Australia in 2013. International Journal of Cancer, 142(4), 691–701.
Zain, M. M., & Norman, R. J. (2008). Impact of obesity on female fertility and fertility treatment. Women's Health, 4(2), 183–194.

Object Choice Task
▶ Understanding Human Points

Object Imprinting
▶ Imprinting

Object Manipulation
▶ Bird Tool Use

Object Perception
▶ Face and Object Recognition

Object Permanence
Chris Fields
Caunes Minervois, France



Definition
The apparent maintenance of object identity over time, especially during periods of non-observation.

Introduction
Typically-developing human beings and at least
some other animals tend to regard the objects
around them, including other organisms, landscape features, and artifacts, as maintaining their
identities – remaining the “same thing” – over
time whether they are observed continuously or
not. Objects are, in other words, regarded as “permanent” or “persistent” by default. Both experimental and theoretical practice in psychology
largely adopt the “naïve realist” assumption that
objects having the properties they are typically
perceived to have are ontologically real, i.e.,
they in fact exist in an observation-independent
way and in fact maintain their identities over time.
Given this assumption, object permanence is the
recognition or understanding of the continuous,
identity-preserving existence of objects (see Hoffman et al. (2015) for a critique of naïve realism).
As object permanence must in any case be
inferred from sensory input and memory, the ontological question of whether objects are in fact
permanent can be set aside, in practice, in favor
of the more properly psychological questions of
how object permanence is inferred, how the ability to infer object permanence develops, and how
and why an ability to infer object permanence
evolved within animal lineages exhibiting complex cognition. It bears emphasis that “inferences”
of object permanence are typically, but not always
(e.g., Eichenbaum et al. 2007), automatic: subjects typically “perceive sameness” instead of
having to consciously infer it.

Synonyms

Continuity; Identity; Persistence; Recognition;
Selfhood; Time

Short- Versus Long-Term Object
Permanence

The conditions under which human infants, children, and adults infer object permanence during
short (seconds) visual displays have been intensively studied, primarily using visual occluded-motion paradigms (reviewed by Flombaum et al.
2008; Fields 2011). Infants infer trajectory continuity and hence object permanence for a specific
range of trajectory shapes and occlusion times
beginning at about 3 months old; by 2 years old,
infants employ essentially the same trajectory-shape and occlusion-time criteria used by adults.
Smooth trajectories and relatively short occlusion
times robustly indicate object sameness and hence
object permanence; apparent “absorption” of an
object by an occluder followed by "emission" of a
visually indistinguishable object from the
occluder after a long delay or from a position
unreachable by a smooth trajectory suggests
non-sameness and hence a violation of object
permanence. Trajectory-based object permanence
is impervious to a wide range of feature changes,
e.g., of size, shape, or color of the moving object,
in both infants and adults.
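The occlusion-time and trajectory criteria described above can be caricatured as a simple decision rule; the function name and thresholds below are invented for illustration and are not drawn from the cited studies:

```python
def same_object(occlusion_s: float, displacement_m: float,
                max_occlusion_s: float = 1.0,
                max_speed_m_s: float = 5.0) -> bool:
    """Toy trajectory-based permanence rule: the reappearing object is judged
    to be the same one only if the occlusion was short and the reappearance
    point is reachable by a smooth, speed-bounded trajectory.  Feature
    changes (size, shape, color) are deliberately ignored, matching the
    feature-insensitivity described in the text."""
    if occlusion_s > max_occlusion_s:
        return False  # long delay suggests non-sameness
    # Reachable by a smooth trajectory at plausible speed?
    return displacement_m <= max_speed_m_s * occlusion_s

print(same_object(0.5, 1.0))   # short occlusion, reachable -> True
print(same_object(0.5, 10.0))  # unreachable by a smooth path -> False
```

Real infant criteria are graded rather than hard thresholds, but the sketch makes explicit that the inference uses spatiotemporal continuity, not surface features.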
While philosophical conundrums such as the
“Ship of Theseus” have been discussed since the
pre-Socratic period, the inference of object permanence over longer periods – tens of seconds to
decades – of non-observation has received relatively little direct experimental investigation. On
the shorter end of this temporal range (tens of
seconds to minutes), numerous studies have demonstrated that infants are surprised, as adults are,
by object permanence violations as early as
2.5 months old (reviewed by Baillargeon 2008).
For example, surprise is elicited when an object is
placed behind an opaque screen, which after a
short delay is lifted to reveal that the object is no
longer there. As the period of non-observation
increases, however, experimental manipulations
become progressively more difficult, and studies
tend to be framed in terms of “object recognition”
instead of “object permanence.” Object recognition in the sense used here requires object permanence; i.e., it requires an inference of continuing
object identity. If for any reason object permanence cannot be inferred, the perceived object is
“seen” not as the same thing observed before but
rather as novel. Such failures of recognition occur,
for example, in severe anterograde amnesia, e.g.,
Korsakoff syndrome (reviewed by Fama et al.
2012), demonstrating that short-term object permanence may be preserved but long-term object
permanence lost. The relative contributions of
semantic memory, episodic memory, and causal
reasoning to object recognition following moderate to long periods of non-observation (hours to
decades) remain to be determined and may vary
widely with object type and context (reviewed by
Eichenbaum et al. 2007; Scholl 2007; Fields
2012).
Insight into the difficulty of characterizing
object permanence inferences across longer gaps
in observation comes from fundamental computer
science. Recognizing complex objects requires
mereological (part-whole) reasoning. In most situations, mereological relations cannot be defined
precisely; they are only “rough” or heuristic
(Düntsch and Gediga 2000). At low levels of
granularity, "parts" become generic and interchangeable, rendering highly similar assemblages effectively indiscernible. At higher levels of
granularity, parts may change their properties
over time and hence require part-level judgments
of “sameness.” In this setting, additional information, e.g., causal or historical information, must be
added to track individual object identity. Such
information is strictly unavailable during periods
of non-observation, so object identity becomes
both hypothetical and dependent on the choice
of heuristics, e.g., the choice of features or causal
constraints to regard as “essential” for
maintaining object identity.
The tools and techniques of developmental
robotics (reviewed by Baldassarre and Mirolli
2013; Cangelosi and Schlesinger 2015) provide
new empirical approaches to object permanence.
These range from replicating classic experimental paradigms such as occluded motion using
fully specified visual-processing and inference
architectures to investigations of self-motivated
environmental exploration, multiple object
tracking, sensory-motor coordination, and language learning. Such studies make explicit not
only the memory resources and inferences
needed to establish object permanence but also
the role of assumed or inferred object permanence as an enabler of complex cognition and
behavior.
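As a minimal sketch of the kind of inference such architectures make explicit, consider a tracker that predicts an occluded object's position with a constant-velocity model and re-identifies the object on reappearance only if it emerges near the prediction. The motion model, the tolerance, and all numbers are illustrative assumptions, not features of any particular robotic system:

```python
# Minimal object-permanence sketch: predict an occluded object's position
# with a constant-velocity model, then re-identify it on reappearance.
# The motion model and tolerance are illustrative assumptions.

def track_through_occlusion(last_pos, velocity, steps_occluded, reappear_pos, tol=1.0):
    """Return True if the reappearing object matches the predicted position."""
    predicted = (last_pos[0] + velocity[0] * steps_occluded,
                 last_pos[1] + velocity[1] * steps_occluded)
    distance = ((predicted[0] - reappear_pos[0]) ** 2 +
                (predicted[1] - reappear_pos[1]) ** 2) ** 0.5
    # Within tolerance: infer "the same object"; otherwise it is "seen" as novel.
    return distance <= tol

# A ball moving right at 1 unit/step disappears behind a screen at (2, 0)
# and reappears 3 steps later at (5, 0): identity is inferred.
print(track_through_occlusion((2, 0), (1, 0), 3, (5.0, 0.0)))   # True

# Something reappearing far from the prediction is treated as a novel object.
print(track_through_occlusion((2, 0), (1, 0), 3, (9.0, 4.0)))   # False
```

Writing the inference out this way makes explicit both the memory resources (last position, velocity, elapsed time) and the assumed causal constraint (smooth motion) on which the object-permanence judgment rests.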



Is Object Permanence Innate?
The essential role of object permanence in human
cognition has long led to it being considered
innate (Baillargeon 2008). In functional terms,
object permanence being innate means that the
human neurocognitive system already has the
organization needed to express behavioral
responses indicating an inference of object permanence at or soon after birth. Recently developed
protocols for functional imaging of neonates,
including preterm neonates, allow the functional
connectivity of the long-range networks
implementing inferences of visual object permanence (e.g., the visuo-motor, categorization, and
salience networks) to be tracked in parallel with
emerging infant abilities (e.g., Ball et al. 2014;
Gao et al. 2015).

While systematic pathological disruption of
object permanence in infants has not been demonstrated, considerable evidence indicates that the
perceptual processing pathways that implement
object permanence are affected in autism spectrum disorders (reviewed by Gomot and Wicker
2012; Fields and Glazebrook 2017). Hundreds of
genes, many of them known to act prenatally to
influence neural growth and connectivity, have
been associated with autism (reviewed by
Geschwind and Flint 2015); these potentially provide a molecular avenue for investigating the
emergence of object permanence both developmentally and evolutionarily.

A Special Case: The Self
A particular instance of object permanence is crucial for development of a coherent psychology:
the experienced self and its associated body. The
experienced self is fixed as a reference point for
other objects and object-directed actions early in
infancy and may be innate (reviewed by Rochat
2012). How the experienced self is to be defined
conceptually, how it is implemented, and how it is
updated to track physical and psychological
changes remain significant open questions (e.g.,
Metzinger 2011; Klein 2014). Evidence that the
default-mode network, which is implicated in representation of the self (reviewed by Buckner
et al. 2008), is already largely functional in early
infancy (Gao et al. 2015) may shed light on the
development and assignment of permanence to the experienced self.

Object Permanence and Time
The idea that an object can change its state cannot
be formulated without an assumption of object
permanence. Conversely, the idea of object permanence cannot be formulated without a sense of
time and hence at least one object – a clock – that
changes state. An internal, not necessarily conscious, sense of duration sufficient at least to distinguish “now” from “then” is, therefore, required
for object permanence. The human implementation of such an internal “clock” is beginning to be
mapped out (reviewed by Merchant et al. 2013;
Mathews and Meck 2014); however, a mechanistic connection between time perception and object
permanence has yet to be made.

Phylogenetic Distribution and Evolutionary History

Food caching, wayfinding, tool construction and
use, long-term pair bonding, and negotiated social
relations all require recognizing an object or a
location as “the same” after a period of nonperception and hence suggest object permanence.
Comparative studies of object permanence have,
accordingly, focused on birds (many species,
reviewed by Güntürkün and Bugnyar 2016),
social carnivores (primarily domestic dogs,
reviewed by Zentall and Pattison 2016), cetaceans
(primarily dolphins, e.g., Johnson et al. 2015), and
great apes (all species, e.g., Karg et al. 2014). The
experimental tasks employed typically replicate in
a species-appropriate way those used with human infants or children. While both experimental
designs (e.g., Jaakkola 2014) and inferences
from small sample sizes (e.g., Thornton and
Lukas 2012) in this literature have been criticized,
there is broad consensus that animals exhibiting
complex cognition rely on object permanence.



The deep divergence between birds and mammals (approx. 300 million years; Güntürkün and
Bugnyar 2016) indicates either a deep evolutionary origin of object permanence or significant
convergent evolution. The question of whether
object recognition and wayfinding abilities in
rodents, reptiles, or even social insects can be
regarded as evolutionary precursors or rudimentary forms of object permanence therefore bears
consideration. Conclusive demonstration of a lack of object permanence in a mammalian system, e.g., of a systematic inability to recognize “the same individual” after some period of non-observation, would also
be valuable in clarifying the evolutionary history
of this capability.

Object Permanence and the Social Brain
The Social Brain hypothesis is a pillar of human
evolutionary psychology (reviewed by Dunbar
and Shultz 2007; Adolphs 2009). The complex
social relations envisioned as the primary drivers
of human cognitive, emotional, and behavioral
evolution within the Social Brain framework
inevitably involve abilities to reliably identify particular individual humans over long periods
of time and hence moderate to long periods of
non-observation. Such abilities require an inference of continuing existence with individual identity maintenance; hence they require object
permanence.

Evolutionary Developmental Perspective
Object permanence allows infants and children to
exchange fitness-enhancing attachment signals
with their mothers, other family members, and
nonfamily caregivers. It can, therefore, be
regarded as an ontogenetic adaptation
(Bjorklund and Pellegrini 2002). It continues,
however, to enable qualitatively new capabilities
across the lifespan, e.g., the ability to regard
abstracta such as religions or social organizations
as maintaining their identities over time. As the selective pressures acting on these later-developing abilities may be different from those
acting during infancy or childhood, the robust
childhood expansion of object permanence capabilities away from family members to “objects” in
general may be regarded as a deferred adaptation.
From a mechanistic perspective, the relevant
questions are how the mechanisms enabling
object permanence in infancy develop pre- and
perinatally, how they later expand as the infant
and then the child interacts with her environment,
and how this extended developmental system evolved within the animal and particularly primate lineage. The direct mechanistic coupling of evolution and development characteristic of “evo-devo” biology (reviewed by Müller 2007; Carroll
2008) has yet to be achieved within psychology.
The centrality of object permanence to complex
cognition, together with its broad phylogenetic
distribution, suggests that it is an ideal test case
for such an endeavor. A deep understanding of
how the pre- and postnatal developmental mechanisms that support object permanence have
evolved in both avian and mammalian lineages
may provide new insights into the generation of
novel cognitive and emotional variants and the
selective pressures that act on them over evolutionary time scales.

Conclusion
With every new demonstration of quantum superposition or entanglement at macroscopic scales
(e.g., Wiseman 2015), the likelihood that
observation-independent, time-persistent objects
can even be defined at the level of fundamental
physics decreases. If the objects of human perception cannot be defined within physics, an understanding of what they are and what it means for
them to maintain their individual identities over
time must be developed within psychology. Hoffman et al. (2015) have suggested that “objects”
are in fact bundles of organism-specific fitness
consequences. If this is the case, object permanence is the inference that fitness consequences
that are correlated at one time will remain correlated at subsequent times. Sophisticated neurocognitive mechanisms are required to make
such inferences. Developmental robotics, functional neuroimaging of infants, and evolutionary developmental psychology provide new conceptual and empirical tools that can be combined with
the traditional methods of experimental psychology to identify these mechanisms and to characterize their development and evolution.

Cross-References
▶ Attention and Memory
▶ Autism
▶ Evolutionary Foundations of the Attachment
System and Its Functions
▶ Face and Object Recognition
▶ Human Visual Neurobiology
▶ Kin Recognition
▶ Ontogenetic Adaptations

References
Adolphs, R. (2009). The social brain: Neural basis for
social knowledge. Annual Review of Psychology, 60,
693–716.
Baillargeon, R. (2008). Innate ideas revisited: For a principle of persistence in infants’ physical reasoning. Perspectives on Psychological Science, 3, 2–13.
Baldassarre, G., & Mirolli, M. (Eds.). (2013). Intrinsically
motivated learning in natural and artificial systems.
Berlin: Springer.
Ball, G., Aljabar, P., Zebari, S., Tusor, N., Arichi, T., Merchant, N., Robinson, E. C., Ogundipe, E., Rueckert, D., Edwards, A. D., & Counsell, S. J. (2014). Rich-club organization of the newborn human brain. Proceedings of the National Academy of Sciences USA, 111, 7456–7461.
Bjorklund, D. F., & Pellegrini, A. D. (2002). The origins of human nature: Evolutionary developmental psychology. Washington, DC: American Psychological Association.

Buckner, R. L., Andrews-Hanna, J. R., & Schacter, D. L.
(2008). The brain’s default network: Anatomy, function, and relevance to disease. Annals of the New York
Academy of Sciences, 1124, 1–38.
Cangelosi, A., & Schlesinger, M. (2015). Developmental
robotics: From babies to robots. Cambridge: MIT
Press.
Carroll, S. B. (2008). Evo-devo and an expanding evolutionary synthesis: A genetic theory of morphological
evolution. Cell, 134, 25–36.

Dunbar, R. I. M., & Shultz, S. (2007). Evolution in the social brain. Science, 317, 1344–1347.
Düntsch, I., & Gediga, G. (2000). Rough set data analysis. Encyclopedia of Computer Science and Technology, 43, 281–301.
Eichenbaum, H., Yonelinas, A. P., & Ranganath, C. (2007). The medial temporal lobe and recognition memory. Annual Review of Neuroscience, 30, 123–152.
Fama, R., Pitel, A.-L., & Sullivan, E. V. (2012). Anterograde episodic memory in Korsakoff syndrome. Neuropsychology Review, 22, 93–104.
Fields, C. (2011). Trajectory recognition as the basis for
object individuation: A functional model of object file
instantiation and object-token encoding. Frontiers in
Psychology: Perception Science, 2, 49.
Fields, C. (2012). The very same thing: Extending the
object token concept to incorporate causal constraints
on individual identity. Advances in Cognitive Psychology, 8, 234–247.
Fields, C., & Glazebrook, J. F. (2017). Disrupted development and imbalanced function in the global neuronal
workspace: A positive-feedback mechanism for the
emergence of autism in early infancy. Cognitive Neurodynamics, 11, 1–21.
Flombaum, J. I., Scholl, B. J., & Santos, L. R. (2008).
Spatiotemporal priority as a fundamental principle of
object persistence. In B. Hood & L. Santos (Eds.), The
origins of object knowledge (pp. 135–164). New York:
Oxford University Press.
Gao, W., Alcauter, S., Smith, J. K., Gilmore, J. H., & Lin,
W. (2015). Development of human brain cortical network architecture during infancy. Brain Structure &
Function, 220, 1173–1186.
Geschwind, D. H., & Flint, J. (2015). Genetics and
genomics of psychiatric disease. Science, 349,
1489–1494.
Gomot, M., & Wicker, B. (2012). A challenging,
unpredictable world for people with autism spectrum
disorder. International Journal of Psychophysiology,
83, 240–247.
Güntürkün, O., & Bugnyar, T. (2016). Cognition without
cortex. Trends in Cognitive Sciences, 20, 291–303.
Hoffman, D. D., Singh, M., & Prakash, C. (2015). The
interface theory of perception. Psychonomic Bulletin &
Review, 22, 1480–1506.
Jaakkola, K. (2014). Do animals understand invisible displacement? A critical review. Journal of Comparative
Psychology, 128, 225–239.
Johnson, C. M., Sullivan, J., Buck, C. L., Trexel, J., &
Scarpuzzi, M. (2015). Visible and invisible displacement with dynamic visual occlusion in bottlenose
dolphins (Tursiops spp). Animal Cognition, 18,
179–193.
Karg, K., Schmelz, M., Call, J., & Tomasello, M. (2014).
All great ape species (Gorilla gorilla, Pan paniscus,
Pan troglodytes, Pongo abelii) and two-and-a-half-year-old children discriminate appearance from reality. Journal of Comparative Psychology, 128, 431–439.



Klein, S. B. (2014). Sameness and the self: Philosophical
and psychological considerations. Frontiers in Psychology: Perception Science, 5, 29.
Mathews, W. J., & Meck, W. H. (2014). Time perception:
The bad news and the good. Wiley Interdisciplinary
Reviews. Cognitive Science, 5, 429–446.
Merchant, H., Harrington, D. L., & Meck, W. H.
(2013). Neural basis of the perception and estimation of time. Annual Review of Neuroscience, 36,
313–336.
Metzinger, T. (2011). The no-self alternative. In
S. Gallagher (Ed.), The Oxford handbook of the self
(pp. 287–305). Oxford: Oxford University Press.
Müller, G. B. (2007). Evo-devo: Extending the evolutionary synthesis. Nature Reviews Genetics, 8,
943–949.
Rochat, P. (2012). Primordial sense of embodied self-unity.
In V. Slaughter & C. A. Brownell (Eds.), Early development of body representations (pp. 3–18). Cambridge:
Cambridge University Press.
Scholl, B. J. (2007). Object persistence in philosophy and
psychology. Mind & Language, 22, 563–591.
Thornton, A., & Lukas, D. (2012). Individual variation in cognitive performance: Developmental and
evolutionary perspectives. Philosophical Transactions of the Royal Society of London, 367,
2773–2783.
Wiseman, H. (2015). Quantum physics: Death by experiment for local realism. Nature, 526, 649–650.
Zentall, T. R., & Pattison, K. F. (2016). Now you see it, now you don't: Object permanence in dogs. Current Directions in Psychological Science, 25, 357–362.

Object Permanence in Monkeys and Apes
▶ In Nonhuman Primates

Object Play
Niki Christodoulou and Xenia Anastassiou-Hadjicharalambous

Synonyms
Construction play; Exploratory play; Playful
behavior; Tool play


Definition
Object play involves playful activities with toys or other objects, rather than social or interactive play with peers.

Introduction
Play is certainly an important component in the course of children’s development; in fact, it takes up an appreciable portion of their time budgets, considering that all children engage in play activities throughout their childhood. Many definitions have been put forward in attempts to define what play is, and many different types of play are distinguished in the literature. Some have received more examination than others. For example, there is a great amount of literature on children’s pretend play, while research on rough-and-tumble play is lacking (Smith et al. 2015). In this entry, we will consider the evolutionary biological perspective on play development, address what playful behavior entails, and finally focus on object play and tool use during infancy and childhood.

An Evolutionary Perspective
According to evolutionary biology, the desire to
play in specific ways and at specific points in life
is shared among a variety of mammals
(LaFreniere 2011). Evolutionary biologists have long been intrigued by the origins and functions of play, a phenomenon in young mammals that is complex not only to observe but also to define (LaFreniere 2011; Pellegrini and Smith 1998). In the vocabulary of evolutionary biology, the term “function” with regard to play refers to a behavior having typically added to the survival or reproductive success of an individual (its genes) over many succeeding generations (Pellegrini and Smith 1998b). Functions can also be defined in the context of beneficial outcomes during the life cycle of the individual player (Pellegrini and Smith 1998b).



Evolutionary biologists interested mainly in the
study of animal behavior (hereafter ethologists) generally consider play as having been acquired
by our species through the process of natural
selection, in order to provide deferred benefits
to the individual (LaFreniere 2011). In other
words, through play a child develops and practices skills crucial to survival and reproduction
in adulthood (Smith 2009). Yet, play may also
produce immediate benefits to the young individual, and modern ethologists acknowledge
that natural selection acts throughout the life cycle,
a view now called life history theory (LaFreniere
2011).
Life history theory is a widely accepted analytical framework, used mainly in biology
and evolutionary psychology since the 1970s
(LaFreniere 2011). It considers an organism as
an ever-changing life cycle – not as a static
adult – suggesting that certain species-typical
characteristics evolve to favor somatic and
reproductive efforts throughout life span
(LaFreniere 2011). Accordingly, Bogin (1999)
postulated that because time, energy, and resources are finite, individuals must make choices regarding their behavioral priorities and allocation of resources with respect to developmental periods and the life goals suitable to those periods. For example, despite its clear costs, play is prioritized in all social primates during the early juvenile period, with social play taking up most of the time not spent eating and sleeping (LaFreniere 2011). This fact
is considered to be crucial as the main basis for
concluding an adaptive function of play, because
natural selection favors only behaviors whose

benefits clearly outweigh the associated costs
(LaFreniere 2011). Play can be costly in terms
of time and energy devoted to it as it diminishes
the time, effort, and energy spent on other
activities.
Despite such costs, there is a natural tendency of young mammals to engage in play
as long as and as often as ecological constraints and opportunities afford (LaFreniere
2011); it is in fact indispensable to the development and good functioning of a healthy
adult.


Characteristics of Playful Behavior
Although many researchers have attempted to
define what we call “playful behavior,” this is
not an easy task. Animal ethologist Robert
Fagen (1974) proposed two approaches, which
can be used to define play: the functional
approach and the structural approach. The functional approach suggests that play does not have a clear external goal or an obvious end in itself, nor clear immediate benefits to the individual
(Smith et al. 2015). In fact, a “functional” way of
perceiving play is suggested – play is performed
rather for its own sake and for enjoyment instead
of for any other external purpose (Smith et al.
2015). Therefore, if an external goal exists such
as a need to seek comfort or attention, then the
behavior cannot be considered as play (Smith
et al. 2015). However, it is important to highlight
that, although playful behaviors do not generate any clear immediate benefits for the young individual, many theorists do believe that children indeed benefit from playing; there is an ongoing controversy as
to what the benefits of play exactly are (Smith
et al. 2015).
The structural approach is primarily
concerned with behaviors present only during
play or the way these behaviors are organized
during playful activities (Smith et al. 2015).
These behaviors are usually called “play signals”
and could, for example, take the form of laughter or the “open-mouth play face” which both
signal play (Smith et al. 2015). However, it is
arguable that not all play is characterized by
play signals. In fact, the structural approach
goes a step further and suggests that for behaviors to be considered as playful, they need to be
“repeated,” “fragmented,” “exaggerated,” or
“reordered” (Smith et al. 2015). Thus, a child who is just running up a slope may not be playing, but if he or she runs up and slides
down the slope several times, which indicates repetition; runs just halfway up, which shows fragmentation; takes unusually large or small steps or jumps, which suggests exaggeration; or crawls up and then runs down, which indicates reordering, then we could possibly agree that
this behavior is playful (Smith et al. 2015).


The two approaches, although logically different, could be considered to be theoretically in
parallel and complementary to each other; after
all, the child running up and down the slope has no
clear, immediate goal other than enjoyment
(Smith et al. 2015).
Finally, another approach suggests that play
or playful behavior can be identified through a
number of different criteria but always in conjunction with the two previous approaches
(Smith et al. 2015). No single criterion is adequate to define playful behavior, but the more criteria are present, the higher the agreement will be that a behavior is play (Smith et al. 2015).
Based on this premise, Krasnor and Pepler
(1980) proposed a model, which suggests that
playful behaviors are characterized by a form of
“flexibility,” “positive affect,” “nonliterality,”
and “intrinsic motivation.” Flexibility refers to
the form and content of play where objects are
being put in different combinations and roles are
being performed in new ways – these are the
structural characteristics of play (Smith and
Pellegrini 2013). Positive affect refers to visible
signs of enjoyment, such as when children smile and laugh while playing (Smith
and Pellegrini 2013). Nonliterality deals with
elements of “pretend” during play (Krasnor and
Pepler 1980) such as acting out hypothetical
scenarios. Lastly, intrinsic motivation refers to the fact that such behaviors are conducted for their own sake, with participants being mostly interested in the behaviors themselves (i.e., the “means”) rather than the outcomes (i.e., the “ends”) of the behavior; the process is more important than any goal (Pellegrini et al. 2007).
The play criterion approach does not aim to
produce an unequivocal definition of playful
behavior. It does, however, identify a continuum
of play, that is, from nonplayful to playful behavior, as well as how different theorists agree on
what to call playful behavior (Smith et al. 2015).
As discussed so far, the main “play” criteria for young children are flexibility, enjoyment, pretense, and the absence of a specific external goal.
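The graded, criterion-counting logic of this approach can be sketched in a few lines. The criteria names follow Krasnor and Pepler's model as described above; the scoring scheme itself and the example behaviors are illustrative assumptions, not a validated measure:

```python
# Illustrative sketch of the play-criterion approach: the more criteria a
# behavior satisfies, the higher the agreement that it is "play."
# The scoring scheme is an assumption for illustration only.

CRITERIA = ["flexibility", "positive_affect", "nonliterality", "intrinsic_motivation"]

def play_score(observed):
    """Count how many of the play criteria an observed behavior satisfies."""
    return sum(1 for c in CRITERIA if observed.get(c, False))

# A child pretending a block is a phone, smiling, with no external goal:
pretend = {"flexibility": True, "positive_affect": True,
           "nonliterality": True, "intrinsic_motivation": True}

# A child reluctantly stacking blocks to earn a reward:
chore = {"flexibility": False, "positive_affect": False,
         "nonliterality": False, "intrinsic_motivation": False}

print(play_score(pretend))  # 4 -> high agreement that this is play
print(play_score(chore))    # 0 -> little agreement that this is play
```

The point of the sketch is that the output is a position on a continuum from nonplayful to playful behavior, not a binary classification.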


Play and Exploration
The play criteria, as discussed above, distinguish
“play” from “exploration.” Play and exploration
were often classified together in earlier writings
perhaps because neither of them is goal-directed or guided through reinforcement
(Smith et al. 2015). Yet, it is also true that for
young infants, during the sensorimotor stage of
their development (see ▶ Sensorimotor Play), the
difference between exploration and play is harder
to make, as, for very young children, all objects
are unusual (Smith et al. 2015). Once children
enter their preschool years, however, the difference is easier to make (Smith et al. 2015). This
was illustrated by an experiment in which Corinne Hutt (1966) created a unique toy, specifically a box that children could sit on, with a lever that could make the sound of a buzzer. Young children around the age of 3–5 years were rather thoughtful when the novel toy was introduced to them, touching it, feeling it, and trying out the lever; they were in fact trying to figure out what the novel object could do, in other words “exploring.” Soon after, this changed, as the child would typically relax, sit on the object, and play frequently with the lever, which is perceived as more playful behavior. Based on these observations, Hutt proposed that children typically progress from thorough exploration of objects to more playful activities.
In addition, exploration in relation to play is
characterized by fast heart rate, reduced distractibility, and flat affect (Pellegrini 2016). On the
other hand, playful behavior is accompanied by a low heart rate, a more relaxed state, a high level of distractibility, and positive affect (Hutt
1966). Exploration was identified as relatively
thoughtful and paying attention to details, typically asking, “what does this object do?” – while
play was characterized by a variety of behaviors
typically asking, “what can I do with this object?”
as well as being relaxed while using the object.
Further, exploration usually predates different
forms of object use and play in humans (Belsky and Most 1981). Influential work from Belsky and Most (1981) showed that toy exploration was the main activity of children aged 7.5–10.5 months, with no signs of playful behavior, while from around 9 to 10.5 months of age, children identified objects as they manipulated them. At 12 months, object play came into view,
along with exploration and naming of objects.
Indeed, all types of play follow exploration – animal behaviorists suggest that children’s play does not only involve objects but also ranges along social and locomotor dimensions
(Burghardt 2005). The latter two are more closely interrelated; for example, play-fighting or
rough-and-tumble play has both social and physical dimensions (Pellegrini and Smith 1998a).
Most child developmental research, however,
has mainly focused on object play dimensions
(Smith et al. 2015).

Play with Objects
Jean Piaget’s (1952) theory of cognitive development suggests that young children’s perception
and understanding of the surrounding world
depend on their motor development – a fundamental skill necessary for the young child to associate visual, tactile, and motor representations of
objects. Specifically, the infant has to understand
that objects exist even when they cannot be seen,
touched, heard, or sensed in any other way – what
we call “object permanence” – and this understanding comes through touching and handling
objects in different ways (Siegler et al. 2017).
Piaget gave a comprehensive description of the
varied ways in which children make use of and
manipulate objects, including play (Smith et al.
2015). Although initially this is best expressed as
exploratory, as the child progresses through the
sensorimotor stage, which usually lasts from birth
until 2 years of age, exploratory behaviors are
repeated and thus perhaps enjoyed – two criteria

for playful behavior to be present (Smith et al.
2015). These behaviors also become more flexible, such that some of them would easily be
characterized as playful (Smith et al. 2015). Piaget suggested that this behavior could also be called
“practice play,” in terms of manipulating and
bumping objects together and laughing (Smith
et al. 2015).
As Piaget’s theory suggests, young children
during their first 2 years of life are primarily
focused on the activities of their own body
(Siegler et al. 2017). Once their attention shifts
from basic body reflexes to the events of the
surrounding world, the stage is set for the appearance of object play (Hughes 2010). Except from
the infant’s interest in the world around them,
however, there is another requirement for object
play to occur; the motor skills are needed to allow
the infant to grasp and handle play objects
(Hughes 2010). As mentioned above, Piaget
suggested that during the first 2 years of life,
children develop their motor skills, and this is
when children start engaging into play with
objects.
Object play refers to playful behavior involving the use of different objects such as building
blocks, jigsaw puzzles, cars, dolls, etc. (Smith and
Pellegrini 2013). For instance, when babies
engage in play with objects, they usually mouth
objects and then drop them, while toddlers manipulate the objects, such as assembling blocks, or sometimes engage in pretend play,
such as feeding a doll (Smith and Pellegrini
2013). In fact, when children are pretending
using objects, at first, they imitate someone
else’s use of those objects (Pellegrini 2016).
Over time, young individuals learn how to use
other more abstract objects to portray other
objects (Pellegrini 2016). Accordingly, object
play is also evident when children use objects in
varied ways (Pellegrini and Hou 2011) such as
using a pencil to represent a hammer. Play with
objects gives the opportunity to children to create
new combinations of actions as well as to advance
their problem-solving skills (Smith and Pellegrini
2013). Indeed, of all the possible uses of objects,
play with objects is most highly akin to creativity
(Pellegrini and Hou 2011).
In modern societies, object play typically
involves toys which are made to favor children’s play by being created based on mass media prototypes (Smith 2009). There is a huge production
not only of simple toys and objects such as building blocks, jigsaw puzzles, cars, dolls, and toy
animals but also of toys used for pretending and
fantasy activities, such as farm sets, castles, trains,
action figures, and action objects based on either older but ongoing or current TV series or films
such as Star Wars (Smith 2009).
As mentioned earlier, before babies, toddlers,
and children engage in object play, they must
explore the object first (Hutt 1966). In this initial
exploration, young individuals extract characteristics of and uses for novel objects, and individuals then use this information as a basis for play
bouts (Pellegrini et al. 2007). To illustrate, imagine a child entering the nursery school for the
first time, being introduced to a completely new
setting (Pellegrini et al. 2007). The child will
allow substantial amount of time during the very
first few weeks of their experience, to explore the
physical and social environment of the nursery
school – being a passive onlooker (Pellegrini
and Goldsmith 2003). Exploration is used not
only to become familiar with a new setting, but
it is also needed to identify any potentially dangerous aspects of an environment and discover
how to stay away from them (Spinka et al.
2001). Once a child determines that the novel
environment is safe, then play can occur
(Pellegrini and Goldsmith 2003).
In addition, establishing an accurate time frame
for object play during childhood is challenging,
considering that object play is usually conflated
with other forms of object use (Pellegrini 2016).
In a study by Belsky and Most (1981) where object play was clearly differentiated from other forms of object use, it was concluded that object play begins around 1 year of age. Further studies concluded that by
3–5 years of age in American and UK preschool
settings, object play increases and then declines

(McGrew 1972; Pellegrini and Gustafson 2005;
Pellegrini and Hou 2011).
Object play becomes increasingly social as the
child grows (Rubin et al. 1983). To illustrate, Rubin et al. (1983) concluded that less than 2% of preschool and kindergarten children’s play is solitary, while 12% and 28% of preschoolers’ and kindergarteners’ play, respectively, is social. Therefore, not only does object play
expand throughout childhood, but it also becomes
progressively social. Finally, it should be noted
that any benefits of object play should be compared against those of instruction, keeping in
mind different external influences such as the
age of the child, the type of task, and whether
learning is for particular skills or a more general developmental acquisition (Smith and Pellegrini
2013).

Gender Differences in Object Play
Evidence about gender differences in object play, as well as in its overall frequency between young boys and girls during the sensorimotor period, is lacking (Belsky and Most 1981).
However, studies have suggested that the nature
and choice of toys do vary, especially after the
sensorimotor stage. For instance, Smith and
Daglish (1977) found that at 1 and 2 years of
age, boys engaged more in active play and forbidden play, such as playing with wall sockets, pulling curtains, or climbing on furniture, and used more transportation toys. Girls, on the other hand, engaged more in play with dolls and soft
toys. These kinds of conclusions are also illustrated in other studies with children aged 2, 3, or
4 years, both at home settings and in nursery
classes; boys tend to prefer activities such as
throwing or kicking balls along with transportation toys, while girls tend to prefer dolls and
dressing up (Smith 2009).

Tool Use
According to Bruner (1972), the design features
that object play entails make it an appropriate and
suitable way of developing tool-using skills.
Object play, although highly enjoyable in itself
and intrinsically motivated, offers repetition
in the practice of a range of relevant skills
(Smith 2009).
When considering the early development of
tool use in infancy and childhood, the extent to which young children can indeed explore different objects’ functions should be kept in mind
(Deák 2014). Broadly, a common definition of “tool use” holds that individuals make use of objects that are neither attached to the environment nor part of their own bodies, in the service of a goal, for example, getting food (Shumaker et al. 2011).
To illustrate, using a fingernail to twist a screw would not be considered an example of tool use
but using a screwdriver would (Pellegrini 2016).
Tool use is an activity that encourages children to learn how to use tools according to cultural
conventions, such as using a fork (Pellegrini
2016). As mentioned above, tool use appears relatively early in human ontogeny, with skills progressing from infancy to childhood (Cutting et al. 2011).
Object functions depend on the physical layout
and properties of objects such as their material,
part configuration, and markings (Deák 2014).
These object properties are related to tool use,
and this is explained through the concept of
affordances as suggested by Gibson (1982). An
affordance is the extent to which an organism can
interact with an object based on the object’s properties (Gibson 1982). For example, to humans a cup affords containment of liquids, or of solids smaller than its mouth such as flour; it also affords tracing circles on paper (Deák 2014).
Here, it is important to mention that affordances
are inherent to the object-organism interaction
rather than to the object alone, as they are defined by a particular organism’s potential for interacting with those properties (Deák 2014).
Of course, different individuals can realize different affordances from the same object: a skilled player, for example, can exploit more affordances on a guitar than an inexperienced person can. This concept is relevant
when we examine the early development of tool use, because in reality most objects or tools are
designed to afford actions of adults and not of
infants and young children (Deák 2014). As a
result, infants must learn how to use tools and
objects in a world with a setup engineered for
adults (Deák 2014).
Unfortunately, infants and young children are
not frequently, or sometimes not at all, allowed to explore object affordances; in other words, to explore what they can do with a particular object. This is due not only to young children’s physical limitations but also to limited accessibility, as adults
usually design children’s environments in such
a way that they prohibit children’s access (Deák
2014).
However, as infants grow and develop so do
their sensory and motor abilities, and these are the skills necessary for the development of tool use. Infants aged 6–12 months readily
manipulate novel objects in order to explore
their functions and properties – squeezing soft
objects or banging hard objects (Bourgeois et al.
2005). As explained above, it is through exploration that children become familiar with novel
objects, and once they do so, they enjoy their
interaction with them. Exploration is a progressive, embodied, multimodal process that allows children to adapt their reaching and
refine their abilities to specific and similar novel
objects, in other words, to detect new affordances
(Deák 2014). Refinement can be gradual, in the sense that younger infants learn how to use a type of tool, for example a spoon, only after substantial experience. Consequently, it
has been suggested that tool-using skill depends
on infants’ and young children’s experiences with
objects (Deák 2014). Tool use learning is protracted partly because young children also need time to consolidate their newly acquired sensory and motor skills.
Further, as Tomasello (1999) suggested, tool
use is supported by social interaction, with adults
having an important role in children’s tool use
learning. Specifically, infants and young children
shift their attention to objects as a consequence of
interaction or observation of adults using objects
and tools (Deák 2014). For instance, when adults
hold objects, children in turn become interested in
those objects and reach out for them, to examine
and learn about them (Tomasello 1999). Thus, it is suggested that social learning and sensorimotor development combine to support tool use learning in infants and young children.
In summary, as the young child moves from
infancy to childhood, their tool use skills depend on active sensorimotor development.
Through the acquisition of their sensory and
motor abilities, infants and young children are
able to explore novel objects and gradually learn
how to use them in different ways, just like
adults.

Conclusion
This entry has presented a short introduction to the evolutionary biological basis of “play development” as studied among a variety of mammals. As has been suggested, it is through play
that children develop and practice skills crucial to
survival and reproduction in adulthood. While
play can be costly, diminishing the time, effort, and energy available for other activities, there is a natural tendency for young mammals to prioritize and engage in play during the early juvenile
period. Importantly, play allows the young individual, after they have determined that a novel environment is safe, to generate play behaviors and explore the different object functions afforded by their age and current skills. Overall, research suggests that play is indispensable to the development and good functioning of a healthy adult, while it is through the
acquisition of their sensory and motor abilities
that young children can develop their tool-use
skills and learn how to use novel objects in different ways.

Cross-References
▶ Newborn Behavior
▶ Play and Tool Use
▶ Social Play

References
Belsky, J., & Most, R. K. (1981). From exploration to play:
A cross-sectional study of infant free play behavior.
Developmental Psychology, 17(5), 630.
Bogin, B. (1999). Evolutionary perspective on human
growth. Annual Review of Anthropology, 28(1),
109–153.

Bourgeois, K. S., Khawar, A. W., Neal, S. A., & Lockman,
J. J. (2005). Infant manual exploration of objects, surfaces, and their interrelations. Infancy, 8(3), 233–252.
Bruner, J. S. (1972). Nature and uses of immaturity.
American Psychologist, 27(8), 687.
Burghardt, G. M. (2005). The genesis of animal play:
Testing the limits. Cambridge, MA: MIT Press.
Cutting, N., Apperly, I. A., & Beck, S. R. (2011). Why do
children lack the flexibility to innovate tools? Journal
of Experimental Child Psychology, 109(4), 497–511.
Deák, G. O. (2014). Development of adaptive tool-use in
early childhood: Sensorimotor, social, and conceptual
factors. Advances in Child Development and Behavior,
46, 149–181.
Fagen, R. (1974). Selective and evolutionary aspects of
animal play. The American Naturalist, 108(964),
850–858.
Gibson, E. J. (1982). The concept of affordances in development: The renascence of functionalism. In The concept of development: The Minnesota symposia on child
psychology (Vol. 15, pp. 55–81). Hillsdale: Lawrence
Erlbaum.

Hughes, F. P. (Ed.). (2010). Children, play, and development. Thousand Oaks: Sage.
Hutt, C. (1966). Exploration and play in children. Symposia of the Zoological Society of London, 18(1), 61–81.
Krasnor, L. R., & Pepler, D. J. (1980). The study of children’s play: Some suggested future directions. New
Directions for Child and Adolescent Development,
1980(9), 85–95.
LaFreniere, P. (2011). Evolutionary functions of social
play: Life histories, sex differences, and emotion regulation. American Journal of Play, 3(4), 464–488.
McGrew, W. C. (1972). An ethological study of children’s
behaviour. London: Metheun.
Pellegrini, A. D. (2016). Object use in childhood: Development and possible functions. In Evolutionary perspectives on child development and education
(pp. 95–115). Cham: Springer.
Pellegrini, A. D., & Goldsmith, S. (2003). ‘Settling in’ a
short-term longitudinal study of ways in which new
children come to play with classmates. Emotional and
Behavioural Difficulties, 8(2), 140–151.
Pellegrini, A. D., & Gustafson, K. (2005). Boys’ and girls’
uses of objects for exploration, play, and tools in early
childhood. In The nature of play: Great apes and
humans (pp. 113–135). New York: Guilford Press.
Pellegrini, A. D., & Hou, Y. (2011). The development of
preschool children’s (Homo sapiens) uses of objects
and their role in peer group centrality. Journal of
Comparative Psychology, 125(2), 239.
Pellegrini, A. D., & Smith, P. K. (1998a). Physical activity
play: The nature and function of a neglected aspect of
play. Child Development, 69(3), 577–598.
Pellegrini, A. D., & Smith, P. K. (1998b). The development
of play during childhood: Forms and possible functions. Child Psychology and Psychiatry Review, 3(2),
51–57.
Pellegrini, A. D., Dupuis, D., & Smith, P. K. (2007). Play in evolution and development. Developmental Review, 27(2), 261–276.


Piaget, J. (1952). The origins of intelligence in children
(trans: Cook, M.). New York: International Universities
Press.
Rubin, K. H., Fein, G. G., & Vandenberg, B. (1983). Play.
Handbook of child psychology, 4, 693–774.
Shumaker, R. W., Walkup, K. R., & Beck, B. B. (2011).
Animal tool behavior: The use and manufacture of
tools by animals. Baltimore: JHU Press.
Siegler, R. S., DeLoache, J. S., Eisenberg, N., Saffran, J., &
Gershoff, E. (2017). How children develop. New York:
Macmillan.
Smith, P. K. (2009). Children and play: Understanding children’s worlds (Vol. 12). New York: Wiley.
Smith, P. K., & Daglish, L. (1977). Sex differences in
parent and infant behavior in the home. Child Development, 1250–1254.
Smith, P. K., & Pellegrini, A. (2013). Learning through play. In R. E. Tremblay, M. Boivin, & R. DeV. Peters (Eds.), Encyclopedia on early childhood development. Retrieved from http://www.child-encyclopedia.com/play/according-experts/learning-through-play
Smith, P. K., Cowie, H., & Blades, M. (2015). Understanding children’s development. New York: Wiley.
Spinka, M., Newberry, R. C., & Bekoff, M. (2001). Mammalian play: Training for the unexpected. The
Quarterly Review of Biology, 76(2), 141–168.
Tomasello, M. (1999). The cultural origins of human cognition. Cambridge, MA: Harvard University Press.


Objectivism
▶ Objectivity

Objectivity
Ross Colebrook1 and Hagop Sarkissian2
1 CUNY Graduate Center, New York, NY, USA
2 Baruch College, CUNY, NY, USA

Synonyms
Absolutism; Moral realism; Objectivism

Definition
Objectivity – the quality of existing independently
of a subject’s beliefs or desires. Not dependent
for its properties on any person’s subjective experience. Typically discoverable by publicly
available and evaluable means.

Introduction
For purposes of this entry, a domain of facts is
objective if those facts both (a) exist and (b) are
mind-independent, meaning they do not depend
for their existence on any human beliefs, attitudes,
or desires (though for simplification, this article will refer only to beliefs in what follows). Accordingly, moral facts are objective just insofar as they
both exist and are mind-independent. Here, the
focus will be on how evolutionary considerations
bear on the objectivity of morality. This article
will not consider other ways in which evolutionary considerations bear on other moral phenomena, including how they might have shaped core
moral emotions such as compassion or shame or
widespread moral practices such as reciprocation
and punishment.
For many philosophers, the objectivity of
morality constitutes a “fundamental commitment”
(Wong 2014, p. 337); the objective demands of
morality are “nonnegotiable” (Joyce 2006,
p. 117). Psychologists also discuss morality in
objectivist terms, drawing clear distinctions
between social or conventional norms on the one
hand and moral norms on the other. According
to one influential account, moral norms are distinguished in being universalizable and, more importantly, applicable independently of any authority
or sanction (Turiel 1983).
Additionally, nearly all philosophers and psychologists maintain that ordinary folk are
committed to objectivism about morality, such
that if two people disagree about the moral status
of a moral claim – say, whether or not discriminating against someone on the basis of their
sexual orientation is morally permissible – at
most one of them can be correct (e.g., Smith
1994; Shafer-Landau 2003). The “philosopher’s
task,” according to one prominent account, is to
make sense of precisely the puzzling objectivity
of morality (Smith 1994). Philosophers predominantly support their claims by canvassing the nature of moral language and debate, by examining ordinary intuitions about the moral domain, and by reflecting on the phenomenology or felt experience of moral life (Gill 2009). Psychologists, by
contrast, have tested this claim using experimental
methods, with several studies showing that individuals do, in fact, show objectivist tendencies
(for a review, see Sarkissian 2016).

Evolutionary Arguments and Skepticism
About Objectivity
Recently, some researchers have suggested that
evolutionary theory can pose a serious threat
to the purported objectivity of morality. Specifically, it can undermine the notion that most commonsense and widespread moral beliefs are
justified by referring to mind-independent moral
facts. Evolutionary explanations of commonsense
moral beliefs do not, in themselves, lead us to
deny or rule out the existence of mind-independent moral facts. Indeed, this is not the
general strategy of such “debunking” explanations (Nichols 2014). Instead, evolutionary explanations are thought to undermine the warrant or
justification of these beliefs. They can do so in
several ways. Such explanations can provide a
complete or convincing account of the causal
processes giving rise to the belief in question
without, at any point, having to say anything at
all concerning whether or not that belief tracks
any facts or truths about the world (Joyce 2006).
Or, such explanations can show that the psychological processes that generate the belief in question are not the sorts of processes that could
plausibly result in true, factual beliefs (Nichols
2014) or argue that whatever ends are supported by a theory of Darwinian natural selection – be
they the flourishing of the gene, the individual, or
the group – such ends cannot be moral ends
(Sommers and Rosenberg 2003).
Regardless of these differences, all such
accounts have the same strategy of arguing that
the best evolutionary account of the origins of
humans’ moral faculties and beliefs will refer to
their fitness-enhancing properties, which have no
bearing on whether or not the beliefs track mind-independent moral facts. Presumably ancestral humans gained adaptive advantages over their rivals by cultivating or maintaining certain moral
beliefs about themselves and others – whether
in-group or out-group members. The differential
reproductive success conferred on ancestral
humans by adopting these beliefs would lead to
their propagation. Humans inherit these beliefs
today as part of their cognitive architecture. However, whatever moral beliefs they have as a result
of such a process cannot be justified by claiming
that they track mind-independent moral facts.
One widely cited account presents a dilemma
for moral realists (Street 2006). If evolutionary
forces played a pervasive role in the production
and maintenance of moral beliefs, what could
be the relation between these evolutionarily
shaped beliefs and any mind-independent moral
facts? There are two options, neither of which is attractive or acceptable, hence the dilemma. The first horn of the dilemma claims that mind-independent moral facts are not at all related to
the evolutionary pressures on moral beliefs. But
this implies an implausible general skepticism
about the truth of many commonsense ethical
claims, even intuitive claims such as that it is
right to take care of one’s children. This is
because even though this judgment seems obviously true – indeed, true in a way that does not
require the endorsement of any particular person’s
beliefs – ancestral humans would not have converged on this belief because it referred to a mind-independent fact. Instead, those who had such
beliefs were simply more fit than others. The
second horn of the dilemma claims that there is,
indeed, some relation between mind-independent
moral facts and the evolutionary pressures on
moral beliefs. The ancestors of modern humans
reaped adaptive advantage precisely by means of
tracking independent moral truths. However, this
leads to a different, but no less serious, problem:
it requires some plausible “tracking” account, and any such account seems otiose. It would be more parsimonious to leave out mind-independent moral facts and just focus on the
adaptive links that ancestral humans forged
between their circumstances and their responses
to those circumstances. The latter account would
also have greater explanatory power through its
ability to describe how they had beliefs that contemporary society now regards as false (like
the tendency to prefer in-groups to out-groups;
Street 2006, p. 134). By contrast, a tracking
account fails to easily explain the origin or persistence of such false beliefs.
Joyce (2006, 2016) makes a similar argument,
criticizing accounts of moral judgment that
attempt to show that evolutionary forces produced
in human minds a moral truth-tracking faculty.
Joyce focused on the function of moral judgment
in human beings: could it be to track mind-independent moral facts? Some faculties do
appear to be truth-tracking in this sense: mathematical representations seem to track mathematical truth because an organism only derives
adaptive advantage from true mathematical judgments and not from false ones. (E.g., a person who judged that 2 + 3 = 6 would be at a significant disadvantage compared to another who judged that 2 + 3 = 5.) However, as Joyce argues, there is no
comparable benefit to making true moral judgments. The moral faculty (such as it is and putting
to one side the question of whether such a faculty
even exists) does not appear to give its bearers an advantage on the basis of its truth-tracking (Joyce
2006). Instead, to the extent that a moral faculty
can be given an evolutionary explanation, it
appears to have given ancestral humans an advantage in virtue of its other (non-truth-tracking) features, for example, signaling one’s sincere
commitment to social projects or maintaining
one’s reputation in the group (cf. Miller 2007;
Nesse 2009). Since one should not expect moral
judgments to do anything other than what they
were selected for, and since they were not selected
for truth-tracking, this provides a plausible reason
for doubting that moral judgments are formed by a
reliable process.

Many theorists have argued that in the absence
of good arguments against the justification of
moral beliefs, those beliefs ought to be regarded
as justified (Wielenberg 2010; Enoch 2011).
However, the arguments above turn the tables on
any such account when applied to the moral
domain. Any account which claims that there
exist moral faculties that in fact track objective
moral facts must now present plausible explanations arguing for the existence of such facts and their respective faculties. Given the success of evolutionary accounts in providing explanations
for a range of phenomena, the burden of proof
shifts to those who wish to defend mind-independent moral objectivism.

Evolution’s Contribution to Perceived
Moral Objectivity
However, even while maintaining that mind-independent moral facts are a fiction, Joyce
(2006) nonetheless speculated that a tendency to
see morality in objective terms is an evolutionary
adaptation. Evolutionary forces selected for individuals who were highly motivated to act upon
fitness-enhancing beliefs, including moral beliefs.
Seeing morality as objective (independent of oneself) would be a much stronger motivator to act
than seeing morality as mere subjective preference. For example, when one says that a person
ought not to steal, one does not take oneself to be
merely asserting that it is within the person’s set of
current desires that she not steal. Even if she does
strongly desire to steal and thinks it a great idea, this has no impact on the fact that she ought not
steal. The challenge for anyone who endorses
moral objectivity is to show exactly how such
“categorical imperatives” are possible. The standard approach is to show that every agent (or at
least every rational agent) has some reason to be
moral. This article will not dwell on the
details of that discussion. The important point is
that when one condemns those who commit atrocities such as genocide, coming to learn that the
perpetrators are furthering their goals through
genocide would not at all mitigate one’s condemnation. It does not matter how the perpetrators feel
about moral norms; moral norms apply to them
regardless of what they feel. This seems a pervasive aspect of moral life.
As noted above, some take this objective
categoricity to be a “nonnegotiable” part of the
moral domain – a central tenet that any theory of
morality must explain. Discovering that such a
central tenet is false can reveal the discourse to
be systematically flawed. If moral discourse is in
fact committed to such categorical judgments, then evolutionary debunking arguments (as discussed above) will imply moral skepticism.

This leaves a question: Why do some philosophers take objectivity to be such a central part of
moral discourse and practice to begin with? Why
does it seem to be a feature of morality that moral
facts exist? Whence comes the perception that
morality is mind-independent in the first place?
Here, it may be instructive to separate
two issues, following Edouard Machery and
Ron Mallon (2010). The first concerns why one
would be expected to adopt norms of behavior
that, in general, come with costs and may serve
to stifle or frustrate some of one’s current desires.
Call this “normative cognition,” involving such
concepts as “should” or “ought” or “ought not.” It
is rather uncontroversial that humans have
evolved to pick up on prevailing norms in their
groups (whether these norms are implicit/informal
or explicit/formal), assimilate or internalize them,
and feel motivated to adhere to them. Normative
cognition also includes expectations that others
comply with the same norms, along with a desire
to sanction or punish those who do not. Finally,
normative cognition can be accompanied with
feelings of guilt and shame if individuals fail to
adhere to norms that they have come to adopt or
endorse. Indeed, people are, in general, adept at
reasoning about norms (Cosmides and Tooby
2005).
However, any such account of normative cognition will not require anything like the sort of
mind independence that moral objectivism
entails. Normative cognition can include esthetic matters (e.g., how one ought to dress), prudential
matters (e.g., how one ought to climb the mountain), or matters of propriety (e.g., how one ought
to lay the dinner table). These all seem to depend,
in some way, on the existence of contingent motivations in individuals to adopt them; if one wants
to appear beautiful or be prudent or adhere to
etiquette, then one ought to adopt certain norms
of behavior. But, as just noted, moral norms seem
to be different from these norms. Moral norms
apply regardless of a person’s contingent preferences or desires. How can one, then, explain the
emergence of distinctively moral norms of this
objective kind?


Some have speculated that moral cognition –
that is, a form of cognition that sees certain norms
as mind-independent, factual, inescapable, and
nonnegotiable – was an evolutionary adaptation
of our species to spur us to prosociality (Joyce
2006). Joyce (2006), for example, speculated that
even though mind-independent moral facts are
entirely fictional, it would have been beneficial
for our species to have evolved a tendency to
think they exist. A tendency to see the world as
filled with such mind-independent moral facts
would be a much stronger motivator to act prosocially. Thus, humans project objective moral facts into the world through their emotional reactions to morally relevant events. Goodness and
badness, virtue and vice – these are not properties
that exist in the world to be perceived by the mind.
Instead, the mind projects these values into the world, which then motivates individuals to act
according to moral norms. Thus, evolution selects
for a capacity to objectify morality even while
moral facts do not exist. This account, while speculative, merits further research.

Responses to Evolutionary Debunking
Those who aim to secure objectivity in ethics have
responded to evolutionary debunking arguments
in a number of different ways. On the one hand,
some have offered third-factor accounts that purport to show that moral judgments do track moral
facts even though they were not selected for this
purpose. On the other hand, some have argued
that the standards of reliability being invoked
in debunking arguments are too strong; if they
were adopted for other areas of common knowledge, this would lead to an untenable radical
skepticism. These two approaches constitute
well-developed contributions to the debate about
the implications of evolution for moral objectivity, and they will be considered in turn.

Third-Factor Accounts
The most prominent response to evolutionary critiques of moral objectivity involves arguing that moral judgments do track real moral facts. However, they do so as a by-product of some other
epistemically respectable cognitive faculty. The
strategy is straightforward: if a plausible evolutionary story can be told about how humans came
to track objective moral facts in spite of the adaptive pressures which influenced ancestral humans’
moral judgments, this might secure the reality of objective moral facts and maintain their objectivity at the same time. Different proponents of this
type of response have different capacities in mind
when they propose such “third-factor” accounts –
accounts about some third factor that humans
evolved to track that happens to also align with
moral facts.
Enoch (2010) puts forward a quintessential
third-factor account, which starts from the premise that survival is (on balance) a good thing.
The advantage of such a starting point is that it is
fairly obvious that adaptive pressures on ancestral
humans did tend to favor their survival. He then
argues that by making choices that assured their
own survival – a good thing (on balance) – ancestral humans were in fact tracking moral facts
indirectly. They developed the judgments they
did because those judgments were adaptively
advantageous, and it just so happened that those judgments tracked moral facts as a
by-product. Put another way, the capacity to
track moral facts was not directly “selected for”
by adaptive pressures but rather was merely
“selected,” because it was not adaptively advantageous to separate it from other faculties that
were themselves selected for by adaptive pressures. It may be an accident that such an adaptation arose, but once it did, it conferred on ancestral
humans a reliable way of making objective
moral judgments. The capacity to perceive moral
facts might be like the capacity to perceive stars
(Huemer 2005); having the capacity to see stars
did not itself give early humans any adaptive
advantage but rather came about from other
capacities that did confer such adaptive advantage. Other candidates for such third factors
include the badness of pain (Skarsaune 2011), the importance of having personal boundaries that others cannot transgress (Wielenberg 2010), the goodness of altruism (de Lazari-Radek and Singer 2012), cooperating with others
(Brosnan 2011), or enhancing society’s ability to
meet its needs (Copp 2008). Each of these appeals
to capacities or tendencies which enjoy a better evolutionary pedigree than any purported faculty
of moral judgment.
Nonetheless, Street (2006) objects to third-factor accounts on the grounds that these accounts
are themselves vulnerable to a Darwinian
dilemma: whatever capacity tracks this third factor must itself have been selected for as a result of
adaptive pressures on ancestral humans. The
question, then, is what relation would obtain between objective moral facts and the evolution of
this capacity. If there is no relation, Street says,
this looks like an implausible and convenient
coincidence. If there is some relation, then the
realist must specify what that relation is. But, she
argues, the capacity that allowed early humans to
(indirectly) track moral truth would have to be
fairly specialized and complex. It is implausible
to think that such a specialized and complex
capacity could have arisen as a by-product of
some other cognitive capacity (Street 2006).
Proponents of third-factor responses generally
deny the claim that grasping moral facts really requires a complex or specialized capacity in the
first place. For instance, de Lazari-Radek and
Singer (2012) argue that this capacity simply
springs from our ability to reason in general,
which is the same capacity involved in grasping
other types of a priori truth (like truths of mathematics; see also FitzPatrick 2014). Similarly, the
capacity to track facts about pain is not specialized
or problematically complex (Skarsaune 2011).
(For a more general defense of third-factor
accounts, see Berker 2014.) If the capacity is
suitably general, this allows the third-factor proponent to say that the relation between moral
truths and the capacity which tracks them is unproblematic: humans track moral truths by
their capacity to reason, and this capacity has an
excellent evolutionary pedigree.
Finally, it should be noted that all third-factor
responses invoke a moral assumption from the
start, whether it be the goodness of survival, the
badness of pain, or any of the other candidates.
Thus, another prominent objection to third-factor responses targets the legitimacy of this move.
Street argues that any such assumption is “trivially
question-begging,” because it simply assumes the
reliability of our moral judgments, but those are
the very judgments that the evolutionary critique is meant to undermine (Street 2008,
pp. 216–217). This brings us to the second major
response to evolutionary critiques: the overgeneralization response.

The Overgeneralization Response
The second response to evolutionary critiques
of moral objectivity seeks to undermine the epistemic standards implicit in these critiques. This
approach tries to show that if these standards were
adopted universally, they would imply radical
skepticism. This has also been dubbed “the containment problem” for evolutionary debunking
arguments (Millhouse et al. 2016). In fact, some
authors argue that the source of doubt implicit in
these evolutionary critiques does not derive from
evolution at all but rather an unjustified suspicion
about the genealogy or causal history of any
source of knowledge.
Evolutionary critiques of moral objectivity
largely trade on a suspicion about the genealogy
of certain beliefs, but the epistemic standards that
lie behind this suspicion are not often well articulated. As was covered above, the problem introduced by evolution seems to be that adaptive
pressures on the moral judgments of ancestral
humans were not tracking mind-independent
moral facts, and this seems to undermine their
reliability. But one can further ask exactly what
about this lack of truth-tracking seems to undermine the objectivity of moral judgments. One
possible thought is that if judgments are not tracking mind-independent moral facts, this means
there is no good reason to believe they are true.
In general, if one has no good reason to believe
that a belief is true, one ought not to maintain it
(Vavova 2014).

Though this standard looks quite reasonable at
first glance, proponents of the overgeneralization
response think it goes too far. Crucially, it depends
on whether a “good” reason is one that must
come from outside the realm of moral judgments
themselves. Street (2008) argues that "third-factor" accounts are trivially question-begging,
because they rely on moral beliefs in the first
place, which are themselves questionable on evolutionary grounds. However, proponents of the
overgeneralization response think this argument
implies far too exacting a standard. The reason is
that if a “good” reason is one that must be independent of all moral judgments, a similar argument can be made about other sources of
knowledge, for example, sense perception. Most
people believe they are justified in believing the
deliverances of their immediate sense perception,
at least for medium-sized objects in conditions of
sobriety and good lighting. But if they were asked
to justify this judgment without reference to any
beliefs gained from sense perception itself, they
would come up empty-handed (Vavova 2014;
Shafer 2012). This constitutes a clear reductio ad
absurdum: if evolutionary critiques of moral
objectivity rely on an epistemic standard that
would also undermine perceptual beliefs, this is
a good reason not to accept that standard and
therefore also a good reason not to accept the
critiques.
At this point, the proponent of an evolutionary
debunking argument might grant that some judgments in one domain can justify other judgments in the same domain (whether
perceptual or moral). Relaxing epistemic standards in this way, however, risks allowing third-factor accounts a metaphorical foot in the door.
If one is allowed to assume that one’s judgments
about the objective badness of pain or the objective goodness of survival are justified (despite the
adaptive pressures which caused early humans to
make such judgments), many of one’s more substantive judgments can be justified on the basis of
these judgments (Vavova 2014).
On a similar basis, some proponents of this
response argue that evolutionary considerations
can only fail to vindicate moral judgments but
that they are not capable of producing any
independent skepticism about the reliability of
moral judgments (Brosnan 2011; FitzPatrick
2014; White 2010). This can be shown by imagining a situation of total ignorance of the source
of moral judgments. If one has no knowledge of
evolution’s influence on one’s judgments,
would this in any way improve the justification
of one’s moral beliefs? The answer seems to be
“no.” If total ignorance of the origins of one’s
moral beliefs does not improve one’s epistemic
lot, it is unclear how the truth of evolution’s
impact on one’s beliefs could make it any
worse, short of showing that these judgments
are actually “anti-reliable.” All that evolutionary debunking arguments can show is that moral
judgments are made in a way that is independent
of their truth, but this does not in itself imply

that they are unreliable (see Brosnan 2011;
White 2010).
A related point concerns the metaethical presuppositions inherent in evolutionary debunking
arguments. In order to motivate the case against
objectivism, these arguments take it to be the case
that moral judgment is not truth-tracking. But in
order to fail to be truth-tracking, there must be
some truth to track in the first place. After all, if it
turns out that nonobjectivist accounts of moral
facts and concepts are correct, and these things
depend in some way on human attitudes, it is not
clear how evolutionary influences might undermine them (Kahane 2011).
As was covered in the section on "third-factor" accounts above, determining the burden of
proof in this disagreement is quite difficult.
Joyce (2016) claims that, because there is good
reason to believe that humans'
faculty of moral judgment was not selected for
tracking mind-independent moral facts, the burden rests on the realist who wants to claim that it
does so. But if the overgeneralization critique of
this argument is correct, it is not clear that the
burden of proof rests on the realist after all. It
would seem that the proponent of an evolutionary critique must show how evolution’s influences on moral judgments render them not just
independent of moral facts but also unreliable.


The exact connection between these two claims
constitutes an area of unfolding debate. Given
the implications for a claim that many psychologists and philosophers take quite seriously –
the objectivity of morality – quite a lot hangs in the balance.

Conclusion
The relation between moral objectivity and evolution is complex, depending on factors such as
the function of moral judgment, what one takes
moral judgment to be tracking, and the epistemic
standard implicit in our evaluation of moral judgment. Though many philosophers, psychologists,
and ordinary people perceive objectivity to be an
integral part of morality, there is quite a healthy
debate about whether moral objectivity can withstand an evolutionary account of human history.
Though there is not yet a consensus, this article
has laid out the main fault lines between proponents of evolutionary skepticism and their realist
opponents.

Cross-References

▶ Adaptations: Product of Evolution
▶ Altruism
▶ Altruism Norms
▶ Benefit Group Relative to Other Groups
▶ Biological Function
▶ Charitable Giving (Peter Singer)
▶ Cheater Detection
▶ Cross-Cultural Universality
▶ Cross-Cultural Variation
▶ Cultural Differences
▶ Cultural Evolution
▶ Cultural Universals
▶ Evolution of Culture
▶ Evolution of Morality
▶ Evolutionary Cultural Psychology
▶ Group Selection
▶ Indirect Benefits of Altruism
▶ Moral Development

