Highlights from The 10,000 Year Explosion (Gregory Cochran and Henry Harpending)

The core argument of the book

We intend to make the case that human evolution has accelerated in the past 10,000 years, rather than slowing or stopping, and is now happening about 100 times faster than its long-term average over the 6 million years of our existence.

Selective pressures change over time

Geographic expansion (which placed us in new environments) and cultural innovation both changed the selective pressures humans experienced. The payoff of many traits changed, and so did optimal life strategy. For example, when humans hunted big game 100,000 years ago, they relied on close-in attacks with thrusting spears. Such attacks were highly dangerous and physically taxing, so in those days, hunters had to be heavily muscled and have thick bones. That kind of body had its disadvantages—if nothing else, it required more food—but on the whole, it was the best solution in that situation. But new weapons like the atlatl (a spearthrower) and the bow effectively stored muscle-generated energy, which meant that hunters could kill big game without big biceps and robust skeletons. Once that happened, lightly built people, who were better runners and did not need as much food, became competitively superior. A heavy build was yesterday’s solution: expensive, but no longer necessary. The Bushmen of southern Africa lived as hunter-gatherers until very recently, hunting game with bows and poisoned arrows for thousands of years in that region. They are small, tough, lean people, less than five feet tall. It seems likely that the tools made the man—the bow begat the Bushmen.

Positive advantages lead to faster change

With the advent of new methods of food preparation, such as the use of fire for cooking, teeth began to shrink, and they continued to do so over many generations. Pottery, which allowed storage of liquid foods, accelerated that shrinkage. Complex biological functions tend to slowly deteriorate when they no longer matter, since mutations that interfere with the function no longer reduce reproductive fitness, and you might think that this would explain these dental changes. However, this trend, which we call “relaxed selection,” happens too slowly to be the explanation. Instead, the changes in tooth size must have been driven by positive advantages—possibly because small teeth are metabolically cheaper than large ones.

We've gotten better at eavesdropping recently

Eavesdropping can be a life-or-death affair. We have evidence of this, since a number of genes affecting the inner ear show signs of recent selection.

Domestication changes appearance

Domesticated animals and plants often look and act very different from their wild ancestors, and in every such case, the changes took place in far less than 100,000 years. For example, dogs were domesticated from wolves around 15,000 years ago; they now come in more varied shapes and sizes than any other mammal.
In an extreme example, the Russian scientist Dmitri Belyaev succeeded in developing a domesticated fox in only forty years. In each generation he selected for tameness (and only tameness); this eventually resulted in foxes that were friendly and enjoyed human contact, in strong contrast to wild foxes. This strain of tame foxes also changed in other ways: their coat color lightened, their skulls became rounder, and some of them were born with floppy ears. It seems that some of the genes influencing behavior (tameness in this case) also affect other traits—so when Belyaev selected for tameness, he automatically got changes in those other traits as well. Many of these changes have occurred as side effects of domestication in a number of species—possibly including humans, as we shall see.

The fox example reminded me of orange cats, which are much more friendly on average than other cats. They also tend to be male.

Evolution can act relatively rapidly

Evolutionary genetics predicts that substantial change in almost any trait is possible in a few tens of generations, and those predictions are confirmed every day. Selection is used routinely in many kinds of agriculture, and it works: It grows more corn, lots more.
The end of the Ice Age also brought about a global rise in sea level. Mile-thick continental ice sheets melted, and the sea level rose hundreds of feet. As the waters rose, some mountains became islands, isolating small groups of various species. These islands were too small to sustain populations of large predators, and in their absence the payoff for being huge disappeared. Instead, small elephants had an advantage over large ones, probably because they required less food and reproduced more rapidly. Over a mere 5,000 years, elephants shrank dramatically, from an original height of 12 feet to as little as 3 feet. It is worth noting that elephant generations are roughly twenty years long, similar to those of humans.

But evolution can only act fast on simple adaptations

A complex adaptation is a characteristic contributing to reproductive fitness that involves the coordinated action of many genes. This means that humans could not have evolved wings, a third eye, or any new and truly complicated adaptive behavior in that time frame (the past 10,000 years or so).
We think that this argument concerning the evolution of new complex adaptations is correct, but it underestimates the importance of simple adaptations, those that involve changes in one or a few genes. The conclusion that all humans are effectively the same is unwarranted. We will see not only that we have been evolving at a rapid rate, but that evolution has taken a different course in different populations.

On neoteny and dogs

Dogs are much more playful than wolves, and this can probably be understood as retention of juvenile behavior (called “neoteny”). Retaining existing juvenile behavior is accomplished far more easily than evolving a behavior from scratch. Many of the ways in which dogs interact with humans can be understood as new applications of behavioral adaptations designed for a pack—the owner takes on the role of the leader of the pack.

There is no complex behavioral adaptation in dogs without a recognizable precursor in wolves, but that hardly means that all breeds of dogs are the same, or close to it.

Adaptations can be lost even faster than they can occur

Any adaptation, whether physical or behavioral, that loses its utility in a new environment can be lost rapidly, especially if it has any noticeable cost. Fish in lightless caves lose their sight over a few thousand years at most—much less time than it took for eyes to evolve in the first place.

Differences in dog breeds

Dog breeds vary greatly in learning speed and capacity. The number of repetitions required to learn a new command can vary by factors of ten or more from one breed to another. The typical Border collie can learn a new command after 5 repetitions and respond correctly 95 percent of the time, whereas a basset hound takes 80-100 repetitions to achieve a 25 percent accuracy rate.

Non-surface level differences also exist

It was natural for previous generations of physical anthropologists to concentrate on differences in easily observed characteristics, but that never implied that all differences would be easily observable. It was the scientists that were superficial, not the differences.

On Lewontin

Approximately 85 percent of human genetic variation is within-group rather than between groups, while 15 percent is between-group. Lewontin and others have argued that this means that the genetic differences between human populations must be smaller than differences within human population groups. But genetic variation is distributed in a similar way in dogs: 70 percent of genetic variation is within-breed, while 30 percent is between-breed. Using the same reasoning that Lewontin applied in his argument about human populations, one would have to conclude that differences between individual Great Danes must be greater than the average difference between Great Danes and Chihuahuas. But this is a conclusion that we are unable to swallow.
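
To see why this reasoning fails, here is a toy numerical sketch (my illustration, not the authors'; the two populations, the 300 loci, and the 10-point allele-frequency shift are all made up for the demonstration). At every single locus most of the variance is within groups, yet adding up the same weak signal across many loci distinguishes the two groups almost perfectly.

```python
# Toy model: two hypothetical populations with slightly different allele
# frequencies at many loci (all numbers are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_loci, n_per_group = 300, 100

p_a = rng.uniform(0.4, 0.5, n_loci)   # allele frequencies in population A
p_b = p_a + 0.1                       # shifted slightly in population B

# Genotypes: count of "1" alleles (0, 1, or 2) at each locus.
group_a = rng.binomial(2, p_a, size=(n_per_group, n_loci))
group_b = rng.binomial(2, p_b, size=(n_per_group, n_loci))

# Per-locus variance partition: within-group variance dominates, as in
# Lewontin's observation.
within = (group_a.var(axis=0) + group_b.var(axis=0)) / 2
between = (group_a.mean(axis=0) - group_b.mean(axis=0)) ** 2 / 4
print("between-group share:", between.sum() / (between.sum() + within.sum()))

# Yet aggregating the weak per-locus signal separates the groups cleanly.
scores_a, scores_b = group_a.sum(axis=1), group_b.sum(axis=1)
cut = (scores_a.mean() + scores_b.mean()) / 2
accuracy = ((scores_a < cut).mean() + (scores_b > cut).mean()) / 2
print("classification accuracy:", accuracy)   # close to 1.0
```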

Human + Neanderthal pairings

Imagine that humans occasionally mated with Neanderthals, and that at least some of their offspring were incorporated into human populations. That process would have introduced new gene variants, new alleles, into the human population. Many, probably most, of those alleles would have done almost exactly the same thing as their equivalents in modern out-of-Africa humans; they would have been neither better nor worse than those equivalents—in other words, they would have been selectively neutral. Those neutral alleles from Neanderthals would have been rare, and they would probably have disappeared, the typical fate of rare neutral alleles.

Not all adaptations show up in the fossil record. (Introgression is the transfer of alleles from one species to another).

Many instances of adaptive introgression—those, for example, that involve biochemical changes that do not affect appearance—are cryptic and were effectively undetectable before the development of modern molecular research methods. This is worth remembering when we look at the fossil record: The majority of adaptive genetic events do not have noticeable skeletal signs. Some cases of adaptive introgression, though, have readily visible effects, as when genes that increased drought tolerance spread from Utah cliffrose to bitterbrush. The introgressed bitterbrush looks more like cliffrose and can survive in places where ordinary bitterbrush cannot. In this case, the population with introgressed genes reflects that introgression in its external appearance, but more often the effects of introgression are not readily apparent in the gross anatomy of an organism.

Geographic variation combined with mixing populations allows evolution to escape local maxima.

Another point: Ongoing natural selection in two populations can allow evolutionary events to occur that would be impossible in a single well-mixed population, since it allows for simultaneous exploration of divergent paths. Natural selection is short-sighted: Alleles increase in frequency because of their current advantage, not because they might someday be useful. Think of various possible solutions of some problem as hills, with higher hills corresponding to better solutions. Natural selection climbs up the first hill it chances upon; it can’t see that another solution has greater possibilities in the long run. Not only that: Since the environmental conditions of Europe and Africa were significantly different, evolution could try solutions in Europe that couldn’t be explored in Africa, because the initial step along that path had negative payoffs in Africa. In Europe, for example, you had to worry about staying warm enough, whereas Africans faced heat stress: These issues were important considerations in the evolution of larger brains. It may be that the relative unimportance of heat stress in Europe opened up some evolutionary pathways that had greater long-term possibilities than the ones that developed in Africa.

Consider an analogy from the history of technology. Somewhere back in late classical times, the use of the camel was perfected—a better saddle was developed, for example, one that allowed camels to carry heavy loads efficiently. Throughout most of the Middle East and North Africa, camels were (after those developments) a superior means of land transportation: They were cheaper than ox-drawn wagons and not dependent upon roads. Over a few centuries, people in areas where camels were available abandoned wheeled vehicles and roads almost entirely. You can still see the effects in the oldest sections of some cities in the Arab world, where the alleys are far too narrow to have ever passed a cart or wagon. Europeans, not having camels, had to stick with wheeled vehicles, which were clearly more expensive, given the infrastructure they required. But as it turned out, wheeled vehicles—in fact, the whole road/wheeled vehicle system—could be improved. Back then, when camels seemed so much better, who knew that someday there would be horse collars and nailed horseshoes, then improved bridge construction, suspensions that reduced road shock, macadamized roads, steam power, internal combustion engines, and ultimately the nuclear DeLorean.

Sedentism led to elites and governments

The sedentary lifestyle of farming allowed a vast elaboration of material culture. Food, shelter, and artifacts no longer had to be portable. Births could be spaced closer together, since mothers didn’t have to continually carry small children. Food was now storable, unlike the typical products of foraging, and storable food could be stolen. For the first time, humans could begin to accumulate wealth. This allowed for nonproductive elites, which had been impossible among hunter-gatherers. We emphasize that these elites were not formed in response to some societal need: They took over because they could.

Combined with sedentism, these developments eventually led to the birth of governments, which limited local violence. Presumably, governments did this because it let them extract more resources from their subjects, the same reason that farmers castrate bulls. Since societies were generally Malthusian, with population growth limited by decreasing agricultural production per person at higher human density, limits on interpersonal violence ultimately led to a situation in which a higher fraction of the population died of infectious disease or starvation.

Something always limits population growth.

Hunter-gatherers should have been, if anything, less vulnerable to famine than farmers, since they did not depend on a small set of domesticated plant species (which might suffer from insect pests or fungal blights even in a year with good weather), and because local violence usually kept their population well below the local carrying capacity. State societies limited local violence, but in a Malthusian world, something always limits population growth. In this case, fewer deaths by violence meant more deaths due to starvation and infectious disease. Moreover, hunter-gatherer societies do not appear to have been divided into well-fed elites and hungry lower classes, a situation that virtually guarantees malnourishment and/or famine among a significant fraction of the population, whereas agricultural societies did have divisions of this sort.

Why alcoholism and type 2 diabetes are correlated at the population level

Most populations that are highly vulnerable to type 2 diabetes also have increased risks of alcoholism. This is no coincidence. It’s not that the same biochemistry underlies both conditions, but that both stem from the same ultimate cause: limited previous exposure to agricultural diets, and thus limited adaptation to such diets.

Measles is a disease of civilization

Measles can only persist in a large, dense population: Populations that are too small or too spread out (under half a million in close proximity) fail to produce unexposed children fast enough, so the virus dies out. This means that measles, at least in the form we know it today, could not have existed in the days before agriculture—there was no concentrated population that large anywhere on earth.
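
The persistence threshold can be illustrated with a crude stochastic SIR simulation (my sketch, not the authors' model; R0, the infectious period, and the birth rate are ballpark guesses for measles). In a small population the chain of transmission breaks within a few years, while a population of a million keeps producing unexposed children fast enough for the virus to persist.

```python
# Crude stochastic SIR model with births; parameters are illustrative
# assumptions (R0 ~ 15, ~8 infectious days, 3 percent annual birth rate).
import numpy as np

def years_until_fadeout(pop, years=50, r0=15, infectious_days=8,
                        birth_rate=0.03, seed=2):
    rng = np.random.default_rng(seed)
    s = pop // r0                  # susceptibles near endemic equilibrium
    i = max(1, pop // 1500)        # a proportionate number of active cases
    beta = r0 / infectious_days    # infections per infective per day
    for day in range(365 * years):
        if i == 0:
            return day / 365       # the chain of transmission has broken
        p_inf = 1 - np.exp(-beta * i / pop)   # per-susceptible daily risk
        new_inf = rng.binomial(s, p_inf)
        recovered = rng.binomial(i, 1 / infectious_days)
        s += rng.poisson(pop * birth_rate / 365) - new_inf
        i += new_inf - recovered
    return None                    # still circulating after `years` years

for pop in (10_000, 1_000_000):
    # small populations tend to fade out within a few years; large ones persist
    print(pop, years_until_fadeout(pop))
```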

Many mutations confer malaria resistance

The most important mutations that protect against malaria are those that change some feature in the red blood cells that are the primary target of the malaria parasite—usually, the hemoglobin molecule (for example, sickle cell hemoglobin [HbS], hemoglobin C [HbC], hemoglobin E [HbE], alpha- and beta-thalassemia, Melanesian ovalocytosis, and glucose-6-phosphate dehydrogenase [G6PD] deficiency). We also know of a number of alleles (such as the glycophorin C variant in New Guinea) that are almost certainly malaria defenses but do not cause noticeable diseases as side effects. In fact, it looks as if the well-known defenses, such as sickle-cell, that cause obvious disease are only the tip of the iceberg.

The expensive malaria defenses (defenses with serious side effects) are far more common than any single genetic disease caused by random mutations. Some 400 million people, 7 percent of the world’s population, have G6PD deficiency, which can be serious. About 250,000 children are born with sickle-cell anemia each year (which is very serious), while about 20,000 boys are born with Duchenne muscular dystrophy, one of the most common of all mutation-driven genetic diseases.

These malaria defenses became common because they gave an advantage to carriers (people with one copy of the gene variant); however, they cause problems (from mild to lethal) in people with two copies. This is unusual: We seldom see such crude adaptations in other functions. For example, humans don’t have an allele that makes carriers run faster while crippling those with two copies. Normally, genes work together in an efficient and coordinated way. We think that this evolutionary sloppiness exists because falciparum malaria, as we know it today, has not been around very long—perhaps as little as 4,000 years. The same appears to be true of the antimalaria genetic defenses. For example, the main African variety of G6PD deficiency is roughly 2,500 years old, and HbE in Thailand is roughly 2,000 years old.
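
The dynamics of such a balanced polymorphism are standard textbook population genetics, and a short sketch makes them concrete (the fitness values below are my illustrative assumptions, roughly in the range reported for sickle cell under intense falciparum malaria, not figures from the book):

```python
# One-locus selection model with heterozygote advantage (hypothetical
# fitnesses: AA = normal, AS = carrier, SS = sickle-cell anemia).
w_AA = 0.85   # two normal copies: vulnerable to malaria
w_AS = 1.00   # carrier: protected against malaria, little or no disease
w_SS = 0.20   # two sickle copies: severe disease

s = 1 - w_AA  # selection against non-carriers
t = 1 - w_SS  # selection against sickle homozygotes
print("predicted equilibrium HbS frequency:", s / (s + t))   # ~0.16

# Iterating selection from a rare starting allele converges to that value.
q = 0.01
for _ in range(200):
    p = 1 - q
    mean_w = p * p * w_AA + 2 * p * q * w_AS + q * q * w_SS
    q = q * (p * w_AS + q * w_SS) / mean_w
print("frequency after 200 generations:", round(q, 3))
```

The allele stays common despite killing homozygotes because carriers outreproduce everyone else, which is exactly the "crude adaptation" pattern the authors describe.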

Significant changes have occurred recently

Some changes can be seen even over the past 1,000 years. English researchers recently compared skulls from people who died in the Black Death (~650 years ago), from the crew of the Mary Rose, a ship that sank in Tudor times (~450 years ago), and from our contemporaries. The shape of the skull changed noticeably over that brief period—which is particularly interesting because we know that there has been no massive population replacement in England over the past 700 years. The height of the cranial vault of our contemporaries was about 15 percent larger than that of the earlier populations, and the part of the skull containing the frontal lobes was thus larger.

Ancestral humans may have been stronger than modern humans

The dystrophin-associated sweeping alleles that we see in the selection surveys (which do not cause disease) raise the interesting possibility of direct trade-offs between muscle and brain function in the recent past. We have reason to think that humans circa 100,000 BC had stronger muscles than today—and so changes in the dystrophin complex may have sacrificed muscle strength for higher intelligence.

Some open questions on language and hearing

Another very intriguing pattern involves new versions of genes that affect the inner ear. We wonder if this is a consequence of increases in language complexity recent enough that our ears (and presumably our brains, throats, and tongues) are still adapting to them. Or, since some of the sweeping genes involving the inner ear are regional and recent, could some populations be adapting to characteristics of particular languages or language families? It seems that all humans can learn any human language, but we don’t know whether everyone is inherently just as good as everyone else at learning every language, communicating in every language, or eavesdropping in every language.

We should expect societies to spend most of their time at the Malthusian limit.

Suppose that farming methods improve, so that productivity per acre goes up by a factor of ten. The population begins to grow—let’s say fairly slowly, with each family managing to raise 2.5 children (on average) to adulthood. The population is growing 25 percent per generation. In ten generations—about 250 years—the population has caught up with those improved methods. Living standards are low again, and population growth stops. But 2.5 children per family is by no means an especially high rate of population growth: In colonial America, the average family raised more than 7 children to adulthood. At that rate, population growth could catch up with a tenfold increase in productivity in just two generations.
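
The arithmetic in that paragraph is easy to check with a couple of lines (this just restates the passage's own numbers):

```python
# How many generations of compounding growth absorb a tenfold gain in
# food production? Growth factor per generation = children per family / 2.
import math

def generations_to_catch_up(children_per_family, productivity_gain=10):
    growth = children_per_family / 2
    return math.log(productivity_gain) / math.log(growth)

print(generations_to_catch_up(2.5))   # ~10.3 generations, about 250 years
print(generations_to_catch_up(7.0))   # ~1.8 generations (colonial America)
```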

The point is that even moderate rates of population growth can rapidly catch up with all plausible improvements in food production. Thus, populations should spend most of their time near a Malthusian limit, and there should be no lasting improvement in the standard of living. Malthus himself pointed out that factors other than food shortages can also limit population. Any negative factor that intensifies as population density increases can be the limiting factor—starvation and malnutrition are not the only possibilities. The key is which negative factor shows up at the lowest population density. We believe that the nature of the key limiting factor—which is not necessarily the same in all human populations—can have important effects on human evolution, including the recent changes we have been discussing.

Cooperation prevents war from limiting maximum size of society

War (defined broadly to include all kinds of interpersonal violence) might limit population before starvation occurred if it increased strongly as human density increased. If humans had been unable to form large, well-organized societies, war might have saved us from penury: In fact, war probably has been an important limiting factor in many species other than our own and was probably important for early humans. But humans can cooperate, particularly if there is something worth stealing. In a population with a storable surplus, state formation eventually limited local violence—and peace led to the poorhouse.

Farming was easier in Africa than elsewhere

The female-dominated farming system seen in much of Bantu Africa, in which women were largely self-supporting, indicates that producing food was fundamentally easier there than in most of Eurasia. In much of Eurasia, hard work from two parents barely allowed break-even reproduction. Disease may have limited the complexity of African state systems—but of course there were many other factors, ranging from Africa’s relative isolation from the rest of the Old World to elephants attacking the fields of pioneers.

Differential fertility between elites and non-elites

In many parts of the Old World, particularly among farmers living under strong states, famine and malnutrition were the main factors limiting population. With internal peace, population rapidly bumped up against carrying capacity. In those societies, people living on the bottom rungs of society were regularly short on food, so much so that they often couldn’t raise enough children to take their place. However, elites must have had above-replacement fertility, and their less successful offspring would have replaced the missing farmers. Gregory Clark, in A Farewell to Alms, shows that in medieval England the richest members of society had approximately twice the number of surviving offspring as the poorest. The bottom of society did not reproduce itself, with the result that, after a millennium or so, nearly everyone was descended from the wealthy classes. There is reason to think that this happened in many places (eastern Asia and much of western Europe, for example), but wealth was not acquired in the same way everywhere, so selection favored different traits in different societies.
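
A back-of-the-envelope version of Clark's point (the twofold fertility gap is from the passage; everything else is my illustrative assumption): compounded over a millennium, the lineage with half the fertility contributes essentially nothing to later generations.

```python
# Two lineages, one raising twice as many surviving children per family,
# compounding over ~1,000 years (about 40 generations of 25 years each).
w_poor, w_rich = 1.5, 3.0     # surviving children per family (2x gap)
generations = 40
ratio = (w_poor / w_rich) ** generations
print(f"poor-lineage contribution relative to rich: {ratio:.1e}")  # ~9e-13
```

This overstates the effect (real classes mix, and fertility gaps fluctuate), but it shows why even modest, persistent fertility differences can reshape a population's ancestry within recorded history.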

The evolution of government as social complexity increased

In the days before agriculture, governments didn’t really exist. Most of the hunter-gatherers were egalitarian anarchists: They didn’t have chiefs or bosses, and they didn’t have much use for anyone who tried to be boss. Bushmen today still laugh at wannabe “big men.” Perhaps we could learn from them.

But farmers do have chiefs: It goes with the territory. Grain farmers store food, and so they have something valuable to steal, which wasn’t the case among hunter-gatherers. Elites, defined as those who live off the productive work of others, came into existence in farming societies because they could. Interestingly, some peoples seem to have curbed the growth of elites just by growing root crops such as yams that rot quickly unless left in the ground, and thus are hard to steal. Another point is that the strongest early states often had natural barriers that made it difficult for “citizens” to escape the tax collectors. Egypt, with a strip of very fertile land embedded in uninhabitable desert, is a prime example.

On Genghis Khan and his legacy

The most spectacular example is Genghis Khan, otherwise known as the Scourge of God, the Master of Thrones and Crowns, the Perfect Warrior, and Lord of All Men. About 800 years ago, Genghis and his descendants conquered everything from Peking to Damascus. Genghis knew how to have a good time. Here’s his definition of supreme joy: “to cut my enemies to pieces, drive them before me, seize their possessions, witness the tears of those dear to them, and embrace their wives and daughters!” It appears that the last part of the list especially appealed to him. He and his sons and his son’s sons—the Golden Family—ruled over much of Asia for several hundred years, tending to the harem throughout. In so doing, they made the greatest of all genetic impacts. Today some 16 million men in central Asia are his direct male descendants, as shown by their possession of a distinctive Y chromosome. It just shows that one man can make a difference.

Cities were population sinks (I believe they still are in most of the developed world and Asia)

Remember that rulers, then as now, made mistakes, had bad luck, and in fact often had no idea what they were doing. Sometimes ruling elites lost wars and were replaced by outsiders, as in the Norman Conquest. Sometimes they got a little too enthusiastic about slaughtering each other, as in the Wars of the Roses. And often ruling elites just made bad choices—bad in terms of reproductive fitness, that is. The most common mistake must have been living in cities, which have almost always been population sinks, mostly because of infectious disease. By “population sink,” we mean that city dwellers couldn’t manage to raise enough children to break even: Cities in the past, before modern medicine and civil engineering, could only maintain their population with a continuing flow of immigrants from the surrounding countryside.

Silphium as abortifacient

Sometimes evolutionarily bad choices on the part of a ruling class are obvious. In classical times, there was a plant called silphium that grew in a narrow coastal strip of Cyrenaica, modern-day Libya. Its resin was used as a contraceptive and abortifacient. The resin appears to have been very effective, preventing pregnancy with a once-a-month pea-sized dose. Silphium eventually became too popular for its own good. Never domesticated, it was overharvested as demand grew. As it became scarcer, the price rose until it was worth its weight in silver, which drove further overharvesting and eventually led to one of the first human-caused extinctions in recorded history. However, during the centuries in which it was routinely used by the Greco-Roman upper classes, it must have noticeably depressed fertility, unless they were throwing money out the window.

Unflattering thought

If your ancestors were farmers for a long time, you’re descended from people who decided it was better to live on their knees than to die on their feet.

Marital distances

Hunter-gatherers can be amazingly mobile, and since most recent hunter-gatherers were spread very thinly, there often were no girls next door. So hunter-gatherers, especially in sparsely settled areas, had to find mates at a considerable distance. A generation ago, when many Bushmen were still wandering freely, their average marital distance was over 40 miles. This may not have been typical in prehistory. In the days before agriculture, when everybody and his brother was a hunter-gatherer, most lived in choice territories, not in the marginal habitats like the Kalahari Desert where that way of life has persisted. Population density would have been higher in those conditions than among Bushmen today, and people may not have had to search so far for a mate. However, it is clear that agriculture eventually led to crowding. Peasant farmers usually marry people living nearby, not least because there are plenty of people living nearby to choose from. In an example discussed by Alan Fix, based on census records from a densely settled part of rural England about 150 years ago, the average marital distance was only 6 or 7 miles.

Infectious disease killed many Amerindians

Even during the twentieth century, first contacts between Amerindians and people of European descent killed one-third to one-half of the natives in the first five years unless there was high-quality medical care available. This was the case during a period in which some of the worst Eurasian diseases (smallpox, bubonic plague, and typhus) were no longer major threats. For example, of the 800 Surui contacted in 1980 in Brazil, 600 had died by 1986, most of tuberculosis.

Judging from historical accounts, the fatality rate of smallpox was much higher among Amerindians than among Europeans. Roughly 30 percent of the Europeans who were infected died, whereas for the Amerindians, the fatality rate sometimes reached 90 percent. For example, in an epidemic in 1827, smallpox spared only 125 out of 1,600 Mandan Indians in what later became North Dakota.

Iron Age people may have circumnavigated Africa.

Hanno the Navigator (of Carthage) had explored the west coast of Africa, and Herodotus tells us of an earlier Phoenician expedition, sent out by the Pharaoh Necho around 600 BC, that seems to have circumnavigated the continent.

Europeans were not adapted to Africa

These difficulties persisted for centuries. British soldiers stationed on the Gold Coast would lose half their numbers in a year. Early explorers did no better. Mungo Park began his second attempt at African exploration (in 1805) with a party of forty-five Europeans; only eleven were alive by the time they reached the Niger. He was eventually killed at the Bussa rapids—by Africans, not parasites—but when his son Thomas went in search of him, he died of fever before getting far. We presume you’ve heard of Dr. Livingstone—Dr. David Livingstone, that is, the nineteenth-century British medical missionary to central Africa. His wife died of malaria during their travels, and the doctor himself later died of malaria and dysentery. John Speke and Sir Richard Francis Burton, nineteenth-century British explorers, sought and eventually found the sources of the Nile—but both men fell ill of tropical diseases. Speke suffered greatly when a beetle crawled into his ear. He removed it with a knife, but he became temporarily deaf and later temporarily blind. Consider that these are the famous explorers, the ones who enjoyed some degree of success. What happened to the unlucky ones?

About half of all languages today are derived from Proto-Indo-European

Our picture of the Indo-European expansion begins with a very rapid spread across the steppe as soon as the lactase-persistence mutation became common enough to allow the switch to a dairying economy. This rapid spread would have resulted in a population that spoke similar dialects over a wide region all the way from the Ukraine to the Urals—similar because there hadn’t been time for linguistic divergence. The wave of advance continued on into Europe, where dairying was ecologically competitive with early agriculture and produced a far more aggressive culture. Most likely, Indo-European culture also became more warlike as their mobility, superior numbers, and better nutrition allowed them to win battles more often than other peoples. Their victories, in turn, may have led to further advantages in military efficiency: Success feeds success.

Jewish intelligence

We’re not the first to notice this. Popular opinion has held that European Jews are smart for a long time. At the turn of the century in London, for example, Jews took a disproportionate share of prizes and awards in the school system. This was not the case in classical times: Surviving writings from the ancient Greeks and Romans offer no hint that the Jews were considered unusually smart.

Contrast with the Ancient Greeks

The Jews made no contributions to the mathematics and protoscience of the classical era. A fair amount of classical commentary on the Jews has been preserved, and there is no sign that anyone then had the impression that Jews were unusually intelligent. By “no sign,” we mean that there is apparently no single statement to that effect anywhere in preserved classical literature. This is in strong contrast with the classical Greeks, whom everyone thought unusually clever.

Jewish Persecution in the Middle Ages

And persecution was a very serious matter. Although the Jews of this region were prosperous, they were not safe. The first major crisis was the First Crusade of 1096, which resulted in the deaths of something like a quarter of the Jews in the Rhineland. Religious hostility, probably exacerbated by commercial rivalries, increased in Europe during this period, manifesting itself in the form of massacres and expulsions. This pattern of persecution kept the Ashkenazi Jews from overflowing their white-collar niche during the High Middle Ages, which otherwise would have happened fairly rapidly.

Dispelling a common myth

Members of the general public sometimes believe that individual genetic profiles do not necessarily reflect nationality. Somebody who is Swedish, for example, might be genetically closer to someone from Japan than to another Swede, according to this view of things. If this were true, it would apply to a group like the Ashkenazi Jews as well, even though they are not quite a “nationality.” However, that belief is false. In fact, it never happens that a person of one nationality is genetically closer to someone of a distant nationality than to his or her own compatriots. If you’re Swedish, every Swede (not counting recent immigrants) is genetically closer to you than any person in Japan.

On mutations common to Ashkenazi Jews

The mutations that so frequently affect Ashkenazi Jews are mysterious in another way. Many of them fall into two categories or clusters involving particular metabolic pathways: They affect the same biological subsystems. Imagine a fat biochemistry textbook, where each page describes a different function or condition in human biochemistry: Most of the Ashkenazi diseases would be described on just two of those pages. The two most important genetic disease clusters among the Ashkenazim are the sphingolipid storage disorders (Tay-Sachs disease; Gaucher’s disease; Niemann-Pick disease; and mucolipidosis, type IV) and the disorders of DNA repair (BRCA1 and BRCA2; Fanconi anemia, type C; and Bloom syndrome).

Heterozygote advantage may be related to Ashkenazi intelligence

Heterozygote advantage isn’t confined to genetic defenses against malaria; it can also occur in other cases where certain traits are favored by selection. It seems that the key to such cases is that there has been strong selection (carriers have a big advantage) applied over a relatively short time period. Over longer periods, mutations with fewer side effects eventually occur and win out. That fact that heterozygote advantage can favor other traits is important, because we think that most of the characteristic Ashkenazi mutations are not defenses against infectious disease. One reason is that these mutations do not exist in neighboring populations—often literally people living across the street—that must have been exposed to very similar diseases. Instead, we think that the Ashkenazi mutations have something to do with Ashkenazi intelligence, and that they arose because of the unique natural-selection pressures the members of this group-faced in their role as financiers in the European Middle Ages.

More evidence for that hypothesis

We have evidence—not definitive—that some of the mutations common among the Ashkenazim may boost intelligence. We looked at the occupations of patients in Israel with Gaucher’s disease, essentially all of whom were being treated at the Shaare Zedek Medical Centre in Jerusalem. These patients are much more likely to be engineers or scientists than the average Israeli Ashkenazi Jew—about eleven times more likely, in fact. There are similar reports on torsion dystonia, another Ashkenazi genetic disease. Ever since it was first recognized, observers have commented on the unusual intelligence of the patients who suffer from it.