Who is the true American?

Who is the true American?
Two women went to Solomon:
Each claimed the baby as her own.
The wise king issued his decree:
“Divide the child in two,” said he.
One woman wept, resigned her claim:
The mother that deserved the name.


2020 Visions

The increase rate in COVID deaths has slowed.
Maine residents report a welcome change
As California smoke-cloud has moved on.
Three traffic-stops in four are now non-fatal.

Pittsburgh’s new ocean-wall has been completed.
France heaves a sigh as locust swarm turns east.
More peaceful demonstrations breaking out.
The ceasefire rages on in Palestine.

Dallas spared most Sahara dust-storm damage.
President claims round-earth beliefs a hoax.
Diversity of virtual sports-crowds faulted.
Faculty purged from nation’s college staffs.

More Boomer generation euthanized.
China declares the Uighur race a myth.
The letter “n” erased from alphabet.
Putin invades the Baltic states, again.

Google permits some fact-based advertising.
Brazilian population disappears.
Two-headed calves are born in Worcestershire.
The moon has turned to blood and starts to fall.


An Educational Suggestion

The current wave of censurings, cancelings, dismissals, silencings and censorship in the academy actually has viable arguments behind it. These include the emotional safety and comfort of students and faculty, the desire for solidarity, the real menace of extremist and violent subversion, the protection of chosen identities, the dream of a harmonious human society, etc. On the other side there are also powerful arguments, including such liberal principles as freedom of speech, the need for vigorous argument and debate to find the truth, training in the art of persuasion, the comradeship of debate, and the pursuit of evidence however unpalatable if real facts are to be established, etc.

How do we resolve these respectable competing claims? It turns out that in a way we in America have already done so. It’s our hoary and honored distinction between religious universities and secular ones. Though of course there are many gradations between these pure categories, the principles that support our permitting both to exist are clear. As long as the choice of attendance is free, we and our courts hold that a religious school can require certain beliefs and commitments of its students and faculty, and a secular school can permit free speech and expression that can be offensive or blasphemous to believers.

Why not resurrect this distinction–for schools that set limits on the ideological content of speech and schools that do not? With religious schools, the student and beginning professor know that there are lines they cannot cross without offending the values and the feelings of their community. With a true secular school, the student and professor know that within the bounds of the nation’s law, no speculation, hypothesis, unearthing of awkward evidence, challenging of claimed evidence, logical disproof of existing moral customs, etc., is forbidden. One must like it or lump it, take it and dish it out.

Now of course this is an idealized picture. As we know, there are more or less religious schools across the country that preserve great traditions of reasoned debate and turn out students with cheerfully contrarian views and a very fine sense of objective fact. And there are many technically secular schools that inculcate a fairly narrow set of absolute beliefs and unchallengeable doctrines, with a curated set of contextless facts to support them. And the picture is complicated by the fact that our democratically elected government rightly believes that education is in the national interest and needs to be supported, but that it is not its business to support religious institutions. It backs up this principle by not taxing religious property, giving religious institutions at least the ability to support themselves. Government also in principle forbids institutionalized religious indoctrination in state schools. This compromise has worked out well. American universities are among the best in the world, and religious strife–one of the greatest killers on the planet–is rare on campus.

But at present it is clear that the compromise is breaking down. Many secular state universities and state-supported colleges–as well as many private secular universities that profess religious and ideological freedom–are now on the official and public level enforcing distinct and unmistakable sets of moral beliefs, among them “woke” theories of social construction and identity. For the most part those beliefs, if chosen and held by an individual, would be arguable and even beneficial, like religious moral rules; but they are challengeable, and even meaningless if they are enforced. Moral choice is by nature free. Like religious dogmas, ideological group commitments tend, if unchallenged, to become caricatures of themselves and the excuse for sadistic condemnation, character assassination, and show trials–and a useful path to promotion. In religious universities today such corruptions are controlled partly by the antiquity of their agreed sets of rules, partly by the competing presence of secular schools whose reputation for free thought they covet. But no such constraint exists in secular schools that have actually become ideologically committed on an institutional level–that is, are no longer secular institutions–while still claiming the support of the secular state.

Not that there is anything wrong with an ideologically committed school or university, as long as it abides by the law. Great religious foundations have created extraordinary monuments of knowledge within their walls. Bright minds can easily couch world-changing ideas in terms that placate the genial and lax inquisitors. But the ideological university in the guise of a free university is a problem. Students and faculty may be buying a pig in a poke, or to change the metaphor, may be victims of bait-and-switch. And they can find themselves the focus of a new kind of witch-hunt.

My proposed solution is this. Perhaps we should apply the same standards to the ideological university as to the religious university. Perhaps a university’s faculty and students should vote on whether theirs is to be a purely secular free-speech university or an ideologically committed university, with the same legal advantages and disadvantages as a religious one. Then those in the minority could leave for an institution better fitted to them.

In the free university no student or faculty member could be disciplined, fired, or expelled for the expression of ideas. Certainly there would be no license to cry fire in a crowded theater–the nation’s law already contains plenty of sensible rules that draw the line. And the ideological university would be permitted to police offensive speech, inappropriate ideas, the invitation of outside speakers, and the strict application of behavioral rules between people who differ by sex, gender choice, race, etc. Since its claim, like the religious school’s, is to obey rules higher than the rules of the state, it might lose state economic support but gain exemption from taxes.

Then students and faculty would know clearly what they would be getting into, and choose where to learn and teach on that basis. Nonconformists could gravitate toward free schools where they could trust that they would not be fired for controversial ideas; and social idealists could find committed schools with a safe haven for a loving community of like souls. And the clear distinction, as between the old religious and secular schools, might spur competition between the free and the committed institutions and advance the creation of knowledge.



A Divestment

I have just cancelled my Facebook account. I realize that in doing so I am giving up much that is good and distancing myself from friends who are very dear to me. But I can’t trust myself with it. I am retiring this year to become an emeritus professor, a bit like Lear when he steps down, prof in name only, and my teacher’s habit, to try to correct error and fix logic and point out new perspectives and unearth evidence and help people enjoy a book as a book, is exactly what social media doesn’t need at present.

Because everything is politicized now on social media. Even not entering the conflict is an aggressive act. In this nightmare year of plague and racism and fear and institutional folly and brutal violence by the lawless and the law alike, what is desired is simple recitation over and over of the creed of “this” side or “that.” Any concession to the valid points of one side or the other is seen as endorsement, triumphant putdown, conversion or betrayal. Any mild criticism of a view one otherwise endorses is heresy. Those who try to mediate–which was my intention in entering the fray–are the ones hated most as traitors by both sides. So I’m out.

This divestment is only part of a general metamorphosis–caterpillar to butterfly or butterfly to caterpillar? I’ve been slowly clearing out my institutional office and my home study, hundreds of books to go to libraries, fifty-three years of dusty knickknacks, five giant bins of papers, keeping perhaps 1/10 of them for a generously-offered archive.

I feel, as the cliché goes, as if an elephant were lifting its feet from my back one by one, a liberation that also includes a rush of memories of students and colleagues, and love for my flawed but very decent and increasingly brilliant university.

And as I enter my dotage or sanyasihood I am trying to rejuvenate my first vocation, of poet, and shred away what religious folks call the burden of self. I see a kind of liberation that might be possible; not less care for others, but more cogent care. A way of being a night-light for people, or a place to rest on a journey, or a suggester of ways to put things that display their holiness within.


The Plague War

I feel the pain in their incessant battle,
The urge to bite, but armor shields the flesh,
The trembling heart-shock of the feared rebuttal,
The wanted wound that keeps the hatred fresh;

I feel the murderous pity for the ones
The enemy supposedly still harms,
The warm companionship of well-shared guns,
The pride of race or wokeness, up in arms–

Arms that have blades upon their very helves,
That cut the striker while he strikes the stricken,
Weapons that turn themselves against themselves,
Medicine mixed to make the taker sicken.

And they’re good people too, made mad with grief:
God grant the damned election brings relief.


Nine Fallacies about Racism

The current narrative about racism is based on a set of propositions which, upon closer examination, are both factually unfounded and logically incoherent. Let’s look at these propositions in turn.

1. Racism is a social invention. This proposition draws on the sociological assertion that human reality is socially and culturally constructed, which is a partial truth at best and a toxic distortion at worst. Human reality is much more a matter of our biological construction, ecological and technological constraints and affordances, and individual choices. The social reality of a human being can be socially constructed in a fairly superficial way by multiple ethnic and customary habits, fashions, family traditions, peer groups, commercial advertising, and the cultural mix that goes into most humans everywhere, changing day by day. But it is our genes and their epigenetic settings, the laws of physics, chemistry, and physiology, our own understanding of them, the available technological and economic uses of them, and our own self-training and self-education, that are by far the most important influences on our thoughts and behaviors. If racism is socially constructed, it is only one meme among many, and dealing with it is just a matter of changing the current fashion. The most ardent upholders of the current narrative all recognize that this has not worked.

Xenophobia, the fear of strangers, has been shown to be innate by many studies in psychology, anthropology, sociology and other disciplines (not to mention almost all the literatures of the world that tell the story of one tribe’s victory over another). Infants already seek comfort with humans who are known to them and humans who look like the ones they know, and fear odd-looking strangers. The adaptive common sense of this tendency should be obvious. It is an indelible part of our makeup. The oxytocin reward system that makes us love our own group also tends to make us suspicious of others.

Xenophobia, like many human givens, can certainly be counterbalanced by other predispositions, such as the exploratory instinct, the lure of the sexually other, and the incentive of gain by trade. But xenophobia is always there and is indeed easily shaped by both an individual and his or her group into more specific forms, ranging from irrational support of one’s own sports team and hatred of the opponent to religious prejudice and inquisitions, jingoistic nationalism, civic pride, class conflict (which redirects our racist instinct into an economic conflict) and of course the theory of racism itself. Political partisanship uses it all the time, as “dog whistles” about monkeys and the “orange” slur often aimed at Trump clearly attest. Racism as a basic instinct did not need inventing. Racism was not taught but inherited in our genes; it is not a moral failing unless it is unchecked, and must be treated as we treat a hereditary condition like sickle cell anemia, or nymphomania, or Tay-Sachs, or autism: with compassion, education, and therapy.

2. Racism is a clear and distinct concept in itself. This impression can be easily corrected with a little examination of how the word is used. The word “racism” is itself incoherent, meaning several (sometimes contradictory) things: a belief that there are distinct races of humans (as opposed to various local groupings of human haplotypes); a habitual preference for one “race” over others; a belief based on bad science that one “race” is superior to others; a social and legal practice based on that belief; an irrational preference for one skin color, hair texture, or nose or eye shape over another; a political position to justify the economic oppression of one defined group by another. One can be racially hostile to another person who has the same skin color, etc., but who is simply identified as belonging to another race, as evidenced by Nazi racism against Jews and Slavs, the evident racism of the Quiché against the other tribes in the Popol Vuh, the Protestant Irish against the Catholics, the Japanese use of Korean “comfort women,” and countless other examples.

Racism is hugely varied in its manifestations. One can believe in the inferiority of members of one race but sincerely support their equal rights as human beings, as Lincoln did. One can love another race but regard it as basically lesser, as we do dogs. One can, sadly, prefer members of one’s own “race” but believe that another race has superior natural talents. Either as a bearer of the white man’s burden or of white guilt, one can be paternalistically protective of the “inferior” race; one can profess to seek the emancipation of other “races”, as did Marx and Stalin, while ardently despising them. “Scientific” racism was a standard socialist position for much of the last two centuries, leading to eugenics programs in many left-leaning nations.

3. Racism always involves contempt or a belief in the inferiority of another group. Again, not so. Here a hugely important distinction, virtually ignored by contemporary theorists, emerges. The quality of feeling that characterizes our racist distaste for the “inferior” racial Other is quite different from that which we feel about the “superior” Other. One can hate “another” race precisely because one believes it is superior, as with antisemitism in general and some strains of American anti-Asian prejudice, especially exemplified in college admissions policies. Race bias toward the “inferior” can range from genial condescension and paralyzing paternalism to animal fear, exploitation and brutal sadistic repression; toward the “superior” it ranges from secretly sneering compliance and sabotage to cold mass murder on an industrial scale. We seek to subjugate the “lower” race; but we seek to eradicate the “higher” race.

4. Racism is only a “white” phenomenon. This assertion is spectacularly wrong, and is a racist position in itself. Scientific racism, which replaced the normal folk unwisdom about perceived human differences in the eighteenth and nineteenth centuries, certainly could not have been invented without science. Most of modern science was created in Europe and North America by “white” people. Like the faulty phlogiston theory of combustion, it was a mistake. But it fed into other political and social incentives, such as the slave trade, colonialism, and socialism itself, which always sought ways to identify human groupings as more important than human individuals. The West made science available, and racism misused its mistake.

But racism in all other senses than the scientific fallacy is sturdily universal among human beings. History presents an overwhelming picture of clan warfare, tribal massacres, ethnic holocausts, pogroms, and enslavements. Whole populations of modern humans show marked differences between the inheritance of mitochondrial DNA through the mother and the Y chromosome from the father that can only mean a period in which one racial strain virtually exterminated all the males of another and raped its females. The history of the relations among the Shang, Zhou, Qin, Han and Mongolian tribes and their surrounding peoples is a story of successive racial exterminations. So too the establishment and collapse of the Roman Empire. Under the Caesars, darker-skinned Mediterraneans crushed fair-skinned Celts. Ancient Mesopotamia’s history of tribal holocaust is perhaps the oldest, vying with ancient Egypt’s. We have already briefly looked at the tribal wars of Mesoamerica, as we could also at the Andean civilizations. Polynesians subjugated Melanesians, and were subjugated in turn.

Apart from the Mongol invasion of Asia and Europe, perhaps the largest territorial story of racist subjugation and extermination is that of sub-Saharan Africa long before the white colonies were created. Beginning in the first century AD, Bantus from the general region of Cameroon swept eastward across Africa, wiping out hundreds of native societies including many Nilotic groups; another wave drove southward, subjugating or exterminating indigenous peoples such as the Pygmies and the Khoisan, arriving in what is now South Africa to meet the European settlers moving north from the Cape in the seventeenth century. Subsequent vicious wars between different Bantu-speaking tribes have continued to this day. Black racism against the brown peoples of the south and against other black tribes was always part of a norm that indeed included trade, cooperation, and great cultural achievements as well.

5. Slavery is a racist practice. This proposition is only half true. Slavery–the ownership of other human beings and their forced labor–has been practiced in one form or another by most human societies at one time or another. If we include practices that meet the definition, such as the belonging of children to parents, military conscription, serfdom, and in many traditions marriage itself, it is universal. It was normal practice in ancient and classical times to enslave populations conquered in war, and often this practice had little at all to do with race or perceived race differences. The combatants in Homer’s Iliad all explicitly belong to the same Greek-speaking race, connected often by ancient family ties, yet they cheerfully enslaved each other when they could. Poor people in many cultures sold their children as slaves to racially identical rich people, and the practice still continues in many places. Slavery only became a specifically racist practice with the slave trade, when the earlier relationship of belonging turned into a new relationship of chattel ownership.

6. The slave trade is a European invention. This is patently false. What we usually mean by slavery is the slave trade, or chattel slavery, which was not so prevalent as normal local slavery, though it too certainly took place in all known major civilizations. Slavery as a commercial industry does have a specific history, but it is not in any sense exclusively a European one. The slave trade we know as such is an African and Middle Eastern invention. Ancient southern Egypt sold Nubian slaves to northern Egypt and then later to Rome. In Egyptian wall-paintings pale-skinned Hittite and Amorite slave girls serve black Pharaohs. Bantu kingdoms sold their own slaves to other Bantu kingdoms, and began the systematic process of rounding up village populations to be sold. Mighty slave-trading nations like Mali, Ghana, the Ashanti and the Yoruba grew rich on the practice. Mansa Musa’s gold was legendary. Under the Arabs, beginning in the sixth century, and later the Turks, slave trading moved north and became a massive industry, and now it was European coastal populations as far north as Iceland that were being captured in millions by corsairs and Ottoman raiding parties and sold in the great world slave trading center of Istanbul. It is unclear whether the Slav peoples gave their name to the institution, or whether they took their name from it; the connection itself is eloquent.

It was only in the 1600s that the disease of the mass slave trade spread from Africa and the Mediterranean to northern Europe and the New World. It is a truly remarkable achievement of the European Enlightenment that so ancient, profitable, customary and universally accepted a practice should have lasted only two hundred years before its evil was recognized and banned by the major European nations, beginning with France and England and finally ratified by all nations of whatever racial makeup. In the slave-dependent United States that moral realization cost a bloody civil war that took the lives of three quarters of a million people. The effective figures in the battle against slavery were predominantly “white” cultural and political leaders in nations with predominantly European populations.

7. Enslavement and genocide based on race was a conservative idea. Just as scientific racism was generally a product of left-leaning progressives in the West, the opposition to slavery came originally from sources generally considered today as conservative—Whiggish supporters of business enterprise, Protestant religious moralists like William Wilberforce and William Lloyd Garrison, the Catholic Church, and the nascent Republican Party. Progressivist Fabians like Beatrice Webb, Bertrand Russell, and John Maynard Keynes, the intellectual leaders of British socialism, were ardent eugenicists, as of course were the national socialists of Sweden and Germany. In the communist Soviet Union whole populations were ethnically “cleansed,” including Balkars, Crimean Tatars, Chechens, Ingush, Karachays, Kalmyks, Koreans and Turks, who were reduced to second-class citizenship and deported to central Asia with huge loss of life. And in the Holodomor millions of Ukrainians were starved to death for refusing to work as slaves. Communist China even now is doing the same sort of thing to the Uyghurs.

8. Racism is a capitalist phenomenon. One of the most striking things about American slave narratives is that the escape from slavery is not ever conceived as an escape to a socialist world of paternal state control but to a place of free enterprise where the former slave could enter the marketplace and make a decent living by their own work. Here an important distinction needs to be made, between mercantilism, which is compatible with and indeed relies upon slavery (and thus on racist justifications for it), and capitalism, which inherently rejects slavery. Mercantilism works basically as an extractive industry that rifles the earth and the human body to create wealth for a few. It requires imperialist colonization, and it does not like innovations that disturb its process. Capitalism, as its name implies, replaces human brute labor with capital stock such as technology and marketing tools, replaces labor-intensive foreign raw materials whenever it can with common and easily obtainable local ones, and thrives on technological progress. It does so not out of the goodness of its heart but because its core principles, the creation of value and the reaping of the rewards of value-creation, rely on a skilled and flexible workforce and as broad a market (people who can pay for its products) as possible. Even Henry Ford, like other progressives an avowed racist, recognized that for the system to work his workers would have to earn enough to buy his cars. And that meant the creation of large working and middle classes and enough public education and medical care to maintain competent workers who would be flexible enough to keep up with accelerating technological innovation. Black former slaves flocked north to work in his factories, beginning the slow process of black economic emancipation in America.

The American Civil War was a war between the mercantilist South and the capitalist North. As everywhere else in the world where capitalism took root, the result of victory was the outlawing of slavery and the gradual integration of former slave populations into the market economy. Russia had abolished serfdom in 1861, and its capitalist middle class was expanding in the early twentieth century; tragically, its form of socialism after the Revolution replaced the old serfdom with a new one, that of the collectives.

Capitalism is the only reliable economic antidote to slavery.

9. Racism can be countered by identity politics. Identity politics, that is, the ideological cultivation of solidarity based on race (gender, gender identification, disability, etc.), has been put forward as a potent weapon against the oppression of a minority by the majority. Virtues unique to this given identity, heroic stories about it, and atrocities committed by the enemy can then be marshaled to organize enthusiastic support for violent resistance. The problem with this means of countering racism is that it is inherently impractical, for two reasons.

The first reason is that it is folly to attack and attempt to damage or destroy a group that is much larger, better armed, richer, and more organized, with its own rules, laws, material resources, and infrastructure. If the attack is ineffective, it is ineffective. If it is effective enough to be a real nuisance, it will be counterproductive, resulting in the delegitimation of its just claims and possibly increased repression. Hostilities based on inalienable group identity by definition exclude members of the majority who might see the justice of their cause, assist it, and join their numbers. In reality the success of mass protests against racism is based crucially on the forbearance of liberal capitalist societies from brutal repressive measures that are possible under socialist rule, on civil pacifist restraint by the protesters, and on the continued appeal to the painfully slow conscience of the oppressor.

Worse still, the weapon of race identity is not available to minorities alone. When the majority is insulted and tormented enough into identifying itself as a special race with its own heroic history, grievances, and special virtues, very terrible things can happen and have happened again and again. The apathy of the majority is a precious protection. It is not wise to awaken a sleeping dragon, as the Great Depression did in Germany after the Treaty of Versailles, in which Wilsonian “social justice” elevated ethnic identity into a political and moral imperative. Or as Trump did after the Great Recession, when racial political correctness and accusations had alienated the majority of the American working class.

The only effective remedies for racism seem to be four: religious solidarity that supersedes race, as with Catholicism and Islam; the capitalist free market, where individual profit supersedes racial solidarity and abundance overcomes scarcity and want; equal laws equally enforced; and the long slow process of liberal persuasion and education. Human beings of all kinds have a conscience, even those whose habits of thought involve racial categories. To dismiss anyone’s conscience as invalid or insincere is an evil. The only effective appeal to majorities whose very existence as a working majority is oppressive to a minority is the old-fashioned appeal to our common humanity and to its collective conscience. This was the vision of Martin Luther King, like that of Mandela and Gandhi–the great apostles of liberalism for our times.


Capitalism and Socialism: What do the Words Mean?

Capitalism and socialism are two important words used in the world of “political economy.” As a literary scholar and poet I became interested in the field’s wild variety, its rhetorical use of language, and its often surprising insights. For the last thirty years or so it has been a hobby of mine. This essay shares some of what I learned.

Let’s set aside the popular use of the words “capitalist” and “socialist” as insulting epithets meaning roughly “an evil and greedy oppressor” and “an evil and murderous tyrant.” We have perfectly good words for “evil, greedy, murderous” etc—evil, greedy, and murderous for example. Socialist and capitalist have meanings of their own that might be worth exploring.

In serious discussion “capital” is usually agreed across all shades of the political spectrum to signify the means of production, or the ways value or “utility” is created. The main argument seems to be about who should own it—individuals, individuals and voluntary groups, certain groups only, or the State. In ordinary usage, “capitalism” normally means the first two: private or private-and-corporate ownership. “State capitalism,” the ownership of the means of production by the state, is usually called socialism or communism. Absolute monarchical ownership, though it uses capital, is one extreme form of state socialism in this literal sense. If the leadership of a state is democratically elected, it is possible to claim that state ownership is ownership by the people in general, but it is hard to make this case when the current leaders already have total control over the livelihood of the voters. Ownership of capital only by limited groups, such as guilds, feudal dynasties, oligarchies, monopolies, and cartels is often identified with “mercantilism.” It is associated with colonial systems of asymmetric trade and extractive industries like mining and cash crop farming, rather than value-added industries in which the ingenious restructuring of raw materials outweighs the value of the raw materials themselves: such latter industries are characteristic of a capitalist economy.

Another popular use of the word “capital” includes the implied meaning of large, accumulated, stored and abstracted forms of the means of production, such as money, legal obligations, intellectual property, etc. In this sense no large human project, such as a highway system or electrification or a national health service, is possible without capitalism in one form or another.

“Capitalism” can mean either a set of theories about economic systems or the practice of organizing capital, including rules of ownership, contracts, property rights, etc. Theory and practice are not always the same, but they are hard to separate and the relationship changes all the time, so I’ll deal with them together.

The word is also often used to mean the marketplace itself, that spontaneous order by which demand for goods and their availability are communicated across an entire community by means of the price signal, marginal utility, and profit. The continuous feedback of buying, selling, hiring and client-making, controlled by property laws, insurance, bank rates, and stable currency, is immensely creative and adaptable, and may be humankind’s greatest gift. This system is the only one known in which competition, the creation of demand, and competitive advantage tend to create continuous innovation, the substitution of capital for brute labor, and huge increases of abundance.

Problems that often accompany such spontaneous orders fall into two main groups.

The first is that of externalities. If the inputs of an industry include clean air, clean water, a vital ecosystem, and a well-functioning civil society, and its output includes pollution, ecological harm, and social disruption, and if these factors are not added to the balance sheet, then the market can break down and the price signal becomes distorted. The “invisible hand” is crippled. Different nations have tried different legal and cultural ways of rectifying the balance and ensuring that such debts are repaid by remediation or compensation—of this more later.

The second problem is basically the mathematical power law that comes into force whenever any system is creative, that is, it grows and innovates. Advances lead to other advances, the stock of property increases by compound interest, and even very small differences in the rate of increase of wealth can exponentially magnify into huge inequality. Great increases in transportation speed, reliability, efficiency, and demand creation, and plummeting transaction costs, lead to cases where only a few really excellent providers can overwhelm local enterprises. Thousands of mom and pop stores are replaced by the discount chain, a multitude of local folk singers is replaced by a few superstars with mass record labels or streaming rights. The result is that the rich get rapidly richer, while the poor get richer more slowly, and can even in recessions, pandemics, or radical technological changes get poorer. Savage passions of envy, felt injustice, racism, hostility to immigrants, class and identity solidarity emerge. All modern nations, and even cities and states within federal nations, seek ways of redressing the balance without killing the golden goose of the capitalist marketplace.
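The arithmetic behind this divergence can be made concrete with a small illustration (the growth rates are hypothetical, chosen only to show the scale of the compounding effect): two fortunes that start equal but compound at five percent and three percent respectively come apart exponentially.

```latex
% Two fortunes, equal at time zero, compounding at slightly different
% (hypothetical) rates: W_A(t) = W_0 (1.05)^t and W_B(t) = W_0 (1.03)^t.
% Their ratio grows exponentially in the rate gap:
\[
\frac{W_A(t)}{W_B(t)} = \left(\frac{1.05}{1.03}\right)^{t} \approx e^{0.0192\,t},
\qquad
\left(\tfrac{1.05}{1.03}\right)^{50} \approx 2.6,
\qquad
\left(\tfrac{1.05}{1.03}\right)^{200} \approx 47.
\]
% A two-point difference in growth rate thus becomes a roughly
% fiftyfold gap in wealth over two centuries.
```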

The mildest way of redress is the enforcement of existing laws. The legal systems of most advanced nations are themselves spontaneous orders (sometimes called autopoietic systems, complex orders, self-organizing or emergent systems, etc). That is, such a system has feedbacks such as juries, appellate courts, adversarial advocacy, precedents, judicial review, critical journals, law books, law schools, etc, that dynamically alter, adapt, or improve the production of justice. (Science, with its own feedback systems of experimental protocol, statistical analysis, replication, apprenticeship, science journals, peer review and professional recognition by prizes and awards, is another such spontaneous order). The law can adapt to changing conditions and mitigate privilege (“private law”) that favors the fortunate.

When innovations in technology and economic efficiency threaten the natural and social environment and lead to runaway increases in economic inequality, and the existing laws cannot keep up, the political process reaches crisis proportions and there is a demand for radical change. One kind of change is a constant lure and tropism: toward various levels of state interference. And here the crucial word “socialism” enters the picture. Socialism as a theory generalizes the staggeringly complex landscape of an economy (in its dynamic setting of other economies), together with its even more complex culture, and all the individuals and groups and ideologies within it, under the term “society.” It largely dismisses the yet more complex biological makeup and natural predispositions of the population, assuming that these are socially constructed, and seeks to control the harms by more or less direct and/or coercive means.

As a practice socialism tends to be activist (while capitalism is often more laissez-faire). Thus socialism suffers the disadvantage of being held responsible for its experiments and innovations, its “dirigiste” attitude of taking charge to fix abuses and injustices, while capitalism can escape blame for developments it did not directly cause. But socialism by the same token is always in the situation of interfering with an entity much larger and more complicated and unpredictable than its own resources can handle, that wants to go its own way, and that produces much that is desirable.

Nevertheless, given the huge environmental and cultural dangers of uncompensated externalities and the inevitable gravitation of a creative economy toward a power-law distribution of capital with extremes of wealth and poverty, emergency changes in ownership and control may be plainly indicated, as they are in natural disasters and pandemics. Revolution is a worse-than-natural disaster that must be anticipated and avoided. Sometimes non-state means of damping its flames are possible: nineteenth century industrial Britain, for instance, had the extreme good luck of having John Wesley and his religious message of love and peace, and a tradition of great art that was shared by high and low alike. It managed to avoid the worst atrocities that took place in the French Revolution. American religious traditions fulfilled the same function, especially in the case of the Black churches. But the religious buffer postpones rather than eliminates the problem. So it gets left to government. When we cannot rely on cultural luck to provide enough patience to let the system right itself, state means may be necessary.

The available interventionist strategies fall into four general categories: state regulation of the economic and ecological consequences of progress, state control of the economy, state ownership of the “commanding heights” of the economy, and full state ownership of the means of production. Within this scheme the term “socialism” is applied very differently according to one’s ideology. Only extreme right-wingers and anarchistic libertarians would label state regulation—against breach of contract, theft, etc—as socialism. The business-friendly Right would call some state regulations “socialism” but attempt quite rationally to use others to legally rent-seek or create monopolies (socialism for the rich!). The moderate Right would accept some state control but describe most forms of state ownership as socialism. The moderate Left would save the term “socialism” for state control of the economy, hoping for or fearing a transition to “commanding heights” state ownership, and reserving “communism” for full state ownership. The far left would ultimately desire full state ownership, termed by them “socialism,” to be followed by a dreamed-of “withering away of the State,” true communism.

Socialism in the sense of full state ownership of the means of production has a disastrous history, one rightly called out by the Right. I should not need to argue this. East Germany and North Korea are good examples. On both major counts, negative externalities and distribution of power and money, they were disasters. They were hell on earth according to survivors and escapees. “Commanding heights” state ownership has often been a failure too, given the fact that the parts of the economy that are not state owned and that participate in the dynamism of the market fairly soon render the commanding heights obsolete, leaving them as a crippling drag on the rest and a center of oppressive bureaucratic power and surveillance, as has happened in China and India.

The problem for most advanced nations is to figure out where state regulation leaves off and becomes market-chilling state control under which the price signal no longer operates, and where state control in turn becomes state ownership. Generally speaking, if the government controls more than half the Gross National Product it effectively owns the means of production and, since the population relies on it for employment, the regime can perpetuate itself by holding the people’s jobs for ransom at the ballot box. Is there a slippery slope? Plainly Sweden felt there was in the 1990s when in a prolonged decline it turned its back on socialism and committed itself to a policy of free market capitalism with high redistributive taxes, an attractive position taken up by other Scandinavian countries and by American liberal opinion.

Some would argue that the heavy burden of taxes saps creativity and competitiveness in such nations. Others argue that their ethnic homogeneity, leading to a more general spirit of trust and cooperation, is a luxury that disqualifies them as models and that more multi-ethnic societies cannot achieve. Still others point out that such countries must rely on the military protection and economic markets of a world power, the USA. Germany has managed what looks like a good balance under the moderate liberal-conservative wing of Angela Merkel—another capitalist free-market economy with high taxes and good social services. But the dangerous populist reaction to Syrian immigration there indicates how fragile that balance can be; and as a member of the European Union it has had to lead a divisive policy of anti-free market austerity that looks sometimes like creeping mercantilism.

Other nations like Singapore and Taiwan have managed to have long-booming capitalist economies with low taxes and good public services. The world is wondering how: perhaps it is residual Confucian civil service ethics, which may dissipate with a new generation. China and India have become booming free-market capitalist societies, even while carrying state-owned non-market sectors on their backs, but with much corruption and huge future demographic and ecological problems. The US is currently the pioneer in fully confronting these issues, bitterly divided along partisan lines, but in the process breaking new intellectual ground; and the outcome is in doubt.

Without a full and, I would argue, an impossible understanding of a whole country, state-imposed means of mitigation for externalities and inequality are always haphazard, and often result in unforeseen consequences. (Think by comparison of early psychopharmaceutical attempts to deal with psychological distress and illness in a brain almost completely unknown except for broad statistical measurements.) A good example would be Lyndon Johnson’s Great Society programs and their later additions using the same body of theory. Jane Jacobs showed us the horrors of urban renewal; the war on drugs resulted in the imprisonment and disenfranchisement of about a tenth of the black male population; Bill Clinton had to ditch large parts of the welfare system that had broken up the Black family and created a plague of illegitimacy and hapless single parents. Unlike Mao Zedong’s Great Leap Forward, an even more ambitious and astoundingly bloody attempt at rectification, the Great Society did have its successes too, but its history should be a grave corrective to wilder “woke” hopes and simplistic economic phrenology.

It should be obvious by now that what most of the US candidates on the moderate left are calling for is not state ownership (socialism in its most common sense) but only very partial ownership or control of the means of production, limited to education and health care. The moderate right believes that intelligent and flexible regulation, together with minimum bureaucracy and modest redistribution, might work better. Both sides—or rather all sides—of the debate are necessary, lest in our ignorance and unhampered enthusiasm we make old or new mistakes and squander the huge progress already made. Or get on a slippery slope that will take us to the annoyingly proverbial Venezuela or to robber-baron Russia.

Given a better understanding of the terms “capitalist” and “socialist,” how might a classical free-market liberal or commonsense libertarian respond to capitalism’s problems of redistribution and externalities?

Modestly redistributive policies to deal with inequality are already in place, but are highly wasteful and inefficient. More might or might not help. Exciting possibilities mooted among the people themselves include an entirely unlegislated further expansion of corporate concepts of worker ownership, stock sharing, cooperatives on the Spanish Mondragon model, well-paid gig systems, and in the aftermath of COVID, work at home and the renewal of cottage industry. If the current social services programs could be replaced by a universal basic income, as some argue, many current distortions of the marketplace would disappear. Starving and ill-educated people make poor workers and poor consumers. Education is crucial, but much of the current state system is obviously broken and people are turning to alternative schooling of various types: home, charter, cooperative, private, and virtual. The ideological battle for local environmentalism has been more or less won. The efficiency of capitalism, seeking higher profits by exploiting renewable energy sources and former waste products, and by the miniaturization and customization of industry, is gradually reducing the ecological burden in many parts of the northern hemisphere. The northern forests are returning.

The free-market liberal or moderate libertarian I have postulated might in this light come to some odd insights. One might be that the current administration has adopted several radically anti-capitalist policies characteristic of left-wing and socialist governments: restrictions on immigration and the free flow of labor; the suppression of minority populations; protectionist restrictions on trade; mercantilist economics; nationalism and group identity politics; attacks on the Press; blatant philistinism; and the custom of mass rallies and enthusiastic propaganda. Its support is demographically similar to Lenin’s during the Russian revolution. “Populism” is in a sense the new socialism.

The libertarian argument might be that we should let civil society itself sort out the problem of inequality; many of the extremely rich in the US are convinced of its evil, culturally committed to reversing it, and actively supporting ameliorative efforts in medicine, education, media, etc. The Law is now fully alerted to the problems that our racist human nature has created by mass incarceration and the war on drugs. Trump and the far Left, not-so-strange bedfellows, could hamper the indigenous emergence of better forms of voluntary redistribution, and the evolution of the law and economic system to correct the ill effects of inequality and externalities.

Progressives would clearly take a different view. But a clear understanding of the linguistic struggle over the meaning of key terms like socialism and capitalism could help find a common vocabulary so that the debaters would not be speaking past each other. Perhaps the traditional humanities might be useful after all.


The Good and the Right

Laws, as many philosophers have opined, seem to be based on one of two foundations: what is good, and what is right. Very roughly, the distinction can be found in the difference between our own two traditions, of Roman law, and English common law; further back, between the ancient Hebrew ritual law, and the code of Hammurabi. Legal experts will, I hope, forgive the many exceptions to these generalizations, for their usefulness as an analytic tool of thought.

The distinction, even more generally, is between what is commanded of us by the gods or God (or, in later ages, by Humanity, by Nature, by Reason, or by Popular Will) on one hand; and what is required of us in the honest fulfillment of a contract, on the other. The former, which finds its Western origins in ancient Israel (and can be found also in the Confucian legal system of ancient China), sees law as a way to enforce the good—the good as a transcendent endowment of human society that we can partly intuit, especially if we are talented, trained, learned, and morally upright. The latter, which can be identified roughly with the Hammurabic, Solonic, and English Common Law traditions, sees laws as the way to make sure the humble contracts that human beings make with each other have the support they need over and above the natural sanctions built into our families, our markets, and our practical agreed systems of mutual trust. The first emphasizes the good, the second, the right.

The Jewish moral law was, for a time, enforced by the civil authorities of ancient Israel. But with the destruction of the Israelite monarchy in 587 BC, a profound reevaluation of the laws of goodness began, one that is still continuing in the Jewish community. God had evidently found something lacking, the Prophets said, in the literalism and the abuses of a law that afforded so much power to the authorities and left so little to the spontaneous free choice of just individuals. Perhaps the law of goodness was to be kept, not in the hands of armed enforcers, but in the human heart and soul enlightened by the inner voice of Adonai. Thereafter Jews found and punctiliously obeyed the laws of contract they found among other peoples, and kept their free ethical observance of the law of the good to themselves—until the coming of the Jewish State in the twentieth century, when with the revival of secular power the enforceability of orthodoxy once more became an issue.

Roman law, though again it was based upon a transcendent conception of the good, made many concessions to the low demands of commerce. It gave much authority over to local magnates, capos, and dons, so that in exchange for a local return to the patriarchal customs of the tribe, there would be a general concession to the legal supremacy of the Senate (and later, the Emperor). However, such laws did not provide for the increasing numbers of helpless indigents that are spawned by mercantile padrón systems everywhere.

Christianity, which began with a purely internal and voluntary law of the good—love thy neighbor—had inherited the inner ideals of the old Jewish moral law. But it was purged now, Christians believed, of a great burden of its literalism and legalism, and reinforced by the blazing hope of salvation and faith in the redemption. This new religion gradually created for itself through energetic private charity the role of the Empire’s welfare system. Finally the Empire itself simply could not manage without it, and was itself forced, under Constantine, to become the secular enforcer of Christian moral law. As the Roman Empire crumbled, the ideal of a society in which the highest moral precepts, enjoined by God, would be enforced by the State, burned brighter and brighter in the imagination of the world. The result was finally the birth of Islamic law, or the Sharia, in the seventh century AD. Sharia systematized and perfected the law of the good, and embodied one of the most beautiful, and tragically flawed, visions of society that our species had yet achieved.

All societies based on the enforcement of a law of good have tended to stagnate, wither, and eventually die. The Soviet Union is a nice test case: based on noble principles of humane goodness, and enforced by a perfect system of coercion, it lasted exactly one lifetime, full of unbelievable carnage, before cracking and falling into dust. It took the Holy Roman Empire much longer to collapse, because it was still “corrupted” by the contractual pragmatism of the law of the right, and was so inefficient and far-flung that it could not fully enforce its own principles. It took even longer for the Islamic empire of the Ottomans and the Confucian empires of China to sink into decay, but decay they did.

Meanwhile another conception of law was gaining ground: the law of right, rather than of good. The code of Hammurabi arose around 1700 BC to protect the golden goose of Mesopotamian business enterprise. Its practical wisdom would eventually leaven the mysterious prescriptions of Leviticus and the pollution-and-purification ritual of Roman law, and give Roman and Jewish civilization the tools to prosper economically. However, in its homeland Hammurabic law could not control the political ambitions of the Persian Empire, which overreached itself and fell victim at last to the Greeks under Alexander. Hammurabi’s core ideas had been incorporated into the new and improved version, the Greek laws of Solon (see The Classical Greek Reader, edited by Kenneth Atchity and Rosemary McKenna), where the laws of contract turned out not to need an emperor to preserve them, but to be equally enforceable by a democracy, a republic, or a legally constrained monarchy of free men. The principles of Hammurabi took on a new lease of life. But the Greek law of right was adapted only to the city, and was fatally vulnerable to strict limits of size: it consumed itself in inter-city conflict, was undermined by elitist Platonic yearnings for a law of the good, and was overwhelmed by the more pragmatic ecumenism of the Roman Republic. With the Greek city-states died the first great attempt at a law of right.

The second great attempt at a society based on a law of right—one that succeeded—arose in the north with the slow maturing of the neolithic rules of the Germanic tribes into a haphazard and populist collection of laws to secure and sanction the boundaries of a marketplace. As it evolved with its juries, its torts, its precedents, its limitations on monarchic power, its appellate review, its defense of the local rights of civil society, and its astonishing capacity for commercial and technological innovation, it came to dominate the world. Finally the Christian Church was forced to acknowledge the secular dominance of the law of right. After the agonizing upheavals of the Reformation, Christianity was able to internalize the law of good, as the Israelites had been forced to do two thousand years earlier, and abandon the inquisitorial attempt to enforce it externally by secular means. Render unto Caesar that which is Caesar’s, and unto God that which is God’s; and now that Caesar made no claim to a law of the good, but wanted only to enforce the right, the way was open for the Enlightenment compromise, in which the Church could have men’s souls if the State could claim men’s bodies and enrich—and tax—men’s pocketbooks.

But the yearning for an enforced law of the good could not be eradicated from men’s souls, and though two great regimes—Britain and America—had largely freed themselves from the law of good, Romanticism and the age of revolutions saw a massive swing toward the ideals of the higher moral law. The result was all the various contenders for the role of secular enforcer of world morality—Jacobinism, Communism, democratic socialism, Nazism, Fascism, and so on. Almost all despised Judaism and Christianity for having abandoned, as they saw it, the role of secular enforcer of goodness. They hated Judaism partly for having, in their view, succeeded so very well economically and culturally without the help of a state at all, and for having been able, they felt, to combine an inner, voluntary, community solidarity with an adroit and profitable expertise in the outer realm of contracts.

In the last few decades, however, in the light of the huge economic and cultural success of the nations that clung to the law of right, there has been a decisive swing back in that direction. Dozens of regimes have adopted free market policies, have at least in theory signed on to Hernando de Soto’s drive to give poor people the legal right to their own property (thus freeing them from moral peonage to a paternalistic government), and have submitted themselves to the contractual discipline of the IMF, the WTO, and the World Bank. That movement has indeed been challenged from many quarters: Islamic and Christian fundamentalism, a resurgence of coercive secular moralism under the paradoxical banner of “social justice,” and the new outbreak of populist nationalism. But perhaps the tipping point is already past.

Let it be said at once that the above is not an attack on the law of good, nor simply a paean to the law of right. The laws of good apply the more strongly to the individual conscience as the secular enforcement of them diminishes. They apply also to the free institutions of civil society (protected from each other, as they must be, by the law of right). The absolute claims of the law of good that make it so dangerous when armed with secular power are precisely what generate the decent conduct without which a good society is impossible.

But goodness, in my view and that of almost all ethicists, is essentially bound up with freedom. We cannot praise a coerced virtue, nor blame an enforced crime. The very core of morality, enjoined by God himself in almost all religions, is the spontaneous assent to divine grace. Paradoxically, to enforce the law of good is to destroy it. Paradoxically, the freedom to do evil—as long as it does not violate the right—is required for the freedom to do good. The law of right is at its center the law of freedom, and is thus, paradoxically again, the only thing for which one can rightly resort to coercion and war. All of this is not to say that the law of good must bottle itself up within the individual and the closed community, and render itself impotent. Instead it means that the law of good must win the world the hard way, by the noncoercive means of persuasion, gifts, and the marketplace—must win the population one by one by one. And it can only do so under the wing of the law of right.

Certainly, the laws of right do not make a perfect world. Adam Smith’s Invisible Hand, the miraculous pricing mechanism praised by von Mises and Hayek, that directs resources to where they are most needed, does indeed work, in the large statistical aggregate, when it is protected by the law of right. But it cannot deal with local tragedies, and it cannot by itself create the social and cultural capital that renders people capable of exercising political freedom in a responsible and objective way—nor does it claim to do so. And it cannot per se engender the marvelous overplus of heroism, sanctity, generosity and scientific and artistic integrity that society needs to advance. But neither can the law of good do so when enforced by coercion, for these things are free gifts and cannot of their nature be coerced.

Thus if religion is a natural human need and right, it is one that only the persuasive and noncoercive measures of civil society can guarantee. A civil society which did not do so would tend, if this analysis is correct, to wither on the vine—or at least it would be overwhelmed and outbred by devout immigrants with the greater cohesion, moral strength, and enthusiasm for life provided by their religion. But it would be more dangerous still for the state to enforce religion.

However, an advocate of the natural law of right might very well argue that America’s current anxiety about public displays of religion (except “secular” statist ones) may be deeply misconceived. To insist on them in government buildings is to try to make Caesar do the work of God and thus to betray a lack of faith in the Lord. To try to ban them in public places is just as dangerous, because it implicitly concedes that public space is government space, and thus violates the Constitution’s pledge that all rights not specifically delegated to government are reserved to the people; it is we the people who own public space, not the government. Just as government should not grow food, but should encourage the growing of food, so government should not take on the provision of religion, but should smile upon it as a natural need of its citizens. Further, the recent attempt to suppress by “political correctness” and speech codes civil society’s habits of giving honor to religion, and even its noncoercive but often very uncomfortable sanctions against irreligious and immoral behavior, may also be a mistake.


Evolutionary Aesthetics


I wrote this long essay over six years ago as a response to an attack by Joseph Carroll on my pioneering work in the field (in my book Natural Classicism, 1986, and other publications). I did not publish it at the time, as I dislike scholarly squabbles and had other fish to fry. But it contains a brief summary of the field that may be of interest, and some points that I believe still hold up well.




Evolutionary Aesthetics and Literary Darwinism: A Retrospective Memoir
Frederick Turner

Edward O. Wilson has recently modified his views about the nature of heredity and selection, and their relationship with social behavior. Wilson was arguably the founder of sociobiology, and it behooves us to take him seriously. It may be time to look back at the emergence and history of the movement known as literary Darwinism, which is now thirty years old, and assess its strengths, its weaknesses, and its possible future successes in the study of literature. It is a movement that includes such figures as Brian Boyd, Jonathan Gottschall, Robert Storey, and Nancy Easterlin, all of whom have published books squarely in the center of the field in the last few years (as of early 2013), and many others, such as Brett Cooke, Alice Andrews, Troy Camplin, Alexander Argyros, Denis Dutton, and Michelle Scalise Sugiyama. Other important figures have concentrated more on evolutionary aesthetics in general, such as Ellen Dissanayake, Walter Koch, Helen Fisher, Koen dePryck, Kathryn Coe, and Nancy Aiken, and major authorities from other disciplines such as Robin Fox, Semir Zeki, Lisa Zunshine, David Sloan Wilson, Harold Fromm and Gary Westfahl have had important things to say about literature from an evolutionary point of view. Ellen Dissanayake is especially important in terms of priority in concept: her early work concerned mainly other arts than literature, but was prophetic. Most recently, my own Epic: Form, Content, and History examines over sixty of the world’s grand narratives, synthesizing the new evolutionary understanding of literature with exciting developments in comparative folklore and anthropology and what I regard as the best insights of traditional literary studies.

I have been unable to find an earlier articulation of the basic principles of literary Darwinism as such than my own pair of long essays “The Neural Lyre: Poetic Meter, the Brain, and Time” (1983) and “Performed Being: Word Art as a Human Inheritance” (1986). These two essays appeared together in my Natural Classicism. They explored the implications for literary study of Edward Wilson’s Sociobiology: The New Synthesis, which had come out eight years before, combining it with ideas from other disciplines such as ethology, cognitive and perceptual psychology, anthropology and neuroscience.

At that time the poststructuralist and social constructionist movements, which argued that humans are basically blank slates, inscribed by incommensurable cultural structures—themselves the result of regimes of power and constructed knowledge—were still in full swing. The times were not hospitable to the novel features of literary Darwinism that I had articulated. These were: that literature was composed by an animal that had evolved and that had a nature of its own, that we could look especially to pre-human and human mating ritual as a sort of pressure-cooker for the emergence of the arts, that our nervous systems were themselves partly the result of our early cultural evolution as a genus, that we could thus find pan-human cross-cultural elements in the arts, elements whose presence had much to do with their perceived value to human beings in general, that there must be specific identifiable brain modules for the basic artforms—music, meter, visual representation, storytelling, dramatic mimesis, etc—and that aesthetic pleasure—the experience of beauty—was itself a capacity shared by humans and some other animals. This neurologically expensive aesthetic capacity must, I argued, have some objective value in assessing the threats and promises of any real world environment, since its universality indicates that it might be robustly adaptive for the survival of the species.

In the ‘eighties Ernst Pöppel and I discovered the three-second line in human poetry. For several years this was the only human aesthetic feature that was unambiguously pan-cultural, provably based on neuroanatomy, sufficiently idiosyncratic to be more than the result of coincidence or general neural function, clearly involved in ritual behavior and collective action, and obviously adaptive in terms of its powerful aid to memory. Since then music, visual pattern-making and representation, and narrative have become sufficiently well researched in terms of evolutionary features to be able to claim the same distinction.

In Alexander Argyros’ brilliant A Blessed Rage for Order: Deconstruction, Evolution, and Chaos (1992), the contradictions between an evolutionary aesthetic and the then mainstream social-constructionist critical consensus were masterfully outlined. I had earlier suggested that one of the ways the capacity for the pan-human experience of beauty could be explained in adaptive terms was that we are able to recognize situations in nature and each other that are ripe for the emergence of spontaneous order out of dynamical chaos, or were actually undergoing the symmetry-breaking and symmetry-reconstitution that are involved in such emergence, or were the result of such a transformation. Such a capacity might indeed be useful: a sort of general sensitivity to nascent fruitfulness that might apply as much to a fertile landscape or a fruiting flower or a promising morning for forage as to a good mating partner. Art and literature of high quality would share this characteristic appeal to our aesthetic instinct. Argyros took the suggestion several steps further, engaging the arguments of the deconstructionists who had, he felt, rightly recognized the semantic instability of the arts and literature but misinterpreted it as the absence of a transcendental signified rather than the signal of new growth and emergence. He was thus able to express in the terms of contemporary “Theory” ideas that questioned its foundations (or rather, its anti-foundationalist principles).

In the same year Barkow, Cosmides, and Tooby’s very important The Adapted Mind: Evolutionary Psychology and the Generation of Culture appeared. In this work the strongest and most comprehensive case for the causal relationship between biological evolution and human culture was made. Certainly the authors of the various essays the book contains were picking the low-hanging fruit, and were more aware of general and direct nature-to-culture elements of human life than of significant variation, cultural resistance to nature, conflicts and ingenious compromises between different evolutionary strategies, and culturally driven changes to our genetic nature. But they were fighting a pervasive social constructionism at the time, and the point needed to be made strongly. It was for later writers to show counter-examples and demonstrate how they might lead to a subtler evolutionism. The arts and literature were not especially stressed in this book, as it is there that the complexities and conflicts that might distract from the main point are most obvious.

Two years later—and eleven years after the theory was first suggested—the next major statement of the evolutionary case for literature appeared, Joseph Carroll’s Evolution and Literary Theory (1994). In the meantime I had elaborated many of the early propositions of the theory in Rebirth of Value (SUNY Press, 1991) and Beauty: The Value of Values (University Press of Virginia, 1991). The force of the change in contemporary notions of literature that the new perspective offered can be gauged by the difference between Carroll’s previous book, Wallace Stevens’ Supreme Fiction: A New Romanticism (1987), a traditional literary study of influence, and the books that followed (including Evolution and Literary Theory (1994) and Literary Darwinism: Evolution, Human Nature, and Literature (2004)). I felt at the time that Carroll’s new-found enthusiasm had led him into a reductionism and an obsolete biological determinism that would limit the relevance of the theory to the actual reader. But nevertheless Brett Cooke, my co-editor, and I included an essay of his in the first collection of literary Darwinist essays by various hands, Biopoetics: Evolutionary Explorations in the Arts (1999).

In Literary Darwinism: Evolution, Human Nature, and Literature, Carroll strongly attacked my work, claiming that I had weakened the Darwinist case by stressing the great plasticity of the human genome, brain and nervous system, and the importance of culture in human behavior, especially artistic behavior. He dismissed what he called my “cosmic evolutionism,” ignoring the fact that in many disciplines, ranging from cosmological physics through thermodynamics, chemistry, crystallography, biology, and sociology, spontaneous order had already been shown to emerge from damped, driven dynamical systems that were subject to the essential triad of Darwinian principles: persistence and/or replication of past structures, variation, and environmental selection. He likewise dismissed my “aestheticism,” missing the point that I was attempting to take the pan-human claims of aesthetic differences in quality seriously on their merits. Essentially, like his foes among the poststructuralists, he was denying the very concept of beauty and the aesthetic as meaningful categories; like social constructionists who explain beauty away as a euphemism for class, power or economic superiority, he explained it away as a cover for reproductive, survival or status drives. If one had a completely tin ear to melody and beauty, one might well be disposed to explain away the enthusiastic transports of those who love music, art, and literature. In one’s annoyance at their supposed superiority, one might invoke, to diminish them to one’s own level, some mechanism that matched one’s own motivations. Social constructionist critics and adaptationist critics alike are in danger of permitting readers to fall into this trap.

Carroll assumed a radically determinist position in general, and did not even bother to address the logic of Ilya Prigogine, who stated that “The more we know about our universe, the more difficult it becomes to believe in determinism.” Prigogine was one of many “hard” scientists whose work on the nature of cause had long been questioning traditional views of it from a variety of directions: quantum indeterminacy, irreversibility, chaotic feedback systems, the emergence of spontaneous order, mathematical difficulty and NP problems, and the constitutive unpredictability of a wide range of everyday phenomena. Prigogine’s formulation of the paradox of perfect predictability is elegant: it is possible only if all processes are time-reversible, past and future are meaningless, all time is eternally present, and cause can be reduced to logical entailment. Centuries before, David Hume had already shown the fallacy of this idea. The very causation that Carroll appeals to as the only reliable guide to understanding human behavior is, paradoxically, voided by the assumption of perfect determinacy, because that assumption also voids the reality of time, in which cause takes place. Further, Carroll’s rhetoric of appeal to “hard” science as opposed to airy-fairy unverifiable humanistic nonsense suffers from the fact that there are “harder” and much more unambiguously verifiable sciences than biology, and in those sciences the one-way cause-effect relation was increasingly coming to be seen as a relatively rare exception in a world of quantum nonlocal coherence in the microcosm, and nonlinear dynamics and self-organization in the macrocosm.

Carroll rejected my skepticism about biogenetic determinism in particular, and my insistence on the plasticity of evolutionary and developmental processes and their products. I had argued that the genes primarily generate abilities and potentials and open up capacities in humans and other living organisms, rather than shutting them down. I resisted the then-current sociobiological dogma that genes “constrain” behavior, suggesting that it distorted the picture: genes enable certain kinds of behavior (and not others), and in the case of humans, an extraordinary variety of kinds of behavior. “If genes do not constrain,” Carroll asked, “what is it they could possibly do?” Well, they could express proteins, for a start, proteins that make cells, that cooperate in forming and operating organs that are well adapted to deal with the astonishing variety of unpredictable conditions this planet presents—and often deal with them in a variety of different epigenetic ways, leading eventually to the establishment of distinct ecological niches and the bifurcation of species, and further evolution.

Carroll objected to the lavish generativity of the human genetic inheritance that I proposed. How, he wondered, can we explain literature if there are an infinite number of possible explanations? An explanation in straightforward terms of inherited drives toward reproduction, survival, and status would at least be simple to achieve: a “just-so” story, in Stephen Jay Gould’s words, would be better than the dizzying wealth of explanation that my position seemed to imply. What Carroll did not grasp was the idea of a very limited set of rules that, if they constituted a discrete combinatorial system (in Steven Pinker’s terminology), could generate an infinite, or at least practically uncountable, number of possible expressions, as is the case with organic chemistry, or a natural language, or chess, or music. He feared that in stressing this abundance of possible results, I was threatening any predictive power that a genetically based theory of literature might possess.

Quite the contrary: I was proposing a radical contraction in the number of possible generative structures, and simultaneously describing the explanatory power of that proposition in terms of a richness of possible result from those structures that matched the richness of the human and natural phenomena themselves. He missed the implication that the “constraint” was not at the level of what the human genome can do, but on the ways in which the genome could be allowed and instigated to do it (such as by deep syntax and recursion in language, and the basic genres, traditions, and technical skills of the arts). Even if, when activated, the human genome can do an infinite number of things, there is a finite number of ways in which that fecundity of the human inheritance can be activated. You can say an infinite number of sonnetty things in a sonnet, but if it has 25 lines, or its lines don’t scan or rhyme, it’s not a sonnet and, predictably, can’t say sonnetty things. There are trillions of possible good chess games, but if the knight can’t jump or the pawns can move backward, it’s not chess. The market can create an infinite number of products, but without contracts and exchange and rules it’s not a market. If RNA transcription doesn’t work, or proteins don’t fold, the DNA cannot make its amazing varieties of cells and organs. I was investigating the limited conditions for unlimited expression, a rather radical program in the context of modernist and postmodernist experimentation with form and genre.
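The arithmetic of a discrete combinatorial system is easy to demonstrate. The toy sketch below is my own illustration (the rule and the word lists are invented, not Pinker’s): a single phrase-structure rule over a lexicon of eleven words already yields hundreds of distinct sentences, and every slot added multiplies the total again.

```python
import itertools

# A toy "discrete combinatorial system" in Pinker's sense: one rule
# choosing words from a small lexicon. The word lists are invented
# for illustration only.
DET = ["the", "a"]
ADJ = ["wine-dark", "rosy-fingered", "swift"]
NOUN = ["sea", "dawn", "ship"]
VERB = ["summons", "outlasts", "carries"]

def sentences():
    """Generate every sentence of the form DET ADJ NOUN VERB DET NOUN."""
    for det1, adj, n1, v, det2, n2 in itertools.product(
            DET, ADJ, NOUN, VERB, DET, NOUN):
        yield f"{det1} {adj} {n1} {v} {det2} {n2}"

all_sentences = list(sentences())
# 2 * 3 * 3 * 3 * 2 * 3 = 324 distinct sentences from eleven words and
# one rule; add one more adjective slot and the total triples again.
print(len(all_sentences))   # 324
```

The constraint sits entirely in the rule, not in what can be said: the outputs multiply combinatorially while the generative machinery stays tiny, which is the point at issue.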

Rather than actually attempting to dispute my facts or my logic, Carroll chose to go ad hominem. He did not acknowledge the fact that all the ideas about literary Darwinism he espoused had already been discussed and critiqued by Argyros and me, and attempted to discredit our work in the growing community of literary Darwinians, for instance vetoing my inclusion on the advisory board of The Evolutionary Review. He labeled my work “poetic” and attributed what he took to be ambivalence about the adaptationist approach to my “spiritual aspirations.” In the fields of both critical theory and sociobiology this accusation would be the supreme dismissal, but I decided to let it lie at the time. I was exploring game theory and the emergence of quasi-moral sanctions in iterated nonzero-sum multiplayer games, literary economics, evolving ecosystems, self-organization, emergence, and other topics, especially epic, and did not have time or patience to publicly refute Carroll’s criticisms.

He had in fact both misunderstood and misstated my position, as well as Argyros’s, but I thought that fairly soon developments in the fields of epigenetics, neural plasticity, gene transcription and expression, the silencing and activation of genes, regulatory genetics, environmental effects on protein activity and stem cell function, evolutionary anthropology and other disciplines would be well enough understood that his notion of genes directly determining behavior would collapse under its own weight. But unfortunately Carroll’s apparent obliviousness to what has been going on recently in the biological sciences and in the study of collective behavior seems to have gone unnoticed or at least unremarked.

To do Carroll justice, there is a certain logic in his position. His view of science is akin to that of nineteenth-century scientism, in which the world is a machine in which the operations of the whole are completely reducible to the operation of the parts, and in which cause is always one-way, uniquely determinative, and in theory always ascertainable by observation and experiment. Mutual causality—nonlinear processes where A causes B but B also causes A—was not a subject for science and therefore could not be said to happen. The idea that a given set of initiating causes might have more than one outcome was forbidden. Thus any suggestion that the process of gene expression might be nonlinear and thus capable of producing many different outcomes—from the activation of the gene, its transcription into RNA and thence into proteins, the self-organization of proteins into cells and cells into organs, and the development of such organs as the nervous system and brain into functional wholes—was unscientific. More to the point, from Carroll’s point of view it would tend to undermine the strict connection between genes and behavior.

What must be especially discomfiting to Carroll is the fact that probably the most fertile general area of scientific study these days is of precisely such cases of “branchy” and “looped” causation in nature, usually in much larger systems where all the elements are causing each other, often creating runaway unpredictable positive feedback and possible emergent orders.

Carroll had in mind the prospect of founding and leading a large quasi-scientific project that would consign both traditional literary criticism and postmodern theory to the dustbin, and explain literature as the expression of survival, status, and reproduction drives, themselves genetically hardwired. Nature must determine nurture and its product, culture. Any questioning of the gene-causes-behavior dogma would be fatal to his project. If the methods of nineteenth-century experimental science could not uniquely explain a given apparent phenomenon, then that phenomenon could not exist. As Noam Chomsky once observed in another context, the logic is equivalent to that of the drunk in the old joke, who had lost his keys and was searching for them beside a lamp post. A policeman comes over and asks what he’s doing. “I’m looking for my keys,” he says. “I lost them over there.” The policeman looks puzzled. “Then why are you looking for them all the way over here?” “Because the light is so much better.” If you’ve got a hammer (e.g. the proposition that we are the puppets of our genes), everything looks like a nail. The ichthyologist with the one-inch mesh net claims that there are no half-inch fish in the sea. Classic scientific method, admirable and still hugely useful, has the unfortunate psychological effect on its practitioners that they tend to turn the method of investigation (reduction) into its conclusion (that the phenomenon under investigation is reducible).

Carroll had thus unwittingly set up for himself a list of propositions, the falsification of any one of which would invalidate his whole project. If at any point the complex processes of inheritance, activation, transcription, expression, histone activity, cytogenesis, embryonic and post-embryonic development, adaptation to a changing environment, reproductive mate choice, neural plasticity, socioeconomic interaction, individual and cultural choice, and learning itself could not be shown to be directly and uniquely caused by the genes (rather than by interaction with a natural and social environment, emergent forms of self-organization, feedback effects, the internal logic of trading and games, “spandrels,” autonomous self-legislating systems, holistic global patterning, etc.), then his argument must be fundamentally flawed in its method, and must fail. A sticky wicket, as the British used to say.

But this memoir has more interesting game than disposing of one more oversimplification. Carroll’s work (which has its merits) will be more useful as a straw man, a convenient and understandable voice for what we might call the lumpen-Darwinist position, than as an opponent. And it will be useful to have a baseline to indicate where newly established fact or ignored established knowledge differs from pseudo-Darwinist conventional wisdom. For there has emerged a new picture of biological inheritance and of the relationship between an individual organism’s experience/activity and its genes, a picture that is of profound importance for aesthetics and criticism. It is an emergentist position, recognizing that in the real world multiple entangled causes are involved in almost any event, and multiple events can be caused by any set of causes, but noting also a common characteristic, that dynamical feedback systems of this kind are prone to cross distinct identifiable thresholds where new forms of organization and causal dependence can emerge.

The emergentist picture is not one that renders invalid the evolutionary study of the arts and literature, but rather one that begins to properly accommodate all the meanings and experiences of real artists and audiences. It emphatically does not constitute a return to the era of social constructionism. But it also does not regard human artistic culture as a veneer concealing the “brute” quasi-Freudian drives, as lumpen Darwinists seem to believe (their work essentially revives early Freudian ideas in a new guise, simplified, purged of psychiatric evidence, and oblivious to the psychological discoveries made since). In fact, because the new emergentist biosocial synthesis that I and others are exploring recognizes human nature as itself having been shaped by human social and cultural factors, it makes a much more powerful argument than does lumpen-Darwinist crypto-Freudianism that the adapted mind cannot be ignored by literary criticism and theory.

Lumpen Darwinists evidently believe that, as pre-cultural animals in the hoary old seventeenth-century “state of nature,” we evolved a fixed set of genes rigidly controlling drives that somehow later got crammed into a façade of symbolic culture but remained untouched by it. But the genetic, developmental, archeological, and anthropological evidence shows a different picture. Let us look more closely at the various stages that must exist between the inert gene in the chromosome within a cell and the behavior of an individual animal (including a human one).

First of all, the gene has to be turned on, and the process by which this happens immediately involves a host of feedbacks: between the maternal and paternal alleles, between the gene and its intervening intron sequences, between the gene and the regulatory genes that can command whole suites of genes to be silenced or expressed through gene methylation. For the gene to be turned on it must be transcribed into RNA, which involves further feedbacks with the environment and with the whole organism’s own actions and responses, sensitively transmitted through histone acetylation and other processes. Though usually DNA writes to RNA, RNA can write to DNA in the form of endogenous viral inserts, transposons, and other reverse processes. The RNA must make proteins that must fold correctly to be able to function, and again both the endogenous and exogenous environment (including the result of the choices of the whole organism) can play a part in allowing this to happen or to be aborted. The proteins must find each other and organize together to make a cell, and cells must detect their local topological position relative to their neighbors and to the shape of the organ they compose, and act accordingly, again with much feedback from outside and within. Further feedbacks exist between the cell and its neighbors, between the cell and the environment (nutritional stress or abundance, temperature, chemical change, parasite attack, viral load, etc.), and between the cell and its own memory of its previous states.

This whole realm has been called the “epigenome,” and its study is epigenetics: “the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence.” Unlike the genome, whose evolution is by and large Darwinian, the epigenome evolves in a Lamarckian fashion, by the inheritance of acquired characteristics. There is in any species an archive, often very large, of unexpressed genes, together with the potential somatic structures and behaviors they specify. Genes themselves make up only a fraction of the complete DNA complement of the chromosomes: the introns that punctuate the genes are still largely a mystery in terms of their strict function, if any. Most genes, and almost all of the intron sequences, are silent; coherent sets of genes can be toggled off or on by regulatory genes such as the HOX genes, so that the options of structure and behavior can be customized to fit the experience or choices of the individual animal (or plant, bacterium, etc.). What is especially interesting is that these custom combinations are themselves significantly heritable, and thus in turn subject to adaptive selection.

But the rate of this adaptive process is staggeringly faster than that of genetic change in the underlying DNA. The point is, in the words of the Cold Spring Harbor consensus of epigenetics experts, that there can be a “stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence.”

The picture here is not one of a single cause (the gene) generating a single behavior, but of a staggeringly dendrified set of developmental options, all in nonlinear mutually causal relations with the activities of other genes and of whole ganged sets of genes. We are not the helpless product of our genes: our choices determine not just what we do, but what we are and will be, and what our descendants will be. This is not a rejection of biology, but a biological fact, and it is something that is at the center of most of the world’s great literature. Certainly the lumpen Darwinist’s genetic drives toward survival, reproduction, and status are human motivations, but they are well recognized already by literature, religion and the arts, and usually counterposed dramatically against the emergent (but equally biological and evolutionary) epigenetic drives toward self identity and fulfillment, curiosity, gratitude, love, community, creative art, and social and cultural meaning.

When Carroll was writing his earliest book on literary Darwinism in 1994, he might not have been aware of archeological research that was indicating a much greater age for Homo sapiens than was previously thought. And since he was also apparently unaware of research in epigenetics that hugely accelerated the speed of possible phenotypic change, he might be forgiven for assuming that there simply was not enough time for significant changes to happen to the genome and thus to the human behavioral repertoire as a result of sociocultural selection pressure. We were the naked ape, the trousered ape. Others, like Ellen Dissanayake, who had been keeping up with developments in the field, were already realizing that there was plenty of time for us to domesticate ourselves.

Our genus was making tools, tools that required social learning and organization, more than two million years ago; members of Homo sapiens were apparently making art as far back as the species existed, if the available evidence can be trusted (the Blombos Cave ochre markings are 75,000 years old). Homo sapiens is now thought to have been around for at least 200,000 years. Over evolutionary and geological time the major determinant of our individual survival and our reproductive success was whether we could fit into the sign conventions, cultural norms, and communicative media of the group we lived in, social systems that we ourselves were individually modifying as we went along. Certainly our physiology, including our brains, was determining what we could do culturally. But what we were doing culturally, including art, ritual, play behavior, that promoted cooperative success in hunting, gathering, mating and the creation of technology, was exerting an overwhelming selective pressure on our epigenetic inheritance, and thus changing our neurophysiology. The study of animal behavior shows the same feedback between biogenetic and socio-cultural forces.

Not that the “old Adam,” as Robin Fox describes him in his fine book The Tribal Imagination: Civilization and the Savage Mind, was ever banished: he remains still one of the epigenetic pathways we could take (though to be strictly accurate, there are probably plenty of “old Adams,” depending on epigenetic factors, including even quasi-reptilian ones: sociopathy may be one of them). Given the extreme metabolic cheapness of storing genetic information, and the trouble and expense of deleting it, the chromosomes seldom throw anything away, but keep old behavioral strategies, silenced, for a rainy day when they may again be useful. Species that went to the trouble of purging their archives might not have survived environmental shocks that archival material could have anticipated and dealt with if activated. Cloning, or asexual reproduction, effectively prevents new material from being added to the archive, and is a risky strategy for many species with unpredictable habitats. Contemporary agronomy is now worried about precisely such issues.

What the chromosome does in “lazily” neglecting to edit its memories, many of us do with our computer archives, allowing them to accumulate as long as they can be easily retrieved and cheaply stored in a drive or the Cloud. Biologically we can still retrieve the mammalian virtues that lumpen Darwinists like. Extreme stress, especially in childhood, can activate old defensive and offensive systems, as can perceived injustice, bereavement, or sexual frustration. On the other hand, such conditions can also force more eusocial strategies to kick in, such as sacrificial love or noble generosity. It is precisely such moments that are the stuff of literary fiction.

The seven deadly sins are the seven mammalian virtues: sloth (avoidance of metabolic expense), wrath (costly sanctions against defectors), lust (reproductive quantity), gluttony (nutritional survival), envy (competition with conspecifics), covetousness (territoriality) and pride (self-reward). The point here is that their sinfulness for Homo sapiens is as biologically real as their virtuousness for mammals in general. Meanwhile, the Aristotelian virtues of courage, temperance, liberality, self-respect, magnanimity, patience, ambition, wit, truthfulness, friendship, modesty, and righteous indignation, and the Christian virtues of faith, hope, and charity, are not super-biological impositions upon brute biological drives but are as biologically rooted as their opposites, and choices of action that epigenetically affect gene expression can determine whether an individual, and to some extent his or her descendants, embodies them or not. Females that selected male mates with the Aristotelian or Christian virtues could have been making a better bet on the future than those that selected ones with the mammalian virtues; though the mammalian virtues ought to be there to fall back on if necessary.

Odysseus, as man and mammal (he clings beneath a ram to escape Polyphemus’s cave, and is sometimes described as like a mountain lion), experiences and enacts both the mammalian and the human virtues. But both sets are equally natural and biological. The survival of his son Telemachus, and thus of his father’s genes, depends on Odysseus’ eventual choice of the human set: he has the self-restraint to avoid the fate of his crew, the ambition to leave a public name, and the loyalty to marital friendship not to stay in the cave with beautiful Calypso. In one sense, then, the human virtues are even more natural and biological than the mammalian ones, since they better promote reproductive success. Gilgamesh must give up the practice of ius primae noctis to find the friendship of Enkidu; but their non-reproductive bond, which builds the walls of the city, preserves all the genes in Uruk.

At every inflection point in the journey from gene to phenotype there are branch-points, where, depending ultimately on environmental vicissitudes or individual choices, a different menu of behavioral strategies is offered. Pace Richard Dawkins, those branch-points could well define a hierarchy of successively more holistic replicable units upon which variation and selection can take hold and evolution take place: the gene, the RNA strand (thought to be the original form of life), the protein, the cell, the organ, the individual organism, the biome, the ecosystem. Certainly we can describe the organism as the gene’s way of making another gene: but to be fair we should also consider that DNA was originally RNA’s archive or memory for making RNA, that a gene could be thought of as the cell’s way of making another cell, that chromosomes are an ecosystem’s way of replicating itself over time.

Among many of the leading lights in the field of selection and evolution, the concept of multi-level selection has become the new paradigm. In a sense, the field is as old as Darwinism itself. Darwin was fascinated by symbiosis and commensality: a commensal pair of species (or by extension a larger interdependent ecosystem of many species, as James Lovelock points out) would itself be a replicating unit capable of variation and selection, upon which evolutionary adaptation could work. Group selection concepts—ranging from kin selection through reciprocal altruism to cooperative trading communities policed by costly signaling, reliable recognition of in-group members, group sanctions against defectors, and deceit detection—have proliferated. William D. Hamilton, building on the ideas of J. B. S. Haldane and Ronald Fisher, proposed a testable theory of kin selection as long ago as 1964. Robert Trivers first proposed the concept of reciprocal altruism in 1971, though strong opposition from the likes of Dawkins for a while held back the development of the field. But Trivers has been vindicated, if the recent proliferation of research and theory building out of group selection and multi-level selection is any evidence. Richard Lewontin on the mutual causal relationship of an organism with the environment, David Sloan Wilson and Elliott Sober on the logic of group selection, Brian Skyrms on replication dynamics, and Robert Wright on “nonzerosumness” have profoundly and coherently complicated the field of evolutionary biology.
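Hamilton’s 1964 condition for kin selection is compact enough to state as a one-line function: an allele for altruism is favored when relatedness times the recipient’s benefit exceeds the actor’s cost. The sketch below is my own illustration, with invented numbers.

```python
def altruism_spreads(r: float, b: float, c: float) -> bool:
    """Hamilton's rule: an allele for altruistic behavior is favored by
    selection when relatedness (r) times the fitness benefit to the
    recipient (b) exceeds the fitness cost to the actor (c)."""
    return r * b > c

# Full siblings share half their genes on average (r = 0.5), so a
# sacrifice costing the actor one unit of fitness pays off only if it
# buys a sibling more than two units. (Numbers are illustrative.)
print(altruism_spreads(0.5, 3.0, 1.0))   # True:  0.5 * 3.0 = 1.5 > 1
print(altruism_spreads(0.5, 1.5, 1.0))   # False: 0.5 * 1.5 = 0.75 < 1
```

The testability Hamilton's theory is credited with above comes precisely from this inequality: it predicts which sacrifices selection should and should not favor.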

Skyrms’ work is especially interesting. His computer models of multiplayer iterated nonzero-sum games among replicating computer programs that can exchange “genes” specifying elements of strategy have provided a profoundly illuminating picture of the emergence of signaling, group sanctions, and even a sort of proto-ethical social contract. If even mindless computer programs can—through competitive/cooperative/coalitional interaction—generate something that looks a lot like values and mores, how much more plausible is it that intelligent animals could do so? The point is that even strict deterministic computation can produce emergent properties that do not resemble their own microstructure and past stages of development, and that are causal in turn.
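A toy reconstruction conveys the flavor of such models (this is my own sketch, not Skyrms’ actual code). Three strategies in an iterated Prisoner’s Dilemma (always-cooperate, always-defect, and tit-for-tat) compete under replicator dynamics: each generation, a strategy’s share of the population grows in proportion to its average payoff against the current mix.

```python
N_ROUNDS = 10   # rounds per iterated match (an illustrative choice)

def match_payoff(a, b, rounds=N_ROUNDS):
    """Total payoff to strategy a against strategy b over one iterated
    Prisoner's Dilemma match (per-round payoffs T=5, R=3, P=1, S=0)."""
    payoff, last_a, last_b = 0, "C", "C"   # tit-for-tat starts by cooperating
    for _ in range(rounds):
        move_a = "C" if a == "ALLC" else "D" if a == "ALLD" else last_b
        move_b = "C" if b == "ALLC" else "D" if b == "ALLD" else last_a
        payoff += {("C", "C"): 3, ("C", "D"): 0,
                   ("D", "C"): 5, ("D", "D"): 1}[(move_a, move_b)]
        last_a, last_b = move_a, move_b
    return payoff

strategies = ["ALLC", "ALLD", "TFT"]
shares = {"ALLC": 0.1, "ALLD": 0.6, "TFT": 0.3}   # defectors start dominant

for _ in range(200):
    # Replicator dynamics: each strategy's fitness is its payoff averaged
    # over the population mix; shares grow with relative fitness.
    fitness = {s: sum(shares[t] * match_payoff(s, t) for t in strategies)
               for s in strategies}
    mean = sum(shares[s] * fitness[s] for s in strategies)
    shares = {s: shares[s] * fitness[s] / mean for s in strategies}

print({s: round(shares[s], 3) for s in strategies})
```

By the end of the run the always-defect share has effectively vanished and tit-for-tat, with a residue of harmless unconditional cooperators, owns the population: a "sanction" against defection emerges from nothing but payoff and replication, which is the emergent moral order the models illustrate.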

But the plasticity of behavior does not cease here. Neural Darwinism asserts that the brain builds itself in the first place, in a succession of competitive/cooperative processes ranging from inter-cellular through dendritic to synaptic interactions. The passage of information itself, as Donald Hebb pointed out, alters the shape of the synaptic cleft. As Eric Kandel (in learning and molecular memory studies), Robert Turner (in functional MRI brain mapping), and Lorimer Moseley and Peter Brugger (in phantom limb studies) have shown, the brain is capable of significantly changing itself, even on the scale of observable gross anatomy. Turner’s team at the Max Planck Institute for Cognitive Neuroscience in Leipzig has demonstrated robust differences between the growth patterns of the motor cortex of pianists and violinists in training. This plasticity extends to quintessentially cultural activities. In an unpublished paper, “Ritual Action Shapes our Brains: an Essay in Neuroanthropology,” delivered at Cognition, Performance, and the Senses, a Wenner-Gren-sponsored workshop, Turner concludes:

“Thus ritual symbolism provides sensory experience that powerfully links autonomic activity with conscious thought, in a highly structured way relevant to important societal concerns. It induces physical responses that are experienced as complex emotions, which render particularly salient and memorable the conscious reflections or teachings made at the time that the ritual symbols are brought into play. The collective representations comprising a particular culture become embedded as neural representations in the brains of the participants. As such, they are embodied in enduring material changes in the structure and connectivity of brain tissue.”

In the field of psychology, examples of the astonishing versatility of the human brain are everywhere. Roger Sperry’s split-brain research showed that we normally function with two potentially separable centers of perception and judgment, the left and right hemispheres. More radically, dissociative identity disorder demonstrates that the same brain tissue can support numerous distinct personalities. We are sometimes strangers to ourselves during mood-swings occasioned by stress, change, or falling in love.

Nevertheless, we are not protean beings (though the results of our activities within such constrained generative systems as language and music can indeed be protean). We have a nature; we are recognizably human to each other across the globe and the centuries. The work of the literary adaptationist is to acknowledge fully this apparent paradox and to set about the hard work of identifying the rules of the games by which we achieve our multifarious individual and cultural achievements. What we find, I argue, is that we are not alone in this work: human culture, the arts, philosophy, and religion themselves have already been at this for millennia. Indeed, the very inquiry as to our nature has itself been one of the adaptive forces preferentially affecting our survival and reproduction, and great literature is aware of this.

In Carroll’s attack on my work he pooh-poohed my aphorism: “We have a nature; that nature is cultural; that culture is classical” as an empty paradox. But it is exactly what contemporary anthropology, neuroscience and archeology are discovering. The requirements of social functioning—the social emotions, the political skills, the ability to recognize individuals and predict their behavior, the awareness of one’s own role in the group, the ability to perform ritual actions, accurate signaling, signal recognition, deceitful signals and deceit detection—were as determinative of the direction of adaptation as were the need for the right sort of foot and pelvis for bipedalism. The mind is adapted, but it is adapted by earlier culture.

Even in the study of the psychology of social animals, cultural differences between groups of conspecifics are significant—differences that over time could lead to bifurcation between available habitats, and eventually to the separation of strains, genetic incompatibilities in cross-breeding, and speciation. No “mystical,” “poetic,” or “spiritual” explanation is necessary to understand the astonishing multiplication of bird-of-paradise, bowerbird, or monkey species, all based on females’ arbitrary individual preferences for a certain sort of color, motion, rhythm, or structure, and males’ skills at presenting their real or illusory advantages and talents. Meerkats, chimps, macaques, dolphins, whales, and many species of birds have different local or temporal dialects, rituals, and technologies. The social and behavioral tail regularly wags the genetic dog.

The human brain, then, is as domesticated as a Chihuahua’s body or a wheat plant’s ear. Again, the point is not that the brain can be anything it likes (leading, as literary lumpen Darwinists fear, to a voiding of any kind of natural explanation for what happens in literature), but that what it is is the result of its interaction with its own past cultural choices. It is not a “blank slate,” as Pinker rightly observes. But what is written there—and in the proteome and genome that present it for adaptation to the world—is already social and cultural. Our nature is not a blank slate, but a three-billion-year-old palimpsest of inscriptions, the most recent and most vehement being the ones written by our ancestral cultures.

Dozens of scholars and scientists, both before and after the birth of literary Darwinism, have recognized one aspect or another of this naturally “branchy” way of looking at human nature and the nature of social animals in general. Konrad Lorenz notes how the displacement activity occasioned by the conflict of two drives, territoriality and reproduction, can—in the form of mating ritual, often of great beauty—itself be inscribed in the genes. It can become a drive of its own that can actually compete, in the parliament of behavioral motivators, with its own territorial and reproductive origins. Especially telling is his account of the monogamous greylag geese that do not mate again after the loss of the life-partner with whom they shared the triumph ceremony. Even more to the point is the occurrence of homosexuality in many social species (including the geese that Lorenz observed). Homosexual pairs must be useful enough to the breeding group—as defenders and adventurers unburdened by family responsibilities—to be worth keeping in the genetic repertoire despite their failure to reproduce.

Irenäus Eibl-Eibesfeldt, Lorenz’s successor, was one of the founders of human ethology, and an active member of the Werner Reimers Stiftung Biology and Aesthetics group that he and others, including myself, founded in 1981. He insisted that diversity, individual and cultural, was itself a heritable and adaptive feature of the human inheritance. That is, the multiplicity of developmental outcomes—which lumpen Darwinists complain about as complicating a nice neat program of causal explanation—is a reality, and a reality that is provably adaptive for species that must cope with a variety of environments and cannot adapt fast enough through traditional Darwinian selection by elimination.

Perhaps the most radical rebuff to the mechanistic unidirectional-cause theory of the relationship between gene and behavior has been the recent change of mind on the part of Edward O. Wilson, the father of sociobiology itself. His book The Social Conquest of Earth shows a world-class scientist—having recognized the value of the sociobiological approach and thoroughly explored the possible unidirectional and invariant causal aetiologies of animal behavior—coming to accept emergent properties of sociobiological processes, eusociality in this case, as taking on causal power of their own.

Lumpen Darwinists are in the uncomfortable position of those medical geneticists who promised to pin particular diseases on particular genes, and by gene therapy to effect a cure. With a few exceptions, this program has been one of the big disappointments of recent medical research. Ignoring the many-branched pathways—and even the loops and outside interference—on the way from gene to pathology was a failed strategy: looking for the keys under the lamppost. The experience proved only that the task was much more complicated than first believed; to point this out is not to discredit evolutionary biology in general.

Indeed, the field of literary Darwinism has on the whole come to avoid the “lumpen Darwinist” reductionist position. The Literary Animal: Evolution and the Nature of Narrative (2005), edited by Jonathan Gottschall and David Sloan Wilson, goes a long way toward a more sophisticated approach. Brian Boyd’s fine book On the Origin of Stories: Evolution, Cognition, and Fiction is aware of the rich interplay between nature, culture, and history. So too are Gottschall’s The Storytelling Animal: How Stories Make Us Human (which is explicit about how the cultural tail of storytelling has come to wag the biogenetic dog), and Nancy Easterlin’s A Biocultural Approach to Literary Theory and Interpretation (a subtle and refreshing work of integration between the best of traditional critical theory and practice, and the new evolutionary paradigm).

But some of the more automatic mental habits of lumpen Darwinism still persist. One of the reasons for the reluctance of people in the fields of evolutionary sociology and aesthetics to tackle the conceptual problems offered by the failure of the straight “gene causes behavior” model is, I believe, a fear of teleology—or more precisely, a fear of being thought of as having teleological assumptions. What I mean by teleology here is the idea that the organization of the parts of an entity is for the purpose of dealing with a future situation or achieving a future goal or realizing an as-yet-unrealized abstract quality that is a function of the organism as a whole.

There are many reasons for this fear of teleology, some good, some less so. One is the lingering culture of strife between religion and science, in which any hint of design—even sometimes, absurdly, in conscious human productions—is taken as a betrayal of the cause of anti-Creationism. Intentionality must either be denied altogether (except as a human illusion about ourselves) or attributed only to humans (which begs the question of our animality). Here is a cultural case of an immune response—the abandonment of the possibility of the emergence of natural design—that is worse than the infection.

Another reason for “teleology avoidance” is the sometimes-horrifying history of social Darwinism, with its triumphant vision of the march of progress and Hegelian transcendence, often attributed to forerunners like Herbert Spencer and Ernst Haeckel. Any language that might hint of goal-directedness or higher purpose must be avoided. Even the word “function” has become slightly questionable, though it is pretty much impossible to avoid it when discussing the behavior of living organisms and still remain comprehensible at all.

A third and much more admirable reason is that scientific probity, as well as Occam’s razor, requires that we exhaust all possible reductive explanations in terms of parts before assuming that properties of the whole may be responsible. Hypotheses non fingo. Science prefers bottom-up to top-down explanations; but this preference, though often the best way of finding the facts of a situation, and always the best way of eliminating unnecessary assumptions, is a method of procedure, not a conclusion. As we have already seen, biology is full of top-down whole-to-part causes (in feedback with bottom-up part-to-whole ones). It is indeed to the credit of the reductive method that they were discovered, but to discover them biologists eventually had to overcome their reluctance to accept facts that seemed to question their method. New methods of investigation, especially dynamical modeling in information-rich computer simulations, have less of this traditional scientific bias: the failures of reductive explanations are more immediately obvious when the model crashes, and the successes of holistic explanations stand out by their elegant and exquisitely measurable departure from mere chance.

Likewise, such research as Dian Fossey’s and Jane Goodall’s on primates, in which the scientist actually includes herself in the behavioral situation she studies, makes up in richness of results what it lacks in reliable objectivity. But it may well be that this inclusion of subjectivity is not a problem in the study of conscious beings but an inescapable feature of the real object of study, and that to eliminate it is to eliminate the phenomenon one wishes to study. The politics of a primate group is all about intentions, estimates of others’ intentions, purposes, and deliberate plans. The observer problem is not just a hindrance to accurate knowledge, but a constitutive feature of the object of knowledge. Even on the level of quantum physics the state of knowledge of a system is part of its reality: the emergent features of physicists’ intentions (if that is what they are) cannot be expunged from the matter out of which physicists emerged.

Stephen Jay Gould and Richard Lewontin coined the term “spandrel” for such phenomena as art and religion, and it is not only ingenious but useful. But I suspect that the concept may itself be partly a defense against the imputation of teleology to their work. Gould and Lewontin were well aware that natural organisms trumpet function, purpose, and anticipation everywhere (and are almost impossible even to describe without such concepts), and would defend those concepts against their diehard deniers. The very term “spandrel” presupposes functions and purposes. As long as those characteristics of life—function, purpose, goals—can be shown to lead to reproductive success, the spandrel theorists can secure their flank. The spandrel concept is designed to deal with the problem that the arts, religion, and philosophy, and such odd phenomena as humor, dreaming and mystical experience, are pervasive in, and even characteristic of, the (very successful) human species, yet they cannot be assigned a direct function in the approved suite of individual survival, sexual reproduction, and social dominance. Their apparent inutility is indeed part of their strong appeal.

The spandrel is a convenient category into which to dispose of such apparently impractical features of the human makeup. They can be explained as by-products of competing functional structures and drives governed by genes, as the architectural spandrel is supposedly the by-product of two opposed practical requirements—covering an uninterrupted space and transmitting the thrust of a massive covering safely to the ground. But the spandrel’s ingenuity as an escape from ascribing functionality to the arts, religion, etc., cannot stand up to an examination of the linguistic mechanism of the spandrel concept itself in the light of evolutionary anatomy.

Take the evolution of the lung, for instance. Thought to be originally a simple sac or pocket extruded from the gut to hold oxygen-rich air swallowed in anaerobic waters, it evolved into the lung found in lungfishes and in the ancestors of land vertebrates, and into the swim bladder of teleost fishes. Sharks have no such structures, and maintain the lift required for swimming by a special fin. The point is that this sac structure can be taken as a belch retainer, a breathing apparatus, or a buoyancy control, or even as a useless appendage, depending on which function (or purpose) we are considering. If without knowledge of its future evolution we were able to study the original species—some kind of prehistoric dweller of littoral margins, perhaps—we might well be tempted to call this air-filled sac a spandrel. For the world of the teleost fish, one of our hypothetical species’ descendants, the lung is a spandrel; for the world of the terrestrial mammal, another descendant, the swim-bladder is; for the shark’s world the structure itself is. Adaptation is always fluid, always making do with what it has got, repurposing older structures as it does so; the olfactory bulb becomes an emotional center, the lens of the eye and the otolith of the ear arise from the same ancestral structure, the harmless expression of territorial aggression by mating geese becomes the triumph ceremony, with its own supporting epigenetic and genetic base. Intermediate stages of the process at any point include less-than-functional remnants of earlier purposes and spandrel-like opportunities for new ones. Architectural spandrels may have evolved as a by-product of the functional need to support a dome upon arches, but as sites for ideological messages or images they became a large part of the function of a cathedral, as anyone may see who visits the Hagia Sophia and considers the huge Arabic inscriptions that replaced the former Orthodox mosaics on the spandrels.

Of course, the specter haunting the issue is design, and the putative Designer. But perhaps the problem is not with the concept of design itself but with the implication that design needs a designer, or at least a conscious intentional one. Design, like function and purpose, is a useful concept in analyzing the behavior and structure of living organisms. Instead of abandoning design altogether, why not explicitly deny any necessary requirement for a designer?

Bruce Clarke reminds me that there is a perfectly good concept, “autopoiesis,” coined by Humberto Maturana and Francisco Varela, that signifies a kind of second-order cybernetics in which an organism (which need not necessarily be conscious) controls itself by continually refashioning itself. As long as enough iterations are permitted, design can result from differential reproductive success, from the requirements of a task that ensures survival, from the inherent constraints of instinctive behavior such as foraging or mating, or—yes—from the intentions or desires of designers. Designers such as tool-using chimps and potato-washing macaques might well evolve if being a designer promotes reproductive success. If teleological behavior—behavior that anticipates a future that might be altered by actions in the present—is helpful to survival, why should it not evolve? Any reproducing species is already operating under the enormous if tacit metaphysical assumption that there will be a future for its offspring to survive and reproduce into (an assumption sometimes proven wrong by such events as the Chicxulub impact).
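That iterated differential reproduction suffices for apparent design can be made concrete with a toy cumulative-selection program in the spirit of Dawkins’s well-known “weasel” demonstration. Everything specific here (the target string, mutation rate, brood size) is an illustrative assumption: no agent in the loop intends the outcome, yet a well-“designed” sentence reliably emerges.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Number of positions matching the selective environment."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy with random per-character errors: blind variation."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(seed=1, brood_size=100):
    """Cumulative selection: each generation keeps the fittest of the brood."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # the parent itself stays in the brood, so fitness never regresses
        brood = [parent] + [mutate(parent) for _ in range(brood_size)]
        parent = max(brood, key=fitness)  # differential reproduction
        generations += 1
    return generations

print(evolve())  # typically some tens to a few hundred generations
```

The design work is done entirely by the ratchet of variation plus retention; "designer" is simply a name for one more process that can fall out of such loops.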

Suppose that the (spandrel-like) values and purposes made possible by a plannable future were complete nonsense; were some species nevertheless to operate as if they were real—by nurturing its young, cooperating, emotionally valuing one outcome over another, and practicing self-sacrifice, attachment, ritual, and the like—such a species would be at a competitive advantage over others, and its genes would reflect the resultant adaptive pressure. In all the games that nature provides for us to play, teleological cooperators, if they are clever enough to detect and sanction defectors, can outbreed non-cooperators. In order to keep up, other species would in turn be forced to develop teleological behavior, and thus also the core assumptions of teleological behavior—relative value, goals, signals, even theory of mind—as a guide to preserve consistency. Eventually every part of the ecosystem would be filled with organisms and structures that acted as if the universe were meaningful, differentiated in value, and full of intentional design.

Concede, still, that all of those value-abstractions are complete nonsense, like Park Place in the game of Monopoly. But that concession is now a purely metaphysical one, with no practical or scientific relevance: those abstractions will have become laws of nature. If teleology works as a survival strategy, why not see it as emergent, like the wetness that appears when enough water molecules are put together? This universe may not be telic, but it is certainly teleogenic.

As Robert Wright points out, the very fact that there cannot be less than zero organized complexity implies that any random walk of biological organisms over all possible structural and behavioral alternatives is bound to result in a net increase of organized complexity that looks an awful lot like progress. Paraphrasing Stephen Jay Gould, no friend of teleology, Wright amusingly illustrates this idea:

Consider a drunken man walking down a sidewalk that runs east-west. Skirting the sidewalk’s south side is a brick wall [the impossibility of less-than-zero organized complexity], and on the sidewalk’s north side is a curb and a street. Will the drunk eventually veer off the curb, into the street? Probably. Does this mean he has a “northerly directional tendency”? No. He’s just as likely to veer south as north. But when he veers south the wall bounces him back to the north. He is taking a “random walk” that just seems to have a directional tendency.

If you have enough drunks and give them enough time, one will eventually get all the way to the other side of the street…

–and maybe end up finding his keys. Wright’s word for this non-tendentious tendency is “nonzerosumness.”
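Gould’s drunkard can be simulated directly (the walker counts and step numbers below are arbitrary choices): every step is an unbiased coin flip, and the wall does nothing but clip the walk at zero, yet the average distance from the wall grows steadily with time.

```python
import random

def mean_position(walkers=1000, steps=400, seed=1):
    """Unbiased +/-1 random walks with a reflecting wall at zero
    (the 'brick wall' of zero organized complexity)."""
    random.seed(seed)
    total = 0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))
            if x < 0:        # the wall: complexity cannot go below zero
                x = 0
        total += x
    return total / walkers

# No individual step is biased northward, yet the population's
# mean distance from the wall keeps increasing with walk length:
print(mean_position(steps=100))
print(mean_position(steps=400))
```

The second printed mean is roughly double the first (the expected distance grows like the square root of the number of steps): directionless steps plus a floor are enough to produce Wright’s apparent "progress."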

Any species that does not possess “spandrels” of one sort or another, whether temporal (remnants of earlier functional behaviors) or spatial (gaps between existing functions that need to be filled with something or other), can have no evolutionary potential. Spandrels are, at the least, the drunkenness of the drunk in the illustration; and depending on the organized complexity of a species, which may be able to use them in more and more coherent ways, they can be much more. Silent genes are spandrels too, and they are, as we have seen, an enormously adaptive archive of alternative behaviors. They are the free play on which evolution can go to work. Spandrels are functional, even if their functionality is second-order rather than first-order.

Spandrels are spaces for future inscriptions, necessary jotting-materials for alternate strategies. This dynamic potential can be a competitive advantage for a species, a space where the behavior of its competitors, within and outside it, can be modeled, predicted, and dealt with.

Clearly we have here the glimmerings of a role for the arts and literature that might match their staggering prevalence in human behavior. Such writers as Lisa Zunshine, Brian Boyd, and Jonathan Gottschall have already hinted at such a role. Essentially, the arts and literature are to the replicative unit of a human culture what epigenetic variability is to a species, what sexual recombination is to sexed species in general, and what preserved mutation is to all life.

We might therefore speculate that a new paradigm for evolutionary literary studies and aesthetics is on the point of emergence. Its principles might include the following concepts:

1. Teleogenesis.

Literature and the arts describe, deal with, and even initiate the emergence of new kinds of function, purpose, and goals. Emergence presupposes, of course, a base from which the emergence arises and a continued bottom-up causality that now competes with new top-down causes. A new paradigm of literary study would include the idea of the arts and literature as generators of value through emergence within strongly autopoietic cultural craft systems based upon innate genres and incorporating their own biological history.

2. Modeling.

Understanding literature and the arts is a matter not just of assigning causes for the presence of the parts of their constitution, but also of designing, building, and tweaking models of the whole, and trying new models if the tweaks don’t work. Storytelling itself is model-making, the construction of parallel spatio-temporal structures that, though fictional, are isomorphic with or even predictive of real events. Criticism might then become a second-order storytelling, the creation of a parallel structure modeling the work of art or literature it describes, whose deep systemic resemblance to the original is productive of insight into its meaning.

3. Freedom as nonlinear feedback, anticipation, and mutual causation.

The issue of freedom—Kant’s and Schiller’s question—is this: how can anything be meaningful (beautiful, morally significant, veridical) if it is not freely created and intended? The problem can be resolved into a different question, concerning prediction. Paradoxically, all you need for freedom is prediction, if prediction itself can alter behavior. A good predictor can choose a different path if he or she is reflexive enough to predict his or her own behavior in the first place and see where it leads. A community of predictors, all trying to predict each other—e.g. a human culture—constitutes an infinitely complex dynamical system. Its future (within the parameters of the game that makes prediction possible at all) is constitutively indeterminate. The trick then becomes understanding the game or genre of the system, an understanding that can give at least a sense of the range of possible outcomes. An understanding of our genetic and evolutionary history—how the game has been played to date—can make us better anticipators, better literary or art critics.

4. A new conception of individuality.

The lumpen Darwinist model of the individual as the unit of variation and selection is clearly a distortion of the facts as presented by the new sciences of epigenetics and commensality. That model was of a “selfish” independent actor, internally sanitized by the immune system and externally preying upon the rest of the world. This model has now been exploded by a torrent of new evidence, some of it indicated in this essay. We now understand that the human body, for instance, is a bag of billions of more or less genetically connected organisms, living together in a community—or better, perhaps, a market. The immune system is, so to speak, a citizen police force to keep the peace among diverse inhabitants, not an army of ethnic cleansing, and we need probiotics as much as antibiotics. The bag itself is selectively permeable and requires for its maintenance not only internal but also external cooperators. Pure selfishness would be a thoroughly bad idea, since it would deprive the individual of both the resources and the challenging admixture of different agencies that maintain individuality itself. What destroys individuality is a tropism toward uniformity that loses in adaptiveness what it gains in mere numerical multiplication. Individuality relies on continuously maintained sources of internal and external diversity, and, as both source and beneficiary of interdependence, is the opposite of independence.

This does not mean that individuality itself is passé. On the contrary, individuality is the result and chief agent of the “holobiont” of self-organized commensality on both sides of the skin. Individuality is an enormous and beautiful achievement, the autopoiesis of billions of symbionts, mitochondria, and epigenetic gene-combinations into a functioning and purposing whole. It is also the locus where the relatively less intentional flows of biogenetic determination and social construction emerge into conscious insight and tragically self-aware intention, and can there be reflexively tinkered with and altered—again, the stuff of all fictions and artistic surprises.

In one sense we could say that the change in our view of the individual is that it has changed from being an essence into being a self-maintaining interface; from a thing into an autonomous process; from brain into skin; from the alive entity to the life of the entity. Individuality is one of the great inventions of evolution—invented both through Margulis-type genetic exchange and through sexual recombinative reproduction. In the dynamic hierarchy of motion, from location, to motion, to speed, to acceleration, to jerk, to control, individuality is where control happens and control is controlled. Individuality is the cyber, the steerer, the pilot; and the pilot can only be a pilot at all because of its inclusion of multitudes of agencies.

The uniqueness of the individual could not be the result of cloning, however docile such a community might be. Indeed, the image, common nowadays, of cancer cells as rugged individualists or Idaho survivalists, bucking the conformity of the law-abiding cellular community, is exquisitely wrong. Rather, cancer cells are cells that have “chosen” to abandon the differentiating choices offered by cellular specialization and the vital flexibility provided by the epigenetic archive, and to ignore their ultimate dependence on the many other organisms in the symbiont. They have pulled out of the open marketplace of the body, so to speak, and commandeered the local vascular system to provide themselves with a welfare system in their own ministate of clones. They have chosen against individuality and made themselves enforced faceless dependents rather than cooperative individual traders.

Thus any form of literary and art criticism must recognize the unique agency of the individual, the author. Significantly, both poststructuralist literary theory and literary Darwinism attempted to take down the author as individual chooser and moral center—whether as “author function” or as genetic puppet. If art is the potential of the species, taking place in the free space of a “spandrel,” it happens as the work of an individual or a team of individuals in their own unique feedback system. That is the kind of function evolution designed individuality for in the first place. The use of statistics, the invocation of general social or biological forces, and the assumption of a generic artist, reader, or audience can be useful, but only as ancillary to the act of individual anthropology and personal meeting that is the core of critical understanding. By themselves they miss the whole point of art and literature, which are the ultimate critics of such forces and generalities.

5. The artist or writer as collaborator with the critic, not the object of observation or experimental subject.

The questions—about nature and human nature—that the evolutionary critic faces are no different from the ones that artists and narrators face themselves. Their methods—of storytelling, construction, melodic development, visual symbolism, and so on—are often as revealing in their way as the evolutionist’s genetic and archeological science. Archeologists can often learn more about our remote ancestors from the inside, so to speak, by mastering the art of flint-knapping, than from the outside by analyzing shards of bone; and to learn knapping they must apprentice themselves to the ancestors they study. The arts and literature indeed display the universals of human adapted nature, but they do so not so much as the unconscious puppets of our “drives” but as both an expression of our nature and a sagacious description and critique of it. In my recent book Epic: Form, Content, and History I show how the epic theme of the “wild man”—his defeat by cultured man, or his fall into the state of cultured man—is used to discover our peculiar emergence as the animal that, unlike all other animals, knows that it is an animal.

6. True “constraints” were staring us in the face all the time: they are the genres and rules and techniques and skills of the arts.

What those genres, rules, techniques, and skills do is, in 10,000 hours of practice (as the saying goes), epigenetically turn on sets of genes that might otherwise have been silenced, and silence others. This epigenetic modification of gene expression may even be heritable—the frequency of family excellence among the Brueghels, Bachs, Bellinis, Wyeths, Amises, Wollstonecrafts, and Brontës (not to speak of the Darwins) may be better than chance. Such families’ forte is skill in a craft, and it is those ancient human crafts, with their limited rules and techniques and unlimited powers of expression, that are the true “constraints” of human nature. After more than a century of modernist and postmodernist dismantling of the traditional forms of the arts, not only artists but critics too need renewed attention to those ancient genres, methods, and concepts of literary and aesthetic creation. A shaman needs to make her shamanic drum, a very constrained and particular activity, but what she uses it for can be very various. The anthropologist/critic must sit at her feet to learn what she is doing and how it reveals our nature to us.


The House of Lies: on the “social media”

It is a cataract of souls in their own hell,
Descending to a common destination,
The drowning dragging down the swimmer till
Both drink that last insipid cold potation.

It’s a perfected drug, that learns itself from you,
A feverish disease, virus designed for catching;
It drags you from whatever human work you do,
An itch that spreads by every spasm of scratching,

It is a cataract of scale upon the eyes,
That grows in from the edges to the center,
Rendering all a gray and cheerless web of lies,
A narrowing house it seemed so bright to enter:

The place of that dishonorable cowardice
Where one may say what one would never dare to
Were one to look that fellow-human in the face;
Where one may do what conscience could not bear to.

Here every gentle, unassuming kindly thing,
Each better angel of our dear and troubled nature,
Lies naked to the anguished, raging serpent’s sting,
And friends show suddenly a loathsome feature.

Once there were children, animals, and flowering trees,
Once there were wise debates and unwise laughter;
Once there were truths and quests and open mysteries
Now they are buried; worse will follow after.

Truths chosen to tell lies, lies bent to look like truth;
Faces defaced, and books with words distorted;
Age trodden down to flatter the exploited youth,
Discovery smeared, jeered at or aborted.

Here is the city of sadistic politics,
Here is a group psychosis of the spirit;
Here is a bright grotesquely smiling crucifix,
Here is the damning punishment of merit.

Here is the city where the foulest is the best;
Here is the hell once named as other people;
Here is the place where hatred is too tired to rest,
Here is the point of an inverted steeple.