Desexing the Kinsey Institute

The Kinsey Institute: A photograph from the Kinsey Institute archives (censored by the Review editors), 1895–1900

In 2000, an Indiana politician attacked his opponent, John Frenz, by assailing the Kinsey Institute, founded more than fifty years earlier within Indiana University. “The Kinsey Institute is the largest library of pornography of its kind in the world,” his attack ad proclaimed, elaborating that its “library contains sex-related art, studies on bestiality, obscene photographs of children,” and that the institute supported “homosexuality as an accepted lifestyle.” This politician, a strong-values conservative named Eric Holcomb, is now the governor of Indiana. What terrible infraction had Holcomb’s opponent committed? He’d voted for a state budget, of which a tiny portion funded the Kinsey Institute.

Alfred Kinsey began his career as a biologist, with a focus on entomology and an expertise in the gall wasp. He didn’t switch to studying sex until he taught a course on “Marriage and Family” at Indiana University and found that there was a dearth of scientific literature on sex. He began collecting the sexual histories of students in his courses, and soon amassed a large archive of them. When the university’s president, Herman Wells, made Kinsey decide between continuing his sex research and teaching the course, he chose the former. In 1947, with the help of a lawyer, Kinsey founded the Institute for Sex Research “to continue research on human sexual behavior,” as well as to secure a library for his growing collection of books and artifacts, according to James Capshew’s Herman B. Wells: The Promise of the American University. The institute was created as a “separate entity that could be firewalled from local, political, and institutional mandates,” Judith Allen and her co-authors write in the recently published The Kinsey Institute: The First Seventy Years.

Since 1947, the institute’s core mission has been to study subjects that no other academic institution would touch: sex in all its permutations. According to its official history, it was founded to encourage “the right to study sex and to proclaim that study openly.” Kinsey published two essential works on human sexuality based on his surveys—Sexual Behavior in the Human Male (1948) and Sexual Behavior in the Human Female (1953)—that both became New York Times bestsellers and scandalized polite society with their revelations. Kinsey discovered that over a third of men had had sexual experiences with other men, a shocking discovery at a time when same-sex sexual activity was illegal and widely considered immoral. He also found that women were more likely to have orgasms from masturbation than from heterosexual intercourse, and that half of the female population had had sex before marriage. Over 50 percent of the people he surveyed reported having an erotic response to biting. Kinsey’s groundbreaking work legitimized sex research and paved the way for Masters and Johnson’s studies in the mid-1960s, which proved, through laboratory research on copulating and masturbating subjects, that women could have multiple orgasms, and that orgasms during masturbation were typically more pleasurable than those during intercourse.

During the decades after Kinsey’s death in 1956, the institute expanded his agenda—opening a sex clinic, hiring an art curator, starting public tours, and releasing a sex-reporting app. But over the past three years, it has quietly become a shell of its former self. As its new director Sue Carter told the Indianapolis Star in 2016, “Where once research into sex, gender and reproduction formed the backbone of the institute’s work, now love, sexuality and well-being will take center stage.” Its name has been shortened to “The Kinsey Institute.”

In many ways, Carter was an unusual choice to lead the institute. Although she is the first biologist to head it since Kinsey himself, her career has focused on rodents—in particular, on the prairie vole, one of the few mammals that pair-bonds and is monogamous. Carter previously built a research lab at the University of Illinois at Chicago, where she discovered that oxytocin—the so-called love hormone—played an essential part in promoting pair-bonding in prairie voles. Almost none of Carter’s hundreds of published scientific papers relate to human sexuality, and few of her articles have appeared in sex-research journals. This has baffled sex researchers at many other institutions. “The overarching kind of focus behind the institute was to look at sex research, not at research on love and not at research on intimacy,” said Cynthia Graham, a professor in sexual and reproductive health at the University of Southampton, who is also a Kinsey Institute fellow and editor of The Journal of Sex Research.

Perhaps tellingly, Carter’s own research on vole monogamy has been cited by pro-abstinence and anti-pornography organizations to justify their positions. Physicians for Life, a pro-life association based in Alabama that also advocates for abstinence, uses Carter’s research to argue that monogamy is natural and built into our biology. They claim that sex out of wedlock will lead to a release of oxytocin that produces an expectation of commitment which is then disappointed, leading to “depression, dissatisfaction, and the disruption of future bonding potential.” Meanwhile, a website called Your Brain On Porn uses Carter’s oxytocin research to argue that strong pair bonds make us less likely to pursue addictive behaviors like porn-watching. Another anti-porn advocate uses Carter’s research to make a diametrically opposed argument: that oxytocin “bonds” men to porn and away from human partners.

Since Carter took up her position at the institute, she has hired five former students or colleagues from her lab at the University of Illinois at Chicago; none of them does sex research. She’s also hired her husband, the psychologist Stephen Porges, as a “distinguished university scientist.” Porges had never written an article on sex research when he was hired, aside from one that he co-authored with his wife on sexual receptivity in hamsters. According to the Kinsey Institute’s website, over the three years that Carter has been the director, only about 40 percent of the publications coming out of the Kinsey Institute have been related to human sexuality, compared to about 88 percent under the last two directors. Many of the reports produced under Carter’s leadership focus on the effects of oxytocin and vasopressin on everything from “aggression in domestic dogs” to schizophrenia in middle-aged women. Other studies are concerned with non-sex-related psychological disorders like depression and anxiety.

Perhaps the most visible change at the institute under Carter has been to its art collection, which had, since the 1990s, been displayed in the affiliated gallery. The collections are extensive and include such rare pieces as Ian Hornak’s painting of nudes adorned with animal heads, Herb Ritts’s photographs of male nudes, and erotic works by Marc Chagall and Picasso. Public shows at the gallery ended in 2015. In addition to shuttering the gallery, Carter terminated the institute’s popular Juried Erotic Art Show, which had run for a decade. This event not only attracted large crowds, but also provided more artwork to the institute, as many of the pieces were donated.

Smithsonian Institution Archives: Alfred Kinsey with the project staff of the Institute for Sex Research, Indiana University, August 1953

All of the changes at Kinsey have occurred at a time when the political climate in Indiana has become increasingly proscriptive toward any form of sexual expression other than heterosexual sex within marriage. When Carter was hired, the governor of Indiana was Mike Pence (now vice president of the United States), who signed one of the most extreme anti-abortion state laws in US history and cut funding to Planned Parenthood. And about six months before she was hired, conservative critics of Kinsey had been particularly outspoken after the institute sought, and was granted, consultative status as a nongovernmental organization with the United Nations. In May 2014, Kinsey’s greatest nemesis, conservative activist Judith Reisman, founded an organization called Stop the Kinsey Institute, encouraging people to sign a petition that would revoke the institute’s newly won UN accreditation. Since the 1980s, Reisman had been arguing that Kinsey’s research was inaccurate, involved the sexual abuse of children, and was responsible for the “moral decay of millions of people.”

In the years since Carter was hired, the Kinsey Institute has become increasingly dependent on the support of the university and, as a consequence, the conservative state government. Kinsey initially set up the institute as an independent nonprofit organization associated with Indiana University, in an attempt to protect it from political pressure; thanks to its nonprofit status, the management of the institute was answerable only to an independent board of directors. But in December 2016, the institute merged with Indiana University, and now answers only to the university’s administration.

Many of the people I interviewed believe that the university saw the institute as a political liability, with the potential to compromise the university’s own access to state funding, and so, instead of getting rid of the institute, the administration made its focus less controversial. The fear of defunding has only become better founded since Eric Holcomb—whose attack ad in 2000 said that “tax-payers should NOT be forced to support Kinsey”—replaced Pence as governor last year. Holcomb now signs off on the budget that supports the institute.

“There is a reason that I was selected and not someone from the more traditional sex-research field,” Carter told me, somewhat cryptically, after I asked her if she thought the organization of the institute had changed. She didn’t elaborate on what she meant, but instead went on to say that “Kinsey has to have some kinds of priorities to survive in the future. We cannot focus simply on things like sexual pleasure. We can’t because there’s no funding for it.”

Why did the institute merge with the university? Nearly everyone involved to whom I spoke cited a different reason. According to Amy Applegate, the chair of Kinsey’s board at the time, the merger extended protections of academic freedom to Kinsey while also giving the institute access to all the university’s resources. For her part, Carter said that the institute needed more financial support. Fred Cate, Indiana University’s vice president of research, cited both finances and integration with the library. Rick Van Kooten, the vice provost of research, said that the institute could now enjoy stronger protections against legal challenges like the one in the 1950s, when the US Customs Office intercepted a box of erotic goods from overseas bound for Kinsey on the grounds that it was obscene. But even though the institute was an independent nonprofit at the time, it still managed to secure the university’s help and prevail in its ensuing legal battle against the Customs Office. Jorge José, a former university administrator, said the merger was intended to give the university’s administration more “oversight over the whole operation of the Kinsey Institute.”

Whatever the actual reasons behind it—political, practical, or otherwise—the institute’s incorporation into the university has only compounded the effects of Sue Carter’s tenure, making the institute less public and less about sex. Perhaps the saddest part of the Kinsey Institute’s change in focus is that our country needs it now more than ever. The Kinsey Institute should be participating in the national conversation on sexual assault and harassment. The institute should be leading the way with discussions on the importance of female sexual pleasure, the “orgasm gap” between men and women, and the fact that so many women experience pain during sex. In the words of Edward Laumann, who has researched human sexuality at the University of Chicago for over forty years, the institute “is about human sexuality, so why the hell are we talking about voles, however interesting that may be?”


Desolation Row

Universal History Archive/UIG/Getty Images: Federico García Lorca, date unknown

In his late teens, Federico García Lorca was interested above all in music and song. He was steeped not only in the Andalusian folk tradition but also in the European art song. He loved the work of Schubert and Beethoven.

Lorca’s arrangements for piano and voice of Andalusian folk songs, inspired by Manuel de Falla’s arrangement in 1914 of seven other Spanish songs, combine in the subtlety of their harmonies an attention to the European art tradition and to local expression. Lorca’s own voice, according to his contemporaries, was not very rich, but his playing was sophisticated and skillful, as is attested by the series of recordings he made in 1931 accompanying Encarnación López, known as La Argentinita, the lover of his friend the bullfighter Ignacio Sánchez Mejías, whose death Lorca lamented in one of his most famous poems. His piano accompaniment on these songs can be exuberant, but it can be subdued as well. He believed in the concept of duende, a heightened soulfulness displaying an authentic, deep, and earthy emotion, but he was also a master of restraint.

Lorca’s politics overcame his natural instinct against ideology in the same way that, in his musical consciousness, the highly wrought emotions of a Schubert song opposed the fierce abandon of the traditional vocal style known as the cante jondo. Just as his talent as a lyric poet gave way to Surrealism and experimentation, under the pressure of his times his art became political. As an artist, he was interested in simple freedoms in a period in Spain when nothing was simple, when such interests came face to face with dark and malevolent forces.

He knew with an almost whimsical certainty that in Spain in 1936 the personal was political, and that the body itself, especially the body of a woman or a homosexual man, was as much the territory of conflict and destiny as the ownership of land or factories. In that fateful year he wrote a play, The House of Bernarda Alba, that had all the austerity and artful simplicity of a Schubert song. It was a play for women’s voices full of yearning and plaintive expression, but surrounded by a savage sense of restriction and cruelty that everyone in the audience Lorca imagined for his play would know and recognize.

He was too subtle to make this obvious or programmatic, and too interested in the pure excitement and depth of the conflict between his characters to make them smaller than the world outside. He wrote them with the same strange tenderness that George Eliot used to create her idealistic men such as Will Ladislaw or Daniel Deronda, or that homosexual writers from Henry James to Tennessee Williams used to imagine their women trapped by convention. And he shared with Oscar Wilde that pure confidence in his own brave gifts with no sense of his doom so close, save one that may be beyond us in its depth and its irony.

Indeed, he was working with such ease and speed in the last years of his life that publishers and producers could not keep up with him. At the time of his death in 1936, his long, ambitious poem “Poet in New York” and a number of other sequences, including “The Tamarit Diwan” and the “Dark Love Sonnets,” had yet to appear, and his plays The House of Bernarda Alba and The Public had yet to be staged.

Federico García Lorca was born near Granada in 1898, the eldest son in a prosperous family. When he was growing up, his father purchased an idyllic country house, complete with orchards and gardens, on the outskirts of Granada. The family spent time in the city itself too, so that he could be educated there. In the early 1920s in Madrid, Lorca developed a close friendship with Salvador Dalí and Luis Buñuel. He would also remain close to a group of Spanish poets, known as the Generation of ’27, which included Rafael Alberti, Pedro Salinas, and Vicente Aleixandre.

In essays and interviews, Lorca made clear that his allegiance was not to Spain, or even to Granada, but to the Vega, the rich plain to the west of the city where his father farmed, the land nourished by the rivers Darro and Genil that flow down from the Sierra Nevada. “My whole childhood,” he said,

was centred on the village. Shepherds, fields, sky, solitude. Total simplicity. I’m often surprised when people think that the things in my work are daring improvisations of my own, a poet’s audacities. Not at all. They’re authentic details, and seem strange to a lot of people because it’s not often that we approach life in such a simple, straightforward fashion: looking and listening…. I have a huge storehouse of childhood recollections in which I can hear the people speaking. This is poetic memory, and I trust it implicitly.

This idea that Lorca’s imagery and poetic voice came from the soil unmediated has echoes of the Irish Literary Renaissance and the efforts of writers such as W.B. Yeats, Augusta Gregory, and J.M. Synge to ally themselves with a native culture that was primitive and powerful, simple and untainted. Unlike the Irish writers, however, Lorca had been brought up in the very places he wished to invoke, using the same language as the people about whom he wrote:

I love the countryside. I feel myself linked to it in all my emotions. My oldest childhood memories have the flavour of the earth. The meadows, the fields, have done wonders for me. The wild animals of the countryside, the livestock, the people living on the land, all these are suggestive in a way that very few people understand…. My very earliest emotional experiences are associated with the land and work on the land.

But like the work of the Irish writers, the poems and plays Lorca wrote came from an imagination that was not itself primitive or nourished only by the soil. It was not “total simplicity.” While his poems used ballad forms or took the shape of folk songs, they also took their bearings from ideas of the unconscious and from Surrealism, from his friendships with Dalí and Buñuel and contemporary poets as much as from the people working on the land and living in the villages, even though the images and metaphors he used could have their origins in ordinary local speech.

In an introduction to Lorca’s plays, his brother Francisco showed the relationship between the rich use of metaphor in ordinary speech and his brother’s playing with it in a poem like “The Marked Man,” from his “Gypsy Ballads.” He recalled the family nurse Dolores describing the source of a spring in her picturesque and vivid speech: “‘And imagine, a bull of water rose up.’ I remembered the impression this admirable phrase made on Federico for it appears later, more or less transformed…in these lines:

The heavy water bullocks
charge after the boys
who bathe inside the moons
of their curving horns.

Lorca here took the idea of a spurt of water, or water coming from the ground, having the power and suddenness and surprise of a bullock’s charge. He played also with the image of the moon’s shape reflected in water as being a curving horn. He was combining bull and water and moon to suggest a sense of elemental danger coming fast, too fast for the imagery to be narrowed down or made too precise, coming as fast as speech or associations in the mind might come.

In Poet in Spain, a volume of new translations of Lorca’s Spanish poems, with the original on the facing page, Sarah Arvio translates the first line of these four as “the heavy water oxen” and replaces “curving” with “rippling.” Will Kirkland and Christopher Maurer in Collected Poems translate the first line as “Dense oxen of water” and also use “rippling.” (The Spanish word is ondulados, which means wavy or rolling or indeed rippling, which is clearly much better since it carries the idea of water along and suggests the moon in water rather than just the moon.) Thus Arvio manages to keep the water metaphor going from the first and third lines into the fourth. Her use of “water oxen” is more precise than “oxen of water” (which sounds like a translation even if it opens the metaphor more to imply that there were in fact no oxen at all, there was just spring water, or spurting water that, as Dolores the nurse would have it, was like a bull).

Lorca’s early poems are filled with elemental things, like a Miró painting—night, star, moon, bird—but they come with edges of strangeness and menace, like a Dalí painting—clock, knife, death, dream. He is never interested in just describing a scene. Instead, he begins to work on a set of associations, using echoes in the patterns of sound and sometimes a strict metrical form as undercurrent, thus suggesting a sort of ease or comfort at the root of the poem so that the branches can grow in any direction, with much grafting and sudden shifts, as his mind, in free flow, throws up phrases that, however unlikely, he allows in, thus extending the reach of the poem, or at other times pruning it briskly back.

Often, Lorca works as though making a quick sketch, jotting down a few images. It seems essential to me that any translator follow his punctuation so that the images meant to stand apart are allowed to do so. In her introduction, however, Arvio writes:

I’ve used almost no punctuation; this was my style of composition. I felt that punctuating, as I worked, hindered the flow of the language. When I was done it was too late to go back; the poems had their own integrity and didn’t need commas and periods. So I let them stand. I was fascinated to see, studying the manuscripts, that Lorca often wrote his drafts with little or no punctuation: a stray period, a comma in the middle of a line, an exclamation mark. He added on punctuation later; manuscripts unpublished at the time of his death were punctuated by an editor.

Since her translations are filled with intelligent decisions and a keen sense of the music in the original poems, since her ear for Lorca’s delicate and difficult tones is sensitive and sharp, indeed often inspired, this decision seems unwise. Hindering “the flow of the language” in the imagistic poems seems to me not only desirable but necessary. Lorca, one presumes, added on the punctuation later because he saw the need for it.

For example, the Spanish original of the early poem “Delirium,” as printed in Arvio’s book, is made up of five discrete statements, in stanzas of two lines each, with a full stop after each stanza. The second and the third are translated as follows:

Bee-eaters
sigh as they fly

The blue and white
distance is delirious

Even though Arvio has included line breaks, we still need a full stop after “Bee-eaters/sigh as they fly.” (Lorca has one.) Then we will know not to read on as though they fly in some direction that might be “The blue and white/distance.”

The poems darken as Lorca moves into his late twenties and early thirties. Death is both played with and confronted directly, but it is seldom absent. He said:

Everywhere else, death is an end. Death comes, and they draw the curtains. Not in Spain. In Spain they open them. Many Spaniards live indoors until the day they die and are taken out into the sunlight. A dead man in Spain is more alive as a dead man than anyplace else in the world.

Lorca’s poem “From Here” begins: “Tell my friends/I have died.” In “Another Dream,” the poem asks: “How many children does Death have?” And “Two Evening Moons” opens: “The moon is dead dead.” One of the sections of “Window Nocturnes” begins:

When I stick my head
out the window I see
how the blade of the wind
wants to cut it off

In this unseen
guillotine I have laid
the eyeless heads
of all my desires

In “Horseman’s Song,” he writes: “Death is watching me/from the Cordoba towers.”

In some of the poems, such as “Horseman’s Song,” and many of the later Gypsy Ballads, the death is violent. The sense of fear around violent death is in the very rhythms of the poems:

When stars thrust their spurs
into the gray water
when young bulls dream
of the sweep of a flower
blood cries rang out
near the Guadalquivir

In later poems, poems written close to the time of his own death, the ballad form gives way to a more refined stanza-led form as the sense of fear and foreboding moves close to lament, as in the powerful “Of the Dark Death”:

Wrap me in a veil when the sunrise
pelts me with fistfuls of ants
and soak my shoes in hard water
so that its scorpion claw will slip

In 1971, thirty-five years after Lorca’s death, a book was published in Paris, written in Spanish, by the Irish academic Ian Gibson, who would later write biographies of Lorca, Dalí, and Antonio Machado. Two years later it appeared in English with the title The Death of Lorca. I remember the chill I felt more than forty years ago as Gibson forensically and meticulously recounted the early days of the civil war in Granada, to which Lorca had returned shortly before the war broke out.

Gibson emphasizes in his book that Lorca had taken sides in the argument about liberalism and repression in Europe. In 1933, he writes, the poet signed “a manifesto condemning the Nazi persecution of German writers. And when, in 1935, Mussolini invaded Abyssinia, he cancelled a projected visit to Italy and signed another anti-Fascist manifesto.” Eighteen months before the war, Lorca spoke about conditions in Spain in an interview:

I will always be on the side of those who have nothing, of those to whom the peace of nothingness is denied. We—and by we I mean those of us who are intellectuals, educated in well-off middle-class families—are being called to make sacrifices. Let’s accept the challenge.

In the last interview he gave before coming to Granada, Lorca’s comments on the city would not have won him many friends among conservatives. He said of the fall of Granada and the expulsion of the Moors in 1492:

It was a disastrous event, even though they say the opposite in the schools. An admirable civilization, and a poetry, architecture and delicacy unique in the world—all were lost, to give way to an impoverished, cowed town, a wasteland populated by the worst bourgeoisie in Spain today.

Lorca’s family also had close links to progressive politics in Granada. On July 10, 1936, his brother-in-law, the socialist doctor Manuel Fernández-Montesinos, was elected mayor of Granada. But as Gibson makes clear, lines were not precisely drawn in Granada between left and right. Among Lorca’s closest friends in the city was the Rosales family, who were members of the conservative Falange, which supported General Franco’s uprising.

Fine Art Images/Heritage Images/Getty Images: Federico García Lorca and Salvador Dalí, Cadaqués, Spain, date unknown

In the first weeks of that uprising, the repression in Granada was particularly severe, with more than two thousand people taken out and shot. Gibson writes: “The flower of Granada’s intellectuals, lawyers, doctors and teachers died…along with huge numbers of ordinary left-wing supporters.” Among those shot was Fernández-Montesinos.

As the forces of repression in Granada came to look for Lorca, he took refuge in a house owned by the Rosales family. He believed that he would be safe there. However, he was arrested a week later and then taken out and shot. The precise place where his body was buried has never been identified.

One of the most moving moments in Gibson’s book on Lorca’s execution, and also in his biography of the poet, is when the composer Manuel de Falla, who lived below the Alhambra, heard that Lorca had been arrested and made his way into the city to see if he could rescue him:

Falla was a tiny, timid man, and it is difficult to overestimate his courage on this occasion. In the Civil Government he was informed that Lorca was already dead, and it seems that he himself was in danger of being shot, despite his fervid and well-known Catholicism and his fame as a composer.

Lorca had first met Falla around 1920 when the composer, who had come to live in Granada, a city in which he had already set some of his best-known music, became increasingly fascinated by cante jondo, part of the traditional music of Andalusia being kept alive by the Gypsy population; it was “imbued,” Lorca said in a lecture in Granada in 1922, “with the mysterious color of primordial ages.” It is “a stammer, a wavering emission of the voice, a marvelous buccal undulation that smashes the resonant cells of our tempered scale, eludes the cold, rigid staves of modern music, and makes the tightly closed flowers of the semi-tones blossom into a thousand petals.” It “always sings in the night. It knows neither morning nor evening, mountains nor plains. It has only the night, a wide night steeped in stars. Nothing else matters.”

When Lorca met Falla, who was almost a quarter of a century his senior, he had already given up his dream of studying music to study law in order to please his father, but music became the bond between them. Falla was ready to treat the young poet almost as a son, and Lorca was careful to keep the more flamboyant parts of his life secret from the famously conservative composer, who lived with his sister.*

In order to give energy and respectability to the tradition of cante jondo—viewed as too primitive in some cosmopolitan quarters in Spain—Falla and Lorca, with some associates, organized a festival in Granada to coincide with the feast of Corpus Christi in June 1922. Lorca was already writing poems that mined and excavated and enriched the tones and formal structure of the cante jondo, poems that expressed deep anguish and intense feeling in ways both direct and rich with metaphor, and that connected the natural world or the world of objects with the self or the speaker. He became fascinated by the idea of duende, quoting a Gypsy singer who, while listening to Falla play his Nights in the Gardens of Spain, said: “Whatever has black sound has duende.” Duende was the opposite of mere virtuosity; it was what sent shivers down the spine when someone sang or played music or recited poetry. Speaking of duende as a person, Lorca told a Buenos Aires audience in 1933 that it

is a power, not a work. It is a struggle, not a thought…. The true fight is with the duende…. But there are neither maps nor exercises to help us find the duende. We only know that he burns the blood like a poultice of broken glass, that he exhausts, that he rejects all the sweet geometry we have learned, that he smashes styles, that he leans on human pain with no consolation…. The duende does not come at all unless he sees that death is possible. The duende must know beforehand that he can serenade death’s house…. The duende wounds. In the healing of that wound, which never closes, lie the strange, invented qualities of a man’s work…. The duende loves the rim of the wound.

Singers with duende walked for miles to take part in the festival that Lorca and Falla organized. While the excitement gave Lorca inspiration and nourishment for his work and led to his “Gypsy Ballads,” Falla could not wait to get back to his quiet life. In his work after the festival, Gibson writes, “the Andalusian elements…were drastically reduced.”

Though Lorca and Falla attempted to work together in the few years that followed, the collaboration came to nothing. But to suggest that their paths simply diverged would be to misunderstand Lorca’s achievement not only in the more experimental and jagged long poem “Poet in New York,” but in the poems written in his late twenties and his thirties that are included in Sarah Arvio’s book. The air of simplicity in these poems is more like an alibi or a mask. The metaphors are darting and daring; while they are often random, they can form a pattern as deliberate and indicative as gravestones in a cemetery. They make clear that underlying everything is death, that death is approaching, filling the air with violence and menace and fright.

The imagery wavers between the fixed and the fluid, at times wild and unpredictable against the disciplined music of the metrical line. This is close to how Carol A. Hess, in her biography of Falla, describes his Harpsichord Concerto: “clarity of texture, use of preexisting materials and procedures, and heterogeneous timbres” refined “to a new level of distillation.”

In Lorca’s poetry, there is also a sort of clarity of texture that is darkened or even tossed aside by phrases or individual images that are filled with beauty and mystery, but seem also at times jagged or almost private, elusive, abstract, yielding no easy interpretation. Out of these heterogeneous timbres come moments of piercing clarity, when sex and death are at war or at play, when doom and fear are in the air, but at other times a lightness and sense of ease is suggested.

This poses enormous problems for translators. In the first section of “Lament for Ignacio Sánchez Mejías,” a poem written in 1934, for example, Lorca uses the phrase a las cinco de la tarde more than twenty-five times as a refrain. It means what Sarah Arvio says it means—“at five o’clock”—or what Galway Kinnell or Stephen Spender and J.L. Gili in their translations say it means—“at five in the afternoon.” In the Spanish however, the sound “ah” gets repeated four times in the line, and in between the word “cinco” has a hard “inko” sound that is both plaintive and tough. As it is repeated in the Spanish, the line has an emotional momentum. In the English, no matter what you do, it sounds like the time of a train.

In part three of the poem, Arvio comes up with an interesting solution when she translates the fifth line as “I have seen the gray rain chase the waves” (Kinnell translates this as “I’ve seen gray rains fleeing toward the sea” and Spender and Gili’s version is “I have seen grey showers move towards the waves”). Arvio’s single-syllable words suggest panic; in the meter, it sounds like a good line of English poetry—T.S. Eliot might have been proud of it—rather than a translation. Since both the rain and the waves are in movement, “chase” serves to emphasize this and suggests also that something urgent is at stake, even if in the original Spanish the rain is running away from something as well as running toward the sea.

The preceding two lines of this poem, however, are a perfect illustration of a problem that no translator can solve. In Arvio’s version they read, “The stone is a shoulder for carrying time/and trees of tears and ribbons and planets”; and in Kinnell’s version, “Stone is a shoulder for carrying away time/with its trees made of tears and ribbons and planets”; and in Spender and Gili’s version, “Stone is a shoulder on which to bear Time/with trees formed of tears and ribbons and planets.”

The first line, no matter what you do with it, is beautiful. The second line is the problem. What was Lorca imagining or seeing when he wrote it? In Spanish, the last three nouns each end in the same sound: lágrimas y cintas y planetas. The rhythm almost pulls them along, defying the reader to wonder what they might actually signify, so perhaps it doesn’t matter what Lorca was imagining or seeing. The sound of the words holds such questions at bay.

In English, however, when we see the word “ribbons” here, we are almost tempted to turn the book upside down or at least shake it to see if some logic, or even some seductive lack of logic, might sweetly emerge. And that is even before we come to the planets. Lorca, of course, would be shocked at the mere suggestion that we should want to know something as banal as what these words actually mean here, or what they are asking us to imagine or see. He would invoke the concept of duende and insist on its power to beguile the reader of his poetry, perhaps even in translation:

Before reading poems aloud…the first thing one must do is invoke the duende. That is the only way that everybody will immediately succeed at the hard task of understanding metaphor (without depending on critical apparatus or intelligence) and be able to catch, at the speed of the voice, the rhythmic design of the poem.

  * Manuel de Falla’s house and the house of the García Lorcas on the outskirts of Granada can both be visited. Falla’s single bed with a crucifix on the wall behind it and his spartan living conditions are in great contrast with the airy domestic beauty and openness of the house where the García Lorcas lived.


As If!

Kwame Anthony Appiah; drawing by Siegfried Woldhek

Kwame Anthony Appiah is a writer and thinker of remarkable range. He began his academic career as an analytic philosopher of language, but soon branched out to become one of the most prominent and respected philosophical voices addressing a wide public on topics of moral and political importance such as race, cosmopolitanism, multiculturalism, codes of honor, and moral psychology. Two years ago he even took on the “Ethicist” column in The New York Times Magazine, and it is easy to become addicted to his incisive answers to the extraordinary variety of real-life moral questions posed by readers.

Appiah’s latest book, As If: Idealization and Ideals, is in part a return to his earlier, more abstract and technical interests. It is derived from his Carus Lectures to the American Philosophical Association and is addressed first of all to a philosophical audience. Yet Appiah writes very clearly, and much of this original and absorbing book will be of interest to general readers.

Its theme and its title pay tribute to the work of Hans Vaihinger (1852–1933), a currently neglected German philosopher whose masterwork, published in 1911, was called The Philosophy of “As If.”1 Vaihinger contended that much of our most fruitful thought about the world, particularly in the sciences, relies on idealizations, or what he called “fictions”—descriptions or laws or theories that are literally false but that provide an easier and more useful way to think about certain subjects than the truth in all its complexity would. We can often learn a great deal by treating a subject as if it conformed to a certain theory, even though we know that this is a simplification. As Vaihinger says, such fictions “provide an instrument for finding our way about more easily in the world.”

One of the clearest examples Vaihinger offers is Adam Smith’s assumption, for purposes of economic theory, that economic agents are motivated exclusively by self-interest—that they are egoists. Smith knew perfectly well that human motivation was much richer than that, as he demonstrated in his book The Theory of Moral Sentiments, a work less widely known than The Wealth of Nations. But as Vaihinger explains:

For the construction of his system of political economy it was essential for Adam Smith to interpret human activity causally. With unerring instinct he realized that the main cause lay in egoism and he formulated his assumption in such a way that all human actions, and particularly those of a business or politico-economical nature, could be looked upon as if their driving force lay in one factor—egoism. Thus all the subsidiary causes and partially conditional factors, such as good will, habit, and so forth, are here neglected. With the aid of this abstract cause Adam Smith succeeded in bringing the whole of political economy into an ordered system.

Vaihinger explored the phenomenon in a wide range of cases, from mathematics and the natural sciences to ethics, law, religion, and philosophy. Appiah’s range is equally wide, but his examples are different; he gives special attention to psychology, ethics, political theory, social thought, and literature. In general he defends the value of idealization, but he is also aware of its intellectual dangers. He emphasizes that it is essential to hold on to the contrasting concept of truth, and to keep in mind both the departures from truth that idealization involves and the specific purposes for which it is useful.

Appiah has packed into this short book an impressive amount of original reflection on a number of topics, so my discussion will have to be selective. He mentions some examples from the natural sciences, but in such abbreviated form that they cannot be understood by readers who are not already familiar with the theories in question.2 I shall discuss some cases where Appiah’s analyses of idealization are more accessible.

The contemporary theory of what is standardly referred to as economic rationality is descended from Adam Smith’s egoistic model of economic behavior; it is based on a much more sophisticated and quantitatively precise but still-idealized model of the psychology of individual choice. The modern discipline of decision theory has permitted a great increase in the exactness of what we can say about this type of human motivation, by introducing quantitative measures of subjective degrees of belief and subjective degrees of preference.

If, for example, on a cloudy day you have to decide whether or not to take an umbrella when you go out, you face four possibilities: (1) rain and umbrella; (2) no rain and umbrella; (3) rain and no umbrella; (4) no rain and no umbrella. Obviously your decision will depend both on your estimate of the likelihood of rain and on how much you mind getting wet, or alternatively how much you mind carrying an umbrella when it isn’t raining, but decision theory makes this more precise. It says your choice is explained by the fact that you assign a probability p between zero and one to the prospect of rain, and (ignoring misty in-between states) a probability of one minus p to the prospect of no rain, and that you assign a desirability, positive or negative, to each of the possibilities (1) to (4). By multiplying the probability and the desirability for each of these outcomes, one can calculate what is called the “expected value” of each of them, and therefore the expected value of taking an umbrella and of not taking an umbrella. The rational choice is to do what has the higher expected value.3
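
To make the arithmetic concrete, here is a minimal sketch in Python of the umbrella calculation, using the illustrative numbers from footnote 3 (a 0.4 subjective probability of rain, and desirabilities of +1, –1, –6, and +2 for the four possibilities); the names and structure are mine, not decision theory’s:

    # Expected-value calculation for the umbrella decision described above,
    # using the illustrative numbers from footnote 3.
    p_rain = 0.4  # subjective probability of rain

    # Subjective desirability of each (weather, action) outcome.
    desirability = {
        ("rain", "umbrella"): 1,
        ("no rain", "umbrella"): -1,
        ("rain", "no umbrella"): -6,
        ("no rain", "no umbrella"): 2,
    }

    def expected_value(action):
        # Weight each outcome's desirability by its probability and sum.
        return (p_rain * desirability[("rain", action)]
                + (1 - p_rain) * desirability[("no rain", action)])

    print(expected_value("umbrella"))     # about -0.2
    print(expected_value("no umbrella"))  # about -1.2
    # The rational choice is the option with the higher expected value:
    # take the umbrella.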

Decision theory applies this kind of calculus to choices among alternatives of any complexity, with any possible assignment of subjective probabilities and desirabilities. With the help of game theory it can be extended to multiperson interactions, as in a market economy. What interests Appiah is that the theory assigns these supposedly quantifiable psychological states to individuals only on the basis of an idealization. They are not discovered by asking people to report their subjective probabilities and desirabilities: in general, people do not have introspective access to these numbers. Rather, precise psychological states of this type are assigned by the theory itself, on the basis of something to which people do have access, namely their preferences or rankings (better, worse, indifferent) among alternatives.

This by itself does not imply that the states are fictional: real but unobservable underlying causes can often be inferred from observable effects. The fiction comes from the way the inference proceeds in this case. Given a sufficiently extensive set of preferences (rankings of alternatives) by an individual, it is possible, employing relatively simple laws, to assign to that individual a set of subjective probabilities and desirabilities that would account for those preferences, if the individual were rational in the sense of the theory. But since rationality in the sense of the theory involves such superhuman capacities as immunity to logical error, instantaneous calculation of logical consequences, and assigning equal probability and desirability to all possibilities that are logically equivalent, it is clear that no actual humans are rational in this sense. So if we use the theory of economic rationality to think about the behavior of real human beings, we are treating them as if they were superrational (“Cognitive Angels,” in Appiah’s phrase); we are employing a useful fiction, which allows us to bring human action under quantitative laws.

The fiction is useful only for certain purposes. If it is not to lead us astray, we have to recognize the ways in which it deviates from reality, and to correct for those deviations when they make a difference that matters. This is in fact the concern of the recently developed field of behavioral economics, which tries to identify the consequences of systematic deviations of actual human behavior from the standards of classical economic rationality. (For example, people often fail to count logically equivalent possibilities as equally desirable: an outcome framed as a loss will be counted as less desirable than the same outcome framed as the absence of a gain; an outcome described in terms of the probability of death will be evaluated differently from the same outcome described in terms of the probability of survival.) Appiah’s point is more general: if we try to formulate laws of human psychology, we will inevitably have to ignore a great deal of the messy complexity of actual human life. This is sometimes legitimate, provided that we recognize the idealization and are prepared to restore the complexity when necessary—when, for example, assuming the rationality of every free market would send us off an economic cliff.

Consider next a completely nontechnical type of idealization that is omnipresent in contemporary thought and discourse: racial and sexual categories such as “Negro” and “homosexual.” The thought that someone—oneself or another—is a Negro or a homosexual has great personal, social, and political significance in our society. Yet in light of the actual complexity and variety of people’s biological heredity and erotic dispositions these are very crude concepts; they do not correspond to well-defined properties or categories in the real world. Nevertheless, Appiah says, we may find it indispensable to employ them:

In earlier work of my own, for example, I have argued both that races, strictly speaking, don’t exist, and that it is wrong to discriminate on the basis of a person’s race. This can usually be parsed out in a way that is not strictly inconsistent: What is wrong is discrimination against someone because you believe her to be, say, a Negro even though there are, in fact, strictly speaking, no Negroes. But in responding to discrimination with affirmative action, we find ourselves assigning people to racial categories. We think it justified to treat people as if they had races even when we officially believe that they don’t.

These cases do not start out as idealizations. “Negro” and “homosexual” became important social identities because it was widely believed that they were essential properties possessed by some people and not others, and that they had behavioral, social, and moral consequences. Appiah maintains that when someone who does not share these beliefs goes on using the terms, this is not just the verbal acknowledgment of a misguided but tenacious social illusion; it is an example of fictional thinking. We do not truly distance ourselves from these categories and perhaps should not:

Identities, conceived of as stable features of a social ontology grounded in natural facts, are often…assumed in our moral thinking, even though, in our theoretical hearts, we know them not to be real. They are one of our most potent idealizations.

This invites the question: When are these idealizations indispensable, and when on the contrary should we resist them, by appealing to the more complex truth? Appiah addresses this and related questions with great insight in an earlier book, The Ethics of Identity,4 but not here.


Appiah considers another type of idealization that he calls “counter-normative”: thinking or acting as if a moral principle is true although we know it isn’t. He believes we do this when we treat certain prohibitions—against murder or torture, for example—as moral absolutes. His view is that strictly, there are exceptions to any such rule, but it may be better to treat it as exceptionless. In that way we will be sure to avoid unjustified violations, without countervailing risk, since “it is remarkably unlikely that I will ever be in one of those situations where it might be that murder was permissible (and even less likely that I will ever be in one where it is required).” Appiah adds that sometimes the advantage of the fiction will depend on its acceptance not by an individual but by a community. Perhaps the strict rule against making false promises would be an example, since even if it is not universally obeyed, the general belief that it is generally accepted encourages people to trust one another.

Which moral rules one regards as fictions or idealizations will depend on what one believes to be the basis of moral truth. Appiah does not take up this large topic, but his discussion seems most consistent with the view that the ultimate standard of right and wrong is what will produce the best overall outcomes. Counternormative fictions then become useful if we will not achieve the best overall outcomes by aiming in each case at the best overall outcome: it is better to put murder and torture entirely off the table. This is an area of perennial controversy, but those who think the prohibitions on murder, torture, and false promises have a different source, dependent on the intrinsic character of those acts rather than overall outcomes, may be less prone than Appiah to attribute their strictness to idealization.

Appiah concludes with a topic of great philosophical interest, that of idealization in moral theory itself. There is some possibility of confusion here, because he is talking about idealization in a sense somewhat different from that discussed so far.

Every morality is an ideal; it enjoins us to conform to standards of conduct and character that we are often tempted to violate, and it is predictable that ordinary human beings will sometimes fail to conform, even if they accept the morality as correct. This by itself does not involve idealization in Appiah’s sense. The moral principles need depend on no assumptions that are not strictly true. A morality describes not how people do behave but how they should behave; and it has to assume only that they could behave in that way, even if at the moment many of them do not.

The idealization that interests Appiah occurs when political thinkers or philosophers theorize about morality. In developing their accounts, they will often imagine situations or possibilities that differ from what is true in the actual world, as an aid to evaluating moral or political hypotheses. One type of idealization consists in evaluating a moral or political principle by considering what things would be like if everyone complied with it. But as Appiah points out, this is far from decisive:

Consider a familiar kind of dispute. One philosopher—let us call her Dr. Welfare—proposes that we should act in a way that maximizes human well-being. What could be more evident than that this would make for the best world? Another—Prof. Partiality—proposes instead that we should avoid harm to others in general but focus our benevolence on those to whom we have special ties. There is every reason to doubt that this will make a world in which everyone is as well off as could be. But a world in which everyone is succeeding in complying pretty well with Prof. Partiality’s prescription might be better (by standards they share) than a world where most of us are failing pretty miserably to comply with Dr. Welfare’s. And given what people are actually like, one might suppose that these are the likely outcomes.

An ideal that cannot be implemented is futile. The question is, how much of a drag on moral ideals should be exercised by the stubborn facts of human psychology? How far can moral ideals ask us to transcend our self-centered human dispositions without becoming unrealistically utopian? As Appiah says,

Some aspects of human nature have to be taken as given in normative theorizing…, but to take us exactly as we are would involve giving up ideals altogether. So when should we ignore, and when insist on, human nature?

I would suggest that to idealize in this context is not to ignore human nature but to regard it, rightly or wrongly, as capable of change. Only if the change is impossible or undesirable is the idealization utopian.

Appiah illustrates a different kind of reason to avoid excessive idealization with the example of immigration policy. To even pose the problem that faces us we have to take the existence of national boundaries as given, as well as the fact that some states treat their own citizens with flagrant injustice or are beset by chaos and severe deprivation. In thinking about what obligations such a situation places on stable and prosperous states, it is no use imagining a unified world without state boundaries, or a world of uniformly just states in which people are free to move from one to another. Such ideal possibilities do not tell us what we should do now, as things are.

Appiah’s response relies on the idea of fortunate nations each doing their fair share toward alleviating the plight of those seeking asylum, while acknowledging that many nations probably won’t meet this standard. This too is an ideal, but it doesn’t depend on imagining a world very different from the actual one.

Immigration is a special case, but Appiah deploys a more general form of the argument—unsuccessfully, in my view—to criticize the structure of John Rawls’s theory of justice. Rawls presents his most general principles of justice by the device of what he called “ideal theory.” That is, he tries to describe the structure and functioning of a fully just or “well-ordered” society, in which “everyone is presumed to act justly and to do his part in upholding just institutions.” Rawls held that ideal theory was the natural first stage in formulating principles of justice, before proceeding to a systematic treatment of the various forms of injustice and the right ways to deal with them—such as criminal law and principles of rectification. The latter enterprise he described as “nonideal theory,” and he held that it depends on the results of ideal theory.

Appiah objects that the description of a fully just society is no help with the problem we actually face, which is how to make improvements in our actual, seriously unjust society. He adds:

The history of our collective moral learning doesn’t start with the growing acceptance of a picture of an ideal society. It starts with the rejection of some current actual practice or structure, which we come to see as wrong. You learn to be in favor of equality by noticing what is wrong with unequal treatment of blacks, or women, or working-class or lower-caste people. You learn to be in favor of freedom by seeing what is wrong in the life of the enslaved or of women in purdah.

But this is misguided as a response to Rawls, whose method in moral theory is to begin precisely with intuitively obvious examples of injustice like those Appiah cites. Rawls’s philosophical project is to discover general principles that give a morally illuminating account of what is wrong in those cases by showing how they deviate from the standards that we should want to govern our society. Such general principles are needed to help us judge what would be right in less obvious cases. Both levels of inquiry are essential to the systematic pursuit and philosophical understanding of justice, and the whole aim of Rawls’s theory is to unite them. It is highly implausible to claim that an understanding of the general principles that would govern a fully just society will not help us to decide what kinds of social or legal or economic changes to our actual society will make it more just.

There is much more in this rich and illuminating book, including a fine discussion of our emotional response to fiction and drama. Appiah’s insight is that when we feel genuine sadness at the death of Ophelia, it is not because of what Coleridge called the “willing suspension of disbelief,” but because of the suspension of “the normal affective response to disbelief.” We react as if we believe an unhappy young woman has died, although we do not believe it, so this is another case of idealization.

The examples that Appiah discusses are interesting in themselves, but he also thinks they offer a larger lesson:

Once we come to see that many of our best theories are idealizations, we will also see why our best chance of understanding the world must be to have a plurality of ways of thinking about it. This book is about why we need a multitude of pictures of the world. It is a gentle jeremiad against theoretical monism.

It isn’t just that we need different theories for different aspects of the world, but that our best understanding may come from theories or models that are not strictly true, and some of which may contradict one another. This is a liberating outlook, though care must be taken not to let it become too liberating. As Appiah insists, we should not allow the plurality of useful theories to undermine our belief in the existence of the truth, leaving us with nothing but a disparate collection of stories. It is conscious deviation from the truth that makes a theory an idealization, and keeping this in mind is a condition of its value.

1. The Philosophy of “As If”: A System of the Theoretical, Practical and Religious Fictions of Mankind, translated by C.K. Ogden (Harcourt, Brace, 1924).

2. At several points he references the philosopher of science Nancy Cartwright, who explored the phenomenon in her book How the Laws of Physics Lie (Oxford University Press, 1983).

3. For example, if your subjective probability of rain is 0.4 and your subjective desirabilities for the four possibilities are +1, –1, –6, and +2, then the expected values are +0.4, –0.6, –2.4, and +1.2. This makes the expected value for you of taking an umbrella –0.2 and of not taking one –1.2, so it’s rational to take one. (The arithmetic is spelled out just after these notes.)

4. Princeton University Press, 2005.
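To spell out the arithmetic in note 3 (a minimal worked version; the pairing of the four desirabilities with the four umbrella-and-weather combinations is inferred from the numbers, since the note does not label them: umbrella and rain, umbrella and no rain, no umbrella and rain, no umbrella and no rain):

$$E(\text{umbrella}) = 0.4\,(+1) + 0.6\,(-1) = -0.2, \qquad E(\text{no umbrella}) = 0.4\,(-6) + 0.6\,(+2) = -1.2.$$

Since –0.2 is greater than –1.2, taking the umbrella has the higher expected value, which is why it is rational to take one.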

Jordan Peterson & Fascist Mysticism

Carlos Osorio/Toronto Star via Getty Images: Jordan Peterson, Toronto, December 2016

“Men have to toughen up,” Jordan B. Peterson writes in 12 Rules for Life: An Antidote to Chaos. “Men demand it, and women want it.” So, the first rule is, “Stand up straight with your shoulders back” and don’t forget to “clean your room.” By the way, “consciousness is symbolically masculine and has been since the beginning of time.” Oh, and “the soul of the individual eternally hungers for the heroism of genuine Being.” Many such pronouncements—didactic as well as metaphysical, ranging from the absurdity of political correctness to the “burden of Being”—have turned Peterson, a professor of psychology at the University of Toronto, into a YouTube sensation and a bestselling author in several Western countries.

12 Rules for Life is only Peterson’s second book in twenty years. Packaged for people brought up on BuzzFeed listicles, Peterson’s brand of intellectual populism has risen with stunning velocity; and it is boosted, like the political populisms of our time, by predominantly male and frenzied followers, who seem ever-ready to pummel his critics on social media. It is imperative to ask why and how this obscure Canadian academic, who insists that gender and class hierarchies are ordained by nature and validated by science, has suddenly come to be hailed as the West’s most influential public intellectual. For his apotheosis speaks of a crisis that is at least as deep as the one signified by Donald Trump’s unexpected leadership of the free world.

Peterson diagnoses this crisis as a loss of faith in old verities. “In the West,” he writes, “we have been withdrawing from our tradition-, religion- and even nation-centred cultures.” Peterson offers to alleviate the resulting “desperation of meaninglessness,” with a return to “ancient wisdom.” It is possible to avoid “nihilism,” he asserts, and “to find sufficient meaning in individual consciousness and experience” with the help of “the great myths and religious stories of the past.”

Following Carl Jung, Peterson identifies “archetypes” in myths, dreams, and religions, which have apparently defined truths of the human condition since the beginning of time. “Culture,” one of his typical arguments goes, “is symbolically, archetypally, mythically male”—and this is why resistance to male dominance is unnatural. Men represent order, and “Chaos—the unknown—is symbolically associated with the feminine.” In other words, men resisting the perennially fixed archetypes of male and female, and failing to toughen up, are pathetic losers.

Such evidently eternal truths are not on offer anymore at a modern university; Jung’s speculations have been largely discredited. But Peterson, armed with his “maps of meaning” (the title of his previous book), has only contempt for his fellow academics who tend to emphasize the socially constructed and provisional nature of our perceptions. As with Jung, he presents some idiosyncratic quasi-religious opinions as empirical science, frequently appealing to evolutionary psychology to support his ancient wisdom.

Closer examination, however, reveals Peterson’s ageless insights as a typical, if not archetypal, product of our own times: right-wing pieties seductively mythologized for our current lost generations.

Peterson himself credits his intellectual awakening to the Cold War, when he began to ponder deeply such “evils associated with belief” as Hitler, Stalin, and Mao, and became a close reader of Solzhenitsyn’s The Gulag Archipelago. This is a common intellectual trajectory among Western right-wingers who swear by Solzhenitsyn and tend to imply that belief in egalitarianism leads straight to the guillotine or the Gulag. A recent example is the English polemicist Douglas Murray, who deplores the attraction of the young to Bernie Sanders and Elizabeth Warren and wishes that the idea of equality were “tainted by an ideological ordure equivalent to that heaped on the concept of borders.” Peterson confirms his membership in this far-right sect by never identifying the evils caused by belief in profit, or Mammon: slavery, genocide, and imperialism.

Reactionary white men will surely be thrilled by Peterson’s loathing for “social justice warriors” and his claim that divorce laws should not have been liberalized in the 1960s. Those embattled against political correctness on university campuses will heartily endorse Peterson’s claim that “there are whole disciplines in universities forthrightly hostile towards men.” Islamophobes will take heart from his speculation that “feminists avoid criticizing Islam because they unconsciously long for masculine dominance.” Libertarians will cheer Peterson’s glorification of the individual striver, and his stern message to the left-behinds (“Maybe it’s not the world that’s at fault. Maybe it’s you. You’ve failed to make the mark.”). The demagogues of our age don’t read much; but, as they ruthlessly crack down on refugees and immigrants, they can derive much philosophical backup from Peterson’s sub-chapter headings: “Compassion as a vice” and “Toughen up, you weasel.” 

In all respects, Peterson’s ancient wisdom is unmistakably modern. The “tradition” he promotes stretches no further back than the late nineteenth century, when there first emerged a sinister correlation between intellectual exhortations to toughen up and strongman politics. This was a period during which intellectual quacks flourished by hawking creeds of redemption and purification while political and economic crises deepened and faith in democracy and capitalism faltered. Many artists and thinkers—ranging from the German philosopher Ludwig Klages, a member of the hugely influential Munich Cosmic Circle, to the Russian painter Nicholas Roerich and the Indian activist Aurobindo Ghosh—assembled Peterson-style collages of part-occultist, part-psychological, and part-biological notions. These neo-romantics were responding, in the same way as Peterson, to an urgent need, springing from a traumatic experience of social and economic modernity, to believe—in whatever reassures and comforts.

This new object of belief tended to be exotically and esoterically pre-modern. The East, and India in particular, turned into a screen on which needy Westerners projected their fantasies; Jung, among many others, went on tediously about the Indian’s timeless—and feminine—self. In 1910, Romain Rolland summed up the widespread mood in which progress under liberal auspices appeared a sham, and many people appeared eager to replace the Enlightenment ideal of individual reason by such transcendental coordinates as “archetypes.” “The gate of dreams had reopened,” Rolland wrote, and “in the train of religion came little puffs of theosophy, mysticism, esoteric faith, occultism to visit the chambers of the Western mind.”

A range of intellectual entrepreneurs, from Theosophists and vendors of Asian spirituality like Vivekananda and D.T. Suzuki to scholars of Asia like Arthur Waley and fascist ideologues like Julius Evola (Steve Bannon’s guru), set up stalls in the new marketplace of ideas. W.B. Yeats, adjusting Indian philosophy to the needs of the Celtic Revival, pontificated on the “Ancient Self”; Jung spun his own variations on this evidently ancestral unconscious. Such conceptually foggy categories as “spirit” and “intuition” acquired broad currency; Peterson’s favorite words, being and chaos, started to appear in capital letters. Peterson’s own lineage among these healers of modern man’s soul can be traced through his repeatedly invoked influences: not only Carl Jung, but also Mircea Eliade, the Romanian scholar of religion, and Joseph Campbell, a professor at Sarah Lawrence College, who, like Peterson, combined a conventional academic career with mass-market musings on heroic individuals.

The “desperation of meaninglessness” widely felt in the late nineteenth century seemed especially acute in the years following two world wars and the Holocaust. Jung, Eliade, and Campbell, all credentialed by university education, met a general bewilderment by suggesting the existence of a secret, almost gnostic, knowledge of the world. Claiming to throw light into recessed places in the human unconscious, they acquired immense and fanatically loyal fan clubs. Campbell’s 1988 television interviews with Bill Moyers provoked a particularly extraordinary response. As with Peterson, this popularizer of archaic myths, who believed that “Marxist philosophy had overtaken the university in America,” was remarkably in tune with contemporary prejudices. “Follow your own bliss,” he urged an audience that, during an era of neoconservative upsurge, was ready to be reassured that some profound ancient wisdom lay behind Ayn Rand’s paeans to unfettered individualism.

Peterson, however, seems to have modeled his public persona on Jung rather than Campbell. The Swiss sage sported a ring ornamented with the effigy of a snake—the symbol of light in a pre-Christian Gnostic cult. Peterson claims that he has been inducted into “the coastal Pacific Kwakwaka’wakw tribe”; he is clearly proud of the Native American longhouse he has built in his Toronto home.

Peterson may seem the latest in a long line of eggheads pretentiously but harmlessly romancing the noble savage. But it is worth remembering that Jung recklessly generalized about the superior “Aryan soul” and the inferior “Jewish psyche” and was initially sympathetic to the Nazis. Mircea Eliade was a devotee of Romania’s fascistic Iron Guard. Campbell’s loathing of “Marxist” academics at his college concealed a virulent hatred of Jews and blacks. Solzhenitsyn, Peterson’s revered mentor, was a zealous Russian expansionist, who denounced Ukraine’s independence and hailed Vladimir Putin as the right man to lead Russia’s overdue regeneration.

Nowhere in his published writings does Peterson reckon with the moral fiascos of his gurus and their political ramifications; he seems unbothered by the fact that thinking of human relations in such terms as dominance and hierarchy connects too easily with such nascent viciousness as misogyny, anti-Semitism, and Islamophobia. He might argue that his maps of meaning aim at helping lost individuals rather than racists, ultra-nationalists, or imperialists. But he can’t plausibly claim, given his oft-expressed hostility to the “murderous equity doctrine” of feminists, and other progressive ideas, that he is above the fray of our ideological and culture wars.

Indeed, the modern fascination with myth has never been free from an illiberal and anti-democratic agenda. Richard Wagner, along with many German nationalists, became notorious for using myth to regenerate the volk and stoke hatred of the aliens—largely Jews—who he thought polluted the pure community rooted in blood and soil. By the early twentieth century, ethnic-racial chauvinists everywhere—Hindu supremacists in India as well as Catholic ultra-nationalists in France—were offering visions to uprooted peoples of a rooted organic society in which hierarchies and values had been stable. As Karla Poewe points out in New Religions and the Nazis (2005), political cultists would typically mix “pieces of Yogic and Abrahamic traditions” with “popular notions of science—or rather pseudo-science—such as concepts of ‘race,’ ‘eugenics,’ or ‘evolution.’” It was this opportunistic amalgam of ideas that helped nourish “new mythologies of would-be totalitarian regimes.”

Peterson rails today against “softness,” arguing that men have been “pushed too hard to feminize.” In his bestselling book Degeneration (1892), the Zionist critic Max Nordau amplified, more than a century before Peterson, the fear that the empires and nations of the West are populated by the weak-willed, the effeminate, and the degenerate. The French philosopher Georges Sorel identified myth as the necessary antidote to decadence and spur to rejuvenation. An intellectual inspiration to fascists across Europe, Sorel was particularly nostalgic about the patriarchal systems of ancient Israel and Greece.

Like Peterson, many of these hyper-masculinist thinkers saw compassion as a vice and urged insecure men to harden their hearts against the weak (women and minorities) on the grounds that the latter were biologically and culturally inferior. Hailing myth and dreams as the repository of fundamental human truths, they became popular because they addressed a widely felt spiritual hunger: of men looking desperately for maps of meaning in a world they found opaque and uncontrollable.

It was against this (eerily familiar) background—a “revolt against the modern world,” as the title of Evola’s 1934 book put it—that demagogues emerged so quickly in twentieth-century Europe and managed to exalt national and racial myths as the true source of individual and collective health. The drastic individual makeover demanded by the visionaries turned out to require a mass, coerced retreat from failed liberal modernity into an idealized traditional realm of myth and ritual.

In the end, deskbound pedants and fantasists helped bring about, in Thomas Mann’s words in 1936, an extensive “moral devastation” with their “worship of the unconscious”—that “knows no values, no good or evil, no morality.” Nothing less than the foundations for knowledge and ethics, politics and science, collapsed, ultimately triggering the cataclysms of the twentieth century: two world wars, totalitarian regimes, and the Holocaust. It is no exaggeration to say that we are in the midst of a similar intellectual and moral breakdown, one that seems to presage a great calamity. Peterson calls it, correctly, “psychological and social dissolution.” But he is a disturbing symptom of the malaise to which he promises a cure. 

In the Review Archives: 1966–1968

Hervé Gloaguen/Gamma-Rapho via Getty Images: Members of the Velvet Underground John Cale and Nico, with Gerard Malanga and Andy Warhol, in New York City, circa 1966

Fifty-five years ago, The New York Review published its first issue. To celebrate the magazine’s emerald anniversary, in 2018 we will be going through the archives year by year, featuring some of the notable, important, and sometimes forgotten pieces that appeared in its pages. You can follow us on social media (Facebook and Twitter) for links to archival highlights along with the newest articles, and you can sign up for our twice-weekly email newsletter for periodic updates.

Stokely Carmichael: What We Want

In September 1966, the twenty-five-year-old civil rights activist and head of the Student Non-Violent Coordinating Committee published this essay defining the goals of the black power movement.

Getty Images: Stokely Carmichael, in Atlanta, Georgia, 1966

For too many years, black Americans marched and had their heads broken and got shot. They were saying to the country, “Look, you guys are supposed to be nice guys and we are only going to do what we are supposed to do—why do you beat us up, why don’t you give us what we ask, why don’t you straighten yourselves out?” After years of this, we are at almost the same point—because we demonstrated from a position of weakness. We cannot be expected any longer to march and have our heads broken in order to say to whites: come on, you’re nice guys. For you are not nice guys. We have found you out.


“The Responsibility of Intellectuals”: An Exchange

In the Review’s February 23, 1967 issue, Noam Chomsky published a 12,000-word essay on “The Responsibility of Intellectuals.” “We can hardly avoid asking ourselves,” Chomsky wrote, “to what extent the American people bear responsibility for the savage American assault on a largely helpless rural population in Vietnam… As for those of us who stood by in silence and apathy as this catastrophe slowly took shape over the past dozen years—on what page of history do we find our proper place?” In this exchange later that spring, George Steiner pressed Chomsky on the question of what political or personal actions ought to be taken to end the war.

GS: I write to express my admiration for your lucid and compelling essay. But I write also to ask what your next paragraph would be? The mendacities which surround us need exposure. But what then? You rightly say that we are all responsible; you rightly hint that our future status may be no better than that of the acquiescent intellectual under Nazism. But what action do you urge or even suggest?

David Levine: Noam Chomsky, 1972

NC: I do feel that the crucial question, unanswered in the article, is what the next paragraph should say. I’ve thought a good deal about this, without having reached any satisfying conclusions. I’ve tried various things—harassing congressmen, “lobbying” in Washington, lecturing at town forums, working with student groups in preparation of public protests, demonstrations, teach-ins, etc., in all of the ways that many others have adopted as well. The only respect in which I have personally gone any further is in refusal to pay half of my income tax last year, and again, this year. My own feeling is that one should refuse to participate in any activity that implements American aggression—thus tax refusal, draft refusal, avoidance of work that can be used by the agencies of militarism and repression, all seem to me essential.


The Music of the Beatles

“I and my colleagues have been happily torn from a long nap by the energy of rock,” composer Ned Rorem wrote in January 1968, “principally as embodied in the Beatles. Naturally I’ve grown curious about their energy. What are its origins? What need does it fill?”

Mark and Colleen Hayward/Getty Images: The Beatles presenting their new album, “Sgt. Pepper’s Lonely Hearts Club Band,” in London, May 1967

I never go to classical concerts any more and I don’t know anyone who does. It’s hard still to care whether some virtuoso tonight will perform the Moonlight Sonata a bit better or a bit worse than another virtuoso performed it last night. But I do often attend what used to be called avant-garde recitals, though seldom with delight, and inevitably I look around and wonder: what am I doing here? Where are the poets and painters and even the composers themselves who used to flock to these things? Well, perhaps what I am doing here is a duty, keeping an ear on my profession so as to justify the joys of resentment, to steal an idea or two, or just to show charity toward some friend on the program. But I learn less and less. Meanwhile the absent artists are at home playing records; they are reacting again, finally, to something they no longer find at concerts. Reacting to what? Why, to the Beatles, of course, whose arrival I believe is one of the most healthy events in music since 1950.

Grown Men Reading ‘Nancy’

Fantagraphics Books: A panel from Nancy, August 8, 1959

One of the defining traits of 1980s New York City postmodernist writing and painting was the urge to deconstruct. This extended to the comics medium in Art Spiegelman and Françoise Mouly’s Raw, an oversized anthology magazine that serialized Maus and introduced readers to “art comics” from around the world. Spiegelman’s experimental work looked like exploded pages of Sunday cartoon battles between what was then considered “low” and “high” art. Richard McGuire’s short story “Here” dissected a single room across time using a panel-in-panel device also seen in the work of 1980s painters like David Salle and Robert Longo. Gary Panter drew apocalyptic nightmares that dismantled and intuitively reconstructed drawing modes from Picasso to Jack Kirby.

In the midst of all of this deconstruction was a renewed interest among cartoonists in a humble, plain-looking gag strip that began in the 1930s: Ernie Bushmiller’s Nancy. Nancy follows an eight-year-old suburban girl as she solves mundane problems and interacts with Sluggo, a fellow prankster and sometimes romantic interest. Bushmiller (born in 1905) drew it for most of his life, with each strip as a self-contained “gag”—a single joke that could be easily digested as the reader glanced across the strip. The imagery and jokes are so prototypical and simple that the American Heritage Dictionary uses it to illustrate the meaning of “comic strip.” The appeal of Nancy to the art comic crowd might seem counter-intuitive, but while Nancy was never particularly clever, it was always cleverly constructed. In fact, the accomplishment of Nancy, with its refined, reduced lines and preoccupation with plungers and faucets, might primarily be a matter of form. As Bill Griffith (Zippy the Pinhead, also Raw) wrote in his 2012 introduction to a collected Nancy volume: “Nancy doesn’t tell us much about what it’s like to be a kid. What Nancy tells us is what it’s like to be a comic strip.”

Mark Newgarden: “Love’s Savage Fury” (center spread) by Mark Newgarden, RAW magazine #8, 1986

Nancy became a touchstone for artists to appropriate, distort, and transform. In Raw, Mark Newgarden’s 1986 comic Love’s Savage Fury depicted a Nancy whose minimal facial features rearrange while Bazooka Joe, a Topps bubblegum package mascot, eyes her across a NYC subway. Newgarden (who worked at Topps and co-created The Garbage Pail Kids) and Paul Karasik (a Raw associate editor and cartoonist who would go on to co-write the graphic-novel adaptation of Paul Auster’s City of Glass) then collaborated on a 1988 essay titled “How to Read Nancy” that deconstructed the elements of a single 1959 Nancy gag in nine ways across eight pages. By isolating elements of the comic, they explored how each piece supported the entire gag—for example, solely the dialogue of the strip; then solely the spotted blacks; then the arc of the horizon line, etc.

Jerry Moriarty: From Jack Survives by Jerry Moriarty, 1984

What Newgarden and Karasik’s essay did was explain how very deliberate every decision in Bushmiller’s composition of Nancy was. Every mark in this 1.5- by 5-inch space was in perfect concert with the whole. For example, the placement of the blacks of Nancy’s hair and Sluggo’s shirt creates a graphic band that transitions into the long hose line that swiftly directs our eyes to the culmination. Sluggo faces left, then right, then left again, creating a visual counterpoint to his dialogue and a rhythm that leads to the gag’s climax. Everything serves a single gag, and the mechanics to deliver that gag are clearly visible. These same mechanics of comics design and pacing can be seen in Chris Ware and Carol Tyler’s novelistic stories, or most strongly in Jerry Moriarty’s painterly, poetic comic shorts, collected in his books Jack Survives and Whatsa Paintoonist.

Fantagraphics Books: A comic strip by Ernie Bushmiller from Judge magazine, April 13, 1929

Three decades later, in an epic feat of comics fandom, research, and obsession, Newgarden and Karasik have expanded that essay into a 274-page book examining over forty elements of the same 1959 gag. This gag comic strip now joins the ranks of works of art that have entire books dedicated to them. What Newgarden and Karasik have done here is clearly, methodically, often hilariously explained the unique beauty and craft of comics. This book-length How to Read Nancy also includes a short biography of Bushmiller and a history of hose-themed gags at the end (providing examples of drawn gags that climax with erupting hoses dating back to the late 1800s). It also reprints some of Bushmiller’s early, pre-Nancy works, including this gem.

Today, comics are studied in colleges and reviewed in prominent magazines, but they are often discussed either as vessels for urgent, personal stories or as objects filled with beautiful, unusual graphics. They are rarely discussed or reviewed for their “cartooning”: the particular panel-to-panel magic, the arrangement of elements that mysteriously combines reading and looking and distinguishes a masterful comic like Nancy from the rest. Beautiful cartooning affects a comic the way a well-chosen word, arriving at the right time in a sentence, makes for good writing, or the way a room composed with the right combination of things in the exact right places is good interior design.

Fantagraphics Books: The Nancy comic strip, originally published August 8, 1959, that is used to isolate different aspects of the strip in How to Read Nancy, 2017

Fantagraphics Books: “Spotting Blacks” in The Cartoonist’s Eye: “Black ink is one of the cartoonist’s best friends—and most powerful tools…,” from How to Read Nancy, 2017

Fantagraphics Books: “The Hose” in Props & Special Effects: “Certain objects seem to innately lend themselves to makers of visual humor…,” from How to Read Nancy, 2017

For instance, one chapter of How to Read Nancy, titled “The Leaky Spigot,” focuses on the number of droplets placed around the spigot at the center of the strip. Four droplets communicate that there is a great deal of pressure pulsing through the hose. The greater the pressure, the more rewarding Nancy’s vengeance will be. Two or three droplets would not imply this strength of pressure. Five might suggest a malfunction, and would break the graphic symmetry of the design. Karasik and Newgarden also note that the droplets to the right are slightly smaller and therefore in spatial perspective. Every element of the strip is analyzed to this degree of fascinating and humorous detail.

By the time I read Nancy, in the 2000s, it was already a revered comic strip, published in hardcover collections. Admired alternative graphic novelists had long sung its praises. Nevertheless, Nancy is so unpretentious that its subtle genius always feels to new generations of readers like a fresh discovery. Nancy occupies a surreal, vague landscape of fences, sidewalks, and grass. Strips rarely address current issues or events, except for regular holidays like Valentine’s Day or Labor Day. Because the strip sidesteps topical issues, its timelessness allows one to project contemporary issues and symbols onto it. Reading the 1959 strip in 2017, I found it hard not to see a woman with an afro about to take down a penis-gun-squirting bully.

Fantagraphics Books: Nancy, March 17, 1960

Nancy was also important to Scott McCloud, whose 1993 Understanding Comics remains the most popular comics-studies book. Understanding Comics has a big-hearted enthusiasm for the potential of the comics art form, and it inspired a generation of cartoonists to be more formally playful. McCloud devised his own Nancy card game and encouraged students to use the strip as a tool for further experimentation. Rather than take that approach, How to Read Nancy is closer to Hitchcock/Truffaut (1966), in which younger artists examine the work of a master. Just as Hitchcock’s techniques illuminate the medium of film to a degree most filmmakers do not, Bushmiller’s illuminate comics.

The beauty of cartooning may be difficult to appreciate, especially for those who have not been immersed in the form for years. By dissecting this gag strip so systematically, How to Read Nancy is important for people working in comics, and it also helps the medium as a whole to be understood and recognized as the unique art form that it is.

Fantagraphics Books: Nancy, March 4, 1970

How to Read Nancy, by Mark Newgarden and Paul Karasik, is published by Fantagraphics Books.

Ivan Ilyin, Putin’s Philosopher of Russian Fascism

This is an expanded version of Timothy Snyder’s essay “God Is a Russian” in the April 5, 2018 issue of The New York Review.

Fine Art Images/Heritage Images/Getty Images: Ivan Ilyin, circa 1920

“The fact of the matter is that fascism is a redemptive excess of patriotic arbitrariness.”

—Ivan Ilyin, 1927

“My prayer is like a sword. And my sword is like a prayer.”

—Ivan Ilyin, 1927

“Politics is the art of identifying and neutralizing the enemy.”

—Ivan Ilyin, 1948

The Russian looked Satan in the eye, put God on the psychoanalyst’s couch, and understood that his nation could redeem the world. An agonized God told the Russian a story of failure. In the beginning was the Word, purity and perfection, and the Word was God. But then God made a youthful mistake. He created the world to complete himself, but instead soiled himself, and hid in shame. God’s, not Adam’s, was the original sin, the release of the imperfect. Once people were in the world, they apprehended facts and experienced feelings that could not be reassembled into what had been God’s mind. Each individual thought or passion deepened the hold of Satan on the world.

And so the Russian, a philosopher, understood history as a disgrace. Nothing that had happened since creation was of significance. The world was a meaningless farrago of fragments. The more humans sought to understand it, the more sinful it became. Modern society, with its pluralism and its civil society, deepened the flaws of the world and kept God in his exile. God’s one hope was that a righteous nation would follow a Leader into political totality, and thereby begin a repair of the world that might in turn redeem the divine. Because the unifying principle of the Word was the only good in the universe, any means that might bring about its return were justified.

Thus this Russian philosopher, whose name was Ivan Ilyin, came to imagine a Russian Christian fascism. Born in 1883, he finished a dissertation on God’s worldly failure just before the Russian Revolution of 1917. Expelled from his homeland in 1922 by the Soviet power he despised, he embraced the cause of Benito Mussolini and completed an apology for political violence in 1925. In German and Swiss exile, he wrote in the 1920s and 1930s for White Russian exiles who had fled after defeat in the Russian civil war, and in the 1940s and 1950s for future Russians who would see the end of the Soviet power.

A tireless worker, Ilyin produced about twenty books in Russian, and another twenty in German. Some of his work has a rambling and commonsensical character, and it is easy to find tensions and contradictions. One current of thought that is coherent over the decades, however, is his metaphysical and moral justification for political totalitarianism, which he expressed in practical outlines for a fascist state. A crucial concept was “law” or “legal consciousness” (pravosoznanie). For the young Ilyin, writing before the Revolution, law embodied the hope that Russians would partake in a universal consciousness that would allow Russia to create a modern state. For the mature, counter-revolutionary Ilyin, a particular consciousness (“heart” or “soul,” not “mind”) permitted Russians to experience the arbitrary claims of power as law. Though Ilyin died forgotten in 1954, his work was revived after the collapse of the Soviet Union in 1991, and it guides the men who rule Russia today.

The Russian Federation of the early twenty-first century is a new country, formed in 1991 from the territory of the Russian republic of the Soviet Union. It is smaller than the old Russian Empire, and separated from it in time by the intervening seven decades of Soviet history. Yet the Russian Federation of today does resemble the Russian Empire of Ilyin’s youth in one crucial respect: it has not established the rule of law as the principle of government. The trajectory in Ilyin’s understanding of law, from hopeful universalism to arbitrary nationalism, was followed in the discourse of Russian politicians, including Vladimir Putin. Because Ilyin found ways to present the failure of the rule of law as Russian virtue, Russian kleptocrats use his ideas to portray economic inequality as national innocence. In the last few years, Vladimir Putin has also used some of Ilyin’s more specific ideas about geopolitics in his effort to translate the task of Russian politics from the pursuit of reform at home to the export of virtue abroad. By transforming international politics into a discussion of “spiritual threats,” Ilyin’s works have helped Russian elites to portray Ukraine, Europe, and the United States as existential dangers to Russia.


Ivan Ilyin was a philosopher who confronted Russian problems with German thinkers. This was typical of the time and place. He was a child of the Silver Age, the late empire of the Romanov dynasty. His father was a Russian nobleman, his mother a German Protestant who had converted to Orthodoxy. As a student at Moscow University between 1901 and 1906, Ilyin made philosophy his real subject, which meant the ethical thought of Immanuel Kant (1724–1804). For the neo-Kantians, who then held sway in universities across Europe as well as in Russia, humans differed from the rest of creation by a capacity for reason that permitted meaningful choices. Humans could then freely submit to law, since they could grasp and accept its spirit.

Law was then the great object of desire of the Russian thinking classes. Russian students of law, perhaps more than their European colleagues, could see it as a source of political transformation. Law seemed to offer the antidote to the ancient Russian problem of proizvol, of arbitrary rule by autocratic tsars. Even as a hopeful young man, however, Ilyin struggled to see the Russian people as the creatures of reason Kant imagined. He waited expectantly for a grand revolt that would hasten the education of the Russian masses. When the Russo-Japanese War created conditions for a revolution in 1905, Ilyin defended the right to free assembly. With his girlfriend, Natalia Vokach, he translated a German anarchist pamphlet into Russian. The tsar was forced to concede a new constitution in 1906, which created a new Russian parliament. Though chosen in a way that guaranteed the power of the empire’s landed classes, the parliament had the authority to legislate. The tsar dismissed parliament twice, and then illegally changed the electoral system to ensure that it was even more conservative. It was impossible to see the new constitution as having brought the rule of law to Russia.

Employed to teach law by the university in 1909, Ilyin published a beautiful article in both Russian (1910) and German (1912) on the conceptual differences between law and power. Yet how to make law functional in practice and resonant in life? Kant seemed to leave open a gap between the spirit of law and the reality of autocracy. G.W.F. Hegel (1770–1831), however, offered hope by proposing that this and other painful tensions would be resolved by time. History, as a hopeful Ilyin read Hegel, was the gradual penetration of Spirit (Geist) into the world. Each age transcended the previous one and brought a crisis that promised the next one. The beastly masses would come to resemble the enlightened friends; the ardors of daily life would yield to political order.

The philosopher who understands this message becomes the vehicle of Spirit, always a tempting prospect. Like other Russian intellectuals of his own and previous generations, the young Ilyin was drawn to Hegel, and in 1912 proclaimed a “Hegelian renaissance.” Yet, just as the immense Russian peasantry had given him second thoughts about the ease of communicating law to Russian society, so his experience of modern urban life left him doubtful that historical change was only a matter of Spirit. He found Russians, even those of his own class and milieu in Moscow, to be disgustingly corporeal. In arguments about philosophy and politics in the 1910s, he accused his opponents of “sexual perversion.”

In 1913, Ilyin worried that perversion was a national Russian syndrome, and proposed Sigmund Freud (1856–1939) as Russia’s savior. In Ilyin’s reading of Freud, civilization arose from a collective agreement to suppress basic drives. The individual paid a psychological price for sacrifice of his nature to culture. Only through long consultations on the couch of the psychoanalyst could unconscious experience surface into awareness. Psychoanalysis therefore offered a very different portrait of thought than did the Hegelian philosophy that Ilyin was then studying. Even as Ilyin was preparing his dissertation on Hegel, he offered himself as the pioneer of Russia’s national psychotherapy, traveling with Natalia to Vienna in May 1914 for sessions with Freud. Thus the outbreak of World War I found Ilyin in Vienna, the capital of the Habsburg monarchy, now one of Russia’s enemies.

“My inner Germans,” Ilyin wrote to a friend in 1915, “trouble me more than the outer Germans,” the German and Habsburg realms making war against the Russian Empire. The “inner German” who helped Ilyin to master the others was the philosopher Edmund Husserl, with whom he had studied in Göttingen in 1911. Husserl (1859–1938), the founder of the school of thought known as phenomenology, tried to describe the method by which the philosopher thinks himself into the world. The philosopher sought to forget his own personality and prior assumptions, and tried to experience a subject on its own terms. As Ilyin put it, the philosopher must mentally possess (perezhit’) the object of inquiry until he attains self-evident and exhaustive clarity (ochevidnost).

Husserl’s method was simplified by Ilyin into a “philosophical act” whereby the philosopher can still the universe and anything in it—other philosophers, the world, God—by stilling his own mind. Like an Orthodox believer contemplating an icon, Ilyin believed (in contrast to Husserl) that he could see a metaphysical reality through a physical one. As he wrote his dissertation about Hegel, he perceived the divine subject in a philosophical text, and fixed it in place. Hegel meant God when he wrote Spirit, concluded Ilyin, and Hegel was wrong to see motion in history. God could not realize himself in the world, since the substance of God was irreconcilably different from the substance of the world. Hegel could not show that every fact was connected to a principle, that every accident was part of a design, that every detail was part of a whole, and so on. God had initiated history and then been blocked from further influence.

Ilyin was quite typical of Russian intellectuals in his rapid and enthusiastic embrace of contradictory German ideas. In his dissertation he was able, thanks to his own very specific understanding of Husserl, to bring some order to his “inner Germans.” Kant had suggested the initial problem for a Russian political thinker: how to establish the rule of law. Hegel had seemed to provide a solution, a Spirit advancing through history. Freud had redefined Russia’s problem as sexual rather than spiritual. Husserl allowed Ilyin to transfer the responsibility for political failure and sexual unease to God. Philosophy meant the contemplation that allowed contact with God and began God’s cure. The philosopher had taken control and all was in view: other philosophers, the world, God. Yet, even after contact was made with the divine, history continued; “the current of events” continued to flow.

Indeed, even as Ilyin contemplated God, men were killing and dying by the millions on battlefields across Europe. Ilyin was writing his dissertation as the Russian Empire gained and then lost territory on the Eastern Front of World War I. In February 1917, the tsarist regime was replaced by a new constitutional order. The new government tottered as it continued a costly war. That April, Germany sent Vladimir Lenin to Russia in a sealed train, and his Bolsheviks carried out a second revolution in November, promising land to peasants and peace to all. Ilyin was meanwhile trying to assemble the committee so he could defend his dissertation. By the time he did so, in 1918, the Bolsheviks were in power, their Red Army was fighting a civil war, and the Cheka was defending revolution through terror.

World War I gave revolutionaries their chance, and so opened the way for counter-revolutionaries as well. Throughout Europe, men of the far right saw the Bolshevik Revolution as a certain kind of opportunity; and the drama of revolution and counter-revolution was played out, with different outcomes, in Germany, Hungary, and Italy. Nowhere was the conflict so long, bloody, and passionate as in the lands of the former Russian Empire, where civil war lasted for years, brought famine and pogroms, and cost about as many lives as World War I itself. In Europe in general, but in Russia in particular, the terrible loss of life, the seemingly endless strife, and the fall of empire brought a certain plausibility to ideas that might otherwise have remained unknown or seemed irrelevant. Without the war, Leninism would likely be a footnote in the history of Marxist thought; without Lenin’s revolution, Ilyin might not have drawn right-wing political conclusions from his dissertation.

Lenin and Ilyin did not know each other, but their encounter in revolution and counter-revolution was nevertheless uncanny. Lenin’s patronymic was “Ilyich,” and he wrote under the pseudonym “Ilyin”; the real Ilyin reviewed some of that pseudonymous work. When Ilyin was arrested by the Cheka as an opponent of the revolution, Lenin intervened on his behalf as a gesture of respect for Ilyin’s philosophical work. The intellectual interaction between the two men, which began in 1917 and continues in Russia today, arose from a common appreciation of Hegel’s promise of totality. Both men interpreted Hegel in radical ways, agreeing with one another on important points such as the need to destroy the middle classes, disagreeing about the final form of the classless community.

Lenin accepted with Hegel that history was a story of progress through conflict. As a Marxist, he believed that the conflict was between social classes: the bourgeoisie that owned property and the proletariat that enabled profits. Lenin added to Marxism the proposal that the working class, though formed by capitalism and destined to seize its achievements, needed guidance from a disciplined party that understood the rules of history. In 1917, Lenin went so far as to claim that the people who knew the rules of history also knew when to break them—by beginning a socialist revolution in the Russian Empire, where capitalism was weak and the working class tiny. Yet Lenin never doubted that there was a good human nature, trapped by historical conditions, and therefore subject to release by historical action.

Marxists such as Lenin were atheists. They thought that by Spirit, Hegel meant God or some other theological notion, and replaced Spirit with society. Ilyin was not a typical Christian, but he believed in God. Ilyin agreed with Marxists that Hegel meant God, and argued that Hegel’s God had created a ruined world. For Marxists, private property served the function of an original sin, and its dissolution would release the good in man. For Ilyin, God’s act of creation was itself the original sin. There was never a good moment in history, and no intrinsic good in humans. The Marxists were right to hate the middle classes, and indeed did not hate them enough. Middle-class “civil society” entrenches plural interests that confound hopes for an “overpowering national organization” that God needs. Because the middle classes block God, they must be swept away by a classless national community. But there is no historical tendency, no historical group, that will perform this labor. The grand transformation from Satanic individuality to divine totality must begin somewhere beyond history.

According to Ilyin, liberation would arise not from understanding history, but from eliminating it. Since the earthly was corrupt and the divine unattainable, political rescue would come from the realm of fiction. In 1917, Ilyin was still hopeful that Russia might become a state ruled by law. Lenin’s revolution ensured that Ilyin henceforth regarded his own philosophical ideas as political. Bolshevism had proven that God’s world was as flawed as Ilyin had maintained. What Ilyin would call the new regime’s “abyss of atheism” was the final confirmation of the flaws of the world, and of the power of modern ideas to reinforce them.

After he departed Russia, Ilyin would maintain that humanity needed heroes, outsized characters from beyond history, capable of willing themselves to power. In his dissertation, this politics was implicit in the longing for a missing totality and the suggestion that the nation might begin its restoration. It was an ideology awaiting a form and a name.


Ilyin left Russia in 1922, the year the Soviet Union was founded. His imagination was soon captured by Benito Mussolini’s March on Rome, the coup d’état that brought the world’s first fascist regime. Ilyin was convinced that bold gestures by bold men could begin to undo the flawed character of existence. He visited Italy and published admiring articles about Il Duce while he was writing his book, On the Use of Violence to Resist Evil (1925). If Ilyin’s dissertation had laid the groundwork for a metaphysical defense of fascism, this book was a justification of an emerging system. The dissertation described the lost totality unleashed by an unwitting God; the second book explained the limits of the teachings of God’s Son. Having understood the trauma of God, Ilyin now “looked Satan in the eye.”

Thus famous teachings of Jesus, as rendered in the Gospel of Matthew, take on unexpected meanings in Ilyin’s interpretations. “Judge not,” says Jesus, “that ye be not judged.” That famous appeal to reflection continues:

For with what judgment ye judge, ye shall be judged: and with what measure ye mete, it shall be measured to you again. And why beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye? Or how wilt thou say to thy brother, Let me pull out the mote out of thine eye; and, behold, a beam is in thine own eye? Thou hypocrite, first cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother’s eye.

For Ilyin, these were the words of a failed God with a doomed Son. In fact, a righteous man did not reflect upon his own deeds or attempt to see the perspective of another; he contemplated, recognized absolute good and evil, and named the enemies to be destroyed. The proper interpretation of the “judge not” passage was that every day was judgment day, and that men would be judged for not killing God’s enemies when they had the chance. In God’s absence, Ilyin determined who those enemies were.

Perhaps Jesus’ most remembered commandment is to love one’s enemy, from the Gospel of Matthew: “Ye have heard that it hath been said, An eye for an eye, and a tooth for a tooth: But I say unto you, That ye resist not evil: but whosoever shall smite thee on thy right cheek, turn to him the other also.” Ilyin maintained that the opposite was meant. Properly understood, love meant totality. It did not matter whether one individual tried to love another individual. The individual only loved if he was totally subsumed in the community. To be immersed in such love was to struggle “against the enemies of the divine order on earth.” Christianity actually meant the call of the right-seeing philosopher to apply decisive violence in the name of love. Anyone who failed to accept this logic was himself an agent of Satan: “He who opposes the chivalrous struggle against the devil is himself the devil.”

Thus theology becomes politics. The democracies did not oppose Bolshevism, but enabled it, and must be destroyed. The only way to prevent the spread of evil was to crush the middle classes, eradicate their civil society, and transform their individualist and universalist understanding of law into a consciousness of national submission. Bolshevism was no antidote to the disease of the middle classes, but rather the full irruption of their disease. Soviet and European governments must be swept away by violent coups d’état.

Ilyin used the word Spirit (Dukh) to describe the inspiration of fascists. The fascist seizure of power, he wrote, was an “act of salvation.” The fascist is the true redeemer, since he grasps that it is the enemy who must be sacrificed. Ilyin took from Mussolini the concept of a “chivalrous sacrifice” that fascists make in the blood of others. (Speaking of the Holocaust in 1943, Heinrich Himmler would praise his SS-men in just these terms.)

Ilyin understood his role as a Russian intellectual as the propagation of fascist ideas in a particular Russian idiom. In a poem in the first number of a journal he edited between 1927 and 1930, he provided the appropriate lapidary motto: “My prayer is like a sword. And my sword is like a prayer.” Ilyin dedicated his huge 1925 book On the Use of Violence to Resist Evil to the Whites, the men who had resisted the Bolshevik Revolution. It was meant as a guide to their future.

What seemed to trouble Ilyin most was that Italians and not Russians had invented fascism: “Why did the Italians succeed where we failed?” Writing of the future of Russian fascism in 1927, he tried to establish Russian primacy by considering the White resistance to the Bolsheviks as the pre-history of the fascist movement as a whole. The White movement had also been “deeper and broader” than fascism because it had preserved a connection to religion and the need for totality. Ilyin proclaimed to “my White brothers, the fascists” that a minority must seize power in Russia. The time would come. The “White Spirit” was eternal.

Ilyin’s proclamation of a fascist future for Russia in the 1920s was the absolute negation of his hopes in the 1910s that Russia might become a rule-of-law state. “The fact of the matter,” wrote Ilyin, “is that fascism is a redemptive excess of patriotic arbitrariness.” Arbitrariness (proizvol), a central concept in all modern Russian political discussions, was the bugbear of all Russian reformers seeking improvement through law. Now proizvol was patriotic. The word for “redemptive” (spasytelnii) is another central Russian concept. It is the adjective Russian Orthodox Christians might apply to the sacrifice of Christ on Calvary, the death of the One for the salvation of the many. Ilyin used it to mean the murder of outsiders so that the nation could undertake a project of total politics that might later redeem a lost God.

In one sentence, two universal concepts, law and Christianity, are undone. A spirit of lawlessness replaces the spirit of the law; a spirit of murder replaces a spirit of mercy.


Although Ilyin was inspired by fascist Italy, his home as a political refugee between 1922 and 1938 was Germany. As an employee of the Russian Scholarly Institute (Russisches Wissenschaftliches Institut), he was an academic civil servant. It was from Berlin that he observed the succession struggle after Lenin’s death that brought Joseph Stalin to power. He then followed Stalin’s attempt to transform the political victory of the Bolsheviks into a social revolution. In 1933, Ilyin published a long book, in German, on the famine brought by the collectivization of Soviet agriculture.

Writing in Russian for Russian émigrés, Ilyin was quick to praise Hitler’s seizure of power in 1933. Hitler did well, in Ilyin’s opinion, to have the rule of law suspended after the Reichstag Fire of February 1933. Ilyin presented Hitler, like Mussolini, as a Leader from beyond history whose mission was entirely defensive. “A reaction to Bolshevism had to come,” wrote Ilyin, “and it came.” European civilization had been sentenced to death, but “so long as Mussolini is leading Italy and Hitler is leading Germany, European culture has a stay of execution.” Nazis embodied a “Spirit” (Dukh) that Russians must share.

According to Ilyin, Nazis were right to boycott Jewish businesses and blame Jews as a collectivity for the evils that had befallen Germany. Above all, Ilyin wanted to persuade Russians and other Europeans that Hitler was right to treat Jews as agents of Bolshevism. This “Judeobolshevik” idea, as Ilyin understood, was the ideological connection between the Whites and the Nazis. The claim that Jews were Bolsheviks and Bolsheviks were Jews was White propaganda during the Russian Civil War. Of course, most communists were not Jews, and the overwhelming majority of Jews had nothing to do with communism. The conflation of the two groups was not an error or an exaggeration, but rather a transformation of traditional religious prejudices into instruments of national unity. Judeobolshevism appealed to the superstitious belief of Orthodox Christian peasants that Jews guarded the border between the realms of good and evil. It shifted this conviction to modern politics, portraying revolution as hell and Jews as its gatekeepers. As in Ilyin’s philosophy, God was weak, Satan was dominant, and the weapons of hell were modern ideas in the world.

During and after the Russian Civil War, some of the Whites had fled to Germany as refugees. Some brought with them the foundational text of modern antisemitism, the fictional “Protocols of the Elders of Zion,” and many others the conviction that a global Jewish conspiracy was responsible for their defeat. White Judeobolshevism, arriving in Germany in 1919 and 1920, completed the education of Adolf Hitler as an antisemite. Until that moment, Hitler had presented the enemy of Germany as Jewish capitalism. Once convinced that Jews were responsible for both capitalism and communism, Hitler could take the final step and conclude, as he did in Mein Kampf, that Jews were the source of all ideas that threatened the German people. In this important respect, Hitler was indeed a pupil of the Russian White movement. Ilyin, the main White ideologist, wanted the world to know that Hitler was right.

As the 1930s passed, Ilyin began to doubt that Nazi Germany was advancing the cause of Russian fascism. This was natural, since Hitler regarded Russians as subhumans, and Germany supported European fascists only insofar as they were useful to the specific Nazi cause. Ilyin began to caution Russian Whites about Nazis, and came under suspicion from the German government. He lost his job and, in 1938, left Germany for Switzerland. He remained faithful, however, to his conviction that the White movement was anterior to Italian fascism and German National Socialism. In time, Russians would demonstrate a superior fascism.


From a safe Swiss vantage point near Zurich, Ilyin observed the outbreak of World War II. It was a confusing moment for both communists and their enemies, since the conflict began after the Soviet Union and Nazi Germany reached an agreement known as the Molotov-Ribbentrop Pact. Its secret protocol, which divided East European territories between the two powers, was an alliance in all but name. In September 1939, both Nazi Germany and the Soviet Union invaded Poland, their armies meeting in the middle. Ilyin believed that the Nazi-Soviet alliance would not last, since Stalin would betray Hitler. In 1941, the reverse took place, as the Wehrmacht invaded the Soviet Union. Though Ilyin harbored reservations about the Nazis, he wrote of the German invasion of the USSR as a “judgment on Bolshevism.” After the Soviet victory at Stalingrad in February 1943, when it became clear that Germany would likely lose the war, Ilyin changed his position again. Then, and in the years to follow, he would present the war as one of a series of Western attacks on Russian virtue.

Russian innocence was becoming one of Ilyin’s great themes. As a concept, it completed Ilyin’s fascist theory: the world was corrupt; it needed redemption from a nation capable of total politics; that nation was unsoiled Russia. As he aged, Ilyin dwelled on the Russian past, not as history, but as a cyclical myth of native virtue defended from external penetration. Russia was an immaculate empire, always under attack from all sides. A small territory around Moscow became the Russian Empire, the largest country of all time, without ever attacking anyone. Even as it expanded, Russia was the victim, because Europeans did not understand the profound virtue it was defending by taking more land. In Ilyin’s words, Russia had been subject to an unceasing “continental blockade,” and so its entire past was one of “self-defense.” And so, “the Russian nation, since its full conversion to Christianity, can count nearly one thousand years of historical suffering.”

Although Ilyin wrote hundreds of tedious pages along these lines, he also made clear that it did not matter what had actually happened or what Russians actually did. That was meaningless history, those were mere facts. The truth about a nation, wrote Ilyin, was “pure and objective” regardless of the evidence, and the Russian truth was invisible and ineffable Godliness. Russia was not a country with individuals and institutions, even should it so appear, but an immortal living creature. “Russia is an organism of nature and the soul,” it was a “living organism,” a “living organic unity,” and so on. Ilyin wrote of “Ukrainians” within quotation marks, since in his view they were a part of the Russian organism. Ilyin was obsessed by the fear that people in the West would not understand this, and saw any mention of Ukraine as an attack on Russia. Because Russia is an organism, it “cannot be divided, only dissected.”

Ilyin’s conception of Russia’s political return to God required the abandonment not only of individuality and plurality, but also of humanity. The fascist language of organic unity, discredited by the war, remained central to Ilyin. In general, his thinking was not really altered by the war. He did not reject fascism, as did most of its prewar advocates, although he now did distinguish between what he regarded as better and worse forms of fascism. He did not partake in the general shift of European politics to the left, nor in the rehabilitation of democracy. Perhaps most importantly, he did not recognize that the age of European colonialism was passing. He saw Franco’s Spain and Salazar’s Portugal, then far-flung empires ruled by right-wing authoritarian regimes, as exemplary.

World War II was not a “judgment on Bolshevism,” as Ilyin had imagined in 1941. Instead, the Red Army had emerged triumphant in 1945, Soviet borders had been extended west, and a new outer empire of replicate regimes had been established in Eastern Europe. The simple passage of time made it impossible to imagine in the 1940s, as Ilyin had in the 1920s, that the members of the White emigration might someday return to power in Russia. Now he was writing their eulogies rather than their ideologies. What was needed instead was a blueprint for a post-Soviet Russia that would be legible in the future. Ilyin set about composing a number of constitutional proposals, as well as a shorter set of political essays. These last, published as Our Tasks (Nashi zadachi), began his intellectual revival in post-Soviet Russia.

These postwar recommendations bear an unmistakable resemblance to prewar fascist systems, and are consistent with the metaphysical and ethical legitimations of fascism present in Ilyin’s major works. The “national dictator,” predicted Ilyin, would spring from somewhere beyond history, from some fictional realm. This Leader (Gosudar’) must be “sufficiently manly,” like Mussolini. The note of fragile masculinity is hard to overlook. “Power comes all by itself,” declared Ilyin, “to the strong man.” People would bow before “the living organ of Russia.” The Leader “hardens himself in just and manly service.”

In Ilyin’s scheme, this Leader would be personally and totally responsible for every aspect of political life, as chief executive, chief legislator, chief justice, and commander of the military. His executive power would be unlimited. Any “political selection” should take place “on a formally undemocratic basis.” Democratic elections institutionalized the evil notion of individuality. “The principle of democracy,” Ilyin wrote, “was the irresponsible human atom.” Counting votes was to falsely accept “the mechanical and arithmetical understanding of politics.” It followed that “we must reject blind faith in the number of votes and its political significance.” Public voting with signed ballots would allow Russians to surrender their individuality. Elections were a ritual of submission of Russians before their Leader.

The problem with prewar fascism, according to Ilyin, had been the one-party state. That was one party too many. Russia should be a zero-party state, in that no party should control the state or exercise any influence on the course of events. A party represents only a segment of society, and segmentation is what is to be avoided. Parties can exist, but only as traps for the ambitious or as elements of the ritual of electoral subservience. (Members of Putin’s party were sent the article that makes this point in 2014.) The same goes for civil society: it should exist as a simulacrum. Russians should be allowed to pursue hobbies and the like, but only within the framework of a total corporate structure that included all social organizations. The middle classes must be at the very bottom of the corporate structure, bearing the weight of the entire system. They are the producers and consumers of facts and feelings in a system where the purpose is to overcome factuality and sensuality.

“Freedom for Russia,” as Ilyin understood it (in a text selectively quoted by Putin in 2014), would not mean freedom for Russians as individuals, but rather freedom for Russians to understand themselves as parts of a whole. The political system must generate, as Ilyin clarified, “the organic-spiritual unity of the government with the people, and the people with the government.” The first step back toward the Word would be “the metaphysical identity of all people of the same nation.” The “evil nature of the ‘sensual’” could be banished, and “the empirical variety of human beings” itself could be overcome.


Russia today is a media-heavy authoritarian kleptocracy, not the religious totalitarian entity that Ilyin imagined. And yet, his concepts do help lift the obscurity from some of the more interesting aspects of Russian politics. Vladimir Putin, to take a very important example, is a post-Soviet politician who emerged from the realm of fiction. Since it is he who brought Ilyin’s ideas into high politics, his rise to power is part of Ilyin’s story as well.

Putin was an unknown when he was selected by post-Soviet Russia’s first president, Boris Yeltsin, to be prime minister in 1999. Putin was chosen by political casting call. Yeltsin’s intimates, carrying out what they called “Operation Successor,” asked themselves who the most popular character on Russian television was. Polling showed that this was the hero of a 1970s program, a Soviet spy who spoke German. This fit Putin, a former KGB officer who had served in East Germany. Right after his appointment in September 1999, Putin gained his reputation through a bloodier fiction. When apartment buildings in Russian cities began to explode, Putin blamed Muslims and began a war in Chechnya. Contemporary evidence suggests that the bombs might have been planted by Russia’s own security organization, the FSB. Putin was elected president in 2000, and served until 2008.

In the early 2000s, Putin maintained that Russia could become some kind of rule-of-law state. Instead, he succeeded in bringing economic crime within the Russian state, transforming general corruption into official kleptocracy. Once the state became the center of crime, the rule of law became incoherent, inequality entrenched, and reform unthinkable. Another political story was needed. Because Putin’s victory over Russia’s oligarchs also meant control over their television stations, new media instruments were at hand. The Western trend towards infotainment was brought to its logical conclusion in Russia, creating an alternative reality meant to generate faith in Russian virtue but cynicism about facts. This transformation was engineered by Vladislav Surkov, the genius of Russian propaganda. He oversaw a striking move toward the world as Ilyin imagined it, a dark and confusing realm given shape only by Russian innocence. With the financial and media resources under control, Putin needed only, in the nice Russian term, to add the “spiritual resource.” And so, beginning in 2005, Putin began to rehabilitate Ilyin as a Kremlin court philosopher.

That year, Putin began to cite Ilyin in his addresses to the Federal Assembly of the Russian Federation, and arranged for the reinterment of Ilyin’s remains in Russia. Then Surkov began to cite Ilyin. The propagandist accepted Ilyin’s idea that “Russian culture is the contemplation of the whole,” and summarized his own work as the creation of a narrative of an innocent Russia surrounded by permanent hostility. Surkov’s enmity toward factuality is as deep as Ilyin’s, and like Ilyin, he tends to find theological grounds for it. Dmitry Medvedev, the leader of Putin’s political party, recommended Ilyin’s books to Russia’s youth. Ilyin began to figure in the speeches of the leaders of Russia’s tame opposition parties, the communists and the (confusingly named, extreme-right) Liberal Democrats. These last few years, Ilyin has been cited by the head of the constitutional court, by the foreign minister, and by patriarchs of the Russian Orthodox Church.

After a four-year intermission between 2008 and 2012, during which Putin served as prime minister and allowed Medvedev to be president, Putin returned to the highest office. If Putin came to power in 2000 as a hero from the realm of fiction, he returned in 2012 as the destroyer of the rule of law. In a minor key, the Russia of Putin’s time had repeated the drama of the Russia of Ilyin’s time. The hopes of Russian liberals for a rule-of-law state were again disappointed. Ilyin, who had transformed that failure into fascism the first time around, now had his moment. His arguments helped Putin transform the failure of his first period in office, the inability to introduce the rule of law, into the promise for a second period in office, the confirmation of Russian virtue. If Russia could not become a rule-of-law state, it would seek to destroy neighbors that had succeeded in doing so or that aspired to do so. Echoing one of the most notorious proclamations of the Nazi legal thinker Carl Schmitt, Ilyin wrote that politics “is the art of identifying and neutralizing the enemy.” In the second decade of the twenty-first century, Putin’s promises were not about law in Russia, but about the defeat of a hyper-legal neighboring entity.

The European Union, the largest economy in the world and Russia’s most important economic partner, is grounded on the assumption that international legal agreements provide the basis for fruitful cooperation among rule-of-law states. In late 2011 and early 2012, Putin made public a new ideology, rooted in Ilyin, that defined Russia in opposition to this model of Europe. In an article in Izvestiia on October 3, 2011, Putin announced a rival Eurasian Union that would unite states that had failed to establish the rule of law. In Nezavisimaia Gazeta on January 23, 2012, Putin, citing Ilyin, presented integration among states as a matter of virtue rather than achievement. The rule of law was not a universal aspiration, but part of an alien Western civilization; Russian culture, meanwhile, united Russia with post-Soviet states such as Ukraine. In a third article, in Moskovskie Novosti on February 27, 2012, Putin drew the political conclusions. Ilyin had imagined that “Russia as a spiritual organism served not only all the Orthodox nations and not only all of the nations of the Eurasian landmass, but all the nations of the world.” Putin predicted that Eurasia would overcome the European Union and bring its members into a larger entity that would extend “from Lisbon to Vladivostok.”

Putin’s offensive against the rule of law began with the manner of his reaccession to the office of president of the Russian Federation. The foundation of any rule-of-law state is a principle of succession, the set of rules that allow one person to succeed another in office in a manner that confirms rather than destroys the system. The way that Putin returned to power in 2012 destroyed any possibility that such a principle could function in Russia in any foreseeable future. He assumed the office of president, with a parliamentary majority, thanks to presidential and parliamentary elections that were ostentatiously faked, during protests whose participants he condemned as foreign agents.

In depriving Russia of any accepted means by which he might be succeeded by someone else, or by which the Russian parliament might be controlled by any party but his own, Putin was following Ilyin’s recommendation. Elections had become a ritual, and those who thought otherwise were portrayed by a formidable state media as traitors. Sitting in a radio station with the fascist writer Alexander Prokhanov as Russians protested electoral fraud, Putin mused about what Ivan Ilyin would have to say about the state of Russia. “Can we say,” asked Putin rhetorically, “that our country has fully recovered and healed after the dramatic events that have occurred to us after the Soviet Union collapsed, and that we now have a strong, healthy state? No, of course she is still quite ill; but here we must recall Ivan Ilyin: ‘Yes, our country is still sick, but we did not flee from the bed of our sick mother.’”

That Putin cited Ilyin in this setting is telling, and his knowledge of the phrase suggests extensive reading. Be that as it may, the way that he cited it seems strange. Ilyin was expelled from the Soviet Union by the Cheka—the institution that was the predecessor of Putin’s employer, the KGB. For Ilyin, it was the foundation of the USSR, not its dissolution, that was the Russian sickness. As Ilyin told his Cheka interrogator at the time: “I consider Soviet power to be an inevitable historical outcome of the great social and spiritual disease which has been growing in Russia for several centuries.” Ilyin thought that KGB officers (of whom Putin was one) should be forbidden from entering politics after the end of the Soviet Union. Ilyin dreamed his whole life of a Soviet collapse.

Putin’s reinterment of Ilyin’s remains was a mystical release from this contradiction. Ilyin had been expelled from Russia by the Soviet security service; his corpse was reburied alongside the remains of its victims. Putin had Ilyin’s corpse interred at a monastery where the NKVD, the heir to the Cheka and the predecessor of the KGB, had interred the ashes of thousands of Soviet citizens executed in the Great Terror. When Putin later visited the site to lay flowers on Ilyin’s grave, he was in the company of an Orthodox monk who saw the NKVD executioners as Russian patriots and therefore good men. At the time of the reburial, the head of the Russian Orthodox Church was a man who had previously served the KGB as an agent. After all, Ilyin’s justification for mass murder was the same as that of the Bolsheviks: the defense of an absolute good. As critics of his second book in the 1920s put it, Ilyin was a “Chekist for God.” He was reburied as such, with all possible honors conferred by the Chekists and by the men of God—and by the men of God who were Chekists, and by the Chekists who were men of God.

Ilyin was returned, body and soul, to the Russia he had been forced to leave. And that very return, in its inattention to contradiction, in its disregard of fact, was the purest expression of respect for his legacy. To be sure, Ilyin opposed the Soviet system. Yet, once the USSR ceased to exist in 1991, it was history—and the past, for Ilyin, was nothing but cognitive raw material for a literature of eternal virtue. Modifying Ilyin’s views about Russian innocence ever so slightly, Russian leaders could see the Soviet Union not as a foreign imposition upon Russia, as Ilyin had, but rather as Russia itself, and so virtuous despite appearances. Any faults of the Soviet system became necessary Russian reactions to the prior hostility of the West.


Questions about the influence of ideas in politics are very difficult to answer, and it would be needlessly bold to make of Ilyin’s writings the pillar of the Russian system. For one thing, Ilyin’s vast body of work admits multiple interpretations. As with Martin Heidegger, another student of Husserl who supported Hitler, it is reasonable to ask how closely a man’s political support of fascism relates to a philosopher’s work. Within Russia itself, Ilyin is not the only native source of fascist ideas to be cited with approval by Vladimir Putin; Lev Gumilev is another. Contemporary Russian fascists who now rove through the public space, such as Alexander Prokhanov and Alexander Dugin, represent distinct traditions. It is Dugin, for example, who made the idea of “Eurasia” popular in Russia, and his references are German Nazis and postwar West European fascists. And yet, most often in the Russia of the second decade of the twenty-first century, it is Ilyin’s ideas that seem to satisfy political needs and to fill rhetorical gaps, to provide the “spiritual resource” for the kleptocratic state machine. In 2017, when the Russian state had so much difficulty commemorating the centenary of the Bolshevik Revolution, Ilyin was advanced as its heroic opponent. In a television drama about the revolution, he decried the evil of promising social advancement to Russians.

Russian policies certainly recall Ilyin’s recommendations. Russia’s 2012 law on “foreign agents,” passed right after Putin’s return to the office of the presidency, well represents Ilyin’s attitude to civil society. Ilyin believed that Russia’s “White Spirit” should animate the fascists of Europe; since 2013, the Kremlin has provided financial and propaganda support to European parties of the populist and extreme right. The Russian campaign against the “decadence” of the European Union, initiated in 2013, is in accord with Ilyin’s worldview. Ilyin’s scholarly effort followed his personal projection of sexual anxiety onto others. First, Ilyin called Russia homosexual, then underwent therapy with his girlfriend, then blamed God. Putin first submitted to years of shirtless fur-and-feather photoshoots, then divorced his wife, then blamed the European Union for Russian homosexuality. Ilyin sexualized what he experienced as foreign threats. Jazz, for example, was a plot to induce premature ejaculation. When Ukrainians began in late 2013 to assemble in favor of a European future for their country, the Russian media raised the specter of a “homodictatorship.”

The case for Ilyin’s influence is perhaps easiest to make with respect to Russia’s new orientation toward Ukraine. Ukraine, like the Russian Federation, is a new country, formed from the territory of a Soviet republic in 1991. After Russia, it was the second-most populous republic of the Soviet Union, and it has a long border with Russia to the east and north as well as with European Union members to the west. For the first two decades after the dissolution of the Soviet Union, Russian-Ukrainian relations were defined by both sides according to international law, with Russian lawyers always insistent on very traditional concepts such as sovereignty and territorial integrity. When Putin returned to power in 2012, legalism gave way to colonialism. Since 2012, Russian policy toward Ukraine has been made on the basis of first principles, and those principles have been Ilyin’s. Putin’s Eurasian Union, a plan he announced with the help of Ilyin’s ideas, presupposed that Ukraine would join. Putin justified Russia’s attempt to draw Ukraine toward Eurasia by invoking Ilyin’s “organic model,” which made of Russia and Ukraine “one people.”

Ilyin’s idea of a Russian organism including Ukraine clashed with the more prosaic Ukrainian notion of reforming the Ukrainian state. In Ukraine in 2013, the European Union was a subject of domestic political debate, and was generally popular. An association agreement between Ukraine and the European Union was seen as a way to address the major local problem, the weakness of the rule of law. Through threats and promises, Putin was able in November 2013 to induce the Ukrainian president, Viktor Yanukovych, not to sign the association agreement, which had already been negotiated. This brought young Ukrainians to the street to demonstrate in favor of the agreement. When the Ukrainian government (urged on and assisted by Russia) used violence, hundreds of thousands of Ukrainian citizens assembled in Kyiv’s Independence Square. Their main postulate, as surveys showed at the time, was the rule of law. After a sniper massacre that left more than one hundred Ukrainians dead, Yanukovych fled to Russia. His main adviser, Paul Manafort, was next seen working as Donald Trump’s campaign manager.

By the time Yanukovych fled to Russia, Russian troops had already been mobilized for the invasion of Ukraine. As Russian troops entered Ukraine in February 2014, Russian civilizational rhetoric (of which Ilyin was a major source) captured the imagination of many Western observers. In the first half of 2014, the issues debated were whether Ukraine was part of Russian culture, and whether Russian myths about the past were somehow a reason to invade a neighboring sovereign state. In accepting the way that Ilyin put the question, as a matter of civilization rather than law, Western observers missed the stakes of the conflict for Europe and the United States. Considering the Russian invasion of Ukraine as a clash of cultures was to render it distant and colorful and obscure; seeing it as an element of a larger assault on the rule of law would have been to realize that Western institutions were in peril. To accept the civilizational framing was also to overlook the basic issue of inequality. What pro-European Ukrainians wanted was to avoid Russian-style kleptocracy. What Putin needed was to demonstrate that such efforts were fruitless.

Ilyin’s arguments were everywhere as Russian troops entered Ukraine multiple times in 2014. As soldiers received their mobilization orders for the invasion of Ukraine’s Crimean province in January 2014, all of Russia’s high-ranking bureaucrats and regional governors were sent a copy of Ilyin’s Our Tasks. After Russian troops occupied Crimea and the Russian parliament voted for annexation, Putin cited Ilyin again as justification. The Russian commander sent to oversee the second major movement of Russian troops into Ukraine, to the southeastern provinces of Donetsk and Luhansk in summer 2014, described the war’s final goal in terms that Ilyin would have understood: “If the world were saved from demonic constructions such as the United States, it would be easier for everyone to live. And one of these days it will happen.”

Anyone following Russian politics could see in early 2016 that the Russian elite preferred Donald Trump to become the Republican nominee for president and then to defeat Hillary Clinton in the general election. In the spring of that year, Russian military intelligence was boasting of an effort to help Trump win. In the Russian assault on American democracy that followed, the main weapon was falsehood. Donald Trump is another masculinity-challenged kleptocrat from the realm of fiction, in his case that of reality television. His campaign was helped by the elaborate untruths that Russia distributed about his opponent. In office, Trump imitates Putin in his pursuit of political post-truth: first filling the public sphere with lies, then blaming the institutions whose purpose is to seek facts, and finally rejoicing in the resulting confusion. Russian assistance to Trump weakened American trust in the institutions that Russia has been unable to build. Such trust was already in decline, thanks to America’s own media culture and growing inequality.

Ilyin meant to be the prophet of our age, the post-Soviet age, and perhaps he is. His disbelief in this world allows politics to take place in a fictional one. He made of lawlessness a virtue so pure as to be invisible, and so absolute as to demand the destruction of the West. He shows us how fragile masculinity generates enemies, how perverted Christianity rejects Jesus, how economic inequality imitates innocence, and how fascist ideas flow into the postmodern. This is no longer just Russian philosophy. It is now American life.


Why Irish America Is Not Evergreen

Roy Rochlin/Getty ImagesSpectators watching the St. Patrick’s Day Parade, New York City, March 17, 2016

I got my first break as a writer in an Irish pub in Manhattan. I was fresh off the boat from Ireland and trying to stay afloat with a waitressing job at a place called Mustang Sally’s. A few weeks into my tenure, two fellow immigrants stopped in for dinner—the film directors Jim Sheridan and Terry George. (At the time, Sheridan had begun writing In America, a biopic about his family’s immigrant experience in New York.) They were kind enough not to mind that I messed up their order in my rush to sell my rather slim credentials as an aspiring filmmaker. A week later, George helped me get an internship on an Irish-American TV show. A few years later, I was working for him as a writer on a TV series.

So goes the life of the Irish in America. Trying to hook up new arrivals with apartments or jobs or career opportunities is how things work in immigrant communities around the world. In New York, where the historical links to Ireland run deep, the networks in the immigrant community I became part of were particularly robust. Starting in the 1820s, successive generations of Irish people have flooded into this city, each building on the efforts of those who came before. I didn’t realize when I got here at the end of the 1990s, however, that thanks to multiple failed attempts at immigration reform, the conveyor belt would more or less stop with my generation. What that means for Irish-American identity in general, and the New York Irish in particular, is becoming a pressing issue.

On this St. Patrick’s Day, as rivers are dyed green and blarney infests the airwaves, one could be fooled into thinking that the Irish-American community is as robust as ever. But a series of changes to US immigration rules has largely closed the door to new entries, leading inexorably to a “graying” of Irish America. This began with the 1965 Immigration Act, which ended the quota system that had benefited mostly white Europeans in favor of a fairer family reunification policy that helped boost immigration from developing nations. Most would-be Irish immigrants did not have close enough relatives already in the US to petition for them. But if legal immigration was blocked, there were still many willing to take their chances by overstaying a student or tourist visa. The recession that paralyzed Ireland’s economy in the 1980s propelled around 150,000 undocumented Irish to America. Many landed in New York.

For some, the gamble paid off. An extraordinary effort led by the grassroots Irish Immigration Reform Movement (IIRM) to “Legalize the Irish” ensued, and through sheer determination and a bit of luck, it prevailed. By the mid-1980s, there were already 2–3 million unauthorized immigrants from various countries living in the US. In a bipartisan bid to deal with the growing crisis, Congress passed the 1986 Immigration Reform and Control Act, which legalized any immigrant of “good moral character” who had been living in the US continuously since 1982.

Most of the young Irish had arrived too late to take advantage of this amnesty, but with the help of a few Democratic politicians—Brian J. Donnelly, Howard L. Berman, and later, Bruce Morrison—they became eligible to apply for visas on a first-come, first-served basis in what eventually became the Green Card lottery. Of the first 40,000 visas made available to all of the countries “adversely affected by the 1965 Act,” the Irish won 40 percent. Paul Finnegan, the executive director of the New York Irish Center, who was part of the IIRM reform effort, recalled attending Donnelly visa parties at which volunteers would help undocumented hopefuls each fill in hundreds of applications (to increase their chances) and then charter buses so the forms could be delivered as close as possible to the central processing center.

Despite the success of the Donnelly drive, the 16,000 or so visas awarded to Irish applicants came nowhere near fulfilling demand. Another lottery sponsored by Berman followed in 1989, and then came the Morrison program, which allocated 50,000 visas for the Irish to be awarded over three different years in the 1990s. “By that point,” Finnegan told me, “anyone in the Irish community who was engaged would have gotten legalized.”

I was still in school in Dublin when all of this was going on, but like many others, when I eventually learned that getting a Green Card was suddenly as easy as applying for a driver’s license, I threw my hat in the ring. Over the course of ten years or so, upwards of 70,000 Irish people were able to settle legally in the US.

But the bonanza was short-lived. Since the 1990s, immigration from Ireland has all but dried up—though the demand for visas remains high. Over the past decade, more than 11,000 Irish hopefuls have entered the Green Card lottery each year, but on average only a pitiful 150 or so have been successful. With replenishment rates this low, it should not be surprising that many Irish-Americans are anxious about the future of the community.

When you drop tens of thousands of young, enthusiastic people into a city like New York, where there’s an abundance of opportunity and an established community to help secure access to it, things happen. Next year, a new Irish Arts Center complex costing more than $50 million will open in Hell’s Kitchen, largely thanks to the efforts of Pauline Turley, a Morrison visa recipient. Every year, there is now a month-long Irish Theatre festival, thanks to George Heslin, the founder and director of Origin Theater, who also came to New York on a Morrison visa. These and other cultural efforts are supported, in turn, by generous donations from Irish people who did well in business. In 2016 alone, the Ireland Funds America raised more than $15 million from donors “linked to Ireland by interest, ancestry, and compassion” for Irish community projects worldwide. 

One could attend competing Irish cultural events every night of the week in New York. I’ve always found it comforting to know that if I show up at an affair at the Consulate or NYU’s Ireland House, I’ll know a good portion of the people there. But after drinking from the same trough for twenty years, we could use some fresh-faced twenty-year-olds to keep us energized. Not to write ourselves off prematurely, but the youngest of the Morrison/Donnelly visa recipients have hit forty by now. Heslin told me recently that there are just two Irish-born actors under the age of thirty who can legally work in New York and that “the scarcity of young Irish artists living in America at this time is having a real impact on how we tell our cultural story.”

This is not to suggest that the community is in danger of going extinct—not with some 34.5 million people who can claim Irish heritage living in the US. But without the influx of more recent, younger immigrants, there is a noticeable disjunction between the Irish of Ireland, who are increasingly tolerant and open in their social attitudes, and the Irish-American community, which is leaning more conservative. Gay organizations were banned from participating in the official St. Patrick’s Day parade on 5th Avenue in New York until as recently as 2015—which was the same year that Ireland became the first country in the world to legalize same-sex marriage by popular vote. A majority of Irish-American voters, who had narrowly favored Barack Obama in 2012, broke for Donald Trump in 2016 (as did most white Catholic voters), while last year Leo Varadkar, a young gay man of Indian descent, was elected as Taoiseach, or prime minister, in Ireland. Several of Trump’s closest advisers (Steve Bannon, Kellyanne Conway, John Kelly) and his loudest media cheerleaders (Sean Hannity, Bill O’Reilly) have Irish surnames—and theirs have also been some of the most hard-line voices against immigration under the Trump administration.

To be sure, this is far from the whole story of Irish-American politics. In my community in New York, a new grassroots movement called Irish Stand was set up last year by Irish Senator Aodhán Ó Ríordáin, with the help of the writer Lisa Tierney-Keogh and others, to defend the civil rights of immigrants and refugees in response to Trump’s election. (The organization’s inaugural event, held at Riverside Church in Manhattan last March, was attended by nearly 3,000 people.) This divergence in Irish and Irish-American attitudes is part of why Niall O’Dowd, the founder of a digital news site, who has been lobbying for immigration reform since he came here as an undocumented immigrant in the 1980s, told me recently that “maintaining that first-generation connection has never been more important.”

In 2005, O’Dowd and a Manhattan bar owner named Ciaran Staunton founded the Irish Lobby for Immigration Reform (ILIR) to fight for legalization of all undocumented immigrants and to allow low-skilled workers to come to the US. Working with Latino and other groups, the ILIR lobbied hard for various reform efforts culminating in the bipartisan 2013 Immigration Act that passed in the Senate but not in the House. Following that last failed attempt, the campaigners have all but given up on comprehensive immigration reform, which, O’Dowd says, “took a lot of time and energy and went nowhere.” The Irish lobby sees its best hope now in securing a deal that would provide a significant allotment of E3 visas along the lines of the recently established Australian program—that is, specifically for graduates doing professional work—and in making the best possible use of existing programs such as the J1, H1B, and L1 visas, which are designed respectively for students, technicians and specialists, and foreign workers on inter-company transfers.

That plan will cut a lot of young aspirants adrift. I recently met a man in his twenties from Dublin waiting tables at an Irish bar in Queens. Reminding me of myself twenty years ago, he told me he wanted to get into the film business in New York. But he had already renewed his student work permit the maximum number of times, and has to go home next month.

In any case, all of these visas are really only temporary work passes—not intended to lead to permanent residency; nor are they easily converted into Green Cards. And none of these options are accessible to unskilled workers, arguably those most in need of opportunity, which means that not all of the undocumented Irish living in the US today—estimated variously to number between 10,000 and 50,000—would even qualify for legalization were a deal eventually reached. This community has not been targeted for raids in the same aggressive way as other undocumented groups, but it has not been entirely spared the stepped-up enforcement measures in place since Trump’s election either. The Irish Times recently reported a sharp increase in the number of Irish being detained in the Boston area, and an increase in the number of deportations from twenty-six last year to thirty-four. This is a drop in the ocean compared to the hundreds of thousands of Latinos who are deported yearly, but it makes it impossible for undocumented Irish immigrants to campaign openly for reform, as their predecessors in the 1980s did.

White privilege is clearly at play here, and shelving the fight for comprehensive reform while pushing for a special deal can only lead to finger-pointing at the Irish lobby on that score. The irony, of course, is that the diversity visa program, which benefits immigrants from all over the world, arose in large part out of Irish lobbying efforts. But now that President Trump has called for the scrapping of that program to reduce immigration from what he referred to as “shithole countries,” playing the history card of Irish-American ties to secure visas for people who happen to be mostly white and educated would be even more awkward.   

Yet that is the predicament in which those of us who came of age at a more benign time and are lucky enough to be here now find ourselves. With rising hostility toward even legal immigrants, we’re obliged to fall back on fighting for a deal that would provide a small number of special visas for the most privileged—simply in hopes of ensuring a minimal “future flow” to sustain the Irish-American community here. If there ever is such a deal, it will be the perfect circling of the US immigration story: we will have gone from “No Irish Need Apply” to only-certain-well-educated-and-therefore-already-advantaged-Irish can apply. I prefer the version Jim Sheridan gave in In America—the one in which an immigrant shows up here, struggles, has transformative experiences, struggles more, then gives back.


Beware the Big Five

Jaap Arriens/NurPhoto/Getty ImagesFacebook founder Mark Zuckerberg pictured on an iPhone, August 2017

The big Silicon Valley technology companies have long been viewed by much of the American public as astonishingly successful capitalist enterprises operated by maverick geniuses. The largest among them—Microsoft, Apple, Facebook, Amazon, and Google (the so-called Big Five)—were founded by youthful and charismatic male visionaries with signature casual wardrobes: the open-necked blue shirt, the black polo-neck, the marled gray T-shirt and hoodie. These founders have won immense public trust in their emergent technologies, from home computing to social media to the new frontier, artificial intelligence. Their companies have seemed to grow organically within the flourishing ecology of the open Internet.

Within the US government, the same Silicon Valley companies have been considered an essential national security asset. Government investment and policy over the last few decades have reflected an unequivocal confidence in them. In return, they have at times cooperated with intelligence agencies and the military. During these years there has been a constant, quiet hum of public debate about the need to maintain a balance between security and privacy in this alliance, but even after the Snowden leaks that hum never rose to a great commotion.

The Big Five have at their disposal immense troves of personal data on their users, the most sophisticated tools of persuasion humans have ever devised, and few mechanisms for establishing the credibility of the information they distribute. The domestic use of their resources for political influence has received much attention from journalists but raised few concerns among policymakers and campaign officials. Both the Republicans and the Democrats have, in the last few election cycles, employed increasingly intricate data analytics to target voters.

Private organizations, too, have exploited these online resources to influence campaigns: the Koch brothers’ data firm, i360, whose funding rivals that of both parties, has spent years developing detailed portraits of 250 million Americans and refining its capacities for influence operations through “message testing” to determine what kinds of advertisements will have traction with a given audience. It employs “mobile ID matching,” which can link users to all of their devices—unlike cookies, which are restricted to one device—and it has conducted extensive demographic research over social media. Google’s DoubleClick and Facebook are listed as i360’s featured partners for digital marketing. The firm aims to have developed a comprehensive strategy for influencing voters by the time of the 2018 elections.

Only in recent months, with the news of the Russian hacks and trolls, have Americans begun to wonder whether the platforms they previously assumed to have facilitated free inquiry and communication are being used to manipulate them. The fact that Google, Facebook, and Twitter were successfully hijacked by Russian trolls and bots (fake accounts disguised as genuine users) to distribute disinformation intended to affect the US presidential election has finally raised questions in the public mind about whether these companies might compromise national security.

Cyberwarfare can be waged in many different ways. There are DDoS (distributed denial of service) attacks, by which a system is flooded with superfluous traffic to disrupt its intended function. The largest DDoS attack to date was the work of the Mirai botnet (a botnet is created by hacking a system of interconnected devices so they can be controlled by a third party), which in October 2016 attacked a company called Dyn that manages a significant part of the Internet’s infrastructure. It temporarily brought down much of the Internet in the US. There are also hacks designed to steal and leak sensitive materials, such as the Sony hack attributed to North Korea or the hacking of the DNC’s e-mail servers during the 2016 election. And there are attacks that damage essential devices linked to the Internet, including computing systems for transportation, telecommunications, and power plants. This type of attack is increasingly being viewed as a grave threat to a country’s infrastructure.

The military once used the term “information warfare” to refer to any cyberattack or military operation that targeted a country’s information or telecommunications systems. But the phrase has come to have a more specific meaning: the exploitation of information technology for the purposes of propaganda, disinformation, and psychological operations. The US is just now beginning to confront its vulnerability to this potentially devastating kind of cyberattack.

This is the subject of Alexander Klimburg’s prescient and important book, The Darkening Web: The War for Cyberspace, written largely before the revelation of Russian interference in the 2016 election. With its unparalleled reach and targeting, Klimburg argues, the Internet has exacerbated the risks of information warfare. Algorithms employed by a few large companies determine the results of our web searches, the posts and news stories that are featured in our social media feeds, and the advertisements to which we are exposed with a frequency greater than in any previous form of media. When disinformation or misleading information is fed into this machinery, it may have vast intended and unintended effects.

Facebook estimated that 11.4 million Americans saw advertisements that had been bought by Russians in an attempt to sway the 2016 election in favor of Donald Trump. Google found similar ads on its own platforms, including YouTube and Gmail. A further 126 million people, Facebook disclosed, were exposed to free posts by Russia-backed Facebook groups. Approximately 1.4 million Twitter users received notifications that they might have been exposed to Russian propaganda, but this probably understates the reach of the propaganda spread on that platform. Just one of the flagged Russian accounts, using the name @Jenn_Abrams (a supposed American girl), was quoted in almost every mainstream news outlet. All these developments—along with the continued rapid dissemination of false news stories online after the 2016 election, reports by Gallup that many Americans no longer trust the mainstream news media, and a president who regularly tweets unfounded allegations of “fake news”—have vindicated Klimburg’s fears.*

Klimburg argues that liberal democracies, whose citizens must have faith in their governments and in one another, are particularly vulnerable to damage by information warfare of this kind. And the United States, he observes, is currently working with an extremely shallow reservoir of faith. He cites Gallup polls conducted prior to the election of Donald Trump in which 36 percent of respondents said they had confidence in the office of the presidency and only 6 percent in Congress. We have no reason to believe that these numbers have subsequently increased. The civic trust that shores up America’s republican political institutions is fragile.

Klimburg gives a fascinating diagnosis of how this situation has been inflamed. He describes a growing tension in the US over the last twenty years, coming to a head under Obama, between the perception of the Internet and its reality. The Silicon Valley corporations have attained their global reach and public trust by promoting the Internet as a medium for the free exchange of information and ideas, independent of any single state’s authority. Since almost all trade in and out of the US now relies on the information transfers that these Silicon Valley companies facilitate, this perception of independence is economically essential. The country’s largest trading relationship, with the European Union, is governed by the Privacy Shield agreement, which assures EU companies that data transfers will be secured against interference and surveillance.

Obama’s International Strategy for Cyberspace, released on May 16, 2011, described the Internet as a democratic, self-organizing community, where “the norms of responsible, just and peaceful conduct among states and people have begun to take hold.” When Edward Snowden’s revelations about NSA surveillance and the collection of metadata threatened to compromise this agreement, Obama issued Presidential Policy Directive 28, which set out principles for “signals intelligence activities” compatible with a “commitment to an open, interoperable, and secure global Internet.”

Martin Libicki, a researcher at the RAND Corporation, the global policy think tank, has had an important part in restraining offensive initiatives at the Department of Defense. His aim is to restrict America’s capabilities to what is required for defense against cyberattacks. Klimburg himself adheres closely to Libicki’s general view, expressed in several RAND reports, that the US needs to maintain a perception of itself as one of the “free Internet advocates”—in contrast to “cyber-sovereignty adherents” such as Russia and China, which aim above all to control cyberspace and its influence over their citizens.

But Klimburg’s book warns us that the facts too frequently contradict this view. In his account, America’s military and intelligence agencies have always considered cyberspace a site of potential conflict and sought global dominance over it. Throughout the 1990s, the US military had intensive discussions about the various ways in which these new technologies might be applied to traditional forms of warfare. They were particularly concerned with psychological warfare, which might be used, for example, to weaken an enemy army’s resolve to fight or to bring down national leaders by eroding their popular support.

Only a year before the release of Obama’s International Strategy for Cyberspace, Russia’s Kaspersky Lab had discovered the Stuxnet virus, a malicious worm originally built as a cyber-weapon by the US and Israel. It was intended to disrupt Iran’s nuclear program (by infecting the control systems used to operate its centrifuges, causing them to malfunction and explode), but subsequently spread across the globe. This attack, along with Obama’s establishment of US Cyber Command alongside the National Security Agency in 2009, signaled to other states that the US intended to use the Internet for offensive purposes.

What concerns Klimburg most, though, is the extent to which US government agencies are prepared and willing to mislead the American people about their own cyber initiatives. Such disinformation creates exactly the kind of confusion that liberal states vulnerable to psychological and information warfare urgently need to avoid. This sort of deceit is now a crucial aspect of US policy and defense strategy. Klimburg suggests, for example, that the details about America’s extraordinary intelligence-gathering programs, which Bob Woodward disclosed in his book Obama’s Wars (2010), had been deliberately leaked to him as a warning to adversaries—an attempt on the government’s part to impress the extent of US cyber power upon the rest of the world.

At the same time, other government agencies have sought to maintain a view, both domestically and internationally, of the Internet as a domain of cooperation, not conflict. The language employed in official cyber strategy documents, Klimburg tells us, is deliberately obfuscatory. The 2015 Defense Department statement of its cyber-strategy used terminology such as “Offensive Cyber Effects Operations” but gave no indication of what that term included or excluded. Fred Kaplan, in his book Dark Territory: The Secret History of Cyber War (2016), has also claimed that even in the early days of cyber-operations at the NSA, under Michael Hayden’s command, the already tenuous distinction between defensive and offensive operations was deliberately elided.

Klimburg suggests that a healthy democracy needs much greater transparency about its cyber-policy. The government could provide its citizens with clear, unambiguous principles concerning the collection of signals intelligence, the development of offensive and defensive cyber-capabilities, their relation to traditional military strategy, and the evolving relationship between the intelligence community and the military. The American public might come to have more trust in the government, for example, if it only used psychological cyber-operations to win over “hearts and minds” in military zones—such as the locally informed and culturally specific influence campaigns used as counterinsurgency measures in Afghanistan—rather than manipulating popular beliefs more broadly and in less controlled ways.

Klimburg is not greatly concerned by the burgeoning power of the private corporations, like those in Silicon Valley, that run the online platforms on which the government’s influence operations take place. In his view they are independent and have purely commercial interests. But if we want to understand the growing imbalance of power in online persuasion, we might ask more questions than he does about the carefully guarded lack of transparency with which the titanic Silicon Valley companies operate. The interests that now guide what technologies they produce are not entirely commercial ones. The national security community has exploited the private sector to help develop America’s immense cyber-capabilities. In doing so it has placed an extraordinary array of potential cyber-weapons in the hands of unaccountable private companies.

US House Intelligence CommitteeA Facebook advertisement paid for by a Russian account with ties to the Kremlin in an attempt to influence the 2016 presidential election

The Internet, as is well known, owes its origins to DARPA (the Defense Advanced Research Projects Agency), the agency responsible for establishing and cultivating new military technologies. According to the “free Internet” narrative encouraged by Obama, Silicon Valley, and the Defense Department, the Internet technologies we use, from software to social media platforms, are controlled by the private sector. However, when DARPA boasts online about the technologies whose research and development it has sponsored, it lists, along with the Internet, the graphical user interfaces that allow us to interact with our devices, artificial intelligence and speech recognition technologies, and high-performance polymers for advanced liquid crystal display technology. These technologies encompass every aspect of the smartphone. Our online lives wouldn’t be possible without the commercialization of military innovations.

DARPA offers early funding, often to academics and researchers rather than private corporations, to develop new technologies for national security purposes, but the economic relationship between Silicon Valley and the national security community extends much further than that. One aspect of that relationship is detailed in Linda Weiss’s America Inc.?: Innovation and Enterprise in the National Security State (2014). Weiss describes the development in Silicon Valley of a hybrid public/private economy in which the government assists in the creation of new technologies it needs for national security operations by investing in companies that can also commercialize these technologies.

Government agencies have mitigated risk and even helped to create markets for companies whose products, while ostensibly strictly civilian and commercial, satisfy their own needs. The driverless car industry will incorporate, test, and improve technologies devised for missile guidance systems and unmanned drones. Facial recognition software developed by intelligence agencies and the military for surveillance and identity verification (in drone strikes, for example) is now assuming a friendly guise on our iPhones and being tested by millions of users.

The government has used various mechanisms to fund these projects. The Small Business Innovation Research program (SBIR), Weiss tells us, “has emerged as the largest source of seed and early-stage funding for high-technology firms in the United States,” investing, at the time of writing, $2.5 billion annually. This investment—the national security agencies supply 97 percent of funding for the SBIR program—not only serves as a form of government “certification” for private venture capitalists, it also provides an incentive for invention, since SBIR asks for no equity in return for its investment.

Silicon Valley has also been profoundly shaped by venture capital funds created by government agencies. The CIA, Defense Department, Army, Navy, National Geospatial-Intelligence Agency (NGIA), NASA, and Homeland Security Department all have venture capital at their disposal to invest in private companies. Weiss quotes a Defense Department report to Congress in 2002 explaining the aim of its initiatives:

The ultimate goal is to achieve technically superior, affordable Defense Systems technology while ensuring that technology developed for national security purposes is integrated into the private sector to enhance the national technology and industrial base.

The direction of technological development in the commercial sector, in other words, is influenced by the agenda of government agencies in ways largely unknown to the public.

It’s not difficult to trace, for example, the profound influence of In-Q-Tel, the CIA’s wildly successful venture capital fund, which has sometimes been the sole investor in start-ups but now often invests in partnerships with the Big Five. In-Q-Tel was the initial sole investor in Palantir Technologies, Peter Thiel’s software company specializing in big data analysis. A branch of the company called Palantir Gotham, which specializes in analysis for counterterrorism purposes, has won important national security contracts with the DHS, FBI, NSA, CDC, the Marine Corps, the Air Force, and Special Operations Command, among other agencies.

But In-Q-Tel’s achievements are also familiar to us in more mundane forms: Google Earth originated in an In-Q-Tel–sponsored company called Keyhole Inc., a 3-D mapping startup also partially owned by the NGIA. The cloud technology on which we all increasingly rely is being developed by companies like Frame, which is jointly funded by In-Q-Tel, Microsoft, and Bain Capital Ventures. Soon we will be able to use our computers to interact with 3-D holographic images, thanks to another In-Q-Tel–sponsored company, Infinite Z. Another of its companies, Aquifi, is producing scanners that can create a color 3-D model of any scanned object.

Since many of the startups in which government agencies invest end up being absorbed by the Big Five, these companies all now have close relationships with the defense and intelligence agencies and advise them on technological innovation. Eric Schmidt, the former executive chairman of Alphabet, Inc., chairs the Pentagon’s Defense Innovation Board (Jeff Bezos formerly served on it too), which in a January 2018 report recommended encouraging tech entrepreneurship within the military. The goal would be to create “incubators” like those used in the business and tech worlds that would help develop startups targeted to new defense needs, such as big data analysis.

The US government has supported the monopolies of the Big Five companies partly for the sake of the “soft power” they can generate globally. Libicki, in a 2007 RAND publication, Conquest in Cyberspace: National Security and Information Warfare, suggested that the government could achieve “friendly conquest” of other countries by making them depend on US technologies. The “bigger and richer the system, the greater the draw,” he tells us. Huge global corporations (his primary example is Microsoft), whose technologies are deeply linked with the domestic technologies of other nation-states, give America greater soft power across the globe.

It is clearly time to ask whether this hybrid Silicon Valley economy has been a good national security investment. Weiss points out that after the government funds research, it gives away the patents to private companies for their own enrichment. The kinds of contracts that organizations like In-Q-Tel and DIUx offer can be found on their websites; the licenses they acquire are generally nonexclusive. The technologies that power America’s national security innovations can be sold to anyone, anywhere. The profits go to companies that may or may not be concerned about the national interest: Intel recently alerted the Chinese government to a vulnerability in its chips—one that could be exploited for national security purposes—before alerting the American government.

Mariana Mazzucato, in The Entrepreneurial State (2013), examined the case of Apple, which has the lowest research-and-development spending of the Big Five. The company has succeeded commercially by integrating technologies funded by the military and by intelligence agencies (such as touch screens and facial recognition) into stylish and appealing commercial products. The government has shouldered nearly all the risk involved in these products, while Apple has reaped the rewards. In other words, taxpayers’ money has helped enrich companies like Apple, and as we now know from the recently released Paradise Papers (documents concerning offshore tax havens leaked from a Bermudan law firm), the companies have not responded with a corresponding willingness to increase the government’s tax revenues. Apple managed to keep a great deal of its $128 billion in profits free from taxation by using Irish subsidiaries and pledged to repatriate its sheltered funds only once the Trump administration dramatically slashed the corporate tax rate.

Silicon Valley companies do not simply have vast amounts of money, though; they also own vast amounts of data. To be sure, much older corporations like Bank of America and Unilever, which have been gathering our data for decades, own much more (approximately 80 percent, compared to Silicon Valley’s 20 percent, according to a recent study by IBM and Oxford Economics), but the Big Five, Uber, and others have extremely sophisticated data analytics, and their platforms are designed for the efficient exploitation of their data for advertising and influence.

This is where Klimburg’s concerns about the development of offensive cyber-powers by the military and intelligence agencies intersect most worryingly with the problem of privatizing our cyber-assets. The US has, since the start of the war on terror, increasingly outsourced intelligence and military operations to private companies, particularly those engaged in data analytics and targeting. Government agencies have offered lucrative contracts to older companies such as Booz Allen Hamilton and Boeing AnalytX, as well as to new players, such as Palantir, SCL Group, and SCL’s now infamous partner, Cambridge Analytica, whose roles in the Leave EU campaign in Britain and in Trump’s presidential campaign have both drawn legal scrutiny. In doing so the government has encouraged these companies to develop the most sophisticated methods for influencing the public. These kinds of military-grade information operations may then be applied to the companies’ commercial clients.

Government partnerships with such companies make the data owned by the Big Five exploitable in ways that many of us are only just beginning to understand. But these immense powers may also be freely employed for ends that threaten national security. The way in which the Koch brothers have already exploited their resources to promote skepticism about climate change should serve as a warning.

The problem is compounded by the exceptional form of corporate governance that the Big Five have been allowed to maintain. Even though Facebook and Google are publicly traded companies, their founders, Mark Zuckerberg of Facebook and Larry Page and Sergey Brin of Google, control more than 50 percent of the voting power in their respective companies—that is, effectively total control.

In Klimburg’s view, the national security community has irresponsibly overdeveloped its offensive powers in cyberspace. As far as its pursuit of dominance in military and intelligence capacities goes, this may be true. But by giving Silicon Valley irresistible commercial incentives to develop military technologies, the government has, at the same time, surrendered unparalleled power to private corporations. Extensive control of information has been handed over to unaccountable global corporations that don’t profit from the truth. It’s currently laughably easy, as Vladimir Putin has brazenly shown us, to spread foreign propaganda through the platforms they operate. But even if they can develop mechanisms to prevent the spread of foreign propaganda, we will still be heavily reliant on the goodwill of a handful of billionaires. They are, and will continue to be, responsible for maintaining the public’s confidence in information, preserving forms of credibility that are necessary for the health and success of our liberal democratic institutions.

Zuckerberg, in a well-known incident he now surely regrets, was asked in the early days of Facebook why people would hand over their personal information to him. He responded, “They trust me—dumb fucks.” We’re finally starting to appreciate the depth of the insult to us all. Now we need to figure out how to keep the corporations we have supported with our taxes, data, and undivided attention from treating us like dumb fucks in the future.

* See Art Swift, “Americans’ Trust in Mass Media Sinks to New Low,” Gallup News, September 14, 2016.


Bang for the Buck

Donald Trump speaking at a campaign rally in Las Vegas, December 2015 (John Locher/AP Images)

“Welcome, Patriots! Gun Show Today,” says a big sign outside the Cow Palace in Daly City, California, just south of San Francisco, where the Republican National Convention nominated Barry Goldwater for president in 1964. Inside, past the National Rifle Association table at the door, a vast room, longer than a football field, is completely filled with rows of tables and display cases. They show every conceivable kind of rifle and pistol, gun barrels, triggers, stocks, bullet keychain charms, Japanese swords, telescopic sights, night-vision binoculars, bayonets, a handgun carrier designed to look like a briefcase, and enough ammunition of every caliber to equip the D-Day landing force. Antique guns on sale range from an ancient musket that uses black powder to a Japanese behemoth that fires a bullet 1.2 inches in diameter.

Also arrayed on tables are signs, bumper stickers, and cloth patches you can sew onto your jacket: 9-11 WAS AN INSIDE JOB; THE WALL: IF YOU BUILD IT THEY CANT COME; HUNTING PERMIT UNLIMITED FOR ISIS. Perhaps 90 percent of those strolling the aisles are men, and at least 98 percent are white. They wear enough beards and bushy mustaches to stuff a good-sized mattress. At one table a man is selling black T-shirts that show a map of California in red, with a gold star and hammer and sickle. Which means? “This state’s gone Communist. And I hate to say it, but it was Reagan that gave it to them. The 1986 amnesty program”—which granted legal status to some 2.7 million undocumented immigrants.

If reason played any part in the American love affair with guns, things would have been different a long time ago and we would not have so many mass shootings like the one that took the lives of seventeen high school students in Parkland, Florida, on February 14. Almost everywhere else in the world, if you proposed that virtually any adult not convicted of a felony should be allowed to carry a loaded pistol—openly or concealed—into a bar, a restaurant, or a classroom, people would send you off for a psychiatric examination. Yet many states allow this, and in Iowa, a loaded firearm can be carried in public by someone who’s completely blind. Suggest, in response to the latest mass shooting, that still more of us should be armed, and people in most other countries would ask you what you’re smoking. Yet this is the NRA’s answer to the massacres in Orlando, Las Vegas, Newtown, and elsewhere, and after the Parkland killing spree, President Trump suggested arming teachers. One bumper sticker on sale here shows the hammer and sickle again with GUN FREE ZONES KILL PEOPLE.

Nor, when it comes to national legislation, do abundantly clear statistics have any effect. In Massachusetts, which has some of America’s most restrictive firearms laws, three people per 100,000 are killed by guns annually, while in Alaska, which has some of the weakest, the rate is more than seven times as high. Maybe Alaskans need extra guns to fend off bears, but that’s certainly not so in Louisiana, another weak-law state, where the rate is more than six times as high as in Massachusetts. All developed nations regulate firearms more stringently than we do; compared with the citizens of twenty-two other high-income countries, Americans are ten times more likely to be killed by guns. In the last fifty years alone, more civilians have lost their lives to firearms within the United States than have been killed in uniform in all the wars in American history.1

Congress, terrified of the NRA, not only ignores such data but has shielded manufacturers and dealers from any liability for firearms deaths, and has prevented the Centers for Disease Control from doing any studies of gun violence. As of last October—the figure has doubtless risen since then—the top ten recipients of direct or indirect NRA campaign funds in the US Senate had received more than $42 million from the organization over the past thirty years. Funneling a river of money to hundreds of other members of Congress as well, the NRA has certainly gotten what it pays for.

In Armed in America, Patrick J. Charles points out that after each horrendous mass shooting, like the one we’ve just seen at Parkland, not only does the NRA once again talk about good guys with guns stopping bad guys with guns, but gun purchases soar and the stock prices of gun makers rise. However, only a tiny fraction of the more than 30,000 Americans killed by guns each year die in these mass shootings. Roughly two thirds are suicides; the rest are more mundane homicides, and about five hundred are accidents. Some 80,000 additional people are injured by firearms each year. All these numbers would be far lower if we did not have more guns than people in the United States, and if they were not so freely available to almost anyone.

Although not the definitive study of the NRA that David Cole called for in these pages recently,2 Armed in America does cast a shrewd eye on what is probably the most powerful lobbying organization in Washington. For almost a century the NRA has pursued a two-faced strategy. It “would tout itself to lawmakers as the foremost supporter of reasonable firearms restrictions. At the same time, the NRA informed the gun-rights community that virtually all firearms restrictions would either make gun ownership a crime or somehow lead to disarmament.” The NRA presents itself to the public as “a voice of compromise” and boasts of its courses in gun safety, but skillfully mobilizes its five million members and annual budget of more than $300 million to make sure Congress never passes any meaningful gun control. The poignant, outspoken campaigning by the Florida high schoolers who survived the Parkland shooting may spur somewhat tightened gun control in a few states, but at the national level new laws are unlikely to be sweeping or significant.

The Koch brothers have been major financial supporters of the NRA because it so reliably turns out right-wing voters on election day. A vocal and militant NRA also helps protect people like the Kochs by encouraging the illusion that the real source of political power in America is gun ownership—rather than, say, great wealth.

Guns were essential tools in our early history, but as the frontier disappeared, a mystique about them grew only stronger. Charles quotes Sports Afield from 1912: “Perfect freedom from annoyance by petty lawbreakers is found in a country where every man carries his own sheriff, judge and executioner swung on his hip.” Last year, someone who would dearly love to wield such powers against his enemies became the first sitting president to address the NRA in more than three decades. “The eight-year assault on your Second Amendment freedoms has come to a crashing end,” Donald Trump told the organization’s annual convention. “You have a true friend and champion in the White House.”

For more than a century, the NRA and its opponents have argued over the meaning of that amendment: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” Gun enthusiasts claim that this protects almost anyone who wants to carry a rifle down the street or a pistol to church, and therefore that gun control violates the Constitution. Liberals, on the other hand, maintain fervently that the rights granted by the Second Amendment refer only to a “well regulated Militia,” such as that which fought the redcoats at Lexington and Concord or that makes up the National Guard today.

Charles takes the second position, which he argues at ponderous length, firing salvos at rival scholars and tracing the amendment’s ancestry back to Britain’s Militia Acts of 1661 and 1662. Yet something feels sterile about this dispute over what the Founding Fathers had in mind. It is tragic that we should still have to battle over the intentions of that assembly of men in frock coats and powdered wigs when, all around us, the carnage from gun violence continues.

And so it was with little appetite that I picked up yet another book that takes the history of guns back to colonial times, but Roxanne Dunbar-Ortiz’s Loaded is like a blast of fresh air. She is no fan of guns or of our absurdly permissive laws surrounding them. But she does not merely take the liberal side of the familiar debate. “Neither party,” she writes of that long squabble, “seems to have any idea what the Second Amendment was originally about.” Of course the amendment was written with militias in mind, she says, but, during and after the colonial era, just what were those militias? They were not merely upstanding citizens protecting themselves against foreign tyrants like King George III. They also searched for runaway slaves and seized land from Native Americans, often by slaughter.

Loaded quotes former Wyoming senator Alan Simpson: “Without guns, there would be no West.” But in this sense, the West began at the Atlantic seaboard, where settler militias were organized from the seventeenth century onward. Before long, members could collect bounties for the heads or scalps of Native Americans—an early case, incidentally, of the privatization of warfare. When the thirteen colonies declared their independence, one grievance was the king’s Royal Proclamation of 1763, by which the British, fretting over the expense of sending troops across the Atlantic to fight endless Indian wars, placed land beyond the Appalachian-Allegheny mountain range off-limits to white settlement.

Many well-armed settlers, however, thirsted for that land and crossed the mountains to take it. Among them was the eager young George Washington, who went on to make a fortune speculating in land far to the west of coastal Virginia where he had been born. As settlement expanded across the Great Plains, US Army troops took over the job of suppressing the doomed Native American resistance, but militias had long preceded them.

The militias also kept slaves in line. Dunbar-Ortiz quotes a North Carolina legal handbook of 1860 on such duties: “The patrol shall visit the negro houses in their respective districts as often as may be necessary, and may inflict a punishment, not exceeding fifteen lashes, on all slaves they may find off their owner’s plantations…[and] shall be diligent in apprehending all runaway negroes.” If a captured slave behaved “insolently” the militia could administer up to thirty-nine lashes. Some militias, such as the Texas Rangers, did double duty, both seizing land and hunting down escaped slaves. After the Civil War, when the South was still awash in guns and ammunition, militias morphed easily into the Ku Klux Klan—and into private rifle clubs; by 1876 South Carolina alone had more than 240.

Cleansed of its origins, some of this history has been absorbed into our culture. Dunbar-Ortiz comes, she tells us, from rural Oklahoma, the daughter of a “proletarian cowboy,” and grew up on romantic stories of bandits like Jesse James who were said to be American Robin Hoods. But who was Jesse James? He was a veteran of a particularly brutal militia, in which he had fought for the Confederacy in the Civil War.

Men like Daniel Boone and Davy Crockett, Dunbar-Ortiz points out, have been sanitized in a different way, remembered not as conquerors of Native American or Mexican land, but as frontiersmen roaming the wilderness in their fringed deerskin clothing—and as skilled hunters. This has powerful resonance with many gun owners today, who hunt, or once did, or at least would like to feel in themselves an echo of the hunter: fearless, proud, self-sufficient, treading in the footsteps of pioneers. One of those fringed leather jackets (although not deerskin, the salesman acknowledges) is on sale at the gun show, as is a huge variety of survival-in-the-wilderness gear: canteens, beef jerky, buffalo jerky, bear repellent, and hundreds of knives, many of them lovingly laid out on fur pelts: coyote, beaver, muskrat, possum, and the softest, badger.

The early militias are one strand of ancestry Dunbar-Ortiz identifies for gun enthusiast groups like the NRA. Another is the legacy of America’s wars—not those with defined front lines, like the two world wars and Korea, but the conflicts in Vietnam, Central America, Iraq, Afghanistan.3 In those wars it was often unclear who was friend and who was enemy, mass killings of civilians were common, and many a military man evoked the days of the Wild West. General Maxwell Taylor, Lyndon B. Johnson’s ambassador to South Vietnam, for instance, called for more troops so that the “Indians can be driven from the fort and the settlers can plant corn.”

One of the greatest predictors of American gun ownership today is whether someone has been in the military: a veteran is more than twice as likely as a nonveteran to own one or more guns. Among the bumper stickers and signs at the gun show are JIHAD FREE ZONE and I’LL SEE YOUR JIHAD AND RAISE YOU A CRUSADE; the latter shows a bloody sword. Many a vet is strolling the aisles, happy to talk about fighting in Iraq or Afghanistan. The first in the chain of mass shootings that have bedeviled the United States over the last half-century or so, carried out from atop a tower at the University of Texas at Austin in 1966, was the work of Charles Whitman, an ex-Marine.

The passion for guns felt by tens of millions of Americans also has deep social and economic roots. The fervor with which they believe liberals are trying to take all their guns away is so intense because so much else has been taken away. In much of the South, in the Rust Belt along the Great Lakes, in rural districts throughout the country, young people are leaving or sinking into addiction and jobs are disappearing. These hard-hit areas have not shared the profits of Silicon Valley and its offshoots or the prosperity of coastal cities from Seattle to New York. Even many of his supporters know in their hearts that Trump can never deliver on his promises to bring back coal mining and restore abundant manufacturing jobs. But the one promise he, and other politicians, can deliver on is to protect and enlarge every imaginable kind of right to carry arms.

People passionate about guns often display a sense of being under siege, left behind, pushed down, at risk. One of the large paper targets on sale at the gun show shows a scowling man aiming a pistol at you. On bumper stickers, window signs, and flags is the Revolutionary-era DON’T TREAD ON ME, with its image of a coiled rattlesnake. At one table, two men are selling bulletproof vests. For $500 you can get an eight-pound one whose plates—front, back, side—are made of lightweight compressed polyethylene. “They used to use it to line the bottom of combat helicopters,” says one of the men. For only $300, you can get one with steel plates, but it weighs twenty-three pounds. Also on sale is a concealable vest that goes under your clothing: medium, large, and X-large for $285; XX-large and XXX-large for $315.

Who buys these? I ask.

“Everybody—who sees the way the world is going.”

The most bellicose descendants of the American militias of centuries past are the forces that go under the same name today. We have seen a lot of these camouflage-clad men (and the occasional woman) in the past few years: striding through Charlottesville, Virginia, last August with their rifles and walkie-talkies under Confederate flags, traveling in convoys with gun barrels poking through the windows of pickup trucks and SUVs to camp near the Mexican border and watch for immigrants slipping across, and, most often, tangling with US Forest Service or other federal officials in theatrically orchestrated standoffs over the use of federal land in the Far West. Four hundred armed militiamen were on the scene in 2014 at the height of a standoff in Nevada; one hundred appeared at another in Montana the next year, and three hundred at one in Oregon the year after that. Similar armed confrontations have taken place in New Mexico, Texas, and California, and a militia leader from Utah was arrested in 2016 after apparently trying to bomb a Bureau of Land Management outpost in Arizona. Between 2010 and 2014 alone there were more than fifty attacks on BLM or Forest Service employees, including two by snipers.

James Pogue’s Chosen Country is a young journalist’s account of spending many weeks with participants in several of these western land occupations. A would-be Hunter S. Thompson, he includes far more than you want to know about his own drinking, smoking, drug use, tattoos, girlfriends, beloved grandmother, and brushes with the law. Nonetheless, there is an extravagant verve to his writing (three armed riflemen at a roadblock “gave us looks sort of like what you’d give a couple of college boys you found at your daughter’s slumber party”; young militiamen romanticize “a glossy magical cowboy past”) and, more important, amid the overblown gonzo riffs, he has genuine compassion for the suffering of some of those “on the angrier fringes of the rancher subculture.”

The Endangered Species Act has thrown both loggers and ranchers out of work, and even though there are good reasons for limiting grazing on federal land (such as preventing erosion or the pollution of drinking water), a new restriction can push a small struggling sheep farmer into bankruptcy. Pogue gets in amazingly deep with these western rebels, even joining a carful of them on a madcap expedition to Salt Lake City to enlist Mormon elders in defusing one standoff. But he is wise enough to know that those who will really benefit from any privatization of the vast federally owned territory in the West are not the militiamen with their “Ranchers’ Lives Matter” yard signs but those who have the capital to exploit the land’s riches: agribusiness, mining companies, oil and gas drillers. It’s no surprise that many of those interests enthusiastically support the militia occupations.

Beretta handguns at an NRA convention, San Antonio, Texas, April 1991 (Alon Reininger/Contact Press Images)

There are rivalries aplenty between various militia groups, but one undercurrent in almost all of them, whether spoken or denied, is white nationalism. The first attempt to plant a private militia on the Mexican border was made by David Duke of the Ku Klux Klan. Of African-Americans, Cliven Bundy, patriarch of the family behind several of the western land standoffs, has said, “I’ve often wondered, are they better off as slaves, picking cotton…?” Two of Bundy’s sons were among those who occupied federal buildings at the Malheur National Wildlife Refuge in southeastern Oregon; one of their collaborators had recently aired a video that showed him wrapping pages of the Koran in bacon and setting them on fire. The Malheur occupiers rifled through a collection of Native American relics, and turned the site of a nearby archaeological dig containing more artifacts into a latrine. It is not hard to see the continuity with the militias of two hundred years ago.

American right-wingers in uniform have been around since the Nazi and blackshirt groups of the 1930s. Later militias came and went; a new wave of them was spurred into being by the election of Barack Obama in 2008. Their ideology tends to echo that of others on the far right: the New World Order and its minions (the Kenyan-born Obama, Hillary Clinton, George Soros, most people in Hollywood, and many others) favor the spotted owl over loggers and ranchers and black people over white, patrol the skies with black helicopters, and are conspiring to flood the United States with immigrants and refugees, install United Nations rule, impose Sharia law, and seize guns from their rightful owners. “As long as I’m alive and breathing,” sings the country and western artist (and Trump supporter) Justin Moore, “you won’t take my guns.” One bumper sticker on sale at the gun show says, AMERICA HAS BEEN OCCUPIED BY GLOBALIST FORCES. Militias go further than other right-wing groups in promising to resist this imposition of the New World Order with arms. “When the ballot box doesn’t work,” says John Trochmann, founder of the Militia of Montana, “we’ll switch to the cartridge box.”

Some of this, of course, is hot air. The number of active militia groups actually fell by 40 percent from 2015 to 2016, according to the Southern Poverty Law Center, which monitors the movement closely. One “key factor” was that when the brothers Ammon and Ryan Bundy and their followers seized buildings at Malheur in early 2016, the federal government hung tough, shooting dead one militia leader when he tried to pull a gun on officers at a roadblock, arresting many more, and indicting them on serious charges.

There has been one huge change since then: the election of Donald Trump. A few years before, during an earlier standoff, Trump voiced qualified support for Cliven Bundy. (He was uneasy about the occupation and suggested Bundy cut a deal with Obama, but said, “I like him, I like his spirit, his spunk, and the people that are so loyal…. I respect him.”) Several friends of the Bundys or supporters of their Malheur occupation became prominent Trump backers, and one, oilman Forrest Lucas, was on the president’s shortlist for secretary of the interior. A judge’s recent declaration of a mistrial was the latest in a series of setbacks the government has had in prosecuting the Bundys. Since the election, militia members have been increasingly visible around the country, providing “security” for right-wing demonstrators and speakers. One such speaker is Cliven Bundy, newly released from jail. And, in contrast to their decline as Obama cracked down on the land occupations, under Trump the number of armed militia groups in the United States has soared ominously, from 165 in 2016 to 273 in 2017.

What happens to them next? I see two dangers. The first is that the next militia standoff over a federal land occupation in the West may end differently. It is hard to imagine Trump’s Justice Department firmly enforcing the law against people who so represent the concentrated essence of his base. Does that mean that the armed seizure of some National Forest land, say, might go unhindered and become permanent? And might that, in turn, encourage dozens of similar land grabs? The rural areas of western states are filled with people—including thousands of county sheriffs’ deputies and other state and local officeholders—who believe no one should tell them where they can’t graze their cattle, hunt game, cut a tree, or dig for gold. And what right do the feds have to own all that land, anyway? Promoting oil drilling in national parks, Trump clearly feels the same way.

The second danger is this: Trump may well be forced out of office—by defeat in 2020 if not by other means before then. If that occurs, we know it will be a stormy process, in which he will try in every possible way to inflame and rally his supporters, with more dark charges of “rigged” voting if he loses the election. To anyone on the far right his defeat or removal will be virtual proof of a conspiracy to restore the New World Order. Will these gun-toting men in boots and camouflage flak jackets accept his departure from the White House quietly? And, if they can’t prevent it, will they somehow take revenge?

—March 8, 2018

1. If you want to arm yourself with such statistics for arguments with gun enthusiasts, you’ll find plenty in “Guns Don’t Kill People, People Kill People” and Other Myths about Guns and Gun Control by Dennis A. Henigan (Beacon, 2016), although the book’s usefulness is hampered by the lack of an index.

2. “The Terror of Our Guns,” July 14, 2016.

3. Bring the War Home: The White Power Movement and Paramilitary America by Kathleen Belew (Harvard University Press, 2018) makes the same point by tracing the roots of much white racist violence from the 1970s through the early 1990s to the Vietnam War and some of its veterans.
