‘I’m not the Resistance, I’m a reporter’: An Interview with April Ryan


Alex Brandon/AP Photo: April Ryan raising her hand to put a question to White House press secretary Sarah Huckabee Sanders, Washington, D.C., October 31, 2017

At the top of Donald Trump’s journalistic enemies list is April Ryan, the fifty-one-year-old American Urban Radio Networks correspondent. Ryan—who has covered the presidency for more than two decades—is also an on-air political analyst for CNN and the author of three books, including the recently released Under Fire: Reporting from the Front Lines of the Trump White House. Around Washington, D.C., Ryan holds the title “Dean of the White House Press Corps.”

“I watched her get up,” the president fumed last week before departing for Paris. “I mean, you talk about somebody that’s a loser. She doesn’t know what the hell she’s doing… She’s very nasty, and she shouldn’t be… You’ve got to treat the White House and the Office of the Presidency with respect.”

What Trump may find disrespectful is that Ryan has a penchant for asking tough questions on topics he doesn’t want to hear about: voter suppression, civil rights, Russia. Ryan is also black, female, middle-aged, and resolute. In January 2018, she asked, “Mr. President, are you a racist?”

This boldness has made Ryan the target of Trump’s more ardent followers. She receives frequent death threats. On a reporter’s salary, she’s had to hire a full-time bodyguard. There are reports that Cesar Sayoc Jr., who is accused of sending pipe bombs to Hillary Clinton, George Soros, Barack Obama, and others, also had Ryan on his mailing list.

We spoke for two hours in New York City in the early fall, and then again, twice after the November 6 election, by telephone. An edited and condensed version of the conversations follows.


Claudia Dreifus: Like the late Helen Thomas of the UPI, you’re known as the “Dean of the White House Press Corps.” It’s an honorific earned by covering four presidents. When did you realize that reporting on the Donald Trump presidency would be very different?

April Ryan: I saw it during the 2016 campaign. I knew in my gut he was going to win. You could see it, if you were honest with yourself and listened to what he was saying. He’d say to a 90 percent white crowd, “We built this nation!”

He used code words and they solidified the crowd. With Donald Trump, there wasn’t political decorum. It was shock and awe. This was a street game and he was playing “the Dozens.” His opponents, in both the primaries and the general election, were too polished to understand what was happening. I used to go on MSNBC and tell Chris Matthews, “This man could be our president.” 

Chris would say, “No, it will never happen!”  

When it did happen, how did life change for journalists covering the White House?

There used to be an atmosphere of mutual respect there. You had journalists like Bill Plante of CBS and Ann Compton of ABC. They were tough questioners, but they always were respected.

I didn’t have any problem with any president until now. George W. Bush, I really have a fondness for him because we were able to talk about race. Bill Clinton, I asked him hard questions about Monica Lewinsky. I dogged Obama about unemployment in the black community. They may not have agreed with the questions I threw out, but they respected me.

This president changed the dynamic. The press room now, every day, it’s something different. Now looking at this new crop, you’ve got conservative journalists over here, you’ve got liberal journalists, and you got those few who are in the middle. We didn’t have that back then. If a reporter had politics, you didn’t know about it. It wasn’t out like it is now.

Where do you sit on the spectrum?

I sit in the middle. I sit with the tradition of Walter Cronkite. You didn’t know his politics until he left journalism. And you don’t know if I’m a Republican or a Democrat. I don’t talk about my politics. No one knows my politics.

Whatever they are, you’re a journalist with a bodyguard. Why?

There have been threats. It’s gone beyond emails, beyond the phone calls, beyond threatening messages to the company website. The security: it’s not just for myself, it’s for protecting me and mine. I have kids. I’m trying to keep my life and the lives of those I love safe.

The other thing is that people have tried to intimidate me. Right now, I can’t cover a Trump rally in a red state, even if I wanted to. I mentioned that to Steve Bannon when I talked to him. He said, “No. I wouldn’t advise it.” 

The questions you ask the president appear to be the source of the hostility. This began in February of 2017 at Donald Trump’s first solo presidential press conference when you asked what would, under ordinary circumstances, have been a rather mundane question.

Yes. I asked him a question about urban America and it wound up being crazy. I asked if, in his plans for an inner cities agenda, he was going to talk with the Congressional Black Caucus. The president went, “Well, I would… I’ll tell you what, do you want to set up the meeting? Are they friends of yours?” 

All I could say was something like, “I’m just a reporter. I know some of them.” But the president went on, “Let’s go… let’s set up a meeting. I would love to meet with the Black Caucus.”

When he started speaking those words, I was at a loss. Blood rushed to my ears because that’s my way when things get tough. I was shaking my head, not realizing what I was doing. “No, I can’t do that, sir.” I was in shock. In my mind, I thought, but didn’t say, “That’s not my job.”

This exchange was the worst thing ever for me. That moment is forever etched into history because that video went viral. For better or worse, I’m a meme from that. 

Who do you actually work for?

American Urban Radio Networks. We send newsfeeds to black-owned radio stations or stations with a black focus. Our focus is on minority America, though we ask questions about all of America—health disparities, lead poisoning, police shootings, Freddie Gray, Eric Garner. 

How do you develop your questions for the president? Dan Rather, who once covered the White House, told me that he spent months crafting a single question for Richard Nixon.

Months? It doesn’t take me that long. We’re in a different day now. You’re responding to the moment. The 24/7 news machine and social media have changed the dynamic of how we ask questions. I do a lot of research. I have sources who tell me things. My questions are often driven by the events of the day.   

When I asked Sarah Huckabee Sanders if the president had ever thought of resigning, that was driven by the news that a federal prosecutor had just raided Michael Cohen’s office. I had sources telling me that things were going on, like wiretaps and things of that nature.

This past January, at a public ceremony celebrating Martin Luther King Jr. Day, you bluntly asked the president if he was a racist. What was your process in developing that question?

As a journalist, as you know, you have your ear to the ground. You have a Rolodex of people you talk to quite frequently to find out what they’re thinking. You hear grumbling. 

I’d heard that federal lawmakers came out of a meeting where the president allegedly said “shithole nations,” versus people from Norway who’d be welcome here. Not only that: before that was Charlottesville.

So I was hearing this groundswell from black leaders and white leaders. And then I called the NAACP and asked, “What is the definition of a racist?” And it was simple: the intersection or the meeting of prejudice and power.

I was torn about asking it. Even in the room, I was back and forth. I’ll never forget how the president kept looking at me from there, like he doesn’t like me. When I asked it, I knew I had done something. I realized that it’s a sad day when you have to ask a US president if he’s a racist. It hurt me.

Does the president ever call on you at events?

Not anymore. I’ll yell out a question sometimes. He’ll give me a squint and close his mouth and skip to the next questioner. He doesn’t like me. That’s okay.

At his recent post-Election Day press conference, the one where the president grew furious with CNN’s Jim Acosta, he also rounded on you. “Sit down,” he told you. “I didn’t call on you.” What were your feelings as he said that?

I was shocked, though I wasn’t as surprised as some people. He’s done this before to me. Whatever he thinks about me, I’ll be doing what I’m doing. In this instance, I was asking him about voter suppression and he was avoiding talking about the issue.

I don’t know why people are shocked. He’s gone after women before. We’ve seen how he is when he talks to Kaitlan Collins or Cecilia Vega.  

He has said to ABC’s Cecilia Vega, “You’re not thinking, you never do.”

I love Cecilia. She’s amazing. And a thinker. And she asks really good questions. It’s wrong to denigrate a woman, a journalist. It shows where women, where journalists, stand with him. In a time when women are walking away from him and his message is against women, that’s a dangerous game to play. Independent women are leaving him and going toward the Democrats.

Just last week, the president revoked Jim Acosta’s White House press credentials. In the wake of that, some journalists are suggesting that the entire White House press corps should boycott some of the president’s events. Is that feasible?

If they do boycott, I would have to be part of it. But there are ways that we can be just as effective without a boycott. We have to be all together and we’ll have to work together to find those areas. We have to figure out what to report and what not to report. We can be more strategic in our reporting.

You’ve had some run-ins with Sanders. During Brett Kavanaugh’s confirmation hearings, you threw a remarkable question at her. You asked why the president had so readily believed in Kavanaugh’s innocence, when in 1989 Trump had rushed to judgment in the case of five black teenagers accused of raping a white jogger in New York’s Central Park. Many years later, the real rapist confessed. Now, the 1980s is a long time ago; why ask that question now?

I didn’t initiate the question. Ayesha Rascoe of NPR did. The minute Ayesha asked it, I saw Sarah getting riled, as if to say, “Leave that question.” And I was like, “You are not going to step over this.” I went right in.

What it does show is that he’s selective in who he believes and who he doesn’t. The reason I asked is because people know what happened there. There’s a record. And he has yet to say, “I’m sorry.”

About your bodyguard, how do you afford him? I know what radio reporters earn.

I’m creative. I put a lot of jobs together. I’m on CNN. I write books, I lecture. The security: it costs a lot, but I can’t leave this job. I’m not a quitter. If people threaten me because I’m asking questions, that’s not right. They view me as the Resistance; I’m not the Resistance, I’m a reporter.

How do your colleagues in the White House press corps react to your being targeted?

Some of them don’t care. They’re of the mindset, “Oh, she gets so much attention.” Then there are others, they feel bad for me. I had a co-worker walk with me when she saw a couple of in-my-face attempts at intimidation.

A lot of the conservative newbies who think I’m not friendly to the president or who want to write stories that are for the president, they want to challenge me. I’m like, “You just got here, who are you?”

You’ve suggested that Sarah Huckabee Sanders ought to be paying for your security detail. Why?

It’s because every time Sarah comes out and says something against me, I get these emails that go, “Oh, I’m going to do this and I’m going to do that to you.” Every time, there’s an elevated level of hate. They are generating this hate. I am “the enemy.” Or one of the enemies.

A hypothetical question: If your predecessor as the dean of the White House press corps, the late Helen Thomas, were, in some mythical way, to come down from journalism heaven, what do you think she’d tell you?

“Keep doing what you’re doing.” She’d be the first person banging on the door for answers. She had the doors closed quite a bit on her, though it never stopped her. But people in power were afraid of her. She wielded real power.   

Like her, I’m not looking for approval. I’m looking to do my job.


Paul J. Richards/AFP Photo/Getty Images: April Ryan assisting Helen Thomas, then known as the dean of the White House press corps, Washington, D.C., November 12, 2008


‘This Is a Reality, Not a Threat’


David Levinthal: Untitled, from his 2008 series ‘I.E.D.,’ about the US wars in Afghanistan and Iraq. Levinthal’s work is on view in ‘David Levinthal: War, Myth, Desire,’ at the George Eastman Museum, Rochester, New York, until January 1, 2019. The accompanying book is published by the museum and Kehrer.

The Reign of George VI, 1900–1925, published anonymously in London in 1763, makes for intriguing reading today. Twentieth-century France still groans under the despotism of the Bourbons. America is still a British colony. “Germany” still means the sprawling commonwealth of the Holy Roman Empire. As the reign of George VI opens, the British go to war with France and Russia and defeat them both. But after a Franco-Russian invasion of Germany, the war reignites in 1917. The British invade and subdue France, deposing the Bourbons. After conquering Mexico and the Philippine Islands, the Duke of Devonshire enters Spain, and a general peace treaty is signed in Paris on November 1, 1920.

The impact of revolution on the international system lies far beyond this author’s mental horizons, and he has no inkling of how technological change will transform modern warfare. In his twentieth century, armies led by dukes and soldier-kings still march around the Continent reenacting the campaigns of Frederick the Great. The Britannia, flagship of the Royal Navy, is feared around the world for the devastating broadsides of its “120 brass guns.” The term “steampunk” comes to mind, except there is no steam. But there are passages that do resonate unsettlingly with the present: English politics is mired in factionalism, Germany’s political leadership is perilously weak, and there are concerns about the “immense sums” Russian Tsar Peter IV has invested in British client networks, with a view to disrupting the democratic process.

Predicting future wars—both who will fight them and how they will be fought—has always been a hit-and-miss affair. In The Coming War with Japan (1991), George Friedman and Meredith Lebard solemnly predicted that the end of the cold war and the collapse of the Soviet Union would usher in an era of heightened geopolitical tension between Japan and the US. In order to secure untrammeled access to vital raw materials, they predicted, Japan would tighten its economic grip on southwest Asia and the Indian Ocean, launch an enormous rearmament program, and begin challenging US hegemony in the Pacific. Countermeasures by Washington would place the two powers on a collision course, and it would merely be a matter of time before a “hot war” broke out.

The rogue variable in the analysis was China. Friedman and Lebard assumed that China would fragment and implode just as the Soviet Union had, leaving Japan and America as rivals in a struggle to secure control over it. It all happened differently: China embarked upon a phase of phenomenal growth and internal consolidation, while Japan entered a long period of economic stagnation. The book was clever, well written, and deftly argued, but it was also wrong. “I’m sure the author had good reasons in 1991 to write this, and he’s a really smart guy,” one reader commented in an Amazon review in 2014 (having failed to notice Meredith Lebard’s co-authorship). “But, here we are, 23 years later, and Japan wouldn’t even make the list of the top 30 nations in the world the US would go to war with.”

This is the difficult thing about the future: it hasn’t happened yet. It can only be imagined as the extrapolation of current or past trends. But forecasting on this basis is extremely difficult. First, the present is marked by a vast array of potentially relevant trends, each waxing and waning, augmenting one another or canceling one another out; this makes extrapolation exceptionally hard. Second, neither for the present nor for the past do experts tend to find themselves in general agreement on how the most important events were or are being caused—this, too, bedevils the task of extrapolation, since there always remains a degree of uncertainty about which trends are more and which are less relevant to the future in question.

Finally, major discontinuities and upheavals seem by their nature to be unpredictable. The author of The Reign of George VI failed to predict the American and French Revolutions, whose effects would be profound and lasting. None of the historians or political scientists expert in Central and Eastern European affairs predicted the collapse of the Soviet bloc, the fall of the Berlin Wall, the unification of Germany, or the dissolution of the Soviet Union. And Friedman and Lebard failed to foresee the current economic, political, and military ascendancy of China.

Lawrence Freedman’s wide-ranging The Future of War: A History is aware of these limits of human foresight. It is not really about the future at all, but about how societies in the Anglophone West have imagined it. The book doesn’t advance a single overarching argument; its strength lies rather in the sovereign presentation of a diverse range of subjects situated at various distances from the central theme: the abiding military fantasy of the “decisive battle,” the significance of peace conferences in the history of warfare, the impact of nuclear armaments on strategic thought, the quantitative analysis of wars and their human cost, the place of cruelty in modern warfare, and the changing nature of war in a world of cyberweapons and hybrid strategy.

In modern societies, as Freedman shows, imagining wars to come has been done not just by experts and military planners but also by autodidacts and writers of fiction. The most influential early description of a modern society under attack by a ruthless enemy was H.G. Wells’s best seller The War of the Worlds (1897), in which armies of Martians in fast-moving metal tripods poured “Heat-Rays” and poisonous gas into London, clogging the highways with terrified refugees who were subsequently captured and destroyed, their bodily fluids being required for the nourishment of the invaders. The Martians had been launched from their home planet by a “space gun”—borrowed from Jules Verne’s From the Earth to the Moon (1865)—but the underlying inspiration came from the destruction of the indigenous Tasmanians after the British settlement of the island, an early nineteenth-century epic of rapes, beatings, and killings that, together with pathogens carried by the invaders, wiped out virtually the entire black population (a few survived on nearby Flinders Island). The shock of Wells’s fiction derived not so much from the novelty of such destruction, which was already familiar from the European colonial past, but from its unexpected relocation to a white metropolis.

The most accurate forecast of the stalemate on the Western Front in 1914–1918 came not from a professional military strategist but from the Polish financier and peace advocate Ivan Stanislavovich Bloch (1836–1901), whose six-volume study The War of the Future in Its Technical, Economic, and Political Relations (1898) argued that not even the boldest and best-trained soldiers would be able to cut through the lethal fire of a well-dug-in adversary. The next war, he predicted, would be “a great war of entrenchments” that would pit not just soldiers but entire populations against one another in a long attritional struggle. Bloch’s meticulously detailed scenario was an argument for the avoidance of war. If this kind of thinking failed to have much effect on official planning, it was because military planners foresaw a different future, one in which determined offensives and shock tactics would still carry the day against defensive positions. Their optimism waned during the early years of World War I but was revived in 1917–1918, with the return to a war of movement marked by huge offensive strikes and breakthroughs into enemy terrain.

The prospect of aerial warfare aroused a similar ambivalence. Wells’s War in the Air (1908) imagined a form of warfare so devastating for all sides that a meaningful victory by any one party was unthinkable. He depicted America as under attack from the east by German airships and “Drachenfliegers” and from the west by an “Asiatic air fleet” equipped with swarms of heavily armed “ornithopters” (lightweight one-man flying machines). The book closed with a post-apocalyptic vision of civilizational collapse and the social and political disintegration of all the belligerent states.

But others saw aerial warfare as a means of recapturing the promise of a swift and decisive victory. Giulio Douhet’s The Command of the Air (1921) aimed to show how an aerial attack, if conducted with sufficient resources, could carry war to the nerve centers of the enemy, breaking civilian morale and thereby placing decision-makers under pressure to capitulate. The ambivalence remains. To this day, scholars disagree on the efficacy of aerial bombing in bringing the Allied war against Nazi Germany to an end, and the Vietnam War remains the classic example of a conflict in which overwhelming air superiority failed to secure victory.

The ultimate twentieth-century weapon of shock was the atomic bomb. The five-ton device dropped on Hiroshima by an American bomber on August 6, 1945, flattened four square miles of the city and killed 80,000 people instantly. The second bomb, dropped three days later on Nagasaki, killed a further 40,000. The advent of this new generation of nuclear armaments—and above all the acquisition of them by the Soviet Union—opened up new futures. In 1954, a team at the RAND Corporation led by Albert Wohlstetter warned that if the leadership of one nuclear power came to the conclusion that a preemptive victory over the other was possible, these devastating weapons might be used in a surprise attack. On the other hand, if the destructive forces available to both sides were in broad equilibrium, there was reason to hope that the fear of nuclear holocaust would itself stay the hands of potential belligerents. “Safety,” as Winston Churchill put it in a speech to the British Parliament in March 1955, might prove “the sturdy child of terror, and survival the twin brother of annihilation.”

This line of argument gained ground as the underlying stability of the postwar order became apparent. The “function of nuclear armaments,” the Australian international relations theorist Hedley Bull suggested in 1959, was to “limit the incidence of war.” In a nuclear world, Bull argued, states were not just “unlikely to conclude a general…disarmament agreement,” but would be “behaving rationally in refusing to do so.” In an influential paper of 1981, the political scientist Kenneth Waltz elaborated this line of argument, proposing that the peacekeeping effect of nuclear weapons was such that it might be a good idea to allow more states to acquire one: “more may be better.”*

Most of us will fail to find much comfort in this Strangelovian vision. It is based on two assumptions: that the nuclear sanction will always remain in the hands of state actors and that state actors will always act rationally and abide by the existing arms control regimes. The first still holds, but the second looks fragile. North Korea’s nuclear deterrent is controlled by one of the most opaque personalities in world politics. This past January, Kim Jong-un reminded the world that a nuclear launch button is “always on my table” and that the entire United States was within range of his nuclear arsenal: “This is a reality, not a threat.”

For his part, the president of the United States taunted his Korean opponent, calling him “short and fat,” “a sick puppy,” and “a madman,” warning him that his own “Nuclear Button” was “much bigger & more powerful” and threatening to rain “fire and fury” down on his country. Then came the US–North Korea summit of June 12, 2018, in Singapore. The two leaders strutted before the cameras and Donald Trump spoke excitedly of the “terrific relationship” between them. But the summit was diplomatic fast food. It lacked, to put it mildly, the depth and granularity of the meticulously prepared summits of the 1980s. We are as yet no closer to the denuclearization of the Korean peninsula than we were before.

Meanwhile Russia has installed a new and more potent generation of intermediate-range nuclear missiles aimed at European targets, in breach of the 1987 INF Treaty. The US administration has responded with a Nuclear Posture Review that loosens constraints on the tactical use of nuclear weapons, and has threatened to pull out of the treaty altogether. The entire international arms control regime so laboriously pieced together in the 1980s and 1990s is falling apart. In a climate marked by resentment, aggression, braggadocio, and mutual distrust, the likelihood of a hot nuclear confrontation either through miscalculation or by accident seems greater than at any time since the end of the cold war.

Freedman is unimpressed by Steven Pinker’s claim that the human race is becoming less violent, that the “better angels of our nature” are slowly gaining the upper hand as more and more societies come to accept the view that “war is inherently immoral because of its costs to human well-being.” Pinker’s principal yardstick of progress, the declining number of violent deaths per 100,000 people per year across the world over the span of human history, strikes Freedman as too crude: it fails to take account of regional variations, phases of accelerated killing, and demographic change; it assumes excessively low death estimates for the twentieth century and fails to take account of the fact that deaths are not the only measure of violence in a world that has become much better at keeping the maimed and traumatized alive.

However the numbers stack up, there has clearly been a change in the circumstances and distribution of fatalities. Since 1945, conflicts between states have caused fewer deaths than various forms of civil war, a mode of warfare that has never been prominent in the fictions of future conflict. Two million are estimated to have died under the regime of Pol Pot in Cambodia in the 1970s; 80,000–100,000 of these were actually killed by regime personnel, while the rest perished through starvation or disease. In a remarkable spree of low-tech killing, the Rwandan genocide took the lives of between 500,000 and one million people.

The relationship between military and civilian mortalities has also seen drastic change. In the early twentieth century, according to one rough estimate, the ratio of military to civilian deaths was around 8:1; in the wars of the 1990s, it was 1:8. One important reason for this is the greater resistance of today’s soldiers to disease: whereas 18,000 British and French troops perished of cholera during the Crimean War, in 2002, the total number of British soldiers hospitalized in Afghanistan on account of infectious disease was twenty-nine, of whom not one died. On the other hand, civilians caught up in modern military conflicts, especially in situations where medical services and humanitarian supplies are disrupted, remain highly exposed to disease, thirst, and malnutrition.

A further reason for the disproportionate ballooning of civilian deaths is the tendency of military interventions to morph into chronic insurgencies and civil wars. Counting the dead is extremely difficult in a dysfunctional or destroyed state riven by civil strife, but the broad trends are clear enough. Whereas the total number of Iraqi combat deaths from the air and ground campaigns in the 1991 Gulf War appears to have been between 8,000 and 26,000, the total number of “consequential” Iraqi civilian deaths was around 100,000. Several tens of thousands of Iraqi military personnel were killed in the second Gulf War; the total civilian death toll may have been as high as 460,000 (the Lancet’s estimate of 655,000 is widely regarded as too high). The deaths incurred by the coalition forces in these two conflicts were 292 and 4,809 respectively. The problem is that even the most determined and skillful applications of military force, rather than definitively resolving disputes, inaugurate processes of escalation or disintegration that exact a much higher human toll than the military intervention itself.


British Library/Bridgeman Images: Martian Tripods; illustration by Jacobus Speenhoff for a Dutch edition of H.G. Wells’s The War of the Worlds, 1899

Today, the phenomenon of the “battle” in which highly organized state actors are engaged is making way for a decentered form of ambient violence in which states engage “asymmetrically” with nonstate militias or civilians; cyberattacks disrupt elections, infrastructures, or economies; and missile-bearing drones cruise over insurgent suburbs. The resulting deterritorialization of violence in regions marked by decomposing states makes the kind of “decision” Clausewitz associated with battle difficult to achieve or even to imagine. “With the change in the type and tactics of a new and different enemy,” Robert H. Latiff writes in Future War, “we have evolved in the direction of total surveillance, unmanned warfare, stand-off weapons, surgical strikes, cyber operations and clandestine operations by elite forces whose battlefield is global.”

In pithy, flip-chart paragraphs, Latiff, a former US Air Force major general, sketches a vision of a future that resembles the fictional scenarios of William Gibson’s Neuromancer.

In the wars of the future, Latiff suggests, the “metabolically dominant soldier” who enjoys the benefits of immunity to pain, reinforced muscle strength, accelerated healing, and “cognitive enhancement” will enter the battlespace neurally linked not just to his human comrades but also to swarms of semiautonomous bots. “Flimmers,” missiles that can both fly and swim, will menace enemy craft on land and at sea, while undersea drones will seek out submarines and communication cables. Truck-mounted “Active Denial Systems” will deploy “pain rays” that heat the fluid under human skin to boiling point. Enemy missiles and aircraft will buckle and explode in the intense heat of chemical lasers. High-power radio-frequency pulses will fry electrical equipment across wide areas. Hypersonic “boost-glide vehicles” will ride atop rockets before being released to attack their targets at such enormous speeds that shooting them down with conventional missiles will be “next to impossible.” “Black biology” will add to these terrors a phalanx of super-pathogens. Of the more than $600 billion the US spends annually on defense, about $200 billion is allocated to research, development, testing, and procurement of new weapons systems.

Latiff acknowledges some of the ethical issues here, though he has little of substance to say about how they might be addressed. How will the psychology of “human-robot co-operation” work out in practice? Will “metabolically dominant” warriors returning from war be able to settle back comfortably into civilian society? What if robots commit war crimes or children get trapped in the path of “pain rays”? What if radio-magnetic pulse weapons shut down hospitals, or engineered pathogens cause epidemics? Will the growing use of drones or AI-driven vehicles diminish the capacity of armed forces personnel to perceive the enemy as fully human? “An arms race using all of the advanced technologies I’ve described,” writes Latiff toward the end of his book, “will not be like anything we’ve seen, and the ethical implications are frightening.”

Frightening indeed. A dark mood overcame me as I read these two books. It’s hard not to be impressed by the inventiveness of the weapons experts in their underground labs, but hard, too, not to despair at the way in which such ingenuity has been uncoupled from larger ethical imperatives. And one can’t help but be struck by the cool, acquiescent prose in which the war studies experts portion out their arguments, as if war is and will always be a human necessity, a feature of our existence as natural as birth or the movement of clouds. I found myself recalling a remark made by the French sociologist Bruno Latour when he visited Cambridge in the spring of 2016. “It is surely a matter of consequence,” he said, surprising the emphatically secular colleagues in the room, “to know whether we as humans are in a condition of redemption or perdition.”

The principled advocacy of peace also has its history, though it receives short shrift from Freedman. The champions of peace will always be vulnerable to the argument that since the enemy, too, is whetting his knife, talk of peace is unrealistic, even dangerous or treacherous. The quest for peace, like the struggle to arrest climate change, requires that we think of ourselves not just as states, tribes, or nations, but as the human inhabitants of a shared space. It demands feats of imagination as concerted and impressive as the sci-fi creativeness and wizardry we invest in future wars. It means connecting the intellectual work done in centers of war studies with research conducted in peace institutes, and applying to the task of avoiding war the long-term pragmatic reasoning we associate with “strategy.”

“I don’t think that we need any new values,” Mikhail Gorbachev told an interviewer in 1997. “The most important thing is to try to revive the universally known values from which we have retreated.” And it must surely be true, as Pope Francis remarked in April 2016, that the abolition of war remains “the ultimate and most deeply worthy goal of human beings.” There have been prominent politicians around the world who understood this. Where are they now?

* See Kenneth N. Waltz, “The Spread of Nuclear Weapons: More May Be Better,” The Adelphi Papers, Vol. 21, No. 171 (1981).


An Artist’s Menagerie


Childhood in France:

It was the summer of 1947; the war had ended only two years earlier and food was still fairly scarce. My adoptive parents brought me to a farm on an isolated mountaintop of the Morvan region in the center of France. I was nine, pale, and as skinny as a rail. Madame Durand, the farmer, was a wonderful woman. She would take me along when she milked her favorite cow, Blanchette, and she let me drink the milk fresh from the pail, warm, sweet, and foamy. Blanchette’s milk was more delicious than anything I had ever tasted.

When I was a little girl, I used to spend my summers in a village on the French Riviera, staying with my aunt Mada and an old Russian lady who raised goats and rabbits. She only spoke Russian and so did her goats. I soon learned the words you need in order to tell a Russian-speaking goat to go left or right—though there was something absolutely contrary about these goats, and likable as they might be, if you wanted them to go left, you had to say, “Go right!” and vice versa. I was told my Russian accent was terrible, but the goats seemed to understand anyway, and always went the right way.


Except for the fact that they are always clucking and flapping about, hens are not devoid of a certain shapely elegance. Some hens, in fact, are downright beautiful. If I lived in the country, I might like to keep a few hens and give them French names. Then, every morning, I would go looking for fresh eggs to eat “à la coque,” boiled and with a little salt, the way I did when I was young.


Summers in Vermont:

One day, a big white dog came lolloping down the meadow where I was painting in Vermont, and, with a contented sigh, settled himself right between me and the easel. I couldn’t continue without stumbling over him or splashing him with paint. And if I moved my easel, he moved with it. There was nothing to do but surrender and pack up for the day. The dog observed my defeat with a funny look—disdainful or disappointed, I couldn’t tell—and then lolloped off again without a backward glance. I never saw him again.


For years, we left New York to spend the summer in northern Vermont, where we had a cabin. One summer, a cat named Booties, who belonged to some folks in the village, got into the habit of visiting us daily. He appeared at the top of the road, then walked in leisurely fashion through our door. We were always delighted, even somewhat honored, to receive his visits. I have never known a more personable cat. He was very smart and dignified; our whole family was crazy about him. The next summer, when we came back, Booties was gone. He’d been killed by a car that winter, we were told. I was always forgetting he was gone, and kept looking toward the top of the road, expecting to see him.


Occasional meetings:

This seal belongs to the Brooklyn Zoo. She barks like a dog and even looks a bit like a dog. She also looks like an old man—Winston Churchill, specifically—and depending on her expression, like another, more contemporary politician who will remain nameless. She is a fantastic swimmer. She catches small fish and the children love to watch. Altogether, she seems content enough with her life, if a bit pensive.


In his Travels with a Donkey in the Cévennes (1879), Robert Louis Stevenson described a donkey better than anyone ever had. Modestine, as she was named, “was patient, elegant in form, the color of an ideal mouse, and inimitably small. Her faults were those of her race and sex; her virtues were her own.” I have always liked donkeys. I like how they look and I love to paint them. (I find donkeys, goats, and cats best to paint in general.) If I were an animal, I probably would not want to be a donkey as most of them have pretty rotten lives, but I feel that of all the animal kingdom, donkeys are my landsmen, my kin, and chances are I would be one.


Fabulous creatures:

There is something self-assured, brave, even gallant, about this bird. It doesn’t care what you think. All you need to know is that blue-footed boobies live on the coasts of Central and South America, mostly the Galapagos Islands. I have never met a booby in the flesh, but I would recognize one at once if we ever crossed paths.

The wolf in “Little Red Riding Hood” is not very smart. Why can’t he just eat Red Riding Hood right there in the woods? Why all the rigmarole about running to Grandmother’s house, and worse yet, donning her cap and nightgown? It’s completely unrealistic! Wolves are dignified and proud—but dangerous, too: you wouldn’t want to meet a pack of them as you crossed the frozen steppes of Russia alone in a troika. But from afar, there is much to admire about this untamed and beautiful beast.


Between France and America:

I found this portrait of a young badger the other day, and I thought he looked so appealing I decided to consult my French-English dictionary to see what this interesting animal was called in my native French: blaireau! Well, of course, I knew what a blaireau was, just as I knew what a badger was. But I hadn’t thought of either one in a long time, and I had never put the two together. Will the wonders of language and nature never cease?



Obama and the Legacy of Africa’s Renaissance Generation


Reuters/Obama For America/Handout: Barack Obama as a child with his father Barack Obama Sr., 1960s

It came to be a core belief held by the American public and media that Barack Obama was a self-creation who had stepped out of nowhere. In a racially divided society, for some the idea that he belonged to no tribe made it possible to vote for him. For his detractors, of whom Trump and his birther movement were the most visible, the belief provided an opportunity to claim that Obama was not a true American. Indeed, he cut a solitary figure: parents and American grandparents dead, no full siblings; what else there was of his family lived in Kenya, which might as well have been the moon to many Americans. Marriage to Michelle gave Obama what he appeared to lack, a family and a community, though his Kenyan ancestry meant he was a member of the African-American community by adoption rather than birthright.

Against the backdrop of the fantasy of normality to which American (and not just American) popular culture subscribes—that is to say, the insistence that all but a few grow up in the same town and live there all their lives—Obama’s story appeared unusual. The truth is that his grandparents made the move to Hawaii (after several moves around the country), doing what millions of Americans before them have done and continue to do: searching for better opportunities. One result is that families become stretched over distance and time until the links between uncles, aunts, cousins, and generations are broken and reformed with new generations in new places.

Even so, the stand-out fact of Obama’s biography remained and remains that he had been born of a Kenyan father and a white mother. “No life could have been more the product of randomness than that of Barack Obama,” wrote David Maraniss in his 2012 biography of the former president. This, though, is the case only when his life is viewed from an American perspective. From an African perspective, the tradition of sending young men to study overseas, as was the case with Barack Obama Sr., is a familiar and longstanding one. In 1852, William Wells Brown, the American playwright, fugitive slave, and abolitionist, noted that he might meet half a dozen black students in an hour’s walk through central London. Some sixty years before that, in 1791, the Temne King Naimbana (of what became Sierra Leone in West Africa) sent his son John Frederick to England, for reasons of political expediency (he sent another to France, and a third to North Africa to acquire an Islamic education). Tragically, John Frederick never made it home, but died on the return passage.

In the second half of the twentieth century, geopolitical events—the end of empires, the rise of nationalism in African countries, the cold war, communism, and the second “red scare”—would see an exponential rise in the numbers of Africans sent to study overseas. So the meeting of Obama’s parents came about more as the unintended consequence of political policy than by random chance. For me, Obama’s story is remarkably familiar. My parents met under very similar circumstances. My father was born in 1935 in Sierra Leone; Barack Obama Sr. was born in Kenya in 1936. My mother was white and British; Obama’s mother was a white American. Both women met and married the men who would become our respective fathers when those men were selected to study at university abroad—a story Obama relates only briefly in his memoir Dreams from My Father:

My father grew up herding his father’s goats and attending the local school, set up by the British colonial administration, where he had shown great promise. He eventually won a scholarship to study in Nairobi; and then on the eve of Kenyan independence, he had been selected by Kenyan leaders and American sponsors to attend a university in the United States, joining the first wave of Africans to be sent forth to master Western technology and bring it back to forge a new, modern Africa.

Obama was wrong about one thing: his father was not in the first wave of students sent overseas to master Western technology, though he was in the first wave of Kenyans who were sent to America. Up until then, most African students had been destined for Britain and, starting after World War II, for the Soviet Bloc and China. In fact, the adventures of this generation of Africans would one day inspire a genre of literature, collectively known as the “been to” novels, exemplified by Ayi Kwei Armah’s Fragments, Chinua Achebe’s No Longer at Ease, and Ama Ata Aidoo’s The Dilemma of a Ghost, fictions that told of the challenges both of leaving the motherland for the West and of return.

*

My father’s insistence that only a British boarding school was able to provide an education good enough for his children had me in tears at Freetown’s Lungi Airport three times a year as we waited to board the plane to London. My father was unyielding, reminding us constantly of the value of the enterprise we were undertaking and about which I didn’t care in the slightest. Paying for our education came before buying a house, before foreign travel, before everything. My father’s own story was both extraordinary and yet, in its own way, entirely typical of the changing times in which he was born. The son of a wealthy farmer and a regent chief from the north of Sierra Leone, Mohamed Forna had won a scholarship at an early age to Bo School, “the Eton of the Protectorate,” as it was known, many miles from home in the south of the country.


Aminatta Forna: Mohamed Forna, 1957

At the time, Sierra Leone was a British colony, though one that was never settled by whites, who, unable to tolerate the climate, died in such droves from malaria and tropical illnesses that the country was dubbed “the white man’s grave.” British fragility made a crucial difference to the style of governance Britain chose to adopt in West Africa. Instead of a full-fledged colonial government such as existed in Kenya, where the climate of the Highlands was suited to both coffee and Europeans, in Sierra Leone the guardians of empire relied instead on a system of “native administration.” Bo School was founded by the British for the sons of the local aristocracy, who, according to plan, would play a leading role in governing Sierra Leone on behalf of the British.

Generally, the British were cautious about allowing their colonial subjects much in the way of book-learning. The colonial project had begun with a great deal of hubris, talk of a civilizing mission and the belief that Britain could create the world in its own image. Education was a part of that mission. But by the time Lord Lugard, the colonial administrator and architect of native administration, became the governor of Nigeria in 1912, he was sounding warnings against “the Indian disease,” namely the creation, through education, of an intellectual class who would embrace nationalism. Burned by the threat of insurrection elsewhere in the Empire, though still intent on pursuit of an administration staffed by local talent, the British allowed a few Africans just enough education to create a core of black bureaucrats, but no more.

Sierra Leone’s beginnings were a little different from those of Britain’s other African holdings. In the late eighteenth century, British philanthropists had established settlements there of people freed from slavery, many of whom had fled from America to Britain following Lord Mansfield’s 1772 ruling that protected escaped slaves. As part of this social engineering experiment, schools and even a university were established in the capital, Freetown. Fourah Bay College, established in 1827, was the first institute of higher education built in West Africa since the demise of the Islamic universities in Timbuktu. Elsewhere in Britain’s African dominions, and in the early days of empire, most educational establishments were built by evangelically motivated Christian missionaries, and they were tolerated but not encouraged by the colonial administration.

In Kenya in the 1920s, precisely what Lugard feared began to happen: missionary-educated Kenyan men established their own churches and challenged white rule. The locals had a name for Western-educated Kenyans: Asomi. Harry Thuku, the father of Kenyan nationalism (whose story is narrated in Ngũgĩ wa Thiong’o’s tale of the Mau Mau rebellion, A Grain of Wheat) was one such. In their churches, Asomi pastors accused the missionaries of distorting the Bible’s message to their own ends and preached an Africanized version of Christianity, and the Asomi founded associations to represent African interests and built their own schools in which pupils were imbued with a sense of patriotism and pride.

Still, whatever resistance Britain’s Colonial Office offered to the idea of the educated native, by the later days of empire, faced with ever-growing demands for colonial reform, the British began to build a limited number of government institutions, with the intention, in the words of the Conservative minister Oliver Stanley in 1943, of guiding “Colonial people along the road to self-government within the framework of the British Empire.” Any future form of self-governance was intended to create the basis for neocolonialism and a bulwark against the threat of communism.

Shifts in British attitudes, however, were soon outstripped by African ambitions. One million African men had fought on the Allied side during World War II, and those experiences had broadened their worldview. Many had learned to read and write—among them, Obama’s grandfather, Onyango, who, according to Obama family lore, traveled to Burma, Ceylon, the Middle East, and Europe as a British officer’s cook. Whether Onyango knew how to read and write English before he was recruited is unknown; it is possible, though unlikely. By the time he came back, however, he was able to teach his young son his letters before sending him to school. In Dreams from My Father, Barack Obama recounts the memories of his great-aunt Dorsila, Onyango’s surviving sister, about his grandfather: “For to [Onyango] knowledge was the source of all the white man’s power, and he wanted to make sure his son was as educated as any white man.”

Across the continent, emerging nationalist movements were gaining ground. For them, literacy followed by the creation of an elite class of professionals were the necessary first steps toward full independence. The courses on offer at the government colleges were restricted in subject and scope (syllabuses had to be approved by the colonial authorities) and the colleges themselves could admit only limited numbers of students. Energized and impatient, a new generation refused to wait or to play by the Englishman’s rules. With too few opportunities on the continent, they set their sights overseas, on Britain itself.

Few had the means to cover the costs of travel and fees. There were a limited number of scholarships available through the colonial governments, mainly to study subjects the local universities were not equipped to teach, such as medicine. A lucky few found wealthy patrons; others still were sponsored by donations from their extended families, and sometimes from entire villages. The Ghanaian nationalist and politician Joe Appiah, father of the philosopher Kwame Anthony Appiah, ditched his job in Freetown without telling his employers and bought himself a one-way ticket on a ship bound for Liverpool, hoping to get by on his luck and wit.

*


Aminatta Forna: The author’s parents, Mohamed Forna and Maureen Margaret Christison, on their wedding day, 1961

My mother Maureen has a particular memory of my father. On April 27, 1961, the day Sierra Leone became a self-governing nation, he got roaring drunk at a sherry party held by African students at the premises of the British Council in Aberdeen. The couple had married at the registry office in Aberdeen one month before, in a ceremony attended by their friends among the West African students. On the way home, on the top deck of the bus, my father lit six cigarettes and puffed on them all at once. “But Mohamed, you don’t even smoke,” my mother had protested. And my father replied: “I’m smoking the smoke of freedom, man. I’m smoking the smoke of freedom.”

In the decades between the two world wars, Britain emerged as “the locus of resistance to empire,” where anti-colonial movements were shaped by the growth of Pan-Africanist ideals among artists, intellectuals, students, and activists from the colonies. The Kenyan writer and activist Ngũgĩ wa Thiong’o, commenting on his arrival in Leeds in 1964, remarked to me:

For the first time I was able to look back at Kenya and Africa, from outside Kenya. Many of the things that were happening in Africa at that time, independence and all that, were not clear to me when I was in Kenya but made sense when I was in Leeds meeting other students from Africa, Nigeria, Ghana, students from Australia, every part of the Commonwealth, students from Bulgaria, Greece, Iraq, Afghanistan—we all met there in Leeds, we had encounters with Marx, with Lenin, and all that began to clarify for me a change of perspective.

Among those elites who gathered there, driven by, and driving, the desire for self-rule, were Jomo Kenyatta, Kwame Nkrumah, Michael Manley, Marcus Garvey, C.L.R. James, Seretse Khama, Julius Nyerere, as well as a number of African Americans, including Paul and Eslanda Goode Robeson. In London, anti-colonial and Pan-Africanist ideas were shared and enlarged, spurred by a shared experience as colonial subjects in their homelands and as the victims of racism and the color bar in Britain. “They were brought together too by the fact that the British—those who helped and those who hindered—saw them all as Africans, first of all,” writes Anthony Appiah. And so those who may previously never have identified themselves as such began to do so and explore the commonalities of race, racism, and nationalism. And out of those conversations arose new political possibilities involving international organizations and the opportunity for cultural exchange.

Arrival in Britain brought with it many shocks for colonial students. Whereas before they were Sierra Leonean and Temne, Luo and Kenyan, Hausa and Nigerian, suddenly they were simply black, subject to all the attitudes and reactions conferred by their skin color. Signs declaring “No Irish, No Dogs, No Blacks” were still common on rental properties during my father’s time in Scotland. My mother told me of the insults my father endured in the street—directed at her as well, when they were together. Later, my father’s second wife—my stepmother, who also went to university in Aberdeen and vacationed in London, staying in the apartments of other African students—recalled the gangs of racist skinheads who arrived to break up their gatherings. “Somebody would run and call for the West Indians,” she told me, their Caribbean neighbors being more experienced in fending off such attacks. In a reversal of the immigrant dream story, Sam Selvon’s 1956 novel The Lonely Londoners tells the story of black people arriving in the 1950s in search of prosperity and a new life, only to discover cruelty and misery.

In order to confront the challenges of their new lives, as well as to keep abreast of political developments back home, the colonial students organized themselves into societies and associations. One such was the hugely influential West African Students’ Union, or WASU. If London was the heart of resistance, then WASU was its circulatory system. My father and his friends were all WASU members, as was every former student of that time from a West African country to whom I have ever spoken. WASU was the center of their social, cultural, and, especially, political life. It also “functioned as a training ground for leaders of the West African nationalist movement,” wrote the historian Peter Fryer; indeed, both Kwame Nkrumah and Joe Appiah were among the leading names who served on WASU’s executive committee.

Unnerved by the pace at which calls for independence were gathering, the Colonial Office kept a close eye on the students’ activities. In London, the department funded two student hostels, which aided the many students whom the color bar prevented from finding decent lodging (and also kept the students conveniently in one location). The civil servants also spied on the African students through MI5. A tug-of-war was taking place within the Colonial Office: on one side were the “softly-softlies,” who favored an approach designed to promote good relations with the future leaders; on the other were the hardliners, concerned that Communist ideas might take root among the rising generation. Such was the fear of Communist-inspired insurrection in West Africa that Marxist literature was banned in those countries and travel to Eastern European countries was restricted.

The colonial administrator Lord Milverton once described WASU as “a communist medium for the contact of communists with West Africans” through the Communist Party of Great Britain. Then-parliamentarian David Rees Williams even accused the Communist Party of using prostitutes to spread its message and called for restrictions on the numbers of students entering the country from the colonies. Though MI5 did not go so far as to keep individual files on all the students, they did do so for the most visible leaders like Nkrumah, whose phone they tapped.

Certainly, there were Marxist sympathizers among the WASU leadership and the African student body in general. Ngũgĩ wa Thiong’o talked to me about his road to Marxism, which began during his student years in Leeds, when he saw poor whites for the first time and witnessed, during student demonstrations there, white policemen turning on their own, a “vicious crushing of dissent.” Julius Nyerere turned to socialism during his time in Edinburgh, returning to Tanganyika in 1952 to become a union organizer and later the first president of a new, socialist republic.


wasuproject.org.uk: Members of the West African Students’ Union (WASU), London, 1920s–1930s

By the 1960s, with the colonies gaining independence one by one, and China and the Soviet bloc beginning to offer their own scholarships, the softly-softly approach had prevailed within Britain’s Colonial Office. The administration of the students’ affairs was handed over to the British Council, which began a diplomatic charm offensive. Before they even left home, students on government scholarships were offered induction seminars on what to wear and how to conduct themselves in the homes of British people, and shown films on how to navigate the challenges of daily life. In one of these films, entitled Lost in the Countryside, a pair of Africans abroad (dressed in tweeds, they emerge from behind a haystack) are instructed firmly: “Do not panic! Find a road. Locate a bus-stop. Join the queue [and there in the middle of nowhere is a line of people]. A bus will arrive. Board it and return to town.” Once the students were in the UK, the British Council arranged home-stays for those Africans who wanted an up-close experience of the British (some 9,500 said they did). My stepmother recalls being advised never to sit in the chair of the head of household, a faux pas of which she has retained a dread all her life.

And finally, there were social events at the Council’s premises in various British cities. At a Christmas dance in the winter of 1959, my father, a third-year medical student at Aberdeen University, approached a young woman, a volunteer named Maureen who was helping to pour drinks for the party, put out his hand and said: “I’m Mohamed.”

*

If the attitude of the British authorities toward the West Africans was one of wavering welcome, the attitude toward the East Africans, Kenyans in particular, was even more complicated. In 1945, there were about 1,000 colonial students in Britain, two thirds of whom came from West Africa and only sixty-five of whom came from East Africa. In Kenya, a simmering mood of rebellion had by the 1950s given rise to the Mau Mau, a movement that explicitly rejected white rule and gave voice to the resentment against colonial government taxes, low wages, and the miserable living conditions endured by many Kenyans. The Mau Mau, which found its support mainly among the Kikuyu people who had been displaced from their lands by white farmers, demanded political representation and the return of land. Facing armed insurrection, the British declared a state of emergency in 1952 and tried and imprisoned the nationalist leader Jomo Kenyatta, who had returned to his homeland from London in 1947 and who would later become the first president of Kenya.

Upon Kenyatta’s imprisonment, Kenyan nationalists turned to the United States for support. The activist Tom Mboya, a rising political star who in 1960 featured on Time magazine’s cover as the face of the new Africa, became the strongest voice calling for independence in Kenyatta’s absence. In 1959, Mboya began working with African-American organizations—in particular, the historically black private and state colleges, as well as civil rights champions such as Harry Belafonte, Sidney Poitier, Jackie Robinson, and Martin Luther King Jr.—and toured the United States talking about black civil rights and African nationalism as two sides of the same coin. His aim was to raise money for a scholarship program to bring Kenyan students to the US. Over two months, Mboya gave a hundred speeches and met with then Vice President Richard Nixon at the White House. By that point, independence for Kenya was a matter of when, not if—after all, Ghana had already attained independence—and it looked very much as though Britain was deliberately refusing Kenyans the help they needed to prepare for self-governance.

So here was Mboya offering the United States a foothold of influence in Africa, which Britain, even against the backdrop of a cold war scramble for the allegiance of African nations, was too churlish or too arrogant to secure. Although Nixon stopped short of agreeing to meet Mboya’s request for help, the Democratic candidate in the 1960 presidential election, John F. Kennedy, did, and his family’s foundation donated $100,000 to what became known as the “African student airlifts,” the first of which had taken place in 1957.

Mboya was a member of the Luo people, a friend of Onyango’s, and sometime mentor to his son, Barack Obama Sr. On his own initiative, Obama Sr. had managed to secure himself an offer from the University of Hawaii, and this won him a place on a later airlift in 1959. Here was a young man with an excellent brain, and here, too, was a new dawn on the horizon bringing with it a new country—Obama Sr. saw himself as part of it all. The writer Wole Soyinka, who himself studied at Leeds, England, in the 1950s, had a name for them, the young men and women who came of age at the same time as their countries; he called them the “Renaissance Generation.”


Evening Standard/Getty Images: Jomo Kenyatta, the first president of Kenya, with Ghanaian Prime Minister Kwame Nkrumah at the Commonwealth Prime Ministers’ Conference, Marlborough House, London, 1965

Just as the West African students bound for Britain had been coached in what to expect, so the Kenyans were briefed on arrival in the United States, including about the prevailing racial attitudes they should expect to encounter there. The world-renowned anthropologist and now director of the Makerere Institute of Social Research Mahmood Mamdani, who traveled to the US on a 1963 Kenyan airlift, recalls being told it would be “preferable for us to wear African clothing when going into the surrounding communities because then people would know we were African and we would be dealt with respectfully.” Under colonial rule, Kenyans certainly did not share the privileges of whites; even so, for many African students the daily indignities of racial segregation in America came as a shock. At least one was arrested for trying to buy a sandwich at a whites-only lunch counter, and some of those studying at universities in the South were prompted by their experience of Southern racism to ask to be transferred to Northern colleges. As had been the case for their counterparts in Britain, a close eye was kept on their activities. Returning from a trip to Montgomery, Alabama, Mamdani got a visit from FBI agents; he recalls that they asked if he liked Marx, to which Mamdani replied in perfect innocence that he had never met the man. Informed that Marx was dead, he replied: “Oh no! What happened?” And as he told me in our conversation many years later: “The abiding outcome of that visit was that I went to the library to look up Marx.”

Obama Sr.’s choice of the University of Hawaii was, in many ways, an unfortunate one. Hawaii was more cosmopolitan than other parts of the United States and he did at least escape some of the racist attitudes that confronted other African students, but he was far from all the debates, meetings, lobbying, and activism about independence that were taking place at the universities and historically black colleges on the mainland. When the opportunity arose, he chose to continue his studies at Harvard—and part of the reason was undoubtedly that he wanted to get closer to the action. In 1961, Kenyatta was released from jail; two years later, Kenya declared independence. When all that happened, Obama was still a long way from home—just as my father was when Sierra Leone won its independence.

In time, Ngũgĩ would return from Leeds, and Mamdani from the United States. Ngũgĩ was by then a published author, having abandoned his studies to write Weep Not, Child. Mamdani went on to teach at Makerere University, which became the venue for the famous 1962 African Writers’ Conference, and he helped to transform it from a colonial technical college into a vibrant university. One of the few women on the airlift, Wangari Maathai, flew back home from Pittsburgh in 1966, later to found the Green Belt Movement, an initiative focusing on environmental conservation that today is credited with planting fifty-one million trees in Kenya and for which Maathai would be awarded the Nobel Peace Prize. Still, for Kenya, as for every one of the new African nations, independence proved a steep and rocky road. Five hundred students who had earned their degrees overseas returned home, a significant proportion of them the American-educated asomi. They would become the educators, administrators, accountants, lawyers, doctors, judges, and businessmen in the new Kenya. Despite the best efforts of Tom Mboya and his supporters, Kenya had only a fraction of the college-educated young professionals it needed.

*


Aminatta Forna: The author with her father, 1966

Eight years after he had left Sierra Leone, my father returned. His elder brother had died and his family wrote that Mohamed was needed at home. By then, he was a qualified medical doctor, with a wife and three children. The year before, Obama Sr. had also returned home after the US government declined to renew his visa. Medical students and those who went on to higher degrees, especially, had found themselves away for long periods, as much as a decade. Unsurprisingly, in that time, many of the men had formed romantic attachments with local women. If those relationships were frowned upon in Britain, they were illegal in much of America. Loving v. Virginia, the case before the Supreme Court that finally overturned the ban on interracial unions, was not decided until 1967. When the Immigration and Naturalization Service declined Obama Sr.’s request to remain in the country, his relations with women were reported to be part of the problem. Already, he had fathered one child with Ann Dunham, a son also named Barack, but that marriage was over, and he had formed a new relationship with another white woman, Ruth Baker.

In Britain, the authorities, though they did not encourage such unions, did not intervene except, notably, in the case of Seretse Khama, heir to the Bangwato chieftaincy in Bechuanaland (now Botswana), and Ruth Williams. This was at the behest of white-ruled South Africa, whose government would not tolerate an interracial marriage within its borders. Jomo Kenyatta had a child, Peter, with his British wife. I used to pass Peter in the corridors of the BBC, where for a time we both worked; he was in management, while I was a junior reporter awed by the prestige of his last name. The marriage of Joe Appiah to Peggy Cripps, the daughter of the Labour politician Sir Stafford Cripps, was one of the most high-profile unions of the day that also happened to be a mixed marriage.

Of Ann Dunham, first wife to Obama Sr. and mother of the future president, a childhood friend would later say: “She just became really, really interested in the world. Not afraid of newness or difference. She was afraid of smallness.” The same could be said of my mother, Maureen Christison. Aberdeen was simply too small for her. The African students represented a world beyond the gray waters of the North Sea. In the Scottish writer Jackie Kay’s Red Dust Road, her 2010 memoir of her search for her Nigerian father who studied in Scotland in the 1950s, her father overturns conventional wisdom in remarking how popular the male African students were with the local girls. The men frequently came from aristocratic families—both Appiah and Khama were royal, and my father was the son of a regent chief and landowner. “You must remember,” a contemporary of my parents observed during the time when I was researching my own memoir of my father, “they were the chosen ones.”

In 2017, in a New York Times op-ed assessing President Obama’s foreign policy legacy, Adam Shatz noted that Obama was “a well-traveled cosmopolitan… seemingly at home wherever he planted his feet. His vision of international diplomacy stressed the virtues of candid dialogue, mutual respect and bridge building.” Obama’s cosmopolitanism was rooted in several places. The fact of his Kenyan father (though not his immediate influence, since Obama Sr. was gone from the family before Obama was old enough to remember him), and later his painstaking search to assemble the pieces of his birthright, would do much to extend his vision. But before all of that, it was his mother, Ann, who instilled in him the foundations of his internationalism. She rehearsed for her son the version of his father’s story that Obama Sr. told of himself: that of the idealist devoted to building a new Kenya—even if in reality he was an unreliable husband and father, whose career fell well short of his own expectations. It was Ann who remained true to that vision of a new world, who easily made friends with people of different nationalities, who subsequently married an Indonesian and took her son to Indonesia, where she spent many years running development projects and where he spent a formative period of his childhood. My mother Maureen never returned to Scotland after the break-up of her marriage to my father. She married again, to a New Zealander who worked for the United Nations, and spent her life moving around the world, in time building her own international career within the UN.

Both women entered an international professional class, a group that the British writer David Goodhart disparagingly describes as the “anywheres”: people whose sense of self is not rooted in a single place or readymade local identity. If Obama’s search in Dreams from My Father was a quest for his African identity, it was also, and conversely, an attempt to discover whether he could ever be a “somewhere,” whether that somewhere was a place (in time, he would choose Chicago) or a people, part of an African-American community.

His next book, The Audacity of Hope, became, by contrast, a plea for complexity. Of his extended family of Indonesians, white Americans, Africans, and Chinese—in which I find a mirror for mine: African, European, Iranian, New World, and Chinese—Obama writes: “I’ve never had the option of restricting my loyalties on the basis of race or measuring my worth on the basis of tribe.” Obama knew and understood that he had more than one identity, that all of us do. Anthony Appiah credits his own avowed cosmopolitanism to his father Joe’s relaxed way with people from different worlds. I believe my father thought that his children would grow up to be both Sierra Leonean and British, a new kind of citizen, a new African, comfortable with our place in the world.


Kwame Anthony Appiah: Peggy and Joe Appiah with their children, Ghana, circa 1972

For all the hope, there were bitter disappointments as well. Shortly after Obama Sr. returned to Kenya, his mentor Tom Mboya was assassinated. Obama Sr. would lose himself to drink and die in a car crash. My father arrived back in Sierra Leone to a government openly talking of introducing a one-party system, a threat to his democratic ideals. As politically opportunistic leaders across the continent quickly realized how easily the newborn institutions of democracy could be subverted to personal gain, the returning graduates would find themselves forced to confront the very governments they had come home to serve. In Ghana, Joe Appiah was jailed by his former good friend Nkrumah; Ngũgĩ wa Thiong’o would be imprisoned for sedition against the Kenyan government and then exiled; in Nigeria, Soyinka encountered a similar fate. My father was jailed and killed. Many would pay a high price for the privilege of having traveled beyond Africa, for coming of age at the same time as their countries, for working and dreaming of a Renaissance yet to come.

How many times in my own travels in this world have I come across one of them, the chosen, of my father’s generation? There’s a quality of character they wear, whose origins I have come to understand. They carry, alongside a worldly ease, a sense of duty, of obligation and responsibility, that imbues all they say and do. Unlike the generations that followed, they never saw their own future beyond Africa. I try to imagine an Africa if they had never been, and I cannot. There are those the world over who decry the failings and weaknesses of the post-independence African states at the same time as many in the West—after Afghanistan, after Iraq, and facing assaults on their own democratic institutions—have slowly come to the realization that nation-building is no simple task, that democracy takes more than a parliament building. The generation of Africans to whom the task fell of creating new countries knew, or came to know, that alongside the desires and dreams, and the promise of a new-found freedom, they had been set up to fail. Their real courage lay in the fact that they did not surrender, that they tried to do what they had promised themselves and their countries they would. They went forward anyway.


Trump & CNN: Case History of an Unhealthy Codependency


Sean Rayford/Getty Images: People shouting behind CNN reporter Jim Acosta before a Trump rally, West Columbia, South Carolina, June 25, 2018

At nearly every Trump rally prior to the midterm elections, the chant went up: “CNN sucks.” To journalists, the cry had an ominous ring, amplified as it was by Trump’s repeated references to fake news and his description of journalists as the enemy of the people. The delivery of a pipe bomb in late October to CNN’s headquarters in New York confirmed the sense among journalists that they were under siege.

But is there any truth to the claim of CNN’s failings? Even at a time of such anti-press animus, it’s important to assess the fairness of the network’s coverage. From the moment Trump announced his candidacy in 2015, CNN President Jeff Zucker has made him the centerpiece of the network’s journalism—as well as its business model. On the latter, the strategy has been a grand success; according to a recent article in Vanity Fair (“Inside the Trump Gold Rush at CNN”), the network in 2018 expects to turn a profit of $1.2 billion on $2.5 billion in revenues, making it CNN’s most profitable year ever.

But what about its journalism? Much rides on the answer, for the network has become Exhibit A in the case made by Trump supporters that the press is hopelessly biased against them. To assess CNN’s coverage, I regularly tuned in to it in the days leading up to the elections. It was not a pretty picture.

Thursday, November 1, was representative of its problems. The day’s big story was Trump’s dark warnings about the migrant caravan making its way through Mexico to the US border. “A new low for Trump,” afternoon anchor Brooke Baldwin said of a new Trump ad that blamed Democrats for allowing an undocumented immigrant who had murdered two police officers to remain in the United States. For commentary, Baldwin turned to Valerie Jarrett, the former adviser to President Obama. Why her, I thought. Wouldn’t she be predictably opposed to Trump? She indeed was, calling the ad “a sad page from an old playbook called fearmongering 101.” Baldwin wondered why Trump’s supporters embraced “his lies.” Jarrett said she could offer no insight on that but did note her belief that it was important for our leaders to be “role models,” because “young people are watching.” What banality.

That evening, on Anderson Cooper’s show, the caravan remained the main focus. Earlier in the day, Trump, in a speech at the White House, had announced new measures aimed at stemming illegal immigration. “As he so often does,” Cooper said, the president “uttered a string of untruths.” For elaboration, he interviewed Ralph Peters, a retired lieutenant colonel. Peters had been an analyst on Fox News for years, routinely denouncing Obama and everyone associated with him. Disgusted by Trump, he left Fox in March 2018 and had since appeared frequently on CNN, directing at the president the same vitriol he had formerly heaped on Obama. The day’s events had been really difficult for him, Peters said, “because I want to take the president of the United States seriously, but he manages to be at once an embarrassing fool and an insidious menace.” He was an “un-American American president” who had made “absolutely repulsive, repugnant attacks on America.” When Cooper asked about Trump’s plan to send troops to the border, Peters dismissed him as a draft-dodger. I was puzzled why CNN was giving this marginal figure so much air time.

Next up was Jorge Ramos. The Univision anchor is a well-known critic of the president—in August 2015, he was ejected from a press conference after engaging in a testy exchange with Trump over his immigration policy, which he called “full of empty promises.” The previous week, Ramos had spent two days reporting on the caravan for CNN. Trump, Cooper said, continued to paint the caravan as an invasion when in fact it was a thousand miles from the border; nonetheless, “the president keeps peddling this lie.” Did Ramos agree? Yes, he said, it was a lie. In his time with the caravan he had seen not terrorists or criminals but young kids fleeing poverty and gangs. For several minutes the interview went on in this vein, with Cooper and Ramos jointly dissecting the president’s claims.

Given how bloated those claims were, it was certainly useful to have them punctured, but the amount of time CNN devoted to them seemed to be serving Trump’s aims by giving him a megaphone, and the zeal with which the network went after him seemed unprofessional. Yet, at 9:00 PM, when Cooper handed the baton to Chris Cuomo, the offensive continued. The president, Cuomo said, “is all in on fear and loathing.” Nothing he had said about the immigrant invasion “has any basis in reality” but was simply “Trumped-up talk.” But would enough people buy into that talk to drive turnout? For an answer, Cuomo spoke with Ohio Governor John Kasich. Kasich is another confirmed antagonist of Trump, having run against him in 2016 for the GOP presidential nomination.

“Do you agree with me on the basic proposition that there is no imminent invasion?” Cuomo asked. Yes, Kasich said, he did agree; it was all about “getting people stirred up.” Were the Republicans becoming “the party of fear and loathing?” Cuomo asked. Kasich hoped not, he said, for he doubted such rhetoric could win elections. As the segment ended, Cuomo thanked the governor “for speaking truth to power on this show as always.” I couldn’t decide which was worse—the cliché or its tendentiousness.

Cuomo was far more combative with his next two guests—former Republican Senator Rick Santorum and Amy Kremer, co-founder of Women for Trump. Both praised the president for keeping his campaign promise to crack down on illegal immigration. Cuomo pressed them on the president’s decision to send troops to the border; they pushed back just as hard. Cuomo deserved credit for giving time to the other side, but the exchange was unedifying; both Santorum and Kremer were professionals with well-rehearsed positions, and their conversation with Cuomo had the feel of a ritualized dance.     

And so it went throughout my time watching CNN. Trump was repeatedly criticized for lying, spreading fear and hate, making racist claims, and being a bigot. Anchors and commentators could not understand why he was making immigration the centerpiece of the campaign when he had a good story to tell about the economy. The interviews with the occasional Trump advocate were far outnumbered by those with people like David Glosser, the uncle of Stephen Miller, the Trump aide who has helped define his immigration policy. Glosser bitterly denounced his nephew, saying that had such a policy been in place a century earlier, his own forebears would not have been allowed into America when fleeing anti-Jewish pogroms in Europe. Given all the talk about Trump’s base and whether his race-baiting demagoguery resonated with it, I wanted to hear more from the base itself, but few of its members appeared.

More generally, the network’s coverage seemed uninformative, repetitive, and nakedly partisan. Apart from some perfunctory I’m-here-in-red-state-America-to-speak-with-the-locals dispatches, it featured few in-depth reports on developments on the ground. Instead, it offered talking heads reciting familiar talking points. With immigration and related questions of national identity having become so salient both in America and throughout the world, I was surprised at how little genuine interest CNN showed in them.

What’s more, while routinely decrying the polarization afoot in the land, CNN hosts and pundits seemed to feed it with their bickering panels and partisan slugfests. On this, MSNBC and Fox News are equally guilty. Alexandra Pelosi, a documentarian whose latest production, Outside the Bubble (airing on HBO), chronicles her travels across the country to talk with ordinary Americans, recently told The New York Times that she blames cable news for the nation’s partisan divide: “There’s too much profit being made right now on the divide. How many people in those cable news studios ever really go spend the night in America, not just in the Four Seasons or wherever Trump is at the moment, but I mean really go to somebody’s house, have dinner and talk to them?”

Pelosi (the daughter of Nancy, the House minority leader) no doubt goes too far in holding the cable networks solely responsible for the nation’s divisions, but her indictment of them for not getting out of their studios more often and engaging with citizens at the grassroots seems not only accurate but applicable to the press as a whole.

To be fair, the nation’s top news organizations—the Times, The Washington Post, The Wall Street Journal, NPR, Politico—do regularly get out into the field. For months before the midterms, their reporters toured the country, filing fact-filled reports on the battle for control of the House and the Senate. Yet even these followed a well-worn template, focusing overwhelmingly on the candidates and their consultants, polls and fundraising, who’s ahead and who behind, with perhaps two or three fleeting quotes from actual voters. Rare were the dispatches that sought to get beneath the surface and report in depth on communities and their residents—the challenges they face, the struggles they undergo, their aspirations, and their setbacks.

For the most part, the national press approaches the electorate much as the Democratic Party does, as an amalgam of distinct demographic groups, some rising, others declining. Michelle Goldberg captured this mindset last fall in a column for the Times: “America is now two countries, eyeing each other across a chasm of distrust and contempt. One is urban, diverse and outward-looking. This is the America that’s growing. The other is white, provincial, and culturally revanchist. This is the America that’s in charge.”

The type of casual condescension toward a large swath of America suggested by this statement is common in big-city newsrooms. The prevailing line is that white people, having long been accustomed to being in the majority, are panicked at the prospect of becoming a minority and so are drawn to Donald Trump and his campaign to Make America Great Again, which is code for keeping America white. Paul Krugman and many other liberal columnists have confidently concluded (on the basis of spotty data from 2016 exit polls and subsequent surveys) that Trump’s appeal to white workers is due exclusively to racism. Racism is surely a factor, but no doubt the travails of many communities in rural and rustbelt America are, too.

In a recent article in Politico magazine, Michael Kruse quoted the Republican consultant and pollster Frank Luntz on the twofold phenomenon of the Trump voter: “Half the people felt forgotten. And half of the people felt fucked.” This “F-squared” portion of the population, Luntz said, was the key to Trump’s victory. They help explain his sway over members of Congress and will help determine his fortunes over the next two years. Trump, Luntz observed, “is seeking to elevate those who feel oppressed by and taken advantage of by the elites, and he seems to raise them up and say, ‘Hey, guys, you’re now in charge.’”

Journalists—heavily concentrated in cities and mixing mostly with other affluent, highly educated urbanites—face a natural barrier in getting to know the F-squared part of America. Since Trump’s victory in 2016, they have spent more time in it, but it remains mostly a foreign land. With the divisions in the country seeming to harden in the wake of the midterms, journalists need to do a better job of overcoming them. This is especially true at CNN and the other cable networks. As Alexandra Pelosi suggests, I’d like to see Anderson Cooper, Chris Cuomo, and Wolf Blitzer get out of the studio more and really spend a night in America, visiting people in their homes and having dinner with them.

Sadly, Jim Acosta’s confrontation with President Trump at the post-election press conference seemed certain to heighten the divisions. For CNN, the encounter added to its star reporter’s visibility and the network’s image as a fighter for press freedom. To Trump and his supporters, Acosta’s grandstanding provided further evidence of the news media’s implacable hostility to them. Each side, in short, seemed to get from the encounter exactly what it wanted.


Writing as Fast as Reality


Antonio Olmos/eyevine/Redux: Ali Smith in her garden, Cambridge, England, 2005

I read the first two novels of Ali Smith’s seasonal quartet in Cairo, where long, warm, sunny days make up most of the year. In a city whose pace—a down-tempo lull—gives a sense that time is expanded, Autumn, with its meandering, time-traveling, light-footed story of a friendship between a young girl and an old man, felt exhilarating, deeply touching, even breathtaking. Winter, which is not strictly a sequel except in the seasonal sense and which revolves around a Christmas gathering at a family home in Cornwall, was fraught, overwhelming, dire. Too many people, too many egos, too many ideas, too much tension. “Ghastly” is how I have heard the season, which I have never experienced in its entirety, described—but the word somewhat applies to it, and to the temperament of the novel as well.

Winter begins tellingly, like Autumn, with a contemporary take on a Dickensian tale:

God was dead: to begin with.

And romance was dead. Chivalry was dead. Poetry, the novel, painting, they were all dead, and art was dead. Theatre and cinema were both dead. Literature was dead. The book was dead. Modernism, postmodernism, realism and surrealism were all dead. Jazz was dead, pop music, disco, rap, classical music, dead. Culture was dead.

As were history, politics, democracy, political correctness, the media, the Internet, Twitter, religion, marriage, sex lives, Christmas, and both truth and fiction. But “life wasn’t yet dead. Revolution wasn’t dead. Racial equality wasn’t dead. Hatred wasn’t dead.”

Smith, who was born in Scotland in 1962, is as attuned to the current moment as she is to the cycles of history that led us here. Growing up in council housing, Smith held odd jobs, including waitressing and cleaning lettuce, before pursuing a Ph.D. in American and Irish modernism at Cambridge; she ultimately abandoned academia to write plays.

In Winter, Arthur (Art), who makes a living tracking down copyright-infringing images in music videos and also maintains a blog, Art in Nature, has just broken up with Charlotte, his conspiracy-theorist anticapitalist girlfriend, who has destroyed his laptop by drilling a hole through it and taken over his Twitter account to impersonate and ridicule him. Unable to face Christmas alone with his emotionally withdrawn, hypersensitive, and self-starved mother—Sophia, aka Ms. Cleves—and having promised her that he would bring along his girlfriend, he hires Velux (Lux), a gay Croatian whom he meets at an Internet café, to be a stand-in Charlotte (for £1000). At some point over that Christmas weekend, a long-estranged, politically and technologically aware hippie aunt, Iris, visits too. In their midst, accompanying Sophia, is the floating, disembodied head of a child. Bashful, friendly, nonverbal, it becomes something of a constant, if gradually dying, presence.

Family banter, conflict, political debate, reckonings, and reconciliation ensue. As do dreams, nightmares, hallucinations, and apparitions. Perspectives and narrators constantly change, shift, and collapse; parallel and tangential events are recounted at the same time. (“Let’s see another Christmas. This one is the one that happened in 1991.”) Conversation is structured and guided intuitively:

I cannot be near her fucking chaos a minute longer. (His mother talking to the wall.)

Lucky I’m an optimist regardless. (His aunt speaking to the ceiling.)

It is no wonder my father hated her. (His mother.)

Our father didn’t hate me, he hated what had happened to him. (His aunt.)

And mother hated her, they both did, for what she did to the family. (His mother.)

Our mother hated a regime that put money into weapons of any sort after the war she’d lived through, in fact she hated it so much that she withheld in her tax payments the percentage that’d go to any manufacture of weapons. (His aunt.)

My mother never did any such thing. (His mother.)

The events and the intricacies of the various interactions are both quotidian—“The walk from the gate to the house is unexpectedly far and the path is muddy after the storm. He puts his phone on to light the way. It buzzes with Twitter alerts as soon as he put it on. Oh God. So much for low reception”—and surreal. The plausible and the implausible are interchangeable, coming together in exuberant, tragicomic, and shrewd scenes:

Good morning, Sophia Cleves said. Happy day-before-Christmas.

She was speaking to the disembodied head…. The head was on the windowsill sniffing in what was left of the supermarket thyme. It closed its eyes in what looked like pleasure. It rubbed its forehead against the tiny leaves. The scent of thyme spread through the kitchen and the plant toppled into the sink.

At the dinner table and in allegory, tales are shared in multiple versions, forming a kaleidoscopic worldview and view of the family. An unreported chemical leak at a factory in Italy has killed trees, birds, cats, and rabbits, sent children to the hospital breaking out in boils, and poisoned the air, forcing everyone to leave their houses, which are then bulldozed. Sophia laughs at the vision of a cat with its tail falling off. Nobody else finds this funny. Smith’s characters navigate one another’s twisted humor and multidimensional takes on the world with various modes and tics of survival (the nervous laugh, the emotional withdrawal). Political and class divides mark every interaction, even the most intimate, and Winter brings out the perversity of privilege and choice:

What’ve you really been doing? Sophia said. Or have you taken idealistic retirement now?

I’ve been in Greece, Iris said. I came home three weeks ago. I’m going back in January.

Holiday? Sophia said. Second home?

Yeah, that’s right, Iris said. Tell your friends that. Tell them to come too. We’ll all have a fabulous time. Thousands of holidaymakers arriving every day from Syria, Afghanistan, Iraq, for city-break holidays in Turkey and Greece.

“None of my friends would be in the least interested in any of this,” Sophia responds.

Autumn set the precedent for Winter’s method of moral inquiry as well as for its use of found language and its form, which discards the conformity of sequence and layers fiction with contemporary political facts. This, as the seasons pass, is perhaps the only continuity, both within and between the novels. Characters don’t walk in from one book to the next, though references and preoccupations do. The pace picks up in Winter, possibly as Smith finds her creative stride. Remarkably, out of the abysmal state of world affairs she finds the capacity for inventiveness and play.

A master craftsperson, Smith seems to be completely liberated from ideas of what a novelist should be or do. There is no self-consciousness, no pretension. One has the impression that everything that meets her fancy, amusing or intriguing her, finds its way into her work. Wordplay, ideas on syntax, puns, banter with poetry and neologisms—“(What’s carapace?) It’s a caravan that goes at a great pace”—musings on images and representation, death, myth, painting, appropriation. As her characters turn to Google or the dictionary, one imagines she just did too:

She looked up at the consonants and vowels of what looked like a nonsense Scrabble game the people living here had painted round the room’s cornicing, still quite elegant regardless of the disrepair. i s o p r o p y l m e t h y l p h o s p h o f l u o r i d a t e w i t h d e a t h.

It is not by chance that Smith references art and artists so frequently in her work. (In her luminous 2014 novel, How to Be Both, the narrator of the historical novella that forms part of the narrative is the fifteenth-century Renaissance painter Francesco del Cossa; in Autumn, the iconoclastic 1960s British pop artist Pauline Boty is a shared fascination among the characters, as is the modernist sculptor Barbara Hepworth for Sophia in Winter.) While literary references seep through her novels, she also excavates and references histories of culture, politics, and art to come up with a language entirely her own. Smith’s novels are not so much prescient as they are intuitive and sensitive to nostalgia, the forces of collapse, and the breakneck speed with which we are hurtling toward further disaster.

She long ago abandoned traditional modes of storytelling. How to Be Both was printed in two editions, one with the historical narrator preceding a contemporary one, the other in reverse order. Before that came her fictionalized book of lectures, Artful, which was narrated by a character haunted by a former lover who writes a series of sharp lectures on art and literature, and the Booker finalist Hotel World, narrated in part by a spirit and the women around her affected by her death (it’s surreal, probing, compassionate, and witty all at once). Autumn and Winter, the first two in the quartet, are written in sort-of real time (think reality TV as novel) with stream-of-consciousness and political commentary coming together to form parallel narrative threads that connect the various characters, their actions, and the stories in their heads—past, present, future.

Smith seems to be attempting to write as fast as information and reality change, as fast as truth turns to fiction and fact is annulled. While letting her characters guide her—as well as guide and muse and struggle with themselves—she responds to current events that find their way into the story:

January:

it is a reasonably balmy Monday, 9 degrees, in late winter a couple of days after five million people, mostly women, take part in marches all across the world to protest against misogyny in power.

A man barks at a woman.

I mean barks like a dog. Woof woof.

This happens in the House of Commons.

The woman is speaking. She is asking a question. The man barks at her in the middle of her asking it.

More fully: an opposition Member of Parliament is asking a Foreign Secretary a question in the House of Commons.

She is questioning a British Prime Minister’s show of friendly demeanour and repeated proclamation of special relationship with an American President, who also has a habit of likening women to dogs.


Bridgeman Images: Odilon Redon, Homage to Goya, circa 1895

Autumn was as attuned to political forces as Winter, but it seems to have been written in a state of slight shock or dismay—at the refugee crisis, the revolutions and fallen hopes in the Middle East, and the outcome of the Brexit referendum. It is breathless, too, but sadder, slower, and easier to take in. Winter moves with such ferocity that while reading it one is forced to pause, stand back, reread, and take a bird’s-eye view of the absurdity of what our culture has become: we battle to keep people fleeing war-torn countries out of our “homeland” for fear of what they might bring, how they might terrorize our lives, our jobs, our communities: “Ask them what kind of vicar, what kind of church, brings a child up to think that words like very and hostile and environment and refugees can ever go together in any response to what happens to people in the real world.”

There is the absurdity, too, of an age in which we adopt online avatars and take to Facebook, Twitter, and Instagram to share our thoughts, promote our work, curate our identities. Reading a blog post of Art’s, Lux clears her throat:

It doesn’t seem very like you, she says. Not that I know you that well. But from the little I know.

Really? Art says.

They are sitting in front of his mother’s computer in the office.

You don’t seem so ponderous in real life, Lux says.

Ponderous? Art says.

In real life you seem detached, but not impossible, she says.

What the fuck does that mean? he says.

Well. Not like this piece of writing is, Lux says.

Thanks, Art says. I think.

Meanwhile, on Twitter, “Charlotte is demeaning [Art] and simultaneously making it look like he is demeaning his own followers.” This manic unraveling, the pretense of Charlotte-as-Art and Lux-as-Charlotte, isn’t the future—it is our present.

Who are we, the bobbing child’s head begs us to ask, when we have lost who we were? The disembodied head, sometimes sad, sometimes simply looking on, might represent our pasts, or our conscience, or our lack of one:

How could it breathe anyway, the head, with no other breathing apparatus to speak of?

Where were its lungs?

Where was the rest of it? Was there maybe someone else somewhere else with a small torso, a couple of arms, a leg, following him or her about? Was a small torso manoeuvring itself up and down the aisles of a supermarket? Or on a park bench, or on a chair by a radiator in someone’s kitchen? Like the old song, Sophia sings it under her breath so as not to wake it, I’m nobody’s child. I’m no body’s child. Just like a flower. I’m growing wild.

Not that there aren’t glimmers of hope in these books. Political upheaval, and then revolution, change the very nature of our social interactions, splitting society, creating hierarchies, and dividing us into vehement tribal groups (Stay/Leave, pro-coup/pro–Muslim Brotherhood, Trump/anything else). But out of the fractures, the losses, even the mania (at Christmas, or at political breaking points like referendums or coups), we sometimes lose ourselves so completely that we eventually find common ground again. Lux, pretending to be Charlotte but acting with no pretense, disarms Sophia, who warms to her (and begins to eat). Art, skewered for pretension by someone no longer in his life, is forced to reckon with himself. Iris and Sophia, at political odds, so long estranged, reconnect through memories prompted by a song from childhood.

Winter is a novel about being alone, and of becoming more alone in an age of technology and manic ego, on the verge of exploding artificial intelligence. But it is punctuated with reminders of times past and what could still be salvageable. In one such section, Smith imagines “another version of what was happening” on the morning Winter describes:

As if from a novel in which Sophia is the kind of character she’d choose to be, prefer to be, a character in a much more classic sort of story, perfectly honed and comforting, about how sombre yet bright the major-symphony of winter is and how beautiful everything looks under a high frost, how every grassblade is enhanced and silvered into individual beauty by it, how even the dull tarmac of the roads, the paving under our feet, shines when the weather’s been cold enough and how something at the heart of us, at the heart of all our cold and frozen states, melts when we encounter a time of peace on earth.

One can imagine Winter—which is fast-paced and frenetic, sometimes to the point of exhaustion—being read eagerly some hundred years from now, in a future that tries to make sense of an Earth where much has imploded. In that future, it might appeal equally to the literary reader, if there still is one, and to the historian.

Smith’s quartet, so far, is not only an inventive articulation of the forces that have collided to make the present, but also a meditation on—and experiment with—time.* By structuring her books around the changing seasons in an epoch when the seasons themselves are unpredictable, even in question (“November again. It’s more winter than autumn”; “It will be a bit uncanny still to be thinking about winter in April”), she urges us to ask whether we can still save our planet, as well as future generations’ lives. It’s hard to imagine what Spring and Summer might bring—perhaps a complete halt, or inversion, of time awaits us—but the first two novels of the quartet are so free with form, as well as so morally conscious, that they come close to being an antidote to these times.

* Spring will be published by Pantheon in April 2019.


The Sins of Celibacy

Pope Francis; drawing by Siegfried Woldhek

On August 25 Archbishop Carlo Maria Viganò published an eleven-page letter in which he accused Pope Francis of ignoring and covering up evidence of sexual abuse in the Catholic Church and called for his resignation. It was a declaration of civil war by the church’s conservative wing. Viganò is a former apostolic nuncio to the US, a prominent member of the Roman Curia—the central governing body of the Holy See—and one of the most skilled practitioners of brass-knuckle Vatican power politics. He was the central figure in the 2012 scandal that involved documents leaked by Pope Benedict XVI’s personal butler, including letters Viganò wrote about corruption in Vatican finances, and that contributed to Benedict’s startling decision to abdicate the following year. Angry at not having been made a cardinal and alarmed by Francis’s supposedly liberal tendencies, Viganò seems determined to take out the pope.

As a result of Viganò’s latest accusations and the release eleven days earlier of a Pennsylvania grand jury report that outlines in excruciating detail decades of sexual abuse of children by priests, as well as further revelations of sexual misconduct by Cardinal Theodore McCarrick, the former archbishop of Washington, D.C., Francis’s papacy is now in a deep, possibly fatal crisis. After two weeks of silence, Francis announced in mid-September that he would convene a large-scale gathering of the church’s bishops in February to discuss the protection of minors against sexual abuse by priests.

The case of Cardinal McCarrick, which figures heavily in Viganò’s letter, is emblematic of the church’s failure to act on the problem of sexual abuse—and of the tendentiousness of the letter itself. In the 1980s stories began to circulate that McCarrick had invited young seminarians to his beach house and asked them to share his bed. Despite explicit allegations that were relayed to Rome, in 2000 Pope John Paul II appointed him archbishop of Washington, D.C., and made him a cardinal. Viganò speculates that the pope was too ill to know about the allegations, but does not mention that the appointment came five years before John Paul’s death. He also praises Benedict XVI for finally taking action against McCarrick by sentencing him to a life of retirement and penance, and then accuses Francis of revoking the punishment and relying on McCarrick for advice on important church appointments. If Benedict did in fact punish McCarrick, it was a very well kept secret, because he continued to appear at major church events and celebrate mass; he was even photographed with Viganò at a church celebration.

Viganò’s partial account of the way the church handled the allegations about McCarrick is meant to absolve Pope Francis’s predecessors, whose conservative ideology he shares. Viganò lays the principal blame for failing to punish McCarrick on Francis, who does appear to have mishandled the situation—one he largely inherited. He may have decided to ignore the allegations because, while deplorable, they dated back thirty years and involved seminarians, who were adults, not minors. Last June, however, a church commission found credible evidence that McCarrick had behaved inappropriately with a sixteen-year-old altar boy in the early 1970s, and removed him from public ministry; a month later Francis ordered him to observe “a life of prayer and penance in seclusion,” and he resigned from the College of Cardinals. On October 7, Cardinal Marc Ouellet, prefect of the Congregation for Bishops at the Vatican, issued a public letter offering a vigorous defense of Francis and a direct public rebuke of his accuser:

Francis had nothing to do with McCarrick’s promotions to New York, Metuchen, Newark and Washington. He stripped him of his Cardinal’s dignity as soon as there was a credible accusation of abuse of a minor….

Dear Viganò, in response to your unjust and unjustified attack, I can only conclude that the accusation is a political plot that lacks any real basis that could incriminate the Pope and that profoundly harms the communion of the Church.

The greatest responsibility for the problem of sexual abuse in the church clearly lies with Pope John Paul II, who turned a blind eye to it for more than twenty years. From the mid-1980s to 2004, the church spent $2.6 billion settling lawsuits in the US, mostly paying victims to remain silent. Cases in Ireland, Australia, England, Canada, and Mexico followed the same depressing pattern: victims were ignored or bullied, even as offending priests were quietly transferred to new parishes, where they often abused again. “John Paul knew the score: he protected the guilty priests and he protected the bishops who covered for them, he protected the institution from scandal,” I was told in a telephone interview by Father Thomas Doyle, a canon lawyer who was tasked by the papal nuncio to the US with investigating abuse by priests while working at the Vatican embassy in Washington in the mid-1980s, when the first lawsuits began to be filed.

Benedict was somewhat more energetic in dealing with the problem, but his papacy began after a cascade of reporting had appeared on priestly abuse, beginning with an investigation published by the Boston Globe in 2002 (the basis for Spotlight, the Oscar-winning film of 2015). The church was faced with mass defections and the collapse of donations from angry parishioners, which forced Benedict to confront the issue directly.

Francis’s election inspired great hopes for reform. But those who expected him to make a clean break with this history of equivocation and half-measures have been disappointed. He hesitated, for example, to meet with victims of sexual abuse during his visit to Chile in January 2018 and then insulted them by insisting that their claims that the local bishop had covered up the crimes of a notorious abuser were “calumny.” In early October, he expelled from the priesthood two retired Chilean bishops who had been accused of abuse. But when he accepted the resignation of Cardinal Donald Wuerl—who according to the Pennsylvania grand jury report repeatedly mishandled accusations of abuse when he was bishop of Pittsburgh—he praised Wuerl for his “nobility.” Francis seems to take one step forward and then one step backward.

Viganò is correct in writing that one of Francis’s closest advisers, Cardinal Oscar Rodriguez Maradiaga, disregarded a grave case of abuse occurring right under his nose in Honduras. One of Maradiaga’s associates, Auxiliary Bishop Juan José Pineda Fasquelle of Tegucigalpa, was accused of abusing students at the seminary he helped to run. Last June, forty-eight of the 180 seminarians signed a letter denouncing the situation there. “We are living and experiencing a time of tension in our house because of gravely immoral situations, above all of an active homosexuality inside the seminary that has been a taboo all this time,” the seminarians wrote. Maradiaga initially denounced the writers as “gossipers,” but Pineda was forced to resign a month later.

“I feel badly for Francis because he doesn’t know whom to trust,” Father Doyle said. Almost everyone in a senior position in the Catholic Church bears some guilt for covering up abuse, looking the other way, or resisting transparency. The John Jay Report (2004) on sexual abuse of minors by priests, commissioned by the US Conference of Catholic Bishops, indicated that the number of cases increased during the 1950s and 1960s, was highest in the 1970s, peaking in 1980, and has gradually diminished since then. Francis may have hoped that the problem would go away and feared that a true housecleaning would leave him with no allies in the Curia.

Much of the press coverage of the scandal has been of the Watergate variety: what the pope knows, when he found out, and so forth. This ignores a much bigger issue that no one in the church wants to talk about: the sexuality of priests and the failure of priestly celibacy.

Viganò blames the moral crisis of the papacy on the growing “homosexual current” within the church. There is indeed a substantial minority of gay priests. The Reverend Donald B. Cozzens, a Catholic priest and longtime rector of a seminary in Ohio, wrote in his book The Changing Face of the Priesthood (2000) that “the priesthood is, or is becoming, a gay profession.” There have been no large surveys, using scientific methods of random sampling, of the sexual life of Catholic priests. Many people—a priest in South Africa, a journalist in Spain, and others—have done partial studies that would not pass scientific muster. The late Dr. Richard Sipe, a former priest turned psychologist, interviewed 1,500 priests for an ethnographic study.

There is some self-selection by priests who agree to answer questions or fill out questionnaires or seek treatment, which is why the estimates on, say, gay priests vary so widely. But the studies are consistent in showing high percentages of sexually active priests and of gay priests. As Thomas Doyle wrote in 2004, “Knowledgeable observers, including authorities within the Church, estimate that 40–50 percent of all Catholic priests have a homosexual orientation, and that half of these are sexually active.” Sipe came to the conclusion that “50 percent of American clergy were sexually active…and between 20 and 30 percent have a homosexual orientation and yet maintained their celibacy in an equal proportion with heterosexually oriented clergy.”

In his letter Viganò repeats the finding in the John Jay Report that 81 percent of the sexual abuse cases involve men abusing boys. But he ignores its finding that those who actually identify as homosexual are unlikely to engage in abuse and are more likely to seek out adult partners. Priests who abuse boys are often confused about their sexuality; they frequently have a negative view of homosexuality, yet are troubled by their own homoerotic urges.

Viganò approvingly cites Sipe’s work four times. But he ignores Sipe’s larger argument, made on his website in 2005, that “the practice of celibacy is the basic problem for bishops and priests.” Sipe also wrote, “The Vatican focus on homosexual orientation is a smoke screen to cover the pervasive and greater danger of exposing the sexual behavior of clerics generally. Gay priests and bishops practice celibacy (or fail at it) in the same proportions as straight priests and bishops do.” He denounced McCarrick’s misconduct on numerous occasions.

While the number of priests abusing children—boys or girls under the age of sixteen—is comparatively small, many priests have secret sex lives (both homosexual and heterosexual), which does not leave them in the strongest position to discipline those who abuse younger victims. Archbishop Rembert Weakland, for example, the beloved liberal archbishop of Milwaukee from 1977 to 2002, belittled victims who complained of sexual abuse by priests and then quietly transferred predatory priests to other parishes, where they continued their abusive behavior. It was revealed in 2002 that the Milwaukee archdiocese had paid $450,000 in hush money to an adult man with whom Weakland had had a longtime secret sexual relationship, which might have made him more reluctant to act against priests who abused children. But this could be true of heterosexual as well as homosexual priests who are sexually active.

Viganò believes that the church’s moral crisis derives uniquely from its abandonment of clear, unequivocal, strict teaching on moral matters, and from overly permissive attitudes toward homosexuality in particular. He does not want to consider the ways in which its traditional teaching on sexuality—emphasized incessantly by recent popes—has contributed to the present crisis. The modern church has boxed itself into a terrible predicament. Until about half a century ago, it was able to maintain an attitude of wise hypocrisy, accepting that priests were often sexually active but pretending that they weren’t. The randy priests and monks (and nuns) in Chaucer and Boccaccio were not simply literary tropes; they reflected a simple reality: priests often found it impossible to live the celibate life. Many priests had a female “housekeeper” who relieved their loneliness and doubled as a life companion. Priests frequently had affairs with their female parishioners and fathered illegitimate children. The power and prestige of the church helped to keep this sort of thing a matter of local gossip rather than international scandal.

When Pope John XXIII convened the Second Vatican Council in 1962, bishops from many parts of the world hoped that the church would finally change its doctrine and allow priests to marry. But John XXIII died before the council finished its work, which was then overseen by his successor, Paul VI (one of the popes most strongly rumored to have been gay). Paul apparently felt that the sweeping reforms of Vatican II risked going too far, so he rejected the pleas for priestly marriage and issued his famous encyclical Humanae Vitae, which banned contraception, overriding a commission he had convened that concluded that family planning and contraception were not inconsistent with Catholic doctrine.

Opposing priestly marriage and contraception placed the church on the conservative side of the sexual revolution and made adherence to strict sexual norms a litmus test for being a good Catholic, at a time when customs were moving rapidly in the other direction. Only sex between a man and a woman meant for procreation and within the institution of holy matrimony was allowed. That a man and a woman might have sex merely for pleasure was seen as selfish and sinful. Some 125,000 priests, according to Richard Sipe, left the priesthood after Paul VI closed the door on the possibility of priestly marriage. Many, like Sipe, were straight men who left to marry. Priestly vocations plummeted.

Conversely, the proportion of gay priests increased, since it was far easier to hide one’s sex life in an all-male community with a strong culture of secrecy and aversion to scandal. Many devout young Catholic men also entered the priesthood in order to try to escape their unconfessable urges, hoping that a vow of celibacy would help them suppress their homosexual leanings. But they often found themselves in seminaries full of sexual activity. Father Doyle estimates that approximately 10 percent of Catholic seminarians were abused (that is, drawn into nonconsensual sexual relationships) by priests, administrators, or other seminarians.

This problem is nothing new. Homosocial environments—prisons, single-sex schools, armies and navies, convents and monasteries—have always been places of homosexual activity. “Man is a loving animal,” in Sipe’s words. The Benedictines, one of the first monastic orders, created elaborate rules to minimize homosexual activity, insisting that monks sharing a room sleep fully clothed and with the lights on.

The modern Catholic Church has failed to grasp what its founders understood quite well. “It is better to marry than to burn with passion,” Saint Paul wrote when his followers asked him whether “it is good for a man not to touch a woman.” “To the unmarried and the widows I say that it is well for them to remain unmarried as I am. But if they are not practicing self-control, they should marry.” Priestly celibacy was not firmly established until the twelfth century, after which many priests had secret wives or lived in what the church termed “concubinage.”

The obsession with enforcing unenforceable standards of sexual continence that run contrary to human nature (according to one study, 95 percent of priests report that they masturbate) has led to an extremely unhealthy atmosphere within the modern church that contributed greatly to the sexual abuse crisis. A 1971 Loyola Study, which was also commissioned by the US Conference of Catholic Bishops, concluded that a large majority of American priests were psychologically immature, underdeveloped, or maldeveloped. It also found that a solid majority of priests—including those ordained in the 1940s, well before the sexual revolution—described themselves as very or somewhat sexually active.

Sipe, during his decades of work treating priests as a psychotherapist, also concluded that the lack of education about sexuality and the nature of celibate life tended to make priests immature, often more comfortable around teenagers than around other adults. All this, along with a homosocial environment and the church’s culture of secrecy, has made seminaries a breeding ground for sexual abuse.

There are possible ways out of this dilemma for Francis. He could allow priests to marry, declare homosexuality to be not sinful, or even move to reform the patriarchal nature of the church—and to address the collapse in the number of nuns, which has decreased by 30 percent since the 1960s even though the number of the world’s Catholics has nearly doubled in that time—by allowing the ordination of women. But any of those actions would spark a revolt by conservatives in the church who already regard Francis with deep suspicion, if not downright aversion. John Paul II did his best to tie the hands of his successors by declaring the prohibition of female priests to be an “infallible” papal doctrine, and Francis has acknowledged that debate on the issue was “closed.” Even Francis’s rather gentle efforts to raise the possibility of allowing divorced Catholics who have remarried to receive the host at Mass were met with such strong criticism that he dropped the subject.

The sociology of religion offers some valuable insights into the church’s problems. One of the landmark texts in this field is the 1994 essay “Why Strict Churches Are Strong,” by the economist Laurence Iannaccone, who used rational choice theory to show that people tend to value religious denominations that make severe demands on them. The Mormon Church, for example, requires believers to give it a tenth of their income and a substantial amount of their time, abstain from the use of tobacco and alcohol, and practice other austerities. These costly demands create a powerful sense of solidarity. The commitment of time and money means that the church can undertake ambitious projects and take care of those in need, while the distinctive way of life serves to bind members to one another and set them apart from the rest of the world. The price of entry to a strict church is high, but the barrier to exit is even higher: ostracism and the loss of community.

Since the French Revolution and the spread of liberal democracy in the nineteenth century, the Catholic Church has been torn between the urge to adapt to a changing world and the impulse to resist it at all cost. Pope Pius IX, at whose urging the First Vatican Council in 1870 adopted the doctrine of papal infallibility, published in 1864 his “Syllabus of Errors,” which roundly condemned modernity, freedom of the press, and the separation of church and state. Significantly, its final sentence denounced the mistaken belief that “the Roman Pontiff can, and ought to, reconcile himself, and come to terms with progress, liberalism and modern civilization.” Since then the church has been in the difficult position of maintaining this intransigent position—that it stands for a set of unchanging, eternal beliefs—while still in some ways adapting to the times.

John XXIII, who became pope in 1958, saw a profound need for what he called aggiornamento—updating—precisely the kind of reconciling of the church to a changing world that Pius IX considered anathema. John XXIII was one of the high-ranking church leaders who regarded the Nazi genocide of the Jews as a moral crossroads in history. An important part of his reforms at Vatican II was to remove all references to the Jews as a “deicide” people and to adopt an ecumenical spirit that deems other faiths worthy of respect. After Vatican II, the church made optional much of the traditional window-dressing of Catholicism—the Latin Mass, the elaborate habits of nuns, the traditional prohibition against meat on Friday—but John died before the council took up more controversial issues of doctrine. With Vatican II, Iannaccone argued,

the Catholic church may have managed to arrive at a remarkable, “worst of both worlds” position—discarding cherished distinctiveness in the areas of liturgy, theology, and lifestyle, while at the same time maintaining the very demands that its members and clergy are least willing to accept.

Church conservatives are not wrong to worry that eliminating distinctive Catholic teachings may weaken the church’s appeal and authority. Moderate mainstream Protestant denominations have been steadily losing adherents for decades. At the same time, some forms of strictness can be too costly. The prohibitions against priestly marriage and the ordination of women are clearly factors in the decline of priestly vocations, and the even more dramatic decline in the number of nuns.

Both radical change and the failure to change are fraught with danger, making Francis’s path an almost impossible one. He is under great pressure from victims who are demanding that the church conduct an exhaustive investigation into the responsibility of monsignors, bishops, and cardinals who knew of abusing priests but did nothing—something he is likely to resist. Such an accounting might force many of the church’s leaders into retirement and paralyze it for years to come—but his failure to act could paralyze it as well. As for the larger challenges facing the church, Francis’s best option might be to make changes within the narrow limits constraining him, such as expanding the participation of the laity in church deliberations and allowing women to become deacons. But that may be too little, too late.

—October 25, 2018


World War I Relived Day by Day


Photo12/UIG via Getty ImagesGavrilo Princip arrested after his assassination of Archduke Franz Ferdinand of Austria, Sarajevo, June 28, 1914

Four years ago, I went to war. Like many of the people whose stories I followed in my daily “live-tweets” on World War I, I had no idea what I was getting myself into. What began as an impulsive decision to commemorate the hundredth anniversary of Austrian Archduke Franz Ferdinand’s death at the hands of a Serbian assassin, in June 1914, snowballed into a blood-soaked odyssey that took me—figuratively and literally—from the rolling hills of northern France, to the desert wastes of Arabia, to the rocky crags of the Italian Alps, to the steel turret of a rebel cruiser moored within range of the czar’s Winter Palace in St. Petersburg, Russia. And like the men and women who actually lived through it, now that the Great War is ending I find myself asking what, if anything, I’ve learned from it all.

In the American mind, World War I typically occupies an unimpressive place as a kind of shambolic preamble to the great good-versus-evil crusade of World War II, a pointless slugfest in muddy trenches for no worthy purpose, and no worthwhile result. Its catchphrases—“The War to End All Wars,” “Make the World Safe for Democracy”—evoke a wry and knowing chuckle. As if. But the war I encountered, as it unfolded day by day, was far more relevant, passionate, and unpredictable.

Posting daily newspaper clippings and photographs, found mainly in books and online archives, I began to see the Great War as a kind of portal between an older, more distant world—of kings with handlebar mustaches, splendid uniforms, and cavalry charges—and the one that we know: of planes and tanks, mass political movements, and camouflage. It snuffed out ancient monarchies in czarist Russia, Habsburg Austria, and Ottoman Turkey, and gave birth to a host of new nations—Poland, Hungary, Czechoslovakia, Syria, Iraq, Jordan, Lebanon, Finland, Estonia, Latvia, Lithuania, Ukraine, Armenia, Azerbaijan—that, in their struggles to survive and carve out an identity, continue to shape our world today. The British declared their intent to create a national homeland in Palestine for the Jews. 


Daily Mirror/Mirrorpix via Getty ImagesRussian infantry marching to battle, Poland, August 1914

The needs of the war brought women into the workforce, and helped win them the right to vote. The huge privations it inflicted triggered the world’s first (successful) Communist revolution, and the frustrations it unleashed prompted many, afterward, to turn to far-right authoritarians in Italy and then Germany. And finally—though many have forgotten it—the comings and goings of people caused by the war helped spread the deadliest epidemic the world has ever known: the 1918 influenza virus, which quietly killed an estimated 50–100 million human beings in their homes and in hospitals, more than both world wars combined.

I also encountered a cast of characters more varied and amazing than I thought possible. Rasputin, the dissolute Russian mystic who warned Czar Nicholas that going to war would destroy his dynasty, and was murdered in part because he was (falsely) suspected of being a German agent. The Austrian Emperor Karl, who inherited a war he didn’t want, and tried fruitlessly to make peace. T.E. Lawrence, a scholarly young intelligence officer whose affinity for the Arabs helped turn them to the Allied cause, and shaped the modern Middle East. Mata Hari, a Dutch-born exotic dancer who played double agent, seducing high-ranking Allied and German officers for valuable information, until she was caught and shot by the French as a spy.

Some of the names are familiar, and offer hints of future greatness—or infamy. A young anti-war journalist named Benito Mussolini, sensing which way the wind is blowing, changes his tune and aggressively advocates for Italy to enter the war, before signing up himself. A young Charles de Gaulle is wounded at Verdun and taken prisoner for the rest of the conflict. A relatively young Winston Churchill plans the disastrous Gallipoli Campaign and pays his penance by serving in the trenches, before making a political comeback. A young Harry S. Truman serves as an artillery officer on the Western Front, alongside (and outranked by) a young George C. Marshall (his future Army Chief of Staff and Secretary of State) and Douglas MacArthur (his future general in the Pacific and Korea). A young George S. Patton develops a fascination with tanks. A young Walt Disney doodles cartoons on the side of the ambulances he drives, in the same unit as a young Ray Kroc (the founder of McDonald’s). Another young ambulance driver, Ernest Hemingway, finds inspiration on the Italian Front for his novel A Farewell to Arms. A young Hermann Göring (later head of the Luftwaffe) becomes a dashing flying ace, while a young Erwin Rommel wins renown fighting at Verdun and in the Alps. Meanwhile, an odd young German corporal, who volunteered in the very first days of the war, is blinded by poison gas in its final days, and wakes up in hospital to the bitter news that Germany has lost. His name is Adolf Hitler.


General Photographic Agency/Getty ImagesFrench troops under shellfire during the Battle of Verdun, 1916

The dramatic panoply of people, places, and events, however, only occasionally rises to the fore. For the most part, the war is a steady stream of ordinary people doing ordinary things: washing their clothes, attending a concert, tallying supplies, fixing a car. History books give us a distorted sense of time, because they fast-forward to major events. A day may take a chapter; a month may be passed over in a sentence. In fact, there were periods when nothing much happened—plans were being made, troops trained, supplies positioned—and when you live-tweet, you experience that waiting. Sometimes, it led to intriguing surprises, like photographs of dragon dances performed by some of the 140,000 Chinese laborers brought over to France to lend muscle to the Allied war effort. Mostly, it was a matter of endurance. Each winter, the fighting came to almost a complete stop as each country hunkered down and hoped its food would last. The “turnip winter” of 1916–1917, when the potato crop failed, nearly broke Germany; the increasingly desperate craving for “bread and peace” did break Russia the following year.

The future president Herbert Hoover made his reputation by coordinating food relief shipments to German-occupied Belgium, and later as the US “food czar” ensuring Allied armies and populations were fed. The vast mobilization was effective: by 1918, the Allies were able to relax their food rationing, while Germany and its confederates, strangled by an Allied naval blockade, were on the verge of starvation. America’s war effort was accompanied by a vast expansion in the federal government’s power and reach. It nationalized (temporarily) the railroads and the telephone lines. It set prices for everything from sugar to shoes, and told motorists when they could drive, workers when they could strike, and restaurants what they could put on their menus. It seized half a billion dollars’ worth of enemy-owned property, including the brand rights to Bayer aspirin, and sold it at auction. The US government also passed espionage and sedition laws that made it illegal to criticize the war effort or the president. Some people were sent to prison for doing so, including the Socialist Party leader Eugene V. Debs, who ran for president for a fifth and final time from a cell.


Hulton Archive/Getty ImagesA woman munitions worker operating a machine in an armaments factory, Britain, circa 1915

Winning the war, however, was far from a sure thing. For three years, the Allies threw themselves against an evenly matched enemy on the Western Front, without making any breakthroughs, while the Eastern Front gradually crumbled. An early Allied foray to take out Turkey, at Gallipoli in 1915, ended in bloody disappointment. Inducing Italy to enter the war on the Allies’ side, that same year, was supposed to swing the entire conflict in their favor; instead, the catastrophic Italian rout at Caporetto, in the autumn of 1917, put the Allied effort in greater jeopardy. When Lenin seized power in Russia, at the end of 1917, he immediately took Russia out of the war and ceded immense land and resources to German control. True, the US had by then entered the war, in response to Germany’s submarine campaign against merchant ships and its clumsy diplomatic scheming in Mexico. But with the war in the East essentially won, the Germans saw a window in which they could shift all of their armies to the West and crush the exhausted British and French before enough American troops could arrive to make a difference. Their spring offensive, or “Kaiser’s Battle,” in early 1918 drove deep into Allied lines, prompting the French government to evacuate Paris.

The Germans’ big roll of the dice failed. The Allies held, and the US mobilized much faster than anyone expected. By the summer of 1918, a perceptible change had taken place. Hundreds of thousands of American troops were arriving every month at French ports, and their first units were taking part in battles, piecemeal at first, to push the Germans back. Even in September, however, nearly everyone expected the war to continue into 1919. That was when a huge US army of 3 million men would be ready to take part in a big Allied offensive that would drive all the way to Berlin. It never happened. That fall, the German army—and those of Turkey, Austria, and Bulgaria—first buckled, then collapsed like a rotten log. By November 11, the war was over.

The fact that nobody saw the end coming, the way it did, highlights the value of going back, a hundred years later, and reliving events day by day, as they took place. What may seem obvious now was anything but obvious then, and we do the people who lived through it, and our understanding of them, a real disservice when we assume that it was. “Life can only be understood backwards,” the Danish philosopher Søren Kierkegaard observed, “but it must be lived forwards.” The British historian C. V. Wedgwood elaborated on the same idea: “History is lived forwards but is written in retrospect. We know the end before we consider the beginning and we can never wholly recapture what it was like to know the beginning only.” We can’t entirely forget that we know what happened next, but when we at least try to identify with people who did not know, we shed new light on them, and on what did happen.


Fine Art Images/Heritage Images/Getty ImagesLeon Trotsky with the Soviet delegation to negotiate a peace treaty with Germany, Brest-Litovsk, 1918

Take the Russian Revolution. We see it as the birth of a Communist superpower, and struggle to make sense of the seemingly half-baked, half-hearted effort by the Allies to intervene by sending troops, including Americans, to Russia’s ports in the far north and far east. People at the time, however, saw it almost entirely through the prism of the Great War. At first, the Allies welcomed the overthrow of the czar, and believed it would rejuvenate the failing Russian war effort. By replacing an infamous autocrat on the Allied roster with a fledgling democracy, it made “making the world safe for democracy” a more credible call to arms, and helped pave the way for the US to enter the war. When Lenin took over and made a ruinous peace with the Central Powers, he was seen as simply a German puppet. And when Bolshevik forces, augmented with released German and Austrian prisoners of war, attacked a unit of Czech soldiers crossing Siberia to rejoin the Allies on the Western Front, those suspicions blossomed into fear of a full-fledged German takeover of Russia. The Allies sent troops to key Russian ports to secure the war supplies stockpiled there and provide an exit route for the loyal Czechs. They considered trying to “reopen” the Eastern Front, but realized it would take far too many men. They assumed that when Germany was defeated, their proxy Lenin would eventually fall, and when the war ended, they naturally lost interest. It all makes sense, but only if you see through the eyes that people saw through at the time.

Did it really matter who won the war? In its aftermath, the Great War came to be seen as a colossal waste, a testament to the vanity of nations, of pompous older men sending foolish younger men into the meat-grinder for no good reason. War poems like “Dulce et Decorum Est” and novels like All Quiet on the Western Front have crystallized this impression. But this was not how people felt at the time. German atrocities in Belgium and on the high seas—some exaggerated, but others quite real—convinced many people that civilization, as they knew it, really was at stake. I was consistently and often surprisingly struck by the sincerity of support, not just on the home front, but among soldiers who had seen the worst of combat, for pursuing the war unto victory. The tone matures, but remains vibrant: these were, for the most part, people who believed in what they were fighting for. At what point the bitter cynicism set in, after the war ended, I cannot say. But at some point, that enthusiasm, and even the memory of it, became buried with the dead.


Bettmann/Getty ImagesBoys wearing bags of camphor around their necks to ward off influenza, 1917

Though, in fact, in many places the war did not actually end. An armistice was declared on the Western Front, and the armies there were disbanded and sent home. But Germany, Austria, and Hungary all descended into revolution and civil war for a time, with gangs of demobilized soldiers fighting on all sides. In Russia, the Soviet regime and its multiple enemies would battle for several years, while trying to reconquer territory surrendered when it quit the war against Germany. The Greeks tried to reclaim Constantinople from the Turks, and would be massacred when the Turks succeeded in reconsolidating their country. The Poles fought wars with the Ukrainians and the Soviets to define the boundaries of their newly independent country. Jews and Arabs continue to fight over the new lands liberated from the Ottoman Empire to this day.

In the Great War itself, over 16 million people died, including almost 7 million civilians. The US got off relatively lightly, with 117,465 people killed, just 0.13 percent of its population. In Serbia, somewhere between 17 percent and 28 percent of the country’s population was killed. But even numbers like these leave little concrete impression on our minds. Some of the most touching parts of my experience live-tweeting were the times when people would tweet back to me about a grandfather or great-uncle who fought and died in the war, and is forever twenty-four years old in some field in France, or Turkey, or Italy, or at sea. For most people, that absence is what defined the war: someone left and never came home. The world that they shaped, by their presence and their absence, is the one that we live in, whether we realize it or not. And we, like them, can only grope our way forward, day by day, into an unknown future.


Historica Graphica Collection/Heritage Images/Getty ImagesBritish artillery at the Somme, France, 1916


The Raunchy Brilliance of Julie Doucet


Julie Doucet/Drawn & QuarterlyDetail from “Levitation” by Julie Doucet, first published February 1989, republished by Drawn & Quarterly in the box set Dirty Plotte: The Complete Julie Doucet, 2018

The Canadian artist Julie Doucet began self-publishing the zine Dirty Plotte in 1988, when she was twenty-two. She had been drawing comics since high school, but this was her first sustained project. Working at a feverish pace, she produced fourteen issues of Dirty Plotte in eighteen months before it was picked up by Chris Oliveros as the debut book from his new publishing outfit in Montreal called Drawn & Quarterly, which went on to publish twelve issues by Doucet from 1990 to 1998, incorporating material from the original run with new work. Louche, mordant, funny, and surreal, Dirty Plotte comprises a mix of short and long comics—wordless and with dialogue, narrative and plotless, autobiographical and fictional (and everything between)—in which there are no rules.

Nor are any subjects off-limits. In the first issue, Doucet levitates from the bed to the bathroom to change a tampon (period maintenance as a mind-body problem). In the second, a prostitute undresses and reveals herself to be a man, who, by unzipping his skin, transforms into a wolf; the wolf turns itself inside out to become a snake that coils up the waiting john’s leg and gives him a blowjob. In a dream recorded in issue six, Doucet sees her reflection in a mirror, and her double comes alive. The original Julie wills herself to turn into a man, her reflection breaks free from the mirror, and they have sex.


Julie Doucet/Drawn & QuarterlyPanels from Julie Doucet’s “Levitation,” Dirty Plotte, 2018

Twenty years after publishing the last issue of Dirty Plotte, Drawn & Quarterly has gathered the first dozen issues along with Doucet’s early, unpublished, and previously uncollected work, and numerous appreciations, in a two-volume slipcase edition, Dirty Plotte: The Complete Julie Doucet. Such lavish treatment can’t dispel the unruliness of Doucet’s project; these comics are as pertinent and captivating today as when they first made their way into the culture (an occasion marked by “a thrilling mix of recognition and horror,” recalls the cartoonist Laura Park). Doucet’s parodic depictions of intense violence are still unsettling; her elastic treatment of sex and gender is still daring; and her open-ended treatment of female identity is still vital. She has said that from 1988 to 1990, “I was not questioning what I was doing… It was so unconscious, so directly my mind on paper.”


Julie Doucet/Drawn & QuarterlyPanels from Julie Doucet’s “A Blow Job,” issue two, first published January 1991, Dirty Plotte, 2018

She gives free rein to complexity and contradiction—and to an athletic id. The unabashed world in her comics isn’t the real one (men in our realm don’t have vaginas surgically implanted into their foreheads, or not yet), but even reality, in Dirty Plotte, is phenomenologically fraught. A five-page story about a disturbing dream in which naked men aggressively invade a picnic and invite Doucet to “taste my croissant” (a literal croissant, but strategically placed) transitions into a waking state in which every inanimate object in her apartment comes alive with murderous rage. “Good ol’ reassuring reality!” she shrugs. But if the environment of Dirty Plotte is acutely Doucet’s own—relying primarily on dreams, fantasies, and imagined scenarios starring a version of herself—it is also freewheeling enough that readers, particularly women, can recognize something of themselves in it. The cover of every issue features possible versions of Doucet: a deranged artist, an old woman in the desert, a small figure lost in the big city. Issue three shows a group of Julies sobbing, laughing, anxious, and aloof—a scene she dubs, on the back cover, “Me, Myself, and I.” That multiplicity appears again in 1995 as a trio of weeping cartoonish Julies on the cover of My Most Secret Desire, which gathers Doucet’s dream comics. As the writer Deb Olin Unferth put it, “All I had to do was see the cover to know this cartoonist had stepped into my subconscious and found me cringing and giggling in a corner.”


Julie Doucet/Drawn & QuarterlyPanels from Julie Doucet’s “Dreamt: February 17, 1990,” first published January 1991, Dirty Plotte, 2018

Each issue of Dirty Plotte occupies that peculiar nexus of cringing and giggling. At the moment when a gag comic might end, Doucet pushes further, into uncomfortable territory. The step-by-step instructions in the four-panel “Do It Yourself: Laugh!” conclude not with a lively chuckle but with an unhinged, sputtering roar. But in calling out her fantasies and fears with words and pictures on the page, Doucet uses transgression to carve out a space of power and freedom. She revels in the joy of unfettered exploration, and her enthusiasm buoys otherwise dark subject matter. A trio of strips called “If I Was a Man” begins by conjuring aggressive male sexual behavior (when male Julie muses dumbly on “the great mysteries of nature” after ejaculating on his girlfriend, it’s hard not to read it as a pointed commentary on the outsize male fantasies present in so many comics). But the series ends with idiosyncratic fantasy: the “useful” penis that can store small items like pens and rolled-up magazines and the “romantic” penis that begets flowers.

Vaginas, too, get full treatment. Plotte is Québécois slang that can refer derogatorily to a woman’s vagina and to the woman herself. Co-opting this term is the linchpin of Doucet’s rowdy perusal of femaleness. If plotte refers to a woman’s body, then Doucet refashions that body. In a what-if strip about breast cancer, she chooses a double mastectomy, then adds a pair of gold rings “for a joyous sucking.” And if plotte refers to the objectification of women, Doucet turns the ferocity of male scrutiny back onto men themselves. An audacious example is the four-panel “Self-Portrait in a Possible Situation,” in issue two. In three of the panels, she slices herself with a razor blade while posing suggestively. In the fourth panel, she addresses the strip’s voyeuristic reader: covered in bandages and seated before an assortment of knives, she portentously petitions her male readers to act as models “for some little drawings! Heh heh heh.” The tit for tat comes to fruition an issue later, in “Strip Tease of a Reader,” in which she kills and fastidiously dismembers “Steve,” a reader who has proffered himself.


Julie Doucet/Drawn & QuarterlyPanels from Julie Doucet’s “If I Was A Man,” issue six, first published January 1993, Dirty Plotte, 2018

Doucet has said that the idea for “Strip Tease of a Reader” came from a French magazine, L’Écho des Savanes, that asked its male readers to photograph their girlfriends performing a strip tease. “And people did it,” she says. “In every issue, there was a full page with about six Polaroids of girls stripping.” Doucet’s version is a parody—violent, but with a wink. She turns it into a burlesque performance by herself and Steve, implicating him in the farce, even ending the strip by scrawling “Fin” on the wall with the blood of his dismembered member. The viciousness in her reversal is slyly subversive. If the girlfriends in L’Écho consented to participate, so too does Steve. And if the titillation in a woman’s strip tease is the measured revelation of flesh, so too is Steve’s.

The bite and blood in Doucet’s comics were stirred in part by provocative French bande dessinée—for instance, by Claire Bretécher’s playful satires of self-involved French life, Nicole Claveloux’s surreal and erotic subversions, and F’murr’s absurdist parodies, as well as other, more “risqué” comics published in the French magazine Pilote, to which her mother subscribed. Doucet’s distinctiveness is equally due to her highly graphic drawing style: packed, rambunctious black-and-white panels depicting cramped interiors swarming with bric-a-brac and busy street scenes alive with eccentric humanity. Her dense shading and hatching, which produce moody, high-contrast drawings, become more finely rendered and more articulate in later issues. Doucet’s American antecedent is Aline Kominsky-Crumb, whose comics, beginning in the Seventies, are predicated on exploring the raw corporeality of the female body: masturbation, defecation, hunger, pain, and pleasure. But Doucet didn’t discover the American underground—its men or its women—until Dirty Plotte was underway. Still, earlier generations of North American women cartoonists saw in Doucet a kindred spirit. Kominsky-Crumb included Doucet’s Dirty Plotte comic “Heavy Flow,” in which a Godzilla-size Doucet floods a city with her menstrual blood, in issue number twenty-six (1989) of the comics anthology Weirdo, which Robert Crumb had begun in 1981. The cartoonist Phoebe Gloeckner selected three short comics by Doucet that same year for an issue of the long-running all-female anthology Wimmen’s Comix, where they appeared alongside work by trailblazing underground cartoonists Diane Noomin, Lee Marrs, Sharon Rudahl, and Kominsky-Crumb (as well as a young Alison Bechdel).


Julie Doucet/Drawn & QuarterlyPanels from Julie Doucet’s “Heavy Flow,” first published 1989, Dirty Plotte, 2018

Remarkably, Doucet’s comics found an enthusiastic, if awe-struck, fan base among men. Dirty Plotte’s letter columns teem with appreciative notes from male readers, and Doucet’s male contemporaries were among her most ardent admirers. The cartoonist John Porcellino discovered Dirty Plotte through the international review directory Factsheet Five in 1989 and launched his own single-author zine, the minimal King-Cat Comics, two months later, inspired by Dirty Plotte’s monographic verve. In 1990, the Canadian cartoonist Chester Brown plugged Dirty Plotte issue one as “the comic book event of the year” and, a year later, moved his own provocative, sometimes taboo-busting comic, Yummy Fur, to Drawn & Quarterly. Seth, another Canadian cartoonist, approached Oliveros about publishing his poignant new autobiographical series Palookaville the month the first issue of Dirty Plotte came out, and it became Drawn & Quarterly’s second series. Adrian Tomine, a high school student in California in the early Nineties, who would go on to become a New York literary darling, saw “infinite possibilities” in the Dirty Plotte comics; Optic Nerve, which Oliveros began publishing in 1995, was his response. The boys’ club that was Drawn & Quarterly’s early stable—Brown, Seth, Tomine, and Joe Matt—largely developed around Doucet’s work, an implicit argument against the persistent “great masters” notion of artistic production. Doucet, though perhaps lesser known today than many of her stablemates, was a central figure in the nascent alternative-comics scene. Oliveros has called her “the foundation of Drawn & Quarterly,” and it was his founding intent to publish comics by and for women.


Julie Doucet/Drawn & QuarterlyPanels from Julie Doucet’s “New York Diary,” issue ten, first published 1996, Dirty Plotte, 2018

Most of the final three issues of Dirty Plotte are given over to “My New York Diary,” a self-contained story about Doucet’s year-long stay in New York, where she abruptly moved in 1991 after falling in love there. The story (published as a standalone book, with some additional material, in 1996) charts her time with her lover in his seedy apartment, as the initial bloom of romance is dulled by his resentment at her successes and his overbearing dependence on her, as well as her health problems and the difficulty navigating what she finds to be a “merciless” city. In the end, she leaves the man and the city, with no regrets. “My New York Diary” is the most extended comic in Dirty Plotte; it is one of the very few that is straight autobiography, that is told in hindsight, and that follows a traditional narrative arc. It is a work of realism, yet its nuance and honesty, about female identity, agency, and representation, could not exist without the experimentation that preceded it; “My New York Diary” gathers those earlier ideas in order to spin out its tale of romantic and worldly experience. The comics of Dirty Plotte are indeed complete. Doucet quit making comics altogether a few years after concluding the series (her interest in text-and-image combinations persists in her recent collaged photo comics). Yet they remain, as the fifteen-year-old burgeoning cartoonist Geneviève Castrée once discovered, “a parallel world about home, a world away from home.”


Dirty Plotte: The Complete Julie Doucet is published by Drawn & Quarterly


The Don of Trumpery


Mandel Ngan/AFP/Getty ImagesPresident Trump at a rally at the Landers Center, Southaven, Mississippi, October 2, 2018

Trumpery

noun
1. attractive articles of little value or use.

adjective
2. showy but worthless.
“trumpery jewelry”

from the Old French tromper, “to deceive.”

synonyms
• cheesy, crappy, cut-rate, el cheapo, junky, lousy, rotten, schlocky, shoddy, sleazy, trashy

Making fun of other people’s names is one of the lowest forms of humor. But naming can also be an art. Victorian novelists like Charles Dickens named their characters to suggest moral traits: the inflexible pedant Thomas Gradgrind, the slimy Uriah Heep, the miserly Ebenezer Scrooge. In today’s Twitter and reality TV world, where we can name and rename ourselves ad libitum—every James Gatz his own Jay Gatsby—names can outstrip reality, and morality. We are all Reality Winners now. 

We happen to have a president who takes names very seriously, using them for specific purposes and according them strange powers. Having apprenticed himself to mobsters and wrestlers (great adopters of mythic nicknames), he has transformed politics into mass entertainment. He relishes the sound of names, especially his own. He surrounds himself with people whose names seem so appropriate to their roles, so closely aligning form and function—Price, Conway, Pecker, and the rest—that Dickens himself might have named them. Doesn’t Betsy DeVos, for example, have the faux-aristocratic sadism of a villainess from a children’s book, like Cruella de Vil? Let them eat vouchers. 

What gives additional piquancy to the names in Trump’s orbit is the way they seem constantly to be morphing into brands, advertisements for themselves.

For Trump, naming is branding. Extending the Trump brand appears to have been the central driver of his initially only half-serious presidential bid, and it continues to drive Trump’s presidency, as he dreams, no doubt, of a Trump Tower on Mars. His name is German, originally Drumpf. (Since my own name is a deformed German-Jewish name, far be it from me to make fun of Drumpf.) But in Trump: The Art of the Deal, Trump claimed that his grandfather came from Sweden as a boy. The Donald’s father, Fred Trump, apparently didn’t want to upset his Jewish tenants by revealing his German roots.

Donald Trump doesn’t so much name his kids as brand them. Tiffany and Barron are luxury brands. (Trump used to call himself “John Barron” when he pretended to be his own spokesman, giving journalists the inside dope on himself, so Barron Trump might as well be Trump Trump.) Donald Trump Jr. is a vanity brand, and Ivanka has become one, as Kellyanne Conway—that latter-day Becky Sharp who should write a book on alternative facts called The Way of the Con—discovered when she was chided for breaking White House rules by praising Ivanka’s boots on TV (not that the promotion worked: Ivanka recently closed her ailing fashion line). Among Trump’s children, only Eric seems to have escaped branding. He also seems, perhaps not coincidentally, to have escaped notice.

A significant part of Trump’s campaign was trafficking in pejorative nicknames—Crooked Hillary, Little Marco, Lying Ted, Low Energy Jeb. (He borrowed the idea of calling Elizabeth Warren “Pocahontas” from Shameless Scott Brown, who scorned her claim of Native American ancestry during his losing Senate run against her in 2012.) But Trump also believes in the power of positive nicknames. On the campaign trail he loved telling crowds that “Mad Dog” Mattis was his choice for Defense. He evidently wanted a general who acted like a mad dog, a sort of Mad Max on steroids. Among the attack dogs in the White House, led by the aptly named John Bolton—always in danger of bolting—Mattis has mercifully turned out to be the calmest of canines.

The president is said to choose his entourage by looks. Reportedly semi-phobic about facial hair on men, he was turned off by Bolton’s drooping mustache. But he also seems to pick his cohort in part by name. Hope Hicks should be the name of Perry Mason’s assistant, of every trustworthy assistant. And when you need someone to stir up trouble, get yourself a Scaramucci. Literally “little skirmisher,” Scaramouche was a clown in the Italian commedia dell’arte. Part servant and part henchman (Capitano), prone to boasting and cowardice, Scaramucci pretended, on stage, to be a Don—or perhaps a Donald.

Jefferson Beauregard Sessions III is a living, breathing Civil War monument. He was named for Jefferson Davis, president of the Confederate States of America, and for the Confederate General P.G.T. Beauregard, who ordered the first shot on Fort Sumter. In the 1956 film comedy Bus Stop, Beauregard “Bo” Decker tells Marilyn Monroe that his name means “good-looking.” A cunning, conniving climber of the Snopes variety, Sessions belongs in a Faulkner novel.

Pompous Mike Pompeo thrust out his ample chest for his photo-op with MBS, as they cooked up a “narrative” for the butchering of Jamal Khashoggi. “To make an omelette,” their fatuous expressions seemed to say, “you have to crack some eggs.” As the poet Randall Jarrell quipped, “That’s what they tell the eggs.”

Tom Price won a seat in the Cabinet to bring down prescription drug prices. Instead, he lined his pockets with investments in medical stocks he himself had boosted in value. He charged the public for his luxury travel (a practice known, I believe, as Zinking). Asked about it, he presumably said, “The Price is Right.”

David Pecker has a file on Donald’s pecker. Albeit “not freakishly small,” according to Stormy Daniels (née Stephanie Clifford), the First Pecker inspired her nickname for the president: “Tiny.”

And come to think of it, isn’t it odd to have a man named Mark Judge weighing in on the judgeship of his pal and drinking partner, whom he renames, in his memoir Wasted, Bart O’Kavanagh?

For the midterms, the president has discovered a sonic affinity between the words Kavanaugh and Caravan. He pronounces them like anagrams of each other, and repeats the name Kavanaugh like a secret mantra. Kavanaugh will build that wall. Kavanaugh will separate those immigrant families. For people on the left, however, Kavanaugh seems the ultimate disaster, the gift (German for poison) that keeps on taking. Trump will eventually go, but Kavanaugh will last forever.

In these dark times, we need a word for when things seem as though they can’t possibly get any worse, and then they do anyway. There are plenty of names to choose from.
