Political revolt and the accumulation of more.
This essay will appear in "Revolution," the Spring 2014 issue of Lapham's Quarterly. This slightly adapted version originally appeared at TomDispatch.
In case of rain, the revolution will take place in the hall.
For the last several years, the word “revolution” has been hanging around backstage on the national television talk-show circuit waiting for somebody, anybody—visionary poet, unemployed automobile worker, late-night comedian—to cue its appearance on camera. I picture the word sitting alone in the green room with the bottled water and a banana, armed with press clippings of its once-upon-a-time star turns in America’s political theater (tie-dyed and brassiere-less on the barricades of the 1960s countercultural insurrection, short-haired and seersucker smug behind the desks of the 1980s Reagan Risorgimento), asking itself why it’s not being brought into the segment between the German and the Japanese car commercials.
Surely even the teleprompter must know that it is the beast in the belly of the news reports, more of them every day in print and en blog, about income inequality, class conflict, the American police state. Why then does nobody have any use for it except in the form of the adjective, revolutionary, unveiling a new cellphone app or a new shade of lipstick?
I can think of several reasons, among them the cautionary tale told by the round-the-clock media footage of dead revolutionaries in Syria, Egypt, and Tunisia, also the certain knowledge that anything anybody says (on camera or off, to a hotel clerk, a Facebook friend, or an ATM) will be monitored for security purposes. Even so, the stockpiling of so much careful silence among people who like to imagine themselves on the same page with Patrick Henry—“Give me liberty, or give me death”—raises the question as to what has become of the American spirit of rebellion. Where have all the flowers gone, and what, if anything, is anybody willing to risk in the struggle for “Freedom Now,” “Power to the People,” “Change We Can Believe In”?
My guess is next to nothing that can’t be written off as a business expense or qualified as a tax deduction. Not in America at least, but maybe, with a better publicist and 50 percent of the foreign rights, somewhere east of the sun or west of the moon.
Revolt from Thomas Jefferson to the Colossal Dynamo
The hallowed American notion of armed rebellion as a civic duty stems from the letter that Thomas Jefferson writes from Paris in 1787 as a further commentary on the new Constitution drawn up that year in Philadelphia, a document that he thinks invests the state with an unnecessary power to declare the citizenry out of order. A mistake, says Jefferson, because no country can preserve its political liberties unless its rulers know that their people preserve the spirit of resistance, and with it ready access to gunpowder.
“The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants. It is its natural manure.”
Jefferson conceived of liberty and despotism as plantings in the soil of politics, products of human cultivation subject to changes in the weather, the difference between them not unlike that between the growing of an orchard and the draining of a cesspool, both understood as means of environmental protection. It is the turning of the seasons and the cyclical motions of the stars that Jefferson has in mind when in his letter he goes on to say, “God forbid we should ever be 20 years without such a rebellion”—i.e., one conceived not as a lawless upheaval but as a lawful recovery.
The 20th-century philosopher and political scientist Hannah Arendt says that the American Revolution was intended as a restoration of what its progenitors believed to be a natural order of things “disturbed and violated” by the despotism of an overbearing monarchy and the abuses of its colonial government. During the hundred years prior to the Declaration of Independence, the Americans had developed tools of political management (church congregations, village assemblies, town meetings) with which to govern themselves in accordance with what they took to be the ancient liberties possessed by their fellow Englishmen on the far side of the Atlantic Ocean. They didn’t bear the grievances of a subjugated populace, and the seeds of revolt were nowhere blowing in the wind until the British crown demanded new, and therefore unlawful, tax money.
Arendt’s retrieval of the historical context leads her to say of the war for independence that it was “not revolutionary except by inadvertence.” To sustain the point she calls on Benjamin Franklin’s memory of the years preceding the shots fired at Lexington in April 1775: “I never had heard in any conversation from any person, drunk or sober, the least expression of a wish for a separation, or hint that such a thing would be advantageous to America.” The men who came to power after the Revolution were the same men who held power before the Revolution, their new government grounded in a system of thought that was, in our modern parlance, conservative.
Born 13 years later under the fixed star of a romantic certainty, the French Revolution was advertent, a violent overthrow of what its proponents, among them Maximilien de Robespierre, perceived as an unnatural order of things. Away with the old, in with the new; kill the king, remove the statues, reset the clocks, welcome to a world that never was but is soon to come.
The freedom-loving songs and slogans were well suited to the work of ecstatic demolition, but a guillotine is not a living tree, and although manured with the blood of aristocrats and priests, it failed to blossom with the leaves of political liberty. An armed mob of newly baptized citoyens stormed the Bastille in 1789; Napoleon in 1804 crowned himself emperor in the cathedral of Notre Dame.
Jefferson’s thinking had been informed by his study of nature and history, Robespierre’s by his reading of Rousseau’s poetics. Neither set of political ideas brought forth the dream-come-true products of the 19th-century Industrial Revolution—new worlds being born every day of the week, the incoming tide of modern manufacture and invention (the cotton gin, gas lighting, railroads) washing away the sand castles of medieval religion and Renaissance humanism, dismantling Robespierre’s reign of virtue, uprooting Jefferson’s tree of liberty.
So it is left to Karl Marx, along with Friedrich Engels, to acknowledge the arrival of the new world that never was with the publication in German of the Communist Manifesto in 1848: “The bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society.”
Men shape their tools, their tools shape their relations with other men, and the rain it raineth every day in a perfect storm of creative destruction that is amoral and relentless. The ill wind, according to Marx, blows from any and all points of the political compass with the “single, unconscionable freedom—free trade,” which resolves “personal worth into exchange value,” substitutes “callous ‘cash payment’” for every other form of human meaning and endeavor, devotes its all-devouring enthusiasms to “naked, shameless, direct, brutal exploitation.”
Over the course of the 19th century, the energies of the capitalist dynamic take full and proud possession of the whole of Western society. They become, in Marx’s analysis, the embodiment of “the modern representative state,” armed with the wealth of its always newer and more powerful machines (electricity, photography, the telephone, the automobile) and staffed by executives (i.e., politicians, no matter how labeled) who function as “a committee for managing the common affairs of the whole bourgeoisie.”
What Marx sees in theory as an insatiable abstraction, the American historian Henry Adams sees as concrete and overwhelming fact. Marx is 17 years dead and the Communist Manifesto a sacred text among the left-wing intelligentsia everywhere in Europe when Adams, his habit of mind as profoundly conservative as that of his great-grandfather, stands in front of a colossal dynamo at the Paris Exposition in 1900 and knows that Prometheus, no longer chained to his ancient rock, bestrides the Earth wearing J.P. Morgan’s top hat and P.T. Barnum’s cloak of as many colors as the traffic will bear. Adams shares with Marx the leaning toward divine revelation:
“To Adams the dynamo became a symbol of infinity. As he grew accustomed to the great gallery of machines, he began to feel the 40-foot dynamos as a moral force, much as the early Christians felt the Cross. The planet itself seemed less impressive, in its old-fashioned, deliberate, annual or daily revolution, than this huge wheel, revolving within arm’s length at some vertiginous speed... Before the end, one began to pray to it; inherited instinct taught the natural expression of man before silent and infinite force.”
The '60s Swept Away in a Whirlwind of Commodities and Repressive Surveillance
I inherited the instinct as a true-born American bred to the worship of both machinery and money; an appreciation of its force I acquired during a lifetime of reading newspaper reports of political uprisings in the provinces of the bourgeois world state—in China, Israel, and Greece in the 1940s; in the 1950s those in Hungary, Cuba, Guatemala, Algeria, Egypt, Bolivia, and Iran; in the 1960s in Vietnam, France, America, Ethiopia, and the Congo; in the 1970s and 1980s in El Salvador, Poland, Nicaragua, Kenya, Argentina, Chile, Indonesia, Czechoslovakia, Turkey, Jordan, Cambodia, again in Iran; over the last 24 years in Russia, Venezuela, Lebanon, Croatia, Bosnia, Libya, Tunisia, Syria, Ukraine, Iraq, Somalia, South Africa, Romania, Sudan, again in Algeria and Egypt.
The plot line tends to repeat itself—first the new flag on the roof of the palace, rapturous crowds in the streets waving banners; then searches, requisitions, massacres, severed heads raised on pikes; soon afterward the transfer of power from one police force to another police force, the latter more repressive than the former (darker uniforms, heavier motorcycles) because more frightened of the social and economic upheavals they can neither foresee nor control.
All the shiftings of political power produced changes within the committees managing regional budgets and social contracts on behalf of the bourgeois imperium. None of them dethroned or defenestrated Adams’ dynamo or threw off the chains of Marx’s cash nexus. That they could possibly do so is the “romantic idea” that Albert Camus, correspondent for the French Resistance newspaper Combat during and after World War II, sees in 1946 as having been “consigned to fantasy by advances in the technology of weaponry.”
The French philosopher Simone Weil draws a corollary lesson from her acquaintance with the Civil War in Spain, and from her study of the communist Sturm und Drang in Russia, Germany, and France subsequent to World War I. “One magic word today seems capable of compensating for all sufferings, resolving all anxieties, avenging the past, curing present ills, summing up all future possibilities: that word is revolution... This word has aroused such pure acts of devotion, has repeatedly caused such generous blood to be shed, has constituted for so many unfortunates the only source of courage for living, that it is almost a sacrilege to investigate it; all this, however, does not prevent it from possibly being meaningless.”
During the turbulent decade of the 1960s in the United States, the advancing technologies of bourgeois news production (pictures in place of print) transformed the meaningless magic word into a profitable commodity, marketing it both as deadly menace and lively fashion statement. The commercial putsch wasn’t organized by the CIA or planned by a consortium of advertising agencies; it evolved in two stages as a function of the capitalist dynamic that substitutes cash payment for every other form of human meaning and endeavor.
The disorderly citizenry furnishing the television footage in the early sixties didn’t wish to overthrow the government of the United States. Nobody was threatening to reset the game clock in the Rose Bowl, tear down Grand Central Terminal, or remove the Lincoln Memorial. The men, women, and children confronting racist tyranny in the American South—sitting at a lunch counter in Alabama, riding a bus into Mississippi, going to school in Arkansas—risked their lives in pure acts of devotion, refreshing the tree of liberty with the blood of patriots.
The Civil Rights movement and later the anti-Vietnam War protests were reformative, not revolutionary, the expression of democratic objection and dissent in accord with the thinking of Jefferson, also with President John F. Kennedy’s having said in his 1961 inaugural address, “Ask not what your country can do for you—ask what you can do for your country.” Performed as a civic duty, the unarmed rebellions led to the enactment in the mid-1960s of the Economic Opportunity Act, the Voting Rights Act, the Medicare and Medicaid programs, eventually to the shutting down of the war in Vietnam.
The television camera, however, isn’t much interested in political reform (slow, tedious, and unphotogenic) and so, even in the first years of protest, the news media presented the trouble running around loose in the streets as a revolution along the lines of the one envisioned by Robespierre. Caught in the chains of the cash nexus, they couldn’t do otherwise. The fantasy of armed revolt sold papers, boosted ratings, monetized the fears at all times running around loose in the heads of the propertied classes.
The multiple wounds in the body politic over the course of the decade—the assassination of President Kennedy, big-city race riots, student riots at venerable universities, the assassinations of Dr. Martin Luther King Jr. and Senator Robert F. Kennedy—amplified the states of public alarm. The fantastic fears of violent revolt awakened by a news media in search of a profit stimulated the demand for repressive surveillance and heavy law enforcement that over the last 50 years has blossomed into one of the richest and most innovative of the nation’s growth industries. For our own good, of course, and without forgoing our constitutional right to shop.
God forbid that the excitement of the 1960s should in any way have interfered with the constant revolutionizing of the bourgeois desire for more dream-come-true products to consume and possess. The advancing power of the media solved what might have become a problem by disarming the notion of revolution as a public good, rebranding it as a private good. Again it was impossible for the technology to do otherwise.
The medium is the message, and because the camera sees but doesn’t think, it substitutes the personal for the impersonal; whether in Hollywood restaurants or Washington committee rooms, the actor takes precedence over the act. What is wanted is a flow of emotion, not a train of thought, a vocabulary of images better suited to the selling of a product than to the expression of an idea. Narrative becomes montage, and as commodities acquire the property of information, the amassment of wealth follows from the naming of things rather than the making of things.
The voices of conscience in the early 1960s spoke up for a government of laws, not men, for a principle as opposed to a lifestyle. By the late 1960s the political had become personal, the personal political, and it was no longer necessary to ask what one must do for one’s country. The new-and-improved question, available in a wide range of colors, flower arrangements, cosmetics, and musical accompaniments, underwrote the second-stage commodification of the troubled spirit of the times.
Writing about the socialist turbulence on the late-1930s European left, Weil lists among the acolytes of the magic word, “the bourgeois adolescent in rebellion against home surroundings and school routine, the intellectual yearning for adventure and suffering from boredom.” So again in America in the late 1960s, radical debutantes wearing miniskirts and ammunition belts, Ivy League professors mounting the steps of the Pentagon, self-absorbed movie actors handing around anarchist manifestos to self-important journalists seated at the tables in Elaine’s.
By the autumn of 1968 the restaurant on the Upper East Side of Manhattan served as a Station of the Cross for the would-be revolutionaries briefly in town for an interview with Time or a photo shoot for Vogue, and as a frequent guest of the restaurant, I could see on nearly any night of the week the birth of a new and imaginary self soon to become a boldfaced name. Every now and then I asked one of the wandering stars what it was that he or she hoped to have and to hold once the revolution was won. Most of them were at a loss for an answer. What they knew, they did not want; what they wanted, they did not know, except, of course, more—more life, more love, more drugs, more celebrity, more happiness, more music.
On Becoming an Armed Circus
As a consequence of the political becoming personal, by the time the 1960s moved on to the 1980s and President Reagan’s Morning in America, it was no longer possible to know oneself as an American citizen without the further identification of at least one value-adding, consumer-privileged adjective—female American, rich American, black American, Native American, old American, poor American, gay American, white American, dead American. The costumes changed, and so did the dossier of the malcontents believing themselves entitled to more than they already had.
A generation of dissatisfied bourgeois reluctant to grow up gave way to another generation of dissatisfied bourgeois unwilling to grow old. The locus of the earthly Paradise shifted from a commune in the White Mountains to a gated golf resort in Palm Springs, and the fond hope of finding oneself transformed into an artist segued into the determined effort to make oneself rich. What remained constant was the policy of enlightened selfishness and the signature bourgeois passion for more plums in the pudding.
While making a magical mystery tour of the Central American revolutionary scene in 1987, Deb Olin Unferth remarks on the work in progress: “Compared to El Salvador, Nicaragua was like Ping-Pong ... like a cheerful communist kazoo concert ... We were bringing guitars, plays adapted from Nikolai Gogol, elephants wearing tasseled hats. I saw it myself and even then I found it a bit odd. The Nicaraguans wanted land, literacy, a decent doctor. We wanted a nice singalong and a ballet. We weren’t a revolution. We were an armed circus.”
As a descriptive phrase for what American society has become over the course of the last five decades, armed circus is as good as any and better than most. The constantly revolutionizing technologies have been spinning the huge bourgeois wheel of fortune at the speed of light, remaking the means of production in every field of human meaning and endeavor—media, manufacturing, war, finance, literature, crime, medicine, art, transport, and agriculture.
The storm wind of creative destruction it bloweth every day, removing steel mills, relocating labor markets, clearing the ground for cloud storage. On both sides of the balance sheet, the accumulations of more—more microbreweries and Internet connections, more golf balls, cheeseburgers, and cruise missiles; also more unemployment, more pollution, more obesity, more dysfunctional government and criminal finance, more fear. The too much of more than anybody knows what to do with obliges the impresarios of the armed circus to match the gains of personal liberty (sexual, social, economic, if one can afford the going price) with more repressive systems of crowd control.
To look back to the early 1960s is to recall a society in many ways more open and free than it has since become, when a pair of blue jeans didn’t come with a radio-frequency ID tag, when it was possible to appear for a job interview without a urine sample, to say in public what is now best said not at all. So frightened of its own citizens that it classifies them as probable enemies, the U.S. government steps up its scrutiny of what it chooses to regard as a mob. So intrusive is the surveillance that nobody leaves home without it. Tens of thousands of cameras installed in the lobbies of office and apartment buildings and in the eye sockets of the mannequins in department-store windows register the comings and goings of a citizenry deemed unfit to mind its own business.
The social contract offered by the managing agents of the bourgeois state doesn’t extend the privilege of political revolt, a point remarked upon by the Czech playwright Václav Havel just prior to being imprisoned in the late 1970s by the Soviet-backed regime then governing Czechoslovakia: “No attempt at revolt could ever hope to set up even a minimum of resonance in the rest of society, because that society is ‘soporific,’ submerged in a consumer rat race ... Even if revolt were possible, however, it would remain the solitary gesture of a few isolated individuals, and they would be opposed not only by a gigantic apparatus of national (and supranational) power, but also by the very society in whose name they were mounting their revolt in the first place.”
The observation accounts for the past sell-by date of the celebrity guest alone and palely loitering in the green room with the bottled water and the banana. Who has time to think or care about political change when it’s more than enough trouble to save oneself from drowning in the flood of technological change? All is not lost, however, for the magic word that stormed the Bastille and marched on the tsar’s winter palace; let it give up its career as a noun, and as an adjective it can look forward to no end of on-camera promotional appearances with an up-and-coming surgical procedure, breakfast cereal, or video game.
Lewis H. Lapham is editor of Lapham’s Quarterly and a TomDispatch regular. Formerly editor of Harper’s Magazine, he is the author of numerous books, including Money and Class in America, Theater of War, Gag Rule, and most recently, Pretensions to Empire. The New York Times has likened him to H.L. Mencken; Vanity Fair has suggested a strong resemblance to Mark Twain; and Tom Wolfe has compared him to Montaigne.
Copyright 2014 Lewis Lapham
Image above, Liberty Leading the People, by Eugène Delacroix (public domain).
The DoD cuts that didn't actually happen.
This article originally appeared at TomDispatch.
Washington is pushing the panic button, claiming austerity is hollowing out our armed forces and our national security is at risk. That was the message Secretary of Defense Chuck Hagel delivered last week when he announced that the Army would shrink to levels not seen since before World War II. Headlines about this crisis followed in papers like the New York Times and members of Congress issued statements swearing that they would never allow our security to be held hostage to the budget-cutting process.
Yet a careful look at budget figures for the U.S. military—a bureaucratic juggernaut accounting for 57 percent of the federal discretionary budget and nearly 40 percent of all military spending on this planet—shows that such claims have been largely fictional. Despite cries of doom since the across-the-board cuts known as sequestration surfaced in Washington in 2011, the Pentagon has seen few actual reductions, and there is no indication that will change any time soon.
This piece of potentially explosive news has, however, gone missing in action—and the “news” that replaced it could prove to be one of the great bait-and-switch stories of our time.
The Pentagon Cries Wolf, Round One
As sequestration first approached, the Pentagon issued deafening cries of despair. Looming cuts would “inflict lasting damage on our national defense and hurt the very men and women who protect this country,” said Secretary Hagel in December 2012.
Sequestration went into effect in March 2013 and was slated to slice $54.6 billion from the Pentagon’s $550 billion budget (a sum larger than the entire economy of Sweden). But Congress didn’t have the stomach for it, so lawmakers knocked the cuts down to $37 billion. (Domestic programs like Head Start and cancer research received no such special dispensation.)
By law, the cuts were to be applied across the board. But that, too, didn’t go as planned. The Pentagon was able to do something hardly recognizable as a cut at all. Having the luxury of unspent funds from previous budgets—known obscurely as “prior year unobligated balances”—officials reallocated some of the cuts to those funds instead.
In the end, the Pentagon shaved about 5.7 percent, or $31 billion, from its 2013 budget. And just how painful did that turn out to be? Frank Kendall, who serves as the Undersecretary of Defense for Acquisition, Technology, and Logistics, has acknowledged that the Pentagon “cried wolf.” Those cuts caused no substantial damage, he admitted.
And that’s not where the story ends—it’s where it begins.
Sequestration, the Phony Budget War, Round Two
A $54.6 billion slice was supposed to come out of the Pentagon budget in 2014. If that had actually happened, it would have amounted to around 10 percent of its budget. But after the hubbub over the supposedly devastating cuts of 2013, lawmakers set about softening the blow.
And this time they did a much better job.
In December 2013, a budget deal was brokered by Republican Congressman Paul Ryan and Democratic Senator Patty Murray. In it they agreed to reduce sequestration. Cuts for the Pentagon soon shrank to $34 billion for 2014.
And that was just a start.
All the cuts discussed so far pertain to what’s called the Pentagon’s “base” budget—its regular peacetime budget. That, however, doesn’t represent all of its funding. It gets a whole different budget for making war, and for the 13th year, the U.S. is making war in Afghanistan. For that part of the budget, which falls into the Washington category of “Overseas Contingency Operations” (OCO), the Pentagon is getting an additional $85 billion in 2014.
And this is where something funny happens.
That war funding isn’t subject to caps or cuts or any restrictions at all. So imagine for a moment that you’re an official at the Pentagon—or the White House—and you’re committed to sparing the military from downsizing. Your budget has two parts: one that’s subject to caps and cuts, and one that isn’t. What do you do? When you hit a ceiling in the former, you stuff extra cash into the latter.
It takes a fine-toothed comb to discover how this is done. Todd Harrison, senior fellow for defense studies at the Center for Strategic and Budgetary Assessments, found that the Pentagon was stashing an estimated extra $20 billion worth of non-war funding in the “operation and maintenance” accounts of its proposed 2014 war budget. And since all federal agencies work in concert with the White House to craft their budget proposals, it’s safe to say that the Obama administration was in on the game.
Add the December budget deal to this $20 billion switcheroo and the sequester cuts for 2014 were now down to $14 billion, hardly a devastating sum given the roughly $550 billion in previously projected funding.
And the story’s still not over.
When it was time to write the Pentagon budget into law, appropriators in Congress wanted in on the fun. As Winslow Wheeler of the Project on Government Oversight discovered, lawmakers added a $10.8 billion slush fund to the war budget.
All told, that leaves $3.4 billion—a cut of less than 1 percent from Pentagon funding this year. It’s hard to imagine that anyone in the sprawling bureaucracy of the Defense Department will even notice. Nonetheless, last week Secretary Hagel insisted that “[s]equestration requires cuts so deep, so abrupt, so quickly that ... the only way to implement [them] is to sharply reduce spending on our readiness and modernization, which would almost certainly result in a hollow force.”
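The whittling-down described above can be tallied in a few lines. All figures below are the article’s own (in billions of dollars); with these round numbers the arithmetic comes to $3.2 billion, within rounding distance of the $3.4 billion the article reports, and either way well under 1 percent of the roughly $550 billion in previously projected funding.

```python
# Tallying the 2014 sequestration arithmetic, using the figures
# cited in the article (all amounts in billions of dollars).

original_cut = 54.6        # sequester cut slated for 2014
after_ryan_murray = 34.0   # after the December 2013 Ryan-Murray budget deal
oco_shift = 20.0           # non-war funding stashed in the OCO war budget
slush_fund = 10.8          # added by congressional appropriators

# Each maneuver offsets part of the remaining cut.
effective_cut = after_ryan_murray - oco_shift - slush_fund

base_budget = 550.0        # previously projected Pentagon funding

print(f"effective cut: ${effective_cut:.1f} billion")
print(f"share of projected funding: {effective_cut / base_budget:.2%}")
```

The point of the exercise is simply that three separate maneuvers, none labeled a repeal of sequestration, together erased more than 90 percent of the scheduled reduction.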
Yet this less than 1 percent cut comes from a budget that, at last count, was the size of the next 10 largest military budgets on the planet combined. If you can find a threat to our national security in this story, your sleuthing powers are greater than mine. Meanwhile, in the non-military part of the budget, sequestration has brought cuts that actually matter to everything from public education to the justice system.
Cashing in on the “Cuts,” Round Three and Beyond
After two years of uproar over mostly phantom cuts, 2015 isn’t likely to bring austerity to the Pentagon either. Last December’s budget deal already reduced the cuts projected for 2015, and President Obama is now asking for something he’s calling the “Opportunity, Growth, and Security Initiative.” It would deliver an extra $26 billion to the Pentagon next year. And that still leaves the war budget for officials to use as a cash cow.
And the president is proposing significant growth in military spending further down the road. In his 2015 budget plan, he’s asking Congress to approve an additional $115 billion in extra Pentagon funds for the years 2016-2019.
My guess is he’ll claim that our national security requires it after the years of austerity.
Mattea Kramer is a TomDispatch regular and Research Director at National Priorities Project, which is a 2014 nominee for the Nobel Peace Prize. She is also the lead author of the book A People's Guide to the Federal Budget.
Copyright 2014 Mattea Kramer
Image by U.S. Navy
What the modern world owes slavery
This article originally appeared at TomDispatch.
Many in the United States were outraged by the remarks of conservative evangelical preacher Pat Robertson, who blamed Haiti’s catastrophic 2010 earthquake on Haitians for selling their souls to Satan. Bodies were still being pulled from the rubble—as many as 300,000 died—when Robertson went on TV and gave his viewing audience a little history lesson: the Haitians had been "under the heel of the French" but they "got together and swore a pact to the devil. They said, 'We will serve you if you will get us free from the French.' True story. And so, the devil said, 'OK, it's a deal.'"
A supremely callous example of right-wing idiocy? Absolutely. Yet in his own kooky way, Robertson was also onto something. Haitians did, in fact, swear a pact with the devil for their freedom. Only Beelzebub arrived smelling not of sulfur, but of Parisian cologne.
Haitian slaves began to throw off the “heel of the French” in 1791, when they rose up and, after bitter years of fighting, eventually declared themselves free. Their French masters, however, refused to accept Haitian independence. The island, after all, had been an extremely profitable sugar producer, and so Paris offered Haiti a choice: compensate slave owners for lost property—their slaves (that is, themselves)—or face its imperial wrath. The fledgling nation was forced to finance this payout with usurious loans from French banks. As late as 1940, 80% of the government budget was still going to service this debt.
In the on-again, off-again debate that has taken place in the United States over the years about paying reparations for slavery, opponents of the idea insist that there is no precedent for such a proposal. But there is. It’s just that what was being paid was reparations-in-reverse, which has a venerable pedigree. After the War of 1812 between Great Britain and the U.S., London reimbursed southern planters more than a million dollars for having encouraged their slaves to run away in wartime. Within the United Kingdom, the British government also paid a small fortune to British slave owners, including the ancestors of Britain’s current Prime Minister, David Cameron, to compensate for abolition (which Adam Hochschild calculated in his 2005 book Bury the Chains to be “an amount equal to roughly 40% of the national budget then, and to about $2.2 billion today”).
Advocates of reparations—made to the descendants of enslaved peoples, not to their owners—tend to calculate the amount due based on the negative impact of slavery. They want to redress either unpaid wages during the slave period or injustices that took place after formal abolition (including debt servitude and exclusion from the benefits extended to the white working class by the New Deal). According to one estimate, for instance, 222,505,049 hours of forced labor were performed by slaves between 1619 and 1865, when slavery was ended.
Compounded at interest and calculated in today’s currency, this adds up to trillions of dollars.
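As a rough illustration of how such estimates are built, the arithmetic can be sketched in a few lines of Python. The hours figure comes from the estimate above; the wage rate and interest rate are purely hypothetical assumptions, chosen only to show how compounding over a century and a half drives even a modest principal into the trillions:

```python
# Illustrative sketch only. HOURS is the estimate cited in the text;
# WAGE and RATE are hypothetical assumptions, not historical claims.
HOURS = 222_505_049   # estimated hours of forced labor, 1619-1865
WAGE = 0.25           # assumed period wage in dollars/hour (hypothetical)
RATE = 0.07           # assumed annual interest rate (hypothetical)
YEARS = 2014 - 1865   # compounding from abolition to the essay's present

principal = HOURS * WAGE
total = principal * (1 + RATE) ** YEARS

print(f"unpaid wages (principal): ${principal:,.0f}")
print(f"compounded to the present: ${total:,.0f}")
```

Under these particular assumptions the principal is about $56 million and the compounded total exceeds a trillion dollars; different (equally defensible) wage and interest assumptions move the result by orders of magnitude, which is why published reparations estimates vary so widely.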
But back pay is, in reality, the least of it. The modern world owes its very existence to slavery.
Voyage of the Blind
Consider, for example, the way the advancement of medical knowledge was paid for with the lives of slaves.
The death rate on the trans-Atlantic voyage to the New World was staggeringly high. Slave ships, however, were more than floating tombs. They were floating laboratories, offering researchers a chance to examine the course of diseases in fairly controlled, quarantined environments. Doctors and medical researchers could take advantage of high mortality rates to identify a bewildering number of symptoms, classify them into diseases, and hypothesize about their causes.
Corps of doctors tended to slave ports up and down the Atlantic seaboard. Some of them were committed to relieving suffering; others were simply looking for ways to make the slave system more profitable. In either case, they identified types of fevers, learned how to decrease mortality and increase fertility, experimented with how much water was needed for optimum numbers of slaves to survive on a diet of salted fish and beef jerky, and identified the best ratio of caloric intake to labor hours. Priceless epidemiological information on a range of diseases—malaria, smallpox, yellow fever, dysentery, typhoid, cholera, and so on—was gleaned from the bodies of the dying and the dead.
When slaves couldn’t be kept alive, their autopsied bodies still provided useful information. Of course, as the writer Harriet Washington has demonstrated in her stunning Medical Apartheid, such experimentation continued long after slavery ended: in the 1940s, one doctor said that the “future of the Negro lies more in the research laboratory than in the schools.” As late as the 1960s, another researcher, reminiscing in a speech given at Tulane Medical School, said that it was “cheaper to use Niggers than cats because they were everywhere and cheap experimental animals.”
Medical knowledge slowly filtered out of the slave industry into broader communities, since slavers made no proprietary claims on the techniques or data that came from treating their slaves. For instance, an epidemic of blindness that broke out in 1819 on the French slaver Rôdeur, which had sailed from Bonny Island in the Niger Delta with about 72 slaves on board, helped eye doctors identify the causes, patterns, and symptoms of what is today known as trachoma.
The disease first appeared on the Rôdeur not long after it set sail, initially in the hold among the slaves and then on deck. In the end, it blinded all the voyagers except one member of the crew. According to a passenger’s account, sightless sailors worked under the direction of that single man “like machines” tied to the captain with a thick rope. “We were blind—stone blind, drifting like a wreck upon the ocean,” he recalled. Some of the sailors went mad and tried to drink themselves to death. Others retired to their hammocks, immobilized. Each “lived in a little dark world of his own, peopled by shadows and phantasms. We did not see the ship, nor the heavens, nor the sea, nor the faces of our comrades.”
But they could still hear the cries of the blinded slaves in the hold.
This went on for 10 days, through storms and calms, until the voyagers heard the sound of another ship. The Spanish slaver San León had drifted alongside the Rôdeur. But the entire crew and all the slaves of that ship, too, had been blinded. When the sailors of each vessel realized this “horrible coincidence,” they fell into a silence “like that of death.” Eventually, the San León drifted away and was never heard from again.
The Rôdeur’s one seeing mate managed to pilot the ship to Guadeloupe, an island in the Caribbean. By now, a few of the crew, including the captain, had regained some of their vision. But 39 of the Africans hadn’t. So before entering the harbor the captain decided to drown them, tying weights to their legs and throwing them overboard. The ship was insured and their loss would be covered: the practice of insuring slaves and slave ships meant that slavers weighed the benefits of a dead slave versus living labor and acted accordingly.
Events on the Rôdeur caught the attention of Sébastien Guillié, chief of medicine at Paris’s Royal Institute for Blind Youth. He wrote up his findings—which included a discussion of the disease’s symptoms, the manner in which it spread, and best treatment options—and published them in Bibliothèque Ophtalmologique, which was then cited in other medical journals as well as in an 1846 U.S. textbook, A Manual of the Diseases of the Eye.
Slaves spurred forward medicine in other ways, too. Africans, for instance, were the primary victims of smallpox in the New World and were also indispensable to its eradication. In the early 1800s, Spain ordered that all its American subjects be vaccinated against the disease, but didn’t provide enough money to carry out such an ambitious campaign. So doctors turned to the one institution that already reached across the far-flung Spanish Empire: slavery. They transported the live smallpox vaccine in the arms of Africans being moved along slave routes as cargo from one city to another to be sold: doctors chose one slave from a consignment, made a small incision in his or her arm, and inserted the vaccine (a mixture of lymph and pus containing the cowpox virus). A few days after the slaves set out on their journey, pustules would appear in the arm where the incision had been made, providing the material to perform the procedure on yet another slave in the lot—and then another and another until the consignment reached its destination. Thus the smallpox vaccine was disseminated through Spanish America, saving countless lives.
Slavery’s Great Schism
In 1945, Allied troops marched into the first of the Nazi death camps. What they saw inside, many have remarked, forced a radical break in the West’s moral imagination. The Nazi genocide of Jews, one scholar has written, is history’s “black hole,” swallowing up all the theological, ethical, and philosophical certainties that had earlier existed.
Yet before there was the Holocaust, there was slavery, an institution that also transformed the West’s collective consciousness, as I’ve tried to show in my new book, The Empire of Necessity: Slavery, Freedom, and Deception in the New World.
Take, for example, the case of the Joaquín, a Portuguese frigate that left Mozambique in late 1803 with 301 enslaved East Africans. Nearly six months later, when a port surgeon opened the ship’s hatch in Montevideo, Uruguay, he was sickened by what he saw: only 31 bone-thin survivors in a foul, bare room, otherwise empty save for hundreds of unused shackles.
City officials convened a commission of inquiry to explain the deaths of the other 270 slaves, calling on the expertise of five surgeons—two British doctors, a Spaniard, a Swiss Italian, and one from the United States. The doctors testified that before boarding the Joaquín, the captives would have felt extreme anguish, having already been forced to survive on roots and bugs until arriving on the African coast emaciated and with their stomachs distended. Then, once on the ocean, crowded into a dark hold with no ventilation, they would have had nothing to do other than listen to the cries of their companions and the clanking of their chains. Many would have gone mad trying to make sense of their situation, trying to ponder “the imponderable.” The surgeons decided that the East Africans had died from dehydration and chronic diarrhea, aggravated by the physical and psychological hardships of slavery—from, that is, what they called “nostalgia,” “melancholia,” and “cisma,” a Spanish word that loosely means brooding or mourning.
The collective opinion of the five surgeons—who represented the state of medical knowledge in the U.S., Great Britain, and Spain—reveals the way slavery helped in what might be called the disenchanting of medicine. In it you can see how doctors dealing with the slave trade began taking concepts like melancholia out of the hands of priests, poets, and philosophers and giving them actual medical meaning.
Prior to the arrival of the Joaquín in Montevideo, for instance, the Royal Spanish Academy was still associating melancholia with actual nighttime demon possession. Cisma literally meant schism, a theological concept Spaniards used to refer to the spiritual split personality of fallen man. The doctors investigating the Joaquín, however, used these concepts in a decidedly secular, matter-of-fact manner and in ways that unmistakably affirmed the humanity of slaves. To diagnose enslaved Africans as suffering from nostalgia and melancholia was to acknowledge that they had selves that could be lost, inner lives that could suffer schism or alienation, and pasts over which they could mourn.
Two decades after the incident involving the Joaquín, the Spanish medical profession no longer thought melancholia to be caused by an incubus, but considered it a type of delirium, often related to seasickness. Medical dictionaries would later describe the condition in terms similar to those used by critics of the Middle Passage—as caused by rancid food, too close contact, extreme weather, and above all the “isolation” and “uniform and monotonous life” one experiences at sea. As to nostalgia, one Spanish dictionary came to define it as “a violent desire compelling those taken out of their country to return home.”
It was as if each time a doctor threw back a slave hatch to reveal the human-made horrors below, it became a little bit more difficult to blame mental illness on demons.
In the case of the Joaquín, however, the doctors didn’t extend the logic of their own reasoning to the slave trade and condemn it. Instead, they focused on the hardships of the Middle Passage as a technical concern. “It is in the interests of commerce and humanity,” said the Connecticut-born, Edinburgh-educated John Redhead, “to get slaves off their ships as soon as possible.”
Follow the Money
Slavery transformed other fields of knowledge as well. For instance, centuries of buying and selling human beings, of shipping them across oceans and continents, of defending, excoriating, or trying to reform the practice, revolutionized both Christianity and secular law, giving rise to what we think of as modern human rights law.
In the realm of economics, the importance of slaves went well beyond the wealth generated from their uncompensated labor. Slavery was the flywheel on which America’s market revolution turned—not just in the United States, but in all of the Americas.
Starting in the 1770s, Spain began to deregulate the slave trade, hoping to establish what merchants, not mincing any words, called a “free trade in blacks.” Decades before slavery exploded in the United States (following the War of 1812 with Great Britain), the slave population increased dramatically in Spanish America. Enslaved Africans and African Americans slaughtered cattle and sheared wool on the pampas of Argentina, spun cotton and wove clothing in textile workshops in Mexico City, and planted coffee in the mountains outside Bogotá. They fermented grapes for wine at the foot of the Andes and boiled Peruvian sugar to make candy. In Guayaquil, Ecuador, enslaved shipwrights built cargo vessels that were used for carrying more slaves from Africa to Montevideo. Throughout the thriving cities of mainland Spanish America, slaves worked, often for wages, as laborers, bakers, brick makers, liverymen, cobblers, carpenters, tanners, smiths, rag pickers, cooks, and servants.
It wasn’t just their labor that spurred the commercialization of society. The driving of more and more slaves inland and across the continent, the opening up of new slave routes and the expansion of old ones, tied hinterland markets together and created local circuits of finance and trade. Enslaved peoples were investments (purchased and then rented out as laborers), credit (used to secure loans), property, commodities, and capital, making them an odd mix of abstract and concrete value. Collateral for loans and items for speculation, slaves were also objects of nostalgia, mementos of a fading aristocratic world even as they served as the coin for the creation of a new commercialized one.
Slaves literally made money: working in Lima’s mint, they trampled quicksilver into ore with their bare feet, pressing toxic mercury into their bloodstream in order to amalgamate the silver used for coins. And they were money—at least in a way. It wasn’t that the value of individual slaves was standardized in relation to currency, but that slaves were quite literally the standard. When appraisers calculated the value of any given hacienda, or estate, slaves usually accounted for over half of its worth; they were, that is, much more valuable than inanimate capital goods like tools and millworks.
In the United States, scholars have demonstrated that profit wasn’t made just from southerners selling the cotton that slaves picked or the cane they cut. Slavery was central to the establishment of the industries that today dominate the U.S. economy: finance, insurance, and real estate. And historian Caitlin Rosenthal has shown how Caribbean slave plantations helped pioneer “accounting and management tools, including depreciation and standardized efficiency metrics, to manage their land and their slaves”—techniques that were then used in northern factories.
Slavery, as the historian Lorenzo Greene argued half a century ago, “formed the very basis of the economic life of New England: about it revolved, and on it depended, most of her other industries.” Fathers grew wealthy building slave ships or selling fish, clothing, and shoes to slave islands in the Caribbean; when they died, they left their money to sons who “built factories, chartered banks, incorporated canal and railroad enterprises, invested in government securities, and speculated in new financial instruments.” In due course, they donated to build libraries, lecture halls, botanical gardens, and universities, as Craig Steven Wilder has revealed in his new book, Ebony and Ivy.
In Great Britain, historians have demonstrated how the “reparations” paid to slave-owning families “fuelled industry and the development of merchant banks and marine insurance, and how it was used to build country houses and to amass art collections.”
Follow the money, as the saying goes, and you don’t even have to move very far along the financial trail to begin to see the wealth and knowledge amassed through slavery. To this day, it remains all around us, in our museums, courts, places of learning and worship, and doctors’ offices. Even the tony clothier Brooks Brothers (founded in New York in 1818) got its start selling coarse slave clothing to southern plantations. It now describes itself as an “institution that has shaped the American style of dress.”
Fever Dreams and the Bleached Bones of the Dead
In the United States, the reparations debate faded away with the 2008 election of Barack Obama—except as an idea that continues to haunt the fever dreams of the right-wing imagination. A significant part of the backlash against the president is driven by the fantasy that he is presiding over a radical redistribution of wealth—think of all those free cell phones that the Drudge Report says he’s handing out to African Americans!—part of a stealth plan to carry out reparations by any means possible.
“What they don’t know,” said Rush Limbaugh shortly after Obama’s inauguration, “is that Obama’s entire economic program is reparations.” The conservative National Legal and Policy Center recently raised the specter of “slavery reparations courts”—Black Jacobin tribunals presided over by the likes of Jesse Jackson, Louis Farrakhan, Al Sharpton, and Russell Simmons and empowered to levy a $50,000 tax on every white “man, woman, and child in this country.” It’s time to rescue the discussion of reparations from the swamp of talk radio and the comment sections of the conservative blogosphere.
The idea that slavery made the modern world is not new, though it seems that every generation has to rediscover that truth anew. Almost a century ago, in 1915, W.E.B. Du Bois wrote, “Raphael painted, Luther preached, Corneille wrote, and Milton sung; and through it all, for four hundred years, the dark captives wound to the sea amid the bleaching bones of the dead; for four hundred years the sharks followed the scurrying ships; for four hundred years America was strewn with the living and dying millions of a transplanted race; for four hundred years Ethiopia stretched forth her hands unto God.”
How would we calculate the value of what we today would call the intellectual property—in medicine and other fields—generated by slavery’s suffering? I’m not sure. But a revival of efforts to do so would be a step toward reckoning with slavery’s true legacy: our modern world.
TomDispatch regular Greg Grandin’s new book, The Empire of Necessity: Slavery, Freedom, and Deception in the New World, has just been published.
Copyright 2014 Greg Grandin
Image held in Public Domain.
Anatomy of a thug state.
This article originally appeared at TomDispatch.
Here, at least, is a place to start: intelligence officials have weighed in with an estimate of just how many secret files National Security Agency contractor Edward Snowden took with him when he headed for Hong Kong last June. Brace yourself: 1.7 million. At least they claim that as the number he or his web crawler accessed before he left town. Let’s assume for a moment that it’s accurate and add a caveat. Whatever he had with him on those thumb drives when he left the agency, Edward Snowden did not take all the NSA’s classified documents. Not by a long shot. He only downloaded a portion of them. We don’t have any idea what percentage, but assumedly millions of NSA secret documents did not get the Snowden treatment.
Such figures should stagger us and what he did take will undoubtedly occupy journalists for months or years more (and historians long after that). Keep this in mind, however: the NSA is only one of 17 intelligence outfits in what is called the U.S. Intelligence Community. Some of the others are as large and well funded, and all of them generate their own troves of secret documents, undoubtedly stretching into the many millions.
And keep something else in mind: that’s just intelligence agencies. If you’re thinking about the full sweep of our national security state (NSS), you also have to include places like the Department of Homeland Security, the Energy Department (responsible for the U.S. nuclear arsenal), and the Pentagon. In other words, we’re talking about the kind of secret documentation that an army of journalists, researchers, and historians wouldn’t have a hope of getting through, not in a century.
We do know that, in 2011, the whole government reportedly classified 92,064,862 documents. If accurate and reasonably typical, that means, in the twenty-first century, the NSS has already generated hundreds of millions of documents that could not be read by an American without a security clearance. Of those, thanks to one man (via various journalists), we have had access to a tiny percentage of perhaps 1.7 million of them. Or put another way, you, the voter, the taxpayer, the citizen—in what we still like to think of as a democracy—are automatically excluded from knowing or learning about most of what the national security state does in your name. That’s unless, of course, its officials decide to selectively cherry-pick information they feel you are capable of safely and securely absorbing, or an Edward Snowden releases documents to the world over the bitter protests, death threats, and teeth gnashing of Washington officialdom and retired versions of the same.
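The back-of-the-envelope reasoning here can be made explicit. Assuming the reported 2011 figure is typical of the century’s first 13 years (an assumption, not a reported fact), a few lines of Python show just how tiny a slice of the classified record the Snowden cache represents:

```python
# Rough sketch; assumes the 2011 classification rate held across 2001-2013.
DOCS_2011 = 92_064_862      # documents reportedly classified in 2011
YEARS = 13                  # 2001 through 2013, inclusive
SNOWDEN_DOCS = 1_700_000    # intelligence officials' estimate of the cache

total_classified = DOCS_2011 * YEARS
share = SNOWDEN_DOCS / total_classified

print(f"classified this century (est.): ~{total_classified:,}")
print(f"Snowden cache as a share of that: {share:.2%}")
```

On these assumptions the century’s classified output runs past a billion documents, and the entire 1.7 million-document cache amounts to well under one percent of it.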
Summoned From the Id of the National Security State
So far, even among critics, the debate about what to make of Snowden’s act has generally focused on “balance”; that is, on what’s the right equilibrium between an obvious governmental need for secrecy, the security of the country, and an American urge for privacy, freedom, and transparency—for knowing, among other things, what your government is actually doing. Such a framework (“a meaningful balance between privacy and security”) has proven a relatively comfortable one for Washington, which doesn't mind focusing on the supposedly knotty question of how to define the “limits” of secrecy and whistle-blowing and what “reforms” are needed to bring the two into line. In the present context, however, such a debate seems laughable, if not absurd.
After all, it’s clear from the numbers alone that the urge to envelop the national security state in a blanket of secrecy, to shield its workings from the eyes of its citizens (as well as allies and enemies) has proven essentially boundless, as have the secret ambitions of those running that state. There is no way, at present, to limit the governmental urge for secrecy even in minimal ways, certainly not via secret courts or congressional committees implicated and entangled in the processes of a secret system.
In the face of such boundlessness, perhaps the words “whistleblower” and “leaker”—both traditionally referring to bounded and focused activities—are no longer useful. Though we may not yet have a word to describe what Chelsea (once Bradley) Manning, Julian Assange, and Edward Snowden have done, we should probably stop calling them whistleblowers. Perhaps they should instead be considered the creations of an overweening national security state, summoned by us from its id (so to speak) to act as a counterforce to its ambitions. Imagine them as representing the societal unconscious. Only in this way can we explain the boundlessness of their acts. After all, such massive document appropriations are inconceivable without a secret state endlessly in the process of documenting its own darkness.
One thing is for certain, though no one thinks to say it: despite their staggering releases of insider information, when it comes to the true nature and extent of the NSS, we surely remain in the dark. In the feeling that, thanks to Manning and Snowden, we now grasp the depths of that secret state, its secret acts, and the secret documentation that goes with it, we are undoubtedly deluded.
In a sense, valuable as they have been, Snowden’s revelations have helped promote this delusion. In a way that hasn’t happened since the Watergate era of the 1970s, they have given us the feeling that a curtain has finally, definitively been pulled back on the true nature of the Washington system. Behind that curtain, we have indeed glimpsed a global-surveillance-state-in-the-making of astounding scope, reach, and technological proficiency, whose ambitions (and successes), even when not always fully achieved, should take our breath away. And yet while this is accurate enough, it leads us to believe that we now know a great deal about the secret world of Washington. This is an illusion.
Even if we knew what was in all of those 1.7 million NSA documents, they are a drop in the bucket. As of now, we have the revelations of one (marginal) insider who stepped out of the shadows to tell us about part of what a single intelligence agency documented about its own activities. The resulting global debate, controversy, anger, and discussion, Snowden has said, represents “mission accomplished” for him. But it shouldn’t be considered mission accomplished for the rest of us.
In Praise of Darkness, the Dangers of Sunshine
To gain a reasonable picture of our national security state, five, 10, 20 Snowdens, each at a different agency or outfit, would have to step out of the shadows—and that would just be for starters. Then we would need a media that was ready to roll and a Congress not wrapped in “security” and “secrecy” but demanding answers, as the Church committee did in the Watergate era, with subpoenas in hand (and the threat of prison for no-shows and perjurers).
Yes, we may have access to basic information about what the NSA has been up to, but remind me: What exactly do you know about the doings of the Pentagon’s Defense Intelligence Agency, with its 16,500 employees, which has in recent years embarked on “an ambitious plan to assemble an espionage network that rivals the CIA in size”? How about the National Geospatial-Intelligence Agency, with its 16,000 employees, its post-9/11 headquarters (price tag: $1.8 billion) and its control over our system of spy satellites eternally prowling the planetary skies?
The answer is no more than you would have known about the NSA if Snowden hadn’t acted as he did. And by the way, what do you really know about the FBI, which now, among other things, issues thousands of national security letters a year (16,511 in 2011 alone), an unknown number of them for terror investigations? Since their recipients are muzzled from discussing them, we know next to nothing about them or what the Bureau is actually doing. And how’s your info on the CIA, which takes $4 billion more out of the intelligence “black budget” than the NSA, runs its own private wars, and has even organized its own privatized corps of spies as part of the general expansion of U.S. intelligence and espionage abroad? The answer on all of the above is—has to be—remarkably little.
Or take something basic like that old-fashioned, low-tech form of surveillance: government informers and agents provocateurs. They were commonplace in the 1960s and early 1970s within every oppositional movement. So many decades later, they are with us again. Thanks to the ACLU, which has mapped scattered reports on situations in which informers made it into at least the local news nationwide, we know that they became part of what anti-war movements existed, slipped into various aspects of the Occupy movement, and have run riot in local Muslim-American communities. We know as well that these informers come from a wide range of outfits, including the local police, the military, and the FBI. However, if we know a great deal about NSA snooping and surveillance, we have just about no inside information on the extent of old-style informing, surveilling, and provoking.
One thing couldn’t be clearer, though: the mania for secrecy has grown tremendously in the Obama years. On entering the Oval Office in 2009, Obama proclaimed a sunshine administration dedicated to “openness” and “transparency.” That announcement now drips with irony. If you want a measure of the kind of secrecy the NSS considers proper and the White House condones these days, check out a recent Los Angeles Times piece on the CIA’s drone assassination program (one of the more overt aspects of Washington’s covert world).
That paper recently reported that Chairman of the Senate Armed Services Committee Carl Levin held a “joint classified hearing” with the Senate Intelligence Committee on the CIA, the Pentagon, and their drone campaigns against terror suspects in the backlands of the planet. There was just one catch: CIA officials normally testify only before the House and Senate intelligence committees. In this case, the White House “refused to provide the necessary security clearances for members of the House and Senate armed services committees.” As a result, it would not let CIA witnesses appear before Levin. Officials, reported the Times, “had little appetite for briefing the 26 senators and 62 House members who sit on the armed services committees on the CIA's most sensitive operations.” Sunshine, in other words, is considered potentially dangerous, even in tiny doses, even in Congress.
A Cult of Government Secrecy
In evaluating what may lie behind the many curtains of Washington, history does offer us a small hand. Thanks to the revelations of the 1970s, including a Snowden-style break-in by antiwar activists at an FBI office in Media, Pennsylvania, in 1971, that opened a window into the Bureau’s acts of illegality, some now-famous reporting, and the thorough work of the Church committee in the Senate, we have a sense of the enormity of what the U.S. national security state was capable of once enveloped in a penumbra of secrecy (even if, in that era, the accompanying technology could do so much less). In the Johnson and Nixon years, as we now know, the FBI, the CIA, the NSA, and other acronymic outfits committed a staggering range of misdeeds, provocations, and crimes.
It’s easy to say that post-Watergate “reforms” made such acts a thing of the past. Unfortunately, there’s no reason to believe that. In fact, the nature of that era’s reforms should be reconsidered. After all, one particularly important Congressional response of that moment was to create the Foreign Intelligence Surveillance Court, essentially a judiciary for the secret world which would generate a significant body of law that no American outside the NSS could see.
The irony is again overwhelming. After the shocking headlines, the congressional inquiries, the impeachment proceedings, the ending of two presidencies—one by resignation—and everything else, including black bag jobs, break-ins, buggings, attempted beatings, blackmail, massive spying and surveillance, and provocations of every sort, the answer was a secret court. Its judges, appointed by the chief justice of the Supreme Court alone, are charged with ruling after hearing only one side of any case involving a governmental desire to snoop or pry or surveil. Unsurprisingly enough, over the three and a half decades of its existence, the court proved a willing rubber stamp for just about any urge of the national security state.
In retrospect, this remedy for widespread government illegality clearly was just another step in the institutionalization of a secret world that looks increasingly like an Orwellian nightmare. In creating the FISA court, Congress functionally took the seat-of-the-pants, extra-Constitutional, extra-legal acts of the Nixon era and put them under the rule of (secret) law.
Today, in the wake of, among other things, the rampant extra-legality of the Global War on Terror—including the setting up of a secret, extrajudicial global prison system of “black sites” where rampant torture and abuse were carried to the point of death, illegal kidnappings of terror suspects off global streets and their rendition to the prisons of torture regimes, and the assassination-by-drone of American citizens backed by Justice Department legalisms—it’s clear that NSS officials feel they have near total impunity when it comes to whatever they want to do. (Not that their secret acts often turn out as planned or particularly well in the real world.) They know that nothing they do, however egregious, will be brought before an open court of law and prosecuted. While the rest of us remain inside the legal system, they exist in “post-legal America.” Now, the president claims that he’s preparing a new set of “reforms” to bring this system under check and back in balance. Watch out!
If tomorrow a series of Edward Snowdens were to appear, each from a different intelligence agency or other outfit in the national security state, one thing would be guaranteed: the shock of the NSA revelations would be multiplied many times over. Protected from the law by a spreading cult of government secrecy, beyond the reach of the citizenry, Congress, or the aboveground judicial system, supported by the White House and a body of developing secret law, knowing that no act undertaken in the name of American “safety” and “security” will ever be prosecuted, the inhabitants of our secret state have been moving in dark and disturbing ways. What we know is already disturbing enough. What we don’t know would surely unnerve us far more.
Shadow government has conquered 21st-century Washington. We have the makings of a thug state of the first order.
Tom Engelhardt, a co-founder of the American Empire Project and author of The United States of Fear as well as a history of the Cold War, The End of Victory Culture, runs the Nation Institute's TomDispatch.com. His latest book, co-authored with Nick Turse, is Terminator Planet: The First History of Drone Warfare, 2001-2050.
Copyright 2014 Tom Engelhardt
Image by Trevor Paglen, licensed under Creative Commons.
Why are we arming Israel to the teeth?
This article originally appeared at TomDispatch.
We Americans have funny notions about foreign aid. Recent polls show that, on average, we believe 28 percent of the federal budget is eaten up by it, and that, in a time of austerity, this gigantic bite of the budget should be cut back to 10 percent. In actual fact, barely 1 percent of the federal budget goes to foreign aid of any kind.
In this case, however, truth is at least as strange as fiction. Consider that the top recipient of U.S. foreign aid over the past three decades isn’t some impoverished land filled with starving kids, but a wealthy nation with a per-head gross domestic product on par with the European Union average, and higher than that of Italy, Spain, or South Korea.
Consider also that this top recipient of such aid—nearly all of it military since 2008—has been busily engaged in what looks like a 19th-century-style colonization project. In the late 1940s, our beneficiary expelled some 700,000 indigenous people from the land it was claiming. In 1967, our client seized some contiguous pieces of real estate and ever since has been colonizing these territories with nearly 650,000 of its own people. It has divided the conquered lands with myriad checkpoints and roads accessible only to the colonizers and is building a 440-mile wall around (and cutting into) the conquered territory, creating a geography of control that violates international law.
“Ethnic cleansing” is a harsh term, but apt for a situation in which people are driven out of their homes and lands because they are not of the right tribe. Though many will balk at leveling this charge against Israel—for that country is, of course, the top recipient of American aid and especially military largesse—who would hesitate to use the term if, in a mirror-image world, all of this were being inflicted on Israeli Jews?
Military Aid to Israel
Arming and bankrolling a wealthy nation acting in this way may, on its face, seem like terrible policy. Yet American aid has been flowing to Israel in ever greater quantities. Over the past 60 years, in fact, Israel has absorbed close to a quarter-trillion dollars in such aid. Last year alone, Washington sent some $3.1 billion in military aid, supplemented by allocations for collaborative military research and joint training exercises.
Overall, the United States covers nearly one quarter of Israel’s defense budget—from tear gas canisters to F-16 fighter jets. In their 2008-2009 assault on Gaza, the Israeli Defense Forces made use of M-92 and M-84 “dumb bombs,” Paveway II and JDAM guided “smart bombs,” AH-64 Apache attack helicopters equipped with AGM-114 Hellfire guided missiles, M141 “bunker defeat” munitions, and special weapons like M825A1 155mm white phosphorous munitions—all supplied as American foreign aid. (Uniquely among Washington’s aid recipients, Israel is also permitted to spend 25 percent of the military funding from Washington on weapons made by its own weapons industry.)
Why is Washington doing this? The most common answer is the simplest: Israel is Washington’s “ally.” But the United States has dozens of allies around the world, none of which are subsidized in anything like this fashion by American taxpayer dollars. As there is no formal treaty alliance between the two nations and given the lopsided nature of the costs and benefits of this relationship, a far more accurate term for Israel’s tie to Washington might be “client state.”
And not a particularly loyal client either. If massive military aid is supposed to give Washington leverage over Israel (as it normally does in client-state relationships), that leverage is difficult to detect. In case you hadn’t noticed, rare is the American diplomatic visit to Israel that is not greeted with an in-your-face announcement of intensified colonization of Palestinian territory, euphemistically called “settlement expansion.”
Washington also provides aid to the Palestinians, averaging $875 million annually during Obama’s first term (more than double what George W. Bush gave in his second term). That’s a little more than a quarter of what Israel gets. Much of it goes to projects of dubious net value like the development of irrigation networks at a moment when the Israelis are destroying Palestinian cisterns and wells elsewhere in the West Bank. Another significant part of that funding goes toward training the Palestinian security forces. Known as “Dayton forces” (after the American general, Keith Dayton, who led their training from 2005 to 2010), these troops have a grim human rights record that includes acts of torture, as Dayton himself has admitted. One former Dayton deputy, an American colonel, described these security forces to al-Jazeera as an outsourced "third Israeli security arm." According to Josh Ruebner, national advocacy director for the U.S. Campaign to End the Occupation and author of Shattered Hopes: Obama’s Failure to Broker Israeli-Palestinian Peace, American aid to the Palestinians serves mainly to entrench the Israeli occupation.
A Dishonest Broker
Nothing is equal when it comes to Israelis and Palestinians in the West Bank, East Jerusalem, and the Gaza Strip—and the numbers say it all. To offer just one example, the death toll from Operation Cast Lead, Israel’s 2008-2009 assault on the Gaza Strip, was 1,385 Palestinians (the majority of them civilians) and 13 Israelis, three of them civilians.
And yet mainstream opinion in the U.S. insists on seeing the two parties as essentially equal. Harold Koh, former dean of the Yale Law School and until recently the top lawyer at the State Department, has been typical in comparing Washington’s role to “adult supervision” of “a playground populated by warring switchblade gangs.” It was a particularly odd choice of metaphors, given that one side is equipped with small arms and rockets of varying sophistication, the other with nuclear weapons and a state-of-the-art modern military subsidized by the world’s only superpower.
Washington’s active role in all of this is not lost on anyone on the world stage—except Americans, who have declared themselves to be the even-handed arbiters of a conflict involving endless failed efforts at brokering a “peace process.” Globally, fewer and fewer observers believe in this fiction of Washington as a benevolent bystander rather than a participant heavily implicated in a humanitarian crisis. In 2012, the widely respected International Crisis Group described the “peace process” as “a collective addiction that serves all manner of needs, reaching an agreement no longer being the main one.”
The contradiction between military and diplomatic support for one party in the conflict and the pretense of neutrality cannot be explained away. “Looked at objectively, it can be argued that American diplomatic efforts in the Middle East have, if anything, made achieving peace between Palestinians and Israelis more difficult,” writes Rashid Khalidi, a historian at Columbia University, and author of Brokers of Deceit: How the U.S. Has Undermined Peace in the Middle East.
American policy elites are unable or unwilling to talk about Washington’s destructive role in this situation. There is plenty of discussion about a one-state versus a two-state solution, constant disapproval of Palestinian violence, occasional mild criticism (“not helpful”) of the Israeli settlements, and lately, a lively debate about the global boycott, divestment, and sanction movement (BDS) led by Palestinian civil society to pressure Israel into a “just and lasting” peace. But when it comes to what Americans are most responsible for—all that lavish military aid and diplomatic cover for one side only—what you get is either euphemism or an evasive silence.
In general, the American media tends to treat our arming of Israel as part of the natural order of the universe, as beyond question as the force of gravity. Even the “quality” media shies away from any discussion of Washington’s real role in fueling the Israel-Palestine conflict. Last month, for instance, the New York Times ran an article about a prospective “post-American” Middle East without any mention of Washington’s aid to Israel, or for that matter to Egypt, or the Fifth Fleet parked in Bahrain.
You might think that the progressive hosts of MSNBC’s news programs would be all over the story of what American taxpayers are subsidizing, but the topic barely flickers across the chat shows of Rachel Maddow, Chris Hayes, and others. Given this across-the-board selective reticence, American coverage of Israel and Palestine, and particularly of American military aid to Israel, resembles the Agatha Christie novel in which the first-person narrator, observing and commenting on the action in calm semi-detachment, turns out to be the murderer.
Strategic Self-Interest and Unconditional Military Aid
On the activist front, American military patronage of Israel is not much discussed either, in large part because the aid package is so deeply entrenched that no attempt to cut it back could succeed in the near future. Hence, the global BDS campaign has focused on smaller, more achievable targets, though as Yousef Munayyer, executive director of the Jerusalem Fund, an advocacy group, told me, the BDS movement does envision an end to Washington’s military transfers in the long term. This makes tactical sense, and both the Jerusalem Fund and the U.S. Campaign to End the Israeli Occupation are engaged in ongoing campaigns to inform the public about American military aid to Israel.
Less understandable are the lobbying groups that advertise themselves as “pro-peace,” champions of “dialogue” and “conversation,” but share the same bottom line on military aid for Israel as their overtly hawkish counterparts. For instance, J Street (“pro-Israel and pro-peace”), a Washington-based nonprofit which bills itself as a moderate alternative to the powerhouse lobbying outfit, the American Israel Public Affairs Committee (AIPAC), supports both “robust” military aid and any supplemental disbursements on offer from Washington to the Israeli Defense Forces. Americans for Peace Now similarly takes the position that Washington should provide “robust assistance” to ensure Israel’s “qualitative military edge.” At the risk of sounding literal-minded, any group plumping for enormous military aid packages to a country acting as Israel has is emphatically not “pro-peace.” It’s almost as if the Central America solidarity groups from the 1980s had demanded peace, while lobbying Washington to keep funding the Contras and the Salvadoran military.
Outside the various factions of the Israel lobby, the landscape is just as flat. The Center for American Progress, a Washington think tank close to the Democratic Party, regularly issues pious statements about new hopes for the “peace process”—with never a mention of how our unconditional flow of advanced weaponry might be a disincentive to any just resolution of the situation.
There is, by the way, a similar dynamic at work when it comes to Washington’s second biggest recipient of foreign aid, Egypt. Washington’s expenditure of more than $60 billion over the past 30 years ensured both peace with Israel and Cold War loyalty, while propping up an authoritarian government with a ghastly human rights record. As the post-Mubarak military restores its grip on Egypt, official Washington is currently at work finding ways to keep the military aid flowing despite a congressional ban on arming regimes that overthrow elected governments. There is, however, at least some mainstream public debate in the U.S. about ending aid to the Egyptian generals who have violently reclaimed power. Investigative journalism nonprofit ProPublica has even drafted a handy “explainer” about U.S. military aid to Egypt—though they have not tried to explain aid to Israel.
Silence about U.S.-Israel relations is, to a large degree, hardwired into Beltway culture. As George Perkovich, director of the nuclear policy program at the Carnegie Endowment for International Peace told the Washington Post, “It’s like all things having to do with Israel and the United States. If you want to get ahead, you don’t talk about it; you don’t criticize Israel, you protect Israel.”
This is regrettable, as Washington’s politically invisible military aid to Israel is not just an impediment to lasting peace, but also a strategic and security liability. As General David Petraeus, then head of U.S. Central Command, testified to the Senate Armed Services Committee in 2010, the failure to reach a lasting resolution to the conflict between the Israelis and Palestinians makes Washington’s other foreign policy objectives in the region more difficult to achieve. It also, he pointed out, foments anti-American hatred and fuels al-Qaeda and other violent groups. Petraeus’s successor at CENTCOM, General James Mattis, echoed this list of liabilities in a public dialogue with Wolf Blitzer last July:
“I paid a military security price every day as a commander of CENTCOM because the Americans were seen as biased in support of Israel, and that [alienates] all the moderate Arabs who want to be with us because they can’t come out publicly in support of people who don’t show respect for the Arab Palestinians.”
Don’t believe the generals? Ask a terrorist. Khalid Sheikh Mohammed, mastermind of the 9/11 attacks now imprisoned at Guantanamo, told interrogators that he was motivated to attack the United States in large part because of Washington’s leading role in assisting Israel’s repeated invasions of Lebanon and the ongoing dispossession of Palestinians.
The Israel lobby wheels out a battery of arguments in favor of arming and funding Israel, including the assertion that a step back from such aid for Israel would signify a “retreat” into “isolationism.” But would the United States, a global hegemon busily engaged in nearly every aspect of world affairs, be "isolated" if it ceased giving lavish military aid to Israel? Was the United States "isolated" before 1967 when it expanded that aid in a major way? These questions answer themselves.
Sometimes the mere act of pointing out the degree of U.S. aid to Israel provokes accusations of having a special antipathy for Israel. This may work as emotional blackmail, but if someone proposed that Washington start shipping Armenia $3.1 billion worth of armaments annually so that it could begin the conquest of its ancestral province of Nagorno-Karabakh in neighboring Azerbaijan, the plan would be considered ludicrous—and not because of a visceral dislike for Armenians. Yet somehow the assumption that Washington is required to generously arm the Israeli military has become deeply institutionalized in this country.
Fake Peace Process, Real War Process
Today, Secretary of State John Kerry is leading a push for a renewed round of the interminable American-led peace process in the region that has been underway since the mid-1970s. It’s hardly a bold prediction to suggest that this round, too, will fail. The Israeli minister of defense, Moshe Ya’alon, has already publicly mocked Kerry in his quest for peace as “obsessive and messianic” and added that the newly proposed framework for this round of negotiations is “not worth the paper it’s printed on.” Other Israeli high officials blasted Kerry for his mere mention of the potential negative consequences to Israel of a global boycott if peace is not achieved.
But why shouldn’t Ya’alon and other Israeli officials tee off on the hapless Kerry? After all, the defense minister knows that Washington will wield no stick and that bushels of carrots are in the offing, whether Israel rolls back or redoubles its land seizures and colonization efforts. President Obama has boasted that the U.S. has never given so much military aid to Israel as under his presidency. On January 29th, the House Foreign Affairs Committee voted unanimously to upgrade Israel’s status to “major strategic partner.” With Congress and the president guaranteeing that unprecedented levels of military aid will continue to flow, Israel has no real incentive to change its behavior.
Usually such diplomatic impasses are blamed on the Palestinians, but given how little is left to squeeze out of them, doing so this time will test the creativity of official Washington. Whatever happens, in the post-mortems to come there will be no discussion in Washington about the role its own policies played in undermining a just and lasting agreement.
How much longer will this silence last? The arming and bankrolling of a wealthy nation committing ethnic cleansing has something to offend conservatives, progressives, and just about every other political grouping in America. After all, how often in foreign policy does strategic self-interest align so neatly with human rights and common decency?
Intelligent people can and do disagree about a one-state versus a two-state solution for Israel and Palestine. People of goodwill disagree about the global BDS campaign. But it is hard to imagine what kind of progress can ever be made toward a just and lasting settlement between Israel and Palestine until Washington quits arming one side to the teeth.
“If it weren’t for U.S. support for Israel, this conflict would have been resolved a long time ago,” says Josh Ruebner. Will we Americans ever acknowledge our government's active role in destroying the chances for a just and lasting peace between Palestine and Israel?
Chase Madar (@ChMadar) is a lawyer in New York, a TomDispatch regular, and the author of The Passion of [Chelsea] Manning: The Story behind the Wikileaks Whistleblower (Verso).
Copyright 2014 Chase Madar
Image by the White House (Public Domain).
Reading Melville in the Age of Terror.
This article originally appeared at TomDispatch.
A captain ready to drive himself and all around him to ruin in the hunt for a white whale. It’s a well-known story, and over the years, mad Ahab in Herman Melville’s most famous novel, Moby-Dick, has been used as an exemplar of unhinged American power, most recently of George W. Bush’s disastrous invasion of Iraq.
But what’s really frightening isn't our Ahabs, the hawks who periodically want to bomb some poor country, be it Vietnam or Afghanistan, back to the Stone Age. The respectable types are the true “terror of our age,” as Noam Chomsky called them collectively nearly 50 years ago. The really scary characters are our soberest politicians, scholars, journalists, professionals, and managers, men and women (though mostly men) who imagine themselves as morally serious, and then enable the wars, devastate the planet, and rationalize the atrocities. They are a type that has been with us for a long time. More than a century and a half ago, Melville, who had a captain for every face of empire, found their perfect expression—for his moment and ours.
For the last six years, I’ve been researching the life of an American seal killer, a ship captain named Amasa Delano who, in the 1790s, was among the earliest New Englanders to sail into the South Pacific. Money was flush, seals were many, and Delano and his fellow ship captains established the first unofficial U.S. colonies on islands off the coast of Chile. They operated under an informal council of captains, divvied up territory, enforced debt contracts, celebrated the Fourth of July, and set up ad hoc courts of law. When no bible was available, the collected works of William Shakespeare, found in the libraries of most ships, were used to swear oaths.
From his first expedition, Delano took hundreds of thousands of sealskins to China, where he traded them for spices, ceramics, and tea to bring back to Boston. During a second, failed voyage, however, an event took place that would make Amasa notorious—at least among the readers of the fiction of Herman Melville.
Here’s what happened: One day in February 1805 in the South Pacific, Amasa Delano spent nearly a full day on board a battered Spanish slave ship, conversing with its captain, helping with repairs, and distributing food and water to its thirsty and starving voyagers, a handful of Spaniards and about 70 West African men and women he thought were slaves. They weren’t.
Those West Africans had rebelled weeks earlier, killing most of the Spanish crew, along with the slaver taking them to Peru to be sold, and demanded to be returned to Senegal. When they spotted Delano’s ship, they came up with a plan: let him board and act as if they were still slaves, buying time to seize the sealer’s vessel and supplies. Remarkably, for nine hours, Delano, an experienced mariner and distant relative of future president Franklin Delano Roosevelt, was convinced that he was on a distressed but otherwise normally functioning slave ship.
Having barely survived the encounter, he wrote about the experience in his memoir, which Melville read and turned into what many consider his “other” masterpiece. Published in 1855, on the eve of the Civil War, Benito Cereno is one of the darkest stories in American literature. It’s told from the perspective of Amasa Delano as he wanders lost through a shadow world of his own racial prejudices.
One of the things that attracted Melville to the historical Amasa was undoubtedly the juxtaposition between his cheerful self-regard—he considers himself a modern man, a liberal opposed to slavery—and his complete obliviousness to the social world around him. The real Amasa was well meaning, judicious, temperate, and modest.
In other words, he was no Ahab, whose vengeful pursuit of a metaphysical whale has been used as an allegory for every American excess, every catastrophic war, every disastrous environmental policy, from Vietnam and Iraq to the explosion of the BP oil rig in the Gulf of Mexico in 2010.
Ahab, whose peg-legged pacing of the quarterdeck of his doomed ship enters the dreams of his men sleeping below like the “crunching teeth of sharks.” Ahab, whose monomania is an extension of the individualism born out of American expansion and whose rage is that of an ego that refuses to be limited by nature’s frontier. “Our Ahab,” as a soldier in Oliver Stone’s movie Platoon calls a ruthless sergeant who senselessly murders innocent Vietnamese.
Ahab is certainly one face of American power. In the course of writing a book on the history that inspired Benito Cereno, I’ve come to think of it as not the most frightening—or even the most destructive of American faces. Consider Amasa.
Since the end of the Cold War, extractive capitalism has spread over our post-industrialized world with a predatory force that would shock even Karl Marx. From the mineral-rich Congo to the open-pit gold mines of Guatemala, from Chile’s until recently pristine Patagonia to the fracking fields of Pennsylvania and the melting Arctic north, there is no crevice where some useful rock, liquid, or gas can hide, no jungle forbidden enough to keep out the oil rigs and elephant killers, no citadel-like glacier, no hard-baked shale that can’t be cracked open, no ocean that can’t be poisoned.
And Amasa was there at the beginning. Seal fur may not have been the world’s first valuable natural resource, but sealing represented one of young America’s first experiences of boom-and-bust resource extraction beyond its borders.
With increasing frequency starting in the early 1790s and then in a mad rush beginning in 1798, ships left New Haven, Norwich, Stonington, New London, and Boston, heading for the great half-moon archipelago of remote islands running from Argentina in the Atlantic to Chile in the Pacific. They were on the hunt for the fur seal, which wears a layer of velvety down like an undergarment just below an outer coat of stiff gray-black hair.
In Moby-Dick, Melville portrayed whaling as the American industry. Brutal and bloody but also humanizing, work on a whale ship required intense coordination and camaraderie. Out of the gruesomeness of the hunt, the peeling of the whale’s skin from its carcass, and the hellish boil of the blubber or fat, something sublime emerged: human solidarity among the workers. And like the whale oil that lit the lamps of the world, divinity itself glowed from the labor: “Thou shalt see it shining in the arm that wields a pick or drives a spike; that democratic dignity which, on all hands, radiates without end from God.”
Sealing was something else entirely. It called to mind not industrial democracy but the isolation and violence of conquest, settler colonialism, and warfare. Whaling took place in a watery commons open to all. Sealing took place on land. Sealers seized territory, fought one another to keep it, and pulled out what wealth they could as fast as they could before abandoning their empty and wasted island claims. The process pitted desperate sailors against equally desperate officers in as all-or-nothing a system of labor relations as can be imagined.
In other words, whaling may have represented the promethean power of proto-industrialism, with all the good (solidarity, interconnectedness, and democracy) and bad (the exploitation of men and nature) that went with it, but sealing better predicted today’s postindustrial extracted, hunted, drilled, fracked, hot, and strip-mined world.
Seals were killed by the millions and with a shocking casualness. A group of sealers would get between the water and the rookeries and simply start clubbing. A single seal makes a noise like a cow or a dog, but tens of thousands of them together, so witnesses testified, sound like a Pacific cyclone. Once we “began the work of death,” one sealer remembered, “the battle caused me considerable terror.”
South Pacific beaches came to look like Dante’s Inferno. As the clubbing proceeded, mountains of skinned, reeking carcasses piled up and the sands ran red with torrents of blood. The killing was unceasing, continuing into the night by the light of bonfires kindled with the corpses of seals and penguins.
And keep in mind that this massive kill-off took place not for something like whale oil, used by all for light and fire. Seal fur was harvested to warm the wealthy and meet a demand created by a new phase of capitalism: conspicuous consumption. Pelts were used for ladies’ capes, coats, muffs, and mittens, and gentlemen’s waistcoats. The fur of baby pups wasn’t much valued, so some beaches were simply turned into seal orphanages, with thousands of newborns left to starve to death. In a pinch though, their downy fur, too, could be used—to make wallets.
Occasionally, elephant seals would be taken for their oil in an even more horrific manner: when they opened their mouths to bellow, their hunters would toss rocks in and then begin to stab them with long lances. Pierced in multiple places like Saint Sebastian, the animals’ high-pressured circulatory system gushed “fountains of blood, spouting to a considerable distance.”
At first the frenetic pace of the killing didn’t matter: there were so many seals. On one island alone, Amasa Delano estimated, there were “two to three millions of them” when New Englanders first arrived to make “a business of killing seals.”
“If many of them were killed in a night,” wrote one observer, “they would not be missed in the morning.” It did indeed seem as if you could kill every one in sight one day, then start afresh the next. Within just a few years, though, Amasa and his fellow sealers had taken so many seal skins to China that Canton’s warehouses couldn’t hold them. They began to pile up on the docks, rotting in the rain, and their market price crashed.
To make up the margin, sealers further accelerated the pace of the killing—until there was nothing left to kill. In this way, oversupply and extinction went hand in hand. In the process, cooperation among sealers gave way to bloody battles over thinning rookeries. Previously, it only took a few weeks and a handful of men to fill a ship’s hold with skins. As those rookeries began to disappear, however, more and more men were needed to find and kill the required number of seals and they were often left on desolate islands for two- or three-year stretches, living alone in miserable huts in dreary weather, wondering if their ships were ever going to return for them.
“On island after island, coast after coast,” one historian wrote, “the seals had been destroyed to the last available pup, on the supposition that if sealer Tom did not kill every seal in sight, sealer Dick or sealer Harry would not be so squeamish.” By 1804, on the very island where Amasa estimated that there had been millions of seals, there were more sailors than prey. Two years later, there were no seals at all.
The Machinery of Civilization
There exists a near perfect inverse symmetry between the real Amasa and the fictional Ahab, with each representing a face of the American Empire. Amasa is virtuous, Ahab vengeful. Amasa seems trapped by the shallowness of his perception of the world. Ahab is profound; he peers into the depths. Amasa can’t see evil (especially his own). Ahab sees only nature’s “intangible malignity.”
Both are representatives of the most predatory industries of their day, their ships carrying what Delano once called the “machinery of civilization” to the Pacific, using steel, iron, and fire to kill animals and transform their corpses into value on the spot.
Yet Ahab is the exception, a rebel who hunts his white whale against all rational economic logic. He has hijacked the “machinery” that his ship represents and rioted against “civilization.” He pursues his quixotic chase in violation of the contract he has with his employers. When his first mate, Starbuck, insists that his obsession will hurt the profits of the ship’s owners, Ahab dismisses the concern: “Let the owners stand on Nantucket beach and outyell the Typhoons. What cares Ahab? Owners, Owners? Thou art always prating to me, Starbuck, about those miserly owners, as if the owners were my conscience.”
Insurgents like Ahab, however dangerous to the people around them, are not the primary drivers of destruction. They are not the ones who will hunt animals to near extinction—or who are today forcing the world to the brink. Those would be the men who never dissent, who either at the frontlines of extraction or in the corporate backrooms administer the destruction of the planet, day in, day out, inexorably, unsensationally without notice, their actions controlled by an ever greater series of financial abstractions and calculations made in the stock exchanges of New York, London, and Shanghai.
If Ahab is still the exception, Delano is still the rule. Throughout his long memoir, he reveals himself as ever faithful to the customs and institutions of maritime law, unwilling to take any action that would injure the interests of his investors and insurers. “All bad consequences,” he wrote, describing the importance of protecting property rights, “may be avoided by one who has a knowledge of his duty, and is disposed faithfully to obey its dictates.”
It is in Delano’s reaction to the West African rebels, once he finally realizes he has been the target of an elaborately staged con, that the distinction separating the sealer from the whaler becomes clear. The mesmeric Ahab—the “thunder-cloven old oak”—has been taken as a prototype of the twentieth-century totalitarian, a one-legged Hitler or Stalin who uses an emotional magnetism to convince his men to willingly follow him on his doomed hunt for Moby Dick.
Delano is not a demagogue. His authority is rooted in a much more common form of power: the control of labor and the conversion of diminishing natural resources into marketable items. As seals disappeared, however, so too did his authority. His men first began to grouse and then conspire. In turn, Delano had to rely ever more on physical punishment, on floggings even for the most minor of offences, to maintain control of his ship—until, that is, he came across the Spanish slaver. Delano might have been personally opposed to slavery, yet once he realized he had been played for a fool, he organized his men to retake the slave ship and violently pacify the rebels. In the process, they disemboweled some of the rebels and left them writhing in their viscera, using their sealing lances, which Delano described as “exceedingly sharp and as bright as a gentleman’s sword.”
Caught in the pincers of supply and demand, trapped in the vortex of ecological exhaustion, with no seals left to kill, no money to be made, and his own crew on the brink of mutiny, Delano rallied his men to the chase—not of a white whale but of black rebels. In the process, he reestablished his fraying authority. As for the surviving rebels, Delano re-enslaved them. Propriety, of course, meant returning them and the ship to their owners.
Our Amasas, Ourselves
With Ahab, Melville looked to the past, basing his obsessed captain on Lucifer, the fallen angel in revolt against the heavens, and associating him with America’s “manifest destiny,” with the nation’s restless drive beyond its borders. With Amasa, Melville glimpsed the future. Drawing on the memoirs of a real captain, he created a new literary archetype, a moral man sure of his righteousness yet unable to link cause to effect, oblivious to the consequences of his actions even as he careens toward catastrophe.
They are still with us, our Amasas. They have knowledge of their duty and are disposed faithfully to follow its dictates, even unto the ends of the Earth.
TomDispatch regular Greg Grandin’s new book, The Empire of Necessity: Slavery, Freedom, and Deception in the New World, has just been published.
Copyright 2014 Greg Grandin.
Image by an unknown photographer (Public Domain).
A glimpse inside the Zapatista Movement, 20 years after its birth.
This article originally appeared at TomDispatch.
Growing up in a well-heeled suburban community, I absorbed our society’s distaste for dissent long before I was old enough to grasp just what was being dismissed. My understanding of so many people and concepts was tainted by this environment and the education that went with it: Che Guevara and the Black Panthers and Oscar Wilde and Noam Chomsky and Venezuela and Malcolm X and the Service Employees International Union and so, so many more. All of this is why, until recently, I knew almost nothing about the Mexican Zapatista movement except that the excessive number of “a”s looked vaguely suspicious to me. It’s also why I felt compelled to travel thousands of miles to a Zapatista “organizing school” in the heart of the Lacandon jungle in southeastern Mexico to try to sort out just what I’d been missing all these years.
The fog is so thick that the revelers arrive like ghosts. Out of the mist they appear: men sporting wide-brimmed Zapata hats, women encased in the shaggy sheepskin skirts that are still common in the remote villages of Mexico. And then there are the outsiders like myself with our North Face jackets and camera bags, eyes wide with adventure. (“It’s like the Mexican Woodstock!” exclaims a student from the northern city of Tijuana.) The hill is lined with little restaurants selling tamales and arroz con leche and pozol, a ground-corn drink that can rip a foreigner’s stomach to shreds.
There is no alcohol in sight. Sipping coffee as sugary as Alabama sweet tea, I realize that tonight will be my first sober New Year’s Eve since December 31, 1999, when I climbed into bed with my parents to await the Y2K Millennium bug and mourned that the whole world was going to end before I had even kissed a boy.
Thousands are clustered in this muddy field to mark the 20-year anniversary of January 1, 1994, when an army of impoverished farmers surged out of the jungle and launched the first post-modern revolution. Those forces, known as the Zapatista Army of National Liberation, were the armed wing of a much larger movement of indigenous peoples in the southeastern Mexican state of Chiapas, who were demanding full autonomy from their government and global liberation for all people.
As the news swept across that emerging communication system known as the Internet, the world momentarily held its breath. A popular uprising against government-backed globalization led by an all but forgotten people: it was an event that seemed unthinkable. The Berlin Wall had fallen. The market had triumphed. The treaties had been signed. And yet surging out of the jungles came a movement of people with no market value and the audacity to refuse to disappear.
Now, 20 years later, villagers and sympathetic outsiders are pouring into one of the Zapatistas’ political centers, known as Oventic, to celebrate the fact that their rebellion has not been wiped out by the wind and exiled from the memory of men.
The plane tickets from New York City to southern Mexico were so expensive that we traveled by land. We E-ZPassed down the eastern seaboard, ate catfish sandwiches in Louisiana, barreled past the refineries of Texas, and then crossed the border. We pulled into Mexico City during the pre-Christmas festivities. The streets were clogged with parents eating tamales and children swinging at piñatas. By daybreak the next morning, we were heading south again. Speed bumps scraped the bottom of our Volvo the entire way from Mexico City to Chiapas, where the Zapatistas control wide swathes of territory. The road skinned the car alive. Later I realized that those speed bumps were, in a way, the consequences of dissent -- tiny traffic-controlling monuments to a culture far less resigned to following the rules. “Up north,” I’d later tell Mexican friends, “we don’t have as many speed bumps, but neither do we have as much social resistance.”
After five days of driving, we reached La Universidad de la Tierra, a free Zapatista-run school in the touristy town of San Cristóbal de Las Casas in Chiapas. Most of the year, people from surrounding rural communities arrive here to learn trades like electrical wiring, artisanal crafts, and farming practices. This week, thousands of foreigners had traveled to the town to learn about something much more basic: autonomy.
Our first “class” was in the back of a covered pickup truck careening through the Lacandon jungle with orange trees in full bloom. As we passed, men and women raised peace signs in salute. Spray-painted road signs read (in translation):
“You are now entering Zapatista territory. Here the people order and the government obeys.”
I grew nauseous from the exhaust and the dizzying mountain views, and after six hours in that pickup on this, my sixth day of travel, two things occurred to me: first, I realized that I had traveled “across” Chiapas in what was actually a giant circle; second, I began to suspect that there was no Zapatista organizing school at all, that the lesson I was supposed to absorb was simply that life is a matter of perpetual, cyclical motion. The movement’s main symbol, after all, is a snail’s shell.
Finally, though, we arrived in a village where the houses had thatched roofs and the children spoke only the pre-Hispanic language Ch’ol.
Over the centuries, the indigenous communities of Chiapas survived Spanish conquistadors, slavery, and plantation-style sugar cane fields; Mexican independence and mestizo landowners; racism, railroads, and neoliberal economic reforms. Each passing year seemed to bring more threats to their way of life. As the father of my host family explained to me, the community began to organize itself in the early 1990s because people felt that the government was slowly but surely exterminating them.
The government was chingando, he said, which translates roughly as deceiving, cheating, and otherwise screwing someone over. It was, he said, stealing their lands. It was extracting the region’s natural resources, forcing people from the countryside into the cities. It was disappearing the indigenous languages through its version of public education. It was signing free trade agreements that threatened to devastate the region’s corn market and the community’s main subsistence crop.
So on January 1, 1994, the day the North American Free Trade Agreement went into effect, some residents of this village -- along with those from hundreds of other villages -- seized control of major cities across the state and declared war on the Mexican government. Under the name of the Zapatista Army of National Liberation, they burned the army’s barracks and liberated the inmates in the prison at San Cristóbal de Las Casas.
In response, the Mexican army descended on Chiapas with such violence that the students of Mexico City rioted in the streets. In the end, the two sides sat down for peace talks that, to this day, have never been resolved.
The uprising itself lasted only 12 days; the response was a punishing decade of repression. First came the great betrayal. Mexican President Ernesto Zedillo, who, in the wake of the uprising, had promised to enact greater protections for indigenous peoples, instead sent thousands of troops into the Zapatistas’ territory in search of Subcomandante Marcos, the world-renowned spokesperson for the movement.
They didn’t find him. But the operation marked the beginning of a hush-hush war against the communities that supported the Zapatistas. The army, police, and hired thugs burned homes and fields and wrecked small, communally owned businesses. Some local leaders disappeared. Others were imprisoned. In one region of Chiapas, the entire population was displaced for so long that the Red Cross set up a refugee camp for them. (In the end, the community rejected the Red Cross aid, in the same way that it also rejects all government aid.)
Since 1994, the movement has largely worked without arms. Villagers resisted government attacks and encroachments with road blockades, silent marches, and even, in one famous case, an aerial attack composed entirely of paper airplanes.
The Boy Who Is Free
Fifteen years after the uprising, a child named Diego was born in Zapatista territory. He was the youngest member of the household where I was staying, and during my week with the family, he was always up to something. He agitated the chickens, peeked his head through the window to surprise his father at the breakfast table, and amused the family by telling me long stories in Ch’ol that I couldn’t possibly understand.
He also, unknowingly, defied the government’s claim that he does not exist.
Diego is part of the first generation of Zapatista children whose births are registered by one of the organization’s own civil judges. In the eyes of his father, he is one of the first fully independent human beings. He was born in Zapatista territory, attends a Zapatista school, lives on unregistered land, and his body is free of pesticides and genetically modified organisms. Adding to his autonomy is the fact that nothing about him -- not his name, weight, eye color, or birth date -- is officially registered with the Mexican government. His family does not receive a peso of government aid, nor does it pay a peso worth of taxes. Not even the name of Diego’s town appears on any official map.
By first-world standards, this autonomy comes at a steep price: some serious poverty. Diego’s home has electricity but no running water or indoor plumbing. The outhouse is a hole in the ground concealed by waist-high tarp walls. The bathtub is the small stream in the backyard. Their chickens often free-range it right through their one-room, dirt-floor house. Eating them is considered a luxury.
The population of the town is split between Zapatistas and government loyalists, whom the Zapatistas call “priistas” in reference to Mexico’s ruling political party, the PRI. To discern who is who, all you have to do is check whether or not a family’s roof sports a satellite dish.
Then again, the Zapatistas aren’t focused on accumulating wealth, but on living with dignity. Most of the movement’s work over the last two decades has involved patiently building autonomous structures for Diego and his generation. Today, children like him grow up in a community with its own Zapatista schools; communal businesses; banks; hospitals; clinics; judicial processes; birth, death, and marriage certificates; annual censuses; transportation systems; sports teams; musical bands; art collectives; and a three-tiered system of government. There are no prisons. Students learn both Spanish and their own indigenous language in school. An operation in the autonomous hospital can cost one-tenth that in an official hospital. Members of the Zapatista government, elected through town assemblies, serve without receiving any monetary compensation.
Economic independence is considered the cornerstone of autonomy -- especially for a movement that opposes the dominant global model of neoliberal capitalism. In Diego’s town, the Zapatista families have organized a handful of small collectives: a pig-raising operation, a bakery, a shared field for farming, and a chicken coop. The 20-odd chickens had all been sold just before Christmas, so the coop was empty when we visited. The three women who ran the collective explained, somewhat bashfully, that they would soon purchase more chicks to raise.
As they spoke in the outdoor chicken coop, there were squealing noises beneath a nearby table. A tangled cluster of four newborn puppies, eyes still crusted shut against the light, was squirming to stay warm. Their mother was nowhere in sight, and the whole world was new and cold, and everything was unknown. I watched them for a moment and thought about how, although it seemed impossible, they would undoubtedly survive and grow.
Unlike Diego, the majority of young children on the planet today are born into densely packed cities without access to land, animals, crops, or almost any of the natural resources that are required to sustain human life. Instead, we city dwellers often need a ridiculous amount of money simply to meet our basic needs. My first apartment in New York City, a studio smaller than my host family’s thatched-roof house, cost more per month than the family has likely spent in Diego’s entire lifetime.
As a result, many wonder if the example of the Zapatistas has anything to offer an urbanized planet in search of change. Then again, this movement resisted defeat by the military of a modern state and built its own school, medical, and governmental systems for the next generation without even having the convenience of running water. So perhaps a more appropriate question is: What’s the rest of the world waiting for?
Around six o’clock, when night falls in Oventic, the music for the celebration begins. On stage, a band of guitar-strumming men wear hats that look like lampshades with brightly colored tassels. Younger boys perform Spanish rap. Women, probably from the nearby state of Veracruz, play son jarocho, a type of folk music featuring miniature guitar-like instruments.
It’s raining gently in the open field. The mist clings to shawls and skirts and pasamontañas, the face-covering ski masks that have become iconic imagery for the Zapatistas. “We cover our faces so that you can see us” is a famous Zapatista saying. And it’s true: For a group of people often erased by politicians and exploited by global economies, the ski-masks have the curious effect of making previously invisible faces visible.
Still, there are many strategies to make dissent disappear, of which the least effective may be violence. The most ingenious is undoubtedly to make the rest of the world -- and even the dissenter herself -- dismissive of what’s being accomplished. Since curtailing its military offensive, the government has waged a propaganda war focused on convincing the rest of Mexico, the world, and even Zapatista communities themselves that the movement and its vision no longer exist.
But there are just as many strategies for keeping dissent and dissenters going. One way is certainly to invite thousands of outsiders to visit your communities and see firsthand that they are real, that in every way that matters they are thriving, and that they have something to teach the rest of us. As Diego’s father said in an uncharacteristic moment of boastfulness, “I think by now that the whole world has heard of our organization.”
Writing is another way to prevent an idea and a movement from disappearing, especially when one is hurtling down the highway in Texas headed back to New York City, already surrounded by a reality so different as to instantly make the Zapatistas hard to remember.
The most joyous way to assert one’s existence, however, is through celebration.
The New Year arrived early in Oventic. One of the subcomandantes had just read a communiqué issued by the organization's leadership, first in Spanish, then in the indigenous languages Tzotzil and Tzeltal. The latter translations took her nearly twice as long to deliver, as if to remind us of all the knowledge that was lost with the imposition of a colonial language centuries ago. Then, a low hiss like a cracked soda can, and two fireworks exploded into the air.
“Long live the insurgents!” a masked man on stage cried.
“Viva!” we shouted. The band burst into song, and two more fireworks shot into the sky, their explosions well-timed drumbeats of color and sound. The coordination was impeccable. As the chants continued, the air grew so smoky that we could barely see the fireworks exploding, but in that moment, I could still feel their brilliance and the illumination, 20 years old, of the movement releasing them.
TomDispatch regular Laura Gottesdiener is a journalist and the author of A Dream Foreclosed: Black America and the Fight for a Place to Call Home. She is an editor for Waging Nonviolence and has written for Playboy, Al Jazeera America, RollingStone.com, Ms., the Huffington Post and other publications.
Copyright 2014 Laura Gottesdiener
Image by Julia Stallabrass, licensed under Creative Commons.