"I quote others so that I can better express myself." — Michel de Montaigne
 

                                                                                                                                                                     

a blog featuring a history professor’s scattered ruminations about the past, the present, and the ways that we connect the two

                                                                                                                                                                         
 
 
 
The Archives: Spring 2012
 

11 MAY

    I Fully Agree: The Cost of College is Too High

One issue that has become politicized this campaign season is the cost of college.  The pivotal question of the current political moment is whether the federal government should step in to help lower the cost of college by reducing future interest rates on student loans. Although most Democrats and most Republicans now say “yes,” Washington is at a standstill on the issue because of disagreement over how the federal backing of student loans will be paid for.  Democrats favor closing tax loopholes and tax breaks enjoyed by select corporations.  Republicans want to cut programs that help constitute the social safety net.

But the politicization of student loans raises the larger question of why college is increasingly expensive for most students, regardless of whether they go to a private four-year college, a public equivalent, or even a community college.  Writing for the online magazine Salon, Andrew Leonard offers a succinct but detailed explanation of many of the issues involved, thus demonstrating that there's no easy answer to the question.

One explanation for tuition hikes favored by many conservatives is that because government-backed, low-interest student loans are so accessible, colleges take advantage of this “easy money.”  They raise their tuition, in other words, because they know students have access to more money and are willing to take on such loans to secure future prosperity.  But as Leonard points out, the problem with this argument is that government funding for public institutions of higher learning has actually dropped markedly over the past decade.  This has forced most institutions to raise their tuition, which in turn has led to students taking on greater debt:

 

While there are certainly some sectors of higher education in which there is a clear relationship between student loans and higher tuitions, for the great majority of college students the problem isn’t that the government is giving them too much money. Quite the opposite: It’s the collapse of direct government support for higher education that is the main driver of higher tuition costs.

“The reality is that student debt is not rising because the government is putting more money into higher education,” says Kevin Carey, policy director at Education Sector, a Washington-based nonpartisan think tank. “It’s rising because the government is putting less money into higher education.”
 

Even so, Leonard agrees that rising tuition has been a trend not only at public institutions, but also at private ones and even at for-profit colleges such as the University of Phoenix.  Thus, as he also argues, numerous factors have contributed to the problem.  Yet since a large majority of college students end up going to public institutions, the primary causal connection is clear for Leonard:

 

The bottom line: For the large majority of college students, rising tuitions have nothing to do with the availability of student loans or Pell Grants. What’s happening, instead, is that the burden of paying for college that was previously provided directly by government has now been shifted onto the backs of students, in the form of crippling debt.

 

At the risk of alienating some at my own institution, allow me to offer some other reasons—beyond the decline in state funding—for why the cost of tuition has risen significantly.  First, virtually all institutions of higher education have made massive investments in new technology: computers, servers, internet infrastructure such as Wi-Fi, countless software applications, and advanced telecommunication delivery systems.  Most of this has been done, I would argue, on the questionable assumption that this new investment will not only lower the cost of delivering the “product” in the long run, but also make education itself somehow better or more efficacious.  By no means do I qualify as a technological Luddite, as made obvious by the very existence of this blog.  Nonetheless, I largely question the assumption about technology’s educational benefits, in part because I haven’t seen much evidence on the ground to support it.

Second, much of higher education in the United States is marked by administrative bloat.  As Leonard rhetorically asks in passing, “do schools really need to have as many administrative personnel as teaching personnel?”   At my own institution and others like it, I’m struck not only by how large the staffs of most major administrators have become, but also by how much the faculty is expected to take part in administrative work, often couched as doing “service” for the university.  Accordingly, I continue to be mystified that, at institutions supposedly dedicated to student thinking, reading, and writing, so much time and energy go toward shaping the framework of the institutions themselves.  And at what cost?   My fear is that the time and energy devoted to “service” will take away from critical intellectual engagement on the part of the faculty.  The time and energy of each faculty member are, after all, not unlimited.  In my case, the study of history and student understanding of the past inevitably take a hit.  Even more significant, I’ve seen a lot of cost-cutting measures whereby an overworked faculty has been further whittled down, while I’ve seen virtually none of the same when it comes to administrators—whose salaries, by the way, are usually significantly higher than those of the faculty.

My own reasoning and observations about rising tuitions, I’ll admit, are somewhat speculative, anecdotal, and assailable by valid counter-arguments.  Yet what pains me most is that the two fundamental questions I continue to raise—Does more technology automatically translate into a better education for students?  Do we really need such a large administrative body to oversee the fundamental mission of making students better thinkers, readers, and writers?—are not even showing up on the radar screen, either among the faculty or among the administration at my own institution.  What matters more, it seems, is how aesthetically pleasing the campus looks this graduation weekend.  "Pomp and Circumstance" indeed.

UPDATE (16 MAY). Please note that the New York Times has been running a very informative series on college student debt this week. Much of this series confirms what Leonard wrote last week, but it also considers other aspects of the issue, including how and why state support for higher education has fallen. As Andrew Martin and Andrew Lehren explained in the introductory article to the series last Sunday, the decision to reduce state funding for higher education is highly emblematic of current political cowardice, to say nothing of incredible short-sightedness:

 

From 2001 to 2011, state and local financing per student declined by 24 percent nationally. Over the same period, tuition and fees at state schools increased 72 percent, compared with 29 percent for nonprofit private institutions, according to the College Board. Many of the cuts were the result of a sluggish economy that reduced tax revenue, but the sharp drop in per-student spending also reflects a change: an increasing number of lawmakers voted to transfer more of the financial burden of college from taxpayers to students and their families. (Local funding is a small percentage of the total, and mostly goes to community colleges.) . . . Donald E. Heller, an expert on higher education, said elected officials in both parties had figured out that colleges were one of the few parts of state government that could raise money on their own.  If lawmakers cut state financing, the schools could make it up by raising tuition. “It lets legislators off the hook and makes universities look like the bad guy,” said Mr. Heller, dean of the College of Education at Michigan State University.

 

Massive budget cuts to state institutions of higher learning are currently being depicted as "courageous" and indicative of "making the hard decisions." In fact, such cuts are among the most cowardly acts politicians commit these days, precisely because they realize that the colleges, not the politicians themselves, will be held responsible for higher tuition and the accompanying student debt.

 
 

9 MAY

    Austerity and Capitalism: Two Sides of the Same Historical Myth

Most of my productive time these days is spent reading extensive amounts of crappy French handwriting from the eighteenth century, thanks to digital images I’ve taken of original documents from various archives.  It’s never an easy task, though occasionally there is some payoff in the form of a very interesting story or two.

But whenever I have a few minutes to do some recreational reading, I'm now turning to Marilynne Robinson’s latest collection of essays, When I Was a Child I Read Books.  I first saw the book in a Minneapolis airport bookstore a couple of months back and, after reading a few pages, became convinced that Robinson's essays contained extraordinary insight.  It’s a delight to sift through such good writing, particularly when the prose is as thoughtful as hers.

There are many parts to her essays that strike me as deeply penetrating, but the most prescient may be found in her “Austerity as Ideology.”  One of her arguments in this essay is that the recent push for economic austerity has become an ideology, for which reason it stands immune to historical facts that call its most fundamental assumptions into question.  Even more to the point, the current ideological march toward austerity is not simply a cognitive or political morass for Robinson, but also a sign of an ongoing metaphysical and moral crisis:

 

But in a strange alembic of this moment, the populace at large is thought of by a significant part of this population as a burden, a threat to their well-being, to their “values.”  There is at present a dearth of humane imagination for the integrity and mystery of other lives.  In consequence, the nimbus of art and learning and reflection that has dignified our troubled presence on this planet seems now like a thinning atmosphere.  Who would have thought that a thing so central to human life could prove so vulnerable to human choices?

 

Those observations largely coincide with my own on the present zeitgeist in the United States.  To wit, one reason why the historical discipline and similar ones in the Humanities are increasingly perceived as superfluous and unworthy of professionalized study today is that learning, creativity, and contemplation—the fruit of which is to behold what Robinson calls the “integrity and mystery” of human life—cannot be easily commodified and given a price.  “Austerity has been turned against institutions and customs that have been major engines of wealth creation,” Robinson writes, “because they are anomalous in terms of a radically simple economics.”  When the quote is placed in context, it's clear that Robinson is referring not only to her own institution—the University of Iowa—but also to ones like it, including my own.

But hasn’t it always been this way in America?  Fiscal austerity has been credited with making the nation more economically successful, entrepreneurially innovative, and socially mobile than most.  A close look at the historical record, however, suggests that America's austere legacy and its putative benefits are more fiction than fact. For example, during the Cold War the United States spent lavishly on science and technology—albeit in an effort to beat the Soviets to the moon and beyond—and did the same with the Marshall Plan, though there were ulterior political motives for doing this as well. The U.S. government's spending on national defense, moreover, has been profligate ever since World War II.

No matter how historically fallacious American austerity may be, Robinson argues that something unprecedented has recently occurred with this belief; it has become an ideology, which is to say that “it all makes perfect sense once its assumptions are granted. . . it gathers evidence opportunistically, and is utterly persuaded by it, fueling its own confidence, sometimes to the point of messianic certainty.”  In other words, it has graduated from a mere cultural proclivity to being “rational, a pure product of the human mind.”

As Robinson cogently explains further, ideological austerity is inseparable from a belief in the infallibility of unbridled capitalism, in which markets supposedly act of their own accord and magically end up benefitting everyone.  Indeed, ideological austerity depends on an assumption that individualized capital and the freedom to do whatever one wants with it are the key to humanity’s unlimited progress.  Yet here again, the historical reality calls such an ideological assumption into serious doubt.  Unadulterated capitalism has never existed in the United States.  And it’s a safe bet that, unless we want an epic economic collapse, it never will either.  In this respect, Robinson’s history is spot on:

 

. . . America has never been an especially capitalist country.  The postal system, the land grant provision for public education, the national park system, the Homestead Act, the graduated income tax, the Social Security system, the G.I. Bill—all of these were and are massive distributions or redistributions of wealth meant to benefit the population at large.  Even “the electrification of the countryside,” Lenin’s great and unrealized dream, was achieved in America by a federal program begun in 1936.

 

In addition, Robinson writes that “Europeans are generally unaware of the degree to which individual state governments provide education, health care, libraries, and other services that complement or supplement federal programs.”  Quite true, but I would add that most Americans are unaware of this as well, if only because a historical mythology reinforces the faulty idea that the United States has always had a stand-on-your-own, make-and-remake-yourself society.

Thus what we see at this moment is the prominence of one historical myth that has two sides: those of fiscal austerity and of unbridled capitalism.  In all likelihood, this two-sided myth will continue to play a role in eviscerating public institutions, including those of primary, secondary, and higher education.  That’s why it’s especially critical for professional historians to identify such a myth and call it out for what it really is.

 
 

3 MAY

    Washington Gridlock: Who's Responsible and How Bad Is It?

It doesn’t take a seasoned political wonk to recognize that functional governance in Washington, D.C. is virtually impossible these days.  And if one were to rely on the mainstream media for explaining why this is so, the story would be as follows:

Both the Republican and Democratic parties are to blame, and largely in equal measure.  Beholden to the most extreme elements of their parties, politicians on both sides of the aisle are contributing to an inability to work together and arrive at compromise.  As long as both Democrats and Republicans continue their ideological rigidity, nothing will get done in Washington, no matter who is president now or in the next four years.

There’s only one problem with this narrative—it’s flat-out wrong.  Or so say Thomas Mann of the left-leaning Brookings Institution and Norm Ornstein of the right-tilted American Enterprise Institute.  Though their institutional affiliations represent opposing sides of the political spectrum, their conclusion is rather one-sided:

 

We have been studying Washington politics and Congress for more than 40 years, and never have we seen them this dysfunctional.  In our past writings, we have criticized both parties when we believed it was warranted.  Today, however, we have no choice but to acknowledge that the core of the problem lies with the Republican Party. . . . The GOP has become an insurgent outlier in American politics.  It is ideologically extreme; scornful of compromise; unmoved by conventional understanding of facts, evidence and science; and dismissive of the legitimacy of its political opposition. . . . When one party moves this far from the mainstream, it makes it nearly impossible for the political system to deal constructively with the country’s challenges.

 

In their analysis of recent history, Ornstein and Mann point to several developments that helped create an ideologically intransigent Republican Party.  The Civil Rights Movement and Lyndon Johnson’s War on Poverty were responsible for shifting most whites in the South away from the Democratic Party and toward the Republican Party.  Moreover, social conservatives mobilized themselves after the 1973 Roe v. Wade decision, an anti-tax movement and broader libertarianism emerged in the late 1970s, and a conservative media network featuring talk radio, Fox News, and right-wing blogs congealed into a coordinated attack machine for the Republican Party beginning in the early 1990s.

But according to Ornstein and Mann, “the real move to the bedrock right starts with two names: Newt Gingrich and Grover Norquist.”  Gingrich initiated a scorched-earth political strategy in which congressional compromise is heresy.  Norquist has been a highly successful champion of anti-tax politics, to such a degree that today all but 4 of the 242 Republicans in the House of Representatives and 41 of the 46 Republicans in the U.S. Senate have signed a pledge exacted by Norquist to never vote to raise taxes.

What have been the repercussions of such GOP recalcitrance?  In case you haven’t been paying attention in the last three and a half years,

 

. . . thanks to the GOP, compromise has gone out the window in Washington.  In the first two years of the Obama administration, nearly every presidential initiative met with vehement, rancorous and unanimous Republican opposition in the House and the Senate, followed by efforts to delegitimize the results and repeal the policies. The filibuster, once relegated to a handful of major national issues in a given Congress, became a routine weapon of obstruction, applied even to widely supported bills or presidential nominations.  And Republicans in the Senate have abused the confirmation process to block any and every nominee to posts such as the head of the Consumer Financial Protection Bureau, solely to keep laws that were legitimately enacted from being implemented. . . .  In the third and now fourth years of the Obama presidency, divided government has produced something closer to complete gridlock than we have ever seen in our time in Washington, with partisan divides even leading last year to America’s first credit downgrade.

 

Ornstein and Mann are quick to emphasize, however, that Democrats haven’t been innocent when it comes to slash-and-burn politics.  But the degree to which Democrats have engaged in it differs dramatically from that of Republicans:

 

No doubt, Democrats were not exactly warm and fuzzy toward George W. Bush during his presidency.  But recall that they worked hand in glove with the Republican president on the No Child Left Behind Act, provided crucial votes in the Senate for his tax cuts, joined with Republicans for all the steps taken after the Sept. 11, 2001, attacks and supplied the key votes for the Bush administration’s financial bailout at the height of the economic crisis in 2008.  The difference is striking.

 

Ornstein and Mann add that one reason why this trend has continued is the narrative created by a supposedly neutral media.  They write, “We understand the values of mainstream journalists, including the effort to report both sides of a story. But a balanced treatment of an unbalanced phenomenon distorts reality. If the political dynamics of Washington are unlikely to change anytime soon, at least we should change the way that reality is portrayed to the public.”  Thus they ask journalists not to take the safe path of appearing to be even-handed by simply publishing opposing views without any kind of filter.  They suggest that journalists should reveal to their readers which politicians are telling the truth and which are “taking hostages.”

Still, Ornstein and Mann admit that ultimately the voters must decide whether this political dynamic will continue.  “If they can punish ideological extremism at the polls and look skeptically upon candidates who profess to reject all dialogue and bargaining with opponents,” they conclude, “then an insurgent outlier party will have some impetus to return to the center. Otherwise, our politics will get worse before it gets better.”

No doubt political intransigence is as American as apple pie.  And previous struggles between political parties in United States history have been nothing short of vicious.  Still, more often than not, opposing politicians found ways of striking compromise, which has been indispensable for workable democratic governance.  As I have said elsewhere, that there are highly polarized political views in America today is not, in and of itself, the problem.  Rather, the danger lies with the tendency of one political party to employ any means necessary to obstruct a functional federal government, including the refusal to compromise legislatively on anything.  What will become of such obstruction is impossible to say.  But similar scenarios of the past generally suggest—and the U.S. Civil War is a prime example—that this kind of dynamic rarely ends well.

 
 

2 MAY

    Projecting on the Election in France

I don’t consider myself an expert in contemporary French politics.  Yes, I follow current events there, especially with the help of Le Parisien.  And I tend to pay closer attention to France’s politics whenever I’m in the Hexagon.

With that said, I've noticed that a lot has been written—even in the English-language press—about the first round of the presidential election on 22 April and the upcoming second round on 6 May.  Much of it has centered on how much support the far-right candidate, Marine Le Pen, garnered in the first round (almost 20 percent).  The long and short of it, however, is that center-right President Nicolas Sarkozy will face off against center-left candidate François Hollande this Sunday.  Hollande has a slight lead in the polls, but either could conceivably win.

As a historian, I can’t help but take a long-term view of this presidential election.  The most basic observation is that once more, the final round of the election pits the left against the right: a division that in many ways was cemented by the French Revolution.  To be sure, what is “left” and “right” in French politics has changed considerably over the past 223 years.  Yet what divides the two remains anchored in what happened in the 1789-1799 revolution as well as how this event came to be understood in the nineteenth and twentieth centuries.  Viewing the Revolution as largely a mistake, the French right has underscored—as it did during the event—the importance of law and order, the preservation of “natural” social and economic distinctions, the dangers of rapid political change, and the maintenance of values rooted in Christianity and other traditional institutions.  The French left, on the other hand, has had a more favorable take on the Revolution and its ideals of liberty, equality, and brotherhood.   Although many on the left have acknowledged the Revolution’s excesses, they also have been quick to exalt its reforms: the abolition of seigneurialism; the introduction of rudimentary democracy; equality under the law; and the principle of public secularization.

But to stop there with the historical backfill would be extremely reductive.  What matters much more to this election is undoubtedly Europe’s past one hundred years.  As Adam Gopnik writes in the current issue of the New Yorker, the contextual number to keep in mind for this election is sixty million:

 

That is the approximate (and probably understated) number of Europeans killed in the thirty years between 1914 and 1945, victims of wars of competing nationalisms on a tragically divided continent. The truth needs re-stating: social democracy in Europe, embodied by its union, has been one of the greatest successes in history.  Like all successes, it can seem exasperatingly commonplace. . . Yet the truth ought to remain central.  A continent torn by the two most horrible wars in history achieved a remarkable half century of peace and prosperity, based on a marriage of liberalism properly so called (individual freedoms, including the entrepreneurial kind) and socialism rightly so ordered (as an equitable care for the common good).

 

Gopnik thus suggests that a mildly petulant center-right politician facing off against a charismatically-challenged, milquetoast one from the center-left in France represents no less than a remarkable triumph of political consensus and pragmatism in Europe as a whole, especially given how its politics has been devastatingly defined by extremist ideologies dating back to—you guessed it—the French Revolution.  And while there is much schadenfreude on this side of the Atlantic over the troubles that the European Union is now having, many in the United States run the risk of overlooking how destructive their own uncompromising ideologies can be.  As Gopnik aptly put it,

 

Any pleasure taken in the failure of Europe to expunge all its demons threatens to become one more way of not having to examine our own.  A mild-mannered, European-minded citizen king is, at least, better than a passionately convinced exceptionalist.  France, and Europe, learned that lesson the hard way.

 

Sooner or later, the United States will have to learn the same lesson, though one hopes in a way that precludes sixty million deaths.  Even so, a hard reckoning awaits Americans if we cannot find ways of mediating political differences so that good governance is possible.  Europe may now seem like a basket case to many in this nation, but much of this view may be due not only to a woeful absence of historical perspective, but also to a projection of America’s own current political dysfunction and paralysis.

 
 

29 APR

    A Tortured History in the Making

Whether it was intentional or not, this was buried in the news over the weekend:

 

A nearly three-year-long investigation by Senate Intelligence Committee Democrats is expected to find there is little evidence the harsh "enhanced interrogation techniques" the CIA used on high-value prisoners produced counter-terrorism breakthroughs.

People familiar with the inquiry said committee investigators, who have been poring over records from the administration of President George W. Bush, believe they do not substantiate claims by some Bush supporters that the harsh interrogations led to counter-terrorism coups.

 

As the article points out, not only did torture fail to yield any significant intelligence, but also the information gained through torture may have sidetracked intelligence officials and prevented them from pursuing better leads on terrorism:

 

Critics also say that still-classified records are likely to demonstrate that harsh interrogation techniques produced far more information that proved false than true. . .  Some U.S. counter-terrorism officials have acknowledged that in the years after the September 11 attacks, U.S. agencies were overwhelmed with bogus tips about possible plots and attacks.

 

What I find so incredible about the torture regimen of the George W. Bush administration is how ignorant this whole lot was regarding what history suggests about torture and its efficacy.  To be sure, torture has as long a history as humanity itself, but in the eighteenth century it was increasingly discarded by states in western Europe, both as a punishment and as a judicial tool. In keeping with this trend, moreover, its use was prohibited by the United States Constitution not long after the nation was founded.  But why?

It's a historically complicated question to answer. The popular view seems to be that this was due to the emergence of human rights, which were seen to be gravely violated by acts of torture. The U.S. Bill of Rights, for example, forbade not only cruel and unusual punishment but also self-incrimination, which was one of the primary purposes of judicial torture.

But probably the more likely reason for torture's progressive abandonment in the eighteenth century was that it came to be recognized as counterproductive for uncovering the truth. As more and more philosophers applied principles of human reason to matters of crime and justice, they realized that judicial torture was prone to lead the court astray. This explains how Cesare Beccaria, writing in 1764, correctly understood what George W. Bush and Dick Cheney, among others, did not:

 

Every act of the will is invariably in proportion to the force of the impression of our senses.  The impression of pain, then, may increase to such a degree that, occupying the mind entirely, it will compel the sufferer to use the shortest method of freeing himself from torment.  His answer, therefore, will be an effect as necessary as that of fire or boiling water, and he will accuse himself [or someone else] of crimes of which he is innocent: so that the very means employed to distinguish the innocent from the guilty [or truth from lies] will most effectually destroy all difference between them.

 

In other words, if the pain becomes too extreme, the person being tortured will do anything to make the pain stop, which usually means saying whatever one thinks the torturers want to hear.  It's really that simple.

Beccaria provided other reasons why torture was counterproductive to criminal justice, including the fact that those who confessed to a crime under torture were not necessarily more innocent or more guilty, but were simply less physically or psychologically able to withstand the torture.  As for divulging accomplices or "actionable intelligence," Beccaria pointed out that those who did commit crimes, especially given their moral character, could hardly be counted on when it came to reliable information.

The use of torture by American intelligence officials was not only a violation of the most basic human rights; it was also based on some of the most stupid assumptions one could make about torture and its effects.  And what makes it all the worse, in my mind, is that these asinine ideas were known and demonstrated to be false nearly 250 years ago by Beccaria and other social reformers of the Enlightenment.  Indeed, they had been thoroughly debunked by the end of the eighteenth century.

Many will look back at the George W. Bush administration, will recognize the torture regimen for what it was, and will conclude that Bush and his lackeys were responsible for a spectacular moral failure.  And of course they will be right.  But those with a broader appreciation of history will also identify what happens when an acute lack of historical consciousness is found among those in the highest echelons of power.

Update (30 APR): In light of last night's "60 Minutes" interview of Jose Rodriguez, who by international standards is a war criminal and should have been prosecuted for destroying incriminating evidence, Andrew Sullivan posted this today. Needless to say, I largely agree with Sullivan. Also note this statement released by Senators Dianne Feinstein and Carl Levin regarding the numerous false claims made by Rodriguez in the interview.

 
 

24 APR

    The “Armocracy” That Is Today's America

Jill Lepore, an historian of the United States who currently teaches at Harvard, has written a highly informative story about guns in America that helps shed some light on the shooting of Trayvon Martin. It can be found in the 23 April edition of The New Yorker.

She makes several important points about guns and gun control in the United States and, fortunately for this blog, places them in a sound historical context.  For example, although the Supreme Court recently ruled that the Second Amendment protects the individual right to bear arms, in the period when the amendment was written and adopted guns were relatively few, and most had long stocks and barrels.  The small number of pistols in existence in the late eighteenth century, not unlike contemporaneous muskets, could be fired only once before being cumbersomely reloaded.  Thus, as Lepore writes, “In size, speed, efficiency, capacity, and sleekness, the difference between an eighteenth-century musket and the gun that George Zimmerman was carrying is roughly the difference between the first laptop computer—which, not counting the external modem and the battery pack, weighed twenty-four pounds—and an iPhone.”

Lepore also points out that, contrary to what gun-rights proponents claim, most of U.S. history has been marked by the control of firearms, not their unlimited use:

 

In the two centuries following the adoption of the Bill of Rights, in 1791, no amendment received less attention in the courts than the Second, except the Third. As Adam Winkler, a constitutional-law scholar at U.C.L.A., demonstrates in a remarkably nuanced new book, “Gunfight: The Battle Over the Right to Bear Arms in America,” firearms have been regulated in the United States from the start. Laws banning the carrying of concealed weapons were passed in Kentucky and Louisiana in 1813, and other states soon followed: Indiana (1820), Tennessee and Virginia (1838), Alabama (1839), and Ohio (1859). Similar laws were passed in Texas, Florida, and Oklahoma. As the governor of Texas explained in 1893, the “mission of the concealed deadly weapon is murder. To check it is the duty of every self-respecting, law-abiding man.”

 

So what changed?  As Lepore further explains, “between 1968 and 2012, the idea that owning and carrying a gun is both a fundamental American freedom and an act of citizenship gained wide acceptance and, along with it, the principle that this right is absolute and cannot be compromised; . . .” This trend was aided, in large part, by the emergence of the National Rifle Association as an electoral and lobbying juggernaut. This growth in the NRA's political potency happened to dovetail with the rise of a modern conservatism emphasizing libertarian rights and decrying any form of government regulation.  The NRA and opportunistic politicians in both major parties became so successful in promoting gun rights that “[i]n 1991, a poll found that Americans were more familiar with the Second Amendment than they were with the First: the right to speak and to believe, and to write and to publish, freely.”

Twenty years later, gun violence in America is as senseless as ever and shows little sign of significantly dissipating.  To wit, as Lepore concludes:

 

One in three Americans knows someone who has been shot. As long as a candid discussion of guns is impossible, unfettered debate about the causes of violence is unimaginable. Gun-control advocates say the answer to gun violence is fewer guns. Gun-rights advocates say that the answer is more guns: things would have gone better, they suggest, if the faculty at Columbine, Virginia Tech, and Chardon High School had been armed. That is the logic of the concealed-carry movement; that is how armed citizens have come to be patrolling the streets. That is not how civilians live. When carrying a concealed weapon for self-defense is understood not as a failure of civil society, to be mourned, but as an act of citizenship, to be vaunted, there is little civilian life left.

 

In the last part of the paragraph Lepore admittedly injects opinion, but from my perspective it’s well founded on fact. From a purely historical point of view, there is no precedent—not even in the real “wild west,” as opposed to the mythical one—for the now-accepted idea that owning a gun and carrying it concealed are part and parcel of American citizenship.  Yet how are current conceal-and-carry laws justified?  More often than not, it’s through one of two historical arguments.  The first, of course, is an appeal to the Second Amendment—the recent interpretation of which by the Supreme Court is rooted in a claim of “originalism,” meaning the alleged original intent of the Constitution’s founders.  Similarly, the second is a blatantly fabricated view of history in which virtually everyone owned guns at the time of the nation’s founding, ostensibly because it was the only way to deal with lawbreakers as well as secure something for dinner.

If one wants to argue that the ideal of universal gun ownership and gun use is confirmed by American history, particularly with reference to the nation’s founders, then a few questions about this version of the past must be asked.  Did the founders envision a time when semi-automatic handguns—which, as we have seen, are capable of inflicting massive civilian casualties—were so ubiquitous that virtually anyone in the nation could secure one?  Did they prophesy the destructive force of the diminutive ammunition now in use, including armor-piercing bullets?  Did they foresee a country that possessed, according to Lepore, 106 million handguns, 105 million rifles, and 38 million shotguns?  Did they look forward to a republic that had, as those statistics suggest, approximately one gun for every man, woman, and child in this nation?  Merely raising these questions demonstrates how absurd historical arguments about a right to carry and conceal firearms are.

Which brings me to my own point about abusing the past.  When the creation of laws or their subsequent interpretation is based on lousy history, the end result is going to be lousy laws.  And in this case, as we have seen too often already, such short-sighted and historically baseless laws lead to senseless and needless gun violence committed by, and done to, individuals who supposedly live in a civil society.

 
 

18 APR

    From the Files of Sad but True

This, my history friends and colleagues, is what we’re up against.

And I don’t know whether to laugh or cry.

While this may not necessarily be a sign that the end of our civilization is near, it does raise the question of whether Twitter attracts people who assume that all the world's knowledge can be reduced to 140 characters or fewer.

 
 

17 APR

    History and the Health Insurance Mandate

Wouldn’t it be great if we could just go back to the founding of the United States and recapture what true democratic governance was all about?

Had we done this two years ago, we wouldn’t have ended up wasting the U.S. Supreme Court’s precious time on an obviously unconstitutional law like the Affordable Care Act.  The founders of this nation, after all, were especially attuned to government overreach and would never have even considered mandates, let alone passed them.  Right?

Lesson number one for those who revel in the good old days: know your history.  As Einer Elhauge demonstrates in the on-line New Republic, there’s a history that conforms to an ideology, and then there’s one that's a little more accurate:

 

The founding fathers, it turns out, passed several mandates of their own. In 1790, the very first Congress—which incidentally included 20 framers—passed a law that included a mandate: namely, a requirement that ship owners buy medical insurance for their seamen. This law was then signed by another framer: President George Washington. That’s right, the father of our country had no difficulty imposing a health insurance mandate.

That’s not all. In 1792, a Congress with 17 framers passed another statute that required all able-bodied men to buy firearms. Yes, we used to have not only a right to bear arms, but a federal duty to buy them. Four framers voted against this bill, but the others did not, and it was also signed by Washington. Some tried to repeal this gun purchase mandate on the grounds it was too onerous, but only one framer voted to repeal it.

Six years later, in 1798, Congress addressed the problem . . . [of whether] the employer mandate to buy medical insurance for seamen covered drugs and physician services but not hospital stays. And you know what this Congress, with five framers serving in it, did? It enacted a federal law requiring the seamen to buy hospital insurance for themselves. That’s right, Congress enacted an individual mandate requiring the purchase of health insurance. And this act was signed by another founder, President John Adams.

 

I’m no legal scholar and, as the many reader comments on Elhauge’s article indicate, the applicability of these laws to current legal argument is not as clear-cut as it may first seem.  Still, if the Supreme Court rules that the health insurance mandate violates the U.S. Constitution on the basis of an “originalist” argument (and I fully expect it will), it will be interesting to see how the Court couches its argument in a way that distinguishes the legislative mandate of “Obamacare” from those approved by the Constitution's founders.

I’ll leave it to constitutional scholars to decide whether the founders’ mandates speak to the legality of the Affordable Care Act.  In any case, what surprises me most in light of Elhauge’s article is that there was little to no discussion of these historical facts in the larger debate over the ACA—especially as documented by the mainstream media—during the past two years.  An innocent oversight, or an indication of which side was able to control the debate?  Either way, the omission is glaring.

 
 

15 APR

    Why We Are Selective about Our Past

As you’ll note, numerous posts below dwell on the theme of how people in the present forget about certain aspects of the past, either unconsciously or by concerted choice.  And given how common the practice is, it raises a key question: Why is this so often the case?

In short, I don’t think there's a monolithic answer.  At the same time, however, I have little doubt that there are larger political, social, cultural, and economic forces at work in this than most people—even historians—are willing to acknowledge.  Let me offer two examples of these forces to illustrate my point.

One reason why individuals are so selective about remembering the past is that there's too much at stake, culturally and psychically, to do otherwise.  Many times what’s at issue is identity: how we see ourselves and define the group to which we belong in relation to others.  Admittedly every person has multiple identities: one can be a father, a son, a neighbor, a spouse, a Republican, an American—all at the same time.  Yet at the core of many of these identities is a collective past that makes us what we are in the present.  If that collective past is suggestive of shame and dishonor, it inevitably casts our highly prized identity in a questionable light.

This is why, it seems to me, much of America continues to struggle with race.  As Americans, we adhere to a mythology about our past that upholds ideals such as freedom, equality of opportunity, unanimity, and respect for others.  To a degree this is understandable and perhaps even necessary.  But of course such mythology is inherently one-sided; it fails to tell the story of servitude, inequality, division, and hatred, which have been as much a part of America as their opposites.  Because the mythology has become so deeply entrenched in beliefs, values, and assumptions, most people fail to examine it critically, and it thus becomes an unspoken reality for many.  This is why, as Brent Staples points out in today’s New York Times, many Americans do not see themselves as racists even though their actions suggest racist beliefs and assumptions:

 

Very few Americans make a conscious decision to subscribe to racist views.  But the toxic connotations that the culture has associated with blackness have been embedded in thought, language and social convention for hundreds of years.  This makes it easy for people to see the world through a profoundly bigoted lens without being aware that they are doing so.

Over the last three decades, a growing body of research has shown that racial stereotypes play a powerful role in judgments made by ostensibly fair-minded people.  Killers of whites, for example, are more likely to receive the death penalty than killers of blacks—and according to the psychologist Jennifer Eberhardt, juries tend to see darker defendants as more “deadworthy” in capital cases involving white victims.

 

Our identity in the present, therefore, often dissuades us from becoming aware of unsavory aspects of the past that shape us today.  To say that race no longer matters in America is to conclude that a 300-year past has no claim on the present.  Not only is this false; it’s an insult to critical thought.

A second reason why people either consciously or unconsciously forget about parts of the past is that there is too much in the current political or economic balance to remember it in a comprehensive way.  In other words, there is power at stake in the present, and a selective view of the past can help maintain one’s power or even increase it.

One of the big complaints in American political discourse these days concerns how polarized the nation is.  But as Paul Krugman and Robin Wells point out in a Salon piece, polarization has long been the rule of American politics, not the exception:

 

What’s more surprising is the fact that the relatively nonpolarized politics of the post-war generation is a relatively recent phenomenon—before the war, and especially before the Great Depression, politics was almost as polarized as it is now. And the track of polarization closely follows the track of income inequality, with the degree of polarization closely correlated over time with the share of total income going to the top 1 percent.

 

Krugman and Wells thus assert that there is a correlation between political polarization and income inequality; the 1950s and 1960s were exceptionally non-polarized politically because Americans tended to be more on the same economic page.  This was not the case, however, between 1900 and 1940.

So what does this have to do with selective remembrance of the past and its role in the present?  Krugman and Wells go on to contend that the lessons of the Great Depression, particularly those articulated by John Maynard Keynes, were all but ignored when the crisis of 2008 hit—not only by Republicans and many Democrats, but also by many academics who should have known better:

 

Again, our point is that the dramatic rise in the incomes of the very affluent left us ill prepared to deal with the current crisis. We arrived at a Keynesian crisis demanding a Keynesian solution — but Keynesian ideas had been driven out of the national discourse, in large part because they were politically inconvenient for the increasingly empowered 1 percent.

 

Krugman has made these points many times over.  And the historical argument often employed against him is that Keynesian economics had nothing to do with the recovery from the Great Depression; the demands of World War II are what really did it.  But even if this counterargument has some credence, who initiated the demands of war that helped the nation recover?  In keeping with what Keynes argued, it was none other than the U.S. government in its extended wartime role.

Overall, Krugman and Wells conclude that “rising inequality played a central role in causing an ineffective response once the [2008] crisis hit,” and that it “also gave rise to what we have called a Dark Age of macroeconomics, in which hard-won insights about how depressions happen and what to do about them were driven out of the national discourse, even in academic circles.”  But even more to my point, the amnesia about Keynesianism strikes me as an example of how a part of the past is ignored so that power in the present is maintained.  The top 1 percent, after all, are emerging relatively unscathed from the Great Recession.  What about everyone else?

 
 

13 APR

    When a Troubling Past Leads to One's Own Door

The English on-line version of Der Spiegel includes a fascinating article in its recent issue about Moritz Pfeiffer, a German historian whose interest in his nation’s role in the atrocities of World War II led him to interview his own grandfather and grandmother.  In doing so, he sought to discover not only what they had done under the Nazi regime, but also how, sixty to seventy years later, they viewed their beliefs and actions from that time.  As the article explains, Pfeiffer

 

. . . juxtaposed his findings with context from up-to-date historical research on the period and wrote a book that has shed new light on the generation that unquestioningly followed Hitler, failed to own up to its guilt in the immediate aftermath of the war and, more than six decades on, remains unable to express personal remorse for the civilian casualties of Hitler's war of aggression, let alone for the Holocaust.

 

His grandfather was an officer in the Wehrmacht who saw much action in World War II, including a stint on the eastern front against Soviet forces.  When Pfeiffer matched historical records with his grandfather’s personal experience, he found that his grandfather almost surely was aware of war crimes—including the killing of some Soviet prisoners of war—and likely helped to carry them out.  Yet his grandfather said nothing of this in his interviews.  Examining his grandmother’s letters from the period, Pfeiffer found that her devotion to Hitler approached the level of fanaticism.  But he also found her a sweet and loving person as she grew old.  What did this mean for Pfeiffer’s study?

 

Pfeiffer concluded that his grandfather wasn't lying outright in his interviews, but merely doing what millions of Germans had done after the war—engaging in denial, playing down their role to lessen their responsibility. . . . But Pfeiffer admits that his book didn't answer a key question about his loving, kind grandparents who were pillars of his family for decades.

"Why did the humanity of my grandparents not rebel against the mass murders and why didn't my grandfather, even in his interview in 2005, concede guilt or shame or express any sympathy for the victims?"

 

Pfeiffer found that his grandparents were characterized by a "state of emotional coldness, a lack of self-criticism and absolute egotism combined with a strong deficit of moral judgment as well as the support, acceptance and justification of cruelty when the enemy was affected by it."  In spite of this, he asserts that he was not interested in passing judgment on his grandparents, but rather wanted to better understand this German generation before it disappeared altogether.

To turn one’s personal history into a contribution to a broader account of the Nazi era is an interesting and innovative approach.  And surely it could not have been accomplished if many in Germany had not already come to terms with the nation's moral responsibility for what unfolded in World War II (the article mentions that, as a nation, Germany has done so more than Austria or Japan).  Nonetheless, it also raises questions about Pfeiffer’s ability to be a sound historian.  Does knowing that one’s own family members have been in denial about participating in an immoral past enable the historian to better understand how and why people fail to come to terms with their own historical responsibility?  Or does it encourage him or her to justify, excuse, or explain away such evasion?

 

Pfeiffer said his grandparents' generation probably had no choice but to suppress their guilt in order to keep on functioning in the hard post-war years when all their energy was focused on rebuilding their livelihoods. "It was a necessary human reaction," said Pfeiffer.

 

True, professional historians should not be in the business of assigning moral responsibility.  Yet more often than not, their findings have moral repercussions in the present, to say nothing of the future.  If we truly want to ensure that World War II’s atrocities never occur again, it’s incumbent upon us to discover how and why many evaded responsibility for them after the fact.  But if such evasion leads to one’s own door, does that make one more or less capable of assessing it and seeing it for what it is?

It sounds as though Pfeiffer searched for a balance; he concluded that his grandfather was likely complicit in war crimes, yet he also sought to explain why his grandfather consciously or unconsciously overlooked this part of his past.  Between these two, however, remains a difficult question of whether personal relationships and the moral implications related to them shaded how Pfeiffer viewed those unable to face up to what they had done.

The subject matter considered by Pfeiffer and the issues raised by it are crucial because they have near-universal application.  It's easy for many in this country to identify who was right and who was wrong in World War II.  Yet when it comes to one of the most morally reprehensible deeds recently committed by the United States—the ill treatment of high-value terrorist suspects, which was “torture” by any reasonable definition of the term—I see little evidence that those responsible can admit to what they did, much less that the nation as a whole can acknowledge what these “enhanced interrogation techniques” truly were.  Evading responsibility for the past, it seems, is everybody’s problem.

 
 

9 APR

    Reflecting on Religion

What would Easter Sunday be without a whole host of opinion pieces in major newspapers about religion and its role in our society?

Let's start with Ross Douthat, the conservative voice for the Sunday Review of the New York Times. Douthat is concerned that America's religious common ground has all but disappeared. In making this claim, he contrasts the present with the not-too-distant past:

 

Here it's worth contrasting the civil rights era to our own. Precisely because America's religious center was stronger and its leading churches more influential, the preachers and ministers who led the civil rights movement were able to assemble the broadest possible religious coalition, from the ministers who marched with protesters to the Catholic bishops who desegregated parochial schools and excommunicated white supremacists. Precisely because they shared so much theological common ground with white Christians, the leaders of black churches were able to use moral and theological arguments to effectively shame many southerners into accepting desegregation.

 

Granted, I have little expertise in the civil-rights era of American history, so I'm flying by the seat of my pants here. Even so, I find three aspects of Douthat's rendering of the past questionable. First, his narrative suggests that many southern white Protestants were only ephemerally committed to segregation. If black leaders could convince them in short order of the evils of segregation, after all, such whites must have been rather open-minded and predisposed to a biblically based and color-blind notion of equality, right? Second, were black leaders really in a political, social, and moral position vis-à-vis white Protestants to "shame" the latter into desegregation? To be shamed requires that the one doing the shaming holds a position of some authority in the eyes of the shamed. And third, didn't the end of segregation have something to do with a Supreme Court case called Brown v. Board of Education, in addition to the sending of federal troops to some major public institutions to make sure that the court order was enforced? I seem to remember something about that in my high school American History textbook. The broader point, of course, is that many white southerners—including the most religiously fervent—did not accept desegregation until the national government all but compelled them to do so.

The larger problem I have with Douthat's piece, however, is his suggestion that current religious polarization in this country is preventing it from coming to a political and social consensus. From a historical perspective, I see little to suggest that differing religious belief is the primary source of our current divisiveness, or that all would be well if we all just cultivated religious moderation. I live less than 30 miles, after all, from the site where Joseph Smith was murdered in 1844. We indeed are a nation divided, but we're far from killing each other over religious belief. The more disconcerting polarization, as I see it, is that of wealth between the very few at the top and the enormous number in the middle and at the bottom.

Which brings me to another New York Times regular on Sunday, Nicholas Kristof, who also writes about religion, but more from the perspective of how believers and non-believers relate. Kristof sees encouraging signs that non-believers are beginning to recognize that religion may have—for lack of a better phrase—a redeemable side:

 

The latest wave of respectful atheist writing strikes me as a healthy step toward nuance. I've reported on some of the worst of religion—such as smug, sanctimonious indifference among Christian fundamentalists at the toll of AIDS among gay men—yet I've also been awed by nuns and priests risking their lives in war zones. And many studies have found that religious people donate more money and volunteer more time to charity than the nonreligious. Let's not answer religious fundamentalism with secular fundamentalism, religious intolerance with irreligious intolerance.

 

Kristof's point about "learning to respect religion" strikes home for me for a couple of reasons. First, my research focuses on an event—the French Revolution—in which religious intolerance (see resistance to the Civil Constitution of the Clergy) was indeed met with irreligious intolerance (see the de-Christianization campaigns of 1793-1794). Together, the two virtually blew France apart. And second, I sometimes find my own sub-discipline—French revolutionary studies—overlooking religion as a topic worthy of academic inquiry. Thus I more than welcome the trend of more scholars, regardless of their personal beliefs, recognizing the power of religion in shaping both history and current society.

Nonetheless, I see Kristof as suffering from the same kind of historical blindness that afflicts Douthat. Both writers praise religion because it putatively promotes social cohesion, a perennial concept in social science first suggested by Durkheim. But what Douthat and Kristof ignore is that just as religion brings a group together, it can just as easily drive people apart. Thus although there was a religious coalition in favor of civil rights, there also was (and arguably still is) an "ecumenical" and theological argument against them. True, religion can make a society more ethical and harmonious, but generally only when everyone is on the same religious page, which has been a relatively rare historical phenomenon. Yes, let's respect religion and recognize its pivotal role in both history and current society. But let's also be aware that divisions over religion are nothing new, and that as much as religion helps promote social cohesion, it often divides not only believers from non-believers, but also believers from each other.

 
 

8 APR

    The Need to Read

Disturbing news, but not altogether surprising:

 

. . . 19 percent of respondents aged 16 and over said that they hadn't read a single book in any format over the previous 12 months, the highest since such surveys on American reading habits began in 1978. If this figure is accurate, that means more than 50 million Americans don't read books at all.

 

The finding comes from a poll by the Pew Research Center. Most of the poll measures the degree to which people are reading in new formats, such as the various e-readers now on the market.

That fewer Americans are reading books doesn't mean, of course, that they're not reading at all. But it does indicate that the attention span required for in-depth reading, and the mental discipline needed to sustain it long enough to get through a book, are likely on the decline.

I doubt that this will come as a shocker for anyone who has recently assigned a monograph, much less a textbook, for a history class. Still, the evidence remains particularly worrying for those whose profession largely depends on a practice of sustained reading.

Such news is why, in part, I'm led to believe that as liberating and edifying as recent technologies are, there's a heavy price to pay for them as well. And I don't see how they'll do anything in the near future but make this book-reading deficit worse.

It's occurred to me more than once that by writing a blog, I'm merely enabling the problem of shrinking attention spans for reading. Guilty as charged, I'm afraid. So if you're reading this, once you're finished, please find a quiet corner and read a book.

 
 

4 APR

    Having a Hunger for History

Since the focus of this blog is how the past is being connected to the present, it’ll frequently consider one of the primary ways that this is done: formalized education.  What goes on in history classrooms, after all, is central to how we know of the past and how well that past is represented.  With this in mind, I took note of an opinion piece in last Sunday’s New York Times, in which an English professor from the University of Virginia, Mark Edmundson, tackled the recently debated issue of who should go to college.  One of his points: that the best students in college are not the most intellectually gifted, but rather are those “who come to school with the most energy to learn” or—in the other phrase he employs—those with “hungry hearts.” He goes on to argue that “there are plenty of young people out there who will end up in jobs that don’t demand college degrees: yet college is still right for them.”  Edmundson’s point about intellectually hungry students nonetheless raises an obvious question: how did these good students get that way?  Edmundson thinks that he knows the answer:

 

I sometimes think that what the truly hungry students have in common is pretty simple: their parents loved them a lot and didn’t saddle them with gross expectations, spoken or unspoken.  These students aren’t adventurous because they’re insecure and uncertain.  It’s very much the opposite.  Students willing to risk their beliefs and values in school do so because they have confident beliefs and values to risk.

 

He then concludes the piece by stating, “Hungry hearts—smart or slow, rich or poor—still deserve a place in the class.”

There is much in this op-ed with which I agree, especially Edmundson's point that the value of a college degree should not be equated with how lucrative a job one can get with it. Still, I’m left to wonder whether a student's intellectual hunger has as little to do with socio-economic background as the professor's last sentence implies. Can the desire to learn be attributed solely to parental love and to children not being saddled with unrealistic expectations?  Providing the nurturing that children need and helping them develop a sense of curiosity, it seems to me, are a lot easier when parents have the time and the energy to do so.  And thus a big part of the answer to why there aren’t more eager and ardent students may be that, given the economic demands confronting many lower- and middle-income parents, it is more difficult—though clearly not impossible—for such parents to instill the emotional security and foster the inquisitiveness that give a child the desire to learn more.  Parenting is hard: I now know this from experience.  And the multiple economic stressors that many American parents face do little to help.

I raise the matter because in my classes, much like Edmundson has seemingly observed in his own, the fundamental problem with many underachieving students is not that they’re unable to make their subjects and verbs agree.  And it’s not that they come in late for class or decide not to show up at all.  Rather, these issues are merely symptomatic of something larger: a profound lack of curiosity, an absence of intellectual imagination, a resistance to expanding one’s cultural horizons.  I don’t believe these can be overcome through teaching alone, no matter how good it is.  Thus I’m interested in finding out how a society as a whole can cultivate not just superior teaching, but also unparalleled “studenting.”  If we knew how, I think it would go a long way toward ensuring that the past and the study of it gained greater respect.

 
 
2 APR

    Whatever Happened to "the Union"?

In my first blog post (and it’s been a long time in coming), I’ve chosen to elaborate on a recent piece from Charles Pierce, the often-irreverent political blogger for Esquire.  Reflecting on the sacralization of radical individualism by those protesting the Affordable Care Act outside the Supreme Court, Pierce astutely observes what many are now conveniently forgetting about this nation’s past:

 

For all the huffing and blowing we get about rugged individualism, the American spirit and the American experiment always have had at their heart the notion that the government is all of us and that, therefore, the government may keep things in trust for all of us. That was present at the very beginning, in the Mayflower Compact, which was not a document through which individuals demanded to be free of their obligations to each other and to society; rather, it was a document through which free people bound themselves together, for their own good into a political commonwealth:

Having undertaken, for the glory of God, and advancement of the Christian faith, and honor of our King and Country, a voyage to plant the first colony in the northern parts of Virginia, do by these presents solemnly and mutually, in the presence of God, and one of another, covenant and combine our selves together into a civil body politic, for our better ordering and preservation and furtherance of the ends aforesaid; and by virtue hereof to enact, constitute, and frame such just and equal laws, ordinances, acts, constitutions and offices, from time to time, as shall be thought most meet and convenient for the general good of the Colony, unto which we promise all due submission and obedience.

This was never far from the country's basic ideals. It was the way that the Founders managed to merge 13 unruly colonies into a single nation — the Declaration of Independence is a lot of things, but a laissez-faire charter of rights isn't one of them — and the Constitution itself lays a burden of commitment on all of us to maintain those things in which we have a common interest, including the general welfare and the common defense. We, The People is more than a statement of purpose. It is an acknowledgement of an obligation to each other.

 

Pierce pinpoints the foundation of modern democracy: we, the people, are sovereign, and as such we are bound together for the common good.  This isn’t socialism.  Nor is it communism.  It has been the fundamental basis of American government since its inception, both in theory and in practice.  Individual rights are often cited as the vital underpinning of democracy, both now and when it began to emerge in the eighteenth century.  The problem with such a proposition, though, is that even the most basic of these rights were long denied to the poor, African-Americans, and women.  The concept of a singular civil body constituting sovereignty, on the other hand, remains one of the few continuous threads binding the democracy of then to that of now.

Admittedly, the notion of a political society as a unified body, to say nothing of the accompanying idea that whatever happens to one part of the body affects all of it, is far from new. These notions were prevalent in European political theory for centuries, as evidenced, for example, in the early modern practice of symbolically expelling criminals—viewed as pollution within the body politic—from society through grisly executions.  Nevertheless, they took on new meaning in the seventeenth and eighteenth centuries, first at the hands of Hobbes, then Locke, and later Rousseau.  What was fundamentally different about this “modern” civil body was that it was theoretically composed of people who freely entered into a contract, whereupon they consented to live as one.  The nature of the government formed by the “social contract” varied significantly among Hobbes, Locke, and Rousseau.  Yet all three agreed that sovereignty, as exercised by government, derived from the will of a singular body of people.

Later, when many Americans in the middle of the nineteenth century referred to their country as “the Union,” arguably they were talking about more than just a conglomeration of individual states; they were suggesting something about the nature of the nation itself. One civil war and 150 years later, at a time when the gap between rich and poor is reaching unprecedented levels, many fail to realize that “the Union” has always involved the mutual responsibilities of citizenship, some of which we have asked our government to oversee because individuals cannot fulfill them on their own. The only time Americans now extol a unity marked by shared obligation, it seems, is in the run-up to a war. Yet as the last two wars have shown, the sacrifices that actual war demands—both from those who fight and die and from those who pay for it through tax revenues—have fallen far more heavily on some than on others.

As a historian, therefore, I find it disheartening to see how many Americans overlook the historical legacy of a singular body politic and its implications for the nation’s common good, even as the current fetish for individual “freedom” flourishes.  And it serves as just one of many examples, to be cited in this blog, of the past being selectively remembered.