The American Presidents Series, founded by Arthur Schlesinger, Jr. and now continued by Sean Wilentz, offers readers short, concise biographies of each U.S. president that are accessible to a wide audience. They are wonderful introductions to the character and political outlook of past presidents, and I have a number of these biographies in my library. The latest addition to my collection is historian Jean H. Baker’s biography of James Buchanan, and I can’t recommend it enough.
I learned a lot about Buchanan in this short volume. When past historians have assessed Buchanan’s presidency and the coming of the American Civil War, they have often portrayed him as a weak, ineffective leader who did too little to stop the onslaught of southern secession following Abraham Lincoln’s election to the presidency. Kenneth Stampp’s America in 1857: A Nation on the Brink, among other studies, hews to this standard interpretation. While Baker concurs that Buchanan’s response to secession was weak, she portrays him overall as an overbearing figure whose domineering personality, unwillingness to compromise, and inability to take dissent seriously doomed his presidency from the start of his term in 1857. Despite proclaiming himself the only non-sectional candidate who would promote the interests of the entire country during the 1856 presidential election (a claim that Ulysses S. Grant took seriously when he voted in his first presidential election that year), Buchanan was in fact a pro-South sectional candidate in his own right who downplayed the extent of Northern frustration with Southern proslavery demands. I was particularly struck by this passage:
Buchanan had long since chosen sides. Both physically and politically, he had only one farsighted eye, and it looked southward. Looking to the past and heralding the Democratic party’s eternal principles against the “isms” of free-soilism and anti-slaveryism, the president-elect was blind to what was happening in the North . . . despite his experience in politics, [he] read the opposition party as ephemeral as lightning bugs in August.
In his desire to end division between North and South, the president-elect moved beyond the tradition of permissible institutionalized antagonism between political organizations. The concept of loyal opposition, inherited from Great Britain, sanctioned criticism of administrations and the presentation of alternative policies. What it did not permit was the castigation of another party as disloyal and un-American, as Buchanan held the Republicans. In his years as president, Buchanan did a great deal to popularize the view that the Republicans were a threat to the South, thereby encouraging its secession from the Union when Abraham Lincoln was elected president in 1860 [p. 72].
Perhaps there is something for us to learn in Buchanan’s failure as a president. He was arguably one of the most qualified candidates of his day, with nearly forty years of experience as a politician and diplomat before his election in 1856, but his lack of leadership, vision, communication skills, and a sense of the changing political circumstances of the 1850s doomed his tenure. As more white Northerners sought restrictions on slavery’s westward expansion into new territories, Buchanan came to view that position as dangerous and an abridgment of constitutional rights. That most Northerners had no intention of touching slavery where it existed and held strong racial prejudices against blacks made no difference to him. Buchanan couldn’t handle differing interpretations of the Constitution or dissent from his ideology; in his mind, his enemies were not fellow Americans with a difference of opinion who were still worthy of respect, but traitors whose views had to be obliterated at all costs. The president’s rhetoric damaged any prospect of future compromise over slavery, since any such agreement would be considered a threat to Southern honor.
And then the war came…
Whenever I study a particular time period in history, I find it very helpful to think about the sorts of questions people at the time would have been mulling over as they looked towards the future. It is easy to look at past events in hindsight and assume that everyone knew what would come next. Even trained historians can be guilty of minimizing the significance of a social, cultural, political, or economic change as “inevitable” when in reality it was anything but. I often wonder if assigning students papers in which they have to make a “thesis statement” is as effective as perhaps asking them to first think about one or more “guiding questions” to provide structure to their inquiry before formulating any sort of answer or argument when explaining a historical event.
In any case, the Reconstruction Era (generally defined as 1863 to 1877) presents itself as one of the most misunderstood and ignored periods in American history, and the political complexities of the era do not lend themselves to easy explanation. Even after studying the period for a number of years I still find myself sometimes struggling to explain the significance of the era to visitors and students in a cogent manner. What follows are four questions that have helped me make sense of Reconstruction’s complexities:
- How would the United States restore and maintain a stronger union in the wake of a major secession crisis and the nation’s deadliest conflict?
- How would the country’s leaders find a balance between promoting liberty and establishing order?
- What economic labor system would replace slavery in the South, and to what extent would national, state, and local governments involve themselves in economic affairs?
- What would be the future status of African American freedpeople, former Confederate secessionists, and American Indian tribes? How would the government protect and expand the rights of African Americans, encourage former Confederates to become law-abiding citizens again, AND promote peace with American Indian tribes at the same time that it promoted westward expansion?
(4a. What would be the correct size and scope of government to regulate society in a time of vast social, political, and economic changes?)
While the black freedom struggle has become a centerpiece of recent Reconstruction studies, we should always remember that for most whites in the North, the central question for them was how to restore the Union quickly and peacefully. African Americans served loyally in the Civil War and many believed they were entitled to protection, citizenship, and voting rights. Once white Northerners felt that the country had stabilized and that enough legislation had been passed to protect African Americans (most notably the 13th, 14th, and 15th Amendments), it did not take long for them to abandon Reconstruction and essentially state that blacks were on their own to face the future even though rampant racism, discrimination, and violence continued to exist.
What do you think? What essential questions do we need to consider when studying Reconstruction?
Earlier this month I was in northwest Arkansas for a conference and had an opportunity to visit a number of history museums while there. Those site visits included the Daisy Air Gun Museum, the Rogers Historical Museum, and the Walmart Museum (yes, they have one). I found each site charming and the people who work at these sites extremely friendly. Everyone made me feel welcome and was glad to have me as a visitor. On the whole I enjoyed my experiences at these places.
I am a critical viewer of museum exhibits, however, much in the same way that a musician is a critical listener of other musicians or a filmmaker critically views rival cinema. My training in museum and historical methods ensures that I can never go back to looking at museums and public history sites as objective storehouses of artifacts and disinterested facts. I examine every aspect, from aesthetics to text markers to guided tours, in an effort to see what larger interpretive messages these places hope to convey to their viewers. Although each site covers a wide time period that in some cases goes back to the late nineteenth century, they all had a similar interpretive centerpiece at the heart of their experience: nostalgia for the 1950s.
Nostalgia is an inherently conservative emotion in my view. It smooths over the rough edges of history’s complexities and often focuses inward on our idealized personal memories of life experiences. Nobody looks back at a bad life memory in a nostalgic way. Nostalgia doesn’t convey how things were but how we wish they were and how we wish them to be. It tries to recreate an image of a past world that can never be recreated in the present, and the inability to bring this past world alive in the present intensifies our desire to bring it back against all odds. And above all else, we use nostalgia to reclaim our innocence – to return to a time when fear and insecurity didn’t exist and when things were simpler (at least in our minds). As Alan Jay Levinovitz argues in Aeon, “it is crucial to distinguish between wistful memories of grandma’s kitchen and belief in a prior state of cultural perfection.” Nostalgia is wistful thinking about a state of perfection that never existed. And it often sells within the context of museums.
The 1950s are shrouded in more nostalgia than any other era in recent history. Each museum I visited covered different aspects of this nostalgia. Men worked hard and had jobs to support the family; women stayed home and tended to the domestic sphere; children went to school and behaved like good little boys and girls; local law enforcement always had residents’ best interests at heart; everyone went to church and prayed to the same Christian God; racial, labor, or any other form of social strife was non-existent; everyone knew their place in society and happily accepted that place without reservation. We might call this interpretive phenomenon “Andy Griffith History.”
At one of the aforementioned sites I overheard a woman ask a museum employee why there were no exhibits on the contributions of African Americans or any other minority group to the life of the people in northwest Arkansas. The employee said, “Well, we don’t have any exhibits on that topic unfortunately,” even though the town of Rogers was a sundown town in the 1950s. A person visiting these sites without any background in the history of the Civil Rights Movement would not realize that Walmart’s growth as a company occurred while Arkansas Governor Orval Faubus supported racial segregation of public schools and the Little Rock Nine crisis unfolded. Nor would many people without prior knowledge look at the Walmart Museum and learn that labor conflicts have occurred frequently throughout the company’s history. The pull of nostalgia only allows for an innocent view of the period devoid of any social conflict.
I suspect that 1950s nostalgia draws people to these places because the period has been so mythologized in popular culture and many (white) people alive today remember the era in fond terms. I do wonder, however, if this approach will continue to work over the next twenty or thirty years and if places that rely on nostalgia this way will have staying power in the long run. Again, I found a certain charm in these museums, and there were certainly good aspects of the 1950s that we should remember and celebrate. We should always heed Levinovitz’s advice, however, and avoid believing that any past era was perfect. That sort of thinking is bad for history and probably bad for determining contemporary policy too.
A few months ago a friend of mine gave me a copy of Suhi Choi’s recent book about the Korean War and how it has been remembered in both the United States and (South) Korea. Choi, a communications professor at the University of Utah, employs public history techniques throughout the book to analyze oral histories she conducted with victims of the No Gun Ri massacre, media accounts of the massacre, and various monuments that have been erected in both countries to commemorate the war as a whole. I enjoyed reading the book for its content and arguments, but what I enjoyed the most was its brevity. Clocking in at 115 pages of main text and five chapters, the book was a quick read (with the exception of some jargon-y passages throughout) yet thoroughly researched and intellectually stimulating. The book’s shortness reminded me of the Southern Illinois University Press “Concise Lincoln Library” series that has published numerous short studies on various aspects of Abraham Lincoln’s life that are typically between 100 and 150 pages long.
While I acknowledge that different historical topics require studies of varying length and depth (I currently have one book on my nightstand that is more than 800 pages long), I find myself increasingly supportive of the idea that academic histories, generally speaking, should be shorter and more concise than they typically are. I am no expert on publishing books with an academic press, but I’ve been told by those who’ve been through the process that presses normally don’t accept anything less than 75,000 words, or roughly 250 to 300 pages. That makes sense because most PhD dissertations end up being about that length, but I think there should be some sort of system in place to encourage and publish more scholarship that would be more appropriately covered in a study of 100 to 150 pages.
As a scholar who regularly reads books from academic publishers, I crave the analysis, interpretation, and detailed research that such books offer their readers. As a reader, however, I am more likely to go back to a short book and read it again in the future, whereas with a longer book I feel less inclined to read it in full or return to it a second time. It’s important for me to read as many print books as possible to get a more comprehensive understanding of historical topics that fascinate me, but the presence of thoughtful online essays and history blogs has changed how I read and reduced the amount of time I dedicate to reading full-length print books. I admit that nowadays page length plays an extremely important role in determining what I read next. A 150-page book is more often compelling to me than a 500-page one.
John C. Calhoun has become the latest casualty in an ongoing conversation about America’s commemorative landscape and who, exactly, is deserving of continued commemoration and a place of honor within that landscape. After the formation of a committee and much debate (notably without the voice of historian David Blight), Yale University has decided to remove Calhoun’s name from a residential college that was established in 1931. Journalist Geraldo Rivera, decrying “political correctness,” announced that he left a position at the university and, unsurprisingly, numerous thinkpieces have emerged making arguments for or against the change. Rivera is free to do what he pleases, although I find this episode a strange cause over which to give up a job. I also respect Yale’s position even though I can see compelling points on both sides of the argument.
One of the more thought-provoking essays I’ve come across since the announcement is Roger Kimball’s op-ed in the Wall Street Journal. Kimball claims that the process by which Yale decided to remove Calhoun’s name was inconsistent and that, echoing Rivera, the change was flawed because a “politically correct circus” of academic groupthink dominated the process. He rightly points out that Yale’s history as an educational institution is loaded with notable alumni and professors with controversial backgrounds, including Elihu Yale himself. In making this argument, however, Kimball downplays Calhoun’s historical legacy and never makes a compelling argument as to why Yale should have kept his name associated with the residential college. Equally important, he doesn’t make an effort to examine Yale’s reasoning for naming the college after Calhoun in the first place.
There are two major problems with Kimball’s thinking, in my view. The first is the way he characterizes Calhoun. Kimball acknowledges that he was a slaveholder and brilliant politician who argued that slavery was a “positive good.” But Calhoun wasn’t just a racist slaveholder; he was a political and intellectual leader of American proslavery thought whose words influenced a generation of proslavery thinkers.
Thomas Jefferson was a slaveholder who maintained an uneasy relationship with the institution. He called slavery a “moral depravity” and contrary to the laws of human nature. John Calhoun told slaveholders not to feel ashamed any longer; slavery was a law of nature, and servitude to white enslavers was the correct station in life for black people. Proslavery religious leaders used Calhoun’s logic to argue that enslavement allowed African souls to be Christianized. Scientific thinkers like Samuel Cartwright said African Americans were biologically inferior and went so far as to invent a new disease, “Drapetomania,” to explain why slaves tried to run away from their enslavers. Proslavery political leaders from the days of the early republic through the 1820 Missouri Compromise were willing to exclude slavery from some new U.S. territories in the interest of sectional harmony and free state/slave state political balance. But those proslavery leaders were replaced by new leaders in the 1840s and 1850s who said that compromise on the slavery question was dishonor and that all new territories should be opened to slavery. John Calhoun, in his racism but also his intellectual brilliance that was in part cultivated by his Yale education, played an integral role in fostering these developments, which in turn led to the eventual outbreak of the American Civil War, the deadliest war in this country’s history.
These truths are too much for Kimball, however. He states: “You might, like me, think that Calhoun was wrong about [slavery]. But if you are [Yale President] Peter Salovey, you have to disparage Calhoun as a ‘white supremacist’ whose legacy—‘racism and bigotry,’ according to a university statement—was fundamentally ‘at odds’ with the noble aspirations of Yale University . . . Who among whites at the time was not [a white supremacist]? Take your time.”
Wendell Phillips; the Grimke sisters; the Tappan brothers; Theodore Weld; John Brown; Thomas Wentworth Higginson; Gerrit Smith…
Is Calhoun somehow not a racist or white supremacist? Did he not believe that blacks as a race were inferior and that the white race should be able to control the black race through whatever legal means it saw fit? What does it say about Kimball that he becomes infuriated with the words “white supremacist” to describe Calhoun?
The second problem with Kimball’s argument is his whataboutism. What about slave trader Elihu Yale or other slaveholders associated with Yale like Timothy Dwight, Benjamin Silliman, and Jonathan Edwards? Shouldn’t they also have their names removed, he asks? Certainly Yale was a more “objectionable” person than Calhoun, right? This line of thinking is a crucial element of Kimball’s argument because it intends to discredit Yale’s process for removing Calhoun’s name and ultimately paint the university’s administration as playing politics with the issue. This is a fair critique, but only to a point. Isn’t it a bit subjective and unproductive to debate whether Yale or Calhoun was “more objectionable” when both said and did despicable things? Aren’t we deflecting from the real conversation–whether or not John Calhoun as an individual, regardless of anyone else’s legacy, is deserving of a place of honor at Yale–by arguing that other people were bad too and that all white people were racist at the time?
Buildings all over the world are named after historical figures whose names were placed there because powerful cultural elites believed that person represented values that were important to contemporary society and were therefore deserving of honor and recognition. Some of these names will remain in their location forever. Some of these names change over time because new people make history and earn a spot within the commemorative landscape while older names are forgotten. And sometimes those names change because contemporary values–which are always a factor in selecting who gets to be a part of a commemorative landscape–change.
It is more than fair to ask whether the process of removing Calhoun’s name was legitimate, but that is a separate question from whether Calhoun deserves his place of honor. If we wish to have a productive conversation about John C. Calhoun’s historical legacy, we must be willing to take an honest look at his life, his deeds, and his time. We must be willing to acknowledge that he was a white supremacist and a controversial figure in his own day. And we must consider why Yale leaders felt the need to honor Calhoun with a college in his name in 1931 and why it was considered acceptable at that time to do so. From there we can begin to debate Calhoun’s individual legacy without resorting to tired “political correctness” arguments or childishly saying that other people were bad too. If Calhoun deserves a college in his name, make a compelling case to justify it based on his merits as a historical figure.
Over the past few days a good number of historians have been sharing an article from the Washington Post that ostensibly confirms what many of us in the field already know: history is relevant, important, and worth studying. The article, “In Divided America, History is Weaponized to Praise or Condemn Trump,” points out that thousands upon thousands of Americans on social media are using history–or, more appropriately, their understanding of history–to make arguments to “support or oppose” the current administration’s actions. Moreover, the article provocatively claims that the President’s election has “certainly revived interest in U.S. history.” Many historians on social media are applauding these developments.
I don’t buy it.
While I agree that in our current moment we are seeing more online conversations that invoke historical figures and events, it’s worth asking a number of questions about this development. History is a tool that can be used to better understand where we came from and how we got to where we are now. Are we really engaging in conversations that actually strive to utilize historical thinking to understand what happened in the past, or have we simply turned basic historical facts into superficial rhetorical weapons to make political arguments about today? How productive is it to use history to debate government policy or predict how current policy will work in the long run? How useful is it to cite historical examples when the record is so vast as to justify any sort of political ideology or belief?
If there’s so much interest in history, why is the National Endowment for the Humanities facing the possibility of being cut completely from the federal budget? Why do colleges and universities continually trim down the budgets and staffing of history departments? Why is there a decline in students majoring in history? Why do high schools so frequently hire history teachers based on a candidate’s ability to coach a sports team and not because of their ability to educate students about the discipline? Why is visitor attendance to historic sites in a state of decline? Why do I have friends on Facebook who will simultaneously tell me that they enjoy reading history but that pursuing a liberal arts degree is “stupid” because such degrees are “fake” and “useless” on the job market?
Senator Ted Cruz recently argued that “The Democrats are the party of the Ku Klux Klan . . . The Klan was founded by a great many Democrats.” While it’s factually true that the KKK was founded by Southern Democrats after the Civil War, anyone with even a cursory understanding of U.S. history knows that the Republican and Democratic party platforms have changed, evolved, and in some cases flipped from what they were 140 years ago. But then again, Senator Cruz isn’t making this statement in the interest of understanding the context and complexity of history, in this case the Reconstruction era. He doesn’t care that the second wave of the KKK, which emerged following the theatrical release of The Birth of a Nation in 1915, recruited many of its members from the Republican party, so much so that in Indiana the KKK essentially took over the state Republican party and the State House in the 1924 state election. He doesn’t care that in 1890, amid a growing wave of black disenfranchisement initiatives throughout the South, the Republican party sold out its black constituents by giving up on the Lodge Bill, which would have provided federal oversight of federal elections and given circuit courts the power to investigate voter fraud and disenfranchisement and to ensure fair elections. The Republican party abandoned the bill so that it could win Southern support for a different bill that would raise tariff rates, the party’s primary concern at the time. He doesn’t care that racism has been a staple of U.S. history, widely supported by Americans of all political persuasions.
Senator Cruz doesn’t care about any of this because he is only concerned about using history as a weapon to praise his buddies and condemn his enemies. He wants to portray contemporary Democrats as bigots, racists, and ideological descendants of the KKK Democrats of the 1870s. He doesn’t care about the history.
It’s a shame that so many politicians across the political spectrum so often resort to weaponizing history.
A few days before the Washington Post article was published, Northwestern University history professor Cameron Blevins wrote what is in my mind the best essay of 2017 so far. He warns of the dangers of using history to predict the future and calls upon historians to consider the ways history might be counter-productive to understanding the complexities of today’s politics. You must read this essay – it is fantastic.
In sum, I think we historians still have a long way to go before we can declare victory in our effort to expose our students and the public more broadly to the joys and benefits of studying history. And I would argue that the value of studying history is not that it provides “answers” to contemporary problems or a solid blueprint for effective government policy in the future, but that it trains us how to interpret source material, appreciate change over time, and ask better questions about our world, both then and now.
The Westmoreland County Historical Society responded to my email about their mock hanging reenactment yesterday. There is good news, on the one hand: the organization has decided not to engage in this particular reenactment in the future. On the other hand, the email was a pre-written pseudo-apology, and it’s evident that my message (and probably anyone else’s) was not read by any staff members. This is particularly disingenuous given that the organization’s previous Facebook statement encouraged discussion about “this sensitive aspect of American history in a constructive way.” Why encourage constructive feedback but then ignore that feedback and issue a second pre-written statement?
Here is the email response in full:
I don’t want to belabor my complaints here, but really? “We deeply regret that people were offended” instead of simply apologizing and/or acknowledging that engaging in a public hanging reenactment might be problematic. And apparently the person who posted the video to YouTube is truly at fault because the video took things out of context. Everything would make sense if it weren’t for this video. Really?
Once again, the historical society gets it wrong by defending their program through harping on their obligation to discuss “sensitive” aspects of history, “even those that are unacceptable to our modern sensibilities.” No one is questioning that obligation. Most visitors can handle programs about sensitive topics and public historians in the field applaud that approach, as we have an obligation to discuss difficult topics in human history. The problem that the critics had was with how the program was organized, the medium by which it was conducted, and the lack of an explanation about the educational purpose a mock hanging serves towards understanding this particular event in American history.
As I mentioned in my email, there are many different ways public history institutions can discuss difficult topics like slavery, genocide, the Holocaust, or a public hanging without having to literally reenact the particular event. I can visit Manzanar National Historic Site and understand the significance of the site without having to watch a reenactment of a Japanese American family being thrown into an internment camp. I can read a historic marker commemorating the 1866 Memphis Massacre and understand the significance of the event without living history performers reenacting a scene of angry whites torching the homes of black neighbors and then firing gunshots into those homes when their inhabitants tried to get out. I can visit a place like the Sachsenhausen Concentration Camp, as I did in 2015, and engage in thoughtful discussions with fellow visitors and staff about sensitive aspects of history without watching performers reenacting SS guards torturing the camp’s inmates.
In sum, a living history reenactment of someone’s death is a tasteless, wholly unnecessary exercise that does little to enhance understanding or empathy of a given historical topic.
The online publication Indian Country Media Network recently reported on a public history site that engaged in a public hanging reenactment of a Native American man this past summer. The article garnered some attention among public historians on social media, many of whom expressed serious concerns about the appropriateness of this program. The Westmoreland County Historical Society attempted to defend their program with the following statement on Facebook:
I decided to respond to this statement. Here’s my email to the organization:
To whom it may concern at the Westmoreland Historical Society,
This message is in response to your statement on Facebook about a recent program your institution hosted in which living history performers reenacted an eighteenth-century court case, including the gruesome hanging of the Native American Mamachtaga. I am a public historian who occasionally participates in living history programs, and I heard about this particular program through social media. While I respect your dedication to educating visitors about eighteenth-century American history—particularly complex legal cases that involve thorny issues of race, gender, and indigenous rights—I have serious concerns about your institution continuing to engage in this mock hanging and similar reenactments in the future.
I found your Facebook statement in defense of your institution’s program to be inadequate. The statement appears to fundamentally misunderstand what many critics of this program are saying. Several Facebook users who commented on your statement also missed the point. The thrust of your statement is that your institution worked very hard to present a historically accurate program; your team engaged in primary source research and worked to provide context for the “historic political climate and social attitudes as well.” That’s good, but it’s not enough.
By focusing your statement on the historical accuracy of the program, you seem to suggest that your critics must either have a problem with the idea of doing any program on the Mamachtaga case, or that they can’t handle the idea of a historic education program that focuses on the “bad” parts of history. In any case, the program was historically accurate, so what’s the problem? That is a mistaken argument. On the contrary, few professionals in the public history field would have any problem with doing a program about Mamachtaga or similar cases like his. The problem that many of us have with your program is the lack of consideration about how the story is being told and the interpretive medium in which it is being told.
Interpretive programs take many shapes and forms, including tours led by trained guides, public lectures, video presentations, historical markers, digital presentations, living history programs, and other mediums. As educators, we want these programs to foster understanding and appreciation for history and the role it plays in our daily lives. But what educational value does a hanging reenactment offer for the visitors who come to your site? What is it that you want your visitors to take away from this program?
There have been numerous controversial living history programs about slavery in recent years that you may have heard about. In 2011, the Jefferson National Expansion Memorial in St. Louis, Missouri, hosted a “slave auction” reenactment on the steps of the Old Courthouse, and Conner Prairie Interactive History Park in Fishers, Indiana, hosts an award-winning living history program about the Underground Railroad entitled “Follow the North Star.” These programs have received both praise and criticism, which I think is fair. The problem with any program that attempts to literally “reenact” a historical experience like slavery, genocide, the Holocaust, or in this case a hanging, is that such an experience cannot be accurately recreated for a modern audience, which in turn trivializes it. Furthermore, such a program runs the risk of dehumanizing the historical figures the performers are attempting to portray, and it can be emotionally hurtful to viewers regardless of their particular backgrounds.
What makes the Old Courthouse slave auction and “Follow the North Star” largely successful, however, is that they incorporate pre- and post-event activities to allow people a chance to become emotionally prepared for what they are about to see and then share their feelings with a trained professional in a facilitated dialogue setting afterwards. Rather than coming to the site unprepared and leaving with a bunch of bottled-up emotions, these dialogue activities allow people to unwind and feel welcome in an environment that promotes learning and inclusiveness. Attendance at these activities is mandatory for visitors who want to participate in the main event. Your statement does not indicate whether or not such activities were incorporated into your program.
I have no doubt that the Westmoreland Historical Society is dedicated to conveying accurate history to its audiences. But at the end of the day, making sure that living history programs are historically accurate is only half the challenge of creating a successful program. Audience, setting, and interpretive medium must also be considered, and I believe the Mamachtaga program failed to account for them. A living history program focused on the hanging of a Native American or anyone else distracts from the history you want to impart to your audiences and is ultimately a program in poor taste. American history is drenched in the blood of victims of state violence, whether that be the largest mass hanging in U.S. history after the Dakota Uprising in 1862, the Memphis Massacre of 1866, the East St. Louis riots of 1917, or countless other instances of bloodshed. Must we reenact these stories in a literal fashion in order to attract visitors and dollars? In the future, I hope you reconsider the merits of doing a mock hanging and consider other ways of bringing American history to life.
One of the last things I did in 2016 involved taking a short trip to New Orleans, Louisiana, to visit a good friend of mine and explore some of the historical sites in the area. The trip was wonderful and I also enjoyed the eighty-degree temperature outside, a nice contrast to winter in the Midwest.
About three years ago I had the opportunity to visit the National World War I museum in Kansas City. The National World War II museum just so happens to be located in New Orleans, and we made a point of spending nearly an entire day visiting the site. I came away from the World War II museum impressed with some aspects and less impressed with others. I’ve been thinking about the similarities and differences of my experiences at both museums since the trip, and what follows are some rough thoughts on those experiences.
One of the major aspects of the World War II museum is its use of technology throughout the exhibits. Upon arriving at the museum, visitors have the option of obtaining a “Dog Tag” card that looks like a credit card. Computer stations throughout the museum have a spot where you can place your card on a scanner, upon which the computer shows a short video about a World War II soldier who is assigned to your card. Five stations throughout the museum each tell a different aspect of your soldier’s experiences before, during, and after the war (if they survived it). Notwithstanding the difficulty of finding some of the computer stations (I missed two of them) and the lack of available computers (I don’t think I’ve been to a museum that was so busy; people were almost always crowded around the computers), I thought the activity was thoughtful and educational. My “Dog Tag” told the story of four-star General Benjamin O. Davis, an extremely important and heroic figure in U.S. military history. Elsewhere there was an immersive and interesting interactive activity about the USS Tang, a submarine that sank thirty-three enemy ships during the war. Visitors were assigned to a station within a recreated model of the Tang and given a specific duty on the ship to complete during a mission.
Other uses of technology in the World War II museum were not as successful, in my opinion. The museum was full of videos throughout the exhibits, all of which had sound. The sounds from each of the videos often bled into each other, creating a wall of cacophonous sound that distracted from the exhibit text and artifacts in a given area. Equally frustrating was how the walkways throughout the exhibits were not large enough to isolate video-watchers from the rest of the crowd. People would stop to watch the videos and block the walkways for other museum-goers, creating cramped hallways and little breathing room to maneuver through the museum. The World War I museum, by contrast, doesn’t utilize as much digital technology in its exhibits but uses its resources in ways that are more user-friendly. Videos about the political situation in the 1910s, the coming of World War I, and the United States’ decision to enter the war are isolated from the rest of the museum exhibits, allowing visitors who want to see the videos the freedom to do so while not distracting from others who want to visit the museum’s other exhibits. While the World War I museum doesn’t offer a “Dog Tag”-type activity for visitors, it did offer one interactive activity in which visitors created their own propaganda posters using graphics and artwork from posters used in various countries at the time.
The other noticeable aspect of both museums is the role of politics in their interpretive exhibits. The World War I museum does a masterful job in both its exhibits and videos of analyzing the political conditions that existed in Europe before the coming of the Great War. Topics such as nationalism, colonialism, imperialism, racism, and entangling alliances are explained with clarity and precision without sacrificing complexity. Equally important, the World War I museum places particular interpretive emphasis on conditions in Europe, not the United States. I believe this distinction is really important. While the museum is tasked with educating visitors about the American role in the war, the staff astutely understand that this role must fit within a larger story that spends several years in Germany, England, France, Austria-Hungary, Serbia, and the rest of Europe before the U.S. became a leading actor. The museum’s division into two sections covering 1914-1916 and 1917-1918 (when the U.S. entered the war) reinforces the importance of not viewing the war’s history through too strong an American lens.
The World War II museum, however, struggles to address the equally messy politics of that conflict. The exhibits throughout the museum barely touch on political matters beyond the interactions between politicians and generals with regard to military strategy and tactics. A film narrated by the actor Tom Hanks does acknowledge that the U.S. faced two growing enemies in fascist Europe and imperial Japan, but it doesn’t explain how these two forces came to be. Visitors are told, for example, that the Nazi party ruled Germany through the ideological lens of hatred and Aryan racial purity, but the museum doesn’t explain how the Nazi party appealed to a wide swath of German voters or how Hitler rose to power through Germany’s democratic institutions. Likewise, Japan is portrayed as a militaristic, land- and resource-hungry empire bent on conquering all of Asia, but why Japan held these ambitions and how it gained such power in the first place is left unexplained.
Another contrast of equal interest is the use of patriotic themes throughout these museums. The World War I museum takes a somber, reflective tone throughout its exhibits. The most notable example is the Paul Sunderland Glass Bridge. Underneath the bridge lie 9,000 poppy flowers in a field. Each flower represents 1,000 deaths during World War I, symbolizing the nine million people worldwide (not just Americans) who died in that conflict. No such display exists in the World War II museum, and while the Tom Hanks film points out that 65 million people worldwide died in World War II, it becomes evident in the film and surrounding exhibits that the 400,000-plus Americans who died during the war get particular attention in the interpretive programs. Nothing demonstrated this more than a musical program in one of the World War II museum’s buildings. Three women in 1940s-style dresses–one red, one white, and one blue–sang patriotic songs for roughly thirty minutes, including the songs of each military branch and Lee Greenwood’s “God Bless the USA.” During Greenwood’s song the women pulled out a U.S. flag, which in turn led to raucous applause among the museum’s visitors. This exercise isn’t necessarily wrong or out of place at a museum of American history, but I can’t help but feel that such a display would seem unusual at the World War I museum. Likewise, similar exercises would feel awkward in a German museum like the Jewish Museum, Berlin, or the German Historical Museum, both of which I visited in 2015, where such open displays of patriotism and nationalism are fraught with their own difficulties and historical baggage. The musical program reinforces the history of World War II as a “Good War” in American memory, as historian John Bodnar explains in his 2010 book on the topic. U.S. involvement in World War II was good, of course, but the story is more complicated than singing a Lee Greenwood song.
In sum, the interpretive focus of the World War I museum is a warning about the dangers of excessive nationalist sentiment and an elegiac meditation on the destructiveness of war, particularly one in which modern technology further amplified the killing. Conversely, the interpretive focus of the World War II museum is openly nationalist, Ameri-centric, and a borderline glorification of war. The World War I museum explains the causes of the war, its effect on world affairs, and the consequences of an inadequate peace treaty that helped foster another tragic world war just a few decades later. The World War II museum mentions the causes of the war only in passing through a video. While it does highlight the bloodshed of the war, particularly the blood shed by American soldiers, it struggles to tie the conflict to broader global affairs and chooses to stop at the war itself. The messy politics of the Cold War are put to the side in favor of a simple narrative of American progress and freedom.
I enjoyed both museums and believe that everyone would benefit from visiting both, but I believe that the World War I museum is superior in its interpretive programming and educational themes. It remains one of the best museums I have ever visited.
It is common in history, government, and political science classes to stress that the United States has a republican form of government, not a democracy. Article Four, Section Four, Clause One of the U.S. Constitution stipulates that “the United States shall guarantee to every State in this Union a Republican Form of Government . . .” James Madison, one of the Constitution’s leading architects and an author of the pro-Constitution Federalist Papers, expressed fears in Federalist No. 10 that “factions” (what many folks might call “identities” today) would look inward towards the interests of their group at the expense of the common good, which in turn would lead them to vote in ways that were harmful to the public interest. A republic, according to Madison, would provide a check against the excesses of direct democracy and factional politics. Elsewhere it’s been argued that in a republic, a written constitution establishes a list of inalienable rights that cannot be taken away by the government, whereas in a democracy no such protections are offered to the populace and everything is based on the will of the majority.
Case closed, right? Maybe not.
Here is what the late political scientist Robert Dahl wrote in How Democratic is the American Constitution? about the contradictory and confused views of Madison towards republics and democracies (pages 159-162). It’s good food for thought and worth considering if the distinction between the two terms is as stark as many of us often assume it is at first blush.
The view that the Framers intended to create a republic, not a democracy, probably had its origins in Federalist no. 10. Although there as elsewhere [Madison] also used the expression “popular government” as a kind of generic term, he distinguished further between “a pure democracy, by which I mean a society consisting of a small number of persons, who assemble and administer the government in person,” and a “republic, by which I mean a government in which the scheme of representation takes place . . . The two great points of difference between a democracy and a republic are: first, the delegation of the government, in the latter, to a small number of citizens elected by the rest; secondly, the greater number of citizens, and greater sphere of the country, over which the latter may be extended.”
Here Madison was making the common distinction that political scientists and others would later draw between “direct democracy” and “representative democracy.” For it was as evident to the Framers as it is to us that given the size of the nation composed of the thirteen existing states, with more to come, “the people” could not possibly assemble directly to enact laws, as they did at the time in New England town meetings and had done two millennia earlier in Greece, where the term “democracy” was invented. It was perfectly obvious to the Framers, then, that in such a large country, a republican government would have to be a representative government, where national laws would be enacted by a representative legislative body consisting of members chosen directly or indirectly by the people.
Madison was probably also influenced by a long tradition of “republicanism” that in both theory and practice leaned somewhat more towards aristocracy, limited suffrage, concern for property rights, and fear of the populace than toward a broadly based popular government more dependent on “the will of the people.”
It is also true, however, that during the eighteenth century the terms “democracy” and “republic” were used rather interchangeably in both common and philosophical usage. Madison, in fact, was well aware of the difficulty of defining “republic.” In Federalist No. 39, he posed the question “What, then, are the distinctive characters [sic] of the republican form?” “Were an answer to this question to be sought . . . in the application of the term by political writers, to the constitutions of different states, no satisfactory one could be found. Holland, in which no particle of the supreme authority is derived from the people, has passed almost universally under the denomination of a republic. The same title has been bestowed on Venice, where absolute power over the great body of the people is exercised, in the most absolute manner, by a small body of hereditary nobles.”
In view of this ambiguity, Madison proposed that “we may define a republic to be . . . a government which derives all its power directly or indirectly from the great body of the people, and is administered by persons holding their offices during pleasure, or for a limited period, or during good behavior.” By defining a republic as a government which derives all its powers “directly or indirectly from the great body of the people,” Madison now seems to be contradicting the distinction he had drawn earlier in Federalist No. 10. We might read his struggle with definitions as a further illustration of the prevailing confusion over the two terms.
If further evidence were needed of the ambiguity of terminology, we could turn to a highly influential writer whose work was well known to Madison and many of his contemporaries. In The Spirit of the Laws (1748) Montesquieu had distinguished three kinds of government: republican, monarchic, and despotic. Republican governments were of two kinds: “When, in a republic, the people as a body have the sovereign power, it is a Democracy. When the sovereign power is in the hands of a part of the people, it is called an Aristocracy.” But Montesquieu also insisted that “It is in the nature of a Republic that it has only a small territory: without that it could scarcely exist.”
Although the Framers differed among themselves as to how democratic they wanted their republic to be, for obvious reasons they were of one mind about the need for a representative government. But as events soon showed, they could not fully determine just how democratic that representative government would become–under the leadership of, among others, James Madison.