Tearing Down the Barriers Between “Experts” and “Buffs” in the Historical Enterprise

Over the past few days I have been going back and forth with a commenter on a recent post I wrote about mediocre, good, and great biographies of Ulysses S. Grant. One of the issues raised in the conversation was my citing of a book written by a professional lawyer rather than an academically trained historian with a PhD. Without having read the book in question, the commenter wondered aloud whether the author’s choice to publish with a non-academic press reflected a desire to “bypass the normal refereeing process at a scholarly press” and, in a defense of scholarly publishing, warned that not all history writers are in a position to make sound judgments about the past. The commenter also equated the history profession with the medical profession: you wouldn’t trust someone not trained in medical practices to examine you for a disease, so why would you trust a non-historian with interpreting the past?

I believe these comments are unfair to the author in question, but looking at the bigger picture this conversation also reflects an unfortunate and all too common desire to create false barriers between “experts” and “buffs” within the historical enterprise. Few would disagree that training in historical thinking and interpreting primary/secondary source documents is very important to good historical scholarship, but the question of whether someone needs to hold a history PhD to be considered a competent historian is very much debatable.

My argument is simple: Some people focus on the players; I focus on the game. Some people focus on credentials; I focus on arguments.

I am far less concerned about a person’s academic background than I am with the substance of their arguments. I am far less concerned with what a person does for a living than what scholars in any particular field have to say about how that person’s work shapes their field. Take Gordon Rhea as an example. The fact that he holds a law degree from Stanford (and no history PhD) and has worked as a trial lawyer for 35 years means far less to me than the fact that his scholarship on the Overland Campaign of 1864 is highly respected by both Civil War military historians and general readers.

This is not to say that everyone’s opinion is equally valid when interpreting history. The point is that the historical enterprise should strive to cast a wide scholarly net that allows people from many different types of backgrounds to contribute their voice to the conversations we have about the past. Setting the bar for good historical scholarship to only include history PhDs who work in academic institutions impoverishes our field and shuts out many people who care about history but may not have pursued an advanced degree for any number of reasons, not least the fact that it’s damn expensive and time-consuming to get a PhD.

Equating the history profession’s standards with the medical profession’s is also an apples-to-oranges comparison. It might be better to compare the history profession to the music profession. There are musicians with PhDs in music, others with more limited training through K-12 schooling and private lessons, and still others with no formal training whatsoever. Chances are that when you first discovered your favorite artist, you didn’t go online to check that person’s formal training before deciding whether their artistry was valid. The musician’s credentials matter far less than the fact that their music makes you feel good. Different types of music have different goals and different standards of training: you don’t need a PhD to play punk rock, but you might need one to teach classical music in a college setting.

Obviously the end goals of historical scholarship don’t necessarily compare to those of music, but the point stands that history is something that exists far beyond the walls of academia. Different works of historical scholarship–whether they’re written in a book or designed for a public history setting–call for different sets of training and expertise. Not every person who engages in these scholarly endeavors comes with a history PhD in their academic background, and that’s okay with me. Hit me with your best argument and I promise to look at it with an open mind.



Avoiding Buzzwords and Jargon Phrases in Writing

“Buzzword Bingo” Photo Credit: http://mavenagency.com/blog/2011/03/the-sting-of-buzzwords/

Clearly defined terms and active language are fundamental to good writing. If readers don’t understand the vocabulary you employ in your narrative, the potential for frustration and misunderstanding on their part rises exponentially. The point is obvious, but surprisingly hard to put into practice (much academic writing proves as much). We converse with our friends and loved ones on a day-to-day basis assuming they will understand our vocabularies the same way we do, and it’s easy to assume when writing that our reading audiences will readily understand our arguments and “be on our level,” so to speak.

As a historian I must always be cognizant of terms and phrases that could potentially distort my arguments: what does it mean for a person, place, or thing to be either “modern” or “traditional”? What is “culture”? What is “identity”? What does it mean to “learn”? Do my readers understand these terms the way I do? Likewise, I must strive for a clear, active tone that makes the subjects of my sentences perform the action. As this guide from the University of North Carolina suggests, an active sentence asks “why did the chicken cross the road?”, whereas a passive sentence asks, “why was the road crossed by the chicken?” I am as guilty as anyone of using imprecise terms and passive language in my writing, and I constantly strive to do better with each blog post, essay, and article I write.

Buzzwords and passive, jargon-laden phrases should be avoided in writing. All writers, regardless of topic, rely on words and phrases with specific meanings to convey their ideas, but the meanings of many words change over time. “Liberal” and “conservative” political philosophies, for example, represented a set of ideas in 1775 that meant something different in 1850, 1900, and 1990. Anyone writing on these eras must use precise definitions of “liberal” and “conservative” to clarify their arguments. I believe buzzwords and jargon phrases can do much to distort good writing. A word becomes a buzzword when its use grows so ubiquitous and wide-ranging that it loses any clear meaning. A phrase becomes jargon when its use is restricted to a small, exclusive group of people while confusing readers on the outside. Both are bad!

Below you will find a list of ten buzzwords and jargon phrases that I avoid in my own writing, although I’ve been guilty of using some of these terms in the past without fully thinking about their meaning.

1. General Public/Average Person: Whenever I hear the term “general public” I envision unthinking humans whose brains are empty vessels waiting to be filled by all-knowing scholars and expert practitioners. What is “general” about this public? What is “average,” and who is an “average person”? How does our definition of “average” highlight our own biases and prejudices? In the quest to write for a general public or an average person, who might be left out of the conversation? Wouldn’t it be better to write for a “non-academic” audience or simply “the public”? Countless writing guides suggest that writers “simplify” or “dumb down” their writing for the general public/average person, but I think it’s far better to write clearly out of respect for the intelligence of your readers, who–regardless of education background–don’t need to be inundated with needlessly convoluted language.

2. “The Ways in Which”: 99.9% of the time this jargon phrase is completely unnecessary and easily replaced with either “how” or “the ways.”

“Harry Smith’s study of the Civil War era examines the ways in which Civil War veterans fought for generous government pension benefits in the 1880s.”
“Harry Smith’s study of the Civil War era examines how Civil War veterans fought for generous government pension benefits in the 1880s.”

3. Foment: When you foment, you “instigate or stir up (an undesirable or violent sentiment or course of action).” This word could be an appropriate verb for describing many historical actions, but for some reason I have only seen it within the context of slave rebellions. Denmark Vesey and Nat Turner “fomented” rebellion against white slaveholders, but the thirteen colonies never fomented rebellion against the British crown, laborers never fomented rebellions against their employers during the Gilded Age, and Civil Rights activists were never seen as fomenting civil unrest in the 1960s. Why is it that only slaves are charged with fomenting anything? Far better, it seems, to use words like “instigate,” “encourage,” “incite,” “provoke,” and “urge.”

4. Discourse: One of the worst examples of academic jargon in existence. Most folks “converse,” “discuss,” or “debate.” Academics engage in “discourse.” The former are action verbs; “discourse,” as academics use it, is a static, bloodless noun. Turning lively actions into abstract nouns rarely improves a sentence.

5. Jettison: Most folks “throw,” “drop,” or “remove” things. Academics “jettison” things. Another jargon term worth avoiding.

6. Engagement/Civic Engagement: Countless education programs, centers, and non-profit organizations win financial grants and private donations because they state in their mission statements that they promote “engagement” or “civic engagement.” But what do these terms mean, especially the latter? As I’ve previously discussed on this blog, these terms mean a million different things to a million different people, but I suspect no one really knows what it means to participate in engagement or civic engagement.

7. Impact: Much writing—regardless of topic—attempts to explain correlations and causations between people, places, and things. In describing these relationships, writers discuss “impact.” But again, what does it mean for something to have an “impact”? Even more problematic, “impact” as a verb refers to hitting something or a collision, which is not the same as describing the effect of one thing upon another. As this brief essay points out, “Impact means collision . . . Laws don’t impact people. Laws affect people.”

8. “Lifelong Learning”: Lord, what in the world does this phrase mean? What is the University of Missouri-St. Louis trying to do with this “Lifelong Learning” program? Isn’t the goal of any self-respecting education institution to help sharpen its students’ critical faculties and develop a lifelong passion for learning and discovery? Have you ever heard of an education program whose mission statement says, “Yada-Yada University: committed to promoting a 23-year passion for learning!”?

9. Disruption/Disruptive Innovation: There is a lot of talk these days about “disruptive innovation” as a form of radical change in business and education. But the term is a buzzword, used so often and in so many contexts as to render it completely meaningless. As Matthew Yglesias argues, the term is now “a lame catchphrase.”

10. Postmodern: The ultimate academic buzzword, used to describe any cultural, social, philosophical, economic, literary, or political thought since World War II. Wikipedia can only say that postmodernism is “a departure from modernism” (whatever ‘modernism’ means!). Here’s what Dick Hebdige had to say about “postmodern” in his book Hiding in the Light: On Images and Things:

When it becomes possible for a people to describe as ‘postmodern’ the décor of a room, the design of a building, the diegesis of a film, the construction of a record, or a ‘scratch’ video, a television commercial, or an arts documentary, or the ‘intertextual’ relations between them, the layout of a page in a fashion magazine or critical journal, an anti-teleological tendency within epistemology, the attack on the ‘metaphysics of presence’, a general attenuation of feeling, the collective chagrin and morbid projections of a post-War generation of baby boomers confronting disillusioned middle-age . . . a fascination for images, codes and styles, a process of cultural, political or existential fragmentation and/or crisis, the ‘de-centring’ of the subject, an ‘incredulity towards metanarratives’ . . . the collapse of cultural hierarchies, the dread engendered by the threat of nuclear self-destruction, the decline of the university . . . then it’s clear we are in the presence of a buzzword.

Have any buzzwords or jargon phrases to add? Feel free to leave a comment below!


Can a Distinction Be Made Between “Academic” and “Popular” History?

A colleague and I recently engaged in a fascinating discussion comparing and contrasting works of “popular history” and “academic history.” Through this conversation I realized that I’m not sure how to define the proper criteria for what constitutes a work of “popular history.” Does a work of historical scholarship become popular once it hits a certain number of book sales? If so, what is that number? Does one need to have a certain educational background in order to be considered a popular historian? Can a work geared towards academic scholars become popular with a non-academic audience? Can a clear distinction be made between works of popular history and academic history?

Some professional historians with PhDs believe that they alone are qualified to shape and participate in the historical enterprise. A couple years ago historians Nancy Isenberg and Andrew Burstein attempted to act as gatekeepers in a condescending article for Salon that dismissed popular history written by non-academics and argued that only PhD historians were qualified to write credible historical scholarship:

Frankly, we in the history business wish we could take out a restraining order on the big-budget popularizers of history (many of them trained in journalism) who pontificate with great flair and happily take credit over the airwaves for possessing great insight into the past. Journalists are good at journalism – we wouldn’t suggest sending off historians to be foreign correspondents. But journalists aren’t equipped to make sense of the eighteenth and nineteenth centuries.

I find this perspective badly flawed and unrealistic. Yes, a history PhD provides a blanket of scholarly authority and thorough training in research, writing, and interpretation. But to suggest that history PhDs alone can “do history” ignores the fact that people of all education levels use historical thinking on a daily basis without the help of history PhDs. There are many different ways people learn about and understand history, including film, television, blogs, Twitter, and cultural institutions like history museums and historical societies. All of these mediums attract larger audiences than books written by academics. The wish that historians, journalists, etc. would simply stay in their academic “silos” of expertise and dictate their knowledge to the rest of society–without the input of non-academics–smacks of what Tara McPherson defines as “lenticular logic.” In a complex and wide-ranging critique of academic “silos” and the racialization of the digital humanities, McPherson argues that “the lenticular image partitions and divides, privileging fragmentation. A lenticular logic is a logic of the fragment or the chunk, a way of seeing the world as discrete modules or nodes, a mode that suppresses relation and context.” History is all around us, and anyone can participate in the making of new scholarship, not just the academic gatekeepers. To suggest that one’s credentials matter more than the substance of one’s arguments strikes me as profoundly un-academic.

Notwithstanding Isenberg and Burstein’s arguments, can we still make generalizations about what makes a work of history “popular history”? In the course of our conversation I attempted to outline a few distinctions to my colleague.

Interpretation vs. Reporting: Some of the more popular works of history I’ve come across tend to report “what actually happened” rather than closely examine primary and secondary source documents for new ways of interpreting the past or questioning common understandings of historical events. For example, Jay Winik’s April 1865: The Month That Saved America is a widely popular retelling of the events leading up to Confederate General Robert E. Lee’s surrender to United States General Ulysses S. Grant, but the narrative Winik embraced didn’t change our understanding of these events; it simply repeated past interpretations about the supposed beginning of a national reconciliation following Appomattox. Meanwhile, a more recent book about the same events in April 1865, published by an academic press and written by an academic scholar, will most likely not gain the same audience as Winik’s book. Elizabeth Varon’s Appomattox: Victory, Defeat, and Freedom at the End of the Civil War interrogates our popular understanding of Lee’s surrender to Grant and convincingly shows that the Appomattox surrender was not necessarily the starting point of the happy national reconciliation that past scholars have described. Her book is more interpretive than Winik’s and leaves us asking new questions rather than accepting a grand narrative about Appomattox.

Methods vs. Content: Academic scholars are trained to place their scholarship within a larger framework that analyzes how historians have interpreted and understood a historical event over time – what is commonly referred to as “historiography.” In a previous essay I criticized David McCullough for never placing his book 1776 within the historiography of George Washington studies. We never get a sense in 1776 of where McCullough’s understanding of Washington’s generalship fits within the scholarly discussions of this topic, and we struggle to figure out how and where McCullough obtained the information that informs his scholarship (the footnotes are awful, although the organization of footnotes is often controlled by publishers, unfortunately). The work of other scholars gets flattened in works of popular history, and historical methods are replaced by a focus on content and narrative. I don’t necessarily think it’s bad to focus on content at the expense of methods, but had I written the way McCullough writes while in graduate school I would have failed all my classes. As a historian I want to see the author’s methodology, sources, and historiography regardless of topic, though I suppose it remains an open question how necessary these things are and where they would fit within a work intended for a non-academic audience.

Type of History: Certain types of history are more popular than others, and national histories and grand narratives remain popular despite changing interests among academic historians. In the 1960s and 1970s academic historians sought new ways of understanding the past through the experiences of ordinary people rather than grand narratives about politicians, monarchies, and cultural elites. They started asking questions about marriage, divorce, alcohol consumption, group rituals, and sexual habits, and they started using social science techniques (from economics, anthropology, and political science) and devising quantitative methods for answering these questions in an effort to capture a more holistic understanding of past societies. Also crucial to this “new social history” was a focus on local context, whether families, tribes, cities, counties, or states. According to Gordon Wood, “by the 1970s this new social history of hitherto forgotten people had come to dominate academic history writing,” and almost every facet of human behavior was placed under scrutiny by historians (2). Gone was the focus on grand narrative and national history. Despite these radical changes (and subsequent ones) within the academy, non-academic interest in the work of social historians remains lukewarm to this day. Wood points out that history degrees awarded to students from 1970 to 1986 declined by two-thirds (3), and the numbers still look questionable today. Go to a Barnes & Noble history bookshelf and you’ll find a plethora of books on war, politics, and national histories, but few studies on gender, social history, cultural history, or local history. The Bill O’Reillys, David McCulloughs, and Walter Isaacsons are still the big sellers at popular bookstores.

Despite these generalizations, I came to realize in my conversation that there are exceptions to all these rules. McCullough’s 1776 presented a bold interpretation suggesting that Washington’s subordinate generals deserve more credit for their role in keeping the Continental Army together in 1776, and by all accounts McCullough is a meticulous researcher who is well-respected by most academic historians. Columbia University historian Eric Foner’s magisterial 1988 “academic” publication Reconstruction: America’s Unfinished Revolution, 1863-1877 delved deeply into historiographical arguments and interpretive history, yet it gained widespread popularity and remains a standard in Reconstruction studies. Bruce Catton, Douglas Southall Freeman, and Allan Nevins–all journalists–inspired generations of Americans (and future history PhDs) throughout the mid-twentieth century to study the American Civil War through their meticulously researched narratives on the war. And academic historians throughout the 1950s and 1960s were seen as highly respected public intellectuals.

The more I think about it, the more unsure I become of this academic-popular divide. In the end I think all historians can learn a lot from each other about method, content, style, tone, and organization without putting each other into boxes based solely on book sales.


Non-Academics and the Historical Enterprise

Part Two in a series of posts about the CWI 2014 Summer Conference and the Civil War in 1864.

One of the great things about history is that anyone can study it and contribute their own historical scholarship without fancy credentials or even employment in a history-related field. History is all around us, and there are many ways to engage with it beyond the confines of an academic classroom. Even those who grew up hating high school history courses and their seemingly endless focus on “dates, dead people, and dust” often come to acknowledge the importance of history in adulthood, accumulating enough historical knowledge to at least partially recognize their place in it.

The Civil War Institute’s 2014 Summer Conference at Gettysburg College demonstrated to me–perhaps better than any other conference I’ve attended–the benefits of academics and non-academics sharing historical knowledge with each other. Almost every history conference I’ve attended or participated in prior to last week was dominated by academic historians in the crowd and at the speaker’s podium, an environment that essentially consisted of academic historians talking to each other about topics that were mostly of interest to them and only them. I have no problem with academic conferences that are mostly composed of professional historians, but it was a really remarkable experience seeing so many non-academics at the CWI conference, both as attendees and participants. I met so many people who attended the conference not because they worked for a prestigious university that paid for their travels but because their love of Civil War history led them to spend their own hard-earned money and time at Gettysburg. People from a wide range of occupations came to see the conference, including high school teachers on summer break, people in business and law, and retired enthusiasts who now spend their time learning about history.

The presenters at CWI also came from a wide range of occupations. Emmanuel Dabney and Eric Leonard of the National Park Service, independent writer Megan Kate Nelson, high school teacher Kevin Levin, and Licensed Battlefield Guide Sue Boardman all demonstrated to me that one does not need to be a university professor to help shape the field of Civil War studies. I must also acknowledge the talents of Gordon Rhea, who participated in a sit-down interview with Gettysburg College professor Peter Carmichael on the first day of the conference. Rhea was a full-time lawyer in Washington, D.C. in the 1990s and 2000s when he wrote his trilogy of books on Ulysses S. Grant’s 1864 Overland Campaign in Virginia, essentially turning himself from a lawyer by day into a historian by night (a fourth installment on the campaign is forthcoming). These books have become standard resources for analyzing the Overland Campaign and are doubtless included in the libraries of academic Civil War historians across the country. Rhea’s accomplishments are really amazing if you think about it. You don’t often see non-academics writing standard treatises on medical practices, quantum physics, or German literature. But that’s the great thing about history – anyone who’s interested can contribute their interpretations of history without worrying about a lack of credentials. All you need is good evidence and interpretive skills to back up your claims.

As someone who has great reservations about pursuing a history Ph.D. or an academic career, it was inspiring to see so many public historians, students, and history enthusiasts contributing to the scholarly discussions that took place at CWI. While I haven’t completely ruled out the possibility of someday continuing my education, I’ve come away from this conference thinking I can get pretty far in the history world even if I choose to focus on my public history career without any further education. For now we’ll have to wait and see what happens on that front. Life as a public historian has been pretty great so far.


A Moment of Gratitude for Academics

This past week has been an absolute blur. I gave a paper at a conference on Saturday the 8th, I’m working on creating a visitor studies evaluation for the Indianapolis Museum of Art, and my work with the National Council on Public History is hectic as we plan for our 2014 Annual Meeting in Monterey, California, which starts on Wednesday, March 19. My formal thesis defense landed on top of these other tasks, taking place this past Tuesday, March 11.

The defense went amazingly well. It lasted about an hour and a half and was really more of a conversation than an interrogation. We ended up spending some time towards the end discussing post-graduation life and ways to get parts of my thesis turned into journal articles (more on that in the future). I also got a really good question about how I would interpret the Grand Army of the Republic, Department of Indiana in a public history setting, to which I responded with something along the lines of what I discussed in this post about the Indiana Soldiers’ and Sailors’ Monument and the collective memories of Indianapolis.

I got home from school pretty late that night, around 11 PM. Once I got back I did something I haven’t done in a long time: I shed a few tears. It wasn’t a huge blowout sort of cry; just a light moment that acted as a cleanup for my heart in the same way that a person gets an oil change for their car. It felt good. I wasn’t crying because I’m done–I actually still have many edits and a formal format review with the graduate office to complete–or because I’m really proud of myself. More than anything it was the fact that I received the approval of a thesis committee that did so much to help me and whose scholarly and personal credentials I stand in awe of. These people took time out of their busy schedules to offer feedback and suggestions for my research while giving me room to make mistakes and find my own way through the process. To have their enthusiastic support meant the world to me, and I don’t take their approval lightly. Beyond these few words it’s hard to convey the sense of gratitude I have for my entire thesis committee and the countless other faculty members at IUPUI who have done so much over the past two years to help me discover the joys of studying history while becoming a critical thinker and engaged citizen in the present.

It is easy for people to construct a picture in their minds of academic scholars as out of touch with the world outside of their ivory tower (or in the case of my professors, the 1970s-type building with terrible drinking fountains, windowless offices, and moldy ceilings). While I agree somewhat with Nicholas Kristof’s calls for some academic writers to move beyond “turgid prose” in their writing and research endeavors, Kristof goes too far in portraying academics as willfully ignorant of the outside world and perhaps even their classrooms. Academics, in Kristof’s mind, have relegated themselves to their highly specialized research silos while avoiding the use of social media tools or commentaries on contemporary problems.

In my own experiences I have continually encountered and worked with academics who were the polar opposite of Kristof’s portrayal. Sure, some disdain social media and blogs, and they all have specialized topics of study that provide fuel for their scholarly endeavors. But almost all of them have also been nothing but supportive of their students’ own scholarly pursuits, and they continually promote their own work in ways beyond monographs and obscure journals. These endeavors include consulting with outside cultural institutions, participating in workshops for K-12 teachers, and, for some, blogging and tweeting about their research.

I guess the point I’m making is that some of the biggest intellectual heroes in my life are the academics who not only do their own amazing research but also enthusiastically work to help their students in any way possible. Indeed, these academics care greatly about the world around them because they are helping to prepare students for work in that world. Isn’t that one of the reasons people become teachers in the first place? Isn’t the goal of the humanities to make us better humans at the end of the day?

No scholarly writing endeavor is ever a fully individual effort. While a writer may do all the research and writing, no writer can get their work published without the help of others who offer suggestions, make edits, and ensure that the work is of the highest standard. I am thankful for the help I’ve received and can only hope that I get many opportunities in the future to help people–whether in or out of the classroom–pursue their own journeys in learning.


The “Nationalist Tradition” and Civil War History

In my last post, I briefly analyzed a Memorial Day speech at Crown Hill Cemetery in Indianapolis given by Leo Rassieur, the Grand Army of the Republic’s National Commander for a one-year term in 1900-1901. I argued that Rassieur engaged in a great deal of reckoning in his Memorial Day speech. As did many other orators in the years after the war, Rassieur didn’t recall what actually happened during and after the Civil War. Instead, he focused on what he believed to be the true meaning of the war and what he thought was important for his audience to learn and understand about it, which can be summarized as follows:

  • Former Confederates did not engage in treason. In fact, they were “fellow citizens” who had merely demonstrated excessive pride in their section before the war. Once the war concluded, Union veterans happily accepted their former adversaries back into the American body politic to reap the benefits of citizenship.
  • True citizenship was demonstrated through duty, honor, and sacrifice to nation.
  • Slavery and Emancipation were not worth discussing. To Rassieur, it was far more important to forgive former Confederates and look towards the future rather than dwell on past conflicts (or even those who had died as a result of those conflicts).

Interestingly enough, many of Rassieur's comments reflected the ideas and sentiments of academic historians who were writing about the Civil War at that time. Whether or not Rassieur read histories of the war is impossible to know, but the similarities are nevertheless fascinating. Reading Thomas J. Pressly's 1954 work Americans Interpret Their Civil War over the past few days has given me new insights into how people interpreted the war at the end of the nineteenth century.

One important moment in the "history of history" came in 1884, when the American Historical Association was formed. During this period, the field of history became professionalized as universities began creating history departments and training young students for work in academia. Some of the early Civil War historians who emerged from this class of professionalized historians included future President Woodrow Wilson, Frederick Jackson Turner, Albert Bushnell Hart, and Edward Channing.

According to Pressly, academic historians attempted to establish professional standards for engaging in “good history.” Students were taught that history was best conducted through “impartial, objective, and scientific” methods that were completely free of bias. History was composed of set facts and knowledge outside the minds of historians, and it was their duty to collect these facts and deliver them as “definitive” accounts of the Civil War. In fact, “by the turn of the [twentieth] century the trained historians considered sectional bias ‘unscientific’ and a threat to their professional aims and standards,” according to Pressly (184). Therefore, showing any sort of bias or favoritism towards one side or the other when analyzing the Civil War was a representation of bad history and perhaps even a sign of mental weakness. Both sides were to be treated fairly and impartially so the “facts” would eventually come out.

Each of the aforementioned historians fell into what Pressly describes as the “Nationalist Tradition” (221). Each wrote from a perspective of promoting sectional reconciliation between North and South, and each was devoutly nationalist. They viewed themselves as members of a unified community in a nation-state ruled by a central government, not as members of a decentralized union of sovereign states (or members of no community at all). This distinction is crucial because it demonstrates that while objectivity was encouraged in historical study, the actual result of many works during this period was subjective satisfaction with the results of the war, most notably the fact that the Union was preserved intact. Likewise, any commentary on slavery was usually critical of the institution, but that criticism stemmed from a belief that slavery had caused a destructive chasm between North and South, not necessarily because it was a moral wrong.

Nevertheless, the first batch of professional historians made strenuous efforts to demonstrate an "objective" analysis for understanding what the Confederacy fought for. Woodrow Wilson essentially argued in his 1893 history of the Civil War–Division and Reunion–that both sides were "right" in their own way. Supporters of the Confederacy, according to Wilson, were right in interpreting the U.S. Constitution as supporting the "compact theory" of government (that the union was composed of sovereign states). Supporters of the Union, however, were also right in that they understood the Constitution as "a living organism" that changed over time and that the "compact theory" of governance was no longer in play by the time of the Civil War (211). Additionally, both sides had fought heroically and neither side could be blamed for starting the war. Wilson and Frederick Jackson Turner, rather than blaming the war on one side based on the actions of an "evil" individual (as did writers in the 1860s and 1870s like Edward Pollard in The Lost Cause), used their professional training to analyze economic, social, and geographical factors that contributed to the war. This in turn played a role in interpreting the war as a blameless conflict, at least with regard to political leaders at the time of the war's outbreak in 1861.

In perhaps one of the most ridiculous statements of the time, Emerson D. Fite, in his 1911 book on the presidential election of 1860, boldly proclaimed that "both sides were right! Neither could have given in and have remained true to itself." (195-196)

The fact that these historians were nationalists and professionally trained partially explains why they were so vocal about sectional reconciliation, but another crucial factor was also in play. None of these historians experienced the Civil War firsthand, and they had little to no interaction with the antebellum society that had failed to resolve its problems peacefully (the aforementioned historians were born in 1856, 1861, 1854, 1856, and 1874, respectively). They were not participants in the war but rather the first generation of observers to study it from afar. This fact is particularly important when one realizes that these men had little to no experience with the institution of slavery, and perhaps this is why they did not consider it an important factor that might have complicated their nationalist sentiments.

None of these historians analyzed the Grand Army of the Republic or what veterans were doing after the war, and part of the reason is that GAR Encampment records were secret at the time. However, it is nonetheless important to analyze the "Nationalist Tradition" school of thought because its interpretations of the Civil War remained popular with many historians well into the 1950s. Historians who began writing about the GAR in the 1930s and onward were heavily influenced by the "Nationalist Tradition," and these feelings of nationalism and sectional reconciliation would seep into their analyses of Civil War veterans in the twentieth century. It is to this period that we will turn next as I analyze one historian who won a Pulitzer Prize for his book on Civil War veterans in 1937.


[Note: the original draft of this essay mistakenly stated that the year of this historian’s book was 1936. My apologies].