A Parting Gift From General Grant

Frederick Tracy Dent (1820-1892). Photo Credit: Wikipedia

Yesterday my friend and colleague Bob Pollock and I took a trip to Southern Illinois University Carbondale to conduct research on the papers of Frederick Tracy Dent, the brother-in-law of General and President Ulysses S. Grant. Bob is currently researching the Dent family and has been writing extensively about this research on his blog Yesterday…and Today. We didn’t really find any documents within the collection to help answer our questions about the Dent family, but we nonetheless found some really interesting material, including letters from Grant, William T. Sherman, James Longstreet, and Frederick T. Dent himself. Bob discussed some of our findings here.

Frederick T. Dent was born on December 17, 1820, in St. Louis, Missouri, and grew up at the White Haven estate that is now the Ulysses S. Grant National Historic Site. He attended the United States Military Academy at West Point and graduated with the class of 1843. During his time at West Point Dent befriended and roomed with Ulysses S. Grant, a fellow 1843 graduate. When Grant was sent with the 4th U.S. Infantry to Jefferson Barracks in St. Louis following graduation, Fred invited him to meet the rest of his family at White Haven. Ulysses met Fred's sister Julia at White Haven in the spring of 1844, and it was there that the two fell in love and began a four-year courtship. Their wedding took place in downtown St. Louis in August 1848, and the marriage lasted thirty-seven years.

Fred had a stellar forty-year career in the U.S. military. He served on the western frontier and in the Mexican-American War during the antebellum years. When the Civil War broke out in 1861 he stayed with the United States military and spent the first part of the war out west. Following Grant's promotion to Lieutenant General in March 1864, Dent was appointed an aide-de-camp on Grant's staff. After the war he was promoted to Brigadier General of the U.S. Army in 1866, and he served as a "military secretary" for Grant during the first four years of Grant's presidency (1869-1873). If you wanted to meet with President Grant, you had to go through Fred first (a fact that became evident yesterday as we sifted through numerous letters from people seeking Fred's assistance in securing an interview with Grant). Fred retired from the U.S. military in 1883 and later moved to Denver, Colorado, where he died in 1892.

In the course of our research we stumbled upon a touching letter written by Frederick Dent Grant to Frederick T. Dent in March 1886. Fred Grant was Ulysses and Julia Grant’s oldest child and Fred Dent’s namesake. Ulysses had died on July 23, 1885, exactly eight months prior to this letter, and Fred Grant had an important message for his uncle:

New York
March 23d 1886

Dear Uncle Fred,

Just before my beloved father died he gave some instructions about what he would like done. Among these wishes was one about you. He said he wanted to send you a little present in memory of old and happy days. That he had grown very fond of you, and that if Mother could spare it he would like her to send you $500[,] which I now enclose to you with her love.

Mother says if you and Aunt Helen can come she would like you to pay her a visit. All join in love for you and yours[.]

Affectionately

Your Nephew
Fred Grant

Ulysses S. Grant never forgot the assistance, kindness, and companionship his brother-in-law provided him throughout a more than forty-year friendship. Through this relationship we can see the generous character of both men.

Cheers

Thinking Out Loud About the History Ph.D. and the Future of Higher Ed

History PhD Reporting Employment 2012

Regular readers of Exploring the Past know that I occasionally muse on the state of higher education in the United States and what the academy might hold for someone like me in the future. I wish I had the ability to look into a crystal ball and envision this future, but instead I find myself facing a dilemma over whether I should pursue my history Ph.D. On the one hand, I've worked extremely hard to position myself for a full-time, permanent position with the National Park Service, an agency whose mission and values I care deeply about. Now that I've earned that position (for which I'm extremely grateful), it seems that getting "experience in the field" and not pursuing a Ph.D. at present makes the most sense for my career development, and I've been told as much by some of my teachers. On the other hand, I received a lot of support from other teachers during graduate school who encouraged me to strongly consider the Ph.D. path and pursue it as soon as possible. By pursuing the Ph.D. now, I could also position myself for potential employment within higher education in addition to possibly furthering my public history career.

I love teaching history in both formal and informal learning settings, and I hope to do more of both in my career. But it can be mentally overwhelming thinking about the unknown contingencies that will shape where I go and what I do through the course of my career. It’s important not to discount any avenue of opportunity at this point, and I’ve been doing my best to get a feel for what I might expect if I were to pursue my Ph.D. Unfortunately, the research I’ve done so far indicates that my prospects don’t look good if I pursue this path.

Overeducated, Unemployed: There are too many Ph.D. candidates in all fields of study. Hope College English professor William Pannapacker suggests that this glut of Ph.D.s (especially in the humanities) stems from schools' reliance on cheap teaching labor: "It's my view that higher education in the humanities exists mainly to provide cheap, inexperienced teachers for undergraduates so that [a] shrinking percentage of tenured faculty members can meet an ever-escalating demand for specialized research." These schools, according to Pannapacker, don't really care about the employment status of their students once they graduate.

Because there is a glut of Ph.D.s on the academic job market, schools set the terms of employment in their favor. Amid severe funding cuts for public colleges and universities and rising costs for non-academic ventures (more on that in a moment) since the 2008 Great Recession, a race to the bottom has ensued within academia. More than 50 percent of all faculty in today's schools are part-time. Some faculty voluntarily choose to be part-time because they have full-time employment outside academia, are retired and choose to teach occasionally, or simply prefer that sort of schedule. But the vast majority of these faculty members do not work outside the academy and are placed in a position where they frantically run from school to school looking for classes to teach. Adjunct faculty members have no job security, no health benefits, and make an average of $2,700 per course (which means that a person teaching four classes per semester, eight per year, would make $21,600 annually before taxes). There are professors on food stamps.

Efforts to find new Ph.D.s employment opportunities outside the academy ("alt-ac") are still in their infancy, and the transferability of academic skills and training to alternative careers remains an open question. The Boston Globe recently published an essay about a "quiet crisis" in the science community, where recent Ph.D.s have been increasingly forced to work in low-paying postdoctoral apprenticeships because of cuts in higher education employment and federal funding. (And, while I'm at it, I should point out that STEM graduates at every level are struggling to find employment, contrary to the popular belief that the U.S. lacks a sufficient number of young, competent science, technology, engineering, and mathematics professionals.)

And then there are those who simply can't find employment. The graph above, taken from a recent study by the American Historical Association, shows that fewer than 50 percent of new history Ph.D.s reported finding "definite employment" upon completing their degree, while about 40 percent reported that they were still "seeking employment." (Here's a collection of data studies from the AHA on history programs, employment, and students.)

What is the cost of pursuing a Ph.D.?: Many–if not most–Ph.D. students live on a monthly or yearly stipend that ostensibly covers the cost of living. Schools pay their students to work as teaching assistants, researchers, or in a range of other jobs for roughly twenty hours a week. These stipends end up totaling around $10,000-$12,000 per year, which is low enough that some students have been forced to take public assistance to help pay the bills while in school. A select few are lucky enough to get their tuition fully covered in addition to their yearly stipend, but most students are forced to take out loans to cover rising tuition costs and fees while in school. The average debt burden for graduate students today is $60,000. Moreover, this debt doesn't account for the years of lost income that accompany any full-time investment in the pursuit of a Ph.D., which can take between five and ten years to complete. University of Iowa President Sally Mason recently attributed at least half of this student debt to so-called "'lifestyle debts' caused by students buying things like iPhones, iPads, and laptops" (see link above), reflecting her clear ignorance of these serious problems.

Where does the money go?: The sad thing about this rising debt is that so much of the increase in tuition rates at U.S. schools reflects the rising costs of things completely detached from the academic classroom. Contrary to popular assumptions and beliefs, colleges and universities are not increasing tuition rates because of rising faculty costs. That money is actually going towards the building of fancy campus centers, sports stadiums, dorms, and 9,000-square-foot presidents' residences. And we can't forget the huge growth of higher education administrators, who take an increasing share of the budgetary pie in academia. From 1987 to 2012 the number of administrators in colleges and universities more than doubled, with 517,636 administrators and professional employees added to the payrolls during that period.

I’m not one for big freak-outs and alarmist rhetoric, but the above information definitely sobers my perspective whenever I start thinking about furthering my education, and I have a bad feeling that things will either stay the same or get worse in the future.

Cheers

Should Historians Influence Present-Day Politics?

I’ve been thinking a lot about Gordon Wood’s ideas on the differences between history and political theory. In his book The Purpose of the Past: Reflections on the Uses of History, Wood briefly reflects on his conception of these differences. One passage in the book is particularly noteworthy and worth quoting in full:

Political theorists, especially those influenced by the ideas of Leo Strauss, tend to believe that the history of political thought can be studied as a search for enduring answers to perennial questions that can enhance contemporary political thought. Historians, on the other hand, tend to hold that ideas are the products of particular circumstances and particular moments in time and that using them for present purposes is a distortion of their original historical meaning. It doesn’t follow from this distinction that past ideas cannot be legitimately used in the very different circumstances of the present; of course, they can be used and are used all the time. Jefferson’s idea of equality, for example, has been used time and again throughout our history, by Lincoln as well as Martin Luther King, Jr. Historians contend that such usages violate the original historical meaning of the ideas and cannot be regarded as historically accurate, but they don’t deny the rationality and legitimacy of such violations.

Such distinctions and violations are indeed necessary for contemporary discussions of political thought and are no great sin, as long as the theorists are aware that they are not being historically accurate. It’s the theorists’ claim that their present-day use of past ideas is true to the original way they were used in the past that historians quarrel with. Ideas, of course, do not remain rooted in the particular circumstances of time and place. Ideas can, and often do, become political philosophy, do transcend the particular intentions of the creators, and become part of the public culture, become something larger and grander than their sources. Political theory, studying these transcendent ideas, is a quite legitimate endeavor; it is, however, not history (162-163).

Wood expresses his concern that historians run the risk of thinking "unhistorically" by manipulating past ideas to fit our understanding of political conditions in the present. He worries that holding people from the past responsible for a future they could never envision or conceive in their own time leads to a poor understanding of past ideas within their own historical context – "the original way they were used in the past." Thus, when political leaders like Lincoln and MLK use the past to justify their political philosophies in the present (a common, rational practice then and now), they are distorting and violating the values of historical thinking, according to Wood. Historians analyze change over time and help us understand how our contemporary world came into being, but to use past ideas as a framework for establishing political theories in the present is not history. As Wood comments later in the book, "I suppose the most flagrant examples of present-mindedness in history writing come from trying to inject politics into history books . . . Historians who want to influence politics with their history writing have missed the point of the craft; they ought to run for office" (308).

These are sharp, intelligent reflections on the historian’s craft, but I think the extent to which politics plays a role in historical thinking is more of an open question than Wood would like to acknowledge. For one, it’s hard to imagine a history book free of politics because the past and present hold a reciprocal relationship with each other that makes it nearly impossible for us to get our politics out of the past. It is often assumed that the past shapes how we view the present, but less often do we acknowledge that the present shapes our conception of the past. It’s probably true that a place like Ferguson, Missouri, has been shaped by a past legacy of racist government policy and white supremacy, but it’s also true that the political ramifications of a 2014 police shooting of a black teenager by a white cop in Ferguson (regardless of the case’s final outcome) shape our conceptions of what, exactly, that legacy of racist government policy and white supremacy means for our history.

Public historians must also face these sorts of questions because so much of what they do is inherently political. The National Council on Public History defines public history as a set of practices aimed at “describ[ing] the many and diverse ways in which history is put to work in the world.  In this sense, it is history that is applied to real-world [i.e. present-day] issues.” And a recent essay on the NCPH’s blog, History@Work, praised the efforts of a federal district court judge to provide a historical context for explaining her opposition to a 2011 Texas Voter ID law, arguing that this case was an example of putting history to work in the world. These sorts of present-day uses of the past seem to contradict Wood’s distinction between history and political theory.

What is a historian of racism, Jim Crow, and the police state supposed to do with a political hot potato like Ferguson? If that historian embraces Wood's avoidance of politics in history writing, he or she may choose to focus solely on the political ideas surrounding these topics from roughly 1830 to 1890 (or some similar period), examining how those ideas materialized within the context of that period without mentioning or connecting them to present-day politics. But a critic of this approach might argue that limiting a discussion of these topics to the nineteenth century is in itself a political act that also leaves out a crucial piece of the historical narrative. Critics might also invoke the arguments of other historians like Howard Zinn, who asserted in 1970 that "we can separate ourselves in theory as historians and citizens. But that is a one-way separation which has no return: when the world blows up, we cannot claim exemption as historians."

Food for thought.

Cheers

Can a Distinction Be Made Between “Academic” and “Popular” History?

A colleague and I recently engaged in a fascinating discussion comparing and contrasting works of “popular history” and “academic history.” Through this conversation I realized that I’m not sure how to define the proper criteria for what constitutes a work of “popular history.” Does a work of historical scholarship become popular once it hits a certain number of book sales? If so, what is that number? Does one need to have a certain educational background in order to be considered a popular historian? Can a work geared towards academic scholars become popular with a non-academic audience? Can a clear distinction be made between works of popular history and academic history?

Some professional historians with PhDs believe that they alone are qualified to shape and participate in the historical enterprise. A couple years ago historians Nancy Isenberg and Andrew Burstein attempted to act as gatekeepers in a condescending article for Salon that dismissed popular history written by non-academics and argued that only PhD historians were qualified to write credible historical scholarship:

Frankly, we in the history business wish we could take out a restraining order on the big-budget popularizers of history (many of them trained in journalism) who pontificate with great flair and happily take credit over the airwaves for possessing great insight into the past. Journalists are good at journalism – we wouldn’t suggest sending off historians to be foreign correspondents. But journalists aren’t equipped to make sense of the eighteenth and nineteenth centuries.

I find this perspective badly flawed and unrealistic. Yes, a history PhD provides a blanket of scholarly authority and thorough training in research, writing, and interpretation. But to suggest that history PhDs alone can "do history" negates the fact that people of all education levels use historical thinking on a daily basis without the help of history PhDs. There are many different ways people learn about and understand history, including film, television, blogs, Twitter, and cultural institutions like history museums and historical societies. All of these mediums attract larger audiences than books written by academics. The wish that historians, journalists, etc. would simply stay in their academic "silos" of expertise and dictate their knowledge to the rest of society–without the input of non-academics–smacks of what Tara McPherson defines as "lenticular logic." In a complex and wide-ranging critique of academic "silos" and the racialization of the digital humanities, McPherson argues that "the lenticular image partitions and divides, privileging fragmentation. A lenticular logic is a logic of the fragment or the chunk, a way of seeing the world as discrete modules or nodes, a mode that suppresses relation and context." History is all around us and anyone can participate in the making of new scholarship, not just the academic gatekeepers. To suggest that one's credentials are more important than the substance of one's arguments is profoundly un-academic to me.

Notwithstanding Isenberg and Burstein’s arguments, can we still make generalizations about what makes a work of history “popular history”? In the course of our conversation I attempted to outline a few distinctions to my colleague.

Interpretation vs. Reporting: Some of the more popular works of history I've come across tend to do more reporting of "what actually happened" rather than closely examining primary and secondary source documents for new ways of interpreting the past or questioning common understandings of historical events. For example, Jay Winik's April 1865: The Month That Saved America is a widely popular retelling of the events leading up to Confederate General Robert E. Lee's surrender to United States General Ulysses S. Grant, but the narrative Winik embraced didn't change our understanding of these events and simply repeated past interpretations about the supposed beginning of a national reconciliation following Appomattox. Meanwhile, a more recent book published by an academic press and written by an academic scholar about the same events in April 1865 will most likely not gain the same audience as Winik's book. Elizabeth Varon's Appomattox: Victory, Defeat, and Freedom at the End of the Civil War interrogates our popular understanding of Lee's surrender to Grant and convincingly shows that the surrender was not necessarily the starting point of the happy national reconciliation that past scholars have described. Her book is more interpretive than Winik's and leaves us asking new questions rather than accepting a grand narrative about Appomattox.

Methods vs. Content: Academic scholars are trained to place their scholarship within a larger framework that analyzes how historians have interpreted and understood a historical event over time – what is commonly referred to as “historiography.” In a previous essay I criticized David McCullough for never placing his book 1776 within the historiography of George Washington studies. We never get a sense in 1776 of where McCullough’s understanding of Washington’s generalship fits within the scholarly discussions about this topic, and we struggle to figure out how and where McCullough is obtaining the information he is using to inform his scholarship (and the footnotes are awful, although the organization of footnotes is often controlled by publishers, unfortunately). The work of other scholars gets flattened in works of popular history, and historical methods are replaced by a focus on content and narrative. I don’t necessarily think it’s bad to focus on content at the sacrifice of methods, but had I written the way McCullough writes while in graduate school I would have failed all my classes. As a historian I want to see the author’s methodology, sources, and historiography regardless of topic, but I suppose it remains an open question as to the necessity of these things and where they would fit within a work intended for a non-academic audience.

Type of History: Certain types of history are more popular than others, and national histories and grand narratives remain popular despite changing interests among academic historians. In the 1960s and 1970s academic historians sought new ways of understanding the past through the experiences of ordinary people rather than grand narratives about politicians, monarchies, and cultural elites. They started asking questions about marriage, divorce, alcohol consumption, group rituals, and sexual habits, and they started using social science techniques (from economics, anthropology, and political science) and devising quantitative methods for answering these questions in an effort to capture a more holistic understanding of past societies. Also crucial to this "new social history" was a focus on local context, whether families, tribes, cities, counties, or states. According to Gordon Wood, "by the 1970s this new social history of hitherto forgotten people had come to dominate academic history writing," and almost every facet of human behavior was placed under scrutiny by historians (2). Gone was the focus on grand narrative and national history. Despite these radical changes (and subsequent ones) within the academy, non-academic interest in the work of social historians remains lukewarm to this day. Wood points out that history degrees awarded to students from 1970 to 1986 declined by two-thirds (3), and the numbers still look questionable today. Go to a Barnes & Noble history bookshelf and you'll find a plethora of books on war, politics, and national histories, but few studies on gender, social history, cultural history, or local history. The Bill O'Reillys, David McCulloughs, and Walter Isaacsons are still the big sellers at popular bookstores.

Despite these generalizations, I came to realize in my conversation that there are exceptions to all these rules. McCullough's 1776 presented a bold interpretation suggesting that Washington's subordinate generals deserve more credit for their role in keeping the Continental Army together in 1776, and by all accounts McCullough is a meticulous researcher who is well-respected by most academic historians. Columbia University historian Eric Foner's magisterial 1988 "academic" publication Reconstruction: America's Unfinished Revolution, 1863-1877 delved deeply into historiographical arguments and interpretive history, yet it gained widespread popularity and remains a standard in Reconstruction studies. Bruce Catton, Douglas Southall Freeman, and Allan Nevins–all journalists–inspired generations of Americans (and future history PhDs) throughout the mid-twentieth century to study the American Civil War through their meticulously researched narratives on the war. And academic historians throughout the 1950s and 1960s were seen as highly respected public intellectuals.

The more I think about it, the more unsure I become of this academic-popular divide. In the end I think all historians can learn a lot from each other about method, content, style, tone, and organization without putting each other into boxes based solely on book sales.

Cheers

Taking the #Historiannchallenge

The renowned Civil War historian James McPherson recently conducted an interview with the New York Times Book Review about his favorite works of history, who he believes are the best historians writing today, and what sort of reader he was as a child. Several historians were perplexed by the outdated nature of McPherson's book list and, more significantly, by the fact that 29 men were mentioned while only one woman (who is actually a trained political scientist, not a historian) received any credit for her work in the field. Several bloggers critiqued the McPherson interview, including historian Ann M. Little, who challenged other historians to interview themselves and think about the sorts of answers they'd give to these questions if they were interviewed by the New York Times. My interview is below. Enjoy!

What books are you currently reading?

I am currently reading Gerard N. Magliocca’s biography of Civil War Era Congressman John Bingham, American Founding Son: John Bingham and the Invention of the Fourteenth Amendment. As Yale professor David Blight remarks in his iTunes U lecture series, the United States Constitution prior to the Civil War is not the same Constitution we adhere to today. The antebellum constitution is the “Old Testament” of American governance, whereas our current constitution—shaped largely by the “Civil War Amendments,” especially the 14th—is our “New Testament” of American governance. And John Bingham is the largely unknown author of that New Testament.

What was the last truly great book you read?

Karen and Barbara Fields’ Racecraft: The Soul of Inequality in American Life is simply outstanding. It challenged me to reconsider how I view race and inequality in the United States and helped me better understand how the historical concept of “race” is the creation of racism. I wrote extensively about this book here.

Who are the best historians writing today?

I don't like the way this question is phrased. Some of the best historians in the field today aren't known for or primarily interested in writing, which is only one facet of the work historians undertake within the larger historical enterprise. There are historians who teach in high schools, community colleges, and liberal arts universities who don't research and write extensively, and there are public historians working in government, National Parks, museums, historical societies, archives, historic preservation, and the non-profit sector who do things like interpretation, museum education, document preservation, and public policy. As American universities corporatize, adjunctify, and eliminate tenure-track positions, younger historians like me are increasingly shut out of the sorts of research and academic opportunities scholars like James McPherson had in the 1950s and 60s. The reality is that we face bleak prospects as research-based academic historians whether we like it or not. And regardless, some of us just have different interests within the field.

That makes sense. To rephrase, who are the most inspiring historians to you today?

Good question! There are too many to name here, but I’ll mention a few noteworthy scholars, regardless of their written scholarship. While a graduate student at IUPUI I was very fortunate to study under some phenomenal historians, including John Dichtl, Jason M. Kelly, Modupe Labode, Anita Morgan, Rebecca Shrum, and Stephen E. Towne. My classmate and good friend Nicholas K. Johnson also deserves mention. Nick is currently in Berlin, Germany, for one year conducting research on Weimar Germany cultural history and studying public history. He is a brilliant scholar who is going to be well-known in the field someday.

Outside of IUPUI, I admire the work of Yoni Appelbaum, David Blight, Ta-Nehisi Coates, Kalani Craig, Eric Foner, Barbara Fields, Gary Gallagher, Drew Gilpin Faust, Jennifer Guiliano, Keith Harris, John Hennessey, Caroline Janney, Kevin Levin, James Loewen, Al Mackey, Megan Kate Nelson, Andrew Joseph Pegoda, Bob Pollock, Mary Rizzo, Liz Ševčenko, Brooks Simpson, and Laurel Thatcher Ulrich. And many, many more.

What’s the best book ever written about the Civil War?

Impossible to answer. There are literally hundreds of thousands of studies on the American Civil War, and our understanding of history is constantly revised as historians reinterpret and ask new questions of their primary source documents. I will happily recommend a few noteworthy selections that I personally love, however.

James McPherson's Battle Cry of Freedom: The Civil War Era remains a fine, comprehensive study of the Civil War that is easily accessible for readers. Charles Dew's short volume Apostles of Disunion: Southern Secession Commissioners and the Causes of the Civil War is one of the best studies of the causes of the Civil War. Kenneth M. Stampp's The Imperiled Union: Essays on the Background of the Civil War explores similar territory and includes a fantastic essay on the evolving nature of U.S. nationalism and the changing interpretation of what, exactly, "a more perfect union" means in American politics. Andre M. Fleche's The Revolution of 1861: The American Civil War in the Age of Nationalist Conflict places the American Civil War within the global struggle for liberal democracy in the nineteenth century. Mark A. Noll's The Civil War as a Theological Crisis demonstrates how differing interpretations regarding the Biblical sanctity of slavery presaged political conflicts over slavery. Stephanie McCurry's Confederate Reckoning: Power and Politics in the Civil War South chronicles the undemocratic nature of the Confederacy's experiment in nationhood and, by extension, how the Civil War spawned new (albeit temporary) gender norms in the slaveholding South. And David Blight's Race and Reunion: The Civil War in American Memory provides a beautifully thought-provoking analysis of the ways the Civil War was remembered throughout the nation after the guns fell silent.

I think that’s a good starting point.

Do you have a favorite biography of a Civil War-era figure?

Although the book has its factual and interpretive problems, I really enjoyed Jean Edward Smith’s biography of Ulysses S. Grant. The book introduced me to Grant and helped spawn my interest in the Civil War Era, and for those things I will always be thankful. Here’s my list of recommended Grant studies for those interested.

What are the best military histories?

I regret to admit that I’m not much of a military historian. That said, I enjoyed Joseph Glatthaar’s General Lee’s Army: From Victory to Collapse, J.F.C. Fuller’s Grant and Lee: A Study in Personality and Generalship, Paul Fussell’s The Great War and Modern Memory, and anything from Gary Gallagher and Bruce Catton (it clearly appears that my reading of military history is still dominated by old or dead white men, unfortunately).

And what are the best books about African-American history?

I like David Blight's A Slave No More: Two Men Who Escaped to Freedom, Including Their Own Narratives of Emancipation; John Hope Franklin and Loren Schweninger's Runaway Slaves: Rebels on the Plantation; Colin Grant's Negro with a Hat: The Rise and Fall of Marcus Garvey; Isabel Wilkerson's The Warmth of Other Suns: The Epic Story of America's Great Migration; and Bruce Baker and Brian Kelly's edited collection After Slavery: Race, Labor, and Citizenship in the Reconstruction South. There are many other wonderful studies on African-American history that I have yet to read, and I'd love to hear more recommendations from readers.

What kind of reader were you as a child?

A voracious one. My parents read to me every night as a child and I devoured anything and everything I could get my hands on in our public library and school bookmobile. I still remember winning a reading challenge in my first grade classroom after reading more than 400 books that year.

If you had to name one book that made you who you are today, what would it be?

Miles Davis's autobiography, Miles: The Autobiography, is my favorite book of all time. Part memoir, part history of jazz music, part cultural criticism, The Autobiography exposed me during my teenage years to the ways music acts as a form of cultural expression, political dissent, and historical interpretation all at the same time. And it taught me that our heroes–no matter how much we venerate them–have their own faults, shortcomings, mistakes, fears, and concerns about the world around them. Acknowledging those mistakes and taking steps to rectify them is perhaps the truly heroic act we could all do a better job of practicing.

If you could require the president to read one book, what would it be?

He’s got plenty of other things he should be doing now rather than taking book recommendations from me, but he should read Jonathan M. Hansen’s Guantánamo: An American History once he leaves the Presidency so he can see how badly he screwed up by not closing down America’s most visible symbol of imperial ambition and the post-9/11 legal ambiguities of the War on Terror.

What books are you embarrassed not to have read yet?

Too many, but I am particularly embarrassed about not having read Edmund Morgan’s American Slavery, American Freedom.

What books are on your night stand?

Three books waiting to be read as soon as possible are Michel Foucault's Discipline and Punish, George E. Hein's Learning in the Museum, and Caleb McDaniel's The Problem of Democracy in the Age of Slavery: Garrisonian Abolitionists and Transatlantic Reform.

Cheers

Gary Gallagher on the Differences Between History and Memory

One of my favorite lectures from renowned Civil War historian and University of Virginia professor Gary Gallagher is included in the video above. The bulk of the lecture is dedicated to analyzing the Battle of Gettysburg within the larger context of the Civil War and questioning the common belief that it was a "turning point" that signaled the end of the Confederacy. In the process of questioning these assumptions, Gallagher provides a fine explanation of the differences between history and memory (this explanation starts around the seven-minute mark and continues for about ten minutes).

Gallagher points out that history students oftentimes confuse history and memory as being one and the same, and this confusion can lead to questionable interpretations of primary source documents. He argues that in any historical event there's a certain sequence of complexities and contingencies that shape the outcome of that event (history). But how we remember that event (memory) can be at odds with what actually happened at the time. Gallagher suggests that of the two, memory is oftentimes more important than history to individuals and societies because "it doesn't matter what happened, it's what we think happened." Our memories guide how we engage with and think about history, and they shape how we choose to remember historical events.

Too often, however, our memories can lead us to think of historical events as inevitable. Within the context of the Civil War Gallagher diagnoses this problem as the "Appomattox Syndrome," where scholars start off with the two great legacies of the war (Union and Emancipation) and then move backwards from these moments of inevitability. It's easy for us today to view events from this perspective because we are merely observers of Civil War history, not participants in the making of historical events during that war. But Gallagher says that starting at the end of the story "is the wrong way to do history." If we want to better understand how things actually happened, then we must start from the beginning (while keeping in mind that defining the "beginning" is a subjective exercise) and move forward, taking account of all complexities and contingencies along the way.

We can compare Gallagher’s differentiation between history and memory to different approaches of understanding a sporting event. Sports pundits oftentimes analyze games following the “Appomattox Syndrome” approach. They already know the outcome of the game and let that influence their analysis of crucial “turning points” and highlights worth showing to their audiences: a fight in the second period of the hockey game is defined as a “turning point” that energizes the home crowd and inevitably pushes that team towards the winning goal; a hitter on a baseball team gets a game-winning hit in the ninth inning, and a pundit argues that he “found his swing” during an earlier at-bat in the sixth inning. And so on. But by looking at sporting events as they happened and moving forward, we might do a better job of understanding the complexities and contingencies that shaped the outcomes of those matches.

Although historians are more apt to trust a primary source document created around the time an event took place (a diary written while fighting at Gettysburg) than a remembrance (such as a memoir or a newspaper interview forty years later), we should still proceed with caution when looking at primary source documents. All primary source documents are written from a perspective that doesn't necessarily account for the entire scope of a historical event, and these documents can include their own biases, distortions, and speculations. One soldier at Little Round Top will have a different perspective than one at Culp's Hill (or even another soldier at Little Round Top). A Union soldier will have a different view of events than a Confederate. A local resident of Gettysburg might have different desires and concerns than Confederate General Robert E. Lee or U.S. President Abraham Lincoln.

Through this mix of perspectives and primary source documents created during and after the course of a historical event, historians attempt to reconstruct things as they actually occurred and to discover the "truths" of history. But can we really discover the truths of history? I think there are certain times when we can, but there are many times (probably more than we are willing to acknowledge) when we find ourselves in the shoes of my friend and colleague Bob Pollock, shouting "just gimme some truth" and wondering if we can ever pick up all the pieces of the past.

Cheers

Racism, Racecraft, and the Folly of Tolerance

I just finished reading Karen and Barbara Fields’ fine 2012 publication Racecraft: The Soul of Inequality in American Life. Like Ta-Nehisi Coates (who discusses the book here and here) I found that some of the book’s essays were more difficult to read than others, and there were times when I did not understand the arguments being made. But the main idea of the book–that the concept of “race” in America is an invention of racism–really challenges me to reconsider how the language American society uses to discuss “race” is oftentimes embedded with racist connotations, even if those doing the talking have good intentions.

The Fieldses point out in the introduction to Racecraft that Martin Luther believed in witchcraft: men stealing milk by thinking of cows; a mother getting asthma because of a mean glare from a neighbor; demons appearing at one's deathbed. In each of these cases, witchcraft became a strategy by which Luther explained his everyday experiences and the world around him. Witchcraft became an ideology for Luther, as it did for many people for hundreds of years before and after his time. But at some point the ideology of witchcraft no longer sufficed as an explanation for the everyday experiences of people's lives. Today we would never attribute someone's asthma to witchcraft, much less suggest that witchcraft is the direct cause of anything in society. The illusion of witches produced the concept of witchcraft in Luther's time; the Fieldses argue that the illusion of race produces the concept of racecraft in our own time. The comparison is significant not because both concepts are silly superstitions, but because both have been so widely accepted as plausible explanations of fundamental truths about human nature.

The concept of "race" was originally conceived by Enlightenment-era thinkers as a way of classifying and categorizing people and justifying transatlantic slavery on the basis of skin color and ancestry. But "race" as a biological concept has since been discredited by the scientific community; there is no genetic basis for race. (See Jason Kelly's brief list of resources on scientific racism here and information from the PBS program "Race – The Power of an Illusion" here.) What constitutes "black" in the United States is not the same as what constitutes "black" in Brazil or Colombia. People who fall outside the U.S. racial paradigm–say, a Muslim immigrant–oftentimes aren't attributed a "race" within that paradigm.

Racecraft, according to the Fieldses, fails to explain inequality because it mutes class inequality. They cite a recent case in which a white electrician in Ohio fell on hard times after the 2008 Great Recession and was forced to rely on government aid. The electrician went out at midnight to buy groceries himself, yet he had no qualms about expressing, in a New York Times article, his disgust at seeing large "crowds of midnight [food-stamp] shoppers once a month when benefits get renewed…Generally, if you're up at the hour and not working, what are you into?" The Fieldses point out that even though the electrician was out at midnight with food stamps and ostensibly conducting legitimate business, "he assumed that the people in the crowds were not on legitimate business…Racism tagged the midnight shoppers as 'into' something unsavory because they appeared to be out of work; racecraft concealed the truth that the electrician and the midnight shoppers suffer under the same regime of inequality" (269-270).

Racecraft also uses "race" to explain racism, when in reality the opposite is true. The Fieldses point out that while scholars and school teachers often argue that "legal segregation was based upon race," legal segregation was actually based upon racism and the false science that justified the classifying of "races." Thomas Jefferson justified slavery in his Notes on the State of Virginia on the scientifically "factual" basis that the ancestry of slaves in America made them biologically inferior people, but in reality racism sustained slavery's justification and expansion in the United States. In sum, "racism becomes something Afro-Americans are, rather than something racists do" (97).

In chapter three, Barbara Fields pushes the race-racism paradigm to its outer boundaries by suggesting that the notion of “racial tolerance” masks racial discrimination through “good intentions.” In response to an author who argued that the refusal of cab drivers to stop for black passengers was attributable to “intolerance,” Fields clearly outlines the problematic nature of “tolerance”:

Tolerance itself, generally surrounded by a beatific glow in American political discussion, is another evasion born of the race-racism switch. Its shallowness as a moral or ethical precept is plain. (“Tolerate thy neighbor as thyself” is not quite what Jesus said…). As a political precept, tolerance has unimpeachably anti-democratic credentials, dividing society into persons entitled to claim respect as a right and persons obliged to beg tolerance as a favor. The curricular fad for “teaching tolerance” underlines the anti-democratic implications. A teacher identifies for the children’s benefit characteristics (ancestry, appearance, sexual orientation, and the like) that count as disqualifications from full and equal membership in human society. These, the children learn, they may overlook, in an act of generous condescension–or refuse to overlook, in an act of ungenerous condescension. Tolerance thus bases equal rights on benevolent patronization rather than democratic first principles, much as a parent’s misguided plea that Jason “share” the swing or seesaw on a public playground teaches Jason that his gracious consent, rather than another child’s equal claim, determines the other child’s access (104-105).

There's a lot to digest in Racecraft, but it's well worth the read and I learned much. While race is a biological fallacy, racism–discrimination and double standards against people based on their ancestry–is very real. The logic of racecraft–the illusion of race–masks inequality through false scientific explanations for why some people are supposedly inferior to others. Racecraft also challenges me to consider how public historians, museum practitioners, and classroom educators approach ideas of tolerance, equality, and understanding in their work with students.

Cheers

What’s in a Name? “Slavery” vs. “Slaving” in Historical Interpretation

As a public historian working at a historic site with intimate connections to U.S. antebellum culture, I am tasked with discussing American slavery and interpreting the perspectives of the enslaved people who worked at the White Haven estate. I relish the opportunity to discuss a complex and difficult topic and believe it is my duty to keep slavery at the forefront of my interpretive presentations, even though I can clearly tell (eye-rolls, exasperated breaths, etc.) that some people don’t want to hear about slavery.

When discussing slavery, it's important to use precise language that clearly conveys the fact that it was a system of submission, oppression, and control masquerading as legitimate "property" in human flesh. For example, I never refer to slaves as "servants" in my interpretations. In the English colonies, Virginia law clearly distinguished between servitude and slavery by the mid-seventeenth century. Indentured servitude stipulated that both parties–the servant and his/her benefactor–voluntarily engaged in a labor agreement. Most indentured servants by that time agreed to sell their labor for a number of years (but not a lifetime) in exchange for passage and housing in the New World. Even though indentured servants' labor agreements were sometimes violated, extended, and abused in ways that made servitude look like slavery, the two were not the same. Virginia laws after 1661 stipulated that slavery meant lifetime servitude based on race and ancestry, and that slave status was hereditary, passing to the child according to whether the mother was free or enslaved at the child's birth. What constituted a temporary (and mostly voluntary) agreement under indentured servitude eventually became lifetime involuntary enslavement passed down through heredity.

Another difficulty with conflating "slaves" with "servants" lies in the ways Confederate apologists downplayed slavery in the years after the Civil War, an effort still continued by certain interest groups today (who will remain unnamed). Examples abound of former masters who suggested after the war that their slaves were happy, contented servants who were well taken care of and uninterested in gaining their freedom. Confederate Vice President Alexander Stephens, facing temporary imprisonment in Boston following the end of the Civil War in 1865, sadly remarked as he left his home that "leave-takings were hurried and confused. The servants all wept. My grief at leaving them and home was too burning, withering, scorching for tears. At the depot was an immense crowd, old friends, black and white, who came in great numbers and shook hands" (109). If one were to read this pitiful story without any other context, he or she would probably think Stephens was a beloved racial egalitarian and benevolent employer, not a slaveholder and the author of the "Cornerstone Speech" of March 21, 1861, in which he argued that the Confederacy's "corner-stone rests, upon the great truth that the negro is not equal to the white man; that slavery–subordination to the superior race–is his natural and normal condition."

Therefore, when I talk about slavery, I always make sure to explicitly use the words “slavery” and “enslavement” so that no one leaves thinking that the African Americans who worked at White Haven before the war were servants who voluntarily sold their labors to their owner, Frederick Dent.

Some history scholars, however, are now challenging the use of the term “slavery.” Joseph Miller, a history professor at the University of Virginia, finds the term “slavery” to be too passive:

In order better to understand slavery’s march across history and into our time, Miller challenges historians to radically revise some basic assumptions. We can best comprehend how human bondage actually worked and still works today, he argues, if we abandon the noun “slavery” and our attempts to describe “the institution of slavery.” These, Miller argues, are static characterizations that convey none of the dynamism of slavery’s durability, variability and evolution across the centuries… Instead, Miller insists, the best way to describe human bondage is by using the active voice. Employ the dynamic gerund “slaving,” he recommends, and dispense with the use of “slavery” with its connotations of static model building. The gerund, Miller argues, forces us to recognize that human bondage is above all a historical process carried forward by slavers in response to discrete and ever-changing historical contingencies.

I like the idea of bringing enslavement into the present and using active verbs and language to highlight the “historical process” of slavery. As a public historian, however, I question the practicality of incorporating the concept of “slaving” into my interpretations. I get ten minutes to spark my audience’s imagination and illuminate the complex intersection between Ulysses S. Grant, his wife Julia Dent’s family, and her family’s use of slave labor at their St. Louis home. Visitors of all ages are sometimes confused about the realities of slavery in the United States, challenging me to neatly define slavery without turning my entire interpretation into a history of the institution. It seems like this “slavery vs. slaving” debate needs to be played out first at the academic level before public historians introduce a concept like “slaving” to their audiences. Or maybe the National Park Service or a similar organization can find ways to collaborate with academic scholars to encourage a better understanding of the term.

What do you think? Should historians start using the term “slaving” instead of slavery?

Cheers

Good History Classes Need Primary AND Secondary Sources

Photo Credit: Kathryn Scott Osler and the Denver Post

To teach the principles of historical thinking in a classroom without the aid of primary source documentation is the equivalent of teaching someone to play guitar without giving them an instrument to practice on. During the G.W. Bush "No Child Left Behind" era (and no doubt before that), education leaders in the United States preached the gospel of standardized testing. Through the use of history textbooks, pre-written tests (usually multiple-choice Scantron forms without any written essay questions), and pre-written classroom activities, a generation of historically informed youth would acquire a correct and appreciative view of the nation's past, which in turn would promote good citizenship and a healthy obedience to democratic values. As a high schooler in the early 2000s I was frequently treated to long-winded lectures about supposedly "important" dates, dead people, and dust, a barrage of multiple-choice tests, and assigned readings in history textbooks that would put even the worst insomniac into a deep sleep. Primary sources–the "musical instruments of history"–were nowhere to be found in my high school education. My own teaching experiences in 2011 and 2012 were equally frustrating once I realized how little control I had in the design of my unit plans.

The No Child Left Behind (and President Obama's "Race to the Top") framework for teaching K-12 history is now being challenged by some historians and educators. The College Board recently drafted a new framework for teaching Advanced Placement U.S. History courses that shifts the focus from rote memorization of factual information to the critical analysis and interpretation of primary source documentation. These proposed changes call for shifting the classroom experience towards teaching both historical content and historical process. They also emphasize a broad view of history, showing that our nation's history is subject to multiple interpretations and perspectives.

If we adhere to the belief that history is a complex landscape composed of many viewpoints, however, the place of United States history within that landscape becomes more ambiguous than the NCLB framework would have us believe. The nationalist leanings of the American state–built largely on the foundations of a shared national history and the mythical stories we teach each other about that history–might be placed on infirm footing. Beliefs in American exceptionalism could be replaced by a crisis of patriotism. The heroic could be challenged and criticized. Obedience to the social status quo could give way to questioning, dissent, and potential civil disobedience.

Unsurprisingly, there are critics who are concerned about teaching a complex form of American history that places our heroes, our “good wars,” and our heritage in limbo. Stanley Kurtz says the College Board’s revisions are “an attempt to hijack the teaching of U.S. history on behalf of a leftist political ideological perspective.” The Texas State Board of Education accuses the College Board of encouraging a “disdain for American principles.” And a Jefferson County, Colorado, School Board Member named Julie Williams is proposing that a new nine-member committee be formed to inspect U.S. history textbooks in the Jefferson County School District because, according to her, “I don’t think we should encourage kids to be little rebels. We should encourage kids to be good citizens” (high school students in the district are now protesting these school board proposals. Who says kids don’t care about history?).

Is there a better way to teach history, expose students to its “truths,” and remove its politics from the classroom?

One idea that is gaining steam throughout the country calls for the complete removal of history textbooks from the history curriculum. Public schools in Nashville, Tennessee, are removing textbooks from the classroom in favor of websites, "interactive" videos, and primary source documentation, all of which are being implemented through $1.1 million in funds for the 2014-2015 academic year. Historian and educator Fritz Fischer argues (but with a dose of skepticism) that these changes are welcome because "not relying on traditional history books cuts down on the potential for 'textbook wars' where residents object to certain conclusions." Stephanie Wager of the Iowa Department of Education concurs, arguing that "you don't really need to have the traditional textbook." If we simply remove these politicized textbooks from the classroom, the thinking goes, we can focus on primary sources and let students draw their own conclusions from the historical evidence presented to them.

I agreed with this perspective a year ago, but I no longer support getting rid of history textbooks (or, at minimum, a selection of secondary-source readings). Here's why:

For one, the notion that students will automatically learn more from (and prefer) fancy digital tools and "interactive" materials over print books rests on the faulty logic that today's students are "digital natives" who are more comfortable with digital technology than older people who did not grow up around it. I addressed those claims here.

Secondly, removing secondary sources from the classroom prevents students from learning about the interpretive nature of history and how our understanding of the past is constantly revised as new concerns in the present prompt new questions about the past. Jim Grossman is right when he argues that revisionism is fundamental to historical inquiry, and we lose that critical component of the historian's toolbox when we simply throw primary sources at students without showing them how historians interpret, and sometimes disagree about, the meaning of those documents. If primary sources are the "musical instruments" with which historians perform, secondary sources are the "technique" we employ to play those instruments competently.

Thirdly, primary source documents are laced with their own biases, speculative claims, faulty memories, and political agendas. If you don't believe that, just imagine what sorts of primary sources historians studying the early 2000s will have at their disposal one hundred years from now. The best contemporary historical scholarship provides us with strategies for assessing the reliability of a primary source, and that scholarship should be an integral part of the classroom experience. Again, giving students the "facts" without giving them a framework for thinking critically about those "facts" does little to advance their understanding of history's complexities.

History is political and always will be. The United States has plenty of accomplishments to be proud of, but an unquestioning, self-congratulatory narrative of progress doesn't tell the whole story of this nation's history. And it's boring! We need to teach both content and process in the history classroom. We need more primary sources in the classroom, but we also need more secondary sources that do a better job of providing students with a framework for interpreting those primary sources. And we need to show students how the very nature of American identity and citizenship has changed over time, which means taking a critical look at both the good AND the bad in American history.

Cheers

American History Doesn’t Start at Jamestown

ViralNova Listicle

As I went about my day this morning I came across a Facebook post from a friend that nearly made my eyes roll out of my head. The post linked to a 6th-grade-quality "listicle" entitled "21 Things About America That Most Americans Don't Realize." I typically ignore these sorts of things, especially lists like this one that provide absolutely no evidence to back up their so-called historical facts. But a commenter on said friend's post remarked about his pleasure in seeing number five, which is posted above, and I had to jump into the debate. I am no expert in 17th-century history and I probably should have stayed out of the fray, but there's a larger point about American history that needs to be made here.

Blacks have their own troubling role in slavery's legacy. African elites in Western and Central Africa were complicit in advancing the slave trade, as Henry Louis Gates explains in the New York Times. And there were some free blacks who owned slaves in America. Anthony Johnson is perhaps the most well-known black slaveholder. Born in Africa but sent to Virginia in 1621, Johnson worked as an indentured servant until 1635 or thereabouts. By the 1650s he was a well-to-do property owner with his own indentured servants. In 1653, one of those servants, John Casor, sued Johnson and argued that his term of service had expired. The courts found in 1655 that Johnson still "owned" Casor and ordered that he be returned to Johnson immediately. This case was the first in which a court found that a person who had not committed a crime could be legally held in lifelong servitude, that is, enslavement, under English law. This is where claims emerge that Anthony Johnson was America's first slaveholder.

The reality of the situation is much more complex, of course.

With regard to English law, the first legally enslaved African was John Punch, who was sentenced to lifetime servitude after attempting to run away to Maryland at some point in either late 1639 or early 1640, fifteen years before Johnson took ownership of Casor. Sociologist Rodney D. Coates of Miami University, in an analysis of the racialization of early American law cases, accurately concludes that "John Punch's name should go down in history as being the first official slave in the English colonies" since he was the first person to be legally enslaved under English law (333). Anthony Johnson was not the first slaveholder in America and, by extension, the first slaveholder in America was not black. It is also puzzling that so many biographies of Johnson (see here, here, and here) would omit such an important historical "fact" if it were actually true that he was the first American slaveholder.

This debate over who was the first slaveholder in America exposes the sorts of biases Americans have when it comes to understanding their own history.

All too often we Americans are taught in our White Anglo-Saxon Protestant history textbooks that "American history" began with the Virginia Company's settlement at Jamestown, Virginia, in 1607 and the later Pilgrim settlement (and Mayflower Compact) at Plymouth, Massachusetts. History textbooks portray the growth of what eventually became the United States as spreading from east to west, with America's origins in the thirteen colonies followed by steady expansion westward. But settlement patterns in the present-day United States actually ran in the opposite direction! According to sociologist and historian James Loewen, "people…discovered the Americas and settled it from west to east. People got to the Americas by boat from northeastern Asia or by walking across the Bering Strait during an ice age. Most Indians in the Americas can be traced by blood type, language similarity, and other evidence to a very small group of first arrivals…either way, afoot or by boat, evidence suggests that people entered Alaska first" (20). Moreover, following Columbus's arrival in the "New World" in 1492, Spanish colonists settled in places like present-day New Mexico, Texas, and Florida before other European settlers went to Virginia, Maryland, and Massachusetts. When they came to the New World, these Spanish colonists enslaved indigenous populations and later began importing African slaves following Ferdinand and Isabella's approval of African slavery in 1501. St. Augustine, Florida, was a hub for the Spanish slave trade. And it was all legal!

Clearly there were people living in the Americas thousands of years before any Europeans came over, although we often ignore that reality. For example, even though my hometown of St. Louis, Missouri, is celebrating 2014 as the 250th anniversary of the city's "founding," there was an advanced society right in our backyard hundreds of years before 1764, one that was at one point larger than London. After 1492, there were non-British Europeans who colonized the Americas and traded slaves long before the British came over. To suggest that Anthony Johnson was the first legal slaveholder in what would eventually become the United States is utter poppycock, no matter what any viral internet garbage tries to tell you.

Cheers
