Should Historians Influence Present-Day Politics?

I’ve been thinking a lot about Gordon Wood’s ideas on the differences between history and political theory. In his book The Purpose of the Past: Reflections on the Uses of History, Wood briefly reflects on his conception of these differences. One passage in the book is particularly noteworthy and worth quoting in full:

Political theorists, especially those influenced by the ideas of Leo Strauss, tend to believe that the history of political thought can be studied as a search for enduring answers to perennial questions that can enhance contemporary political thought. Historians, on the other hand, tend to hold that ideas are the products of particular circumstances and particular moments in time and that using them for present purposes is a distortion of their original historical meaning. It doesn’t follow from this distinction that past ideas cannot be legitimately used in the very different circumstances of the present; of course, they can be used and are used all the time. Jefferson’s idea of equality, for example, has been used time and again throughout our history, by Lincoln as well as Martin Luther King, Jr. Historians contend that such usages violate the original historical meaning of the ideas and cannot be regarded as historically accurate, but they don’t deny the rationality and legitimacy of such violations.

Such distinctions and violations are indeed necessary for contemporary discussions of political thought and are no great sin, as long as the theorists are aware that they are not being historically accurate. It’s the theorists’ claim that their present-day use of past ideas is true to the original way they were used in the past that historians quarrel with. Ideas, of course, do not remain rooted in the particular circumstances of time and place. Ideas can, and often do, become political philosophy, do transcend the particular intentions of the creators, and become part of the public culture, become something larger and grander than their sources. Political theory, studying these transcendent ideas, is a quite legitimate endeavor; it is, however, not history (162-163).

Wood expresses his concern that historians run the risk of thinking “unhistorically” by manipulating past ideas to fit our understanding of political conditions in the present. He worries that holding people from the past responsible for a future they could never envision or conceive in their own time leads to a poor understanding of past ideas within their own historical context – “the original way they were used in the past.” Thus, when figures like Lincoln and MLK use the past to justify their political philosophies in the present (a common, rational practice then and now), they are violating the original historical meaning of those ideas, according to Wood. Historians analyze change over time and help us understand how our contemporary world came into being, but using past ideas as a framework for establishing political theories in the present is not history. As Wood comments later in the book, “I suppose the most flagrant examples of present-mindedness in history writing come from trying to inject politics into history books . . . Historians who want to influence politics with their history writing have missed the point of the craft; they ought to run for office” (308).

These are sharp, intelligent reflections on the historian’s craft, but I think the extent to which politics plays a role in historical thinking is more of an open question than Wood would like to acknowledge. For one, it’s hard to imagine a history book free of politics because the past and present hold a reciprocal relationship with each other that makes it nearly impossible for us to get our politics out of the past. It is often assumed that the past shapes how we view the present, but less often do we acknowledge that the present shapes our conception of the past. It’s probably true that a place like Ferguson, Missouri, has been shaped by a past legacy of racist government policy and white supremacy, but it’s also true that the political ramifications of a 2014 police shooting of a black teenager by a white cop in Ferguson (regardless of the case’s final outcome) shape our conceptions of what, exactly, that legacy of racist government policy and white supremacy means for our history.

Public historians must also face these sorts of questions because so much of what they do is inherently political. The National Council on Public History defines public history as a set of practices aimed at “describ[ing] the many and diverse ways in which history is put to work in the world.  In this sense, it is history that is applied to real-world [i.e. present-day] issues.” And a recent essay on the NCPH’s blog, History@Work, praised the efforts of a federal district court judge to provide a historical context for explaining her opposition to a 2011 Texas Voter ID law, arguing that this case was an example of putting history to work in the world. These sorts of present-day uses of the past seem to contradict Wood’s distinction between history and political theory.

What is a historian of racism, Jim Crow, and the police state supposed to do with a politically charged topic like Ferguson? If that historian embraces Wood’s avoidance of politics in history writing, he or she may choose to focus solely on the political ideas surrounding these topics from roughly 1830 to 1890 or a period of that sort, focusing on how these ideas materialized within the context of that period without mentioning or connecting them to present-day politics. But a critic of this approach might argue that limiting a discussion of these topics to the nineteenth century is in itself a political act that also leaves out a crucial piece of the historical narrative. Critics might also invoke the arguments of other historians like Howard Zinn, who asserted in 1970 that “we can separate ourselves in theory as historians and citizens. But that is a one-way separation which has no return: when the world blows up, we cannot claim exemption as historians.”

Food for thought.

Cheers

Can a Distinction Be Made Between “Academic” and “Popular” History?

A colleague and I recently engaged in a fascinating discussion comparing and contrasting works of “popular history” and “academic history.” Through this conversation I realized that I’m not sure how to define the proper criteria for what constitutes a work of “popular history.” Does a work of historical scholarship become popular once it hits a certain number of book sales? If so, what is that number? Does one need to have a certain educational background in order to be considered a popular historian? Can a work geared towards academic scholars become popular with a non-academic audience? Can a clear distinction be made between works of popular history and academic history?

Some professional historians with PhDs believe that they alone are qualified to shape and participate in the historical enterprise. A couple years ago historians Nancy Isenberg and Andrew Burstein attempted to act as gatekeepers in a condescending article for Salon that dismissed popular history written by non-academics and argued that only PhD historians were qualified to write credible historical scholarship:

Frankly, we in the history business wish we could take out a restraining order on the big-budget popularizers of history (many of them trained in journalism) who pontificate with great flair and happily take credit over the airwaves for possessing great insight into the past. Journalists are good at journalism – we wouldn’t suggest sending off historians to be foreign correspondents. But journalists aren’t equipped to make sense of the eighteenth and nineteenth centuries.

I find this perspective badly flawed and unrealistic. Yes, a history PhD provides a blanket of scholarly authority and a thorough training in research, writing, and interpretation. But to suggest that history PhDs alone can “do history” ignores the fact that people of all education levels use historical thinking on a daily basis without the help of history PhDs. There are many different ways people learn about and understand history, including film, television, blogs, Twitter, and cultural institutions like history museums and historical societies. All of these mediums attract larger audiences than books written by academics. The wish that historians, journalists, etc. would simply stay in their academic “silos” of expertise and dictate their knowledge to the rest of society–without the input of non-academics–smacks of what Tara McPherson defines as “lenticular logic.” In a complex and wide-ranging critique of academic “silos” and the racialization of the digital humanities, McPherson argues that “the lenticular image partitions and divides, privileging fragmentation. A lenticular logic is a logic of the fragment or the chunk, a way of seeing the world as discrete modules or nodes, a mode that suppresses relation and context.” History is all around us and anyone can participate in the making of new scholarship, not just the academic gatekeepers. To suggest that credentials matter more than the substance of one’s arguments is profoundly un-academic to me.

Notwithstanding Isenberg and Burstein’s arguments, can we still make generalizations about what makes a work of history “popular history”? In the course of our conversation I attempted to outline a few distinctions to my colleague.

Interpretation vs. Reporting: Some of the more popular works of history I’ve come across tend to do more reporting of “what actually happened” rather than closely examining primary and secondary source documents for new ways of interpreting the past or questioning common understandings of historical events. For example, Jay Winik’s April 1865: The Month That Saved America is a widely popular retelling of the events leading up to Confederate General Robert E. Lee’s surrender to United States General Ulysses S. Grant, but the narrative Winik embraced didn’t change our understanding of these events and simply repeated past interpretations about the supposed beginning of a national reconciliation following Appomattox. Meanwhile, a more recent book published by an academic press and written by an academic scholar about the same events in April 1865 will most likely not gain the same audience as Winik’s book. Elizabeth Varon’s Appomattox: Victory, Defeat, and Freedom at the End of the Civil War interrogates our popular understanding of Lee’s surrender to Grant and convincingly shows that the Appomattox surrender was not necessarily the starting point of the happy national reconciliation that past scholars have described. Her book is more interpretive than Winik’s and leaves us asking new questions rather than accepting a grand narrative about Appomattox.

Methods vs. Content: Academic scholars are trained to place their scholarship within a larger framework that analyzes how historians have interpreted and understood a historical event over time – what is commonly referred to as “historiography.” In a previous essay I criticized David McCullough for never placing his book 1776 within the historiography of George Washington studies. We never get a sense in 1776 of where McCullough’s understanding of Washington’s generalship fits within the scholarly discussions about this topic, and we struggle to figure out how and where McCullough is obtaining the information he is using to inform his scholarship (and the footnotes are awful, although publishers, unfortunately, often control how footnotes are organized). The work of other scholars gets flattened in works of popular history, and historical methods are replaced by a focus on content and narrative. I don’t necessarily think it’s bad to focus on content at the expense of methods, but had I written the way McCullough writes while in graduate school I would have failed all my classes. As a historian I want to see the author’s methodology, sources, and historiography regardless of topic, but I suppose it remains an open question whether these things are necessary in a work intended for a non-academic audience, and where they would fit within such a work.

Type of History: Certain types of history are more popular than others, and national histories and grand narratives remain popular despite changing interests from academic historians. In the 1960s and 1970s academic historians sought new ways of understanding the past through the experiences of ordinary people rather than grand narratives about politicians, monarchies, and cultural elites. They started asking questions about marriage, divorce, alcohol consumption, group rituals and sexual habits, and they started using social science techniques (from economics, anthropology, and political science) and devising quantitative methods for answering these questions in an effort to capture a more holistic understanding of past societies. Also crucial to this “new social history” was a focus on local context, whether it be families, tribes, cities, counties, or states. According to Gordon Wood, “by the 1970s this new social history of hitherto forgotten people had come to dominate academic history writing,” and almost every facet of human behavior was placed under scrutiny by historians (2). Gone was the focus on grand narrative and national history. Despite these radical changes (and subsequent ones) within the academy, non-academic interest in the work of social historians remains lukewarm to this day. Wood points out that history degrees awarded to students from 1970 to 1986 declined by two-thirds (3), and the numbers still look questionable today. Go to a Barnes & Noble history bookshelf and you’ll find a plethora of books on war, politics, and national histories, but few studies on gender, social history, cultural history, or local history. The Bill O’Reillys, David McCulloughs, and Walter Isaacsons are still the big sellers at popular bookstores.

Despite these generalizations, I came to realize in my conversation that there are exceptions to all these rules. McCullough’s 1776 presented a bold interpretation suggesting that Washington’s subordinate generals deserve more credit for their role in keeping the Continental Army together in 1776, and by all accounts McCullough is a meticulous researcher and well-respected by most academic historians. Columbia University historian Eric Foner’s magisterial 1988 “academic” publication Reconstruction: America’s Unfinished Revolution, 1863-1877 delved deeply into historiographical arguments and interpretive history, yet it gained widespread popularity and remains a standard in Reconstruction studies. Bruce Catton, Douglas Southall Freeman, and Allan Nevins–all journalists–inspired generations of Americans (and future history PhDs) throughout the mid-twentieth century to study the American Civil War through their meticulously researched narratives on the war. And academic historians throughout the 1950s and 1960s were seen as highly respected public intellectuals.

The more I think about it, the more unsure I become of this academic-popular divide. In the end I think all historians can learn a lot from each other about method, content, style, tone, and organization without putting each other into boxes based solely on book sales.

Cheers

Taking the #Historiannchallenge

The renowned Civil War historian James McPherson recently conducted an interview with the New York Times Book Review about his favorite works of history, who he believes are the best historians writing today, and what sort of reader he was as a child. Several historians were perplexed by the outdated nature of McPherson’s book list and, more significantly, how 29 men were mentioned while only one woman (who is actually a trained political scientist, not a historian) received any credit for her work in the field. Several bloggers critiqued the McPherson interview, including historian Ann M. Little, who challenged other historians to interview themselves and think about the sorts of answers they’d give to these questions if they were interviewed by the New York Times. My interview is below. Enjoy!

What books are you currently reading?

I am currently reading Gerard N. Magliocca’s biography of Civil War Era Congressman John Bingham, American Founding Son: John Bingham and the Invention of the Fourteenth Amendment. As Yale professor David Blight remarks in his iTunes U lecture series, the United States Constitution prior to the Civil War is not the same Constitution we adhere to today. The antebellum constitution is the “Old Testament” of American governance, whereas our current constitution—shaped largely by the “Civil War Amendments,” especially the 14th—is our “New Testament” of American governance. And John Bingham is the largely unknown author of that New Testament.

What was the last truly great book you read?

Karen and Barbara Fields’ Racecraft: The Soul of Inequality in American Life is simply outstanding. It challenged me to reconsider how I view race and inequality in the United States and helped me better understand how the historical concept of “race” is the creation of racism. I wrote extensively about this book here.

Who are the best historians writing today?

I don’t like the way this question is phrased. Some of the best historians in the field today aren’t known for or primarily interested in writing, which is only one facet of the work historians undertake within the larger historical enterprise. There are historians who teach in high schools, community colleges, and liberal arts universities who don’t research and write extensively, and there are public historians working in government, National Parks, museums, historical societies, archives, historic preservation, and the non-profit sector who do things like interpretation, museum education, document preservation, and public policy. As American universities corporatize, adjunctify, and eliminate tenure track positions, younger historians like myself are increasingly shut out of the sorts of research and academic opportunities scholars like James McPherson had in the 1950s and 60s. The reality is that we face bleak prospects as research-based academic historians whether we like it or not. And regardless, some of us just have different interests within the field.

That makes sense. To rephrase, who are the most inspiring historians to you today?

Good question! There are too many to name here, but I’ll mention a few noteworthy scholars, regardless of their written scholarship. While a graduate student at IUPUI I was very fortunate to study under some phenomenal historians, including John Dichtl, Jason M. Kelly, Modupe Labode, Anita Morgan, Rebecca Shrum, and Stephen E. Towne. My classmate and good friend Nicholas K. Johnson also deserves mention. Nick is currently in Berlin, Germany, for one year conducting research on Weimar Germany cultural history and studying public history. He is a brilliant scholar who is going to be well-known in the field someday.

Outside of IUPUI, I admire the work of Yoni Appelbaum, David Blight, Ta-Nehisi Coates, Kalani Craig, Eric Foner, Barbara Fields, Gary Gallagher, Drew Gilpin Faust, Jennifer Guiliano, Keith Harris, John Hennessey, Caroline Janney, Kevin Levin, James Loewen, Al Mackey, Megan Kate Nelson, Andrew Joseph Pegoda, Bob Pollock, Mary Rizzo, Liz Ševčenko, Brooks Simpson, and Laurel Thatcher Ulrich. And many, many more.

What’s the best book ever written about the Civil War?

Impossible to answer. There are literally hundreds of thousands of studies on the American Civil War, and our understanding of history is constantly revised as historians reinterpret and ask new questions of their primary source documents. I will happily recommend a few noteworthy selections that I personally love, however.

James McPherson’s Battle Cry of Freedom: The Civil War Era still remains a fine, comprehensive study of the Civil War that is easily accessible for readers. Charles Dew’s short volume Apostles of Disunion: Southern Secession Commissioners and the Causes of the Civil War is one of the best studies of the causes of the Civil War. Kenneth M. Stampp’s The Imperiled Union: Essays on the Background of the Civil War explores similar territory and includes a fantastic essay on the evolving nature of U.S. nationalism and the changing interpretation of what, exactly, “a more perfect union” means in American politics. Andre M. Fleche’s The Revolution of 1861: The American Civil War in the Age of Nationalist Conflict places the American Civil War within the global struggle for liberal democracy in the nineteenth century. Mark A. Noll’s The Civil War as a Theological Crisis demonstrates how differing interpretations regarding the Biblical sanctity of slavery presaged political conflicts over slavery. Stephanie McCurry’s Confederate Reckoning: Power and Politics in the Civil War South chronicles the undemocratic nature of the Confederacy’s experiment in nationhood and, by extension, how the Civil War spawned new (albeit temporary) gender norms in the slaveholding South. And David Blight’s Race and Reunion: The Civil War in American Memory provides a beautifully thought-provoking analysis of the ways the Civil War was remembered throughout the nation after the guns fell silent.

I think that’s a good starting point.

Do you have a favorite biography of a Civil War-era figure?

Although the book has its factual and interpretive problems, I really enjoyed Jean Edward Smith’s biography of Ulysses S. Grant. The book introduced me to Grant and helped spawn my interest in the Civil War Era, and for those things I will always be thankful. Here’s my list of recommended Grant studies for those interested.

What are the best military histories?

I regret to admit that I’m not much of a military historian. That said, I enjoyed Joseph Glatthaar’s General Lee’s Army: From Victory to Collapse, J.F.C. Fuller’s Grant and Lee: A Study in Personality and Generalship, Paul Fussell’s The Great War and Modern Memory, and anything from Gary Gallagher and Bruce Catton (clearly, my reading of military history is still dominated by old or dead white men, unfortunately).

And what are the best books about African-American history?

I like David Blight’s A Slave No More: Two Men Who Escaped to Freedom, Including Their Own Narratives of Emancipation, John Hope Franklin and Loren Schweninger’s Runaway Slaves: Rebels on the Plantation, Colin Grant’s Negro with a Hat: The Rise and Fall of Marcus Garvey, Isabel Wilkerson’s The Warmth of Other Suns: The Epic Story of America’s Great Migration, and Bruce Baker and Brian Kelly’s edited volume After Slavery: Race, Labor, and Citizenship in the Reconstruction South. There are many other wonderful studies on African-American history that I have yet to read, and I’d love to hear more recommendations from readers.

What kind of reader were you as a child?

A voracious one. My parents read to me every night as a child and I devoured anything and everything I could get my hands on in our public library and school bookmobile. I still remember winning a reading challenge in my first grade classroom after reading more than 400 books that year.

If you had to name one book that made you who you are today, what would it be?

Miles Davis’s autobiography, Miles: The Autobiography, is my favorite book of all time. Part memoir, part history of jazz music, part cultural criticism, The Autobiography exposed me during my teenage years to the ways music acts as a form of cultural expression, political dissent, and historical interpretation all at the same time. And it taught me that our heroes–no matter how much we venerate them–have their own faults, shortcomings, mistakes, fears, and concerns about the world around them. Acknowledging those mistakes and taking steps to rectify them is perhaps the really heroic act of humanity we could all do a better job of practicing.

If you could require the president to read one book, what would it be?

He’s got plenty of other things he should be doing now rather than taking book recommendations from me, but he should read Jonathan M. Hansen’s Guantánamo: An American History once he leaves the Presidency so he can see how badly he screwed up by not closing down America’s most visible symbol of imperial ambition and the post-9/11 legal ambiguities of the War on Terror.

What books are you embarrassed not to have read yet?

Too many, but I am particularly embarrassed about not having read Edmund Morgan’s American Slavery, American Freedom.

What books are on your night stand?

Three books waiting to be read as soon as possible are Michel Foucault, Discipline and Punish, George E. Hein, Learning in the Museum, and Caleb McDaniel, The Problem of Democracy in the Age of Slavery: Garrisonian Abolitionists and Transatlantic Reform.

Cheers

Gary Gallagher on the Differences Between History and Memory

One of my favorite lectures from renowned Civil War historian and University of Virginia professor Gary Gallagher is included in the video above. The bulk of the lecture is dedicated to analyzing the Battle of Gettysburg within the larger context of the Civil War and questioning the common belief that it was a “turning point” that signaled the end of the Confederacy. In the process of questioning these assumptions, Gallagher provides a fine explanation of the differences between history and memory (this explanation starts around the seven-minute mark and continues for about ten minutes).

Gallagher points out that history students oftentimes confuse history and memory as being one and the same, and these confusions can lead to questionable interpretations of primary source documents. He argues that in any historical event there’s a certain sequence of complexities and contingencies that shape the outcome of that event (history). But how we remember that event (memory) can be at odds with what actually happened at the time. Gallagher suggests that of the two, memory is oftentimes more important than history to individuals and societies because “it doesn’t matter what happened, it’s what we think happened.” Our memories guide how we engage with and think about history, and they shape how we choose to remember historical events.

Too often, however, our memories can lead us to think of historical events as inevitable. Within the context of the Civil War Gallagher diagnoses this problem as the “Appomattox Syndrome,” where scholars start off with the two great legacies of the war (Union and Emancipation) and then move backwards from these moments of inevitability. It’s easy for us today to view events from this perspective because we are merely observers of Civil War history, not participants in the making of historical events during that war. But Gallagher says that starting at the end of the story “is the wrong way to do history.” If we want to better understand how things actually happened, then we must start from the beginning (while keeping in mind that defining the “beginning” is a subjective exercise) and move forward, taking account of all complexities and contingencies along the way.

We can compare Gallagher’s differentiation between history and memory to different approaches of understanding a sporting event. Sports pundits oftentimes analyze games following the “Appomattox Syndrome” approach. They already know the outcome of the game and let that influence their analysis of crucial “turning points” and highlights worth showing to their audiences: a fight in the second period of the hockey game is defined as a “turning point” that energizes the home crowd and inevitably pushes that team towards the winning goal; a hitter on a baseball team gets a game-winning hit in the ninth inning, and a pundit argues that he “found his swing” during an earlier at-bat in the sixth inning. And so on. But by looking at sporting events as they happened and moving forward, we might do a better job of understanding the complexities and contingencies that shaped the outcomes of those matches.

Although historians are more apt to trust a primary source document that is created around the time an event took place (a diary written while fighting at Gettysburg) rather than a remembrance (such as a memoir or a newspaper interview forty years later), we should still proceed with caution when looking at primary source documents. All primary source documents are written from a perspective that doesn’t necessarily account for the entire scope of a historical event, and these documents can include their own biases, distortions, and speculations. One soldier at Little Round Top will have a different perspective than the one at Culp’s Hill (and even another soldier at Little Round Top). A Union soldier will have a different view of events than a Confederate. A local resident of Gettysburg might have different desires and concerns than Confederate General Robert E. Lee or U.S. President Abraham Lincoln.

Through this mix of perspectives and primary source documents created during and after the course of a historical event, historians attempt to reconstruct things as they actually occurred and discover the “truths” of history. But can we really discover the truths of history? I think there are certain times when we can, but there are many times (probably more than we are willing to acknowledge) when we find ourselves in the shoes of my friend and colleague Bob Pollock, shouting “just gimme some truth” and wondering if we can ever pick up all the pieces of the past.

Cheers

Racism, Racecraft, and the Folly of Tolerance

I just finished reading Karen and Barbara Fields’ fine 2012 publication Racecraft: The Soul of Inequality in American Life. Like Ta-Nehisi Coates (who discusses the book here and here) I found that some of the book’s essays were more difficult to read than others, and there were times when I did not understand the arguments being made. But the main idea of the book–that the concept of “race” in America is an invention of racism–really challenges me to reconsider how the language American society uses to discuss “race” is oftentimes embedded with racist connotations, even if those doing the talking have good intentions.

The Fieldses point out in the introduction to Racecraft that Martin Luther believed in witchcraft: men stealing milk by thinking of cows; a mother getting asthma because of a mean glare from a neighbor; seeing demons at one’s deathbed. In each of these cases, witchcraft served as a strategy by which Luther explained his everyday experiences and the world around him. It became an ideology for Luther, as it did for many people for hundreds of years before and after his time. But at some point the ideology of witchcraft no longer sufficed as an explanation for the everyday experiences of peoples’ lives. Today we would never attribute someone’s asthma to witchcraft, much less suggest that witchcraft is the direct cause for anything in society. The illusion of witches produced the concept of witchcraft in Luther’s time; the Fieldses argue that the illusion of race produces the concept of racecraft in our own time. The comparison is significant not because both concepts are silly superstitions, but because both concepts have been so widely accepted as plausible explanations of fundamental truths about human nature.

The concept of “race” was originally conceived by Enlightenment-era thinkers as a way of classifying and categorizing people and justifying transatlantic slavery on the basis of skin color and ancestry. But “race” as a biological concept has been discredited by the scientific community today; there is actually no genetic basis for race (see Jason Kelly’s brief list of resources on scientific racism here and information from the PBS program “Race – The Power of an Illusion” here). What constitutes “black” in the United States is not the same as what constitutes “black” in Brazil or Colombia. People who fall outside the U.S. racial paradigm–say, a Muslim immigrant–oftentimes aren’t attributed a “race” within that paradigm.

Racecraft, according to the Fieldses, fails to explain inequality because it mutes class inequality. They cite a recent case in which a white electrician in Ohio fell on hard times after the 2008 Great Recession and was forced to rely on government aid. The electrician went out at midnight to buy groceries and, in a fit of disgust, vented his exasperation in a New York Times article about seeing large “crowds of midnight [food-stamp] shoppers once a month when benefits get renewed…Generally, if you’re up at the hour and not working, what are you into?” The Fieldses point out that even though the electrician was himself out at midnight with food stamps and ostensibly conducting legitimate business, “he assumed that the people in the crowds were not on legitimate business…Racism tagged the midnight shoppers as ‘into’ something unsavory because they appeared to be out of work; racecraft concealed the truth that the electrician and the midnight shoppers suffer under the same regime of inequality” (269-270).

Racecraft also uses “race” to explain racism, when in reality racism produces the concept of “race.” The Fieldses point out that while scholars and schoolteachers often argue that “legal segregation was based upon race,” legal segregation was actually based upon racism and the false science that justified the classifying of “races.” Thomas Jefferson justified slavery in his Notes on the State of Virginia on the scientifically “factual” basis that the ancestry of slaves in America made them biologically inferior people, but in reality racism sustained slavery’s justification and expansion in the United States. In sum, “racism becomes something Afro-Americans are, rather than something racists do” (97).

In chapter three, Barbara Fields pushes the race-racism paradigm to its outer boundaries by suggesting that the notion of “racial tolerance” masks racial discrimination through “good intentions.” In response to an author who argued that the refusal of cab drivers to stop for black passengers was attributable to “intolerance,” Fields clearly outlines the problematic nature of “tolerance”:

Tolerance itself, generally surrounded by a beatific glow in American political discussion, is another evasion born of the race-racism switch. Its shallowness as a moral or ethical precept is plain. (“Tolerate thy neighbor as thyself” is not quite what Jesus said…). As a political precept, tolerance has unimpeachably anti-democratic credentials, dividing society into persons entitled to claim respect as a right and persons obliged to beg tolerance as a favor. The curricular fad for “teaching tolerance” underlines the anti-democratic implications. A teacher identifies for the children’s benefit characteristics (ancestry, appearance, sexual orientation, and the like) that count as disqualifications from full and equal membership in human society. These, the children learn, they may overlook, in an act of generous condescension–or refuse to overlook, in an act of ungenerous condescension. Tolerance thus bases equal rights on benevolent patronization rather than democratic first principles, much as a parent’s misguided plea that Jason “share” the swing or seesaw on a public playground teaches Jason that his gracious consent, rather than another child’s equal claim, determines the other child’s access (104-105).

There’s a lot to digest in Racecraft, but it’s well worth reading, and I learned much. While race is a biological fallacy, discrimination and double standards against people based on their ancestry–racism–are very real. The logic of racecraft–the illusion of race–masks inequality behind false scientific explanations for why some people are supposedly inferior to others. Racecraft also challenges me to consider how public historians, museum practitioners, and classroom educators approach ideas of tolerance, equality, and understanding in their work with students.

Cheers

What’s in a Name? “Slavery” vs. “Slaving” in Historical Interpretation

As a public historian working at a historic site with intimate connections to U.S. antebellum culture, I am tasked with discussing American slavery and interpreting the perspectives of the enslaved people who worked at the White Haven estate. I relish the opportunity to discuss a complex and difficult topic and believe it is my duty to keep slavery at the forefront of my interpretive presentations, even though I can clearly tell (eye-rolls, exasperated breaths, etc.) that some people don’t want to hear about slavery.

When discussing slavery, it’s important to use precise language that clearly conveys the fact that it was a form of submission, oppression, and control masquerading as a form of legitimate “property” in human flesh. For example, I never refer to slaves as “servants” in my interpretations. In the English colonies, Virginia law clearly distinguished between servitude and slavery by the mid-seventeenth century. Indentured servitude stipulated that both parties–the servant and his/her benefactor–voluntarily engaged in a labor agreement. Most indentured servants by that time agreed to sell their labor for a number of years (but not a lifetime) in exchange for passage and housing in the New World. Even though indentured servants’ labor agreements were sometimes violated, extended, and abused in ways that made servitude look like slavery, the two were legally distinct. Virginia laws after 1661 stipulated that slavery meant lifetime servitude based on race and ancestry, and that it was hereditary depending on the mother’s status as either free or enslaved at the child’s birth. What constituted a temporary (and mostly voluntary) agreement under indentured servitude eventually became lifetime involuntary enslavement passed down through heredity.

Another difficulty with conflating “slaves” and “servants” lies in the ways Confederate apologists downplayed slavery in the years after the Civil War, an effort still continued by certain interest groups today (who will remain unnamed). Examples abound of former masters who suggested after the war that their slaves were happy, contented servants who were well taken care of and uninterested in gaining their freedom. Confederate Vice President Alexander Stephens, facing temporary imprisonment in Boston following the end of the Civil War in 1865, sadly remarked as he left his home that “leave-takings were hurried and confused. The servants all wept. My grief at leaving them and home was too burning, withering, scorching for tears. At the depot was an immense crowd, old friends, black and white, who came in great numbers and shook hands” (109). If one were to read this pitiful story without any other context, he or she would probably think Stephens was a beloved racial egalitarian and benevolent employer, not a slaveholder and author of the “Cornerstone Speech” of March 21, 1861, in which he argued that the Confederacy’s “corner-stone rests, upon the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition.”

Therefore, when I talk about slavery, I always make sure to explicitly use the words “slavery” and “enslavement” so that no one leaves thinking that the African Americans who worked at White Haven before the war were servants who voluntarily sold their labors to their owner, Frederick Dent.

Some history scholars, however, are now challenging the use of the term “slavery.” Joseph Miller, a history professor at the University of Virginia, finds the term “slavery” to be too passive:

In order better to understand slavery’s march across history and into our time, Miller challenges historians to radically revise some basic assumptions. We can best comprehend how human bondage actually worked and still works today, he argues, if we abandon the noun “slavery” and our attempts to describe “the institution of slavery.” These, Miller argues, are static characterizations that convey none of the dynamism of slavery’s durability, variability and evolution across the centuries… Instead, Miller insists, the best way to describe human bondage is by using the active voice. Employ the dynamic gerund “slaving,” he recommends, and dispense with the use of “slavery” with its connotations of static model building. The gerund, Miller argues, forces us to recognize that human bondage is above all a historical process carried forward by slavers in response to discrete and ever-changing historical contingencies.

I like the idea of bringing enslavement into the present and using active verbs and language to highlight the “historical process” of slavery. As a public historian, however, I question the practicality of incorporating the concept of “slaving” into my interpretations. I get ten minutes to spark my audience’s imagination and illuminate the complex intersection between Ulysses S. Grant, his wife Julia Dent’s family, and her family’s use of slave labor at their St. Louis home. Visitors of all ages are sometimes confused about the realities of slavery in the United States, which challenges me to define slavery neatly without turning my entire interpretation into a history of the institution. It seems like this “slavery vs. slaving” debate needs to play out at the academic level before public historians introduce a concept like “slaving” to their audiences. Or maybe the National Park Service or a similar organization can find ways to collaborate with academic scholars to encourage a better understanding of the term.

What do you think? Should historians start using the term “slaving” instead of slavery?

Cheers

Good History Classes Need Primary AND Secondary Sources

Photo Credit: Kathryn Scott Osler and the Denver Post

To teach the principles of historical thinking in a classroom without the aid of primary source documentation is the equivalent of teaching someone to play guitar without giving them an instrument to practice on. During the G.W. Bush “No Child Left Behind” era (and no doubt before that), education leaders in the United States preached the gospel of standardized testing. Through the use of history textbooks, pre-written tests (usually multiple-choice Scantron forms without any written essay questions), and pre-written classroom activities, a generation of historically informed youth would acquire a correct and appreciative view of the nation’s past, which in turn would promote good citizenship and a healthy obedience to democratic values. As a high schooler in the early 2000s I was frequently treated to long-winded lectures about supposedly “important” dates, dead people, and dust, a barrage of multiple-choice tests, and assigned readings in history textbooks that would put the worst insomniac into a deep sleep. Primary sources–the “musical instruments of history”–were nowhere to be found in my high school education. My own teaching experiences in 2011 and 2012 were equally frustrating once I realized how little control I had over the design of my unit plans.

The No Child Left Behind (and President Obama’s “Race to the Top”) framework for teaching K-12 history is now being challenged by some historians and educators. The College Board recently drafted a new framework for teaching Advanced Placement U.S. History courses that shifts the focus from rote memorization of factual information to the critical analysis and interpretation of primary source documentation. These proposed changes call for teaching both historical content and historical process in the classroom. They also emphasize a broad view of history, showing that our nation’s history is subject to multiple interpretations and perspectives.

If we adhere to the belief that history is a complex landscape composed of many viewpoints, however, the place of United States history within that landscape becomes more ambiguous than the NCLB framework would have us believe. The nationalist leanings of the American state–built largely on a shared national history and the mythical stories we tell each other about that history–might be placed on infirm foundations. Belief in American exceptionalism could give way to a crisis of patriotism. The heroic can be challenged and criticized. Obedience to the social status quo transitions to questioning, dissent, and potential civil disobedience.

Unsurprisingly, there are critics concerned about teaching a complex form of American history that places our heroes, our “good wars,” and our heritage in limbo. Stanley Kurtz says the College Board’s revisions are “an attempt to hijack the teaching of U.S. history on behalf of a leftist political ideological perspective.” The Texas State Board of Education accuses the College Board of encouraging a “disdain for American principles.” And a Jefferson County, Colorado, school board member named Julie Williams is proposing that a new nine-member committee be formed to inspect U.S. history textbooks in the Jefferson County School District because, according to her, “I don’t think we should encourage kids to be little rebels. We should encourage kids to be good citizens.” (High school students in the district are now protesting these school board proposals. Who says kids don’t care about history?)

Is there a better way to teach history, expose students to its “truths,” and remove its politics from the classroom?

One idea gaining steam throughout the country calls for the complete removal of history textbooks from the history curriculum. Public schools in Nashville, Tennessee, are removing textbooks from the classroom in favor of websites, “interactive” videos, and primary source documentation, all implemented through $1.1 million in funds for the 2014-2015 academic year. Historian and educator Fritz Fischer argues (though with a dose of skepticism) that these changes are welcome because “not relying on traditional history books cuts down on the potential for ‘textbook wars’ where residents object to certain conclusions.” Stephanie Wager of the Iowa Department of Education concurs, arguing that “you don’t really need to have the traditional textbook.” If we simply remove these politicized textbooks from the classroom, the thinking goes, we can focus on primary sources and let students draw their own conclusions from the historical evidence presented to them.

I agreed with this perspective a year ago, but I don’t agree with getting rid of history textbooks (or at least a selection of secondary-source readings) now. Here’s why:

For one, the notion that students will automatically learn more and prefer the use of fancy digital tools and “interactive” materials rather than print books is based on the faulty logic that today’s students are “digital natives” who are more comfortable using digital technology than older people who did not grow up around this technology. I addressed those claims here.

Secondly, removing secondary sources from the classroom prevents students from learning about the interpretive nature of history and how our understanding of the past is constantly revised as new questions about the present prompt new questions about the past. Jim Grossman is right when he argues that revisionism is fundamental to historical inquiry, and we lose that critical component of the historian’s toolbox when we simply throw primary sources at students without showing them how historians interpret and sometimes disagree about the meaning of those documents. If primary sources are the “musical instrument” with which historians conduct their performances, secondary sources are the “technique” we employ to perform competently on those instruments.

Thirdly, primary source documents are laced with their own biases, speculative claims, faulty memories, and political agendas. If you don’t believe that, just imagine what sorts of primary sources historians studying the early 2000s will have at their disposal one hundred years from now. The best contemporary historical scholarship provides us with strategies for assessing the reliability of a primary source, and that scholarship should be an integral part of the classroom experience. Again, giving students the “facts” without a framework for thinking critically about those “facts” does little to advance their understanding of history’s complexities.

History is political and always will be. The United States has plenty of accomplishments to be proud of, but an unquestioning self-congratulatory narrative of progress doesn’t tell the whole story of this nation’s history. And it’s boring! We need to teach both content and process in the history classroom. We need more primary sources in the classroom, but we also need more secondary sources that do a better job of providing students with a framework for interpreting those primary sources. And we need to show students how the very nature of American identity and citizenship has changed over time, which means taking a critical look at both the good AND bad in American history.

Cheers

American History Doesn’t Start at Jamestown

As I went about my day this morning I came across a Facebook post from a friend that nearly made my eyes roll out of my head. The post linked to a sixth-grade-quality “listicle” entitled “21 Things About America That Most Americans Don’t Realize.” I typically ignore these sorts of things, especially lists like this one that provide absolutely no evidence to back up their so-called historical facts. But a commenter on said friend’s post remarked on his pleasure at seeing number five, which is posted above, and I had to jump into the debate. I am no expert in seventeenth-century history and I probably should have stayed out of the fray, but there’s a larger point about American history that needs to be made here.

Blacks have their own troubling role in slavery’s legacy. African elites in Western and Central Africa willingly participated in advancing the slave trade, as Henry Louis Gates explains in the New York Times. And some free blacks owned slaves in America. Anthony Johnson is perhaps the most well-known black slaveholder. Born in Africa but sent to Virginia in 1621, Johnson worked as an indentured servant until around 1635. By the 1650s he was a well-to-do property owner with his own indentured servants. In 1653, one of those servants, John Casor, sued Johnson, arguing that his term of service had expired. The court found in 1655 that Johnson still “owned” Casor and ordered that Casor be returned to him immediately. This was the first case in which a court found that a person who had not committed a crime could be legally held in lifelong servitude–enslavement–under British law. This is where claims emerge that Anthony Johnson was America’s first slaveholder.

The reality of the situation is much more complex, of course.

With regard to British law, the first legally enslaved African was John Punch, who was sentenced to lifetime servitude after attempting to run away to Maryland in late 1639 or early 1640, fifteen years before Johnson took ownership of Casor. Sociologist Rodney D. Coates of Miami University, in an analysis of the racialization of early American law, accurately concludes that “John Punch’s name should go down in history as being the first official slave in the English colonies,” since he was the first person to be legally enslaved under English law (333). Anthony Johnson was not the first slaveholder in America and, by extension, the first slaveholder in America was not black. It’s also puzzling why so many biographies of Johnson (see here, here, and here) would omit such an important historical fact if it were actually true that he was the first American slaveholder.

This debate over who was the first slaveholder in America exposes the sorts of biases Americans have when it comes to understanding their own history.

All too often we Americans are taught in our White Anglo-Saxon Protestant history textbooks that “American history” began with the Virginia Company’s settlement at Jamestown, Virginia, in 1607 and the later Pilgrim settlements at Plymouth, Massachusetts. History textbooks portray the growth of what eventually became the United States as spreading from east to west, with America’s origins in the thirteen colonies followed by steady expansion westward. But settlement patterns in the present-day United States actually started in the opposite direction! According to sociologist and historian James Loewen, “people…discovered the Americas and settled it from west to east. People got to the Americas by boat from northeastern Asia or by walking across the Bering Strait during an ice age. Most Indians in the Americas can be traced by blood type, language similarity, and other evidence to a very small group of first arrivals…either way, afoot or by boat, evidence suggests that people entered Alaska first” (20). Moreover, following Columbus’s discovery of the “New World” in 1492, Spanish colonists settled in places like present-day New Mexico, Texas, and Florida before other European settlers went to Virginia, Maryland, and Massachusetts. These Spanish colonists enslaved the New World’s indigenous populations and later began importing African slaves following Ferdinand and Isabella’s approval of African slavery in 1501. St. Augustine, Florida, was a hub for the Spanish slave trade. And it was all legal!

Clearly there were people living in the Americas thousands of years before any Europeans came over, although we often ignore that reality. For example, even though my hometown of St. Louis, Missouri, is celebrating 2014 as the 250th anniversary of the city’s “founding,” there was an advanced society right in our backyard hundreds of years before 1764 that was at one point larger than London. After 1492, non-British Europeans colonized the Americas and traded slaves long before the British came over. To suggest that Anthony Johnson was the first legal slaveholder in what would eventually become the United States is utter poppycock, no matter what any viral internet garbage tries to tell you.

Cheers

President Ulysses S. Grant and The Panic of 1873

Historical interpretations and popular memories of Ulysses S. Grant’s tenure as President of the United States (1869-1877) devote a considerable amount of time to analyzing cases of corruption–whether real or imagined–within the Grant administration. History textbooks throughout the twentieth century told tales of Grant’s personal integrity but also his naivety when it came to trusting questionable subordinates. The White House’s biography of Grant–which curiously focuses more on Grant during the Civil War than on his presidency–goes so far as to question Grant’s motives for accepting lavish gifts from Wall Street speculators Jay Gould and James Fisk, even though such transactions dated back to Andrew Jackson’s establishment of a complex political patronage system in the 1830s. This patronage system entailed gift-giving in exchange for political offices and favorable legislation, and it was standard practice at the time.

There was indeed political corruption during Grant’s presidency, and few scholars would deny that fact. But by framing Grant’s presidency almost solely around questions of corruption, introductory biographies and general histories of the era overlook other important facets of his presidency that provide insights into the complex challenges he faced during the post-Civil War era. One such challenge was the need to restore the country’s financial equilibrium following the American Civil War.

During the American Civil War, President Abraham Lincoln and the Republican majority in Congress sought ways to fund the government’s deployment of the U.S. military into the Confederacy. Congress passed the nation’s first income tax (3% of all incomes over $800) through the Revenue Act of 1861 at the beginning of the war, but another significant measure was the decision to print paper money without specie (gold or silver) backing. According to economic historian David Blanke, roughly $356 million in paper “greenbacks” was printed over the course of the Civil War to fund soldier salaries, military supplies, and the creation of what would eventually become the Transcontinental Railroad (completed after the war, in 1869). These greenbacks flooded the marketplace and provided easy capital to investors, some of whom profited greatly from the war. Since the greenbacks were not backed by specie, however, they were essentially promissory notes whose value rested largely on the confidence of wealthy investment bankers.

President Grant sought a return to specie-backed money upon taking office in 1869 (the wartime income tax was also allowed to expire in 1872, during his first term). In his First Inaugural Address, Grant argued that the return to “sound money” was an essential step on the road towards national reconciliation:

A great debt has been contracted in securing to us and our posterity the Union. The payment of this, principal and interest, as well as the return to a specie basis as soon as it can be accomplished without material detriment to the debtor class or to the country at large, must be provided for. To protect the national honor, every dollar of Government indebtedness should be paid in gold, unless otherwise expressly stipulated in the contract. Let it be understood that no repudiator of one farthing of our public debt will be trusted in public place, and it will go far toward strengthening a credit which ought to be the best in the world, and will ultimately enable us to replace the debt with bonds bearing less interest than we now pay.

Although Grant requested that the government pay its debts in gold, both gold and silver were still legal specie at this time. The days of silver, however, were numbered. The newly unified Germany ended its use of silver as specie in 1871, and the implications of this move reverberated in the United States. By no longer using silver as currency, Germany put more silver on the open market, driving down its value in countries that still accepted it as legal specie. Congress followed suit with the Coinage Act of 1873, which demonetized silver and put the United States on a path towards the gold standard. While President Grant and Congress believed the Coinage Act would provide future financial stability for the country, the combination of industrial overexpansion (especially in railroads) and the decreasing amount of capital available to investors set the stage for economic disaster. That disaster came in September 1873, when Wall Street financial institutions like the New York Warehouse & Security Co. and Jay Cooke & Co. “began to fall like dominoes,” according to Jean Edward Smith (575). Railroad companies shut their doors, investors went bankrupt, and laborers lost their jobs. These events marked the beginning of the Panic of 1873.

Debates emerged over the best strategy for addressing what soon became a full-blown depression, the worst of its kind in the U.S. up to that point. Congress eventually pushed through Senate Bill 617 in March 1874, which called for infusing $400 million in greenbacks into circulation and adding $100 million to the nation’s money supply. The bill went to President Grant for approval on April 14, 1874.

Grant deliberated on the measure and initially drafted a message to Congress supportive of S.B. 617. The more he thought about it, however, the more he came to view the bill as an inflationary threat to the nation’s long-term credit. Grant vetoed the bill on April 22. In his veto message, Grant warned that passage of the bill would invite future efforts to print even more inflationary greenbacks. S.B. 617, according to Grant, “is a departure from the principles of finance, national interest, the nation’s obligations to creditors, Congressional promises, party pledges (on the part of both political parties), and of personal views and promises made by me in every annual message sent to Congress and in each inaugural address.” The nation would stay the course on the gold standard.

What were the effects of the Panic of 1873 for Grant’s presidency and the country’s future?

Scholars have taken different perspectives on Grant’s economic policies, and these questions remain open for debate today. The Panic led to a prolonged depression that lasted until 1879, but the nation’s taxes and national debt were reduced by $300 million and $435 million, respectively, during Grant’s tenure in office. Annual interest payments were reduced by $30 million, and one-fifth of the nation’s debt was eliminated. The resumption of specie-based payments led to substantial economic growth and greatly increased business activity in Gilded Age America during the 1880s. Frank Scaturro deems Grant’s economic policy one that “was singularly successful in the aftermath of the most serious fiscal problems the nation had ever faced” (49).

These policies had negative consequences as well, however. Reconstruction policies aimed at enforcing the Fifteenth Amendment and protecting Southern blacks at the voting booth lost support from Northerners more concerned about their own financial difficulties than about protecting black rights. Southern whites also expressed outrage when federal funding for infrastructure projects in the former Confederate states dried up. The expense of keeping the military in the South to enforce federal law came to be seen as excessive in the eyes of many Northerners, although it is important to point out that these same Northerners had no qualms about deploying the military to quell labor strikes in the North, such as the Great Railroad Strike of 1877. Blanke takes a more critical perspective than Scaturro towards Grant’s economic policies, arguing that “the long downturn further concentrated capital in the hands of fewer and fewer suppliers.” By 1890, 71 percent of the nation’s wealth was in the hands of 9 percent of its citizens, “an unhealthy and lopsided disparity of wealth distribution that has only been equaled, in this country, in the past 20 years.”

The challenges Ulysses S. Grant faced during his presidency alert us to the difficulties that emerge when economies take unexpected downturns. Should the government print and infuse more cash to alleviate unemployment and bankruptcy, or is it wiser to move towards “sound money” and the payment of past debts? Our own economic difficulties, spawned from the Great Recession of 2008, show that we still continue to debate these questions today.

Cheers

News and Notes: September 16, 2014

Here is a compilation of good reads and newsworthy events I’ve recently come across:

  • The Americanist Independent: Independent historian and fellow Grand Army of the Republic scholar Keith Harris started his own peer-reviewed journal of U.S. history a few months ago. He is currently offering one week of complimentary access to the journal, which you can find here. I signed up and like what I’ve seen so far.
  • References, Please: Tim Parks makes a compelling argument for reforming standard scholarly citation practices. “In the age of the Internet, do we really need footnotes to reference quotations we have made in the text? For a book to be taken seriously, does it have to take us right to the yellowing page of some crumbling edition guarded in the depths of an austere library, if the material could equally well be found through a Google search? Has an element of fetishism perhaps crept into what was once a necessary academic practice?”
  • The Importance of Historical Thinking: Historian and education professor Sam Wineburg’s seminal essay “Historical Thinking and Other Unnatural Acts” was liberated from its academic paywalls. If you’re looking to learn about or teach others about historical thinking, start with this essay. It’s here.
  • Don’t Throw the Bums Out: Historian Jon Grinspan argues in the New York Times that claiming that all politicians are bums “makes it harder to throw out the real bums.” Grinspan dives into Gilded Age political culture in this delightful essay.
  • A Nation of Readers: Brandeis University history Ph.D. candidate Yoni Appelbaum writes about the efforts of book publishers to distribute free literature to U.S. soldiers during World War II. Appelbaum finds that a stunning 122,951,031 books were given away during WWII.
  • Addressing Brazil’s Complicated History of Slavery: Brazil has a complex and troubling history of slavery. The slave trade from Africa to Brazil was ten times the size of the slave trade to the United States, and the institution was not abolished until 1888, twenty-three years after the U.S. abolished it. “For the last century Brazil has tried to forget its past, refusing to accept the legacy of the slave trade. It has sought to project the image of a country of mixed descent, where the colour of a person’s skin does not count, a land unfettered by racism where cordial relations reign between citizens of Indian, European and African descent.” Substitute ‘United States’ for ‘Brazil’ in that last sentence and you’ve got the views of many Americans towards the legacies of race and slavery today, unfortunately.
  • The Scourge of “Relatability”: The New Yorker writer Rebecca Mead suggests that judging “good” art, music, and theater by its “relatability” reflects our unwillingness to patronize artistic endeavors that challenge us to ask new questions and think differently about the world: “to demand that a work be ‘relatable’ expresses a different expectation: that the work itself be somehow accommodating to, or reflective of, the experience of the reader or viewer. The reader or viewer remains passive in the face of the book or movie or play: she expects the work to be done for her.” This essay isn’t really history related, but I found it thought-provoking.
  • The Academic Job Market for Historians is Terrible: Just look at the data.
  • Finding Ways to Defeat Art Apathy and Museum Misery: Daily Californian writer Sahil Chinoy visited eighteen different art galleries and museums around the world this past summer. He left the experience unimpressed with the way art museums interpret and present their collections to audiences, and he criticized exhibit label writers for producing bland, uninformative labels that do little to enhance the museum experience. “The problem is that museum captions are unequivocally boring, yet they’re the only lens through which most visitors see art. Historical context is fascinating for some pieces, but for many, information like the place where the artist was born simply does not matter.”

Cheers
