Analyzing Visitor Attendance to Civil War Sites During the Sesquicentennial

Visitor use statistics for Fort Sumter National Monument.

In yesterday’s post I raised questions about a Wall Street Journal article that deemed the United States Civil War Sesquicentennial a failure because of declining Civil War memorabilia sales and participation rates in Civil War battle reenactments (another article in The Week argues that the number of active Civil War reenactors has declined by 50% since 2000). While decreasing interest in these activities may be lamentable to some, I suggested that we should proceed with caution before deeming the entire commemoration a failure. Rather, we should consider the ways people are engaging with and learning about the war through their experiences in history classrooms and at free-choice informal learning settings like Civil War battlefields and museums. Measuring the extent to which people demonstrate changes in knowledge through their learning experiences at Civil War sites can tell us more about the influence of the Sesquicentennial than the purchase of a teddy bear with a Union or Confederate kepi.

A reader left a comment on that post that seems to agree with my perspective but nevertheless asks, “have Civil War sites and museums seen an uptick in visitation during the anniversary?” This is a fair question to ask, so I decided to take a look at visitor use statistics at several Civil War sites run by the National Park Service (NPS). While there are thousands of Civil War-related cultural institutions throughout the United States, I am focusing on a few select NPS sites largely because their visitor use statistics are readily available online. Furthermore, the five battlefields I analyze here–Antietam, Chickamauga/Chattanooga, Gettysburg, Shiloh, and Vicksburg–are central locations for Sesquicentennial events and activities (they were also the first five battlefields to be placed under federal control in the 1890s through the Department of War). I’m also looking at Fort Sumter because the NPS is using the Sesquicentennial to further discuss the causes of the Civil War. Fort Sumter is where the war started, making it an ideal place to discuss pertinent issues of citizenship, democracy, race, and slavery.

So what do the numbers say?

  • The Antietam National Battlefield’s peak visitation year was 1986, when more than 700,000 visitors came to the battlefield. In the 1990s, however, attendance took a sharp decline, plummeting to around 181,000 in 1993. In 2008 (the year of the Great Recession) attendance was 352,548. In 2012 (the 150th anniversary of the battle), attendance rose to 510,921, a 45% increase from 2008. Attendance declined to 370,832 in 2013, but that is still a 5.2% increase in attendance from 2008.
  • Chickamauga/Chattanooga National Military Park’s peak visitation year was 1970, when more than 1.7 million visitors came to the battlefield. Interestingly enough, a sharp decline to 674,400 followed in 1971, the site’s lowest annual attendance since 1960. Following years of steady growth after 1971 and several years surpassing the one million mark, attendance declined again in 2001 to 749,913. In 2011 (the first year of the Sesquicentennial), attendance surpassed the one million mark (1,036,699) for the first time since 1998. From 2001 to 2011 annual attendance increased 38.2%.
  • Cameron McWhirter’s Wall Street Journal article points out that visitation to Gettysburg National Military Park in 2013 (1,213,349) declined sharply from the park’s peak visitation year of 1970, when nearly 7 million visitors came to the park. These numbers are accurate, but McWhirter conveniently leaves out the fact that in 1979 attendance declined to 994,035, the only year since 1960 in which Gettysburg failed to attract at least one million visitors. So it seems to me that we should be asking why visitor attendance took such a sharp decline from 1970 to 1979 rather than comparing attendance between 1970 and 2013. Compared to 1979, visitor attendance today is actually up 22%. The Great Recession of 2008 hurt visitor attendance to Gettysburg, and the site has not returned to its pre-recession attendance levels (which hit 1.8 million in 2002), but visitor attendance is still up 16.5% from 2009.
  • Shiloh National Military Park’s peak visitation year was 1961, when 927,400 visitors came to the site. Visitor attendance fell to 317,046 in 2010, but in 2012 (the 150th anniversary of the battle of Shiloh), attendance rose to 587,620, a stunning 85% increase in attendance over two years. Attendance remained high in 2013 with 536,206 visitors.
  • Vicksburg National Military Park’s peak visitation year was 1984, when 1,112,881 visitors came to the battlefield. From 1991 to 2004 annual visitation hovered around the 800,000 to one million mark, but since 2004 the park has seen a steady decline in attendance. During the 2008 recession attendance declined to 555,109, and attendance numbers during the Sesquicentennial continue to hover around that number with the exception of a 41.6% jump in attendance to 796,035 in 2011, the first year of the Sesquicentennial.
  • Fort Sumter National Monument’s peak visitation year was 2002, when 922,776 visitors came to the site. In fact, the twenty-first century has been a boon for Fort Sumter. Attendance rose 188% from around 319,000 visitors in 2000 to 919,000 visitors in 2001, and annual attendance continues to hover around 850,000 during the Civil War Sesquicentennial. (The percentage changes cited throughout this list are simple relative differences; a short sketch for verifying them follows.)
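Each change above is just the relative difference between two NPS visitor counts. Here is a minimal Python sketch for checking my math; the figures are copied from the bullets above, not pulled from any NPS data service:

```python
# Verify the relative changes cited above using the NPS annual
# visitor counts quoted in this post.

def pct_change(old: int, new: int) -> float:
    """Relative change from `old` to `new`, in percent."""
    return (new - old) / old * 100

comparisons = {
    "Antietam, 2008 -> 2012": (352_548, 510_921),
    "Antietam, 2008 -> 2013": (352_548, 370_832),
    "Chickamauga/Chattanooga, 2001 -> 2011": (749_913, 1_036_699),
    "Gettysburg, 1979 -> 2013": (994_035, 1_213_349),
    "Shiloh, 2010 -> 2012": (317_046, 587_620),
}

for label, (old, new) in comparisons.items():
    print(f"{label}: {pct_change(old, new):+.1f}%")
```

Running this reproduces the figures in the list (+44.9%, +5.2%, +38.2%, +22.1%, and +85.3%, respectively), allowing for the rounding in my prose.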

These statistics clearly show that when it comes to some of the more prominent National Park Service Civil War battlefields (and Fort Sumter), there has been a significant uptick in visitation during the Sesquicentennial. Of course, these numbers don’t tell us much about visitor learning experiences at these sites. Nevertheless, I think these numbers do much to challenge any claims of “public apathy” or an “anemic” Sesquicentennial since 2011.

Cheers

The Civil War Sesquicentennial and the Challenge of Measuring “Success” in Free-Choice Learning Environments

A couple weeks ago Cameron McWhirter of the Wall Street Journal wrote an article portraying the Civil War Sesquicentennial as a “disappointment” to Civil War history buffs. Even though the Sesquicentennial will continue for another year to commemorate the end of the American Civil War in 1865, McWhirter has enough confidence to preemptively suggest that the nation’s awareness of the war’s influence on United States history is minimal at best. To wit:

Promoters of Civil War memorabilia, tourism and re-enactments across the country are fighting a losing battle against apathy for one of the most important periods in U.S. history—a cataclysmic event that shaped the nation and helped define its soul. Limited government funding to stage events and public unease over the divisive racial issues that the war represents are two factors for low turnout, say Civil War buffs . . . “If it’s a celebration, it’s a celebration that the public is either not aware of or not interested in,” sighs Jamie Delson, owner of the Toy Soldier Company, a mail-order business with a warehouse in Jersey City, N.J.

According to the Wall Street Journal, sluggish sales of Civil War memorabilia and poor turnouts at Civil War reenactments reflect a failed Sesquicentennial. Given the Wall Street Journal’s focus on economic issues, perhaps it is not surprising to see it deem the Sesquicentennial a failure based on economic shortcomings. This perspective is problematic, however, and I strongly disagree with it.

For one, I find it significant that no K-12 educators were included in this article (even though they were interviewed). Gary Gallagher of the University of Virginia was mentioned, but he too has a problematic view of the Sesquicentennial because he places too much emphasis on the work of state commissions rather than what is happening in classrooms and at public history sites. Gallagher’s views aside, is it really fair to deem the Sesquicentennial a failure because people aren’t buying memorabilia or dressing up in Civil War clothing? What about teachers like Chris Lese who dedicate themselves to giving their students a nuanced understanding of the war through field trips, Skype conversations with teachers and students at other schools, and the use of primary source documents in classroom activities? What about the work of public historians in the National Park Service who are interpreting the war on a daily basis and including important stories about race, slavery, and emancipation that were left out of NPS interpretations well into the 1990s?

The American Civil War–perhaps more than any war in history besides World War II–has been commercialized and celebrated as a “good war” rather than commemorated and contemplated as a deadly war with serious consequences for Americans today. This desire to commercialize the war through toy soldiers, clothing, teddy bears, replica rifles, and a range of other kitschy artifacts says a lot about how Americans choose to remember their Civil War. Americans in 1998 didn’t run out to buy replica figurines of Teddy Roosevelt charging up San Juan Hill, and I highly doubt that the Wall Street Journal will publish an article towards the end of the World War I centennial in 2018 decrying poor sales of Doughboy uniforms or replica mustard gas devices. Yet the Civil War seems to have always been accompanied by a culture of commercialism that turns Union and Confederate military paraphernalia into tangible memory devices for remembering the war (something I’d like to research further in the future). While that paraphernalia may indicate a person’s interest in Civil War history, it does not always represent an acquisition of knowledge or a successful learning experience, contrary to what the Wall Street Journal suggests.

So how do we measure “success” or “failure” during the Civil War Sesquicentennial? Part of the answer, of course, lies in what students are learning in a formal classroom setting, but I believe any comprehensive measurement must be connected to the experiences people are having at free-choice informal learning environments. Assessing the influence of informal learning experiences on a person’s knowledge is difficult because there are no standardized tests like those in a formal setting to measure outcomes. People come to informal learning environments like Gettysburg National Military Park of their own free will, and as John Falk and Lynn Dierking show us, their motivation to “learn” at these places is often secondary to a desire for social interaction with friends and family. Little Jimmy, for example, may want to visit Gettysburg to learn about the Civil War, but Mom may have no interest in the war and views the experience as a chance for Little Jimmy to interact with his grandparents. Furthermore, it may take days, months, or years for an informal learning experience to “sink in” for a visitor. Little Jimmy may never visit another Civil War battlefield during his childhood, but his memory of the experience may help him score higher on a high school exam or inspire him to take his own children to Civil War sites when he becomes an adult. Can we consider Little Jimmy’s visit to Gettysburg a success? I’d say yes.

The finest tool for measuring informal learning that I’ve come across is Deborah L. Perry’s concept of “knowledge hierarchies.” In What Makes Learning Fun? Principles for the Design of Intrinsically Motivating Museum Exhibits, Perry argues that “gaining knowledge and developing understandings is not a clean, step-by-step process. Rather, it loops around, in and out, taking detours and side journeys, following myriad whims and fancies, starts and stops, dead ends and tunnels” (45). Knowledge hierarchies outline different ways of learning and acknowledge that learners engage in their own unique journey towards knowledge and understanding. Perry suggests that many learning “levels” exist within the hierarchy, and helping people move up this hierarchy should be the goal of informal learning. She outlines a brief chart of knowledge levels on page 47 (a short sketch of how movement between these levels might be recorded follows the chart):

  • Level 0: I don’t know, I don’t care.
  • Level 1: I don’t know, but I’m curious and interested and would like to find out more.
  • Level 2: I think I know, but I have a very limited understanding.
  • Level 3: I have a solid but basic understanding of the main concept.
  • Level 4 and beyond: I have a sophisticated understanding of the main concept.
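To make the hierarchy concrete, here is a small hypothetical sketch of how a site evaluator might record one visitor’s movement between these levels. It is my own illustration of Perry’s concept, not an instrument from her book:

```python
# A hypothetical pre/post-visit record for one visitor, treating Perry's
# levels as an ordinal scale. My own illustration of the concept, not an
# instrument from What Makes Learning Fun?
from dataclasses import dataclass

LEVELS = {
    0: "I don't know, I don't care.",
    1: "I don't know, but I'm curious and would like to find out more.",
    2: "I think I know, but I have a very limited understanding.",
    3: "I have a solid but basic understanding of the main concept.",
    4: "I have a sophisticated understanding of the main concept.",
}

@dataclass
class VisitorAssessment:
    visitor: str
    level_before: int  # self-reported level on arrival
    level_after: int   # self-reported level on departure

    def moved_up(self) -> bool:
        """A 'successful' visit, by this measure, is any upward movement."""
        return self.level_after > self.level_before

# A visitor who arrives uninterested but leaves curious has still moved
# up the hierarchy, even without mastering any content.
jimmy = VisitorAssessment("Little Jimmy", level_before=0, level_after=1)
print(jimmy.moved_up())  # True
```

The point of the sketch is simply that movement between any two adjacent levels counts as learning, which is exactly what a dollars-and-memorabilia metric fails to capture.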

We should be thinking about these small yet significant learning acquisitions rather than measuring the “success” of the Civil War Sesquicentennial in dollars. One person may be at level 0 with no interest in the Battle of Gettysburg but may leave the battlefield at level 1. Another person may have heard about the battle before visiting but left knowing that Robert E. Lee commanded Confederate forces and George Meade commanded Union forces during the three-day battle. Still another person may have read a book about Gettysburg before visiting the battlefield but left with a more nuanced understanding of Union and Confederate military strategies leading into the 1864 Overland Campaign. In each of these cases, learners have experienced a change in their knowledge and have moved up the knowledge hierarchy. If the Civil War Sesquicentennial is helping people advance their learning journeys, can we really deem the entire endeavor a failure?

Cheers

Goin’ to Gettysburg (Again)

The McLean House at Oak Ridge, Gettysburg, Pennsylvania. Photo Credit: Nick Sacco

I just found out the other day that I won a prestigious public historian scholarship to attend the Civil War Institute at Gettysburg College’s 2014 Summer Conference from June 20 to June 25. I am absolutely ecstatic about attending and cannot thank the Civil War Institute enough for offering me this wonderful opportunity to deepen my knowledge of the war and network with some of the brightest Civil War scholars in the field. It is truly an honor.

Last year I attended the Civil War Institute’s “The Future of Civil War History” conference and participated in a panel about teaching Civil War history in the K-12 classroom. It was my first time at Gettysburg, and the experience was personally transformative. I spent an extensive amount of time on the Gettysburg battlefield (my first time at a major Civil War battlefield) and got to meet many fellow graduate students and professionals in the field. I’m graduating from IUPUI on May 11, so this time I’ll be coming as a new professional (stay tuned for more on that front).

The Summer Conference promises to be an absolute blast. There are a lot of exciting panels about the war in 1864 and two days of jam-packed battlefield tours on the 23rd and 24th. On the 23rd we are going to Virginia to visit prominent 1864 battlefield sites, and on the 24th we’ll be learning more about individual soldiers and their experiences at the Gettysburg battlefield. There is a wide range of tours to select from on both days and I have no idea which ones to pick right now, but I have no doubt that all of them are going to be top notch. As a scholarship winner I also have the opportunity to mentor a handful of Gettysburg graduate students looking to break into the field of public history, which is sort of funny because I was in their position only a year ago :). I can’t wait!

Cheers

Reconsidering the Reconsideration of Civil War Death

Professor Nicholas Marshall (Marist College) recently wrote a thought-provoking essay for the New York Times’ “Disunion” blog, a favorite website of mine for reading some of the newest scholarship in the field of American Civil War studies. Marshall comes out swinging in his essay “The Civil War Death Toll, Reconsidered,” which aims to offer a revisionist corrective to our understanding of American society’s collective mindset towards death during the Civil War.

Marshall criticizes Civil War scholars such as James McPherson, Eric Foner, and Drew Gilpin Faust for using the Civil War death toll–which is now estimated to be around 750,000–“to drive home a characterization of the war based on the scale of death.” These historians often convert this Civil War death statistic to present-day numbers (which would equate to seven million deaths today) in order to convey the influence of wartime death on the lives of Americans at the time. Marshall laments the use of the Civil War death toll in this manner, arguing that “while factually correct, the statistics work to exaggerate the impact of the war. At its essence, the use of these statistics is designed to provide perspective, a laudatory goal. It is supposed to allow those of us looking back on the war to get a clear sense of the emotional texture of the time. The problem is that doing so violates one of the central codes of historical analysis: avoid presentism.”

I agree with Marshall in this regard. By translating Civil War death totals into present-day equivalents, we impose our own perspective of death onto our interpretations of the past rather than considering the perspectives of those who lived through the conflict. Indeed, the thought of such a destructive contemporary war provokes more thoughts about who within our circle of friends and loved ones today would be a part of the seven million hypothetical deaths in this hypothetical war than about the 750,000 soldiers who actually died 150 years ago. Marshall correctly reminds us that numbers and statistics are meaningless until we construct meanings for them. To truly understand the culture of Civil War death, we must go beyond the numbers themselves or statistical conversions that translate the past on our terms without understanding historical context.

Marshall continues by suggesting that prewar, wartime, and postwar conceptions of sickness and life expectancy actually accounted for the “everyday existence” of disease and the possibility of unexpected death within society. When looking at antebellum disease:

It is important to keep in mind that death rates were tremendously variable in the period, even within relatively stable locales, because of the unpredictable nature of contagious disease. Some areas reported rates that varied from below 2 percent up to 6 percent. A conservative estimate of a 2 percent death rate for 1860 would have meant about 629,000 deaths that year for the nation as a whole, while a 3 percent rate would have resulted in 943,000 deaths (today’s rate is consistently below 0.8 percent). The additional battlefield deaths in the war would thus represent an increase of between 7 and 10 percent over the normal rates. Significant, but hardly catastrophic.
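Marshall’s numbers check out against the official 1860 census count of roughly 31.4 million Americans. Here is a quick sketch reproducing them; note that the ~250,000 battlefield deaths and the four-year averaging are my own assumptions about how he derives the “7 and 10 percent” range, since he doesn’t show his work:

```python
# Reproduce Marshall's death-rate arithmetic. The census figure is the
# official 1860 count; spreading ~250,000 battlefield (non-disease)
# deaths evenly over four years of war is my assumption, not his.
population_1860 = 31_443_321
battle_deaths_per_year = 250_000 / 4

for rate in (0.02, 0.03):
    baseline = population_1860 * rate  # "normal" deaths in 1860
    increase = battle_deaths_per_year / baseline * 100
    print(f"{rate:.0%} death rate: {baseline:,.0f} deaths/year; "
          f"battlefield deaths add about {increase:.0f}%")
```

The arithmetic holds, which is why my disagreement below is with Marshall’s interpretation rather than his math.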

The threat of disease continued into the early twentieth century, according to Marshall:

During the global flu epidemic at the end of the World War I as many as 100 million worldwide, including 600,000 in the United States (roughly five times the number of American casualties in World War I and approaching the total number of deaths in the Civil War), perished over the course of just a few months. In addition, this was an unusual strain of influenza that killed mainly the healthiest cohort of the population (those in their 20s and 30s) through a violent immune response. If any event should have triggered re-evaluation of the nation’s approach to death (based solely on changes in incidence and scale, as Civil War historians often calculate), this would be it. Yet one historian’s book on the subject is titled “America’s Forgotten Pandemic,” and he spends a significant portion of the book trying to explain why the epidemic seemed to disappear from public consciousness so soon after it waned. The answer, in part, is that well into the 20th century Americans viewed disease — and the death that came with it — as a constant, as something that had to be dealt with as part of everyday existence.

In sum, Marshall asserts that since two-thirds of Civil War deaths were due to sickness rather than battlefield combat, wartime death was not as influential on the mindset of Americans as historians traditionally suggest. “The war added to an existing demographic and cultural problem rather than creating an entirely new one,” according to Marshall, and for this reason historians should re-evaluate the relationship between disease, death, and culture during the Civil War.

Marshall had me largely convinced at first, and there is certainly much to agree with here. But after reading this essay a second and third time, a fatally flawed statement (pun intended) revealed itself to me, one that weakens the structural foundation of the entire essay. To wit:

If we work from an assumption that deaths from disease were not viewed at the time as war casualties, but rather as a continuation of prewar circumstances, instead of 750,000 casualties faced by Civil War-era Americans, we are left with 250,000.

The problem with this statement is that people at the time did view the loss of loved ones as war casualties, regardless of whether they died by disease or by gunfire. Drew Gilpin Faust’s This Republic of Suffering: Death and the American Civil War–which Marshall is quick to criticize in the beginning of his essay–shows us in precise terms how Civil War fatalities transformed the culture of death in the United States. True, the war itself did not necessarily present an unprecedented amount of death from contagious disease in American society. What was shocking to many people at the time was where these people died and what happened to their bodies after battle. Faust argues that prewar loss of life revolved around the notion of the “Good Death.” According to Faust, “dying was an art” in antebellum America, and the Good Death revolved around the idea that people died on their deathbed with friends and loved ones by their side. “Family was central to the [Good Death], for kin performed its essential rituals. Victorian ideals of domesticity further reinforced these assumptions about death’s appropriate familial setting . . . family members needed to witness a death in order to assess the state of the dying person’s soul, for these critical last moments of life would epitomize his or her spiritual condition” (6-10).

Civil War death shocked Americans because of its disregard for the Good Death. Soldiers died on battlefields, far away from home and family. They were sometimes buried in unmarked and/or mass graves, provoking complaints from loved ones at home that their remains were being treated no better than the remains of dead animals. If possible, parents, wives, and children sometimes traveled hundreds or thousands of miles to exhume the bodies of dead soldiers and take them home for a “proper” burial. Others paid thousands of dollars for professionals experienced in the new fields of embalming and refrigeration to find and preserve the bodies of loved ones to be sent home. Still others lacked any financial means to bring their dead soldiers home.

These facts represent the culture of shock and death that contemporaries understood to be unprecedented in American history. They did not view these deaths as a continuation of prewar circumstances, which partly explains why they did not forget about them the way Americans forgot about the flu epidemic in the wake of World War I. Indeed, those who lived through the war grappled with the memories of their dead through the creation of veterans’ fraternal organizations, monuments, memorials, Memorial Day commemorations, and the everyday experience of life after war. To suggest that contemporaries did not view these deaths as war casualties simply misses the mark, in my opinion.

Cheers

Using Computer Technology to Teach Perplexity

I stumbled across the video below a couple days ago. It captures the keynote speech Dan Meyer, a Ph.D. candidate at Stanford University, delivered at the 2014 Computer-Using Educators Conference in Palm Springs, California, earlier this month. There’s a lot that I like about this talk. For one, Meyer gives what I think is a realistic perspective on the ed-tech industry. There are many, many computer technologies (hardware, software, smartphones, iPads, and so on) teachers can utilize in their classrooms. Not all computer technology is created equal, however, and the teacher’s focus, Meyer argues, should be on finding technology that helps to capture, share, and resolve perplexity in the classroom. By encouraging perplexity, teachers can use technology in a way that prompts students to ask questions that tap into their curiosities rather than simply delivering content deemed important by the Common Core. Finally, Meyer’s talk is pretty funny. Most conference keynote speeches aren’t, so I definitely appreciate that aspect of this talk.

Cheers!

Indiana’s Teacher Evaluation Problem

The state of Indiana has a teacher evaluation problem. Or does it?

On Monday, April 7, the Indiana Department of Education released data pertaining to the evaluation of public school teachers during the 2012-2013 academic year. A 2011 law passed by the Indiana General Assembly requires each public school district in the state to conduct an annual in-house review of its teachers and administrators. Teachers are graded on a four-tier scale as either “Highly Effective,” “Effective,” “Improvement Necessary,” or “Ineffective.” Districts are given wide latitude to interpret the definitions of these terms and design their evaluations as they see fit, raising questions about potential biases and the effectiveness of these evaluations.

As I flipped through the evaluations of various schools throughout the state, my eye caught some particularly weird data for the Indianapolis Public School District. In addition to teacher evaluations, Indiana school districts are also graded on an A-F scale. The IPS District, according to the Department of Education, is one of four school districts in the state with an F grade. You would imagine, therefore, that these teacher evaluations would not be very good for teachers in IPS, right?

You would be wrong.

According to the DOE’s data, here is how 2,672 IPS teachers stacked up in their evaluations (keep in mind that 142 teachers were not evaluated because they were retiring; a short sketch for recomputing these shares follows the figures):

Highly Effective: 347 (12.9%)
Effective: 2,028 (75.8%)
Improvement Necessary: 150 (5.6%)
Ineffective: 5 (0.1%)
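Here is that sketch. Percentages are taken against all 2,672 teachers, including the 142 not evaluated, so expect small rounding differences from the figures above:

```python
# Recompute the IPS rating shares from the DOE counts quoted above.
ratings = {
    "Highly Effective": 347,
    "Effective": 2_028,
    "Improvement Necessary": 150,
    "Ineffective": 5,
    "Not evaluated (retiring)": 142,
}
total = sum(ratings.values())  # 2,672 teachers

for rating, count in ratings.items():
    print(f"{rating}: {count} ({count / total:.1%})")

# Teachers flagged as needing improvement or ineffective:
flagged = ratings["Improvement Necessary"] + ratings["Ineffective"]
print(f"Flagged: {flagged} ({flagged / total:.1%})")  # about 5.8%
```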

So, even though IPS is a failing school district, only about 5.8 percent of its teachers were rated as needing improvement or ineffective. What gives?

The failure of IPS cannot fall solely on its teachers: As discussed before, I have a fundamental disagreement with Davis Guggenheim–the creator of the popular film Waiting for Superman–who argues that public education fails many of its students because of inadequate teaching and the teachers unions that protect those bad teachers. While acknowledging that there does in fact exist a small minority of bad teachers in public education (perhaps around 5.8% of all teachers?), the failure of these schools cannot be divorced from larger socioeconomic issues such as segregation, poverty, and broken homes. Waiting for a “Superman” teacher to pop out of nowhere and single-handedly lift all students in failing schools out of poverty and into college is meaningless if you’re not doing anything to better the communities in which these students are being raised. Middle school and high school teachers spend maybe 3-5 hours a week with each of their students; shouldn’t parents be spending at least that much time with their children every day (or every other day)? IPS is failing for many reasons, and not all of them are connected to the teachers. And it should be pointed out that IPS is performing better than local charter schools.

Who in your local district would you fire?: Everyone has an opinion on public education, and it seems like many of those opinions, if not most, are negative. I can’t find any studies to back up this observation, but in several conversations where people have complained to me about public schools, when I asked which teachers in their local school district they would fire if given the chance, they couldn’t give me an answer. I don’t know why this is, but it seems like we’re too often ready to criticize public education in the abstract without thinking about the ramifications and consequences of our proposed “solutions.” Who in IPS will need to get fired before the school district starts improving? I’m not sure firing teachers is the right idea.

Who should be evaluating the schools?: Having districts conduct their own evaluations seems problematic in the same way that having eighth graders assess the final grades of their fifth grade classmates is problematic. But I’m also concerned about giving the Department of Education and/or the Indiana General Assembly any more power in evaluating teachers. I’m especially concerned about the latter because its hostile view of public education has led to the creation of the largest voucher program in the country, taking crucial tax dollars away from public schools and, in some instances, siphoning that money to private schools. I’m not sure who would be the best third-party evaluator for public schools, but I wonder if there are ways for local township and county leaders to get involved in the evaluation of their local schools and teachers.

I find it odd that the Indiana DOE has deemed the IPS district a failure while at the same time giving an overwhelmingly positive assessment of its teachers. I think the state’s teacher evaluation system needs to be reworked, but in a way that keeps the state legislators themselves away from the evaluation sheets. In the end, perhaps this strange situation is an acknowledgement that teachers cannot succeed on their own in a vacuum, free from the concerns of the world outside the classroom. There are effective teachers out there, but they need the support of their communities in order to do their jobs effectively.

What do you think?

Cheers

A Few Questions About Audiences at Academic Conferences

Two weeks on, I find myself still thinking about the National Council on Public History’s Annual Meeting in Monterey, California, which was a fantastic experience for me. Even though my duties as NCPH’s Program Assistant prevented me from directly participating in any conference sessions (save for one at the very end of the conference), I talked with many people throughout the conference and followed various discussions taking place on Twitter. I also made a few observations worth noting here.

One of the most popular sessions at the conference was entitled “Gender: Just Add Women and Stir?” This cleverly titled session addressed questions surrounding the interpretation of gender, sexuality, and even LGBT history at cultural institutions while also providing a forum to discuss strategies for interpreting these topics without resorting to cultural tokenism or a checkbox system in which various cultural groups get a brief mention before moving on to “the actual story.” As I made my way around the conference center to make sure everything was in order, I noticed that the room for this gender session was filled beyond capacity. More importantly, I noticed there were hardly any men in sight. I counted maybe three or four in the entire room.

I find this state of affairs disappointing partly because “gender” is not synonymous with “women’s history,” nor is it a field of study strictly under the purview of women. I think this discrepancy in the male/female ratio also raises questions about the very purpose of academic conferences. It’s fair to say that this gender session was not the first one for many (if not most) of the attendees, while others who chose not to attend the session may rarely discuss gender in their work as public historians. What happens, therefore, is a sort of “preaching to the choir” situation in which the experts talk to each other while the non-experts find sessions to attend where they can feel like experts.

One of my questions revolves around the degree to which I as a conference participant should be attending sessions within my scholarly interests versus sessions outside of them. For example, I am primarily a scholar of nineteenth-century U.S. history with a particular interest in memory, identity, and culture. As a conference participant, should I use my time to attend sessions about nineteenth-century history and/or historical memory that relate to my interests, or is it more beneficial to learn about topics outside of my interests on the chance that I could learn something new that enhances the quality of my work?

Another event I heard a lot about during the conference was the “History Relevance Campaign” session. From what I understand, the History Relevance Campaign is a new initiative within the history community that aims to establish a marketing/branding campaign to educate society about the importance of history in our everyday lives. The campaign will also encourage collaboration in answering the ultimate question so many people have about our field: “What does history do for me?”

While this session was also well attended, post-session conversations suggest to me that the “preaching to the choir” effect took hold here as well. In short, it sounds like historians got in a room together and convinced each other of the importance of their field. Where were the advertising strategists, K-12 educators, and politicians? How can professional historians in cultural institutions and the academy reach out to these groups and engage in a collaborative effort to make history relevant to all of society and not just the professionals?

To recap, here are my two questions:

1. As a conference participant, should I focus on attending sessions directly connected to my interests so that I can network with people in my field and learn more about content connected to my studies, or should I also attend a few sessions that fall outside of my scholarly interests?

2. As a conference organizer, how do I encourage a diverse range of attendees at my conference and, more specifically for history, how do I encourage people outside the academy, and outside of history altogether, to attend?

There are no easy answers to these questions, but I think they’re worth asking. What do you think?

Cheers