Pros and Cons of Historical Markers

Last week an Alabama-based Civil Rights organization, Equal Justice Initiative, released a report entitled “Lynching in America: Confronting the Legacy of Racial Terror.” The report is unique in that it compiles a comprehensive inventory of nearly 4,000 lynching victims throughout the Deep South from 1877 to 1950, including many new names not listed in previous inventories. The New York Times also ran a story on the report with fancy visuals and more background information on Bryan Stevenson, executive director of EJI.

A lot of interesting discussions emerged on my Twitter feed about various strong and weak points of the report and the need to provide more context about the horrifying consequences of lynching so that these victims are not portrayed as mere numbers or crime statistics. Historian Kidada E. Williams covers some of these concerns here.

I’ve been focusing on the public history side of these discussions. Central to Mr. Stevenson’s vision for reckoning with this history is the erection of historical markers in locations where lynchings occurred. By installing these permanent markers at “ground zero” sites, Americans will have daily, tangible reminders of the lives lost to white mob violence in the late 19th century and first half of the 20th century. I believe the idea of erecting historical markers to commemorate this tough history is necessary, but it’s only a starting point for further inquiry.

Historical markers come with certain advantages and disadvantages for thinking critically about history outside the classroom. Generally speaking, historical markers are a cost-effective investment in history for towns, cities, and states of all sizes. Beyond the initial start-up cost of erecting a marker, there is little expense other than basic maintenance, which allows small towns like Kirvin, Texas, and Elaine, Arkansas, to preserve a part of their history without the expense of a museum, historical society, temporary exhibit, or professional staff. And historical markers, combined with digital technology, allow viewers to write about, photograph, collect, and share their experiences at markers through websites like Historypin and The Historical Marker Database. Historical markers also do a good job of emphasizing the importance of local, regional, and state history that often gets passed over in the history classroom. Many of the markers researched and cared for by the Indiana Historical Bureau, for example, do a nice job of connecting local history to national history in a way that demonstrates how small communities throughout Indiana have contributed to the story of the United States.

A historical marker, however, can only take you so far. A marker cannot answer in real time any questions you may have about the content you are reading. Most markers are limited to around 20 to 200 words, and in many cases that text doesn’t go beyond a restatement of basic facts, leaving readers wondering why a particular marker is significant (this marker dedicated to Hannah Milhous Nixon is a great example. Why is this marker important? Who cares?). I personally have had experiences at historic homes, museums, Civil War battlefields, national parks, and even monuments and statues that inspired me to learn more about a given historical topic and, equally important, share that interest with friends and family. With the exception of one uniquely notable marker, I don’t think I’ve ever had such an experience after reading a historical marker.

It’s one thing to read historical content on a static marker. It’s a whole other experience to engage in active dialogue with an interpreter or educator in a public history setting who has passion, content knowledge, and the ability to craft an interpretive story that creates meaning and raises questions that one may not readily consider when looking at a marker text alone. Whenever possible I prefer to listen to and converse with an interpreter rather than read a marker text. I realize that not everyone would choose to learn in this manner, but the point is that we should strive to create interpretive opportunities in both settings so that interested parties have multiple avenues in which to connect with the past.

Talking about a difficult and sensitive topic like lynching requires intensive training in both historical content and interpretive techniques, however, and I’m curious to learn more about places where interpreters regularly discuss these topics. What are cultural institutions doing to discuss lynching and rioting in museum exhibits, public programming, and other interpretive mediums within public history?

The floor is yours.


Brooks Simpson on President U.S. Grant and His Alleged “Corruption”

Who says Twitter is only good for selfies, LOLcats, and tweeting about coffee?

Ta-Nehisi Coates, a columnist for The Atlantic, took to Twitter the other day to ask his followers a question about the extent to which President Ulysses S. Grant was “corrupt” compared to his contemporaries. He specifically requested the help of Brooks Simpson, Arizona State University history professor and noted Grant scholar. Simpson fired off a series of tweets in response that conveyed a nuanced, thought-provoking interpretation that I find extremely helpful for my own purposes. I get more questions from visitors at my job about Grant’s presidency than about his generalship during the Civil War, and these corruption questions pop up frequently. Simpson’s response will definitely be a part of my arsenal next time I’m asked about Grant’s alleged corruption.

Here’s what Simpson had to say:

There you go.


News and Notes: November 2, 2014

The weather and clocks are changing, but the blogging continues here at Exploring the Past. Here are a few good reads and some personal notes.

Good Reads

  • Flawed commemoration in Britain: The Tower of London is currently surrounded by red ceramic poppies in commemoration of British soldiers who died during World War I. Jonathan Jones writes a scathing and largely accurate (in my opinion) criticism of this commemoration, arguing that such a commemoration needs to highlight the horrors of war and the ways WWI was tragic to all of Europe, not just Britain.
  • The History Manifesto: Historians Jo Guldi and David Armitage have recently published a new book, The History Manifesto. Guldi and Armitage argue that “the spectre of the short term” clouds our society and government policy. “Almost every aspect of human life is plotted and judged, packaged and paid for, on time-scales of a few months or years” (1), according to Guldi and Armitage. This method of thinking also dominates the historical enterprise, where historians are told to specialize in historic eras or events that range between four and forty years, privileging the small picture instead of the big one. They argue that historians should aim to think more about the long term and the ways history changes over hundreds of years. Moreover, Guldi and Armitage argue that historians should involve themselves in public policy. The History Manifesto is open access and freely available for PDF download here.
  • Do Professors need to use digital technology in the classroom?: Professor and columnist Rebecca Schuman says ‘no.’
  • The Specter of Gettysburg: Kevin Lavery, a student at Gettysburg College, writes a sharp criticism of so-called “historic” ghost tours in and around the Gettysburg battlefield, with some pushback from readers in the comment section. A very thought-provoking read.
  • Slavery in America – Back in the headlines: “People think they know everything about slavery in the United States, but they don’t.”

Personal Notes

  • Two of the chapters from my Master’s thesis on the Grand Army of the Republic, Department of Indiana, are currently under review for possible publication in scholarly journals. One of these chapters was revised into an article during the spring semester and submitted for review back in August. The blind peer-reviewers just got back to me a few days ago with mostly positive comments but also a few revisions to make the article better. The other chapter was revised throughout the summer and was submitted a couple weeks ago, so I’m still waiting for feedback on that one. I’ll have more info on these articles soon. Stay tuned.
  • I have an essay on Oscar Taveras, Stan Musial, and public commemoration in sports that is slated for publication on Sport in American History on November 10. This is my first essay for SAH and I’m really excited for readers to check it out.


Changing Reading Habits in the Humanities

Over the past eight or so years there has been a push by educators and school administrators to have students in both K-12 and higher education use e-readers to obtain relevant scholarship and advance their educational careers. Some have argued that e-readers are better suited for so-called “digital natives” who are more comfortable processing information through digital technology than print technology. Others argue that devices like the Amazon Kindle ostensibly provide access to thousands of titles that are not always readily available at a local public or university library (although I would argue that obtaining access to a piece of scholarship is not the same as reading it. The world is full of unread books). This second point is particularly important for humanities students who spend countless hours reading works of literature, philosophy, and history.

A recent thought-provoking essay from American University linguistics professor Naomi S. Baron in The Chronicle of Higher Education, however, turns this logic on its head by suggesting that changing reading habits in the humanities actually threaten the future of the entire discipline. She argues that e-reading–the move from print books to digital devices for reading–“further complicate[s] our struggle to engage students in serious text-based inquiry.”

To wit:

For some years, the amount of reading we assign university students has been shrinking. A book a week is now at best four or five for the semester; volumes give way to chapters or articles. Our motivation is often a last-ditch attempt to get students to actually read what’s on the syllabus. Other factors include the spiraling cost of textbooks and copyright limitations on how much we may post digitally.

Are students even reading Milton or Thucydides or Wittgenstein these days? More fundamentally, are they studying the humanities, which are based on long-form reading? . . . I contend that the shift from reading in print to reading on digital devices is further reducing students’ pursuit of work in the humanities. Students (and the rest of us) have been reading on computers for many years. Besides searching for web pages, we’ve grown accustomed to reading journal articles online and mining documents in digital archives. However, with the coming of e-readers, tablets, and smartphones, reading styles underwent a sea change.

The bottom line is that while digital devices may be fine for reading that we don’t intend to muse over or reread, text that requires what’s been called “deep reading” is nearly always better done in print . . . Digital reading also encourages distraction and invites multitasking . . . Readings in the humanities tend to be lengthy, intellectually weighty, or both. The challenge of digital reading for the humanities is that screens—particularly those on devices with Internet connections—undermine our encounters with meaty texts. These devices weren’t designed for focused concentration, reading slowly, pausing to argue virtually with the author, or rereading. Rather, they are information and communication machines, best used for searching and skimming—not scrutinizing.

In sum, Baron suggests that the loss of close reading/long-form reading is detrimental to humanistic inquiry. Facing the twin challenges of an increasingly digital world and a society that has fetishized utility and practicality in education, humanists have cut down the amount of required reading for their classes while at the same time calling for increased usage of e-readers to obtain and learn about humanities scholarship.

Is it okay for humanists to cut down on the amount of reading they do? What medium is best for reading humanities scholarship?

In my opinion, close reading/long-form reading is necessary for all humanities scholars, even if they’re interested in using quantitative methods that utilize what Stanford University English professor Franco Moretti describes as “distant reading.” Everyone needs a basic understanding of noteworthy works in literature, philosophy, history, and other fields, and that requires at least some close reading.

When it comes to the best medium for reading humanities scholarship, I think the design of the medium is crucial. Most websites (including blogs on WordPress) don’t lend themselves to long, concentrated reading that exceeds 1,000 or 1,500 words. That number is even smaller for reading on a mobile device. Although I don’t prefer to read on an Amazon Kindle, those devices can help readers concentrate for longer periods of time than a computer or phone screen can, so I don’t find myself as dismissive of e-readers as Baron is.

For my own studies I rely on print books for long-form reading. I find that print books are easier on my eyes and help me concentrate better on the material I am reading. I do a lot of reading online and on my mobile phone, but most of that reading consists of blog posts, news articles, opinion pieces, and other short-ish essays. When I read professional articles or books, I prefer print. I would also argue that research via digital archives, while extremely helpful and convenient, cannot fully replace the act of actually going to a brick-and-mortar library and/or archives and having an actual historical artifact in your hands (it’s also important to point out that the vast majority of historical artifacts are not digitized).

What are your thoughts? What is the place of reading in the humanities today, and what medium do you prefer for your own reading?


Academic Publishing Should Encourage Access and Knowledge Sharing

A few days ago Al-Jazeera English columnist Sarah Kendzior wrote a thoughtful essay in which she asks, “What’s the point of academic publishing?”

The question is an important one to ask. Prior to starting graduate school in 2012 I had little idea how much criticism traditional academic publishing ventures–more specifically, peer-reviewed scholarly journals–have received over the past few years. Although my interests are mainly focused on teaching history to a public audience outside the academic classroom, I still have an interest in working with an academic publisher someday. Back in 2012 I figured that getting articles published in journals was a great starting point for getting one’s name out in scholarly circles and, if I decided to continue my education and pursue my doctorate in the future, I’d be in a position to have strong credentials for possibly pursuing a career in academia. I love the capacity for intellectual growth that academia provides, and I would love to someday teach my own college courses, whether that be next year or thirty years from now. Academic publishing, I believed, served the dual purpose of boosting one’s credentials in academic circles and disseminating knowledge to non-academic audiences.

Unfortunately, the reality of academic publishing is not that simple. Kendzior’s article is one of many that have been published in the past year and a half calling out the practices of universities and their publishing wings. For one, the idea of publishing as an avenue to academic employment is a myth. According to Kendzior, “the harsh truth is that many scholars with multiple journal articles—and even multiple books—still do not find full-time employment.” More and more tenure-track positions require a hefty track record of publishing, but the number of available full-time, tenured positions in academia has dropped tremendously. In 1975, 45% of all professorial positions were tenured or tenure-track. By 2009, that number had dropped by more than twenty percentage points, and the New York Times published a report last April pointing out that 76% of all professorial positions today are filled by contingent adjunct faculty. The amount of academic scholarship being produced today is unprecedented in quantity, but the number of available positions for the people who produce that scholarship is diminishing.

Adjunct faculty in colleges and universities around the country teach in absolutely horrible conditions. They are essentially contract labor, jumping from school to school looking for courses to teach. If they’re lucky, they get paid around $3,500-$4,000 per three-credit course and teach somewhere around five to eight classes a semester (most tenured professors teach between one and three classes per semester). They receive no health benefits and essentially no chance at tenure, and what I’ve just described is actually the ideal for a contingent faculty member. The situation is usually worse. An adjunct whose resignation letter from a Pennsylvania college was published online yesterday was making $3,150 per three-credit course and was restricted to a maximum of four classes per semester, which equates to $25,200 per year before taxes. Another Pennsylvania adjunct professor died last year at the age of 83 after years of contingent work. She had been receiving cancer treatment (and remember, adjuncts get no insurance) and was struggling to pay her household bills. The university she worked for had recently ended its contract with her, and she died penniless.
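The pay arithmetic in the resignation-letter example works out in a few lines (a hypothetical illustration using only the figures quoted above, not actual payroll data):

```python
# Annual gross pay for the adjunct described above, using the
# figures quoted in the text: $3,150 per three-credit course,
# capped at four courses per semester.

pay_per_course = 3150        # dollars per three-credit course
courses_per_semester = 4     # the cap mentioned above
semesters_per_year = 2       # fall and spring; no summer teaching assumed

annual_gross = pay_per_course * courses_per_semester * semesters_per_year
print(f"${annual_gross:,} per year before taxes")  # $25,200 per year before taxes
```

The two-semester assumption is mine; the original only states the per-semester cap.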

The second issue with academic publishing is that much of the scholarship being published today is not getting into the hands of those outside academia who want to learn from it. As Kendzior remarks, “with the odds of finding a tenure-track job against them, graduate students are told to plan for a backup career, while simultaneously being told to publish jargon-filled research in paywalled journals.” Paywalled, subscription-based services like ProQuest and JSTOR charge exorbitant fees for access to scholarly books, articles, Ph.D. dissertations, and other content that is already funded in part by the taxpayers who support the public universities producing much of it. While students and faculty in academia have access to this content, it is difficult and expensive for those outside academia to access, even though their tax dollars have gone towards its production.

So, in sum, it seems as if academics are producing content for themselves first and foremost, which is extremely unfortunate. I believe the ultimate goal of academic publishing should be to disseminate knowledge to those who want to learn from it, regardless of their job title or financial resources. I am proud of the fact that the IUPUI University Library has committed itself to open access scholarship, and my master’s thesis will be freely available for download to anyone when it is completed later this year. I am also working on writing an article for a scholarly journal that will ideally be published within the next year or so. I hope this proposed article is made open access as well.

When I think about the point of academic publishing, four questions emerge in my mind:

1. What’s the point of academic publishing if your work is locked behind a paywall?

2. If I want to connect with an audience beyond the ivory tower, what mediums give me the best opportunity to do so?

3. What’s the point of academic publishing if it’s being demanded as a job requirement for a field I most likely can’t break into?

4. How do I make academic publishing work for my interests and not the other way around?

Academic publishing is important to me as a student and a scholar. I rely on academic publishing to provide me with the latest and best scholarship on topics that interest me as a reader and as a researcher, and I believe society benefits immensely from the work of academic scholars. If scholars hope to reach an audience beyond the academy in the future, however, I believe the purpose of academic publishing needs to be redefined in ways that encourage access for all, not paywalls for most. It would also help if we started paying professors with Ph.D.s enough money to not have to rely on food stamps to get by.


IUPUI Digital Sandbox on History@Work

A brief note: back in August three members of my cohort–Callie McCune (@CallieMcCune), Christine Crosby (@XtineXby), and Abby Curtin (@Abby_Curtin)–and I hosted Digital Sandbox, a one-day digital humanities workshop on the campus of IUPUI. More recently the four of us collaborated on a follow-up analysis of what worked, what didn’t, and questions we have about the digital humanities going forward. That essay was published today by the National Council on Public History’s preeminent public history blog, History@Work. You can read it here.

As a part of the workshop I created and ran a panel on using social media in conjunction with humanities scholarship. Here’s the introductory speech I made for the panel, and here are some discussion highlights that may help interested parties get started with “putting yourself out there.”


The Internet as an Archive of 21st Century History

“The Stream”

Several days ago I read a fine piece in The Atlantic by Alexis C. Madrigal on real-time internet content/information delivery, what Madrigal refers to as “The Stream.” Whether it be Facebook, Twitter, Google Reader (R.I.P.), or the New York Times, many websites have turned to the stream as a means for instantly delivering information that is ostensibly meaningful to readers. The screenshot above is from the “Times Wire”–which is run by the New York Times–and it exemplifies the mechanics of the stream: instant updates, individualized content, and a sense of inclusion, by which I mean a feeling that you are keeping up with and understanding (at least somewhat) what’s going on in the world.

Madrigal explains the stream as such:

The Stream represents the triumph of reverse-chronology, where importance—above-the-foldness—is based exclusively on nowness. There are great reasons for why The Stream triumphed. In a world of infinite variety, it’s difficult to categorize or even find, especially before a thing has been linked. So time, newness, began to stand in for many other things. And now the Internet’s media landscape is like a never-ending store, where everything is free. No matter how hard you sprint for the horizon, it keeps receding. There is always something more.  Nowness also transmits this sense of presence, of other people, that you get in a city when you go to a highway overpass and look down at all the cars at any time of the day or night.

Given my recent embrace of Twitter and my belief in its enormous potential to deliver information that I find important, I am now more than ever a product of the stream. Rather than reading a newspaper, I now check my Twitter stream in the morning to see what’s happening, to find information that is “newsworthy” to me. When I find content personally interesting, I contribute my own small part to the stream through tweets, Facebook posts, and essays on Exploring the Past. Since I started this regimen of blogging and tweeting one year ago, I’ve been blown away by the connections I’ve made with people all over the world and the number of visits I’ve had to this blog (more than 10,000 so far).

Yet there are times when I feel as if the stream overwhelms me. Sometimes I feel like I can’t get away. I try to work on projects, school assignments, etc., but the pull of nowness sucks me in, challenging me to stop work to check and see if I’m missing something important in the stream. Equally frustrating, these streams make little distinction between what Robin Sloan refers to as “flow” and “stock.” “Flow” refers to information designed for the here and now: updates and tweets about weather, daily activities, your pumpkin spice latte, etc. “Stock” refers to content that I’d argue is more than information in that it actually contributes to knowledge construction; material that you’d still refer to long after its incorporation into the stream.

Madrigal’s article raised larger questions within me about how we view the internet from a holistic viewpoint. If we rely on the stream for obtaining information, how do we promote and preserve meaningful flow and stock content for the long term? Can we break away from the pull of the now to make room for reflection on what has already occurred in recent memory?

Part of the solution, I think, is understanding that while the internet provides us meaningful information for the here and now, the internet should also be viewed as a historical, archived space. Sure, there are sites like the Internet Archive, Google Books, HathiTrust, and Chronicling America that provide public access to historical events, documents, and artifacts from the twentieth century and earlier, but how do we go about archiving the history we make every day through our interactions on the stream? Twitter, Facebook, Reddit, and other related sites are not just sources for nowness: they’re also tools and resources for future historians looking to interpret the history of the early twenty-first century.

Viewing the internet as a historical archive will require more discussion and questioning, as far too many website proprietors view the content and interactions on their websites as disposable rather than historical. Ian Milligan points out that major websites such as Yahoo! and MySpace have recently destroyed millions upon millions of historical digital records, embracing the notion of “who needs old stuff when the future is here?” In the case of MySpace, bloggers who used the world’s largest social media website from 2005-2008 to share their thoughts had their information wiped out instantly in June of this year. As Milligan argues, MySpace “meant something to multiple millions of people,” and future historians are now more impoverished thanks to this focus on the now.

How do you go about preserving your digital records? What would you do if Facebook, Twitter, or WordPress suddenly deleted all of your content, all of your flow and stock?


A Colorful Map that Tells Us Almost Nothing

Colin Woodard’s Map of “The American Nations Today” Photo Credit: Tufts Magazine

Read this article before proceeding.

Colin Woodard’s recently created map of eleven “American Nations” in the United States today is of dubious scholarship, in my opinion. It is a decidedly white, British-Isles-centric rendering of American cultural values that purports to interpret the roots of gun violence throughout the country and explain why Americans can’t come to an agreement over gun control today. It fails on both counts and does little to sharpen our perceptions of either the past or the present.

Woodard’s description of each “nation” is extremely problematic for several reasons. For one, he gives us little in the way of a time frame in which to contextualize the cultural characteristics of each “nation.” He vaguely refers to the time period in which each “nation” was “established,” [by white Anglo-Saxons, of course] but fails to explain which “nations” emerged first or suggest the possibility that each of these “nations” was inhabited by Americans who migrated multiple times during their lives and were influenced by cultural characteristics from across regions of the country. In failing to provide a chronological context for his study, Woodard essentially uses each region’s cultural context during the period of “establishment” [the late eighteenth and early nineteenth centuries] to explain contemporary political problems without explaining changes in politics, economics, or demographics from roughly the antebellum period to the present. Plus, his descriptions and geographical placements for some of these “nations” are outright awful.

Take, for example, “The Midlands.”  Woodard says the following about this “nation”:

America’s great swing region was founded by English Quakers, who believed in humans’ inherent goodness and welcomed people of many nations and creeds to their utopian colonies like Pennsylvania on the shores of Delaware Bay. Pluralistic and organized around the middle class, the Midlands spawned the culture of Middle America and the Heartland, where ethnic and ideological purity have never been a priority, government has been seen as an unwelcome intrusion, and political opinion has been moderate. An ethnic mosaic from the start—it had a German, rather than British, majority at the time of the Revolution—it shares the Yankee belief that society should be organized to benefit ordinary people, though it rejects top-down government intervention.

If you look at Missouri on the map above, you’ll notice that “The Midlands” covers most of the northwestern part of the state before stretching along the Missouri River to St. Louis. In making this distinction, Woodard purports to explain the founding of St. Louis and a good chunk of the Missouri region as the creation of utopian-minded English Quakers. Nothing could be further from the truth. The first white settlers of St. Louis were Pierre Laclède and Auguste Chouteau, two men of French descent who named the city after King Louis IX in 1764. Shortly after the city’s founding, the area west of the Mississippi River came under Spanish control. This land was transferred back to France in 1800 before Napoleon Bonaparte sold it as part of the 1803 Louisiana Purchase. Long before these white men came, Mississippian culture thrived in the area thanks to maize-based agricultural production and a strong trading network.

From 1764 to roughly the Jacksonian Era (1828-1836), St. Louis was a pluralistic society largely composed of French, Spanish, and American Indian cultures. Migrants from Woodard’s “Tidewater” and “Greater Appalachia” areas also arrived around the turn of the nineteenth century. Starting in the 1830s, migrants from Woodard’s “Yankeedom” region came as well, as did Irish and German immigrants (the latter especially after the failed revolutions of 1848). There was therefore no German “majority” of any kind at the time of the American Revolution, nor were there many British people to speak of. In fact, St. Louis (and all of Missouri by extension) was such a melting pot during the antebellum years that debates still rage about Missouri’s identity. Is it Midwestern? Is it Southern? Northern? “Tidewater Appalachia New France Yankeedom”? Additionally, how do you account for Missouri and Kansas engaging in bloody conflict during the 1850s if these areas were largely composed of the same people? Some “nation,” eh? (If you want to read more about St. Louis’s cultural identity, read James Neal Primm and the beginning of Louis Gerteis‘ work on the Civil War in St. Louis).

Woodard is correct that “the original North American colonies were settled by people from distinct regions.” He is also correct that America has developed several unique regional identities (and not just North, South, and West). But to suggest that these regions were actually autonomous “nations . . . developed in isolation from one another” through bland generalizations that have little chronological order and without acknowledging any cultural agency of non-white Americans–only to conclude that Americans can’t agree on gun regulations today–is silly. Even the whole argument about a contemporary “divided nation” with deep ideological divisions is a myth, with roughly 70-80% of the American population falling into a centrist/moderate camp politically, according to Morris P. Fiorina.

Woodard qualifies his “nation” descriptions by stating that “my observations refer to the dominant culture, not the individual inhabitants, of each region.” I can appreciate this distinction. Again, there are certainly cultural differences across various regions of the United States. But as my St. Louis example shows, Woodard can’t even get “the dominant culture” right. Rather than working to show historical change over time, Woodard bends the past to suit his present-day agenda.

Rant over.

What do you think of this map? I’ve stated my opinions, but I’m open to different perspectives.


There is No Such Thing as a “Digital Native”

Photo Credit: Ashleigh Graham

I have been doing research on teaching students how to assess historical primary sources (both print and digital) and utilize historical thinking in and out of the classroom. One of the best sources I’ve relied upon for this project is the 2011 publication “Why Won’t You Just Tell Us the Answer?”: Teaching Historical Thinking Grades 7-12, written by history teacher Bruce Lesh. The book is wonderful and I really like his lesson plans. Many of Lesh’s activities challenge students to imagine themselves working as curators, archivists, or some other public historian who is interpreting the past for a larger audience. I hope to write more about Lesh’s book in a future post, but for this essay I am going to focus on a brief comment Lesh makes on page 33:

I am always amazed at how visual images, be they photographs, hand drawn, painted, sculpted, stimulate conversation among my students. It is a testament to the much discussed visual generation, of which they are a part. Inundated with images on television and online, combined with the decline of newspapers and print reading, this generation is more inclined to gather information from visual elements or sparse narratives. The predisposition for the visual over the written, particularly complicated text, is also indicative of the fact that students have been trained to see the study of history as one that involves textual sources…to the exclusion of other types of historical sources.

What Lesh essentially argues here is that his students are “digital natives.” They think and understand the world differently than older generations do, thanks to their participation in what Lesh describes as the “visual generation”: a new era of students who allegedly don’t like reading and who process information better through visual images and short texts. Because our students are more comfortable with visual images, the argument goes, we should tailor our lesson plans to that “learning style.”

The concept of a “digital native” was coined by Marc Prensky in 2001. Digital natives, according to Prensky, are people who were born into what many refer to as “the digital age.” They are inherently different from “digital immigrants,” who were born before the “digital age” but have “immigrated” to this new era. The use of technology, social media, texting, etc. comes naturally to digital natives, whereas this technology is akin to a foreign language that digital immigrants must learn.

While I agree with Lesh that history instruction has unnecessarily relied upon textual sources to the detriment of visual sources such as maps, paintings, and photographs, I cannot agree with the idea of an existing “visual generation” that has a natural predisposition for visual items over textual sources. Additionally, I believe there are no such things as “digital natives” and “digital immigrants.” Here are a few reasons why:

  • Much of the digital technology we use on a daily basis was developed by “digital immigrants” who were not a part of the “visual generation.”
  • Since this technology was developed by “digital immigrants,” any notion of a cognitive difference between Millennials, Generation X, Baby Boomers, etc. rests on shaky ground. Rather than creating a dichotomy that differentiates how generations process and apply information, perhaps we should consider the idea that all generations have a disposition to prefer visual images and sparse narratives over dense text. If we acknowledge that the teaching of history from the early 20th century to the present has had many shortcomings, and that many students hate the way history is taught rather than the discipline itself, then the problem signals a failure of learning theory, content delivery, and purposeless lesson plans among educators rather than any cognitive difference in students today. One of the most exciting aspects of the digital humanities is that historians now have so many opportunities to utilize sources that go beyond textual descriptions of the past. I would argue that everyone can have their perceptions of the past sharpened through visual imagery, not just the “visual generation.”
  • Jonathan Berg, a Washington, D.C. library director and author of the awesomely titled blog BeerBrarian, cites a recent study in which 315 college students and recent graduates were surveyed about their use of digital technology. The study concluded that younger people were slightly more comfortable using digital technology than people older than them, but no more comfortable creating technology than older people. In sum, young people are comfortable being consumers of digital technology, but there is no evidence to suggest that they are confident in their cognitive ability to create technology. The study also shows that not all young people have access to the same technology: many use computers that were built ten years ago and/or lack access to smartphone technology.
  • Just because you have a smartphone or participate on Twitter does not mean that you are a “digital native” or that you understand the technology, source code, or power interests behind the creation of that technology. Again, consumption and creation are two very different concepts.

In sum, the notion of a “visual generation” composed of “digital natives” is a myth.


Ideas for Keeping the Tent Big

A brief addendum to my post yesterday on crowdsourcing and DHThis:

  • Ernesto Priego of City University London left a thoughtful comment on this blog that provoked new questions about DHThis and the nature of inclusiveness in dh. He correctly clarified that the “yes/no” binary I described is actually an “up/down” vote, and that the content of an essay, article, video, etc. is not itself submitted to DHThis. Rather, it is the link to that content that users submit. He also suggested that this formula complicates the voting system because “it’s not only the content being judged, but the participation via submission.” What motivates people to submit links to the site for voting? Is it to endorse that content, or is it like a Retweet on Twitter, which doesn’t necessarily function as an endorsement? And what happens if someone votes down a fellow colleague’s work?
  • Ernesto also pointed out that I had made no mention of DHNow, another website that functions as a repository for showcasing notable work in the dh community. I was only vaguely aware of DHNow’s existence prior to writing my post and knew little about the site. Because much of the discussion I had observed on Twitter and blogs revolved around concerns raised by the Journal of Digital Humanities, it did not occur to me to mention DHNow or its similarities to DHThis, which was an oversight on my part.
  • Jesse Stommel of the University of Wisconsin-Madison and I exchanged a few tweets about ideas for DHThis to consider for its voting system. We agreed that the yes/no/up/down voting system puts too little emphasis on discussion of content. Jesse suggested that a system for tagging similar posts be implemented, to which I responded that such a system could open the possibility of a “similar” or “recommended” reading section under each post on the site. If a popular post is featured on the front page, clicking on it would lead to similar content that has fewer votes but may still be worth reading. Such a system wouldn’t completely remove the “popularity contest” aspect, but I think it would allow less popular scholarship to be considered for featured status on the site. Adeline Koh of Stockton College (and one of the creators of DHThis) agreed that such a system might enhance the “discoverability” of content on the site.
  • Jesse and I agreed that the organizers of DHThis should consider keeping the yes/up vote option and removing the no/down vote option.

I enjoyed the exchanges that took place yesterday, and I thank Ernesto, Jesse, and Adeline for engaging in discussion with me when they really didn’t have to. I hope that readers didn’t perceive my last post as full of negativity. It is an exciting time to be involved with the Digital Humanities, as many traditional notions of creating, reviewing, and disseminating humanities scholarship are being challenged by the promises and perils of digital technology. The fact that three remarkably talented scholars and a Hoosier graduate student spread across two continents can exchange so many questions and arguments in a day’s time reinforces my astonishment at the changing nature of communication in the digital age. Such an exchange would never have happened five or ten years ago, and I think that’s cause for celebration, even if many of us still have a limited understanding of what the “digital humanities” we so frequently invoke actually is.