Digital History Minor Field

I completed Digital Humanities: Theory and Practice, a minor field readings course, in the summer of 2015 with Dr. Lincoln Mullen. This course provided an in-depth examination of the theories, methodologies, and current trends and issues within the field. The course culminated in a minor field exam, with Dr. Kelly Schrum serving as the second reader. Below are the blog posts I wrote each week reflecting on the assigned readings.


The Claws Come Out: The Syuzhet Debates

This week's readings on big history and humanities computing were both informative and fun, as I enjoy watching scholars critique each other and, sometimes, get catty. The syuzhet debates read something like this:

1) Jockers introduces syuzhet, a package for R that studies plot shifts through sentiment analysis. In the first two blog posts, he details the Fourier transformation, Euclidean distance, how plot shape is derived, and the distance matrix.

2) Swafford writes a blog post in response that further describes the algorithm behind syuzhet, which works by splitting a novel into sentences, assigning a positive or negative number to each sentence, and smoothing those numbers to derive the novel's foundation shape (a toy version of this pipeline is sketched after this list). She then discusses the various problems she ran into while working through syuzhet, including the package incorrectly interpreting multiple sentences as a single sentence; graphs that reflect word frequencies grouped by theme rather than the emotional valence of the text; and ringing artifacts producing the six or seven plot archetypes, rather than those archetypes emerging from genuine similarities between the emotional structures of the novels.

3) Jockers responds to these critiques in another blog post. He writes that the tool does not have to be perfect, just "good enough." While he maintains that he is sympathetic to Swafford's position on the sentence-level precision of syuzhet, he thinks that in this case it doesn't really matter: as long as the overall shape is reminiscent of the known sense of the novel's plot, the tool is working.

4) Swafford responds to the "good enough" claim in a--yup, you guessed it--blog post. She draws two conclusions: foundation shapes are not the right tool to use, since they do not always reveal the emotional valence of novels; and benchmarks are needed in order to evaluate syuzhet properly. In another blog post, she points to specific examples that Jockers had provided and shows how they are not true illustrations of syuzhet's success.

5) Jockers responds with the final blog post, writing that he believes the true test of the method lies in assessing whether or not the shapes produced by the transformation are a good approximation of the known shape of the story.
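
As an aside, the pipeline at the heart of the debate is simple enough to sketch. Below is a minimal Python approximation of the approach described in step 2 above; syuzhet itself is an R package that scores sentences against full sentiment lexicons, so the tiny lexicon and short example text here are invented stand-ins, not Jockers's actual code.

```python
# A toy version of the syuzhet-style pipeline, in Python for
# illustration only: syuzhet itself is an R package that scores
# sentences against full sentiment lexicons (AFINN, Bing, NRC).
import re
import numpy as np

# Invented mini-lexicon; a real lexicon scores thousands of words.
LEXICON = {"happy": 1, "love": 1, "joy": 1,
           "sad": -1, "death": -1, "fear": -1}

def sentence_scores(text):
    """Split the text into sentences and give each one a score by
    summing the lexicon values of its words."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return np.array([
        sum(LEXICON.get(w, 0) for w in re.findall(r"[a-z']+", s.lower()))
        for s in sentences
    ], dtype=float)

def foundation_shape(scores, keep=2):
    """Low-pass filter the sentence scores by keeping only the first
    `keep` Fourier components -- roughly how syuzhet smooths the noisy
    sentence-level values into a "foundation shape" (and the step that
    introduces the ringing artifacts Swafford describes)."""
    coeffs = np.fft.rfft(scores)
    coeffs[keep:] = 0  # discard the high-frequency components
    return np.fft.irfft(coeffs, n=len(scores))

novel = ("I love her. Joy everywhere! Then death came. "
         "Fear and more fear. Happy at last.")
print(foundation_shape(sentence_scores(novel)))
```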

Swafford pointed out several serious issues with the package, and Jockers responded by mansplaining, ultimately coming across as a very sore loser. Despite this, several good points were raised by others who blogged in response to the debate. Andrew Piper discussed the issue of validation and argued that humanists need their own particular form of validation. Scott Enderle wrote about Fourier transformations, arguing that ringing artifacts are a necessary consequence of the low-pass filter. Ben Schmidt found that Fourier transformations are not the correct smoothing function to use for plots.
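
The ringing that Enderle and Schmidt discuss is easy to see in a toy example. The sketch below is my own illustration rather than anything from their posts: it low-pass filters a "plot" that is flat except for one abrupt emotional shift, and the filtered curve oscillates where the original was constant.

```python
# Low-pass filtering a "plot" that is flat except for one abrupt
# emotional shift. The oscillations in the output (the Gibbs
# phenomenon) are artifacts of the filter, not features of the story.
import numpy as np

signal = np.concatenate([np.full(50, -1.0), np.full(50, 1.0)])

coeffs = np.fft.rfft(signal)
coeffs[5:] = 0  # keep only the five lowest-frequency components
smoothed = np.fft.irfft(coeffs, n=len(signal))

# The flat halves now ripple, and the curve overshoots past -1 and +1
# near the shift: spurious "plot movement" where none existed.
print(smoothed[:10].round(2))
print(smoothed.min().round(2), smoothed.max().round(2))
```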

This debate really showcased how gender plays a role in digital humanities. In the first week's readings, we read Miriam Posner's article "Think Talk Make Do" about women and coding in DH, and I think we have come full circle. Jockers's responses to Swafford came across as some of the best examples of mansplaining I have ever seen. I really felt that he was condescending and looking down on Swafford because of her gender. Women can code just as well as men can, and Swafford had every right to pick apart syuzhet and examine the algorithm for problems. She found some serious issues, which I felt Jockers brushed off by saying the package was "good enough." And speaking of "good enough," is that what we really want our tools to be? Do we want to work with something that is merely "good enough" and use it to analyze large quantities of data?


Visual Information and My Blog?

As I grappled this week with the readings on visualization, I started to think more about my blog and the portfolio component of our final exam for this course. The more I dwell on it, the more I realize that these two things do not go hand in hand. Visualization is about displaying information visually, and this site is pretty much the opposite. Yes, information is displayed here, but in a mostly written format. I have a few visualizations on this site: my Clio 1 final project and a few practicum blog posts for the same class. Despite my blog having (almost) nothing to do with this week's readings, I'm pleased with the small progress I've made so far with my portfolio. As the information on this site is a representation of me, my academic pursuits, and my initial foray into the world of digital humanities, it is important that this non-visual information be accurate, well-written, well-organized, and aesthetically pleasing.

Okay, so now onto the readings. First, what exactly is a visualization? I have been assuming that networks, text analysis, and mapping all fit under the umbrella of visualization. Isabel Meirelles, in Design for Information, discusses many different forms of visualization: infographics, which are visual displays in which graphics, combined with verbal language, communicate information that would not otherwise be discernible; hierarchical systems, in which ordered sets or subsets are organized in a given relationship to one another; relational structures, which organize data in which relationships are critical to the system being visualized; maps; and spatio-temporal structures, in which data belonging to both space and time are represented. These forms mesh with Edward Tufte's overarching push, in Beautiful Evidence and The Visual Display of Quantitative Information, toward graphics that display a relationship between two or more variables. David Staley's Computers, Visualization, and History notes that visualization projects organize information in spatial forms that are multidimensional. Staley elucidates how visualizations can benefit historians: by creating visual simulations and models based on primary sources, historians are able to explore patterns that would otherwise be unobservable, including simultaneity, networks, and multidimensional patterns. However, Staley observes that historians have not been quick to adopt visualizations in their studies, despite their usefulness.

The one article I was a bit confused by was Matthew Booker's "Visualizing San Francisco Bay's Forgotten Past." I was confused primarily because the article consists of a narrative history of San Francisco Bay, and while there were visualizations, mostly in the form of maps, there was no discussion of how the author created those visualizations, how they moved the narrative forward, or how they helped Booker frame his argument. The narrative format of this reading was especially evident because I read it immediately after Laura Klein's "The Image of Absence," which goes into specific detail on creating and using visualizations in the research and historical processes.


In Which I Honestly Assess Myself

So this is the blog post where I admit that I am finding this course more and more challenging. As we started out defining the field and then discussing the state of DH, whether that be public history, digital scholarship, etc., I was feeling very good about the readings and discussion. I found all of the readings intriguing, and I was able to engage with the topics and think about them in the context of my research. Now that we've moved on to the methodology portion of the course, I am having a much harder time. When Lincoln mentioned the quadratic equation last week while we discussed text analysis and topic modeling, my brain sort of exploded and oozed out through my ears. I am not a numbers person, a math person, or a statistics person, so the nuts and bolts of these different methodologies are hard for me to wrap my mind around.

This week's readings on networks did not make me feel any better. Together, the readings provide a detailed and thorough introduction to networks and their applications. Out of the articles, blog posts, and textbooks, the reading I got the most out of was Scott Weingart's two-part "Demystifying Networks." As the first reading of the week, it gave me a much more detailed look at networks than I had previously received in Clio 1, and I realized, as I did last week with text analysis, that this is much more complicated than I originally thought.

In the first paragraph of his article, Weingart points out that any data can be studied as a network, but states that this is a dangerous idea for two reasons: 1) while networks can be used on any project, they should be used much less often than they are, and 2) methodology appropriation is dangerous. Weingart then spends the rest of the article discussing the theoretical aspects of networks and the assumptions inherent in various network methods, both of which support his second reason why using networks can be dangerous. My quibble is that Weingart never specifically articulates what types of projects would benefit most from network analysis, or where such analyses have been most productive and useful. The rest of this week's readings include several that demonstrate applications of network analysis, including Elijah Meeks's posts "Visualization of Network Distance" and "More Networks in the Humanities or Did Books Have DNA?" I would have liked Weingart, who has extensive experience working with networks, to point out some projects he feels use networks best, and others he thinks don't need such analysis.
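
To make Weingart's warning about inherent assumptions concrete, here is a small sketch (with invented data) of one of the operations he discusses: projecting a bipartite network onto a single node type, which quietly turns "author X worked on book Y" evidence into author-to-author ties the sources never attest.

```python
# Invented example: a bipartite (bimodal) network of authors and the
# books they worked on, then a one-mode projection onto authors.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
authors = ["Author A", "Author B", "Author C"]
books = ["Book 1", "Book 2"]
B.add_nodes_from(authors, bipartite=0)
B.add_nodes_from(books, bipartite=1)
# Each edge is a directly observed relationship: this author worked
# on this book.
B.add_edges_from([("Author A", "Book 1"), ("Author B", "Book 1"),
                  ("Author B", "Book 2"), ("Author C", "Book 2")])

# The projection links any two authors who share a book -- an inferred
# tie the sources never directly attest, which is the kind of silent
# assumption a network method can smuggle in.
projected = bipartite.projected_graph(B, authors)
print(projected.edges())  # e.g. [('Author A', 'Author B'), ('Author B', 'Author C')]
```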


Text Analysis... So What?

Tim Hitchcock's "Big Data for Dead People" elucidates a problem I encountered when first introduced to text analysis and continued to grapple with while working on my Clio 1 project. Hitchcock notes that "distant reading seems to tell us what we already know." For my Clio 1 project, I used Voyant to analyze twentieth-century Supreme Court cases and newspaper articles that reported on those specific cases. When I first examined the results from Voyant, I couldn't help but think, "This is neat! But so what?"

Tim Hitchcock demonstrates how text analysis can be utilized for historical research using the case of Sarah Durrant as an example. To start, Hitchcock analyzed the record of her trial, using the trial transcript, records from her imprisonment, and the newspaper report of her case. He used an ngram viewer to compare Sarah's words to those of women of the same age and social class, to see if her linguistic patterns matched. Since the records included Sarah's address, Hitchcock was able to use GIS to examine her neighborhood and its inhabitants, including neighbors who lived in her building and along the same road. Sarah's trial was one of the first in which a detective gave evidence, showing that her case was unusual. Hitchcock also compared Sarah's experience with that of other defendants using the Old Bailey Online, and found that her plea coincided with the rise of plea bargaining. Using both close and distant reading, Hitchcock was able to contextualize Sarah's experience and demonstrate the importance of the information she left behind.
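
For readers (like me) who find the mechanics fuzzy, the core of an ngram comparison is just counting. The sketch below is a deliberately simplified Python illustration, not Hitchcock's tooling; the two snippets of text are invented, and a real comparison would run over large corpora with proper normalization.

```python
# Invented texts; a real comparison would run over large corpora.
import re
from collections import Counter

def bigram_freqs(text):
    """Return each word bigram's share of all bigrams in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    bigrams = Counter(zip(words, words[1:]))
    total = sum(bigrams.values()) or 1
    return {bg: n / total for bg, n in bigrams.items()}

defendant = "i never took the watch i swear i never took it"
comparison = "she said she never took the money and never would"

d, c = bigram_freqs(defendant), bigram_freqs(comparison)
# Bigrams the two samples share, with their relative frequencies --
# the raw material for asking whether linguistic patterns match.
for bg in sorted(set(d) & set(c)):
    print(" ".join(bg), f"defendant={d[bg]:.2f}", f"corpus={c[bg]:.2f}")
```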

While all this analysis feeds into Hitchcock's larger point of using these new digital humanities technologies to examine source bases that we do not already know (as opposed to sources we do know, i.e., books, print records, etc.), his article stood out for me because he did more than just textual analysis. Yes, it's great that text mining and topic modeling software can spit out word frequencies and a series of topics. But so what? Hitchcock's piece more than adequately answers the question: using digital methods, he was able to examine the data Sarah left behind to craft a larger story of the justice system in nineteenth-century England.

This "So what?" question goes back to the first blog post I wrote for this minor field, where I championed findings over methodology. I'm not going to reiterate this thought, as I've learned over the past several weeks the importance of DH being methodologically focused. But with text analysis, it is important for me, not DH as a field, to remember that I need to use these tools to prove a point and to shape an argument. Hitchcock demonstrated how I can use textual analysis to contextualize data, and how that contextualization can lead to larger findings. Contextualization is key, as the methodology will generally tell the user what he/she already knows. Text analysis is incredibly useful for many reasons, but I have to remember to look past the words and numbers, take that next step, and look at the "bigger picture."


The Issues of Digital Scholarship

This week's readings on digital scholarship encompassed many different topics: digital articles, evaluation, dissertation embargoes, open access, copyright, and more. So what is digital scholarship and what are the main issues surrounding it?

Ed Ayers defines digital scholarship as "discipline-based scholarship produced with digital tools in a digital form." According to Will Thomas, digital scholarship can be divided into three separate genres. Interactive scholarly works (ISWs) are hybrids of archival materials and tools situated around a historiographically significant topic; ORBIS is one example of an ISW. Digital projects, or thematic research collections (TRCs), usually involve large investigations into a complex problem, such as the Valley of the Shadow project. Digital narratives make up the third genre; these works typically feature a historical argument supplemented by evidence and provide multiple entry points for readers. Sheila Brennan writes that grants provide another entry point for understanding what comprises digital scholarship: grant proposals must articulate the intellectual motivations behind a digital project, are peer-reviewed, are written without technical jargon, and promise specific deliverables. In short, grants allow outsiders to understand digital scholarship (in whatever form it might take), its purpose, and its significance. The Journal of Digital Humanities is one model of how digital scholarship can be collected, curated, disseminated, and accessed, as Joan Troyano details in her article.

Digital scholarship comes with a lot of baggage. To begin with, there is no universal definition of what digital scholarship is and what the term encompasses. Will Thomas writes that "the forms, practices, and procedure of creation in the digital medium remain profoundly unstable and speculative." Since there is no established definition, there are no standard conventions and no set rules for evaluating such scholarship. Professional organizations, especially the American Historical Association, have not been especially proactive in promoting digital scholarship or in helping to establish guidelines for evaluation. In the absence of codified instructions for assessing digital scholarship, several DHers have suggested their own: Trevor Owens, Geoffrey Rockwell, Todd Presner, and James Smithies have all come up with recommendations, which should help generate a larger conversation on this topic.

There is also a debate among scholars about what form digital scholarship should take: should it be traditional scholarship in a digital form, or should the digital be completely different and do only things not possible in an analog form? Tim Hitchcock asserts that historians need to ensure that digital scholarship is as rigorous as its analog counterpart; otherwise the study of history, like the book, will be "dead." Other scholars have argued against this, and Will Thomas questions whether digital articles should conform to the conventions of print.

Finally, collaborative work done in the digital realm is important, and tenure and promotion (T&P) committees need to acknowledge it as such. These committees are another can of worms: often they do not treat digital scholarship as being as important or significant for tenure and promotion as traditional scholarship, which presents particular obstacles to junior scholars and graduate students whose work is digital. In order to be successful going forward, Ed Ayers says, digital scholarship needs a greater focus, purpose, and sense of collective identity.


Digital Pedagogy and the Digital Divide

One of the ideas presented by Mills Kelly in his book Teaching History in the Digital Age is that professors need to meet students where they are and engage them in the space where they live. This means that professors and teachers need to be willing to incorporate current technological trends into their syllabi and course teachings, and that by doing so they meet their students in a realm in which the students are already comfortable. This is usually the space in which they are creating content for others to see, use, and remix. Professors should teach students how to be historians in a digital space by showing them how the practices of the historian can be applied there.

Jody R. Rosen and Maura A. Smale's "Open Digital Pedagogy = Critical Pedagogy" echoes these same ideas. They want students and professors to engage in productive dialogue by using open digital tools and platforms. They state that college spaces typically reinforce traditional hierarchies but virtual spaces do not have to do so. By promoting the use of digital tools, students can bring in prior knowledge and rely on their own experiences more readily than in a traditional classroom setting. Open digital pedagogy can "facilitate student access to existing knowledge and empower them to critique it, dismantle it, and create new knowledge."

I agree wholeheartedly with both propositions and think that such techniques should be employed in all educational institutions. But I also think that such claims warrant mention of the digital divide. How do professors teach in the digital realm in places where the internet is not quite as reliable or fast? How do students gain access to the internet? How does the digital divide affect pedagogy? How does the digital divide affect students, especially those who are considered "digital natives"? How can the digital divide be overcome so students can become digital historians?

My favorite chapter in Teaching History in the Digital Age was the final one, "Making: DIY History?" Kelly discusses his course Lying About the Past, in which students produced a historical hoax in order to learn about the historical process. The ethics of making such a hoax are only briefly mentioned, but I think ethics and pedagogy go hand in hand in many instances. When is it okay for students to "remix" copyrighted sources? Was it ethical for students to lie to a reporter who interviewed them about the last American pirate? Students need to learn how to be ethical, responsible historians, and I think that those who took Lying About the Past probably debated such issues much more heavily than students who didn't have the opportunity to take such a course.


"The Public" and Public History

A quick recap of a selection of this week's readings on public history:

  • Carl Smith's "Can You Do Serious History on the Web?" asks whether history that is put online can be considered professional and describes the process of creating The Great Chicago Fire and the Web of Memory.
  • Mark Tebeau discusses the importance of oral history and the notion of "community sourcing" in the context of the Cleveland Historical project, a mobile app and mobile-optimized website, in "Listening to the City: Oral History & Place in the Digital Era."
  • "Digital Storytelling in Museums: Observations and Best Practices," authored by Bruce Wyman et. al., emphasizes that in a digital age, the scholars' voice exists in a much louder world information-wise and the importance of making online space a museum's "fifth gallery."
  • When Melissa Terras decided to examine the most downloaded items at major institutions across the UK, chronicled in her blog post "Digitization's Most Wanted," she discovered that that question opens up the door to discussing broader topics of public engagement, the impact of social networks on digital collections, and the significance of making primary sources open and available to others.
  • Michael Peter Edson, in "Dark Matter," discusses the how museums could be more influential in the digital age and in the internet community by embracing creativity and by using the many facets of the internet to reach a broader audience.
  • "Life on the Outside: Collections, Context, and the Wild, Wild, Web," by Tim Sherratt, discusses the ways in which the public engages with online content, and how the public often uses digital content in unexpected ways.
  • Sheila Brennan, in her blog post "The Public is Dead, Long Live the Public," discusses the need for digital public historians to identify which "public" they are trying to attract with digital public history projects, and argues that simply putting a project online does not make it public history.

Many topics and ideas are brought up in these readings: the need to understand and identify "the public" in order to have successful, meaningful digital projects; how GLAMs can harness the internet to their advantage to attract both virtual and real visitors to their analog and digital collections; how the public often uses digital content in ways scholars didn't originally think of; and the ways in which the public can be used to create digital projects.

I came up with (another) list of lessons learned from this week's readings:

  1. Understand and define "the public," or the audience: What struck me as most interesting is how institutions and public historians often fail to fully understand "the public" because they do not take the time to define it. One of the reasons Histories of the National Mall is so successful is that the project creators identified, invited, and addressed their target audience at all stages of its creation and implementation. Since public history is so focused on audience, it is vital for public historians to understand and identify which particular audience they are trying to reach. "The public" should never be thought of as a large, amorphous other of non-scholars. If the audience is not defined, then there is no way to gauge the relative success of the project, exhibit, etc.
  2. Reach out to that audience via the internet: Once the audience is defined, then the institution or historian can reach out to them via the internet. This can be done in many ways: creating digitized content, creating a social media presence, or having a YouTube channel like the Green brothers. As is noted in "Dark Matter," this online work should not be demeaned as being less important than other analog or scholarly work. The internet is huge and an institution can reach a very large audience by having a powerful online presence. Institutions should use the internet to attract visitors to their collections, whether they are physical or online visitors. Note that reaching out to an audience is not possible if that audience is not first defined.
  3. Do not be afraid of creativity: As evidenced in "Digitization's Most Wanted" and "Life on the Outside," digitized content will often be taken out of context and used in unexpected ways. Institutions should not fear the decontextualization of their collections--they should accept that it will happen, and when it does, they should use it to their advantage. This could be achieved by posting on a social media platform and giving background information about the particular object (e.g. the dog with the pipe), among many other ways. By embracing creativity, institutions demonstrate that they are cognizant of and supporting their audience.



The Problematic Lack of Transparency

This week's readings discussed databases and the ways in which such technology affects the historical profession. Patrick Spedding's article "The New Machine: Discovering the Limits of ECCO" touches briefly on something we mentioned in discussion last week: ECCO's OCR transcriptions are not available to the users of the database. This is a problem all historians have encountered. When working on my Clio 1 final project, which utilized the text mining tool Voyant, I went to ProQuest and pulled a selection of newspaper articles written in the twentieth century. ProQuest is one of the many databases that does not provide a transcription of the OCRed text, so I had to transcribe all of my newspaper articles by hand. While it was time-consuming, in the long run it was probably quicker than going through an OCR transcription and fixing each and every error.

This topic is perhaps better suited to last week's readings on digitization, but I am perplexed as to why databases so frequently do not give users access to OCR transcripts. Pete had mentioned that databases essentially have "black boxes," meaning that we don't fully understand what sort of process our data has gone through. Why can't all databases follow the model set by Chronicling America? While I am fairly certain my entire Clio 1 class was appalled by its poor OCR quality, showing the OCRed transcripts promotes transparency and openness. Not only are OCR transcripts not provided, but users of databases like ProQuest are given no information about the accuracy of the OCRed text. Digital history as a field promotes transparency, and as digital scholars I feel we should do something to correct this problem. It is important to understand how the data we are seeking has been transformed. For example, what had to happen for this article on Roe v. Wade to appear as I see it now on my computer screen? What sort of process did the article go through to appear in this digitized format? Databases need to be accountable to their users and share their practices.
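
To make concrete what that transparency might look like: OCR engines already produce per-word confidence scores alongside the text, so a database could expose both. Here is a minimal sketch, assuming the open-source Tesseract engine via the pytesseract library; "page.png" is a hypothetical scan of a newspaper page.

```python
# A sketch of the transparency databases could offer: the OCR text of
# a scanned page together with per-word confidence scores. Assumes the
# Tesseract engine via pytesseract; "page.png" is a hypothetical scan.
import pytesseract
from pytesseract import Output
from PIL import Image

data = pytesseract.image_to_data(Image.open("page.png"),
                                 output_type=Output.DICT)

# Print each recognized word with Tesseract's confidence score -- the
# kind of accuracy information databases like ProQuest never surface.
for word, conf in zip(data["text"], data["conf"]):
    if word.strip():
        print(f"{conf:>3} {word}")
```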

Caleb McDaniel's blog post "The Digital Early Republic" examines the practices of historians who do not identify as digital historians and how they use databases in their research. McDaniel notes that there are no specific conventions for citing searches performed in databases, for naming which databases were used, or for reporting the results such searches yielded. This lack of citation conventions continues to pose a problem for digital historians, and even for those historians who do not identify with the digital part of the field but use databases while researching. Once again, there is a lack of transparency. We've discussed how historians are sometimes hesitant to admit they used digital sources and digital methodologies, because those historians often come under stricter scrutiny than historians who researched the "old-fashioned" way. We need to abandon this tendency to judge the digital more harshly than the analog: all methodologies should be examined, discussed, and meticulously documented. We need to create standards for citing digital sources, especially the practices mentioned by McDaniel, since all historians engage in them whether or not they cite them as such.


The Changing Role of the Historian

The theme that came to my mind most often while going through this week's readings on digitization and crowdsourcing is the changing role of the historian, and how digitization has accelerated that change. Because of digitization, historians now have to be comfortable fulfilling roles typically filled by curators, librarians, and archivists, an idea voiced last week by Andrew Prescott in his article "An Electric Current of the Imagination." This idea carried through to this week's readings, and I felt that Stephen Brier and Josh Brown's article on the 9/11 Digital Archive illustrated the point best. In their article, they describe building the archive, the steps they took to make certain it would be a well-rounded collection, and how they ensured its preservation for future scholars. With the goal of creating an archive that highlighted the voices of those whose memories would otherwise have been lost in the media deluge in the days and weeks following the attacks, the American Social History Project and CHNM collaborated to create what is now the 9/11 Digital Archive. Brier and Brown note that after 9/11, historians had to work as archivist-historians in order to successfully collect, catalog, and make data accessible.

While working at CHNM over the past academic year, I was given the chance to work with the Archive. Along with Stephanie and Jordan, I added items to a collection and described each item, and after a successful review, the collection was made public. When I worked in Public Projects in the spring semester, I reviewed and made collections public, and wrote blog posts highlighting the Boston Federal Aviation Administration Filings, Sonic Memorial Project, and Middle East and Middle Eastern America Center Interviews collections. While undertaking this work, I fulfilled many different roles: I acted as a librarian in accessioning items to various collections; I reviewed items already public and ensured the metadata attached to each item was correct and up-to-date, thus curating the existing data; and I analyzed the data in order to write blog posts that would appeal to a wide audience, a responsibility many public historians have today.

Crowdsourcing has also experienced a huge boom thanks to digitization. Trevor Owens's series of blog posts on crowdsourcing and cultural heritage institutions was particularly enlightening. In these posts, Owens breaks down the meaning of crowdsourcing and argues against the derogatory connotations of the word, argues that crowdsourcing projects fulfill the mission of digital collections, and studies participants' motivations for engaging with crowdsourcing projects. Thanks again to CHNM, and to Public Projects specifically, I worked on a large crowdsourcing project: the Papers of the War Department (PWD). I created user accounts; protected and exported documents that had been transcribed; and raised awareness of the project by tweeting about nominated documents and writing monthly blog posts, either in the form of a community transcription update or a "transcribe this" post. While working on the PWD, I experienced what Owens wrote about in his blog posts. Crowdsourcing should not be looked at in a negative light; projects that employ it often allow transcribers (I'll refer to all volunteers working on a crowdsourcing project as transcribers simply to maintain uniformity, though I understand not all crowdsourcing work is transcription-based) to work with documents in ways that would not otherwise be possible. The interaction between the transcriber and the document is a special and unique one, as the experience could not have been produced in any other environment.

In conclusion, digitization has changed the role of the historian in dramatic and necessary ways. I remember my high school guidance counselors and teachers saying how important it is for students to be well-rounded in order to be attractive to college admissions boards. Historians today likewise have to be well-rounded: as a result of digitization, they have to know more than just the typical craft of the historian. Historians must also be open to crowdsourcing and the opportunities it presents, not just for the crowdsourced project but for the users who work on it.


The Eventual Sunset of Methodology?

The minor field I’m completing this summer with six of my peers is titled “Digital Humanities: Theory and Practice,” and our first week’s readings focus on the history of the field and the various definitions of both digital history and digital humanities. These readings touch on many themes and issues within digital humanities, and this week the authors of the articles and blog posts identified many broadly configured needs: the need to transform the DH community both internally and externally (women and people of color should be better represented within the field, and DH needs to reach a broader audience); the need to re-focus the field; the need to develop better DH tools; and the need to focus on what the tools reveal about the data. Another theme running through these readings is that digital technology allows scholars, and an even broader audience of non-scholars, to work in new ways.

Out of this week’s collection of readings, the one that I found most interesting was Cameron Blevins’s 2015 blog post “The Perpetual Sunrise of Methodology.” Blevins argues that it is time to shift the focus of DH away from methodology and toward findings and argument. We are now at a point where the significance of the findings should take precedence over the tools used to reach them. I agree wholeheartedly with Blevins, as he raises issues that I’ve struggled with since August, when I first encountered digital humanities. I’ve always felt that there was something missing whenever these tools were under discussion, whether in class or in an informal meeting of my peers and fellow graduate students at the Center. Without a doubt it is important to know how to use these tools and to be able to elucidate how digital methodologies were used to interpret and analyze the data in question. But it is equally important to be able to discuss what these tools are showing us about that data. What arguments can be made from the resulting information? What are the larger claims? What does this tell us about the topic at hand? Why is this significant? How do the findings actually further our understanding of history? In Blevins’s case, four years after he blogged about topic modeling and Martha Ballard, he published an article in the Journal of American History challenging the notion that the nineteenth century was a time of integration and incorporation. He used digital methods to examine a newspaper printed in Houston, Texas in the 1890s, and found that regional matters heavily outweighed national ones. Despite attempting to reach a different audience than he had with his Martha Ballard blog post, his colleagues once again focused more on the methods than the results.

When Micki Kaufman came to the Center in March to present on her dissertation, “‘Everything on Paper Will Be Used Against Me’: Quantifying Kissinger,” she spent a great deal of time discussing her digital methodology. This discussion was both enlightening and insightful, and I learned so much in a short period of time. In fact, I wouldn’t mind if she could come to the Center once a week to give us updates on her progress. In the Q&A afterwards, Sean Takats, director of Research at the Center, told her that as she had employed such a wide variety of digital methodologies and analyzed such a large corpus of information, the next step is to take her findings and make a larger claim about what she had found. I think that, as a field, we need to take Sean’s suggestion and start to make those arguments. Digital methodologies are not going to advance our study of history; digital methodologies are already advancing the study of history.

This leaves two questions:
1) Why is the DH community so focused on methodology rather than significance?
2) How can we change the focus away from methodology and towards significance?
One of my theories is that there are still many scholars who are uncomfortable using digital methodologies, and so they discuss these methodologies in order to comprehend them better and apply them to their own research. Shifts within the field and in the historiography always take time to become fully integrated and embraced by all scholars, so this is not a particularly surprising observation. At the same time, though, it is also important to ask how these tools can be useful for more than simply analyzing data. How can we use text mining to prove larger arguments? How can we use visualizations to craft claims? I think a good starting point for shifting the focus away from methodology and towards significance is to publish more articles like Blevins’s in the Journal of American History: he uses digital methodologies but also makes an important and significant argument. I’ve read a lot this past year about using such tools, but not enough about the significance of the findings. That might be a good place to begin.

