DHQ: Digital Humanities Quarterly: 2025
Previous Issues
- 2025: 19.2
- 2025: 19.1
- 2024: 18.4
- 2024: 18.3
- 2024: 18.2
- 2024: 18.1
- 2023: 17.4
- 2023: 17.3
- 2023: 17.2
- 2023: 17.1
- 2022: 16.4
- 2022: 16.3
- 2022: 16.2
- 2022: 16.1
- 2021: 15.4
- 2021: 15.3
- 2021: 15.2
- 2021: 15.1
- 2020: 14.4
- 2020: 14.3
- 2020: 14.2
- 2020: 14.1
- 2019: 13.4
- 2019: 13.3
- 2019: 13.2
- 2019: 13.1
- 2018: 12.4
- 2018: 12.3
- 2018: 12.2
- 2018: 12.1
- 2017: 11.4
- 2017: 11.3
- 2017: 11.2
- 2017: 11.1
- 2016: 10.4
- 2016: 10.3
- 2016: 10.2
- 2016: 10.1
- 2015: 9.4
- 2015: 9.3
- 2015: 9.2
- 2015: 9.1
- 2014: 8.4
- 2014: 8.3
- 2014: 8.2
- 2014: 8.1
- 2013: 7.3
- 2013: 7.2
- 2013: 7.1
- 2012: 6.3
- 2012: 6.2
- 2012: 6.1
- 2011: 5.3
- 2011: 5.2
- 2011: 5.1
- 2010: 4.2
- 2010: 4.1
- 2009: 3.4
- 2009: 3.3
- 2009: 3.2
- 2009: 3.1
- 2008: 2.1
- 2007: 1.2
- 2007: 1.1

ISSN 1938-4122
DHQ: Digital Humanities Quarterly
2025 19.3
Articles
[en] Slow, Painful and Expensive: Current Challenges in
Text-Mining Corpus Construction for the Digital
Humanities
Matt Warner, Stanford University; Nichole Nomura, University of Wyoming; Carmen Thong,
Stanford University; Alix Keener, Stanford University; Alexander Sherman, University
of Texas, Austin; Gabi Birch, Stanford University; Maciej Kurzynski, Lingnan University;
Mark Algee-Hewitt, Stanford University
Abstract
[en]
The process of assembling corpora for text-mining-based Digital Humanities projects
is a crucial and yet frequently overlooked aspect of the research process. Because corpus building is often complicated by text availability and cost, as well as by legal restrictions on in-copyright text, DH scholars frequently resort to “found” corpora marketed to libraries by publishing companies or to questionably sourced corpora that inhabit legal
grey areas. While such corpora have led to methodological developments in the field,
there is a general sense that the biases of these corpora and the inability to share
their raw data have made them imperfect vehicles for large-scale critical claims in
the humanities. Recent developments, however, suggest that this situation may be
changing. In the United States, the 2021 text and data-mining exemption to the
Digital Millennium Copyright Act (DMCA) has promised to improve the viability of
bespoke corpora for text-mining research. In this paper, we put these improvements
to
the test, reporting on our efforts to source a relatively small corpus of literary
theory monographs. Focusing primarily on born-digital works and operating under all
of the practical and legal constraints dictated by the exemption to the DMCA, we
sought to assemble a corpus of 402 pre-selected theoretical works. We found that,
despite the recent legal changes, and even with extensive support from a
well-resourced library, it remains overly difficult to assemble a pre-selected corpus
of scholarly works, even under ideal financial and institutional conditions. While
scholars outside of the United States will face somewhat different legal restrictions
on the collection of electronic texts than we did, we found that many of the
obstacles we faced were practical, rather than regulatory, and in many cases, we
found that scanning books was the easiest and most efficient route to digital
versions of the texts we sought.
[en] Rewiring Digital Humanities through an Ethics of
Ecological Care
Photini Vrikki, University College London; Güneş Tavmen, King’s College London
Abstract
[en]
This paper advocates for a fundamental transformation of the Digital Humanities
(DH) field through the adoption of an ethics of ecological
care, challenging the discipline’s current entanglement with
environmentally damaging digital infrastructures. Drawing on feminist care
ethics, postcolonial ecocriticism, and environmental humanities, we argue that
DH must move beyond surface-level sustainability and engage in a deep, critical
reassessment of its pedagogies, methodologies, and institutional affiliations.
The paper critiques DH’s complicity in extractive practices and digital
techno-solutionism, calling for a shift from neutrality to active environmental
accountability by introducing a two-pronged strategy: rewiring DH methodologies
to reflect ecological awareness and embedding ecological care into DH education.
By examining existing projects within Digital Environmental Humanities (DEH) and
eco-critical DH, we highlight pathways for building more inclusive, decolonial,
and care-centred practices. We imagine a rewired DH that is not a neutral
academic space but a dynamic, ethical actor capable of contributing to planetary
health and environmental justice. Positioning DH as a critical site for
cultivating collective responsibility in the face of ecological precarity, the
paper envisions a “care-full” DH committed to resisting exploitative
systems and fostering sustained, interdisciplinary engagement with the climate
crisis.
[en] Let the Light in. Using LiDAR- and Photogrammetry-based BIM Reconstruction to Simulate
Daylighting in the House of Trebius Valens, Pompeii
Nikolai Paukkonen, Doctoral School of Geosciences, University of Helsinki, Finland
Abstract
[en]
From ancient authors such as Vitruvius, we know that the Romans considered lighting, and especially daylighting, an important factor in designing buildings. A typical Roman house received most of its daylight not from windows, which were usually few and small, but from an atrium and a peristyle that were open to the sky at its centre. The quantity of light determined what kinds of activity were possible inside and at which hours, and shaped the individual’s general experience of the space. Pompeii offers many well-preserved remains of actual houses that allow attempts to reconstruct this phenomenon.
Here a reconstruction and daylighting simulations of the House of Trebius Valens in Pompeii are presented. The reconstruction, done using Autodesk Revit, is based on LiDAR and photogrammetric documentation of the building. The lighting simulations were performed with the Light Stanza add-on. By assessing hourly variation in lighting across different spaces, it is possible to visualize the daylighting schedule of the house. Through careful inspection of the results, this article paints a picture of a Roman house’s constructed relationship with natural light.
[en] Genre 2.0? Embedding-Based Cluster Analysis as a
Tool for Text Classification in Medieval Hebrew Literature
Annabelle Fuchs, University of Haifa
Abstract
[en]
The classification of medieval Hebrew literature has long relied on historically
inherited genre labels, often leading to misassignments and blurred textual
boundaries. This study applies transformer-based cluster analysis to a corpus of
60 texts, using BEREL embeddings and hierarchical clustering to evaluate whether
selected computational methods can provide empirically grounded, complementary
insights to traditional genre classifications. The analysis identifies several
stable clusters, including a distinct “Narrative” cluster, reinforcing
prior research that questions the categorization of certain texts as “Aggadic
Midrash.” While some clusters align with established classifications,
others highlight ambiguities that challenge conventional taxonomies. The study
demonstrates that computational clustering can systematically capture textual
affinities, revealing relationships that may remain obscured in traditional
approaches. These findings establish a methodological framework for reassessing
genre structures in Hebrew literature, laying the groundwork for future research
based on expanded datasets and manuscript evidence.
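As a rough illustration of the kind of workflow this abstract describes (not the authors’ actual pipeline), a minimal embedding-plus-clustering sketch in Python might look like the following. The model identifier, pooling strategy, texts, and cluster count are placeholder assumptions; the published study uses BEREL embeddings and its own corpus preparation.

    # Minimal sketch: document embeddings + hierarchical clustering.
    # MODEL_NAME is a placeholder; substitute a real checkpoint.
    import torch
    from transformers import AutoTokenizer, AutoModel
    from scipy.cluster.hierarchy import linkage, fcluster

    MODEL_NAME = "some-org/hebrew-bert-model"  # hypothetical identifier
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModel.from_pretrained(MODEL_NAME)
    model.eval()

    def embed(text: str) -> torch.Tensor:
        """Mean-pool the last hidden layer into one vector per text."""
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
        return hidden.mean(dim=1).squeeze(0)

    texts = ["placeholder text one", "placeholder text two"]  # one text (or chunk) per entry
    matrix = torch.stack([embed(t) for t in texts]).numpy()

    # Ward linkage over the embedding matrix; cut the tree into k clusters.
    Z = linkage(matrix, method="ward")
    labels = fcluster(Z, t=5, criterion="maxclust")  # k=5 is arbitrary here
    print(labels)
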
[en] Rondo: A Minimal Single Page Application for
Digital Exhibits
Nick Szydlowski, San José State University
Abstract
[en]
Minimal computing is a promising conceptual framework for digital humanities
infrastructure, but the static site architecture most commonly associated with
minimal computing can present a steep learning curve, particularly in a workshop
or classroom context. This article introduces Rondo, a new minimal framework for
digital exhibits which requires no software installation or command line
interaction. Rondo differentiates itself from static site tools by adopting a
single page application architecture with data stored in a Google Sheet, but it
includes tools to disconnect from Google Sheets and create a static site with no
external dependencies. Rondo’s reliance on Google Sheets reflects an approach to
computing which draws inspiration from Agnès Varda’s film The Gleaners and I. Gleaning, the practice of collecting food or
other resources left behind by commercial enterprises like farming, is proposed
as a productive framework for exploring the relationship between digital
humanities and the technology industry. Rondo’s integration with the Digital
Public Library of America presents another way to explore the possibilities of
small acts of curation, criticism, and juxtaposition using resources gleaned
from larger institutions and corporations.
[en] Explicit!: Coding Carceral Censorship and Social
Biases in United States Prisons
Kim Bobier, Pratt Institute and Smithsonian American Art Museum; Sue Jeong Ka, New
York University's Asian Pacific American Institute
Abstract
[en]
Social practice artist Sue Jeong Ka’s ongoing digital database, Coding Carceral Censorship (CCC) (2019-), charts the
United States carceral system’s publication censorship. Ka made data categories
to track the knowledge production that US prison book bans target and why.
Concentrating on the most common justification for this censorship, sexual
explicitness, her initial data analysis reveals restrictive penal trends that
devalue women’s and East Asian sexual speech via romance/erotica literature and
manga bans, respectively. These patterns reflect how national white
cis-hetero-patriarchal values play an outsized role in determining which types
of expression the government protects and prioritizes. Our article argues that
through its capacity to identify such under-examined censorship patterns, CCC
underscores Digital Humanities’ (DH’s) potential to intervene in and interpret
the prison industrial complex’s social biases. CCC’s data indicate that US
prison bans enact these biases through excessively barring content by and about
marginalized groups, thereby impeding incarcerated people’s access to socially
marginalized perspectives and authors. In framing CCC as both critically made DH
and part of Ka’s socially engaged art, we further suggest that her database
illuminates how DH and socially engaged art practices can be mutually
enriching.
[en] Digital Hermeneutics, Medieval Texts, and Urban History: A Case Study from Aberdeen,
Scotland
Wim Peters, Text Dimensions (NL); William Hepburn, University of Aberdeen (UK)
Abstract
[en]
This article presents an inquiry into the use of natural language processing (NLP)
methods to enrich, rather than replace, hermeneutical workflows in historical research.
Making use of digital technologies in the form of existing tools and custom computational
processing, it advocates an approach that fosters deep text interpretation by historical
scholars with the aim of incrementally addressing and expanding the range of research
questions asked about a particular theme, using a particular textual corpus.
In general, this paper argues that success hinges on the possibility of incrementally
and systematically unlocking new data for hermeneutical knowledge acquisition and
integration without compromising the role of the human historical researcher and their
core scholarly analysis methods. This entails that, just as in a traditional manual close-reading effort, the scholar should retain maximum control of research activity and strategy.
Our main finding is that the digital hermeneutical method applied in the described
work provides relevant results for a ‘gude compt and rekning’ (the late medieval concept
of ‘good account’ described in this article) of the conceptual structure of our domain.
The general conclusion we draw from this is that NLP brings possibilities for more
focused and fine-grained qualitative text analysis in this domain, while allowing
easy access to a global perspective on the texts under study. We contend that a combination
of tailored quantitative and qualitative text analysis methods can be integrated into
a flexible research workflow, which empowers the hermeneutical work of humanities
researchers.
[en] Stacks and Intersections: Feminist Thinking in
Digital Humanities, a view from “these islands”
Caroline Bassett, University of Cambridge; Kylie Jarrett, University College Dublin;
Sharon Webb, University of Sussex
Abstract
[en]
Intersectional feminist work around digital archiving can productively develop
stack models to address questions concerning the location, site, and materiality
of possible and effective feminist intervention. Developing this proposition, we
draw on two DH research projects undertaken between the UK and Ireland (2020 and
on-going) focussing on work around community archives at a moment of their
radical transformation.
Asking what feminist DH can be in relation to these archives, we engage with the
hollowing out of intersectional feminism – noting that it at times becomes a reached-for category rather than a useful signifier, while also recognizing that
theorizing forms of intersection between categories and groups experiencing
discrimination is essential to grappling with issues including new forms of
institutionalization, context dependency, and privacy.
An exploration of various forms of Critical Race Theory (CRT) and of more
historically materialist thinking around intersectionality enables us to develop
a feminist-informed stack approach able to grapple with the complicated stakes
of archiving histories of trauma – including those undertaken by those in
different situated positions.
We use this to inform a discussion of “where feminism should act” and
develop an argument that stack models – echoing in theory what is found in
material infrastructure – can be useful as guides to think about appropriate and
expanded sites for feminist intervention in DH.
[en] Expertise vs. statistics. A qualitative
evaluation of three keyness measures (logarithmic Zeta, Welch’s t-test, and
Log-likelihood ratio test) applied to subgenres of the French novel
Julia Röttgermann, University of Trier, Germany; Keli Du, University of Trier, Germany;
Julia Havrylash, University of Trier, Germany; Christof Schöch, University of Trier,
Germany
Abstract
[en]
This paper continues an ongoing investigation of measures of distinctiveness
(also known as keyness measures), this time employing a qualitative, comparative
evaluation of three different measures: logarithmic Zeta, Welch’s t-test, and
Log-likelihood ratio test. Our domain of application is the contemporary French
novel, more specifically four types of French novels from the period 1970-1999,
namely: sentimental novel, crime novel, science fiction, and “littérature blanche”
(literary
fiction).
Our evaluation proceeds in the following steps: First, we establish important
abstract characteristics of specific literary subgenres based on a synthesis of
close readings of scholarly literature on these subgenres, resulting in
qualitative, expert-based “subgenre profiles.” Second, we use a purely
statistical approach, namely three different measures of distinctiveness, to
identify words that are expected to be statistically typical or characteristic
of groups of texts such as subgenres, when compared to other texts. Finally, we
compare expertise and statistics, that is, attempt to establish, for each of the
four subgenres, a mapping between individual words found to be statistically
distinctive of this subgenre and specific aspects contained in the relevant
subgenre profile and count the matches.
It turns out that each measure yields a different list of most distinctive words and therefore relates differently to the subgenre profiles. The analysis of
these varying degrees of overlap contributes to a better understanding of the
characteristics of and differences between the three measures, while also
serving as an example of a qualitative evaluation of a statistical measure.
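For orientation, one of the three measures named above, the log-likelihood ratio (Dunning’s G2), can be computed from simple contingency counts. The sketch below is a generic illustration with invented numbers, not the authors’ implementation; logarithmic Zeta and Welch’s t-test are computed differently and are not shown.

    # Minimal sketch of one keyness measure, the log-likelihood ratio (G2),
    # comparing a word's frequency in a target subgenre against a comparison corpus.
    import math

    def log_likelihood_ratio(a: int, b: int, c: int, d: int) -> float:
        """a, b: occurrences of the word in target / comparison corpus;
        c, d: total token counts of target / comparison corpus."""
        e1 = c * (a + b) / (c + d)  # expected count in target
        e2 = d * (a + b) / (c + d)  # expected count in comparison
        g2 = 0.0
        if a > 0:
            g2 += a * math.log(a / e1)
        if b > 0:
            g2 += b * math.log(b / e2)
        return 2 * g2

    # Made-up counts: a word appearing 420 times in 1.2M tokens of one subgenre
    # and 150 times in 3.5M tokens of the comparison corpus.
    print(log_likelihood_ratio(420, 150, 1_200_000, 3_500_000))
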
Case Studies
[en] "I was painted by...": A Case Study on the Use of CNNs for Image Classification in
the Humanities
Marta Kipke, Institut für Digital Humanities, Georg-August-Universität Göttingen;
Lukas Brinkmeyer, Information Systems and Machine Learning Lab, Stiftung Universität
Hildesheim; Martin Langer, Institut für Digital Humanities, Georg-August-Universität
Göttingen; Lars Schmidt-Thieme, Information Systems and Machine Learning Lab, Stiftung
Universität Hildesheim
Abstract
[en]
EGRAPHSEN is a case study on image classification in the humanities, specifically on painter attribution for Attic vase paintings.
This study aimed to explore the new perspective that artificial intelligence (AI) can offer when studying traditional methods and heterogeneous domains.
When we translate the task (painter attribution), we have to consider the idiosyncrasies of the data domain (Attic vase paintings), which is challenging for both classical archaeologists and computer scientists.
In this paper, we address how to approach the challenges in the creation of the dataset.
We carefully selected and prepared the data,
reflected on potential biases and trained a convolutional neural network (CNN) accordingly.
Specifically, we developed sampling criteria to
combat the biases and a hierarchical labelling system to segment the images into details.
Our model architecture was designed to
process sets of images instead of only one individual image, which enables us to experiment
with different combinations of image segments.
This forms the basis for an analysis framework, which allows us to go beyond mere
painter attribution and to explore the ambiguity of
image similarity itself.
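To make the set-based design concrete, the following is a minimal sketch of a classifier that encodes several image segments with a shared CNN backbone and pools their features before prediction. The backbone, pooling operation, and class count are illustrative assumptions, not the architecture reported for EGRAPHSEN.

    # Minimal sketch of a network that classifies a *set* of image segments
    # rather than a single image: a shared CNN backbone encodes each segment,
    # per-segment features are mean-pooled, and a linear head predicts the painter.
    import torch
    import torch.nn as nn
    from torchvision import models

    class SetClassifier(nn.Module):
        def __init__(self, num_painters: int = 10):  # class count is arbitrary here
            super().__init__()
            self.backbone = models.resnet18(weights=None)  # shared encoder (illustrative choice)
            self.backbone.fc = nn.Identity()               # keep the 512-d features
            self.head = nn.Linear(512, num_painters)

        def forward(self, segments: torch.Tensor) -> torch.Tensor:
            # segments: (batch, n_segments, 3, H, W)
            b, n, c, h, w = segments.shape
            feats = self.backbone(segments.reshape(b * n, c, h, w))  # (b*n, 512)
            pooled = feats.view(b, n, -1).mean(dim=1)                # order-invariant pooling
            return self.head(pooled)                                 # (b, num_painters)

    # Two vases, four detail crops each, 224x224 RGB (dummy data).
    logits = SetClassifier()(torch.randn(2, 4, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 10])
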
[en] Keywords in Digital Humanities – A critical
assessment of computational techniques for mapping security and freedom in
historical debates
Tobias Blanke, University of Amsterdam; Chloe Papadopoulou, University of Amsterdam
Abstract
[en]
The article investigates the continued importance of keywords in digital
humanities and especially their relation to recent machine-learning approaches.
Different research practices related to digital humanities agree on the
importance of keywords to present issues and/or provide the baseline for new
stories about the past, about literature or media. Keywords are useful for targeting ideas that have no clear definitions and productive in describing contested categorisations that are key to humanities scholarship. At the same time, keywords are often employed without considering specific contexts and how they are generated. To understand the diversity of keywords in digital
humanities, we consider three approaches to computationally generating keywords,
from traditional and established ones to state-of-the-art language modelling.
With these three approaches, we analyse a case where keywords should be
especially powerful, as underlying considerations are uncertain. We cover the
relation between security, human rights, and freedom according to discussions in
United Nations documents. Finally, we present a number of approaches to
productively use keywords to tell a different story about the relation of
security and freedom according to UN discussions.
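As a point of reference for the “traditional and established” end of that spectrum, a simple TF-IDF keyword baseline can be sketched as follows. This is a generic illustration with placeholder documents, not one of the three approaches evaluated in the article.

    # Minimal sketch of a traditional keyword baseline: TF-IDF weighting over a
    # small collection, returning the top-weighted terms per document.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "placeholder text of a document discussing security and freedom",
        "placeholder text of a second document discussing human rights",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(docs)           # (n_docs, n_terms), sparse
    terms = vectorizer.get_feature_names_out()

    for i in range(tfidf.shape[0]):
        row = tfidf.getrow(i).toarray().ravel()
        top = row.argsort()[::-1][:5]                # five highest-weighted terms
        print(i, [terms[j] for j in top if row[j] > 0])
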
Reviews
[en] Gapping the Map: Indigenous Presence in Native Land
Digital
Anja Keil, The College of William & Mary
Abstract
[en]
Native Land Digital, an Indigenous-led mapping project, exemplifies the growth and
impact of mapping as a tool for data visualization. Its unique focus on absence,
rather than presence, offers opportunities for interactive humanities scholarship.
Native Land Digital (2014– ) was founded by Victor Temprano and its Executive
Director is Tanya Ruka (Māori): https://native-land.ca
[en] A Review of Teaching the Middle Ages Through Modern Games
Morgan Pearce, University of Lethbridge, Alberta, Canada; Davide Pafumi, University
of Lethbridge, Alberta, Canada & the Humanities Innovation Lab
Abstract
[en]
This review considers Houghton’s edited volume, Teaching the Middle Ages Through Modern Games (2022).
The collection explores how games — ranging from historical simulations to fantasy-inspired
narratives — can effectively be
used to teach various aspects of the Middle Ages. By analyzing the theoretical underpinnings,
pedagogical strategies,
and practical case studies presented in the volume, this review contributes to the
debate on how to innovate educational
practices as well as the discussion on the legitimacy of games within the context
of higher education.
[en] The Little Database: A Poetics of Media Formats
(2025)
Ben Kudler, The New School, New York University
Abstract
[en]
The Little Database is a book that merges media studies and literary studies to
survey a series of “little databases”, databases too small to be useful for
modern data analytics, but too large for a single person to consume. The work
covers Textz.com, Eclipse, PennSound, MutantSounds, MUPS, and UbuWeb in novel
and interesting ways previously unseen in either media or literary studies.
Author Biographies
URL: https://dhq.digitalhumanities.org/index.html
Comments: dhqinfo@digitalhumanities.org
Published by: The Alliance of Digital Humanities Organizations and The Association for Computers and the Humanities
Affiliated with: Digital Scholarship in the Humanities
DHQ has been made possible in part by the National Endowment for the Humanities.
© 2005–2025 DHQ

Unless otherwise noted, the DHQ web site and all DHQ published content are published under a Creative Commons Attribution-NoDerivatives 4.0 International License. Individual articles may carry a more permissive license, as described in the footer for the individual article, and in the article’s metadata.
