(Photo: Northeastern University)
March 3, 2020, 7:00 p.m.
Northeastern University, USA
Julia Flanders is a Professor of the Practice in the Department of English and the Director of the Digital Scholarship Group in the Northeastern University Library. She also directs the Women Writers Project and serves as editor-in-chief of Digital Humanities Quarterly, an open-access, peer-reviewed online journal of digital humanities. She has served as chair of the TEI Consortium and as President of the Association for Computers and the Humanities. She is the co-editor, with Fotis Jannidis, of The Shape of Data in Digital Humanities, and, with Neil Fraistat, of the Cambridge Companion to Textual Scholarship. Her research interests focus on data modeling, textual scholarship, humanities data curation, and the politics of digital scholarly work.
“From Modeling to Interpretation”
Scholarly modeling and interpretation are complementary elements of a shared social geometry. In shaping corpora, editions, archives, and data sets, the work of modeling is directed at producing convergence and legibility: the preconditions of interpretation. The authority of such modeling work (the consensus it mobilizes and formalizes) is founded in the shared literacies that also animate even the most contrarian interpretive acts. The interpretive agency of the scholarly individual draws its power from the same sources, and moves along the same intellectual vectors, as the shared agency of the standards organization, the committee, the disciplinary imaginary. Modeling in this curation-based mode is a world-making tool whose products are not only models but also guidelines and specifications, constraint systems and conversion pathways, all operating to make a world whose interpretive gestures have been anticipated and accommodated in advance.
The value of such curatorial work within academic digital humanities is considerable, but in the widening and socially urgent space of community-led archiving and public humanities research, these forms of power and agency need renewed scrutiny. The sponsoring, authoritative “we” of the information standard elides the very publics who most need recognition. “Our” models do not yet account for the forms of knowledge and interpretive work arising in those publics. The processes by which digital models are created and applied are hermetic and enmeshed in technical interdependencies. Can we imagine instead techniques, processes, and literacies that can support community-oriented and community-led modeling and interpretation for a new public digital humanities?
UC Santa Barbara, USA
Alan Liu is Professor in the English Department at the University of California, Santa Barbara, and an affiliated faculty member of UCSB’s Media Arts & Technology graduate program. Previously, he was on the faculty of Yale University’s English Department and British Studies Program.
His research began in the field of British romantic literature and art. His first book, Wordsworth: The Sense of History (Stanford UP, 1989), explored the relation between the imaginative experiences of literature and history. Theoretical essays in the 1990s then explored cultural criticism, the “new historicism,” and postmodernism in contemporary literary studies. In 1994, when he started his Voice of the Shuttle Web site for humanities research, he began to study information culture as a way to close the circuit between the literary or historical imagination and the technological imagination. Books published since then include The Laws of Cool: Knowledge Work and the Culture of Information (U. Chicago Press, 2004), Local Transcendence: Essays on Postmodern Historicism and the Database (U. Chicago Press, 2008), and Friending the Past: The Sense of History in the Digital Age (U. Chicago Press, 2018).
(Photo: Priscilla Leung, 2017)
March 6, 2020, 2:00 p.m.
„Humans in the Loop: Humanities Hermeneutics and Machine Learning“
As indicated by the emergent research fields of computational “interpretability” and “explainability,” machine learning creates fundamental hermeneutical problems. One of the least understood aspects of machine learning is how humans learn from machine learning. How does an individual, team, organization, or society “read” computational “distant reading” when it is performed by complex algorithms on immense datasets? Can methods of interpretation familiar to the humanities (e.g., traditional or poststructuralist ways of relating the general and the specific, the abstract and the concrete, the structure and the event, or the same and the different) be applied to machine learning? Further, can such traditions be applied with the explicitness, standardization, and reproducibility needed to engage meaningfully with the different Spielraum – scope for “play” (as in the “play of a rope,” “wiggle room”, or machine-part “tolerance”) – of computation? If so, how might that change the hermeneutics of the humanities themselves?
In his keynote lecture, Alan Liu uses the example of the formalized “interpretation protocol” for topic models he is developing for the Mellon Foundation-funded WhatEvery1Says project (which is text-analyzing millions of newspaper articles mentioning the humanities) to reflect on how humanistic traditions of interpretation can contribute to machine learning. But he also suggests how machine learning changes humanistic interpretation through fresh ideas about wholes and parts, mimetic representation and probabilistic modeling, and similarity and difference (or identity and culture).