Research

My research combines machine learning with traditional literary scholarship and has focused on metric learning, adapting large language models to historical domains, and analyzing orthographic variation in literary texts. More broadly, I am interested in topics at the nexus of literary scholarship, linguistics, and computational language modeling, particularly literary theory, semantics, pragmatics, and artificial neural networks.

Recent publications are listed below:

Examining Language Modeling Assumptions Using an Annotated Literary Dialect Corpus

arXiv · Accepted to NLP4DH 2024 @ EMNLP

Pairing Orthographically Variant Literary Words to Standard Equivalents Using Neural Edit Distance Models

arXiv · ACL Anthology · Presented at LaTeCH-CLFL 2024