I am pursuing my PhD in Computer Science at the University of Sheffield, supported by a fully funded UKRI scholarship.
Research: My PhD investigates privacy in deep learning, focusing on how language models inadvertently leak sensitive information. I combine mechanistic interpretability with privacy research to understand and mitigate these vulnerabilities.
Recent work: My EMNLP 2025 paper examines privacy leakage in abstractive summarisation. My EACL 2026 Findings paper, developed during a research visit with the CrySP group at the University of Waterloo, uses circuit discovery to understand and patch PII leakage in language models. I'm currently exploring how mechanistic interpretability can be extended to defend against a broader range of attack vectors.
Past: I completed my MSc in Computational Linguistics (Distinction) at the University of Wolverhampton and my undergraduate degree in Computer Science (First-Class Honours) at Nottingham Trent University. Before returning to academia, I spent 10 years in industry as a software engineer, progressing to Lead Software Engineer and NLP Engineer roles.
Selected Publications in Privacy
Selected Publications in Health/Medicine
Research Experience
Collaborators/Mentors: Vasisht Duddu, N. Asokan. Research on mechanistic approaches to understanding PII leakage in language models.
Industry Experience
Led development of NLP-based products, including an automated text classification SaaS, data visualisation tools, and language-model integrations for data platforms. Built production systems serving FTSE-listed clients.
Software Engineer building services that used linked data to break down data silos.
Worked on a new digital platform built around semantic web technologies.