
Topology of large language models data representations

Speaker: Serguei Barannikov (BIMSA, IMJ-PRG)

Time: 14:00 - 16:00, 2024-12-05

Venue: A3-1-301

ZOOM: 230 432 7880

PW: BIMSA

Organizers: Mingming Sun, Yaqing Wang

Abstract

The rapid advancement of large language models (LLMs) has made distinguishing between human-written and AI-generated text increasingly challenging. This talk examines the topological structure of LLM data representations, focusing on its application to artificial text detection. We explore two primary methodologies:

1) Intrinsic dimensionality estimation. Human-written texts exhibit an average intrinsic dimensionality of around 9 for alphabet-based languages in RoBERTa representations, whereas AI-generated texts display values approximately 1.5 units lower. This gap has enabled the development of robust detectors that generalize across various domains and generation models.

2) Topological data analysis (TDA) of attention maps. By extracting interpretable topological features from the attention maps of transformer models, we capture structural nuances of texts. Similarly, TDA applied to speech attention maps and to embeddings from models such as HuBERT improves classification performance on several tasks.
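To make the notion of intrinsic dimensionality concrete, here is a minimal sketch of one standard estimator, the TwoNN maximum-likelihood estimator of Facco et al., applied to a point cloud of embeddings. The specific estimator used in the referenced work may differ; the function name and toy data below are illustrative only.

```python
import numpy as np

def two_nn_intrinsic_dimension(X):
    """TwoNN maximum-likelihood intrinsic dimension estimate.

    X: (n_points, n_features) array, e.g. sentence embeddings.
    Uses the ratio mu = r2/r1 of distances to the 2nd and 1st
    nearest neighbors; the MLE is d = N / sum(log mu).
    """
    # Pairwise Euclidean distances between all points
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore self-distances
    sorted_d = np.sort(d, axis=1)
    r1, r2 = sorted_d[:, 0], sorted_d[:, 1]
    mu = r2 / r1
    return len(X) / np.sum(np.log(mu))
```

For points sampled from a low-dimensional manifold embedded in a higher-dimensional space, the estimate recovers the manifold dimension rather than the ambient one, which is what makes the human-vs-AI dimensionality gap measurable in high-dimensional RoBERTa representations.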

These topological approaches provide a mathematical methodology for studying the geometric and structural properties of LLM data representations and their role in detecting AI-generated texts. The talk is based on the following works, carried out in collaboration with my PhD students E. Tulchinsky and K. Kuznetsov, and with other colleagues:

Intrinsic Dimension Estimation for Robust Detection of AI-Generated Texts, NeurIPS 2023;

Topological Data Analysis for Speech Processing, InterSpeech 2023;

Artificial Text Detection via Examining the Topology of Attention Maps, EMNLP 2021.

Speaker Intro

Prof. Serguei Barannikov earned his Ph.D. from UC Berkeley and has made contributions to algebraic topology, algebraic geometry, mathematical physics, and machine learning. His work, prior to his Ph.D., introduced canonical forms of filtered complexes, now known as persistence barcodes, which have become fundamental in topological data analysis. More recently, he has applied topological methods to machine learning, particularly in the study of large language models, with results published in leading ML conferences such as NeurIPS, ICML, and ICLR, effectively bridging pure mathematics and advanced AI research.
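For readers unfamiliar with persistence barcodes, the 0-dimensional case admits a short self-contained sketch: in a Vietoris-Rips filtration, each point starts as its own connected component (a bar born at scale 0), and a bar dies at the scale where its component merges into an older one. The function below (name and toy data are illustrative, not from the talk) computes this with a Kruskal-style union-find over a distance matrix.

```python
import numpy as np

def h0_barcode(dist):
    """0-dimensional persistence barcode of a Vietoris-Rips filtration.

    dist: symmetric (n, n) distance matrix.
    Returns a list of (birth, death) bars; the component that
    survives all merges gets an infinite bar.
    """
    n = dist.shape[0]
    parent = list(range(n))

    def find(i):
        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Process edges in order of increasing length (Kruskal-style)
    edges = sorted((dist[i, j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)  # one component dies at this scale
    # n-1 finite bars plus one infinite bar for the last component
    return [(0.0, d) for d in deaths] + [(0.0, np.inf)]
```

On two well-separated clusters, the barcode shows many short bars (intra-cluster merges) and one long finite bar whose death marks the inter-cluster distance; features of this kind, computed also for higher homology degrees, are what TDA extracts from attention maps.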

Date: December 3, 2024