Thursday, March 7, 2024 3:30pm to 4:30pm
About this Event
3620 South Vermont Avenue, Los Angeles, CA 90089
Jordan Ellenberg, University of Wisconsin
[In Person]
OR
Zoom link: https://usc.zoom.us/j/91543013145?pwd=aW1iZERFV2JYT3YwUUprTUZOcHBnQT09&from=addon
Meeting ID: 915 4301 3145
Passcode: 305741
Abstract: I spent a portion of 2023 working with a team at DeepMind on the “cap set problem” – how large can a subset of (Z/3Z)^n be which contains no three distinct terms that sum to zero? (I will explain, for those not familiar with this problem, something about the role it plays in combinatorics, its history, and why number theorists care about it a lot.) By now, there are many examples of machine learning mechanisms being used to help generate interesting mathematical knowledge, and especially interesting examples. This project used a novel protocol; instead of searching directly for large cap sets, we used LLMs trained on code to search the space of short programs for those which, when executed, output large cap sets. One advantage is that a program is much more human-readable than a large collection of vectors over Z/3Z, bringing us closer to the not-very-well-defined-but-important goal of “interpretable machine learning.” I’ll talk about what succeeded in this project (more than I expected!), what didn’t, and what role I can imagine this approach to the math-ML interface playing in near-future mathematical practice.
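As a concrete illustration of the defining condition (not part of the talk or the DeepMind project), here is a minimal Python sketch that checks whether a collection of vectors in (Z/3Z)^n is a cap set, i.e. contains no three distinct vectors summing to zero coordinate-wise mod 3. The function name and example vectors are illustrative choices, not from the source.

```python
from itertools import combinations

def is_cap_set(vectors, n):
    """Return True if no three distinct vectors in (Z/3Z)^n sum to zero mod 3."""
    for a, b, c in combinations(vectors, 3):
        if all((a[i] + b[i] + c[i]) % 3 == 0 for i in range(n)):
            return False
    return True

# A cap set of size 4 in (Z/3Z)^2 (the maximum possible for n = 2):
cap = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(is_cap_set(cap, 2))             # True
# Adding (2, 2) creates the zero-sum triple (0,0) + (1,1) + (2,2):
print(is_cap_set(cap + [(2, 2)], 2))  # False
```

This brute-force check is cubic in the number of vectors, so it only illustrates the definition; the interest of the problem is in how the maximal size of such a set grows with n.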