Jacob Pfau


contact: [first].pfau@gmail.com

PhD student at the NYU Alignment Research Group. Current research projects include:

  • studying the scaling properties of LM performance as a function of filler (i.e., repeated) tokens in prompts
  • latent adversarial training for improving the safety of LMs

I like to post about research on Twitter and LessWrong. I also like to create prediction markets, e.g., “Will an AI produce encyclopedia-worthy philosophy by 2026?” on Manifold, and “Will transformer-derived architectures accelerate progress in deep learning?” on Metaculus.


latest posts

selected publications