About

I’m interested in how neural networks represent and process information (mechanistic interpretability).
Most recently, I was a founding research scientist at Goodfire, working on foundational questions about neural network representations. I’m currently taking some leave to explore what’s next.
I get a lot of joy from this work! It's intrinsically fascinating to me, and I also believe it matters a lot: interpretability is crucial for building AI systems we can trust and understand, and it can help us uncover new things about our world (e.g. novel scientific discoveries). Broadly, AI has overwhelming potential to shape an amazing future (if we get it right!).
Current Interests
- Feature geometry & manifolds.
- Barriers to scaling interpretability techniques.
- Reasoning-model & chain-of-thought (CoT) interpretability.
Background
I’m originally from Australia and moved to San Francisco at the end of 2022 for a startup I co-founded. I’m a med school dropout. Before moving to the US, my research broadly involved applying technical skills to biomedicine (e.g. computational neuroscience, computer vision for radiology).
Reach Out!
I love meeting people who share my curiosity about AI, interpretability, or just interesting ideas in general.
Email: liv[at]livgorton[dot]com
Twitter: DMs open!