이진화; Lee, Jin Hwa

Hi, I'm Jin, a 3rd-year PhD student at UCL, supervised by Andrew Saxe.

Broadly, my research focuses on understanding learning in neural networks and their intelligent behaviors. How do the structure of data and the inductive biases of models shape learning dynamics? In particular, how might the compositional structure of tasks (such as skill composition) or data (such as language) emerge from this interplay? I take multiple approaches, including theory based on simple tractable models and tools from statistical physics, as well as empirical studies of large language models.

Using these approaches, my current projects revolve around understanding the reasoning capabilities of large language models through the lens of compositionality.

Recently, I've become more committed to working on the pressing problems surrounding AI capability and alignment. I'm seeking mentorship and community in AI safety research, driven by the conviction that it is a critical challenge requiring action now, and that it is where my interests and skills can make a meaningful contribution.

You can reach me via: jin dot lee dot 22 at ucl dot ac dot uk

News

• I've been invited to give a talk at the COSYNE 2025 workshop on compositionality. See you in Mont-Tremblant! (Apr 2025)
• I've been invited to visit the UPF computational linguistics group led by Marco Baroni, digging deeper into linguistic compositionality in LLMs! (Feb 2025)
• I'm giving a tutorial on "Theoretical Advances in Continual Learning" at CoLLAs 2024 in Pisa. (July 2024)

Research

Distinct Computations Emerge From Compositional Curricula in In-Context Learning
JH. Lee, A. Lampinen, A. Singh* and A. Saxe*
Workshop on Spurious Correlation and Shortcut Learning, ICLR 2025
  • Training a transformer architecture to learn a compositional algorithmic task from scratch.
  • Curriculum-like data structure presented in context impacts the model's choice of strategy.
Geometric Signatures of Compositionality Across a Language Model's Lifetime
JH. Lee*, T. Jiralerspong*, L. Yu, Y. Bengio and E. Cheng
Under review
  • Analyzing geometric properties of hidden representations in LLMs throughout pretraining, and how the compositional structure of language is reflected in them and correlates with linguistic capability.
Range, not Independence, Drives Modularity in Biologically Inspired Representations
W. Dorrell*, K. Hsu*, L. Hollingsworth, JH. Lee, J. Wu, C. Finn, PE. Latham, T. Behrens and TEJ. Whittington
ICLR 2025
  • Deriving necessary and sufficient conditions on data statistics for modular representations to emerge under biological neural constraints.
Why Do Animals Need Shaping? A Theory of Task Composition and Curriculum Learning
JH. Lee, SS. Mannelli and A. Saxe
ICML 2024
  • Analytical study of the deterministic learning dynamics of compositional RL in a teacher-student setup.
Learnable latent embeddings for joint behavioural and neural analysis
S. Schneider*, JH. Lee* and MW. Mathis
Nature (2023)
  • A multimodal ML method inspired by contrastive learning and identifiability results from ICA, mapping high-dimensional neural and behavioral data.

Learn More

If you want to learn more about me, here is a two-page summary about me.

And More...

Believe it or not, I chose to live in London, where mountains are entirely absent, even though I love hiking and nature.

I'm on my way to becoming a climber (but still have a long way to go).

I play the piano, mostly classical pieces, but I'm too shy to post any videos of myself playing.

I'm a dog person, and I have a dog named Dongdong (it doesn't mean anything, but it sounds very cute in Korean).

I have opinions on a lot of things, but I try to keep them to in-person conversations.

I try not to take myself too seriously.

Design from Simple Minimalistic Academic Portfolio.