이진화; Lee, Jin Hwa

Hi, I'm Jin, a third-year PhD student at UCL, supervised by Andrew Saxe.

Broadly, my research focuses on understanding how the structure of data and the inductive biases of models shape learning. I believe that a fundamental scientific understanding of learning is essential for explaining the surprising capabilities of current artificial intelligence systems, including language production and reasoning, and, ultimately, for controlling them in reliable and efficient applications.

My work blends theory and controlled experiments on tractable toy models with empirical studies of models at scale. Through this approach, my current projects aim to understand how certain properties of natural data interact with the learning and generalization behavior of neural network models. In particular, I am interested in how various aspects of compositionality might emerge from this interplay.

You can reach me via: jin dot lee dot 22 at ucl dot ac dot uk

News

• I was awarded a Q3 Pivotal Research Fellowship for AI safety. (June 2025)
• I was invited to give a talk at the COSYNE 2025 workshop on compositionality. See you in Mont-Tremblant! (Apr 2025)
• I was invited to visit the UPF computational linguistics group led by Marco Baroni, digging up more linguistic compositionality in LLMs! (Feb 2025)
• I gave a tutorial on "Theoretical Advances in Continual Learning" at CoLLAs 2024 in Pisa. (July 2024)

Research

Distinct Computations Emerge From Compositional Curricula in In-Context Learning
JH. Lee, A. Lampinen, A. Singh* and A. Saxe*
Presented at the Workshop on Spurious Correlation and Shortcut Learning, ICLR 2025; currently under conference review.
  • A demonstration of how curriculum-like data structures, richly present in natural language corpora, can influence models' in-context solution strategies on compositional tasks.
Geometric Signatures of Compositionality Across a Language Model's Lifetime
JH. Lee*, T. Jiralerspong*, L. Yu, Y. Bengio and E. Cheng
Accepted at ACL 2025 Main Conference
  • Analyzing the geometric properties of hidden representations in LLMs throughout pretraining, and how the compositional structure of language is reflected in them and correlates with the emergence of linguistic capability.
Range, not Independence, Drives Modularity in Biologically Inspired Representations
W. Dorrell*, K. Hsu*, L. Hollingsworth, JH. Lee, J. Wu, C. Finn, PE. Latham, T. Behrens and TEJ. Whittington
ICLR 2025
  • Deriving necessary and sufficient conditions on sample data statistics for obtaining modular representations under biological neural constraints.
Why Do Animals Need Shaping? A Theory of Task Composition and Curriculum Learning
JH. Lee, SS. Mannelli and A. Saxe
ICML 2024
  • An analytical study of the deterministic learning dynamics of compositional RL in a teacher-student setup.
Learnable latent embeddings for joint behavioural and neural analysis
S. Schneider*, JH. Lee* and MW. Mathis
Nature (2023)
  • Contrastive learning and identifiability in an ICA-inspired multimodal ML method for mapping high-dimensional neural and behavioral data.

Learn More

If you want to learn more about me, here is a two-page summary of myself.

And More...

Believe it or not, I chose to live in London, where there is an absolute absence of mountains, even though I love hiking and nature.

I'm on my way to becoming a climber (but I still have a long way to go).

I play the piano, mostly classical pieces, but I am too shy to post any videos of myself playing.

I'm a dog person, and I have a dog named Dongdong (it means nothing but sounds very cute in Korean).

I have opinions on a lot of things, but I try to keep them to in-person conversations.

I try not to take myself too seriously.

Design from Simple Minimalistic Academic Portfolio.