## About Me

I am a postdoctoral researcher at the Sugiyama-Yokoya-Ishida Lab, The University of Tokyo. I have also worked at the Imperfect Information Learning Team at RIKEN AIP as a visiting scientist.

CV / Google Scholar / GitHub / Twitter

✉️: nagano@k.u-tokyo.ac.jp

## Research Interests

I am interested in *how models can learn useful representations from weak supervision or few inductive biases*. I would like to clarify what supervision and inductive biases are needed to acquire good representations, as well as the fundamental limits on the performance of representations that can be learned under these assumptions. My specific topics are as follows.

**Theoretical understanding of weakly supervised representation learning**: I am interested in the mechanisms by which semantically similar pairs of instances, such as those used in disentangled representation learning or contrastive learning (preprint), can be useful for downstream tasks.

**Generative model learning under structural assumptions**: I study generative models in situations where relationships between data points can be assumed, such as cluster structures (SciRep2020) and hierarchical structures (ICML2019), which are widely found in the real world.

## News

- Nov. 16, 2021. Our paper “Complex Energies of the Coherent Longitudinal Optical Phonon-plasmon Coupled Mode According to Dynamic Mode Decomposition Analysis” has been accepted by *Scientific Reports*.
- Oct. 6, 2021. Our preprint “Sharp Learning Bounds for Contrastive Unsupervised Representation Learning” is out.
- Jul. 26, 2021. Our paper “Statistical Mechanical Analysis of Catastrophic Forgetting in Continual Learning with Teacher and Student Networks” has been accepted by *JPSJ*.
- Apr. 10, 2021. Our paper “Analysis of Trainability of Gradient-based Multi-environment Learning from Gradient Norm Regularization Perspective” has been accepted by *IJCNN2021*.