Member of Kamitani Lab (Neuroinformatics group).
I am interested in how models can learn useful representations from weak supervision or few inductive biases. I would like to clarify what supervision and inductive biases are needed to acquire good representations, as well as the fundamental limits of the performance of representations that can be learned under these assumptions. The specific topics are as follows.
- Theoretical understanding of weakly supervised representation learning: I am interested in the mechanisms by which weak supervision signals, such as the semantically similar pairs of instances used in disentangled representation learning or contrastive learning (ICML2022), can be useful for downstream tasks.
- Generative model learning under structural assumptions: I study generative models for situations where relationships between data points can be assumed, such as cluster structures (SciRep2020) and hierarchical structures (ICML2019), which are widely found in the real world.
- Dec. 21, 2022: Our presentation at IBIS2022 received the presentation award!
- Oct. 1, 2022: I joined Kamitani Lab, Kyoto University.
- May 16, 2022: Our paper “On the Surrogate Gap between Contrastive and Supervised Losses” has been accepted by ICML2022.
- Nov. 16, 2021: Our paper “Complex Energies of the Coherent Longitudinal Optical Phonon-plasmon Coupled Mode According to Dynamic Mode Decomposition Analysis” has been accepted by Scientific Reports.
- Oct. 6, 2021: Our preprint “Sharp Learning Bounds for Contrastive Unsupervised Representation Learning” is out.
- Jul. 26, 2021: Our paper “Statistical Mechanical Analysis of Catastrophic Forgetting in Continual Learning with Teacher and Student Networks” has been accepted by JPSJ.
- Apr. 10, 2021: Our paper “Analysis of Trainability of Gradient-based Multi-environment Learning from Gradient Norm Regularization Perspective” has been accepted by IJCNN2021.
- 稲垣州都, 長野祥大, 牧野泰才, 篠田裕之. (2021) “Clustering Analysis of Individual Differences in the Tactile Perception of Texture Materials” (in Japanese), The 22nd SICE System Integration Division Annual Conference, Online (Dec. 15)