Namuk Park

Email: namuk.park@gmail.com, GitHub: @xxxnell, Twitter: @xxxnell, Google Scholar: scholar link, CV: cv link
I'm looking for an intern interested in 3D generative models for protein design. A background in biology, while advantageous, is not a requirement for this role! Please feel free to reach out.
The focus of my research has been on empirically analyzing the inductive biases of neural networks, leading to the design of more efficient and effective machine learning systems. More specifically, my research covers the following topics:
1. Vision Transformers: I have demonstrated that Transformers and convolutions have their own inductive biases through empirical analyses such as loss landscape visualization, Hessian eigenvalue spectra, and Fourier analysis.
2. Self-Supervised Learning: I have explored the distinct advantages and limitations of masked token modeling and contrastive learning, providing insights into their respective inductive biases.
3. Reliable Neural Networks: I have investigated how Bayesian probabilistic neural networks and ensemble methods can enhance the reliability and generalizability of real-world AI systems.
Building upon my previous work, I am now focusing on developing foundation models for 3D protein structures. I believe this research will not only deepen our scientific understanding of inductive biases in 3D spaces but also significantly impact real-world applications. These foundation models are poised to address a wide range of downstream tasks, including binding prediction and protein design.
Before joining Prescient Design, I worked at NAVER AI Lab as a visiting researcher. I received a Ph.D. in Computer Science from the School of Integrated Technology at Yonsei University in South Korea, and a B.S. in Physics from Yonsei University, graduating as the Valedictorian of the College of Sciences.

Publications

[4] Namuk Park, Wonjae Kim, Byeongho Heo, Taekyung Kim, and Sangdoo Yun. “What Do Self-Supervised Vision Transformers Learn?” ICLR 2023.
We show that (i) Contrastive Learning (CL) primarily captures global patterns compared with Masked Image Modeling (MIM), (ii) CL is more shape-oriented whereas MIM is more texture-oriented, and (iii) CL plays a key role in the later layers while MIM focuses on the early layers.
[3] Namuk Park and Songkuk Kim. “How Do Vision Transformers Work?” ICLR 2022. Spotlight. Zeta-Alpha’s Top 100 most cited AI papers for 2022. BenchCouncil’s Top 100 AI achievements from 2022 to 2023.
We show that the success of "multi-head self-attentions" (MSAs) lies in the "spatial smoothing" of feature maps, NOT in capturing long-range dependencies. In particular, we demonstrate that MSAs (i) flatten the loss landscapes, (ii) are low-pass filters, unlike Convs, and (iii) significantly improve accuracy when positioned at the end of a stage (not the end of a model). See also [2].
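For intuition, here is a minimal Python (PyTorch) sketch of the kind of Fourier check behind the "low-pass filter" claim; this is my own simplified illustration, not the paper's exact analysis, and the frequency cutoff and the blur stand-in are assumptions:

import torch

def high_frequency_ratio(x: torch.Tensor) -> float:
    # Fraction of Fourier amplitude in the high frequencies of a (B, C, H, W) feature map.
    amp = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1)).abs()
    H, W = x.shape[-2:]
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    radius2 = (yy - H // 2) ** 2 + (xx - W // 2) ** 2
    high = radius2 > (min(H, W) // 4) ** 2  # illustrative cutoff between low and high frequencies
    return (amp[..., high].sum() / amp.sum()).item()

x = torch.randn(1, 64, 32, 32)
blur = torch.nn.AvgPool2d(3, stride=1, padding=1)  # stand-in for a low-pass (MSA-like) block
print(high_frequency_ratio(x), high_frequency_ratio(blur(x)))  # the ratio should drop after the block

A block that attenuates high-frequency components in this sense behaves like a spatial low-pass filter.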
[2] Namuk Park and Songkuk Kim. “Blurs Behave Like Ensembles: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness.” ICML 2022. Winner of Qualcomm Innovative Fellowship South Korea.
We show that "spatial smoothing" (e.g., a simple blur filter) improves the accuracy, uncertainty, and robustness of CNNs, all at the same time. This is primarily due to that spatial smoothing flattens the loss landscapes by "spatially ensembling" neighboring feature maps of CNNs. See also [1].
[1] Namuk Park, Taekyu Lee, and Songkuk Kim. “Vector Quantized Bayesian Neural Network Inference for Data Streams.” AAAI 2021.
We show that "temporal smoothing" (i.e., moving average of recent predictions) significantly improves the computational performance of Bayesian NN inference without loss of accuracy by “temporally ensembling” the latest & previous predictions. To do so, we propose "ensembles for proximate data points", as an alternative theory to “ensembles for a single data point”—this theory is the foundation of [2] and [3].

Awards & Honors

“Top Reviewer” at NeurIPS 2023.
“Outstanding Thesis Award, Third prize”, Yonsei University, Jun 2022.
“Winner of Qualcomm Innovative Fellowship South Korea”, Qualcomm, Nov 2021.
“Research Grant Support for Ph.D. Students”, National Research Foundation of South Korea, Jun 2021 — Feb 2022.
“National Fellowship from Global Open Source Frontier”, NIPA (National IT Industry Promotion Agency of South Korea), Jun 2019 — Dec 2020.
“CJK (China–Japan–South Korea) OSS (Open Source/Software) Award”, The Organizing Committee of the CJK OSS Award, Nov 2019.
“OSS Competition, Honorable Mention”, NAVER Corporation, Feb 2019.
“OSS Challenge, First prize—the Award From the Minister of Science and ICT”, Nov 2018.
“OSS Competition (2nd phase), First prize”, NAVER Corporation, Aug 2018.
“OSS Competition (1st phase), Second prize”, NAVER Corporation, Feb 2018.
“National Ph.D. Full Ride Fellowship”, Institute for Information and Communications Technology Promotion of South Korea, Mar 2011 — Feb 2016.
“The Valedictorian of the College of Sciences”, Yonsei University, Feb 2011.
“Yonsei University Alumni Full Ride Scholarship for Undergraduate Students”, “GE Scholarship”, “National Scholarship for Science and Engineering”, and other merit-based scholarships, Sep 2008 — Feb 2011.

Talks

“How Do Vision Transformers Work?”, [2, 3]
Seminar at SeoulTech, Aug 2022
AI Seminar at UNIST, Mar 2022
Tech Talk at NAVER WEBTOON, Jan 2022
NAVER Tech Talk at NAVER Corporation, Dec 2021
“Uncertainty in AI: Deep Learning Is Not Good Enough for Safe AI”, [1]
Keras Korea Meetup at AI Yangjae Hub, Dec 2019
OSS Contribution Festival at NIPA, Dec 2019
South Korea-Uzbekistan SW Technology Seminar at NIPA & Tashkent University of Information Technologies, Oct 2019
“A Fast and Lightweight Probability Tool for AI in Scala”, [code]
North-East Asia OSS Forum at NIPA, Nov 2019
OSS Day (Keynote) at NIPA, Nov 2018
Scala Night Korea at Scala User Group Korea, Apr 2018