MinKyu Lee

I am a Ph.D. candidate in the Visual Computing Lab (VCLab) at Sungkyunkwan University, supervised by Prof. Jae-Pil Heo. I received my Master's and Bachelor's degrees from Sungkyunkwan University. My research interests include low-level vision, image restoration, and generative modeling.

Email: 2minkyulee@gmail.com (Primary Contact)  /  Google Scholar  /  GitHub  /  LinkedIn


Publications

SeaCache: Spectral-Evolution-Aware Cache for Accelerating Diffusion Models
Jiwoo Chung, Sangeek Hyun, MinKyu Lee, Byeongju Han, Geonho Cha, Dongyoon Wee, Youngjun Hong, Jae-Pil Heo†
CVPR, 2026
Paper (PDF) / arXiv / Code

A training-free, plug-and-play caching schedule that applies a timestep-dependent Spectral-Evolution-Aware (SEA) filter before measuring feature distances, so reuse decisions track content (signal) rather than high-frequency noise, improving latency–quality trade-offs.
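
To make the idea concrete, here is a minimal PyTorch sketch of the filter-then-compare step. The function names, the cutoff schedule, and the reuse threshold are illustrative assumptions, not the paper's exact design:

```python
import torch

def sea_lowpass(x, cutoff_ratio):
    # Keep only spatial frequencies within `cutoff_ratio` of the band center.
    f = torch.fft.fftshift(torch.fft.fft2(x.float()), dim=(-2, -1))
    h, w = x.shape[-2], x.shape[-1]
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, h, device=x.device),
        torch.linspace(-1, 1, w, device=x.device),
        indexing="ij",
    )
    mask = ((yy ** 2 + xx ** 2).sqrt() <= cutoff_ratio).float()
    return torch.fft.ifft2(torch.fft.ifftshift(f * mask, dim=(-2, -1))).real

def should_reuse(feat_prev, feat_curr, t, num_steps, threshold=0.05):
    # Assumed schedule: earlier (noisier) timesteps use a lower cutoff, so the
    # distance reflects slowly-evolving content rather than high-frequency noise.
    cutoff = 0.2 + 0.8 * (1.0 - t / num_steps)
    a = sea_lowpass(feat_prev, cutoff)
    b = sea_lowpass(feat_curr, cutoff)
    rel_dist = (a - b).abs().mean() / (a.abs().mean() + 1e-8)
    return rel_dist < threshold
```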

Progressive Distractor Filtering for Robust 3D Gaussian Splatting
Kangmin Seo, MinKyu Lee, Tae-Young Kim, ByeongCheol Lee, JoonSeoung An, Jae-Pil Heo†
CVPR Findings, 2026
Paper (PDF) / arXiv / Code

A two-phase 3DGS framework: a progressive-filtering phase iteratively identifies and masks view-inconsistent distractors via rendered-versus-training-view discrepancies, and a reconstruction phase then restores fine details from the purified representation. PDF-GS produces robust, distractor-free reconstructions with no inference-time overhead.
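
A stripped-down sketch of the progressive-filtering step, assuming a simple per-pixel L1 discrepancy rule and a fixed threshold (both illustrative, not the paper's exact criterion):

```python
import torch

def update_distractor_mask(rendered, target, mask, thresh=0.1):
    # Pixels whose rendered color disagrees with the training view are treated
    # as likely distractors and dropped from supervision; the mask only shrinks.
    err = (rendered - target).abs().mean(dim=0)   # (H, W) per-pixel L1
    return mask & (err < thresh)

def masked_photometric_loss(rendered, target, mask):
    # Only view-consistent (unmasked) pixels drive the 3DGS photometric loss.
    err = (rendered - target).abs().mean(dim=0)
    return (err * mask).sum() / mask.sum().clamp(min=1)
```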

Scalable GANs with Transformers
Sangeek Hyun, MinKyu Lee, Jae-Pil Heo†
arXiv, 2025
Project Page / arXiv

Pure Transformer GANs can be scaled up and, within only 40 epochs, beat diffusion/flow models at one-step conditional generation on ImageNet-256.

Analyzing the Training Dynamics of Image Restoration Transformers: A Revisit to Layer Normalization
MinKyu Lee, Sangeek Hyun, Woojin Jun, Hyunjun Kim, Jiwoo Chung, Jae-Pil Heo†
ICLR, 2026
OpenReview / arXiv / Code

This paper analyzes training instability in image restoration Transformers caused by conventional per-token LayerNorm, and proposes an IR-tailored normalization (i-LN) that stabilizes training and improves performance across restoration tasks.
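
One plausible form of such an IR-tailored normalization, sketched in PyTorch under the assumption that statistics are shared across all tokens and channels rather than computed per token (the exact axes and parameterization of i-LN may differ):

```python
import torch
import torch.nn as nn

class HolisticLayerNorm(nn.Module):
    # Hedged sketch: unlike per-token LayerNorm, which discards per-token
    # magnitude cues that matter for restoration, normalization statistics
    # here are shared over all spatial tokens and channels of an image.
    def __init__(self, channels, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(channels))
        self.bias = nn.Parameter(torch.zeros(channels))
        self.eps = eps

    def forward(self, x):  # x: (B, N_tokens, C)
        mu = x.mean(dim=(1, 2), keepdim=True)
        var = x.var(dim=(1, 2), keepdim=True, unbiased=False)
        return (x - mu) / (var + self.eps).sqrt() * self.weight + self.bias
```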

Diffusion Feature Field for Text-based 3D Editing with Gaussian Splatting
Eunseo Koh, Sangeek Hyun, MinKyu Lee, Jiwoo Chung, Kangmin Seo, Jae-Pil Heo†
NeurIPS, 2025
OpenReview

This paper proposes DFFSplat, which injects a 3D-consistent diffusion feature field into the 3D Gaussian Splatting editing pipeline to enforce multi-view consistency and mitigate view-inconsistent artifacts like the Janus problem in text-based 3D editing.
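
A hedged sketch of one way such feature-field supervision could look: a per-view cosine alignment loss between features rendered from the Gaussians and 2D diffusion features (the names and the loss form are assumptions, not DFFSplat's actual objective):

```python
import torch
import torch.nn.functional as F

def feature_field_alignment_loss(rendered_feat, diffusion_feat):
    # rendered_feat, diffusion_feat: (C, H, W) feature maps for one view.
    # Aligning rendered per-Gaussian features with diffusion features across
    # views is what would keep feature-driven edits multi-view consistent.
    rendered_feat = F.normalize(rendered_feat, dim=0)
    diffusion_feat = F.normalize(diffusion_feat, dim=0)
    return (1.0 - (rendered_feat * diffusion_feat).sum(dim=0)).mean()
```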

Fine-Tuning Visual Autoregressive Models for Subject-Driven Generation
Jiwoo Chung, Sangeek Hyun, Hyunjun Kim, Eunseo Koh, MinKyu Lee, Jae-Pil Heo†
ICCV, 2025
Qualcomm Innovation Fellowship Korea 2025 Finalist
Paper / arXiv / Code / Project

This paper introduces a fast and effective VAR-based method for subject-driven image generation, using selective and scale-wise weighted tuning to overcome fine-tuning challenges and outperform diffusion models.
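
A toy illustration of the scale-wise weighting, assuming per-scale token logits and targets; the selective parameter subset and the actual weighting scheme are the paper's contribution and are not reproduced here:

```python
import torch
import torch.nn.functional as F

def scale_weighted_ce(logits_per_scale, targets_per_scale, scale_weights):
    # Each scale's token cross-entropy is reweighted so the scales most
    # informative for subject identity dominate the fine-tuning update.
    total = 0.0
    for logits, target, w in zip(logits_per_scale, targets_per_scale, scale_weights):
        # logits: (..., vocab); target: matching integer token indices.
        total = total + w * F.cross_entropy(logits.flatten(0, -2), target.flatten())
    return total / sum(scale_weights)
```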

Auto-Encoded Supervision for Perceptual Image Super-Resolution
MinKyu Lee, Sangeek Hyun, Woojin Jun, Jae-Pil Heo†
CVPR, 2025
Paper / arXiv / Poster / Code

This paper proposes AESOP, a simple yet effective loss that replaces pixel-wise Lp loss with a distance in the autoencoder output space, enabling better reconstruction in perceptual super-resolution without sacrificing visual quality.
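
The loss itself is compact. A minimal sketch, assuming a frozen image-to-image autoencoder (the interface and names are assumptions):

```python
import torch
import torch.nn.functional as F

def aesop_style_loss(sr, hr, autoencoder):
    # Instead of a pixel-wise Lp distance between the super-resolved image
    # `sr` and the ground truth `hr`, measure the distance after mapping both
    # through a frozen autoencoder, so supervision focuses on the signal the
    # autoencoder reconstructs rather than on high-frequency residue.
    with torch.no_grad():
        target = autoencoder(hr)
    return F.l1_loss(autoencoder(sr), target)
```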

Noise-free Optimization in Early Training Steps for Image Super-Resolution
MinKyu Lee, Jae-Pil Heo†
AAAI, 2024
Paper / arXiv / Code

This paper shows that early-stage SISR training with pixel-wise losses is overly affected by "inherent noise" in HR targets, and proposes a noise-free optimization scheme that estimates the optimal HR centroid and optimizes toward it, improving training stability and final super-resolution quality.
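
An illustrative (not the paper's) realization of the idea: estimate the noise-free centroid by blending the HR target with a pretrained model's prediction, and optimize toward that estimate early in training. The blend rule and `alpha` are assumptions:

```python
import torch
import torch.nn.functional as F

def centroid_target(hr, lr, pretrained_sr, alpha):
    # Illustrative estimator: a pretrained prediction acts as an estimate of
    # the noise-free centroid of plausible HR targets, diluting the inherent
    # noise in the raw HR image.
    with torch.no_grad():
        return alpha * hr + (1.0 - alpha) * pretrained_sr(lr)

def early_training_loss(model, lr, hr, pretrained_sr, alpha=0.5):
    # `alpha` could be annealed toward 1 so training reverts to the raw HR
    # target in later stages.
    return F.l1_loss(model(lr), centroid_target(hr, lr, pretrained_sr, alpha))
```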


Website template.