{"id":2234,"date":"2023-09-23T14:44:44","date_gmt":"2023-09-23T05:44:44","guid":{"rendered":"https:\/\/vds.sogang.ac.kr\/?p=2234"},"modified":"2026-04-05T16:29:04","modified_gmt":"2026-04-05T07:29:04","slug":"publication","status":"publish","type":"post","link":"https:\/\/vds.sogang.ac.kr\/?p=2234","title":{"rendered":"International Conference"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<p><\/p>\n<\/div>\n<\/div>\n\n\n\n<h1 class=\"wp-block-heading\">2026<\/h1>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>1. <em>Dual Anchors, Do It Better<\/em>: Hierarchical Group Merging for Zero-shot Anomaly Detection<\/p>\n<cite>J. Roh*, D. Kim*, and S.-J. Kang, <em>IEEE\/CVF Conference on Computer Vision and Pattern Recognition Findings (CVPRF)<\/em>, June. 
2026.<br>ACK: IITP-2026-RS-2023-00260091, RS-2025-16066849, RS-2025-02263706<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"587\" data-id=\"5018\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/jimin_paper-1024x587.png\" alt=\"\" class=\"wp-image-5018\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/jimin_paper-1024x587.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/jimin_paper-300x172.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/jimin_paper-768x440.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/jimin_paper.png 1256w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>2. WIMFRIS: WIndow Mamba Fusion and Parameter Efficient Tuning for Referring Image Segmentation<\/p>\n<cite>S. Moon*, H. Yu*, H. 
Lee*, and S.-J. Kang, <em>The Fourteenth International Conference on Learning Representations (ICLR)<\/em>, Apr. 2026.<br>ACK: IITP-2026-RS-2023-00260091, RS-2024-00414230, RS-2025-02263706, IO251218<br>14799-01, 202512025.01, RS-2022-00143911<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"434\" data-id=\"4954\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/main_figure13-1024x434.png\" alt=\"\" class=\"wp-image-4954\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/main_figure13-1024x434.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/main_figure13-300x127.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/main_figure13-768x325.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/main_figure13-1536x651.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/main_figure13.png 1959w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>3. 
Enhancing Perceptual Quality on High-Resolution Displays: A Unified Deep Model for Super-Resolution and Deblurring<\/p>\n<cite>J. Choi*, M. Zinke*, J. Yang*, H. Cho, H. Choi, S. Lee, and S.-J. Kang*, <em>SID Symposium Digest of Technical Papers<\/em>, May 2026.<br>ACK: Samsung Display, IITP-2026-RS-2023-00260091, RS-2025-16066849, RS-2024-00414230<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"378\" data-id=\"4921\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Figure-3-1024x378.png\" alt=\"\" class=\"wp-image-4921\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Figure-3-1024x378.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Figure-3-300x111.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Figure-3-768x284.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Figure-3-1536x568.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Figure-3-2048x757.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow 
wp-block-quote-is-layout-flow\">\n<p>4. Toward Robust Anomaly Detection for Real-World Display Inspection<\/p>\n<cite>J. Lee and S.-J. Kang*, <em><em><em>SID Symposium Digest of Technical Papers<\/em><\/em><\/em>, May. 2026.<br>ACK:  IITP-2026-RS-2023-00260091, RS-2025-16066849, RS-2024-00414230<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"186\" data-id=\"5005\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Fig1-1-1024x186.png\" alt=\"\" class=\"wp-image-5005\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Fig1-1-1024x186.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Fig1-1-300x55.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Fig1-1-768x140.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Fig1-1-1536x279.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Fig1-1-2048x372.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>5. 
Enhancing Reverse Distillation with Core Exemplar Learning for Unified Multi-Class Anomaly Detection<\/p>\n<cite>H. Lim*, M.-S. Kim*, H.-B. Lee*, S.-J. Kang*, K.-W. Chon*, and H. Lee, <em>IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV)<\/em>, Mar. 2026.<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-5 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"464\" data-id=\"4873\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uad50\uc218\ub2d8-wacv-pipeline-1024x464.png\" alt=\"\" class=\"wp-image-4873\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uad50\uc218\ub2d8-wacv-pipeline-1024x464.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uad50\uc218\ub2d8-wacv-pipeline-300x136.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uad50\uc218\ub2d8-wacv-pipeline-768x348.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uad50\uc218\ub2d8-wacv-pipeline.png 1537w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>6. 
Infrared Monocular Depth Estimation via Parameter-Efficient Adaptation<\/p>\n<cite>J. H. Pyun, S. H. Moon,  Y. S. Jeong, and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 2026.<br>ACK: IITP-2026-RS-2023-00260091, RS-2024-00414230, RS-2025-16066849<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-6 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"549\" data-id=\"4897\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Infrared_Monocular_Depth_Estimation_via_Parameter_Efficient_Adaptation-1024x549.png\" alt=\"\" class=\"wp-image-4897\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Infrared_Monocular_Depth_Estimation_via_Parameter_Efficient_Adaptation-1024x549.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Infrared_Monocular_Depth_Estimation_via_Parameter_Efficient_Adaptation-300x161.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Infrared_Monocular_Depth_Estimation_via_Parameter_Efficient_Adaptation-768x411.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Infrared_Monocular_Depth_Estimation_via_Parameter_Efficient_Adaptation-1536x823.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Infrared_Monocular_Depth_Estimation_via_Parameter_Efficient_Adaptation.png 1941w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column 
is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>7. Hybrid Prompt Context Learning for Balanced Adaptation of Vision Language Models<\/p>\n<cite>S. W. Jang and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 2026.<br>ACK: IITP-2026-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-7 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"323\" data-id=\"4938\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-9-1024x323.png\" alt=\"\" class=\"wp-image-4938\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-9-1024x323.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-9-300x95.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-9-768x243.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-9-1536x485.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-9-2048x647.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow 
wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>8. View Fusion Based Zero-shot Multi-view Anomaly Detection<\/p>\n<cite>G. W. Kim*, J. M. Roh*, J. J. Yoon, and S.-J. Kang*, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 2026.<br>ACK: IITP-2025-RS-202300260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-8 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"362\" data-id=\"4962\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ud654\uba74-\ucea1\ucc98-2026-02-24-140037-1024x362.png\" alt=\"\" class=\"wp-image-4962\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ud654\uba74-\ucea1\ucc98-2026-02-24-140037-1024x362.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ud654\uba74-\ucea1\ucc98-2026-02-24-140037-300x106.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ud654\uba74-\ucea1\ucc98-2026-02-24-140037-768x272.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ud654\uba74-\ucea1\ucc98-2026-02-24-140037-1536x543.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ud654\uba74-\ucea1\ucc98-2026-02-24-140037.png 1566w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 
wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>9. Enhancing Illumination Robustness in Unified Anomaly Detection via Multi-Brightness Training<\/p>\n<cite>G. W. Kim and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 2026.<br>ACK: IITP-2026-RS-202300260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-9 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"711\" height=\"138\" data-id=\"4961\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ud654\uba74-\ucea1\ucc98-2026-02-24-135704.png\" alt=\"\" class=\"wp-image-4961\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ud654\uba74-\ucea1\ucc98-2026-02-24-135704.png 711w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ud654\uba74-\ucea1\ucc98-2026-02-24-135704-300x58.png 300w\" sizes=\"auto, (max-width: 711px) 100vw, 711px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 
wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>10. Multiview tile-image rendering from 3D Gaussians for 1D camera array<\/p>\n<cite>J. S. Kim, J. H. Pyun, K. J. Yun, S.-J. Kang, and H. G. Choo <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 2026.<br>ACK:<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-10 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2469\" height=\"696\" data-id=\"4966\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/gaussian.png\" alt=\"\" class=\"wp-image-4966\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/gaussian.png 2469w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/gaussian-300x85.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/gaussian-1024x289.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/gaussian-768x216.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/gaussian-1536x433.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/gaussian-2048x577.png 2048w\" sizes=\"auto, (max-width: 2469px) 100vw, 2469px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 
wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>11. Mask-Region Landmark Prediction in 3D Masked Face Point Clouds Using Non-Mask Facial Landmarks<\/p>\n<cite>J. W. Park, J. H. Pyun, Y. S. Jeong, S.-J. Kang, and S. I. Cho <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 2026.<br>ACK:<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-11 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1786\" height=\"678\" data-id=\"4967\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/mask-1.png\" alt=\"\" class=\"wp-image-4967\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/mask-1.png 1786w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/mask-1-300x114.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/mask-1-1024x389.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/mask-1-768x292.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/mask-1-1536x583.png 1536w\" sizes=\"auto, (max-width: 1786px) 100vw, 1786px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<h1 class=\"wp-block-heading\">2025<\/h1>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote 
class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<ol class=\"wp-block-list\">\n<li>Replace-in-Ego: Text-Guided Object Replacement in Egocentric Hand-Object Interaction<\/li>\n<\/ol>\n<cite>M. Song, J. Park, and S.-J. Kang, Observing and Understanding Hands in Action in conjunction with ICCV 2025, Oct. 2025.<br>ACK: RS-2024-00414230, IITP-2025-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-12 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"239\" data-id=\"4690\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig1-1024x239.png\" alt=\"\" class=\"wp-image-4690\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig1-1024x239.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig1-300x70.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig1-768x179.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig1.png 1185w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>2. 
Corner Point-based Calibration Technique for QR Code Readers in Robot Localization<\/p>\n<cite>M. Song, Y. Im, S. Jang, S. Kim, and S.-J. Kang, <em>The 10th International Conference on Consumer Electronics (ICCE) Asia<\/em>, Oct. 2025.<br>ACK: IO250307-12263-01<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-13 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"869\" height=\"451\" data-id=\"4703\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc2-2.png\" alt=\"\" class=\"wp-image-4703\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc2-2.png 869w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc2-2-300x156.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc2-2-768x399.png 768w\" sizes=\"auto, (max-width: 869px) 100vw, 869px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>3. QSCA: Quantization with Self-Compensating Auxiliary for Monocular Depth Estimation<\/p>\n<cite>J. Yang, J. Choi, M. Zinke, and S.-J. 
Kang, <em>The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS)<\/em>, Dec. 2025.<br>ACK: IITP-2025-RS-2023-00260091, RS-2025-02263706, RS-2025-16066849<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-14 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1300\" height=\"589\" data-id=\"4619\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/4-1024x464.png\" alt=\"\" class=\"wp-image-4619\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/4-1024x464.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/4-300x136.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/4-768x348.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/4.png 1300w\" sizes=\"auto, (max-width: 1300px) 100vw, 1300px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>4. Extending gem5 with a Developer-Centric IDE for SoC Architecture Simulation<\/p>\n<cite>H. Shin, D. Kim, H. S. Song, and W. 
Kim, <em>IEEE International Conference on Consumer Electronics &#8211; Berlin (ICCE-Berlin)<\/em>, Sep. 2025.<br>ACK: RS-2024-00487113<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-15 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"351\" data-id=\"4614\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-4-2-1-1024x351.png\" alt=\"\" class=\"wp-image-4614\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-4-2-1-1024x351.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-4-2-1-300x103.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-4-2-1-768x263.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-4-2-1-1536x526.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-4-2-1-2048x702.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>5. FCM: Feature Concatenation Module for Anomaly Detection in High-Resolution Images<\/p>\n<cite>T. W. 
Kim, H. E. Kim,  S. G. Kim, and S.-J. Kang, <em>The 25th International Meeting on Information Display (IMID)<\/em>, Aug. 2025.<br>ACK: IITP-2025-RS-2023-00260091, Samsung Display<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-16 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"442\" data-id=\"4586\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/3-1-1024x442.png\" alt=\"\" class=\"wp-image-4586\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/3-1-1024x442.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/3-1-300x129.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/3-1-768x331.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/3-1.png 1027w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>6. On-device Monocular Depth Estimation Network Optimization<\/p>\n<cite>M. Zinke, J. C. Yang,  J. M. Choi, S. G. Lee, H. Y. Choi, H. U. Cho, and S.-J. 
Kang, <em>The 25th International Meeting on Information Display (IMID)<\/em>, Aug. 2025.<br>ACK: IITP-2025-RS-2023-00260091, Samsung Display<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-17 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"439\" data-id=\"4585\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/2-1-1024x439.png\" alt=\"\" class=\"wp-image-4585\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/2-1-1024x439.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/2-1-300x128.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/2-1-768x329.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/2-1.png 1345w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>7. Post-Training Quantization for Depth Anything<\/p>\n<cite>J. C. Yang, M. Zinke, J. M. Choi, S. G. Lee, H. Y. Choi, H. U. Cho, and S.-J. Kang, <em>The 25th International Meeting on Information Display (IMID)<\/em>, Aug. 
2025.<br>ACK: IITP-2025-RS-2023-00260091, Samsung Display<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-18 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"966\" height=\"421\" data-id=\"4584\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/1-2.png\" alt=\"\" class=\"wp-image-4584\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/1-2.png 966w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/1-2-300x131.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/1-2-768x335.png 768w\" sizes=\"auto, (max-width: 966px) 100vw, 966px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>8. Towards Brightness-Robust Unified Anomaly Detection: Training with Multi-Brightness Data<\/p>\n<cite>G. W. Kim, J. M. Roh, J. H. Lee, and S.-J. Kang, <em>International Technical Conference on Circuits\/Systems, Computers and Communications<\/em> <em>(ITC-CSCC)<\/em>, July 
2025.<br><em>ACK: <\/em>2021M3H2A1038042, IITP-2025-RS-2023-00260091<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-19 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"339\" data-id=\"4436\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/20250602_143959-1024x339.png\" alt=\"\" class=\"wp-image-4436\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/20250602_143959-1024x339.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/20250602_143959-300x99.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/20250602_143959-768x255.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/20250602_143959.png 1074w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>9. Enhanced Illumination-Reflectance Decomposition for Image Enhancement Based on Retinexformer<\/p>\n<cite>S. H. Kim, and S.-J. Kang, <em>International Technical Conference on Circuits\/Systems, Computers and Communications<\/em> <em>(ITC-CSCC)<\/em>, July 
2025.<br><em>ACK: IITP-2025-RS-2023-00260091, RS-2024-00414230<\/em>, IO201218-08232-01<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-20 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"289\" data-id=\"4432\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-13-1024x289.png\" alt=\"\" class=\"wp-image-4432\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-13-1024x289.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-13-300x85.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-13-768x217.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-13.png 1391w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>10. Leveraging Prompt Learning for Robust Transfer in Long-tailed Distributions<\/p>\n<cite>S. W. Jang, and S.-J. Kang, <em>International Technical Conference on Circuits\/Systems, Computers and Communications<\/em> <em>(ITC-CSCC)<\/em>, July 
2025.<br><em>ACK: IITP-2025-RS-2023-00260091, RS-2024-00414230<\/em><br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-21 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"475\" data-id=\"4422\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-8-1024x475.png\" alt=\"\" class=\"wp-image-4422\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-8-1024x475.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-8-300x139.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-8-768x356.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-8-1536x713.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-8-2048x950.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>11. Image Denoising Meets Quantization: Exploring the Effects of Post-Training Quantization<\/p>\n<cite>J. M. Choi, J. C. Yang, M. Zinke, and S.-J. Kang, 
<em>International Technical Conference on Circuits\/Systems, Computers and Communications<\/em> <em>(ITC-CSCC)<\/em>, July 2025.<br><em>ACK: IO201218-08232-01, IITP-2025-RS-2023-00260091, RS-2024-00414230<\/em><br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-22 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"230\" data-id=\"4416\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-7-1024x230.png\" alt=\"\" class=\"wp-image-4416\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-7-1024x230.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-7-300x68.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-7-768x173.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-7-1536x346.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-7-2048x461.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>12. 
Depth-Only Human Pose Estimation<\/p>\n<cite>J. H. Pyun, H. Oh, S. Y. Yun, H. J. Kang, and S.-J. Kang, <em>International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, July 2025.<br>ACK: RS-2024-0046774, IITP-2025-RS-2023-00260091<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-23 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"488\" data-id=\"4407\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-12-1024x488.png\" alt=\"\" class=\"wp-image-4407\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-12-1024x488.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-12-300x143.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-12-768x366.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-12-1536x731.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-12.png 1766w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow 
\n<p>13.">
wp-block-quote-is-layout-flow\">\n<p>13. Deformation-Aware Luminance Compensation using Gaussian-Weighted Kernels for Stretchable Displays<\/p>\n<cite>Y.-I. Park, and S.-J. Kang, <em>SID Symposium Digest of Technical Papers<\/em>, May 2025.<br>ACK: IITP-2025-RS-2023-00260091, RS-2024-00414230, 2020M3H4A1A02084899, 202412001.01<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-24 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"497\" data-id=\"4105\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/sim-1024x497.png\" alt=\"\" class=\"wp-image-4105\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/sim-1024x497.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/sim-300x146.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/sim-768x373.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/sim-1536x746.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/sim.png 1587w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>14. 
Financial Data Generation Utilizing Graph Information from Transaction Networks<\/p>\n<cite>K. G. Kim, S. K. Lim, S. H. Choi, and S.-J. Kang, <em>IEEE International Conference on Big Data and Smart Computing (BigComp)<\/em>, Feb. 2025.<br>ACK: IITP-2025-RS-2023-00260091, IITP-RS-2022-00156318<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-25 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"282\" data-id=\"4404\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uc2a4\ud06c\ub9b0\uc0f7-2025-06-02-104442-1024x282.png\" alt=\"\" class=\"wp-image-4404\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uc2a4\ud06c\ub9b0\uc0f7-2025-06-02-104442-1024x282.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uc2a4\ud06c\ub9b0\uc0f7-2025-06-02-104442-300x83.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uc2a4\ud06c\ub9b0\uc0f7-2025-06-02-104442-768x212.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uc2a4\ud06c\ub9b0\uc0f7-2025-06-02-104442.png 1296w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow 
\n<p>15.">
wp-block-quote-is-layout-flow\">\n<p>15. 3D Gaussian SLAM With DoG Mapping<\/p>\n<cite>E. H. Shin, and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 2025.<br>ACK: IITP-2025-RS-2023-00260091, RS-2024-00414230<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-26 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"274\" data-id=\"4272\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-6-1024x274.png\" alt=\"\" class=\"wp-image-4272\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-6-1024x274.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-6-300x80.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-6-768x205.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-6-1536x411.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc1-6-2048x548.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>16. 
Enhanced Human Pose Retargeting through Video Frame Deblurring <mark style=\"background-color: rgba(0, 0, 0, 0); color: #ea210b;\" class=\"has-inline-color\">(Best Paper Award)<\/mark><\/p>\n<cite>S. W. Ahn, and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 2025.<br>ACK: IITP-2025-RS-2023-00260091, RS-2024-00414230<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-27 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" data-id=\"4085\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-294-1-1-1024x341.png\" alt=\"\" class=\"wp-image-4085\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-294-1-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-294-1-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-294-1-1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/Group-294-1-1.png 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<h1 class=\"wp-block-heading\">2024<\/h1>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 
\n<blockquote">
wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>1. Embedding-Free Transformer with Inference Spatial Reduction for Efficient Semantic Segmentation (<a href=\"https:\/\/github.com\/hyunwoo137\/EDAFormer\" class=\"external\" rel=\"nofollow\">Project Page<\/a>)<\/p>\n<cite>H. W. Yu, Y. B. Cho, B. W. Kang, S. H. Moon, K. Kong, and S.-J. Kang, <em>Proceedings of the European Conference on Computer Vision (ECCV)<\/em>, Oct. 2024.<br>ACK: RS-2024-00414230, IITP-2024-RS-2023-00260091, KSC-2023-CRE-0444<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-28 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"306\" data-id=\"3552\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/\uc2a4\ud06c\ub9b0\uc0f7-2024-07-05-102053-1024x306.png\" alt=\"\" class=\"wp-image-3552\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/\uc2a4\ud06c\ub9b0\uc0f7-2024-07-05-102053-1024x306.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/\uc2a4\ud06c\ub9b0\uc0f7-2024-07-05-102053-300x90.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/\uc2a4\ud06c\ub9b0\uc0f7-2024-07-05-102053-768x230.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/\uc2a4\ud06c\ub9b0\uc0f7-2024-07-05-102053-1536x459.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/\uc2a4\ud06c\ub9b0\uc0f7-2024-07-05-102053.png 1997w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex 
wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>2. AttentionHand: Text-driven Controllable Hand Image Generation for 3D Hand Reconstruction in the Wild <mark style=\"background-color: rgba(0, 0, 0, 0); color: #ea210b;\" class=\"has-inline-color\">(Oral Presentation)<\/mark> (<a href=\"https:\/\/arxiv.org\/abs\/2407.18034\" class=\"external\" rel=\"nofollow\">arXiv<\/a>) (<a href=\"https:\/\/redorangeyellowy.github.io\/AttentionHand\/\" class=\"external\" rel=\"nofollow\">Project Page<\/a>)<\/p>\n<cite>J. Park*, K. Kong*, and S.-J. Kang, <em>Proceedings of the European Conference on Computer Vision (ECCV)<\/em>, Oct. 2024.<br>ACK: IITP-2024-RS-2023-00260091, P0020535, KSC-2023-CRE-0444<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-29 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"386\" data-id=\"3569\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/Capture-2024-07-05-20-16-50-1-1024x386.png\" alt=\"\" class=\"wp-image-3569\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/Capture-2024-07-05-20-16-50-1-1024x386.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/Capture-2024-07-05-20-16-50-1-300x113.png 300w, 
https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/Capture-2024-07-05-20-16-50-1-768x290.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/Capture-2024-07-05-20-16-50-1-1536x579.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/07\/Capture-2024-07-05-20-16-50-1-2048x772.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>3. Interactive 3D Room Generation for Virtual Reality via Compositional Programming <mark style=\"background-color: rgba(0, 0, 0, 0); color: #ea210b;\" class=\"has-inline-color\">(Oral Presentation)<\/mark><\/p>\n<cite>J. Kim*, J. Park*, K. Kong*, and S.-J. 
Kang, <em>European Conference on Computer Vision<\/em> Workshop (<em>3rd Computer Vision for Metaverse Workshop<\/em>), Oct. 2024.<br>ACK: IITP-2025-RS-2023-00260091, IITP-RS-2022-00156318, RS-2024-00414230<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-30 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"467\" data-id=\"4158\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/1.png\" alt=\"\" class=\"wp-image-4158\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/1.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/1-300x137.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/1-768x350.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>4. Diffusion-based Interacting Hand Pose Transfer<\/p>\n<cite>J. Park*, Y. Hwang*, and S.-J. 
Kang, <em>European Conference on Computer Vision Workshop (8th Workshop on Observing and Understanding Hands in Action)<\/em>, Oct. 2024.<br>ACK: x<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-31 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"278\" data-id=\"3706\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/pipeline-min2-1024x278.png\" alt=\"\" class=\"wp-image-3706\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/pipeline-min2-1024x278.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/pipeline-min2-300x81.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/pipeline-min2-768x209.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/pipeline-min2-1536x417.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/pipeline-min2-2048x556.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>5. Refinement-based Lightweight Unified Anomaly Detection<\/p>\n<cite>J. H. Lee, J. M. Roh, and S.-J. 
Kang, <em>International SoC Design Conference (ISOCC)<\/em>, Aug. 2024.<br>ACK: x<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-32 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"879\" height=\"481\" data-id=\"4314\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ucea1\ucc98-2.png\" alt=\"\" class=\"wp-image-4314\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ucea1\ucc98-2.png 879w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ucea1\ucc98-2-300x164.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\ucea1\ucc98-2-768x420.png 768w\" sizes=\"auto, (max-width: 879px) 100vw, 879px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>6. Enhancing Depth Estimation in Various Objects<\/p>\n<cite>J. C. Yang, B. W. Kang, S. G. Lee, H. Y. Choi, H. U. Cho, and S.-J. Kang, <em>The 24th International Meeting on Information Display (IMID)<\/em>, Aug. 
2024.<br>ACK: IITP-2024-RS-2023-00260091, Samsung Display<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-33 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2144\" height=\"717\" data-id=\"3482\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/06\/\uadf8\ub9bc1.png\" alt=\"\" class=\"wp-image-3482\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/06\/\uadf8\ub9bc1.png 2144w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/06\/\uadf8\ub9bc1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/06\/\uadf8\ub9bc1-1024x342.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/06\/\uadf8\ub9bc1-768x257.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/06\/\uadf8\ub9bc1-1536x514.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/06\/\uadf8\ub9bc1-2048x685.png 2048w\" sizes=\"auto, (max-width: 2144px) 100vw, 2144px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>7. Dual Stocker Scheduling Optimization based on Reinforcement Learning<\/p>\n<cite>S. W. Ahn, J. K. Kim, J.-J. Hong, S.-G. Kim, and S.-J. 
Kang, <em>International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, Jul. 2024.<br>ACK: IITP-2024-RS-2023-00260091, 2021M3H2A1038042, LG Display<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-34 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3451\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc6.jpg\" alt=\"\" class=\"wp-image-3451\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc6.jpg 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc6-300x100.jpg 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc6-1024x341.jpg 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc6-768x256.jpg 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>8. Person in Place: Generating Associative Skeleton-Guidance Maps for Human-Object Interaction Image Editing (<a href=\"https:\/\/yangchanghee.github.io\/Person-in-Place_page\/\" class=\"external\" rel=\"nofollow\">Project Page<\/a>)<\/p>\n<cite>C. H. 
Yang*, C. H. Kang*, K. Kong*, H. Oh, and S.-J. Kang, <em>The IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)<\/em>, Jun. 2024.<br>ACK: IITP-2024-RS-2023-00260091, 2021R1I1A1A01051225<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-35 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" data-id=\"3257\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-1024x341.png\" alt=\"\" class=\"wp-image-3257\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>9. Human Motion Aware Text-to-Video Generation with Explicit Camera Control (<a href=\"https:\/\/anonymous.4open.science\/w\/HMTV_docs-5A6C\/\" class=\"external\" rel=\"nofollow\">Project Page<\/a>)<\/p>\n<cite>T. H. Kim*, C. 
H. Kang*, J. H. Park*, D. Jeong*, C. H. Yang*, S.-J. Kang, and K. Kong, <em>IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV)<\/em>, Jan. 2024.<br>ACK: 2021M3H2A1038042, 1711198586<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-36 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" data-id=\"3153\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3-1024x341.png\" alt=\"\" class=\"wp-image-3153\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>10. 
MetaSeg: MetaFormer-based Global Contexts-aware Network for Efficient Semantic Segmentation (<a href=\"https:\/\/openaccess.thecvf.com\/content\/WACV2024\/papers\/Kang_MetaSeg_MetaFormer-Based_Global_Contexts-Aware_Network_for_Efficient_Semantic_Segmentation_WACV_2024_paper.pdf\" class=\"mtli_attachment mtli_pdf external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>B. W. Kang*, S. H. Moon*, Y. B. Cho*, H. W. Yu*, and S.-J. Kang, <em>IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV)<\/em>, Jan. 2024.<br>ACK: IO201218-08232-01, IITP-2023-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-37 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"342\" data-id=\"3152\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-1024x342.png\" alt=\"\" class=\"wp-image-3152\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-1024x342.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-1536x512.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2.png 1772w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" 
style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>11. Simultaneous Optimization of Luminance and Color: A Novel Dimming Algorithm Utilizing Power-law Mapping<\/p>\n<cite>N. R. Kim, S.-J. Kang, <em>SID Symposium Digest of Technical Papers<\/em>, May. 2024.<br>ACK: 2020M3H4A1A02084899, IITP-2024-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-38 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"794\" height=\"269\" data-id=\"3446\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc4.png\" alt=\"\" class=\"wp-image-3446\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc4.png 794w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc4-300x102.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc4-768x260.png 768w\" sizes=\"auto, (max-width: 794px) 100vw, 794px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>12. 
Viewing Angle-aware Color and Luminance Distortion Compensation for Automotive OLED Displays<\/p>\n<cite>J. Pak, J. M. Choi, S. C. Pang, D. G. Kim, K. Kong, and S.-J. Kang, <em>SID Symposium Digest of Technical Papers<\/em>, May 2024.<br>ACK: IITP-2024-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-39 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"269\" data-id=\"3275\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-1-1024x269.png\" alt=\"\" class=\"wp-image-3275\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-1-1024x269.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-1-300x79.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-1-768x201.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-1-1536x403.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc1-1.png 1861w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>13. 
Adaptive Local Backlight Dimming Control with Local Boosting<\/p>\n<cite><strong><mark style=\"background-color: rgba(0, 0, 0, 0);\" class=\"has-inline-color has-nv-c-2-color\">[Invited Talk]<\/mark><\/strong> J.-C. Cho, J.-S. Yoo, J.-Y. Park, J.-W. Park, and S.-J. Kang, <em>SID Symposium Digest of Technical Papers<\/em>, May 2024.<br>ACK: x<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-40 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"605\" height=\"151\" data-id=\"3399\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/albd.jpg\" alt=\"\" class=\"wp-image-3399\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/albd.jpg 605w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/albd-300x75.jpg 300w\" sizes=\"auto, (max-width: 605px) 100vw, 605px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>14. CNN-based Encoder and Transformer-based Decoder for Efficient Semantic Segmentation<\/p>\n<cite>S. H. Moon, and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 
2024.<br>ACK: IITP-2023-RS-2023-00260091, IO201218-08232-01<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-41 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1280\" height=\"406\" data-id=\"3441\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\ub17c\ubb38\uadf8\ub9bc2.png\" alt=\"\" class=\"wp-image-3441\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\ub17c\ubb38\uadf8\ub9bc2.png 1280w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\ub17c\ubb38\uadf8\ub9bc2-300x95.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\ub17c\ubb38\uadf8\ub9bc2-1024x325.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\ub17c\ubb38\uadf8\ub9bc2-768x244.png 768w\" sizes=\"auto, (max-width: 1280px) 100vw, 1280px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>15. Motion Mask-driven Improvements in Monocular Dynamic Novel View Synthesis<\/p>\n<cite>S. Yeom, H. Son, C. Kang, J. Kim, K. J. Yun, W. S. Cheong, and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Jan. 
2024.<br>ACK: IITP-2023-RS-2023-00260091, 2022-0-00022<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-42 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1133\" height=\"378\" data-id=\"3450\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc5.png\" alt=\"\" class=\"wp-image-3450\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc5.png 1133w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc5-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc5-1024x342.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc5-768x256.png 768w\" sizes=\"auto, (max-width: 1133px) 100vw, 1133px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<h1 class=\"wp-block-heading\">2023<\/h1>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>1. Deep Learning-based Image Deblurring for Display Vision Inspection<\/p>\n<cite>S. J. Min, K. Kong, and S.-J. Kang,<em> Society for Information Display&#8217;s Display Week, May 
2023.<\/em><br>ACK: 2021R1A2C1004208, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-43 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3154\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4.png\" alt=\"\" class=\"wp-image-3154\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>2. A Novel Framework for Generating In-the-Wild 3D Hand Datasets (<a href=\"https:\/\/sites.google.com\/view\/hands2023\/abstract-report\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. Park*, K. Kong*, and S.-J. Kang, <em>7th ICCV workshop on Observing and Understanding Hands in Action (HANDS)<\/em>, Oct. 
2023.<br>ACK: 2021R1A2C1004208, IITP-2023-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-1024x341.png\" alt=\"\" class=\"wp-image-3155\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>3. SEFD : Learning to Distill Complex Pose and Occlusion (<a href=\"https:\/\/paperswithcode.com\/paper\/sefd-learning-to-distill-complex-pose-and\" class=\"external\" rel=\"nofollow\">View<\/a>) (<a href=\"https:\/\/yangchanghee.github.io\/ICCV2023_SEFD_page\/\" class=\"external\" rel=\"nofollow\">Project Page<\/a>)<\/p>\n<cite>C. H. Yang*, K. Kong*, S. J. Min*, G. Cha, H. D. Jang, D. Wee, and S.-J. Kang, <em>International Conference on Computer Vision<\/em>&nbsp;<em>(ICCV)<\/em>, Oct. 
2023.<br>ACK: 2021R1A2C1004208, IITP-2023-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-44 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3156\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc6.png\" alt=\"\" class=\"wp-image-3156\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc6.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc6-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc6-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc6-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>4. Diffusion-Based Image Generation for Display Defect Detection<\/p>\n<cite>C. H. Lee*, J. K. Kim*, and S.-J. Kang,&nbsp;<em>The 23rd International Meeting on Information Display (IMID)<\/em>, Aug. 
2023.<br>ACK: 2021R1A2C1004208, 2020M3H4A1A02084899<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-45 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3157\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7.png\" alt=\"\" class=\"wp-image-3157\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>5. Remaining Useful Life Prediction through Meaningful Feature Extraction Using SHAP<\/p>\n<cite>Y. I. Park, and S.-J. Kang,&nbsp;<em>The 23rd International Meeting on Information Display (IMID)<\/em>, Aug. 
2023.<br>ACK: 2021R1A2C1004208<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-46 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1334\" height=\"1025\" data-id=\"4171\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-11.png\" alt=\"\" class=\"wp-image-4171\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-11.png 1334w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-11-300x231.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-11-1024x787.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-11-768x590.png 768w\" sizes=\"auto, (max-width: 1334px) 100vw, 1334px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>6. Speech and Text-based Motion Generation and Matching System (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10212502\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. H. Shin, J. Park, and S.-J. Kang, <em>International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, Jun. 
25-28, 2023.<br>ACK: 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-47 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3173\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9.png\" alt=\"\" class=\"wp-image-3173\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>7. Optimization of Video Repetitive Action Counting for Efficient Inference on Edge Devices (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10212477\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>H. Yu, Y. B. Cho, and S.-J. Kang, <em>International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, Jun. 
25-28, 2023.<br>ACK: 2021R1A2C1004208, 2021M3H2A1038042, IO201218-08232-01<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-48 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3178\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10.png\" alt=\"\" class=\"wp-image-3178\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>8. FeedFormer: Revisiting Transformer Decoder for Efficient Semantic Segmentation (<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/25321\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. H. Shim, H. Yu, K. Kong, and S.-J. Kang, <em>Association for the Advancement of Artificial Intelligence (AAAI)<\/em>, Feb. 
7-14, 2023.<br>ACK: 2021R1A2C1004208, 2021M3H2A1038042, IO201218-08232-01<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-49 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"364\" data-id=\"4160\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/2.png\" alt=\"\" class=\"wp-image-4160\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/2.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/2-300x107.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/2-768x273.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>9. InvHDR: Inverse Tone Mapping With Invertible Neural Network (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10049952\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. K. Kim, and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Feb. 5-8, 2023.<br>ACK: P0020535. 
2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-50 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" data-id=\"3183\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-1-1024x341.png\" alt=\"\" class=\"wp-image-3183\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-1.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>10. Multi-view stereo with recurrent neural networks for spatio-temporal consistent depth maps (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10049937\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>H. S. Son, and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Feb. 
5-8, 2023.<br>ACK: 2021R1A2C1004208, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-51 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3182\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-1.png\" alt=\"\" class=\"wp-image-3182\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-1.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-1-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>11. Whole-body Human Mesh Reconstruction with Transformer (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10049916\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. Park, and S.-J. Kang, <em>International Conference on Electronics, Information, and Communication (ICEIC)<\/em>, Feb. 
5-8, 2023.<br>ACK: 2021R1A2C1004208, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-52 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"830\" height=\"311\" data-id=\"3184\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc14.png\" alt=\"\" class=\"wp-image-3184\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc14.png 830w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc14-300x112.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc14-768x288.png 768w\" sizes=\"auto, (max-width: 830px) 100vw, 830px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>12. An Unified Framework for Language Guided Image Completion (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10030522\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. H. Kim*, S. H. Jeong*, K. Kong*, and S.-J. Kang, <em>IEEE Winter Conference on Applications of Computer Vision (WACV)<\/em>, Jan. 
3-7, 2023.<br>ACK: 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-53 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1019\" height=\"460\" data-id=\"3185\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc15.png\" alt=\"\" class=\"wp-image-3185\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc15.png 1019w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc15-300x135.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc15-768x347.png 768w\" sizes=\"auto, (max-width: 1019px) 100vw, 1019px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>13. Unifying Domain Adaptation and Energy-Based Techniques for Person Search<\/p>\n<cite>J. Pak*,&nbsp;C. Jeon*,&nbsp;&nbsp;and S.-J. Kang,<em>&nbsp;IEEE International Conference on Visual Communications and Image Processing (VCIP)<\/em>, Dec. 
4-7, 2023.<br>ACK: 2020M3H4A1A02084899, P0020535<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-54 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"777\" height=\"251\" data-id=\"3640\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/figure.png\" alt=\"\" class=\"wp-image-3640\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/figure.png 777w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/figure-300x97.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/figure-768x248.png 768w\" sizes=\"auto, (max-width: 777px) 100vw, 777px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>14. Optimized Image Quality Determination for Backlight Dimming<\/p>\n<cite>N.R. Kim,&nbsp;&nbsp;and S.-J. 
Kang, <em>International SoC Design Conference (ISOCC)<\/em><br>ACK: IITP-2023-RS2023-00260091, 2020M3H4A1A02084899, P0020535<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-55 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"655\" height=\"481\" data-id=\"4163\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/3.png\" alt=\"\" class=\"wp-image-4163\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/3.png 655w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/3-300x220.png 300w\" sizes=\"auto, (max-width: 655px) 100vw, 655px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>15. Efficient Framework for Blind High-Resolution Image Reconstruction<\/p>\n<cite>S. J. Na, and S.-J. 
Kang, <em>International Conference on Consumer Electronics (ICCE) Asia<\/em><br>ACK: 2020M3H4A1A02084899, 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-56 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"539\" height=\"311\" data-id=\"3650\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/img.png\" alt=\"\" class=\"wp-image-3650\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/img.png 539w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/img-300x173.png 300w\" sizes=\"auto, (max-width: 539px) 100vw, 539px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<h1 class=\"wp-block-heading\">2022<\/h1>\n\n\n<p>&nbsp;<\/p>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<ol class=\"wp-block-list\">\n<li>Performance Comparison of Soiling Detection Using Anomaly Detection Methodology (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10031428\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/li>\n<\/ol>\n<cite>J. H. Lee, C. R. Jeon, and S.-J. Kang, <em>International SoC Design Conference (ISOCC)<\/em>, Gangneung, Korea, Oct. 
2022.<br>ACK: P0020535<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-57 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1133\" height=\"378\" data-id=\"3186\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc16.png\" alt=\"\" class=\"wp-image-3186\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc16.png 1133w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc16-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc16-1024x342.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc16-768x256.png 768w\" sizes=\"auto, (max-width: 1133px) 100vw, 1133px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>2. MosaicMVS: Mosaic-based omnidirectional depth estimation for view synthesis (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10005048\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>M. J. Shin*, M. J. Cho*, W. J. Park, K. Kong, J. S. Kim, K. J. Yun, G. S. Lee, and S.-J. Kang, <em>The 4th ECCV Workshop on Learning to Generate 3D Shapes and Scenes<\/em>, Oct. 
2022.<br>ACK: 2022-0-00022, 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-58 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" data-id=\"3187\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc17-1024x341.png\" alt=\"\" class=\"wp-image-3187\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc17-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc17-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc17-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc17.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>3. Anomaly Segmentation Using Class-aware Erosion and Smoothing (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9954841\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>B. W. Kang, J. H. Kwak, and S.-J. Kang, <em>The 7th International Conference On Consumer Electronics (ICCE) Asia<\/em>,&nbsp; Oct. 
2022.<br>ACK: 22PQWO-C153369-04, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-59 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3188\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc18.png\" alt=\"\" class=\"wp-image-3188\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc18.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc18-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc18-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc18-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>4. Deep Learning-based Data Augmentation for Display Defect Detection<\/p>\n<cite><strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">[Invited Talk]<\/mark><\/strong> C. H. Lee, J. K. Kim,&nbsp;and S.-J. Kang, <em>The 29th International Display Workshops (IDW)<\/em>, Fukuoka, Japan, Dec. 
2022.<br>ACK: 2021M3H2A1038042, 22PQWO-C153369-04<\/cite><\/blockquote>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-60 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" data-id=\"3189\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc19-1024x341.png\" alt=\"\" class=\"wp-image-3189\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc19-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc19-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc19-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc19.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>5. Selective TransHDR: Transformer-based selective&nbsp;HDR Imaging using Ghost Region Mask (<a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-19790-1_18\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. W. Song*, Y. I. Park*, K. Kong, J. H. Kwak, and S.-J. Kang,&nbsp;<em>Proceedings of the European Conference on Computer Vision (ECCV)<\/em>. Oct. 
2022.<br>ACK: 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-61 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3191\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc20.png\" alt=\"\" class=\"wp-image-3191\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc20.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc20-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc20-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc20-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>6. NTIRE 2022 Burst Super-Resolution Challenge (<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022W\/NTIRE\/html\/Bhat_NTIRE_2022_Burst_Super-Resolution_Challenge_CVPRW_2022_paper.html\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y. C. Nam*, Y. S. Jo* and&nbsp;S.-J. 
Kang,&nbsp;<em>CVPR&nbsp;2022 Workshop on New Trends in Image Restoration and Enhancement workshop and challenges on image and video processing<\/em>, Jun. 2022.<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-62 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" data-id=\"3196\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21-1024x341.png\" alt=\"\" class=\"wp-image-3196\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>7. Deep Anti-Aliasing: Image Restoration for Enhancing Display Defects Detection<\/p>\n<cite><strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">[Invited Talk]<\/mark><\/strong> S. J. Min, K. 
Kong, and S.-J. Kang, <em>The 22nd International Meeting on Information Display (IMID)<\/em>, Aug. 2022.<br>ACK: 2021R1A2C1004208, 2020M3H4A1A02084899, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-63 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3199\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc22.png\" alt=\"\" class=\"wp-image-3199\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc22.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc22-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc22-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc22-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>8. 
Burst Super Resolution Using Enhanced Residual Deformable Alignment and MAP estimation for Display Resolution Enhancement (<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022W\/NTIRE\/html\/Bhat_NTIRE_2022_Burst_Super-Resolution_Challenge_CVPRW_2022_paper.html\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y. C. Nam*, Y. S. Jo*,&nbsp;and S.-J. Kang,&nbsp;<em>The 22nd International Meeting on Information Display (IMID)<\/em>, Aug. 2022.<br>ACK: 2020M3H4A1A02084899, 2021M3H2A1038042, 22PQWO-C153369-04<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-64 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3204\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21-3.png\" alt=\"\" class=\"wp-image-3204\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21-3.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21-3-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21-3-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc21-3-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 
wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>9. Stretch-aware Luminance Compensation for Stretchable Displays<\/p>\n<cite>S. H. Jung*, J. H.&nbsp;Kim*&nbsp;and S.-J. Kang,&nbsp;<em>The 22nd International Meeting on Information Display (IMID)<\/em>, Aug. 2022.<br>ACK: 2021R1A2C1004208, 2020M3H4A1A02084899<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-65 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1133\" height=\"378\" data-id=\"3206\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc23.png\" alt=\"\" class=\"wp-image-3206\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc23.png 1133w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc23-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc23-1024x342.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc23-768x256.png 768w\" sizes=\"auto, (max-width: 1133px) 100vw, 1133px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>10. 
Brightness Compensation of Outpainted Image for Stretchable Display<\/p>\n<cite>J. H. Kim*, and S.-J. Kang,&nbsp;<em>The 22nd International Meeting on Information Display (IMID)<\/em>, Aug. 2022.<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-66 is-layout-flex wp-block-gallery-is-layout-flex\"><\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>11. Semi-supervised Anomaly Detection with Reinforcement Learning (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9895028\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>C. H. Lee*, J. K. Kim* and S.-J. Kang,&nbsp;<em>The 37th&nbsp;International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, Jul. 
2022.<br>ACK: 22PQWO-C153369-04, 2021R1A2C1004208,2021-0-02308<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-67 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1133\" height=\"728\" data-id=\"4165\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5.png\" alt=\"\" class=\"wp-image-4165\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5.png 1133w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5-300x193.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5-1024x658.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5-768x493.png 768w\" sizes=\"auto, (max-width: 1133px) 100vw, 1133px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>12. Class Attention Transfer for Semantic Segmentation (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9869901\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y. B. Cho*&nbsp;and S.-J. 
Kang,&nbsp;<em>IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS),&nbsp;<\/em>Jun.&nbsp;2022<em>.&nbsp;<\/em><br>ACK: IITP-2021-2018-0-01421, 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-68 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3214\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc25.png\" alt=\"\" class=\"wp-image-3214\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc25.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc25-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc25-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc25-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>13. Deep Learning-based Real-time Segmentation for Edge Computing Devices (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9869967\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. H. Kwak*, H. Yu*, Y. B. 
Cho*&nbsp;and S.-J. Kang,&nbsp;<em>IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)<\/em>, Jun. 2022.<br>ACK: IITP-2020-2018-0-01421<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-69 is-layout-flex wp-block-gallery-is-layout-flex\"><\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>14. Deep Learning-Based Image Enhancement for HDR Imaging (<a href=\"https:\/\/doi.org\/10.1002\/sdtp.15630\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite><strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">[Invited Talk]<\/mark><\/strong> Y. I. Park, J. W. Song and S.-J. Kang,&nbsp;<em>The SID International Symposium, Seminar, and Exhibition<\/em>, San Jose, USA,&nbsp;May. 
2022.<br>ACK: 2021R1A2C1004208, 2020M3H4A1A02084899<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-70 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"558\" data-id=\"4166\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/6.png\" alt=\"\" class=\"wp-image-4166\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/6.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/6-300x163.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/6-768x419.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>15. Vision Transformer-based Retina Vessel Segmentation With Deep Adaptive Gamma Correction (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9747597\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>H. Yu*, J. H. Shim*, J. H. Kwak*, J. W. Song* and S.-J. Kang,&nbsp;<em>IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)<\/em>, May. 
2022.<br>ACK: 2021R1A2C1004208, IITP-2021-2018-0-01421<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-71 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3219\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc27.png\" alt=\"\" class=\"wp-image-3219\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc27.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc27-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc27-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc27-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>16. Integrated In-vehicle Monitoring System Using 3D Human Pose Estimation and Seat Belt Segmentation (<a href=\"https:\/\/arxiv.org\/abs\/2204.07946\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>G. Kim*, H. Kim*, K. Kim, S. S. Cho, Y. H. Park and S.-J. 
Kang,&nbsp;<em>AAAI 2022 Workshop on AI for Transportation<\/em>, Vancouver, British Columbia, Canada, Feb. 2022.<br>ACK: 2022-0-00022, 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-72 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3222\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc29.png\" alt=\"\" class=\"wp-image-3222\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc29.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc29-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc29-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc29-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>17. Parallel Attention Network using Vector with High Correlation with Label for Remaining Useful Life Estimation (<a href=\"https:\/\/openreview.net\/forum?id=IlBRnwuFLvw\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y. I. Park, J. W. Song and S.-J. 
Kang, <em>AAAI 2022 Workshop on AI for Design and Manufacturing (ADAM)<\/em>, Vancouver, British Columbia, Canada, Feb. 2022.<br>ACK: IITP-2021-2018-0-01421, 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-73 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1133\" height=\"378\" data-id=\"3221\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc28.png\" alt=\"\" class=\"wp-image-3221\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc28.png 1133w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc28-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc28-1024x342.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc28-768x256.png 768w\" sizes=\"auto, (max-width: 1133px) 100vw, 1133px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>18. AnoSeg: Anomaly Segmentation Network Using Self-Supervised Learning (<a href=\"https:\/\/arxiv.org\/abs\/2110.03396\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. W. Song*, K. Kong*, Y. I. Park, S. G. Kim and S.-J. 
Kang,&nbsp;<em>AAAI 2022 Workshop on AI for Design and Manufacturing (ADAM)<\/em>, Vancouver, British Columbia, Canada, Feb. 2022.<br>ACK: IITP-2021-2018-0-01421, 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-74 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3223\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc30.png\" alt=\"\" class=\"wp-image-3223\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc30.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc30-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc30-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc30-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>19. 
Image-Adaptive Hint Generation via Vision Transformer for Outpainting (<a href=\"https:\/\/openaccess.thecvf.com\/content\/WACV2022\/papers\/Kong_Image-Adaptive_Hint_Generation_via_Vision_Transformer_for_Outpainting_WACV_2022_paper.pdf\" class=\"mtli_attachment mtli_pdf external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>D. H. Kong*, K. Kong*, K. H. Kim*, S. J. Min* and S.-J. Kang,&nbsp;<em>IEEE Winter Conference on Applications of Computer Vision (WACV)<\/em>, Jan. 2022.<br>ACK: 2020M3H4A1A02084899, 2021R1A2C1004208, 2020M3C1B8081320<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-75 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" data-id=\"3224\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc31.png\" alt=\"\" class=\"wp-image-3224\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc31.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc31-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc31-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc31-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n<p>&nbsp;<\/p>\n\n\n<p>&nbsp;<\/p>\n\n\n<p><h1>&nbsp;2021<\/h1><\/p>\n\n\n<hr \/>\n<ol>\n<li>S. J. Min and S.-J. Kang, \u201cEdge Map-guided Scale-iterative Image Deblurring\u201d, <em>13th Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)<\/em>, Dec. 2021. 
(<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9689267\" rel=\"nofollow\">View<\/a>)<\/li>\n<li><strong><strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\"><em>[Invited Talk]<\/em><\/mark><\/strong>\u00a0<\/strong>J. H. Kim and S.-J. Kang, \u201cDeep Learning-based Image Restoration Algorithms in Display Devices\u201d,\u00a0 <em>The 28th International Display Workshops (IDW)<\/em>, Online, Japan, Dec. 2021. (<a class=\"external\" href=\"https:\/\/confit.atlas.jp\/guide\/event-img\/idw2021\/AIS7_VHF6-01\/public\/pdf_archive?type=in\" rel=\"nofollow\">View<\/a>)<\/li>\n<li>J. H. Shim*, K. Kong*, and S.-J. Kang, \u201cCore-set Sampling for Efficient Neural Architecture Search\u201d, <em>38th International Conference on Machine Learning Workshop (ICMLW)<\/em>, Jul. 2021.\u00a0(<a class=\"external\" href=\"https:\/\/arxiv.org\/abs\/2107.06869\" rel=\"nofollow\">View<\/a>)\u00a0<em><strong>(Spotlight)<\/strong><\/em><\/li>\n<li>K. Kong*, K. H. Kim*, W. J. Song, and S.-J. Kang, \u201cSelective Focusing Learning in Conditional GANs\u201d, <em>38th International Conference on Machine Learning Workshop (ICMLW)<\/em>, Jul. 2021.\u00a0(<a class=\"external\" href=\"https:\/\/arxiv.org\/abs\/2107.08792\" rel=\"nofollow\">View<\/a>)\u00a0<em><strong>(Spotlight)<\/strong><\/em><\/li>\n<li><strong><em><strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">[Invited Talk]<\/mark><\/strong>\u00a0<\/em><\/strong>J. H. Shim and S.-J. Kang, \u201cDesign Automation of Efficient Deep Neural Networks in Display Devices\u201d,\u00a0 <em>International Meeting on Information Display (IMID)<\/em>, Seoul, Korea, Aug. 2021.<\/li>\n<li>J. Lee and S.-J. 
Kang, \u201cSkeleton action recognition using Two-Stream Adaptive Graph Convolutional Networks\u201d, <em>The 36th\u00a0International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, Jun. 2021. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9501457\" rel=\"nofollow\">View<\/a>)<\/li>\n<li>M. J. Shin, W. J. Park, J. S. Kim, K. G. Yun, W. S. Cheong and S.-J. Kang, \u201cUnderstanding the Limitations of SfM-Based Camera Calibration on Multi-View Stereo Reconstruction\u201d, <em>The 36<sup>th<\/sup>\u00a0International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, Jun. 2021. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9501460\" rel=\"nofollow\">View<\/a>)<\/li>\n<li>J. H. Kwak and S.-J. Kang, \u201cVehicle Tracking for Robust Vehicle Detection\u201d, <em>The 36<sup>th<\/sup>\u00a0International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, Jun. 2021.<\/li>\n<li>D. H. Kong, and S.-J. Kang, \u201cDownsizing Heatmap Resolution for real-time 3D Human Pose Estimation\u201d, <em>The 36th\u00a0International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, Jun. 2021. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9501409\" rel=\"nofollow\">View<\/a>)<\/li>\n<li>J. H. Shim, and S.-J. Kang, \u201cNeural Architecture Search for Light-weight Multi-touch Classification\u201d, <em>The 36th International Technical Conference on Circuits\/Systems, Computers and Communications (ITC-CSCC)<\/em>, Jun. 2021. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9501259\" rel=\"nofollow\">View<\/a>)<\/li>\n<li>J. W. Song, Y. I. Park, J. J. Hong, S. G. Kim and S.-J. 
Kang, \u201cAttention-based Bidirectional LSTM-CNN Model for Remaining Useful Life Estimation\u201d, <em>IEEE International Symposium on Circuits and Systems (ISCAS)<\/em>, Daegu, Korea,\u00a0May. 2021. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9401572\" rel=\"nofollow\">View<\/a>)<\/li>\n<li>W. J. Park*, J. H. Kim*, J. S. Kim, K. J. Yun, W. S. Cheong and S.-J. Kang, \u201cStructured Camera Pose Estimation for Mosaic-based Omnidirectional Imaging,\u201d <em>IEEE International Symposium on Circuits and Systems (ISCAS)<\/em>,\u00a0Daegu, Korea, May. 2021. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9401585\" rel=\"nofollow\">View<\/a>)<\/li>\n<li>K. H. Kim, Y. H. Yun, K. W. Kang, K. Gong, S. Lee, and S.-J. Kang, \u201cPainting Outside as Inside: Edge Guided Image Outpainting via Bidirectional Rearrangement with Progressive Step Learning,\u201d <em>IEEE Winter Conference on Applications of Computer Vision (WACV)<\/em>, Jan. 2021. (<a class=\"external\" href=\"https:\/\/arxiv.org\/abs\/2010.01810\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>J. H. Kim*, S. Lee* and S.-J. Kang, \u201cEnd-to-End Differentiable Learning to HDR Image Synthesis for Multi-exposure Images\u201d,\u00a0<em>Association for the Advancement of Artificial Intelligence (AAAI)<\/em>, Feb. 2021. (<a class=\"external\" href=\"https:\/\/arxiv.org\/abs\/2006.15833\" rel=\"nofollow\">View<\/a>)<\/li>\n<\/ol>\n<h1>2020<\/h1>\n<p>&nbsp;<\/p>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>J. W. Chang*, S. H. Ahn*, K. W. Kang and S.-J. Kang, \u201cTowards Design Methodology of Efficient Fast Algorithms for Accelerating Generative Adversarial Networks on FPGAs\u201d, <em>Proceedings of ACM\/IEEE Asia and South Pacific Design Automation Conference (ASP-DAC)<\/em>, pp. 283-288, Jan. 2020. 
(<a class=\"external\" href=\"https:\/\/arxiv.org\/abs\/1911.06918\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)&nbsp;<strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">(<em>Best Paper Candidate<\/em>)<\/mark><\/strong><\/li>\n\n\n\n<li>S. Y. Jo, N. Ahn and S.-J. Kang, \u201cLightweight Tone-mapped HDRNET with Exposure Stack Generation\u201d, <em>The SID International Symposium, Seminar, and Exhibition<\/em>, San Francisco, USA, Jun. 2020. (<a class=\"external\" href=\"https:\/\/sid.onlinelibrary.wiley.com\/doi\/abs\/10.1002\/sdtp.14040\" rel=\"nofollow\">View<\/a>)<\/li>\n\n\n\n<li>H. S. Yoon, S. H. Ahn, K. W. Kang, I. H. Lee, S. J. Park, Y. A. Ma and S.-J. Kang, \u201cConvolutional Neural Network-based Multi-touch Detection Technique on Learning from Class-imbalanced Dataset\u201d, <em>The SID International Symposium, Seminar, and Exhibition<\/em>, San Francisco, USA, Jun. 2020. (<a class=\"external\" href=\"https:\/\/sid.onlinelibrary.wiley.com\/doi\/abs\/10.1002\/sdtp.14270\" rel=\"nofollow\">View<\/a>)<\/li>\n\n\n\n<li>J. Heo, G. Kim, J. Park, Y. Kim, S. S. Cho, C. W. Lee and S.-J. Kang*, \u201cLightweight Deep Neural Network-based Real-Time Pose Estimation on Embedded Systems\u201d, <em>Intelligent Vehicle Symposium<\/em>, Las Vegas, NV, USA, Jun. 2020. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9304550\" rel=\"nofollow\">View<\/a>)<\/li>\n\n\n\n<li>S. H. Ahn, J. W. Chang and S.-J. Kang, \u201cAn Efficient Accelerator Design Methodology for Deformable Convolutional Networks\u201d, <em>IEEE International Conference on Image Processing (ICIP)<\/em>, Abu Dhabi, United Arab Emirates (UAE), Oct. 2020. (<a class=\"external\" href=\"https:\/\/arxiv.org\/abs\/2006.05238\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>J. W. Song, N. Ahn and S.-J. 
Kang, \u201cDeep Learning-based Mura Detection With Atrous Spatial Pyramid Pooling\u201d, <em>International Meeting on Information Display (IMID)<\/em>, Seoul, Korea, Aug. 2020.<\/li>\n\n\n\n<li>Y. I. Park, S. Y. Jo and S.-J. Kang, \u201cDeep Learning-based HDR Generator Focused On Saturated Area Restoration\u201d, <em>International Meeting on Information Display (IMID)<\/em>, Seoul, Korea, Aug. 2020.<\/li>\n\n\n\n<li>J. S. Choi and S.-J. Kang, \u201cSequential Compression Using Efficient LUT Correlation for Display Defect Compensation\u201d, <em>International SoC Design Conference (ISOCC)<\/em>, Yeosu, Korea, Oct. 2020. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9332953\" rel=\"nofollow\">View<\/a>)&nbsp;<strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">(<em>Best Paper Award<\/em>)<\/mark><\/strong><\/li>\n\n\n\n<li>Y. I. Park, J. W. Song and S.-J. Kang, \u201cHDR Image Generator Focused on Saturated Region Restoration with Contextual Loss\u201d, <em>International SoC Design Conference (ISOCC)<\/em>, Yeosu, Korea, Oct. 2020. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9333073\" rel=\"nofollow\">View<\/a>)<\/li>\n\n\n\n<li>J. H. Song and S.-J. Kang, \u201cFast 3D Hand Pose Estimation for Real-time System\u201d, <em>International SoC Design Conference (ISOCC)<\/em>, Yeosu, Korea, Oct. 2020. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9333123\" rel=\"nofollow\">View<\/a>)<\/li>\n\n\n\n<li>J. H. Kim*, S. Lee* and S.-J. Kang, \u201cEnd-to-End Differentiable Learning to HDR Image Synthesis for Multi-exposure Images\u201d, <em>Neural Information Processing Systems Workshop (NeurIPSW)<\/em>, Dec. 2020. 
(<a class=\"external\" href=\"https:\/\/arxiv.org\/abs\/2006.15833\" rel=\"nofollow\">View<\/a>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\">2019<\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>S. J. Lee*, K. W. Kang*, S. Lee, and S.-J. Kang, \u201cA Decomposition Method of Object Transfiguration\u201d, <em>SIGGRAPH Asia 2019 Technical Briefs<\/em>, ACM, Nov. 2019. (<a class=\"external\" href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3355088.3365151\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>J. Park, Y. Lee, J. Heo and S.-J. Kang*, \u201cConvolutional Neural Network-based Jaywalking Data Generation and Classification\u201d,&nbsp;<em>International SoC Design Conference (ISOCC)<\/em>, Jeju, Korea, Oct. 2019. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9078526\" rel=\"nofollow\">View<\/a>)&nbsp;<strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">(<em>Best Paper Award<\/em>)<\/mark><\/strong><\/li>\n\n\n\n<li>J. Heo, J. Park and S. J. Kang, \u201cDeep Learning-based Object Focused Optical Flow Estimation for Contents-aware Display System\u201d,&nbsp;<em>International Meeting on Information Display (IMID)<\/em>, Gyeongju, Korea, Aug. 2019.&nbsp;<strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">(<em>Best Paper Award<\/em>)<\/mark><\/strong><\/li>\n\n\n\n<li>S. Y. Jo, N. Ahn and S. J. Kang, \u201cDeep Learning-based HDR Imaging for Virtual Reality Head Mounted Displays\u201d,&nbsp;<em>International Meeting on Information Display (IMID)<\/em>, Gyeongju, Korea, Aug. 2019.<\/li>\n\n\n\n<li>J. S. Choi, S. J. Lee and S.-J. Kang, \u201cDeep Learning-based Data Augmentation for Display Defect Detection\u201d,&nbsp;<em>International Meeting on Information Display (IMID)<\/em>, Gyeongju, Korea, Aug. 2019.<\/li>\n\n\n\n<li>Y. L. 
Seo, S. H. Ahn, K. W. Kang and S.-J. Kang, \u201cOptimal Quantization Selection for Fixed Point-based Convolution Neural Network\u201d,&nbsp;<em>International Meeting on Information Display (IMID)<\/em>, Gyeongju, Korea, Aug. 2019.<\/li>\n\n\n\n<li>J. W. Chang, K. W. Kang and S. J. Kang, \u201cSDCNN: An Efficient Sparse Deconvolutional Neural Network Accelerator on FPGA,\u201d&nbsp;<em>Proceedings of Design, Automation &amp; Test in Europe (DATE)<\/em>, Mar. 2019. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/8715055\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>N. Ahn and S.-J. Kang, \u201cMulti-View Image-based Vehicle Brand Recognition System with Cascaded Convolutional Neural Network\u201d,&nbsp;<em>2019 IEEE International Conference on Consumer Electronics (ICCE)<\/em>, Jan. 2019. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/8661920\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>Y. Lee and S.-J. Kang, \u201cWeb Scraping Crawling-based Automatic Data Augmentation for Deep Neural Networks-based Vehicle Classifications\u201d,&nbsp;<em>2019 IEEE International Conference on Consumer Electronics (ICCE)<\/em>, Jan. 2019. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/8661971\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\">2018<\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>S. Y. Jo, N. Ahn, Y. Lee, and S.-J. Kang, \u201cTransfer Learning-based Vehicle Classification\u201d,&nbsp;<em>International SoC Design Conference (ISOCC)<\/em>, Daegu, Korea, Nov. 2018. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/8649802\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>S. Kim, M. W. Seo, S. J. Lee and S.-J. 
Kang, \u201cObject Tracking-based Foveated Super-Resolution Convolutional Neural Network for Head Mounted Display,\u201d&nbsp;<em>SIGGRAPH Asia 2018, ACM<\/em>, Tokyo, Japan, Dec. 2018. (<a class=\"external\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3283289.3283325\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>S. Lee,&nbsp;G. H. An and S. J. Kang, \u201cDeep Recursive HDRI: Inverse Tone Mapping using Generative Adversarial Networks\u201d,&nbsp;<i>Proceedings of the European Conference on Computer Vision (ECCV)<\/i>, Sep. 2018. (<a class=\"external\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Siyeong_Lee_Deep_Recursive_HDRI_ECCV_2018_paper.html\" rel=\"nofollow\">View<\/a>)<\/li>\n\n\n\n<li>J. W. Chang, K. W.&nbsp;Kang and S. J. Kang*, \u201cFPGA-optimized Image Super-Resolution for Virtual Reality Using Deep Learning\u201d,&nbsp;<em>International Meeting on Information Display (IMID)<\/em>, Busan, Korea, Aug. 2018.&nbsp;<strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">(<em>Best Paper Award<\/em>)<\/mark><\/strong><\/li>\n\n\n\n<li>S. Kim, Y. Lee, N. Ahn, and S.-J. Kang*, \u201cDriver Identification System Using Convolutional Neural Network with Background Removal-based Infrared Data Augmentation\u201d,&nbsp;<em>Intelligent Vehicle Symposium<\/em>, Changshu, China, Jun. 2018. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/8500364\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>J. W. Chang and S. J. Kang, \u201cOn-Chip CNN Accelerator for Image Super-Resolution\u201d,&nbsp;<em>Proceedings of ACM\/IEEE Design Automation Conference (DAC-WIP)<\/em>, San Francisco, CA, USA, Jun. 2018. 
(<a class=\"external\" href=\"https:\/\/www.researchgate.net\/publication\/322592143_On-Chip_CNN_Accelerator_for_Image_Super-Resolution\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>K. W. Kang, M. W. Seo, S. L. Lee, H. C. Lee, E. Y. Oh, J. S. Baek and S.-J. Kang*, \u201cHead Movement-based Motion Blur Measurement System for Head Mounted Displays\u201d,&nbsp;<em>The SID International Symposium, Seminar, and Exhibition<\/em>, Los Angeles, USA, May. 2018. (<a class=\"external\" href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1002\/sdtp.12208\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>G. H. An, S. Lee, Y. D. Ahn and S.-J. Kang*, \u201cDeep Tone-mapped HDRNET for High Dynamic Range Image Restoration\u201d,&nbsp;<em>The SID International Symposium, Seminar, and Exhibition<\/em>, Los Angeles, USA, May. 2018. (<a class=\"external\" href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1002\/sdtp.12532\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>Y. D. Ahn, J. W. Chang, S. I. Cho and S.-J. Kang, \u201cWireless Communication Module-based User Identification Technique,\u201d&nbsp;<em>International Symposium on Innovation in Information Technology and Application<\/em>, Malaysia, Jan. 29 \u2013 Feb. 2, 2018. (<a class=\"external\" href=\"http:\/\/jiita.org\/v2n105\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>S. I. Cho and S.-J. Kang, \u201cReal-time People Counting System Using Simplified Motion Detection,\u201d&nbsp;<em>International Symposium on Innovation in Information Technology and Application<\/em>, Malaysia, Jan. 29 \u2013 Feb. 2, 2018. (<a class=\"external\" href=\"http:\/\/jiita.org\/v2n305\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n\n\n\n<li>J. W. Chang and&nbsp;S. 
J. Kang, \u201cOptimizing FPGA-based Convolutional Neural Networks Accelerator for Image Super-Resolution\u201d,&nbsp;<em>Proceedings of ACM\/IEEE Asia and South Pacific Design Automation Conference (ASP-DAC)<\/em>, Jan. 2018. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/8297347\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\">2017<\/h1>\n\n\n<hr \/>\n<ol>\n<li>J. W. Chang, S.-J. Kang*, M. W. Seo, S. W. Choi, S. L. Lee, H. C. Lee, E. Y. Oh, and J. S. Baek, \u201cReal-time Temporal Quality Compensation Technique for Head Mounted Displays\u201d, <i>SIGGRAPH Asia 2017 Posters<\/i>, <em>ACM<\/em>, Nov. 2017. (<a class=\"external\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3145690.3145714\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S. Kim, G. H. An, S.-J. Kang*, \u201cFacial Expression Recognition System Using Machine Learning\u201d, <em>International SoC Design Conference (ISOCC)<\/em>, Seoul, Korea, Nov. 2017. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/8368887\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)\u00a0<strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">(<em>Best Paper Award<\/em>)<\/mark><\/strong><\/li>\n<li>B. J. Kim, S. Y. Lee and S.-J. Kang*, \u201cHybrid Sensors-based Wind Turbine Blade Monitoring System\u201d, <em>International Symposium on Advanced Intelligent Systems (ISIS)<\/em>, Daegu, Korea, Oct. 2017.<\/li>\n<li>Y. D. Ahn, G. H. An, Y. S. Kim and S.-J. Kang*, \u201cReverse Tone Mapping with Adaptive Mapping Curve\u201d, <em>International Meeting on Information Display (IMID)<\/em>, Busan, Korea, Aug. 2017.<\/li>\n<li>G. H. An, Y. D. Ahn, and S.-J. 
Kang*, \u201cBacklight Uniformity Evaluation Method for 1-D Local Dimming Display\u201d, <em>International Meeting on Information Display (IMID)<\/em>, Busan, Korea, Aug. 2017.<\/li>\n<li>S. P. Cheon and S.-J. Kang, \u201cSensor-based Driver Condition Recognition using Support Vector Machine for the Detection of Driver Drowsiness,\u201d <em>Intelligent Vehicle Symposium<\/em>, Los Angeles, USA, Jun. 2017. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/7995924\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>M. W. Seo, S. W. Choi, and S.-J. Kang, \u201cSensor-based Latency Reduction Method for Virtual Reality Head Mounted Display\u201d, <em>The SID International Symposium, Seminar, and Exhibition<\/em>, Los Angeles, USA, May. 2017. (<a class=\"external\" href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1002\/sdtp.11954\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<\/ol>\n<h1>2016<\/h1>\n<hr \/>\n<ol>\n<li>S. W. Choi, M. W. Seo, and S.-J. Kang, \u201cPrediction-Based Latency Compensation Technique for Head Mounted Display\u201d,\u00a0<em>13th International SoC Design Conference<\/em>, Jeju, Korea, Oct. 2016. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/7799715\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>Y. D. Ahn and S.-J. Kang*, \u201cMapping Table based Fisheye Image Correction Using Interpolation for Time Reduction\u201d, <em>13th International SoC Design Conference<\/em>, Jeju, Korea, Oct. 2016. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/7799760\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>H. S. Lee, S.-J. Kang, and Y. H. Kim*, \u201cMotion Vector Smoothing of Boundary of Moving Object for Frame Rate Up-Conversion\u201d, <em>13th International SoC Design Conference<\/em>, Jeju, Korea, Oct. 2016. 
(<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/7799730\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S. Yoon, S.-J. Kang, and Y. H. Kim*, \u201cBlock-based Static Caption Detection Using Intensity Range\u201d, <em>IEEE Global Conference on Consumer Electronics<\/em>, Kyoto, Japan, Oct. 2016. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/7800374\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>M. W. Seo, S. W. Choi, S. L. Lee, J. H. Park, E. Y. Oh, J. S. Baek and S.-J. Kang, \u201cMotion-to-Photon Latency Analysis for Virtual Reality HMD System\u201d, <em>The 16th International Meeting on Information Display (IMID)<\/em>, Jeju, Korea, Aug. 2016.<\/li>\n<li>S. P. Cheon, S. H. Kim, Y. H. Kim and S.-J. Kang*, \u201cPerformance Analysis of Stereo Matching for Multiview 3D Displays,\u201d <em>The 16th International Meeting on Information Display (IMID)<\/em>, Jeju, Korea, Aug. 2016.<\/li>\n<li>S. W. Choi, M. W. Seo, S. L. Lee, J. H. Park, E. Y. Oh, J. S. Baek and S.-J. Kang, \u201cHead position model-based Latency Measurement System for Virtual Reality Head Mounted Display\u201d, <em>The SID International Symposium, Seminar, and Exhibition<\/em>, San Francisco, USA, May. 2016. (<a class=\"external\" href=\"https:\/\/sid.onlinelibrary.wiley.com\/doi\/abs\/10.1002\/sdtp.10930\" rel=\"nofollow\">View<\/a>)<\/li>\n<li>S.-J. Kang, Y. W. Jeong, J. J. Yun and S. Bae*, \u201cReal-time Eye Tracking Technique for Multiview 3D Systems\u201d, <em>The 32nd International Conference on Consumer Electronics<\/em>, Las Vegas, USA, Jan. 2016. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/7430728\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<\/ol>\n<h1>2015<\/h1>\n<hr \/>\n<ol>\n<li>S. W. Choi and S.-J. 
Kang, \u201cEdge Density-based Age Estimation Algorithm for Public Display\u201d, <em>The 15th International Meeting on Information Display (IMID)<\/em>, Daegu, Korea, Aug. 2015.<\/li>\n<li>Y. D. Ahn and S.-J. Kang*, \u201cOLED Power-Reduction Algorithm Using Gray-Level Mapping Conversion\u201d, <em>The SID International Symposium, Seminar, and Exhibition<\/em>, San Jose, USA, May 31-June 5, 2015. (<a class=\"external\" href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1002\/sdtp.10455\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S.-J. Kang, \u201cPositional Analysis-based Scene-change Detection Algorithm\u201d, <em>The 31st International Conference on Consumer Electronics<\/em>, Las Vegas, USA, Jan. 2015. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/7066299\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<\/ol>\n<h1 class=\"wp-block-heading\"><strong>2014<\/strong><\/h1>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<ol>\n<li>S.-J. Kang, Y. Ahn, and M. Y. Lee*, \u201cOptimal Subpixel Rendering Technique for Ultrasound Imaging Displays\u201d, <em>The 10th Display Valley Conference and Exhibition (DVCE2014)<\/em>, Chungnam, Korea, Nov. 2014.\u00a0<strong><mark class=\"has-inline-color has-nv-c-2-color\" style=\"background-color: rgba(0, 0, 0, 0);\">(<em>Outstanding Poster Paper Award<\/em>)<\/mark><\/strong><\/li>\n<\/ol>\n<h1 class=\"wp-block-heading\"><strong>2013<\/strong><\/h1>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<ol>\n<li>S.-J. Kang, S. I. Cho, S. Yoo and Y. H. Kim*, \u201cMulti-histogram Based Scene Change Detection for Frame Rate Up-Conversion\u201d, <em>The 31st International Conference on Consumer Electronics<\/em>, Las Vegas, USA, Jan. 2013. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/6486916\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<\/ol>\n<h1 class=\"wp-block-heading\"><strong>~ 2012<\/strong><\/h1>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<ol>\n<li>S.-J. Kang, S. Kim, E. Oh and Y. W. 
Song*, \u201cLuminance-Difference-based Adaptive Subpixel Rendering Algorithm for Matrix Displays\u201d, <em>The 30th International Conference on Consumer Electronics<\/em>, Las Vegas, USA, Jan. 2012. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/6162057\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S.-J. Kang and Y. H. Kim*, \u201cDynamic Backlight Dimming Using Multiple Histograms for Low Power Liquid Crystal Displays\u201d, <em>The 29th International Conference on Consumer Electronics<\/em>, Las Vegas, USA, Jan. 2011. (<a class=\"external\" href=\"https:\/\/ieeexplore.ieee.org\/document\/5722723\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>D. G. Yoo, S.-J. Kang, S. K. Lee and Y. H. Kim*, \u201cPerformance Comparison of Fast Bilateral Motion Estimation Algorithms for Frame Rate Up-Conversion\u201d, <em>International Meeting on Information Display (IMID)<\/em>, Seoul, Korea, Oct. 2010.<\/li>\n<li>S. I. Cho, S.-J. Kang and Y. H. Kim*, \u201cGlobal Backlight Dimming Method Using a Two-sided Gray-level Conversion for LCD\u201d, <em>International Meeting on Information Display (IMID)<\/em>, Seoul, Korea, Oct. 2010.<\/li>\n<li>P. Lavole, S.-J. Kang, S. K. Lee and Y. H. Kim*, \u201cDynamic Global Dimming Method based on the Complementary Cumulative Distribution Function for LCD\u201d, <em>International Meeting on Information Display (IMID)<\/em>, Seoul, Korea, Oct. 2010.<\/li>\n<li>S.-J. Kang, H. Ahn, H. Hong, E. Oh, I. Chung and Y. H. Kim*, \u201cLow-Power LCDs Using an Image Integrity-Based Backlight-Dimming Algorithm\u201d, <em>The SID International Symposium, Seminar, and Exhibition<\/em>, Seattle, USA, May. 2010. (<a class=\"external\" href=\"https:\/\/www.researchgate.net\/publication\/260304071_673_Low_Power_Liquid_Crystal_Displays_Using_an_Image_Integrity-based_Backlight_Dimming_Algorithm\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>P. 
Lavole, S. K. Lee, S.-J. Kang and Y. H. Kim*, \u201cDynamic Clipping Ratio Determination for Global Backlight Dimming in LCD\u201d, <em>The IEEE International Symposium on Circuits and Systems (ISCAS)<\/em>, Paris, France, May 30-June 2, 2010. (<u><a class=\"external\" href=\"http:\/\/ieeexplore.ieee.org\/search\/srchabstract.jsp?tp=&amp;arnumber=5537926&amp;queryText%3DDynamic+Clipping+Ratio+Determination+for+Global+Backlight+Dimming+in+LCD%26openedRefinements%3D*%26searchField%3DSearch+All\" rel=\"nofollow\">View<\/a>)<\/u><\/li>\n<li>S.-J. Kang, D. G. Yoo, S. K. Lee and Y. H. Kim*, \u201cFull Search Block Matching Algorithm using Pattern-based Sub-sampling for Low Power Hardware Implementation\u201d, <em>International Technical Conference on Circuits\/Systems, Computers, and Communications (ITC-CSCC)<\/em>, Jeju, Korea, Jul. 2009. (<a class=\"external\" href=\"https:\/\/www.researchgate.net\/publication\/229021690_Full_Search_Block_Matching_Algorithm_using_Pattern-based_Sub-sampling_for_Low_Power_Hardware_Implementation\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S.-J. Kang, D. G. Yoo, S. K. Lee and Y. H. Kim*, \u201cDesign and Implementation of Median Filter based Adaptive Motion Vector Smoothing for Motion Compensated Frame Rate Up-Conversion\u201d, <em>The 13th IEEE International Symposium on Consumer Electronics<\/em>, Kyoto, Japan, May. 2009. (<a class=\"external\" href=\"http:\/\/ieeexplore.ieee.org\/search\/srchabstract.jsp?arnumber=5156958&amp;isnumber=5156791&amp;punumber=5109426&amp;k2dockey=5156958@ieeecnfs&amp;query=%28design+and+implementation+%3Cin%3E+metadata%29+%3Cand%3E+%285109426+%3Cin%3E+punumber%29&amp;pos=21&amp;access=no\" rel=\"nofollow\"><u>View<\/u><\/a>)<\/li>\n<li>D. G. Yoo, S.-J. Kang, S. K. Lee and Y. H. 
Kim*, \u201cAdaptive Sum of the Bilateral Absolute Difference for Motion Estimation Using Temporal Symmetry\u201d, <em>The SID International Symposium, Seminar, and Exhibition<\/em>, Texas, USA, May. 2009. (<a class=\"external\" href=\"https:\/\/www.researchgate.net\/publication\/251001101_P-48_Adaptive_Sum_of_the_Bilateral_Absolute_Difference_for_Motion_Estimation_Using_Temporal_Symmetry\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S.-J. Kang, D. G. Yoo, S. K. Lee and Y. H. Kim, \u201cHardware Implementation of Motion Estimation Using a Sub-sampled Block for Frame Rate Up-Conversion\u201d, <em>International SoC Design Conference (ISOCC)<\/em>, Busan, Korea, Nov. 2008. (<u><a class=\"external\" href=\"http:\/\/ieeexplore.ieee.org\/xpls\/abs_all.jsp?isnumber=4815668&amp;arnumber=4815694&amp;count=50&amp;index=25\" rel=\"nofollow\">View<\/a>)<\/u><\/li>\n<li>D. G. Yoo, S.-J. Kang, S. K. Lee and Y. H. Kim*, \u201cPhase Correlated Bilateral Motion Estimation for Frame Rate Up-Conversion\u201d, <em>International Technical Conference on Circuits\/Systems, Computers, and Communications (ITC-CSCC)<\/em>, Shimonoseki, Japan, Jul. 2008. (<a class=\"external\" href=\"https:\/\/c11.kr\/10uqw\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S.-J. Kang, D. G. Yoo, S. K. Lee and Y. H. Kim*, \u201cAdaptive frame rate up-conversion considering low computational complexity and complex motion\u201d, <em>The SID International Symposium, Seminar, and Exhibition<\/em>, LA, USA, May. 2008. (<a class=\"external\" href=\"https:\/\/www.researchgate.net\/publication\/255609259_P-48_Adaptive_Frame_Rate_UpConversion_Considering_Low_Computational_Complexity_and_Complex_Motion\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S. K. Lee, D. G. Yoo, S.-J. Kang and Y. H. 
Kim*, \u201cA Novel Image Compression Algorithm for Overdriving\u201d, <em>The SID International Symposium, Seminar, and Exhibition<\/em>, LA, USA, May. 2008. (<a class=\"external\" href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1889\/1.3069701\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S.-J. Kang, S. K. Lee, D. G. Yoo and Y. H. Kim*, \u201cFrame rate up-conversion with bidirectional sub-sampled block motion estimation and minimum deviation filter\u201d, <em>International Conference on New Exploratory Technologies<\/em>, Seoul, Korea, Oct. 2007. (<a class=\"external\" href=\"http:\/\/csdl.postech.ac.kr\/bbs\/board.php?bo_table=sub6_1&amp;wr_id=139&amp;page=3\" rel=\"nofollow\">View<\/a>)<\/li>\n<li>D. G. Yoo, S.-J. Kang, S. K. Lee and Y. H. Kim*, \u201cAdaptive color processing algorithm for frame rate up-conversion\u201d, <em>International Conference on New Exploratory Technologies<\/em>, Seoul, Korea, Oct. 2007.<\/li>\n<li>S.-J. Kang, D. G. Yoo and Y. H. Kim*, \u201cPhase correlation-based motion estimation using variable block sizes for frame rate up-conversion\u201d, <em>International Technical Conference on Circuits\/Systems, Computers, and Communications (ITC-CSCC)<\/em>, Pusan, Korea, Jul. 2007. (<a class=\"external\" href=\"https:\/\/www.researchgate.net\/publication\/228878382_Phase_correlation-based_motion_estimation_using_variable_block_sizes_for_frame_rate_up-conversion\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<li>S.-J. Kang and Y. H. Kim*, \u201cPerformance comparison of motion estimation methods for frame rate up-conversion\u201d, <em>International Display Workshops (IDW)<\/em>, Otsu, Japan, Dec. 2006. 
(<a class=\"external\" href=\"https:\/\/www.researchgate.net\/publication\/286712158_Performance_comparison_of_motion_estimation_methods_for_frame_rate_up-conversion\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">View<\/a>)<\/li>\n<\/ol>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>2026 1. Dual Anchors, Do It Better: Hierarchical Group Merging for Zero-shot Anomaly Detection 2. WIMFRIS: WIndow Mamba Fusion and Parameter Efficient Tuning for Referring Image Segmentation 3. Enhancing Perceptual Quality on High-Resolution Displays: A Unified Deep Model for Super-Resolution and Deblurring 4. Toward Robust Anomaly Detection for Real-World Display Inspection 5. Enhancing Reverse Distillation&hellip;&nbsp;<a href=\"https:\/\/vds.sogang.ac.kr\/?p=2234\" class=\"\" rel=\"bookmark\">\ub354 \ubcf4\uae30 &raquo;<span class=\"screen-reader-text\">International Conference<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"off","neve_meta_content_width":70,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[5],"tags":[],"class_list":["post-2234","post","type-post","status-publish","format-standard","hentry","category-publication"],"_links":{"self":[{"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/posts\/2234","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/vds
.sogang.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2234"}],"version-history":[{"count":263,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/posts\/2234\/revisions"}],"predecessor-version":[{"id":5036,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/posts\/2234\/revisions\/5036"}],"wp:attachment":[{"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2234"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2234"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2234"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}