{"id":2339,"date":"2023-09-22T16:13:43","date_gmt":"2023-09-22T07:13:43","guid":{"rendered":"https:\/\/vds.sogang.ac.kr\/?p=2339"},"modified":"2026-03-09T13:32:54","modified_gmt":"2026-03-09T04:32:54","slug":"international-journal-2","status":"publish","type":"post","link":"https:\/\/vds.sogang.ac.kr\/?p=2339","title":{"rendered":"International Journal"},"content":{"rendered":"\n<h1 class=\"wp-block-heading\">Submitted<\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Direct Anomaly Segmentation with Location Information, <em>IEEE Transactions on Industrial Electronics<\/em>, Submitted<\/li>\n\n\n\n<li>Data Selective Matching for Stable Conditional Generative Adversarial Network Training, <em>IEEE Transactions on Artificial Intelligence<\/em>, Submitted<\/li>\n\n\n\n<li>Realistic Human Image Animation with Dynamic Camera Effects, <em>IEEE Transactions on Multimedia<\/em>, Submitted<\/li>\n\n\n\n<li>Enhancing Monocular Dynamic Gaussian Splatting via Adaptive Pipeline Integration, <em>IEEE Transactions on Image Processing<\/em>, Submitted<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\">2025&nbsp;<\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>1. ICSD-NeRF: Independent Canonical Spaces for Enhanced Dynamic Scene Modeling in Neural Radiance Fields (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/11319162\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>S. Yeom, H. Son, C. Kang, E. Shin, J. Kim, KJ. Yun and S.-J.
Kang, <em>IEEE Transactions on Computational Imaging<\/em><br>ACK: 2022-0-00022, RS-2022-II220022, IITP-2025-RS-2023-00260091, RS-2024-00414230<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"588\" data-id=\"4668\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc8-2-1024x588.png\" alt=\"\" class=\"wp-image-4668\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc8-2-1024x588.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc8-2-300x172.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc8-2-768x441.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc8-2.png 1253w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>2. Efficient Monocular Depth-Based Physical Distance Measurement for Low-Depth Scales (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/11152375\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. Yang, M. Zinke, B. Kang, H. Cho, H. Choi, S. Lee and S.-J.
Kang, <em>IEEE Transactions on Industrial Informatics<\/em>, vol. 21, no. 12, pp. 9389-9399, 2025.<br>ACK: RS-2025-02263706, RS-2024-00414230, IITP-2025-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"384\" data-id=\"4622\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5-1-1024x384.png\" alt=\"\" class=\"wp-image-4622\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5-1-1024x384.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5-1-300x113.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5-1-768x288.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/5-1.png 1340w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>3. DGTFNet: Depth-Guided Tri-Axial Fusion Network for Efficient Generalizable Stereo Matching (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/11150692?source=authoralert\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>S. Moon, H. Lee and S.-J.
Kang, <em>IEEE Robotics and Automation Letters<\/em>, vol. 10, no. 10, pp. 10791-10798, Oct. 2025<br>ACK: IITP-2025-RS-2023-00260091, RS-2024-00414230<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"484\" data-id=\"4596\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/DGTFNet-1024x484.png\" alt=\"\" class=\"wp-image-4596\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/DGTFNet-1024x484.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/DGTFNet-300x142.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/DGTFNet-768x363.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/DGTFNet.png 1352w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>4. Luminance Compensation for Stretchable Displays Using Deep Visual Feature-Optimized Gaussian-Weighted Kernels (<a href=\"https:\/\/sid.onlinelibrary.wiley.com\/doi\/10.1002\/jsid.2052\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y.-I. Park, and S.-J. 
Kang, <em>Journal of the Society for Information Display<\/em>, vol. 33, no. 5, pp. 452-463, 2025.<br>ACK: IITP-2025-RS-2023-00260091, RS-2024-00414230, 202412001.01<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"486\" data-id=\"4150\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-2-1024x486.png\" alt=\"\" class=\"wp-image-4150\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-2-1024x486.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-2-300x142.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-2-768x364.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-2-1536x729.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-2-2048x971.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>5.
SO-Diffusion: Diffusion-based Depth Estimation from SEM Images and OCD Spectra (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10929668\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y. Hwang, M. Song, A. Ma, Q.H. Kim, K. B. Chang, J. Jeong, and S.-J. Kang, <em>IEEE Transactions on Instrumentation &amp; Measurement<\/em>, 2025.<br>ACK: IO240123-08647-01, IITP-2025-RS-2023-00260091, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-5 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"8490\" height=\"3005\" data-id=\"4117\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig3_structure-\ubcf5\uc0ac\ubcf8.png\" alt=\"\" class=\"wp-image-4117\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig3_structure-\ubcf5\uc0ac\ubcf8.png 8490w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig3_structure-\ubcf5\uc0ac\ubcf8-300x106.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig3_structure-\ubcf5\uc0ac\ubcf8-1024x362.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig3_structure-\ubcf5\uc0ac\ubcf8-768x272.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig3_structure-\ubcf5\uc0ac\ubcf8-1536x544.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/fig3_structure-\ubcf5\uc0ac\ubcf8-2048x725.png 2048w\" sizes=\"auto, (max-width: 8490px) 100vw, 8490px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column 
is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>6. Programmable-Room: Interactive Textured 3D Room Meshes Generation Empowered by Large Language Models (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/11178224\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. Kim, J. Park, K. Kong, and S.-J. Kang, <em>IEEE Transactions on Multimedia<\/em>, 2025.<br>ACK: IITP-2025-RS-2023-00260091, IITP-RS-2022-00156318, RS-2024-00414230<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-6 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"542\" data-id=\"4155\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-6-1024x542.png\" alt=\"\" class=\"wp-image-4155\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-6-1024x542.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-6-300x159.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-6-768x406.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-6-1536x813.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-6.png 1909w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div 
class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>7. Query-Vector-Focused Recurrent Attention for Remaining Useful Life Prediction (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10988914\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y.-I. Park, and S.-J. Kang, <em>IEEE Transactions on Reliability<\/em>, 2025.<br>ACK: IITP-2025-RS-2023-00260091, IITP-RS-2022-00156318, IO201218-08232-01, RS-2024-00414230<br><\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-7 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"471\" data-id=\"4356\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/proposed_major-1024x471.png\" alt=\"\" class=\"wp-image-4356\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/proposed_major-1024x471.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/proposed_major-300x138.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/proposed_major-768x353.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/proposed_major-1536x707.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/proposed_major-2048x943.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex 
wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>8. Supervised Denoising for Extreme Low-Light Raw Videos (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/11010860\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y. Im, J. Pak, S. Na, J. Park, J. Ryu, S. Moon, B. Koo and S.-J. Kang, <em>IEEE Transactions on Circuits and Systems for Video Technology<\/em>, vol. 35, pp. 10693-10704, 2025.<br>ACK: VisionNexT, IITP-2025-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-8 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"518\" data-id=\"4387\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc3-2.png\" alt=\"\" class=\"wp-image-4387\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc3-2.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc3-2-300x152.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc3-2-768x389.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column
is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>9. CRAN: Compressed Residual Attention Network for Lightweight Single Image Super-Resolution (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/11027440\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>H. Oh, Y. Im, and S.-J. Kang, <em>IEEE Signal Processing Letters<\/em>, vol. 32, pp. 2444-2448, 2025.<br>ACK: IO201218-08232-01, IITP-2025-RS-2023-00260091, RS-2024-00414230, RS-2025-02263706<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-9 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"1358\" data-id=\"4391\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc5-1-scaled.png\" alt=\"\" class=\"wp-image-4391\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc5-1-scaled.png 2560w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc5-1-300x159.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc5-1-1024x543.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc5-1-768x407.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc5-1-1536x815.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/\uadf8\ub9bc5-1-2048x1087.png 2048w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\"
\/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>10. A Unified Framework for Super-Resolution via Clean Image Prior (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10925386\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>S. Na, Y. Nam, and S.-J. Kang, <em>IEEE Access<\/em>, 2025.<br>ACK: IITP-2025-RS-2023-00260091, RS-2024-00414230, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-10 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"533\" data-id=\"4446\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-15-1024x533.png\" alt=\"\" class=\"wp-image-4446\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-15-1024x533.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-15-300x156.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-15-768x400.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-15-1536x800.png 1536w,
https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-15.png 1763w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>11. Self-Supervised Anomaly Segmentation for Surface Defect Inspection in Display Panels (<a href=\"https:\/\/sid.onlinelibrary.wiley.com\/doi\/10.1002\/jsid.2106\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J.-W. Song, K. B. Kong, Y.-I. Park and S.-J. 
Kang, <em>Journal of the Society for Information Display<\/em>, 2025.<br>ACK: IITP-2025-RS-2023-00260091, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-11 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"362\" data-id=\"4552\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-16-1024x362.png\" alt=\"\" class=\"wp-image-4552\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-16-1024x362.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-16-300x106.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-16-768x271.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-16.png 1293w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<h1 class=\"wp-block-heading\">2024&nbsp;<\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>1.
CMVDE: Consistent Multi-view Video Depth Estimation via Geometric-Temporal Coupling Approach (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10529981\/metrics#metrics\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>H.S. Son, M.J. Shin, M.J. Cho, J. Kim, K. Yun, and S.-J. Kang, <em>IEEE Transactions on Multimedia<\/em>, vol. 26, pp. 9710-9721, 2024.<br>ACK: IITP-2024-RS-2023-00260091, 2022-0-00022<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-12 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"488\" data-id=\"4156\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-7-1024x488.png\" alt=\"\" class=\"wp-image-4156\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-7-1024x488.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-7-300x143.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-7-768x366.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-7-1536x733.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-7.png 1841w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote
is-layout-flow wp-block-quote-is-layout-flow\">\n<p>2. Mixup-based Neural Network for Image Restoration and Structure Prediction from SEM Images (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10444704\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. Park, Y. Cho, Y. Hwang, A. Ma, Q.H. Kim, K. B. Chang, J. Jeong, and S.-J. Kang, <em>IEEE Transactions on Instrumentation &amp; Measurement<\/em>, vol. 73, pp. 1-16, 2024.<br>ACK: IO221227-04374-0, IITP-2023-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-13 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" data-id=\"3444\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc3-1024x341.png\" alt=\"\" class=\"wp-image-3444\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc3-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc3-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc3-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc3.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote 
class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>3. Deep Conditional HDRI: Inverse Tone Mapping via Dual Encoder-Decoder Conditioning Method (<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/10476730\" data-type=\"link\" data-id=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/10476730\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y. Nam, J. Kim, J. Shim, and S.-J. Kang, <em>IEEE Transactions on Multimedia<\/em>, vol. 26, pp. 8504-8515, 2024.<br>ACK: 2021R1A2C1004208, IITP-2024-RS-2023-00260091, 2020M3H4A1A02084899<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-14 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" data-id=\"3443\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc2-1024x341.png\" alt=\"\" class=\"wp-image-3443\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc2-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc2-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc2-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/05\/\uadf8\ub9bc2.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex 
wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>4. Image Clustering using Generated Text Centroids (<a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0923596524000298?dgcid=author\" data-type=\"link\" data-id=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0923596524000298?dgcid=author\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>D. Kong, K. B. Kong, and S.-J. Kang, <em>Signal Processing: Image Communication<\/em>, vol. 125, 117128, 2024.<br>ACK: IITP-2024-RS-2023-00260091, 2021M3H2A1038042, IO201218-08232-01<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-15 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"542\" data-id=\"4157\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-8-1024x542.png\" alt=\"\" class=\"wp-image-4157\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-8-1024x542.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-8-300x159.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-8-768x406.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-8-1536x813.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-8.png 1890w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column
is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>5. Enhancing Stability in Training Conditional Generative Adversarial Networks via Selective Data Matching (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10623662\" data-type=\"link\" data-id=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0923596524000298?dgcid=author\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>K. B. Kong, K. H. Kim, and S.-J. Kang, <em>IEEE Access<\/em>, vol. 12, pp. 119647-119659, 2024.<br>ACK: RS-2024-00414230, KSC-2023-CRE-0444, IITP-2024-RS-2023-00260091<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-16 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"849\" height=\"295\" data-id=\"3667\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/img-1.png\" alt=\"\" class=\"wp-image-3667\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/img-1.png 849w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/img-1-300x104.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/img-1-768x267.png 768w\" sizes=\"auto, (max-width: 849px) 100vw, 849px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center 
is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>6. Frequency Domain-Based Super Resolution Using Two-Dimensional Structure Consistency for Ultra-High-Resolution Display<\/p>\n<cite>Y. L. Seo and S.-J. Kang, <em>J. Imaging<\/em>, vol. 10, no. 11, 266, 2024.<br>ACK: IITP-RS-2022-0015631<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-17 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"344\" data-id=\"4445\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-14-1024x344.png\" alt=\"\" class=\"wp-image-4445\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-14-1024x344.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-14-300x101.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-14-768x258.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-14.png 1427w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<h1 class=\"wp-block-heading\">2023&nbsp;<\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\"><\/div><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow 
wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>1. Cross-aware Early Fusion with Stage-dived Vision and Language Transformer Encoders for Referring Image Segmentation (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10345690\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y. B. Cho*, H. W. Yu*, and S.-J. Kang, <em>IEEE Transactions on Multimedia<\/em>, vol.26, pp. 5823-5833, 2023.<br>ACK: IO201218-08232-01<em>,&nbsp;<\/em>IITP2023-RS-2023-00260091, 2021R1A2C1004208, 2020M3H4A1A02084899<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1134\" height=\"378\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-1.png\" alt=\"\" class=\"wp-image-3190\" style=\"width:408px;height:auto\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-1.png 1134w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc2-1-768x256.png 768w\" sizes=\"auto, (max-width: 1134px) 100vw, 1134px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" 
style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>2. MosaicMVS: Mosaic-based Omnidirectional&nbsp;Multi-view Stereo for Indoor Scenes (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10005048\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>M. J. Shin*, W. J. Park*, M.J. Cho*, K. B. Kong*, H. Son, &nbsp;J. S. Kim, K. J. Yun,&nbsp; G.S. Lee, and S.-J. Kang, <em>IEEE Transactions on Multimedia<\/em>, vol.26 pp.8279-8290, 2023.<br>ACK: 2018-0-00207, 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3-1-1024x341.png\" alt=\"\" class=\"wp-image-3192\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3-1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc3-1.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex 
wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>3. Human Body-Aware Feature Extractor Using Attachable Feature Corrector for Human Pose Estimation (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9858008\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>G. Kim, H. Kim, K. B. Kong, J. W. Song, and S.-J. Kang,&nbsp;<em>IEEE Transactions on Multimedia<\/em>, vol.25, no. 18, pp. 5789-5799, 2023.<br>ACK: R2020040058, 2020M3C1B8081320, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4-1-1024x341.png\" alt=\"\" class=\"wp-image-3193\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4-1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc4-1.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote 
is-layout-flow wp-block-quote-is-layout-flow\">\n<p>4. Classification Network-Guided Weighted K-means Clustering for Multi-Touch Detection (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10179190?source=authoralert\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. Lee*, J. H. Yun*, J. H. Shim*, and S.-J. Kang,&nbsp;<em>IEEE&nbsp;Sensors&nbsp;Journal<\/em>, vol.23, no. 18, pp. 21397-21407, Sept. 2023.<br>ACK: IITP-2023-RS-2023-00260091, 2021R1A2C1004208, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1701\" height=\"567\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-1.png\" alt=\"\" class=\"wp-image-3194\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-1.png 1701w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc5-1-1536x512.png 1536w\" sizes=\"auto, (max-width: 1701px) 100vw, 1701px\" \/><\/figure>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote 
is-layout-flow wp-block-quote-is-layout-flow\">\n<p>5. Out-of-Focus Image Deblurring for Mobile Display Vision Inspection (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10036087?source=authoralert\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>S.-J. Min, K. B. Kong, and S.-J. Kang,&nbsp;<em>IEEE Transactions on Circuits and Systems for Video Technology<\/em>, vol.33, no. 9, pp. 5309-5317, Sept.&nbsp;2023.<br>ACK: 2021M3H2A1038042, 2021R1I1A1A01051225, IITP-2023-RS-2023-00260091<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"403\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-10-1024x403.png\" alt=\"\" class=\"wp-image-4162\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-10-1024x403.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-10-300x118.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-10-768x303.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-10-1536x605.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-10-2048x807.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow 
wp-block-quote-is-layout-flow\">\n<p>6. Improving Gaze Tracking in Large Screens with Symmetric Gaze Angle Amplification and Optimization Technique (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/10143197?source=authoralert\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. K. Kim*, J. Park*, Y. K. Moon and S.-J. Kang, <em>IEEE Access<\/em>, vol.11, pp.85799-85811, June 2023.<br>ACK: 2021R1A2C1004208, R2020040058, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7-1-1024x341.png\" alt=\"\" class=\"wp-image-3197\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7-1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc7-1.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>7. 
Pseudo-label Vector-guided Parallel Attention Network for Remaining Useful Life Prediction (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9870670\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>Y. I. Park, J.W. Song, and S.-J. Kang,&nbsp;<em>IEEE Transactions on Industrial Informatics<\/em>, vol.19, no. 4, pp. 5602-5611, Apr. 2023.<br>ACK: 2021M3H2A1038042, IO201218-08232-01<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc8-3-1024x341.png\" alt=\"\" class=\"wp-image-3201\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc8-3-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc8-3-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc8-3-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc8-3.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<h1 class=\"wp-block-heading\">2022<\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 
">
wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>1. Image Demoireing Via U-Net for Detection of Display Defects (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9808134\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. H. Kim, K. B. Kong, and S.-J. Kang, <em>IEEE Access<\/em>, vol.10, pp.68645-68654, June 2022.<br>ACK: 2020M3H4A1A02084899, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9-1-1024x341.png\" alt=\"\" class=\"wp-image-3202\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9-1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc9-1.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>2. Super-Resolving Methodology for Noisy Unpaired Datasets (<a href=\"https:\/\/www.mdpi.com\/1424-8220\/22\/20\/8003\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>S.-J. 
Min, Y. S. Jo, and S.-J. Kang,&nbsp;<em>Sensors<\/em>,&nbsp;vol.22(20), no.8003, Oct. 2022.&nbsp;<br>ACK: IO210819-08892-01, 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"340\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10-3-1024x340.png\" alt=\"\" class=\"wp-image-3207\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10-3-1024x340.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10-3-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10-3-768x255.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc10-3.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>3. Distribution Density-Aware Compensation for High-Resolution Stretchable Display (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9819901?source=authoralert\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>S. H. Jung, J. H. Kim, and S.-J. 
Kang,&nbsp;<em>IEEE Access<\/em>, vol.10, pp.72470-72479,&nbsp;July 2022.<br>ACK: 2021M3H2A1038042<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc11-1-1024x341.png\" alt=\"\" class=\"wp-image-3208\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc11-1-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc11-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc11-1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc11-1.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>4. Heatmap Assisted Accuracy Score Evaluation Method for Machine-Centric Explainable Deep Neural Networks (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9800759?source=authoralert\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. Lee, H. Cho, Y. J. Pyun, S.-J. Kang, and H. Nam,&nbsp;<em>IEEE Access<\/em>, vol.10, pp.64832-64849, June. 
2022.<br>ACK: 2019R1F1A1061114<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-2-1024x341.png\" alt=\"\" class=\"wp-image-3210\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-2-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-2-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-2-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc12-2.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>5. Dynamic Hand Gesture Recognition using Improved Spatio-Temporal Graph Convolutional Network (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9749235\" class=\"external\" rel=\"nofollow\">View<\/a>)<\/p>\n<cite>J. H. Song, K. B. Kong, and S.-J. Kang,&nbsp;<em>IEEE Transactions on Circuits and Systems for Video Technology<\/em>, vol.32, no. 9, pp. 6227-6239, Sept. 
2022.<br>ACK: R2020040058, 22PQWO-C153369-04, 2020M3C1B8081320<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-2-1024x341.png\" alt=\"\" class=\"wp-image-3211\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-2-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-2-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-2-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc13-2.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>6. TouchNAS: Efficient Touch Detection Model Design Methodology for Resource-Constrained Devices (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9695433?source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>)<\/p>\n<cite>S. H. Ahn*, J. W. Chang*, H. S. Yoon, and S.-J. Kang,&nbsp;<em>IEEE&nbsp;Sensors&nbsp;Journal<\/em>, vol.22, no. 7, pp. 6784-6792, Jan. 
2022.<br>ACK: 2021R1A2C1004208, IITP-2021-2018-0-01421, 22PQWO-C153369-04<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"341\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc14-2-1024x341.png\" alt=\"\" class=\"wp-image-3213\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc14-2-1024x341.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc14-2-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc14-2-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc14-2.png 1134w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>7. Camera Pose Estimation Framework for Array Structured Images (<a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/epdf\/10.4218\/etrij.2021-0303\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>)<\/p>\n<cite>M. J. Shin*, W. J. Park*, J. H. Kim*, J. S. Kim, K. J. Yun, and S.-J. Kang, <em>ETRI Journal<\/em>, vol.44, no.1, pp.10-23, Feb. 
2022.<br>ACK: 2018-0-0020, TP-2021-2018-0-01421, 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"342\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc15-1-1024x342.png\" alt=\"\" class=\"wp-image-3215\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc15-1-1024x342.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc15-1-300x100.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc15-1-768x256.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2024\/03\/\uadf8\ub9bc15-1.png 1133w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:63%\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-8cf370e7 wp-block-group-is-layout-flex\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>8. EAGNet: Elementwise Attentive Gating Network-Based Single Image De-raining with Rain Simplification (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9387346?source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>)<\/p>\n<cite>N. Ahn, S.Y. Jo, and S.-J. Kang, &nbsp;<em>IEEE Transactions on Circuits and Systems for Video Technology<\/em>, vol.32, no.2, pp.608-620, Feb. 
2022.<br>ACK: IITP-2021-2018-0-01421, 2021R1A2C1004208<\/cite><\/blockquote>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:37%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"398\" src=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-9-1024x398.png\" alt=\"\" class=\"wp-image-4159\" srcset=\"https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-9-1024x398.png 1024w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-9-300x116.png 300w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-9-768x298.png 768w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-9-1536x596.png 1536w, https:\/\/vds.sogang.ac.kr\/wp-content\/uploads\/2023\/09\/image-9-2048x795.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n<\/div><\/div>\n<\/div><\/div>\n<\/div><\/div>\n<\/div><\/div>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>2021<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>S.Y. Jo, S. Lee, N. Ahn, and S.-J. Kang, &#8220;Deep Arbitrary HDRI: Inverse Tone Mapping with Controllable Exposure Changes,\u201d <em>IEEE Transactions on Multimedia<\/em>, vol.24, pp. 2713-2726, Jun. 2021. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9447972?source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>) (<strong><em>IF: 6.513<\/em><\/strong>)<\/li>\n\n\n\n<li>J. Park, J. Heo, and S.-J. Kang, &#8220;Feedback-based Object Detection for Multi-person Pose Estimation,&#8221; <em>Signal Processing: Image Communication<\/em>, vol.99, 116508, Nov. 2021. 
(<a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0923596521002472\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>).&nbsp;(<strong><em>IF: 3.256<\/em><\/strong>)<\/li>\n\n\n\n<li>S.I. Cho and S.-J. Kang,&#8221; Learning Methodologies to Generate Kernel-learning-based Image Downscaler for Arbitrary Scaling Factors,&#8221;&nbsp;<em>IEEE Transactions on Image Processing<\/em>, vol.30, pp.4526-4539,&nbsp;Apr. 2021. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9409664\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>) (<em><strong>IF: 10.856<\/strong><\/em>)<\/li>\n\n\n\n<li>S. Lee, S.Y. Jo, G.H. An, and&nbsp;S.-J. Kang, &#8220;Learning to Generate Multi-Exposure Stacks with Cycle Consistency for High-Dynamic-Range Imaging,&#8221;&nbsp;<em>IEEE Transactions on Multimedia<\/em>, vol.23, pp. 2561-2574, Aug. 2021. (<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9154558\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>) (<strong><em>IF: 6.513<\/em><\/strong>)<\/li>\n\n\n\n<li>J.H. Song, and S.-J. Kang, &#8220;3D Hand Pose Estimation via Graph-based Reasoning,&#8221; <em>IEEE Access<\/em>, vol.9, pp.35824-35833, Feb. 2021. (<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9361677\" class=\"external\" rel=\"nofollow\">View<\/a>) (<strong><em>IF: 3.367<\/em><\/strong>)<\/li>\n\n\n\n<li>S.I. Cho, J.H. Park, and S.-J. Kang, \u201cGenerative Adversarial Network-based Image Denoiser Controlling Heterogeneous Losses,\u201d&nbsp;<em>Sensors<\/em>,&nbsp;&nbsp;vol.21(4), no.1191, Feb. 2021. 
(<a href=\"https:\/\/www.mdpi.com\/1424-8220\/21\/4\/1191\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<em><strong>IF:3.576<\/strong><\/em>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>2020<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>S.-J. Lee, S. Lee, S. I. Cho and S.-J. Kang, &#8220;Object Detection-based Video Retargeting with Spatial-Temporal Consistency,&#8221; <em>IEEE Transactions on Circuits and Systems for Video Technology<\/em>, vol.30, no.12, pp.4434-4439, Dec. 2020. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9043574\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>) (<a href=\"http:\/\/vds.sogang.ac.kr\/index.php\/2019\/11\/10\/comparisonvideos\/\" target=\"_blank\" rel=\"noreferrer noopener\">Videos<\/a>) (<em><strong>IF: 4.046<\/strong><\/em>)&nbsp;<strong>(<em>The top 50 most frequently downloaded documents, 2020<\/em>)<\/strong><\/li>\n\n\n\n<li>S. I. Cho and S.-J. Kang,&#8221;Extrapolation-Based Video Retargeting with Backward Warping Using an Image-to-Warping Vector Generation Network&#8221;,&nbsp;<em> IEEE Signal Processing Letters<\/em>, vol.27, no.1, pp.446-450, Dec. 2020. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9018110?source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>) (<em><strong>IF: 3.268<\/strong><\/em>)<\/li>\n\n\n\n<li>S. I. Cho and S.-J. Kang,&#8221; Temporal Incoherence-free Video Retargeting Using Foreground Aware-extrapolation,&#8221;&nbsp;<em>IEEE Transactions on Image Processing<\/em>, vol.29, no.1, pp.4848-4861, Dec. 2020. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9025780?source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF: 6.79<\/strong><\/em>)<\/li>\n\n\n\n<li>S. I. Cho and S.-J. 
Kang, &#8220;Dictionary-based Interpolation Technique for Text Quality Enhancement,&#8221; <em>Journal of Information Display<\/em>, pp.1-7, Nov. 2020. (<a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/15980316.2020.1843556?scroll=top&amp;needAccess=true\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>) (<em><strong>IF: 3.70<\/strong><\/em>)<\/li>\n\n\n\n<li>S. Lee, G. H. An, J. Kim, K. Yun, W. S. Jung, and S.-J. Kang, &#8220;Tri-Level Optimization-based Image Rectification for Polydioptric Cameras,&#8221; <em>Signal Processing: Image Communication<\/em>,&nbsp;vol.87, 115884, Sep. 2020. (<a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0923596520300916\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF: 2.779<\/strong><\/em>)<\/li>\n\n\n\n<li>Y. Yun, S.-J. Lee, and S.-J. Kang, &#8220;Motion Recognition-based Robot Arm Control System Using Head Mounted Display,&#8221; <em>IEEE Access<\/em>, vol.8, pp.15017-15026, Jan. 2020. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/8952722?source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<em><strong>IF: 4.098<\/strong><\/em>)<\/li>\n\n\n\n<li>J. W. Chang, K. W. Kang, and S.-J. Kang, &#8220;An Energy-Efficient FPGA-based Deconvolutional Neural Networks Accelerator for Single Image Super-Resolution,&#8221;&nbsp;<em>IEEE Transactions on Circuits and Systems for Video Technology<\/em>,&nbsp;vol.30, no. 1, pp.281-295, Jan. 2020. 
(<a href=\"https:\/\/ieeexplore.ieee.org\/document\/8584497?source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF: 4.046<\/strong><\/em>)&nbsp;<strong>(<em>The top 50 most frequently downloaded documents, 2020<\/em>)<\/strong><\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>2019<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Y. Lee, N. Ahn, J. H. Heo, S. Y. Jo, and S.-J. Kang,&#8221;Teaching Where to See: Knowledge Distillation-based Attentive Information Transfer in Vehicle Maker Classification,&#8221;&nbsp;<em>IEEE Access<\/em>, vol.7, pp.86412-86420, Jul. 2019. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/8746266?source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<em><strong>IF: 4.098<\/strong><\/em>)<\/li>\n\n\n\n<li>S. I. Cho and S.-J. Kang,&#8221;Power Control Technique Using Error Distribution Analysis for Ultrasound Imaging Displays,&#8221; <em>Electronics,<\/em>&nbsp;vol. 8(5), no.471, April. 2019. (<a href=\"https:\/\/www.mdpi.com\/2079-9292\/8\/5\/471\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<em><strong>IF: 1.764<\/strong><\/em>)<\/li>\n\n\n\n<li>N. Ahn, S. Y. Jo, and S.-J. Kang,&#8221;Constraint-Aware Electric Power Consumption Estimation for Electric Vehicle Charging Station Placement,&#8221; <em>Energies<\/em>, vol.12(6), no.1000, pp.1-13, Mar. 2019. (<a href=\"https:\/\/www.mdpi.com\/1996-1073\/12\/6\/1000\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>),&nbsp;(<em><strong>IF: 2.707<\/strong><\/em>)<\/li>\n\n\n\n<li>S. I. Cho and S.-J. Kang,&#8221;Histogram Shape-based Scene-change Detection Algorithm,&#8221; <em>IEEE Access<\/em>, vol.7, pp.27662-27667, Feb. 2019. 
(<a href=\"https:\/\/ieeexplore.ieee.org\/document\/8653285\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF:&nbsp;4.098<\/strong><\/em>)<\/li>\n\n\n\n<li>S. I. Cho and S.-J. Kang,&#8221;Gradient Prior-aided CNN Denoiser with Separable Convolution-based Optimization of Feature Dimension,&#8221;&nbsp;<em>IEEE Transactions on Multimedia<\/em>,&nbsp;vol.21, no.2, pp.484-493, Feb.&nbsp;2019. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/8419273\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF: 5.452<\/strong><\/em>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>2018<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>H. S. Lee, S.-J. Kang, and Y. H. Kim,&#8221;Scrolling text detection based on region characteristic analysis for frame rate up-conversion,&#8221; <em>Displays<\/em>, vol.55, pp.19-30,&nbsp;Dec. 2018. (<a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0141938217302020\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF: 1.526<\/strong><\/em>)<\/li>\n\n\n\n<li>G. H. An, S. Lee, M. W. Seo, K. Yun, W. S. Cheong, and&nbsp;S.-J. Kang, &#8220;Charuco Board-based Omnidirectional Camera Calibration Method,&#8221;&nbsp;<em>Electronics,<\/em>&nbsp;vol. 7(12), no.421, Dec. 2018. (<a href=\"https:\/\/www.mdpi.com\/2079-9292\/7\/12\/421\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF:&nbsp;2.110<\/strong><\/em>)<\/li>\n\n\n\n<li>S. I. Cho and S.-J. Kang,&#8221;Real-time People Counting System for Customer Movement Analysis,&#8221; <em>IEEE Access<\/em>, vol.6, pp.55264-55272, Oct. 2018. 
(<a href=\"https:\/\/ieeexplore.ieee.org\/document\/8478241\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF: 3.557<\/strong><\/em>)<\/li>\n\n\n\n<li>J. W. Chang and&nbsp;S.-J. Kang,&#8221;Real-time Vehicle Detection and Tracking Algorithm for Forward Vehicle Collision Warning,&#8221; <em>Journal of Semiconductor Technology and Science (JSTS)<\/em>,&nbsp;vol.18, no.5, Oct. 2018. (<a href=\"http:\/\/www.jsts.org\/html\/main.htm\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<strong><em>IF: 0.515<\/em><\/strong>)<\/li>\n\n\n\n<li>S. Lee, G. H. An, and&nbsp;S.-J. Kang, &#8220;Deep Chain HDRI: Reconstructing a High Dynamic Range Image from a Single Low Dynamic Range Image,&#8221; <em>IEEE Access<\/em>,&nbsp;vol.6, pp.49913-49924, Sep.&nbsp;2018. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/8457442\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF: 3.557<\/strong><\/em>)<\/li>\n\n\n\n<li>S. W. Choi, S. Y. Lee, M. W. Seo, and S.-J. Kang, &#8220;Time Sequential Motion-to-Photon Latency Measurement System for Virtual Reality Head Mounted Displays,&#8221; <em>Electronics<\/em>,&nbsp;vol.7(9), no.171, Sep. 2018. (<a href=\"http:\/\/www.mdpi.com\/2079-9292\/7\/9\/171\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF: 2.110<\/strong><\/em>)<\/li>\n\n\n\n<li>S. I. Cho and S.-J. Kang,&#8221;Geodesic Path-Based Diffusion Acceleration for Image Denoising,&#8221;&nbsp;<em>IEEE Transactions on Multimedia<\/em>,&nbsp;vol.20, no.7, pp.1738-1750, Jul. 2018. (<a href=\"https:\/\/ieeexplore.ieee.org\/document\/8170291\/?arnumber=8170291&amp;source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<em><strong>IF: 3.509<\/strong><\/em>)<\/li>\n\n\n\n<li>G. H. An, Y. D. Ahn, S. Y. 
Lee, and&nbsp;S.-J. Kang, &#8220;Perceptual Brightness-based Inverse Tone Mapping for High Dynamic Range Imaging,&#8221; <em>Displays<\/em>, vol.54, pp.1-8,&nbsp;Sep. 2018. (<a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0141938218301392\" class=\"external\" rel=\"nofollow\">View<\/a>) (<em><strong>IF: 1.175<\/strong><\/em>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>2017<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Y. D. Ahn, S. Bae, and S.-J. Kang,&#8221;Power Controllable LED System with Increased Energy Efficiency Using&nbsp;Multi-Sensors for Plant Cultivation,&#8221;&nbsp;<em>Energies<\/em>, vol.10(10), no.1607, pp.1-13, Oct. 2017. (<a href=\"http:\/\/www.mdpi.com\/1996-1073\/10\/10\/1607\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>),&nbsp;(<em><strong>IF: 2.262<\/strong><\/em>)<\/li>\n\n\n\n<li>Y. D. Ahn and S.-J. Kang, \u201cBacklight Dimming based on Saliency Map acquired by Visual Attention Analysis,\u201d <em>Displays<\/em>, vol.50, pp.70-77, Dec. 2017. (<a href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0141938217300495\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<strong><em>IF: 1.526<\/em><\/strong>)<\/li>\n\n\n\n<li>S. P. Cheon and S.-J. Kang,&#8221;An Electric Power Consumption Analysis System for the Installation of Electric Vehicle Charging Stations,&#8221;&nbsp;<em>Energies<\/em>, vol.10(10), no.1534, pp.1-13, Oct. 2017. (<a href=\"http:\/\/www.mdpi.com\/1996-1073\/10\/10\/1534\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>),&nbsp;(<em><strong>IF: 2.262<\/strong><\/em>)<\/li>\n\n\n\n<li>H. Cho, S.-J. Kang, and Y. H. Kim,&#8221;Image Segmentation using Linked Mean-Shift Vectors and Global\/Local Attributes,&#8221; <em>IEEE Trans. 
Circuits and Systems for Video Tech.<\/em>, vol.27, no.10, pp.2132-2140, Oct. 2017. (<a href=\"http:\/\/ieeexplore.ieee.org\/document\/7484679\/?arnumber=7484679&amp;source=authoralert\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<strong><em>IF: 3.599<\/em><\/strong>)<\/li>\n\n\n\n<li>G. Bae, S. I. Cho,&nbsp;S.-J. Kang, and&nbsp;Y. H. Kim,&#8221;Dual-dissimilarity measure-based statistical video cut detection,&#8221; <em>Journal of Real-Time Image Processing<\/em>, pp.1-11, Jun. 2017. (<a href=\"https:\/\/link.springer.com\/article\/10.1007\/s11554-017-0696-1\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>),&nbsp;(<em><strong>IF: 2.010<\/strong><\/em>)<\/li>\n\n\n\n<li>M. W. Seo, S. W. Choi, S. L. Lee, E. Y. Oh, J. S. Baek, and S.-J. Kang, &#8220;Photosensor-based Latency Measurement System for Head-Mounted Displays,&#8221;&nbsp;<em>Sensors<\/em>, vol.17(5), no.1112, May. 2017. (<a href=\"http:\/\/www.mdpi.com\/1424-8220\/17\/5\/1112\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>),&nbsp;(<em><strong>IF: 2.677<\/strong><\/em>)<\/li>\n\n\n\n<li>S.-J. Kang, \u201cMulti-user Identification-based Eye-tracking Algorithm Using Position Estimation,\u201d <em>Sensors<\/em>, vol.17(1), no.41, Dec. 2017. (<a href=\"http:\/\/www.mdpi.com\/1424-8220\/17\/1\/41\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<strong><em>IF: 2.033<\/em><\/strong>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>2016<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Y. D. Ahn&nbsp;and&nbsp;S.-J. Kang, \u201cOverlapped Area Removal-based Image Interpolation for Head-Mounted Displays,\u201d <em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.12, no.12, pp.1770-1776, Dec. 2016. 
(<a href=\"http:\/\/ieeexplore.ieee.org\/document\/7592417\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.925<\/em><\/strong>)&nbsp;<strong>(<em>The top 50 most frequently downloaded documents, 2017<\/em>)<\/strong><\/li>\n\n\n\n<li>J. Hyun, S.-J. Kang, and Y. H. Kim,&nbsp;\u201cConfigurable Controller for High-Resolution LED Display Systems,\u201d <em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.12, no.12, pp.1594-1601, Dec. 2016. (<a href=\"http:\/\/ieeexplore.ieee.org\/document\/7676260\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.925<\/em><\/strong>)<\/li>\n\n\n\n<li>M. S. Patil, J. H. Seo, S.-J. Kang,&nbsp;and M. Y. Lee, \u201cReview on Synthesis, Thermo-Physical Property, and Heat Transfer Mechanism of Nanofluids,\u201d <em>Energies<\/em>, vol.9(10), no.840, Oct. 2016. (<a href=\"http:\/\/www.mdpi.com\/1996-1073\/9\/10\/840\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<strong><em>IF: 2.077<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang, \u201cOLED Power Control Algorithm Using Optimal Mapping Curve Determination,\u201d <em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.12, no.11, pp.1278-1282, Nov. 2016. (<a href=\"http:\/\/ieeexplore.ieee.org\/document\/7562547\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<strong><em>IF: 1.925<\/em><\/strong>)<\/li>\n\n\n\n<li>C. Y. Jang, S.-J. Kang and Y. H. Kim, &#8220;Perceived Distortion\u2013Based Progressive LCD Backlight Dimming Method,&#8221; <em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.12, no.10, pp.1130-1138, Oct. 2016. (<a href=\"http:\/\/ieeexplore.ieee.org\/document\/7508908\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.925<\/em><\/strong>)<\/li>\n\n\n\n<li>S. Kim, S.-J. 
Kang and Y. H. Kim, &#8220;Anisotropic Diffusion Noise Filtering Using Region Adaptive Smoothing Strength,&#8221; <em>Journal of Visual Communication and Image Representation<\/em>,&nbsp;vol.40, pp.384-391, Oct. 2016. (<a href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S1047320316301316?utm_campaign=49862_AUTH_UP_GC&amp;utm_campaignPK=253284656&amp;utm_term=OP26437&amp;utm_content=254943554&amp;utm_source=30&amp;BID=753137023&amp;utm_medium=email&amp;SIS_ID=137009\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>) (<strong><em>IF: 1.530<\/em><\/strong>)<\/li>\n\n\n\n<li>C. Y. Jang, S.-J. Kang and Y. H. Kim, &#8220;Non-iterative Power-constrained Contrast Enhancement Algorithm for OLED Display,&#8221; <em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.12, no.11, pp.1257-1267, Nov. 2016. (<a href=\"http:\/\/ieeexplore.ieee.org\/document\/7524754\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.925<\/em><\/strong>)&nbsp;<strong>(<em>The top 50 most frequently downloaded documents, 2016<\/em>)<\/strong><\/li>\n\n\n\n<li>C. Y. Jang, S.-J. Kang and Y. H. Kim, &#8220;Adaptive contrast enhancement using edge-based lighting condition estimation,&#8221; <em>Digital Signal Processing<\/em>, vol.58, pp.1-9, Nov. 2016. (<a href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S1051200416300240\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<em><strong>IF: 1.256<\/strong><\/em>)<\/li>\n\n\n\n<li>S. W. Choi, S. Bae, and S.-J. Kang, \u201cAdaptive Temperature Control System for LED Array Systems,\u201d&nbsp;<em>Journal of Electrical Engineering &amp; Technology<\/em>, vol.11, no.5, pp.1147-1152, Sep. 2016. 
(<a href=\"http:\/\/www.dbpia.co.kr\/Journal\/ArticleDetail\/NODE06770234\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 0.528<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang, \u201cPerceptual Quality-aware Power Reduction Technique for Organic Light Emitting Diodes,\u201d <em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.12, no.6, pp.519-525, Jun. 2016. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/login.jsp?tp=&amp;arnumber=7336487&amp;url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D7336487\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>) (<strong><em>IF: 2.241<\/em><\/strong>) <strong>(<em>The top 50 most frequently downloaded documents, 2016<\/em>)<\/strong><\/li>\n\n\n\n<li>S. Kim,&nbsp;S.-J. Kang, and Y. H. Kim, &#8220;Real-time stereo matching using extended binary weighted aggregation,&#8221; <em>Digital Signal Processing<\/em>, vol.53, pp.51-61, Jun. 2016. (<a href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S1051200416000105\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>) (<strong><em>IF: 1.256<\/em><\/strong>)<\/li>\n\n\n\n<li>S. I. Cho,&nbsp;S.-J. Kang, S. Lee, and Y. H. Kim, &#8220;Extended-dimensional anisotropic diffusion using diffusion paths&nbsp;on inter-color planes for noise reduction,&#8221; <em>Digital Signal Processing<\/em>, vol.48, pp.27-39, Jan. 2016. (<a href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S1051200415002699\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\">View<\/a>)&nbsp;(<strong><em>IF: 1.256<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang and M. Y. Lee, &#8220;Multi-Directional Subpixel Rendering&nbsp;Technique for Ultrasound Imaging Display,&#8221;&nbsp;<em>Journal of&nbsp;Nanoelectronics and Optoelectronics<\/em>, vol.11, no.1, pp.79-86, Feb. 2016. 
(<a href=\"http:\/\/www.ingentaconnect.com\/content\/asp\/jno\/2016\/00000011\/00000001\/art00014?crawler=true\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 0.385<\/em><\/strong>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>2015<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Y. D. Ahn,&nbsp;S. Bae, J. J. Yun, M. Y. Lee, and&nbsp;<strong><u>S.-J. Kang<\/u><\/strong>, &#8220;Distance Recognition-based Intelligent LED control system,&#8221;&nbsp;<em>Int. Jour. of Applied Eng. Research<\/em>, vol. 10, no. 13, 2015. (<a href=\"http:\/\/www.ripublication.com\/Volume\/ijaerv10n13.htm\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)<\/li>\n\n\n\n<li>S.-J. Kang and S. Bae, &#8220;Fast Segmentation-Based Backlight Dimming,&#8221; <em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.11, no.5, pp.399-402, May. 2015. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/login.jsp?tp=&amp;arnumber=7066877&amp;url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel7%2F9425%2F4356458%2F07066877.pdf%3Farnumber%3D7066877\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" class=\"external\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 2.241<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang, S. Bae, J. J. Yun, and M. Y. Lee,&nbsp;&#8220;Color Distortion-aware Error Control for Backlight Dimming,&#8221;&nbsp;<em>IEEE\/OSA Journal of Display Tech.<\/em>,&nbsp;vol.11, no.1, pp.79-85, Jan. 2015. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?arnumber=6915842&amp;sortType%3Dasc_p_Sequence%26filter%3DAND(p_Publication_Number%3A9425)%26rowsPerPage%3D75\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 2.241<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. 
Kang,&nbsp;&#8220;Image-Quality-Based Power Control Technique for Organic Light Emitting Diode Displays,&#8221;&nbsp;<em>IEEE\/OSA Journal of Display Tech.<\/em>,&nbsp;vol.11, no.1, pp.104-109, Jan. 2015. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?tp=&amp;arnumber=6923414&amp;queryText%3Dimage+quality-based+power\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 2.241<\/em><\/strong>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>2014<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>H. Cho, S.-J. Kang,&nbsp;S. I. Cho, and Y. H. Kim,&#8221;Image Segmentation Using Linked Mean-Shift Vectors and Its Implementation on GPU,&#8221; IEEE Trans. Consumer Elec., vol.60, no.4, pp.719-727, Nov. 2014. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?tp=&amp;arnumber=7027348&amp;queryText%3DImage+Segmentation+Using+Linked+Mean-Shift+Vectors+and+Its+Implementation+on+GPU\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>) (<em><strong>IF: 1.157<\/strong><\/em>)<\/li>\n\n\n\n<li>S. C. Kim, J. H. Seo, D. Y. Lee, D. P. Hong, S.-J. Kang and M. Y. Lee, &#8220;Thermodynamic Behaviors of Magnetic-Fluid in a Thin&nbsp;Channel with Magnetic Field and Aspect Ratio,&#8221; Inter. Jour. Precision Engineering and Manufacturing, vol. 15, no. 7, pp. 1377-1382, Jul. 2014. (<a href=\"http:\/\/link.springer.com\/article\/10.1007%2Fs12541-014-0479-6\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.500<\/em><\/strong>)<\/li>\n\n\n\n<li>D. G. Yu,&nbsp;S.-J. Kang, H. S. Kim,&nbsp;and Y. H. Kim,&nbsp;&#8220;Viewing Distance-Aware Backlight Dimming of Liquid Crystal Displays,&#8221;&nbsp;<em>IEEE\/OSA Journal of Display Tech.<\/em>,&nbsp;&nbsp;vol.10, no.10, pp.867-874, Oct. 2014. 
(<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?tp=&amp;arnumber=6823081&amp;queryText%3DViewing+Distance-Aware+Backlight+Dimming+of+Liquid+Crystal+Displays\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.686<\/em><\/strong>)&nbsp;<strong>(<em>The top 50 most frequently downloaded documents<\/em>)<\/strong><\/li>\n\n\n\n<li>S. I. Cho,&nbsp;S.-J. Kang, and&nbsp;Y. H. Kim, &#8220;Human Perception based Image Segmentation Using Optimizing of Color Quantization,&#8221; <em>IET Image Processing<\/em>, vol. 8, no. 12, pp.761-770, Dec. 2014. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?tp=&amp;arnumber=6969718&amp;queryText%3DHuman+Perception+based+Image+Segmentation+Using+Optimizing+of+Color+Quantization\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 0.676<\/em><\/strong>)<\/li>\n\n\n\n<li>S. I. Cho,&nbsp;S.-J. Kang, H. S. Kim,&nbsp;and&nbsp;Y. H. Kim, &#8220;Dictionary-based anisotropic diffusion for noise reduction,&#8221;&nbsp;<em>Pattern Recognition Letter<\/em>, vol. 46, pp.36-45,Sep. 2014.&nbsp; (<a href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0167865514001500\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.062<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang and&nbsp;Y. H. Kim,&#8221;Segmentation-based Clipped Error Control&nbsp;Algorithm for Global Backlight Dimming,&#8221;<em>&nbsp;IEEE\/OSA Journal of Display Tech.<\/em>,&nbsp;vol.10, no.7, pp.568-573, Jul. 2014. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?arnumber=6767052&amp;pageNumber%3D136461\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.686<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang, &#8220;HSI-based Color Error-Aware Subpixel&nbsp;Rendering Technique,&#8221;<em> IEEE\/OSA Journal of Display Tech.<\/em>,&nbsp;vol.10, no.4, pp.251-254, Apr. 2014. 
(<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?arnumber=6732899&amp;newsearch=true&amp;queryText=HSI-based%20Color%20Error-Aware%20Subpixel%20Rendering%20Technique\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.686<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang, &#8220;SSIM Preservation-based Backlight Dimming,&#8221;<em> IEEE\/OSA Journal of Display Tech.<\/em>,&nbsp;vol.10, no.4, pp.247-250, Apr. 2014. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?arnumber=6725603&amp;newsearch=true&amp;queryText=SSIM%20Preservation-based%20Backlight%20Dimming\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.686<\/em><\/strong>)&nbsp;<strong>(<em>The top 25 most frequently downloaded documents<\/em>)<\/strong><\/li>\n\n\n\n<li>S.-J. Kang, &#8220;Adaptive Weight Allocation-based Subpixel Rendering Algorithm,&#8221;<em>IEEE Trans. Circuits and Systems for Video Tech.<\/em>,&nbsp;vol.24, no.2, pp.224-229, Feb. 2014. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpls\/abs_all.jsp?arnumber=6562746&amp;tag=1\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 2.259<\/em><\/strong>)<\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>2013<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>D. G. Yu, S.-J. Kang,&nbsp;and Y. H. Kim, &#8220;Direction-Select Motion Estimation for Motion-Compensated Frame Rate Up-Conversion,&#8221;&nbsp;<em>IEEE\/OSA Journal of Display Tech.<\/em>,&nbsp;&nbsp;vol.9, no.10, pp.840-850, Oct.&nbsp;2013. 
(<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?tp=&amp;arnumber=6518163&amp;queryText%3DDirection-Select+Motion+Estimation+for+Motion-Compensated+Frame+Rate+Up-Conversion\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.663<\/em><\/strong>)&nbsp;<strong>(<em>The top 50 most frequently downloaded documents<\/em>)<\/strong><\/li>\n\n\n\n<li>S.-J. Kang, &#8220;Processor-based Backlight Dimming Using Computation Reduction Technique,&#8221;<em> IEEE\/OSA Journal of Display Tech.<\/em>,&nbsp;vol.9, no.10, pp.819-824, Oct.&nbsp;2013. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?arnumber=6516620\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.663<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang, &#8220;Color Difference-based Subpixel Rendering for Matrix Displays,&#8221;&nbsp;<em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.9, no.8, pp.632-637, Aug.&nbsp;2013. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?tp=&amp;arnumber=6494696&amp;queryText%3Dcolor+difference+based+subpixel+rendering\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>) (<strong><em>IF: 1.663<\/em><\/strong>) <strong>(<em>The top 25 most frequently downloaded documents<\/em>)<\/strong><\/li>\n\n\n\n<li>S.-J. Kang, &#8220;Adaptive Luminance Coding-based Scene-change Detection for Frame Rate Up-conversion,&#8221;&nbsp;<em>IEEE Trans. Consumer Elec.<\/em>, vol.59, no.2, pp.370-375, May. 2013. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?arnumber=6531119\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;&nbsp;(<strong><em>IF: 0.941<\/em><\/strong>)<\/li>\n\n\n\n<li>S. I. Cho, S.-J. Kang,&nbsp;and&nbsp;Y. H. Kim, &#8220;Image Quality-Aware Backlight Dimming with Color and Detail Enhancement Techniques,&#8221;&nbsp;<em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.9, no.2, pp.112-121, Feb. 2013. 
(<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?tp=&amp;arnumber=6418053&amp;contentType=Journals+%26+Magazines&amp;searchField%3DSearch_All%26queryText%3D.QT.backlight+dimming.QT.\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.663<\/em><\/strong>) <strong>(<em>The top 25 most frequently downloaded documents<\/em>)<\/strong><\/li>\n<\/ol>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>~ 2012<\/strong><\/h1>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<ol class=\"wp-block-list\">\n<li>S.-J. Kang, S. I. Cho, S. Yoo, and Y. H. Kim, &#8220;Scene Change Detection Using Multiple Histograms for Motion-Compensated Frame Rate Up-Conversion,&#8221; <em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.8, no.3, pp.121-126, Mar. 2012. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/freeabs_all.jsp?arnumber=6145429\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>) (<strong><em>IF: 2.280<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang and Y. H. Kim, &#8220;Multi-histogram-based Backlight Dimming for Low Power Liquid Crystal Displays,&#8221; <em>IEEE\/OSA Journal of Display Tech.<\/em>, vol.7, no.10, pp.544-549, Oct. 2011. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/freeabs_all.jsp?arnumber=6003723\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>) (<strong><em>IF: 2.280<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang, S. Yoo, and Y. H. Kim, &#8220;Dual Motion Estimation for Frame Rate Up-Conversion,&#8221; <em>IEEE Trans. Circuits and Systems for Video Tech.<\/em>, vol.20, no.12, pp.1909-1914, Dec. 2010. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpls\/abs_all.jsp?arnumber=5604667&amp;tag=1\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 1.649<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J.Kang, and Y. H. Kim, &#8220;Image Integrity-based Gray-Level Error Control for Low Power Liquid Crystal Displays,&#8221; <em>IEEE Trans. 
Consumer Elec.<\/em>, vol.55, no.4, pp.2401-2406, Nov. 2009. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpls\/abs_all.jsp?isnumber=5373721&amp;arnumber=5373816&amp;count=97&amp;index=89&amp;tag=1\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>)&nbsp;(<strong><em>IF: 0.941<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang, D. G. Yu, S. K. Lee, and Y. H. Kim, &#8220;Multiframe-based Bilateral Motion Estimation with Emphasis on Stationary Caption Processing for Frame Rate Up-Conversion,&#8221; <em>IEEE Trans. Consumer Elec.<\/em>, vol.54, no.4, pp.1830-1838, Nov. 2008. (<a href=\"http:\/\/ieeexplore.ieee.org\/xpls\/abs_all.jsp?arnumber=4711242\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>) (<strong><em>IF: 0.941<\/em><\/strong>)<\/li>\n\n\n\n<li>S.-J. Kang, K. R. Cho, and Y. H. Kim, &#8220;Motion compensated frame rate up-conversion using extended bilateral motion estimation,&#8221; <em>IEEE Trans. Consumer Elec.<\/em>, vol.53, no.4, pp.1758-1767, Nov. 2007.&nbsp;(<a href=\"http:\/\/ieeexplore.ieee.org\/search\/srchabstract.jsp?arnumber=4429281&amp;isnumber=4429199&amp;punumber=30&amp;k2dockey=4429281@ieeejrns&amp;query=%28motion+compensated+frame+rate+up-conversion+%3Cin%3E+metadata%29+%3Cand%3E+%2830+%3Cin%3E+punumber%29&amp;pos=0&amp;access=n0\" class=\"external\" rel=\"nofollow\"><u>View<\/u><\/a>) (<strong><em>IF: 0.941<\/em><\/strong>)<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Submitted 2025&nbsp; 1. ICSD-NeRF: Independent Canonical Spaces for Enhanced Dynamic Scene Modeling in Neural Radiance Fields (View) 2. Efficient Monocular Depth-Based Physical Distance Measurement for Low-Depth Scales (View) 3. DGTFNet: Depth-Guided Tri-Axial Fusion Network for Efficient Generalizable Stereo Matching (View) 4. Luminance Compensation for Stretchable Displays Using Deep Visual Feature-Optimized Gaussian-Weighted Kernels (View) 5. 
SO-Diffusion:&hellip;&nbsp;<a href=\"https:\/\/vds.sogang.ac.kr\/?p=2339\" class=\"\" rel=\"bookmark\">Read more &raquo;<span class=\"screen-reader-text\">International Journal<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"off","neve_meta_content_width":70,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[5],"tags":[],"class_list":["post-2339","post","type-post","status-publish","format-standard","hentry","category-publication"],"_links":{"self":[{"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/posts\/2339","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2339"}],"version-history":[{"count":146,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/posts\/2339\/revisions"}],"predecessor-version":[{"id":5003,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=\/wp\/v2\/posts\/2339\/revisions\/5003"}],"wp:attachment":[{"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2339"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2339"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/vds.sogang.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2339"}]
,"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}