
Commit

update arxiv papers
SanghyukChun committed Jul 19, 2024
1 parent c0d29d9 commit 3d228f5
Showing 7 changed files with 9 additions and 9 deletions.
index.html (18 changes: 9 additions & 9 deletions)
@@ -175,7 +175,7 @@ <h3>Research: <small>Scalable and Reliable Machine Learning with Language-guided
<p>How can we make a model comprehend human language alongside the target modality? To answer this question, I have recently worked on <span class="text-danger">text-conditioned diffusion models</span>. In particular, I am interested in utilizing the power of recent diffusion models for text-conditioned feature transforms or data augmentation. However, we need more versatility and controllability to adapt diffusion models to the desired tasks, e.g., localized conditions via providing region masks. My recent works have focused on the versatility and controllability of diffusion models, and on applying diffusion models to non-generative downstream tasks, such as composed image retrieval (CIR) tasks (a minimal sketch of the CIR setup follows the list below).</p>
<ul>
<li><i>G. Gu<sup>&#10059;</sup>, <strong>S. Chun<sup>&#10059;</sup></strong>, W. Kim, Y. Kang, S. Yun, <a href="#language-only-efficient-training-of-zero-shot-composed-image-ret">Language-only Efficient Training of Zero-shot Composed Image Retrieval</a>, <strong>CVPR 2024</strong></i></li>
- <li><i>G. Gu<sup>&#10059;</sup>, <strong>S. Chun<sup>&#10059;</sup></strong>, W. Kim, H. Jun, Y. Kang, S. Yun, <a href="#compodiff-versatile-composed-image-retrieval-with-latent-diffusi">CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion</a>, <strong>preprint</strong></i></li>
+ <li><i>G. Gu<sup>&#10059;</sup>, <strong>S. Chun<sup>&#10059;</sup></strong>, W. Kim, H. Jun, Y. Kang, S. Yun, <a href="#compodiff-versatile-composed-image-retrieval-with-latent-diffusi">CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion</a>, <strong>TMLR</strong></i></li>
<li><i>J. Byun<sup>&#10059;</sup>, S. Jeong<sup>&#10059;</sup>, W. Kim, <strong>S. Chun<sup>&#10013;</sup></strong>, T. Moon<sup>&#10013;</sup>, <a href="#reducing-task-discrepancy-of-text-encoders-for-zero-shot-compose">Reducing Task Discrepancy of Text Encoders for Zero-Shot Composed Image Retrieval</a>, <strong>preprint</strong></i></li>
<li><i>G. Gu, <strong>S. Chun</strong>, W. Kim, H. Jun, S. Yun, Y. Kang, <a href="#graphit-a-unified-framework-for-diverse-image-editing-tasks">Graphit: A Unified Framework for Diverse Image Editing Tasks</a>, <strong>open source</strong></i></li>
</ul>
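
To make the CIR setting above concrete, here is a minimal, illustrative sketch of the retrieval pipeline: a reference-image embedding and a modification-text embedding are composed into a single query, which is then matched against a gallery by cosine similarity. This is a hedged sketch of the general CIR setup only, not the method of CompoDiff or any other paper listed here; every name in it (encode_image, encode_text, compose, retrieve) is a hypothetical placeholder, and the encoders are random stand-ins for pretrained CLIP-style models.

import numpy as np

def encode_image(image: np.ndarray) -> np.ndarray:
    # Stand-in for a pretrained image encoder (e.g., a CLIP-style model);
    # seeded randomness keeps the toy embedding deterministic per image.
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)

def encode_text(text: str) -> np.ndarray:
    # Stand-in for a pretrained text encoder.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)

def compose(ref_img_emb: np.ndarray, mod_text_emb: np.ndarray) -> np.ndarray:
    # Naive late-fusion composition: sum and renormalize. Real CIR methods
    # learn this composition step; the fixed rule here is for illustration only.
    q = ref_img_emb + mod_text_emb
    return q / np.linalg.norm(q)

def retrieve(query: np.ndarray, gallery: np.ndarray, k: int = 5) -> np.ndarray:
    # Rank unit-normalized gallery embeddings by cosine similarity and
    # return the indices of the top-k matches.
    sims = gallery @ query
    return np.argsort(-sims)[:k]

# Hypothetical usage: compose one query and rank a toy gallery of 100 images.
gallery = np.stack([encode_image(np.full((8, 8), float(i))) for i in range(100)])
query = compose(encode_image(np.zeros((8, 8))), encode_text("make the dress red"))
print(retrieve(query, gallery, k=3))
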
@@ -246,7 +246,7 @@ <h3>Research: <small>Scalable and Reliable Machine Learning with Language-guided
</ul>
</ul>

- <p>Finally, I also have worked on <span class="text-warning">domain-specific optimization techniques</span> by utilizing properties of the given data, e.g., compositionality of Korean/Chinese letters, low- and high-frequency information for better audio understanding, or harmonic information for multi-source audio understanding.</p>
+ <p>Lastly, I also have worked on <span class="text-warning">domain-specific optimization techniques</span> by utilizing properties of the given data, e.g., compositionality of Korean/Chinese letters, low- and high-frequency information for better audio understanding, or harmonic information for multi-source audio understanding.</p>

<ul>
<li><i>S. Park, <strong>S. Chun</strong>, J. Cha, B. Lee, H. Shim, <a href="#multiple-heads-are-better-than-one-few-shot-font-generation-with">Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts</a>, <strong>ICCV 2021</strong></i></li>
@@ -317,7 +317,7 @@ <h3 id="papers" class="mt-5">Publications</h3>
<strong>CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion.</strong>
<ul>
<li>Geonmo Gu<sup>&#10059;</sup>, <strong>Sanghyuk Chun</strong><sup>&#10059;</sup>, Wonjae Kim, HeeJae Jun, Yoohoon Kang, Sangdoo Yun</li>
- <li><strong><em>TMLR.</em></strong> <strong><em>CVPR 2024 SynData4CV Workshop.</em></strong> <a href="media/papers/gu2023compodiff.pdf">paper</a> | <a href="https://github.com/navervision/CompoDiff">code</a> | <a href="https://huggingface.co/datasets/navervision/SynthTriplets18M">SynthTriplets18M dataset</a> | <a href="https://huggingface.co/spaces/navervision/CompoDiff-Aesthetic">demo 🤗</a> | <a href="https://docs.google.com/presentation/d/1VTJlrHqnLAcQP3aHydnlFXNeZpsPMGa9-L-Oaigi_6M/edit?usp=sharing">slide</a> | <a href="media/bibtex/gu2023compodiff.txt">bibtex</a></li>
+ <li><strong><em>TMLR.</em></strong> <strong><em>CVPR 2024 SynData4CV Workshop.</em></strong> <a href="media/papers/gu2024compodiff.pdf">paper</a> | <a href="https://github.com/navervision/CompoDiff">code</a> | <a href="https://huggingface.co/datasets/navervision/SynthTriplets18M">SynthTriplets18M dataset</a> | <a href="https://huggingface.co/spaces/navervision/CompoDiff-Aesthetic">demo 🤗</a> | <a href="https://docs.google.com/presentation/d/1VTJlrHqnLAcQP3aHydnlFXNeZpsPMGa9-L-Oaigi_6M/edit?usp=sharing">slide</a> | <a href="media/bibtex/gu2024compodiff.txt">bibtex</a></li>
</ul>
</li>

@@ -413,7 +413,7 @@ <h3 id="papers" class="mt-5">Publications</h3>
<strong>Similarity of Neural Architectures using Adversarial Attack Transferability.</strong>
<ul>
<li>Jaehui Hwang, Dongyoon Han, Byeongho Heo, Song Park, <strong>Sanghyuk Chun</strong><sup>&#10059;</sup>, Jong-Seok Lee<sup>&#10059;</sup></li>
- <li><strong><em>ECCV 2024.</em></strong> <a href="media/papers/hwang2022similarity.pdf">paper</a> | <a href="media/bibtex/hwang2024similarity.txt">bibtex</a></li>
+ <li><strong><em>ECCV 2024.</em></strong> <a href="media/papers/hwang2024sat.pdf">paper</a> | <a href="media/bibtex/hwang2024sat.txt">bibtex</a></li>
</ul>
</li>

@@ -446,7 +446,7 @@ <h4 class="float-left mb-0" id="papers-2024">2024</h4>
<strong class="anchor-strong">Similarity of Neural Architectures using Adversarial Attack Transferability.</strong>
<ul>
<li>Jaehui Hwang, Dongyoon Han, Byeongho Heo, Song Park, <strong>Sanghyuk Chun</strong><sup>&#10059;</sup>, Jong-Seok Lee<sup>&#10059;</sup></li>
- <li><strong><em>ECCV 2024.</em></strong> <a href="media/papers/hwang2022similarity.pdf">paper</a> | <a href="media/bibtex/hwang2024similarity.txt">bibtex</a></li>
+ <li><strong><em>ECCV 2024.</em></strong> <a href="media/papers/hwang2024sat.pdf">paper</a> | <a href="media/bibtex/hwang2024sat.txt">bibtex</a></li>
</ul>
</li>

@@ -856,7 +856,7 @@ <h4 class="card-header no-border" id="papers-journal">Journals</h4>
<strong class="anchor-strong">CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion.</strong>
<ul>
<li>Geonmo Gu<sup>&#10059;</sup>, <strong>Sanghyuk Chun</strong><sup>&#10059;</sup>, Wonjae Kim, HeeJae Jun, Yoohoon Kang, Sangdoo Yun</li>
- <li><strong><em>TMLR.</em></strong> <strong><em>CVPR 2024 SynData4CV Workshop.</em></strong> <a href="media/papers/gu2023compodiff.pdf">paper</a> | <a href="https://github.com/navervision/CompoDiff">code</a> | <a href="https://huggingface.co/datasets/navervision/SynthTriplets18M">SynthTriplets18M dataset</a> | <a href="https://huggingface.co/spaces/navervision/CompoDiff-Aesthetic">demo 🤗</a> | <a href="https://docs.google.com/presentation/d/1VTJlrHqnLAcQP3aHydnlFXNeZpsPMGa9-L-Oaigi_6M/edit?usp=sharing">slide</a> | <a href="media/bibtex/gu2023compodiff.txt">bibtex</a></li>
+ <li><strong><em>TMLR.</em></strong> <strong><em>CVPR 2024 SynData4CV Workshop.</em></strong> <a href="media/papers/gu2024compodiff.pdf">paper</a> | <a href="https://github.com/navervision/CompoDiff">code</a> | <a href="https://huggingface.co/datasets/navervision/SynthTriplets18M">SynthTriplets18M dataset</a> | <a href="https://huggingface.co/spaces/navervision/CompoDiff-Aesthetic">demo 🤗</a> | <a href="https://docs.google.com/presentation/d/1VTJlrHqnLAcQP3aHydnlFXNeZpsPMGa9-L-Oaigi_6M/edit?usp=sharing">slide</a> | <a href="media/bibtex/gu2024compodiff.txt">bibtex</a></li>
</ul>
</li>

@@ -910,7 +910,7 @@ <h4 class="card-header no-border" id="papers-journal">Journals</h4>
<strong>CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion.</strong>
<ul>
<li>Geonmo Gu<sup>&#10059;</sup>, <strong>Sanghyuk Chun</strong><sup>&#10059;</sup>, Wonjae Kim, HeeJae Jun, Yoohoon Kang, Sangdoo Yun</li>
- <li><strong><em>TMLR.</em></strong> <strong><em>CVPR 2024 SynData4CV Workshop.</em></strong> <a href="media/papers/gu2023compodiff.pdf">paper</a> | <a href="https://github.com/navervision/CompoDiff">code</a> | <a href="https://huggingface.co/datasets/navervision/SynthTriplets18M">SynthTriplets18M dataset</a> | <a href="https://huggingface.co/spaces/navervision/CompoDiff-Aesthetic">demo 🤗</a> | <a href="https://docs.google.com/presentation/d/1VTJlrHqnLAcQP3aHydnlFXNeZpsPMGa9-L-Oaigi_6M/edit?usp=sharing">slide</a> | <a href="media/bibtex/gu2023compodiff.txt">bibtex</a></li>
+ <li><strong><em>TMLR.</em></strong> <strong><em>CVPR 2024 SynData4CV Workshop.</em></strong> <a href="media/papers/gu2024compodiff.pdf">paper</a> | <a href="https://github.com/navervision/CompoDiff">code</a> | <a href="https://huggingface.co/datasets/navervision/SynthTriplets18M">SynthTriplets18M dataset</a> | <a href="https://huggingface.co/spaces/navervision/CompoDiff-Aesthetic">demo 🤗</a> | <a href="https://docs.google.com/presentation/d/1VTJlrHqnLAcQP3aHydnlFXNeZpsPMGa9-L-Oaigi_6M/edit?usp=sharing">slide</a> | <a href="media/bibtex/gu2024compodiff.txt">bibtex</a></li>
</ul>
</li>

@@ -1018,7 +1018,7 @@ <h4 class="card-header no-border" id="papers-journal">Journals</h4>
<strong>Similarity of Neural Architectures using Adversarial Attack Transferability.</strong>
<ul>
<li>Jaehui Hwang, Dongyoon Han, Byeongho Heo, Song Park, <strong>Sanghyuk Chun</strong><sup>&#10059;</sup>, Jong-Seok Lee<sup>&#10059;</sup></li>
- <li><strong><em>ECCV 2024.</em></strong> <a href="media/papers/hwang2022similarity.pdf">paper</a> | <a href="media/bibtex/hwang2024similarity.txt">bibtex</a></li>
+ <li><strong><em>ECCV 2024.</em></strong> <a href="media/papers/hwang2024sat.pdf">paper</a> | <a href="media/bibtex/hwang2024sat.txt">bibtex</a></li>
</ul>
</li>

@@ -1456,7 +1456,7 @@ <h4 id="interns" class="card-header">Mentees / Short-term post-doctoral collabor
<li><span class="badge round-pill bg-danger text-danger">_</span> <span class="badge round-pill bg-success text-success">_</span> <a href="https://scholar.google.co.kr/citations?user=D9U_ohsAAAAJ&hl=en">Heesun Bae</a> (KAIST, 2023) <span class="text-danger"> -- VL representation learning under noisy environment</span></li>
<li><span class="badge round-pill bg-danger text-danger">_</span> <span class="badge round-pill">&nbsp;&nbsp;</span> <a href="https://jbeomlee93.github.io/">Jungbeom Lee</a> (Visiting researcher, 2023) <a href="#toward-interactive-regional-understanding-in-vision-large-langua">[C28]</a> <span class="text-danger"> -- VL representation learning</span></li>
<li><span class="badge round-pill bg-success text-success">_</span> <span class="badge round-pill">&nbsp;&nbsp;</span> <a href="https://sites.google.com/snu.ac.kr/eunjikim">Eunji Kim</a> (Seoul National University, 2022) <span class="text-success"> -- XAI + Probabilistic Machine</span> (the internship project is published at ICML 2023 <a href="https://arxiv.org/abs/2306.01574">[paper]</a>)</li>
<li><span class="badge round-pill bg-success text-success">_</span> <span class="badge round-pill">&nbsp;&nbsp;</span> <a href="https://j-h-hwang.github.io/">Jaehui Hwang</a> (Yonsei University, 2022) <a href="#similarity-of-neural-architectures-based-on-input-gradient-trans">[A5]</a> <span class="text-success"> -- Adversarial robustness and XAI</span></li>
<li><span class="badge round-pill bg-success text-success">_</span> <span class="badge round-pill">&nbsp;&nbsp;</span> <a href="https://j-h-hwang.github.io/">Jaehui Hwang</a> (Yonsei University, 2022) <a href="#similarity-of-neural-architectures-based-on-input-gradient-trans">[C30]</a> <span class="text-success"> -- Adversarial robustness and XAI</span></li>
<li><span class="badge round-pill bg-dark text-dark">_</span> <span class="badge round-pill">&nbsp;&nbsp;</span> <a href="https://chanwoo-park-official.github.io/">Chanwoo Park</a> (Seoul National University, 2021-2022) <a href="#a-unified-analysis-of-mixed-sample-data-augmentation-a-loss-func">[C22]</a> <span class="text-dark"> -- Deep learning theory</span></li>
<li><span class="badge round-pill bg-dark text-dark">_</span> <span class="badge round-pill">&nbsp;&nbsp;</span> <a href="https://mks0601.github.io/">Gyeongsik Moon</a> (Visiting researcher, 2022) <a href="#three-recipes-for-better-3d-pseudo-gts-of-3d-human-mesh-estimati">[W7]</a> <span class="text-dark"> -- Semi-supervised learning for 3D Human Mesh Estimation</span></li>
<li><span class="badge round-pill bg-dark text-dark">_</span> <span class="badge round-pill">&nbsp;&nbsp;</span> <a href="https://hongsukchoi.github.io/">Hongsuk Choi</a> (Visiting researcher, 2022) <a href="#three-recipes-for-better-3d-pseudo-gts-of-3d-human-mesh-estimati">[W7]</a> <span class="text-dark"> -- Semi-supervised learning for 3D Human Mesh Estimation</span></li>
File renamed without changes.
File renamed without changes.
Binary file removed media/papers/hwang2022similarity.pdf
Binary file added media/papers/hwang2024sat.pdf
Binary file modified media/papers/kim2024hype.pdf
