Arda Senocak

I am a Postdoctoral Research Associate working with Joon Son Chung at Multimodal AI Lab @ KAIST. I am interested in Audio-Visual Learning and Understanding.

Previously, I was a Ph.D. student at KAIST advised by In So Kweon.


News

  • July 2024: An extensive Sound Source Localization analysis is released!
  • June 2024: AudioMamba is released!
  • June 2024: One paper is accepted to INTERSPEECH 2024.
  • December 2023: Two papers are accepted to ICASSP 2024.
  • October 2023: One paper is accepted to WACV 2024.
  • October 2023: Selected as an Outstanding Reviewer at ICCV 2023.
  • July 2023: One paper is accepted to ICCV 2023.
  • May 2023: FlexiAST is accepted to INTERSPEECH 2023.
  • January 2023: Sound2Scene is accepted to CVPR 2023.
  • January 2023: Two papers are accepted to ICASSP 2023.
  • October 2022: One paper is accepted to WACV 2023.
  • September 2022: I have joined the Multimodal AI Lab @ KAIST as a Postdoctoral Research Associate.

Publications

Learning Sound Localization Better from Semantically Similar Samples

ICASSP, 2022, Oral Presentation (* Equal Contribution)

Short Version at Sight and Sound Workshop @ CVPR 2022

Less Can Be More: Sound Source Localization With a Classification Model

WACV, 2022 (* Equal Contribution)

Received Honorable Mention, 28th HumanTech Paper Award, Samsung Electronics Co., Ltd. ($2000)

Learning to Localize Sound Source in Visual Scenes

CVPR, 2018

Received Qualcomm Innovation Awards ($5000)

Invited talk at VisionMeetsCognition @ CVPR 2018 and Short Version at Sight and Sound Workshop @ CVPR 2018