[October 2023]: Our paper exploring the relationship between training dynamics and compositional generalization has been accepted to Findings of EMNLP 2023.
[September 2023]: Our paper on hyperspectral image denoising has been accepted for publication in the Signal Processing journal.
[August 2023]: Our work on omnidirectional video saliency prediction got accepted to BMVC 2023.
[August 2023]: The BIG-bench paper has been accepted for publication in Transactions on Machine Learning Research.
[July 2023]: Our work on text-guided image manipulation will be published in ACM Transactions on Graphics. We will be presenting our work at SIGGRAPH Asia 2023 in Sydney.
[June 2023]: I received a gift fund from Adobe Research. With Duygu Ceylan of Adobe Research and Aykut Erdem of KUIS AI LAB, we will develop novel methods for text-guided image synthesis and editing. Thanks, Adobe!
[May 2023]: We will be organizing the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge at INLG/SIGDIAL 2023 in September 2023.
[February 2023]: Our work on omnidirectional image quality assessment got accepted to ICASSP 2023.
[December 2022]: I am honored to be recognized as one of the 2022 Outstanding Associate Editors of IEEE Transactions on Multimedia.
[November 2022]: We ranked 2nd in the euphemism detection shared task organized by the Figurative Language Processing workshop at EMNLP 2022.
[September 2022]: Our work on language-guided video manipulation got accepted to BMVC 2022.
[June 2022]: Our work on language-guided image analysis got the best paper award at the 5th Multimodal Learning and Applications Workshop.
[May 2022]: We will be organizing a training school on Representation Mediated Multimodality at Schloss Etelsen, Germany, on September 26-30, 2022.
[April 2022]: Our work on language-guided image analysis got accepted to the 5th Multimodal Learning and Applications Workshop.
[February 2022]: Our survey paper on neural natural language generation has been accepted for publication in the Journal of Artificial Intelligence Research.
[February 2022]: Our work on causal reasoning got accepted to Findings of ACL 2022.
[January 2022]: I will be teaching the undergraduate-level course: BBM444 Fundamentals of Computational Photography.
[January 2022]: Our work on query-specific video summarization has been accepted for publication in Multimedia Tools and Applications.
[December 2021]: Excited to share that our project on event-based vision under extremely low-light conditions will be funded by the TUBITAK 1001 program. With Aykut Erdem, we will explore hybrid approaches that bring traditional and event cameras together to tackle the crucial challenges of processing dark videos.
[December 2021]: I received a gift fund from Adobe Research. With Duygu Ceylan of Adobe Research, and Aykut Erdem and Deniz Yuret of KUIS AI LAB, we will develop novel methods for semantic image editing. Thanks, Adobe!
[October 2021]: Our work on low-light image enhancement will be published in IEEE Transactions on Image Processing.
[August 2021]: I was appointed as an Associate Editor of IEEE Transactions on Multimedia (T-MM).
[July 2021]: Our work on stochastic video prediction got accepted to ICCV 2021.
[July 2021]: Our work on Turkish video captioning has been published in the Machine Translation journal.
[June 2021]: Our work on dynamic saliency prediction will be published in IEEE Transactions on Cognitive and Developmental Systems.
[May 2021]: Our collaborative work with HUCGLab on joint person re-identification and attribute recognition has been accepted for publication in Image and Vision Computing.
[May 2021]: Our joint work with HUCGLab on procedural generation of person videos has been accepted for publication in Computer Graphics Forum.
[April 2021]: Our joint work with HUCGLab on the use of synthetic data to analyze the performance of trackers under adverse weather conditions has been accepted for publication in Signal Processing: Image Communication.
[February 2021]: Our joint work with ICON lab at UMRAM, Bilkent University on multi-contrast MRI synthesis has been accepted for publication in Medical Image Analysis.
[February 2021]: I will be teaching the undergraduate-level course: BBM406 Fundamentals of Machine Learning.
[February 2021]: Our work on dense video captioning has been accepted for publication in Pattern Recognition Letters.
[January 2021]: Our work on learning visually-grounded cross-lingual representations got accepted to EACL 2021.
[October 2020]: Our work on visual story graphs has been accepted for publication in Signal Processing: Image Communication.
[May 2020]: Our ACM TOG paper on manipulating transient attributes of natural scenes was featured on Two Minute Papers.
[January 2020]: Our joint work with the Cognition, Learning and Robotics (CoLoRs) lab at Bogazici University on reasoning about action effects on articulated multi-part objects has been accepted to ICRA 2020.
[October 2019]: Our work on manipulating transient attributes of natural scenes via hallucination has been accepted for publication in ACM Transactions on Graphics.
[September 2019]: Our work on reasoning about procedural data has been accepted to CoNLL 2019: "Procedural Reasoning Networks for Understanding Multimodal Procedures".
[April 2019]: I will give a tutorial on "Multimodal Learning with Vision and Language" together with Aykut Erdem at IPTA 2019.
[February 2019]: Our joint work with ICON lab at UMRAM, Bilkent University on multi-contrast MRI synthesis with GANs has been accepted for publication in IEEE Transactions on Medical Imaging.
[December 2018]: I will give a talk on Integrated Vision and Language at ITURO 2019.
[December 2018]: I have received the Young Researcher Award given by the Turkish Academy of Sciences.
[August 2018]: Our work on multimodal machine comprehension has been accepted to EMNLP 2018: "RecipeQA: A Challenge Dataset for Multimodal Comprehension of Cooking Recipes". Read our paper, download the data, and submit your predictions at our project website.