Address: Beytepe Campus, Ankara, Turkey TR-06800
e-mail: erkut at cs dot hacettepe dot edu dot tr
Phone: +90 312 780 7549
Fax: +90 312 297 7502
My research centers on computer vision and machine learning. I believe the right algorithms and representations are those that take contextual influences into account. Thus, the research objective that my students and I pursue is to incorporate different kinds of context (spatial, temporal, and/or cross-modal) into all levels of visual processing, from low-level to intermediate- and high-level vision.
Current research interests: Visual Saliency Prediction, Automatic Image Description, Video/Photoset Summarization, Image Filtering, Image Editing
École Nationale Supérieure des Télécommunications
Middle East Technical University
University of California
Oct. 2007 - Dec. 2007
Virginia Bioinformatics Institute, Virginia Tech
Jul. 2004 - Aug. 2004
[February 2018]: Our joint work with ICON lab at UMRAM, Bilkent University on utilizing GANs for multi-contrast MRI synthesis is out on arXiv.
[January 2018]: New TUBITAK 1001 project on "Using Synthetic Data for Deep Person Re-Identification", in partnership with HUCG (Hacettepe University Computer Graphics and Game Studies) group.
[November 2017]: Our work on deep dynamic saliency prediction will be published in IEEE Transactions on Multimedia: "Spatio-Temporal Saliency Networks for Dynamic Saliency Prediction".
[October 2017]: I have been awarded a hardware donation (a Quadro P5000 GPU) from NVIDIA for my research.
[September 2017]: New TUBITAK 1003 project on "Summarization Approaches Towards Interpreting Big Visual Data", in partnership with Somera.
[June 2017]: Our work on sampling-based image and video matting will be published in IEEE Transactions on Image Processing: "Alpha Matting with KL-Divergence Based Sparse Sampling".
[May 2017]: Slides from our "Adversarial Training and Generative Adversarial Networks" tutorial at SIU 2017 are now available online.
[December 2016]: Our work on analysis of automatic evaluation metrics for image captioning is accepted to EACL 2017 as a long paper: "Re-evaluating Automatic Metrics for Image Captioning".
[December 2016]: Our paper on using GANs to generate outdoor scenes from attributes and semantic layouts is out on arXiv.
[November 2016]: Our work on learning dynamic saliency will be published in Signal Processing: Image Communication: "A Comparative Study for Feature Integration Strategies in Dynamic Saliency Estimation".
Project Duration: 3 years (2014-2017)
Sponsors: TUBITAK 1001 - Support Program for Scientific and Technological Research Projects (Award# 113E116) and European Union under European Cooperation in Science and Technology (COST) Programme (ICT COST IC1037 Action)
Project Duration: 3 years (2012-2015)
Sponsors: TUBITAK 3501 - Career Development Program (Award# 112E146)
Project Duration: 3 years (2017-2020)
Sponsors: TUBITAK 1003 - Primary Subjects R&D Funding Program (Award# 116E685)
Project Duration: 3 years (2016-2019)
Sponsors: TUBITAK 1007 - Public Institutions Research Funding Program (Award# 114G028)