SIMULTANEOUS BOTTOM-UP/TOP-DOWN PROCESSING IN EARLY AND MID LEVEL VISION

ERDEM, Mehmet Erkut
Ph.D., Department of Computer Engineering
Supervisor: Assoc. Prof. Dr. Sibel Tari

November 2008, 175 pages

The prevalent view in computer vision since Marr is that visual perception is a data-driven, bottom-up process. In this view, image data is processed in a feed-forward fashion in which a sequence of independent visual modules transforms simple low-level cues into more complex, abstract perceptual units. Over the years, a variety of techniques have been developed within this paradigm. Yet an important realization is that low-level visual cues are generally so ambiguous that they can render purely bottom-up methods quite unsuccessful. These ambiguities cannot be resolved without taking high-level contextual information into account.

In this thesis, we explore different ways of enriching early and mid-level computer vision modules with the capacity to extract and use contextual knowledge. Mainly, we integrate low-level image features with contextual information within unified formulations in which bottom-up and top-down processing take place simultaneously.

Keywords: bottom-up/top-down paradigms in computer vision, image denoising, image segmentation, skeleton extraction, PDE methods