Welcome to the Computer Vision Group at RWTH Aachen University!

The Computer Vision group was established at RWTH Aachen University in the context of the Cluster of Excellence "UMIC - Ultra High-Speed Mobile Information and Communication" and is associated with the Chair of Computer Science 8 - Computer Graphics, Computer Vision, and Multimedia. The group focuses on computer vision applications for mobile devices and robotic or automotive platforms. Our main research areas are visual object recognition, tracking, self-localization, 3D reconstruction, and in particular combinations of these topics.

We offer lectures and seminars about computer vision and machine learning.

You can browse through all our publications and the projects we are working on.

We have two papers accepted at the IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.

Jan. 4, 2017

We have a paper on Scene Flow Propagation for Semantic Mapping and Object Discovery in Dynamic Street Scenes at IROS 2016.

Aug. 19, 2016

We have three papers accepted at the British Machine Vision Conference (BMVC) 2016.

Aug. 19, 2016

We have a paper on Joint Object Pose Estimation and Shape Reconstruction in Urban Street Scenes Using 3D Shape Priors at GCPR 2016.

June 19, 2016

Semantic Segmentation dataset released

We just uploaded our dataset used to train the semantic classifier in our ICRA 2016 paper on tracking of generic objects. You can find the dataset here.

May 23, 2016


Computer Vision: Shift of exam time

Due to the high number of registrations, we had to shift the Computer Vision exam to the following slot:

Friday, 24th of February, 12:00-15:00 (entry ~12:15, exam lasts 2 hours)

Sorry for the inconvenience. Please let us know if you run into any problems regarding this time shift.

Feb. 17, 2017

Recent Publications

SAMP: Shape and Motion Priors for 4D Vehicle Reconstruction

IEEE Winter Conference on Applications of Computer Vision (WACV'17), to appear.

Inferring the pose and shape of vehicles in 3D from a movable platform remains a challenging task due to the projective sensing principle of cameras, difficult surface properties such as reflections or transparency, and illumination changes between images. In this paper, we propose to use 3D shape and motion priors to regularize the estimation of the trajectory and the shape of vehicles in sequences of stereo images. We represent shapes by 3D signed distance functions and embed them in a low-dimensional manifold. Our optimization method allows for imposing a common shape across all image observations along an object track. We employ a motion model to regularize the trajectory to plausible object motions. We evaluate our method on the KITTI dataset and show state-of-the-art results in terms of shape reconstruction and pose estimation accuracy.
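The core idea of representing shapes by signed distance functions (SDFs) embedded in a low-dimensional manifold can be illustrated with a small sketch. This is not the paper's implementation; it uses PCA as one common way to build such a linear shape space, and all names, dimensions, and data below are illustrative stand-ins.

```python
import numpy as np

# Illustrative sketch: embed voxelized 3D signed distance functions (SDFs)
# of training shapes in a low-dimensional linear shape space via PCA.
# Random data stands in for real training SDFs of vehicles.

rng = np.random.default_rng(0)
n_shapes, n_voxels = 50, 32 * 32 * 32        # flattened SDF volumes
sdfs = rng.standard_normal((n_shapes, n_voxels))

mean_sdf = sdfs.mean(axis=0)
# principal directions of shape variation
_, _, vt = np.linalg.svd(sdfs - mean_sdf, full_matrices=False)
k = 5                                        # latent dimensionality
basis = vt[:k]                               # (k, n_voxels)

def encode(sdf):
    """Project a flattened SDF onto the low-dimensional shape space."""
    return basis @ (sdf - mean_sdf)

def decode(z):
    """Reconstruct an SDF volume from latent shape coefficients z."""
    return mean_sdf + basis.T @ z

z = encode(sdfs[0])
recon = decode(z)
```

Optimizing over the few coefficients in `z` rather than over every voxel is what makes it feasible to impose one common shape across all observations of an object track.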


3D Semantic Segmentation of Modular Furniture using rjMCMC

IEEE Winter Conference on Applications of Computer Vision (WACV'17).

In this paper we propose a novel approach to identify and label the structural elements of furniture such as wardrobes or cabinets. Given a furniture item, the subdivision into its structural components like doors, drawers, and shelves is difficult, as the number of components and their spatial arrangements vary considerably. Furthermore, structural elements are primarily distinguished by their function rather than by unique color- or texture-based appearance features. It is therefore difficult to classify them, even if their correct spatial extent were known. In our approach we jointly estimate the number of functional units, their spatial structure, and their corresponding labels by using reversible jump MCMC (rjMCMC), a method well suited for optimization on spaces of varying dimensions (here, the number of structural elements). Optionally, our system can incorporate depth information, e.g. from RGB-D cameras, which are already frequently mounted on mobile robot platforms. We show a considerable improvement over a baseline method even without using depth data, and an additional performance gain when depth input is enabled.
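What makes rjMCMC suitable here is that it samples over states whose dimensionality itself varies. The toy sketch below shows the mechanism only: a chain that jumps between different numbers of parts via birth/death moves. The scoring function is a hypothetical stand-in for a real image likelihood, and the setup is far simpler than the paper's.

```python
import math
import random

# Toy sketch of reversible-jump MCMC: sample the number k of structural
# parts, where k itself changes dimension via birth/death moves.
# log_score is an illustrative stand-in for a real data likelihood.

random.seed(0)
TRUE_K = 3  # hypothetical "correct" number of parts

def log_score(k):
    # toy posterior peaked at TRUE_K (replace with an image likelihood)
    return -abs(k - TRUE_K)

k = 1
samples = []
for _ in range(5000):
    # birth move adds a part, death move removes one
    k_new = k + random.choice([-1, 1])
    if k_new >= 1:
        # Metropolis acceptance for the dimension-changing jump
        if math.log(random.random()) < log_score(k_new) - log_score(k):
            k = k_new
    samples.append(k)

# the chain spends most of its time at the best-scoring number of parts
most_common = max(set(samples), key=samples.count)
```

In the actual problem, each state additionally carries the spatial layout and labels of the parts, and the moves must keep the sampler reversible across dimensions.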


Keyframe-Based Visual-Inertial Online SLAM with Relocalization

ArXiv e-prints

Complementing images with inertial measurements has become one of the most popular approaches to achieve highly accurate and robust real-time camera pose tracking. In this paper, we present a keyframe-based approach to visual-inertial simultaneous localization and mapping (SLAM) for monocular and stereo cameras. Our method is based on a real-time capable visual-inertial odometry method that provides locally consistent trajectory and map estimates. We achieve global consistency in the estimate through online loop-closing and non-linear optimization. Furthermore, our approach supports relocalization in a previously built map and allows for continued SLAM operation. We evaluate our approach in terms of accuracy, relocalization capability, and run-time efficiency on public benchmark datasets and on newly recorded sequences. We demonstrate state-of-the-art performance of our approach in recovering the camera trajectory, compared to a visual-inertial odometry method.
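The step from locally consistent odometry to global consistency via loop closing can be illustrated on a deliberately tiny example. This is not the paper's system; it is a hypothetical 1D pose graph in which drifting odometry constraints and a single loop-closure constraint are reconciled by linear least squares (the real problem is non-linear and over 6-DoF poses). All numbers are made up.

```python
import numpy as np

# Toy 1D pose graph: poses x0..x4 with x0 fixed at the origin.
# Odometry measurements drift; one loop closure restores consistency.
n = 5
odo = [1.1, 0.9, 1.2, 1.0]   # drifting measurements of x_{i+1} - x_i
loop = (4, 0, 4.0)            # loop closure: x4 - x0 observed as 4.0

# build the linear system A x = b over the free poses x1..x4
rows, b = [], []
for i, m in enumerate(odo):
    r = np.zeros(n - 1)
    if i > 0:
        r[i - 1] = -1.0
    r[i] = 1.0
    rows.append(r)
    b.append(m)

i, j, m = loop
r = np.zeros(n - 1)
r[i - 1] = 1.0                # coefficient of x4 (x0 is fixed to 0)
rows.append(r)
b.append(m)

A = np.asarray(rows)
b = np.asarray(b)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
# the 0.2 units of accumulated drift are spread over all constraints,
# so the end pose now honours the loop closure far more closely
```

Raw odometry places the end pose at 4.2 while the loop closure says 4.0; after optimization the residual is shared among all constraints instead of accumulating at the end of the trajectory, which is exactly the role loop closing plays in the full system.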
