Real Time Fusion and Evaluation of Vision Data

Session organizer: Thomas Längle

Special-Session Abstract:

In many applications, the decision about how an intelligent system should behave depends on data perceived by different sensors, or on sequences of data from the same sensor, possibly under changing conditions. Multi-sensor fusion deals with the acquisition, fusion, and evaluation of several sensor signals and other information sources. This session focuses on the real-time fusion of vision data in different areas, e.g. in medicine, in robotics, and in other industrial environments.

Multisensor Fusion for Underwater Vehicles

Session organizer: Thomas Rauschenbach

Special-Session Abstract:

This session focuses on algorithms and applications of multisensor systems for underwater vehicles.
Contributions are especially expected that concentrate on multisensor-based navigation and control, distributed sensor systems, sensor data fusion for mission and manoeuvre management, and autonomous and cooperating vehicles.
Furthermore, papers focusing on new sensor technologies for underwater applications, software architectures for enhancing the autonomy of vehicles, and sensors for underwater handling devices and robots are welcome.

Advanced Surveillance Applications

Session organizer: Wolfgang Koch

Special-Session Abstract:

In many military and civil applications there is a pressing need for reliable surveillance in a complex environment. To this end, sensor networks are frequently used to collect mutually complementary aspects of dynamic scenarios, which are characterized by the environment itself as well as by a plethora of stationary or mobile objects operating separately or collectively. Taking the produced data streams as input, multiple-sensor data fusion processes the sensor data through temporal integration, statistical analysis, and redundancy exploitation. In this context, the integration of background information such as maps and behavior patterns (templates) is important, finally resulting in a situation picture that serves as the basis for supporting decisions or actions.

Complex environments can be found in civil applications such as traffic monitoring and security assistance systems, and in many military applications such as ground or air surveillance. Special topics of interest are: multiple-sensor tracking for complex target scenarios; fusion of heterogeneous and passive sensors; exploitation of non-sensor background information; and performance measures (evaluation, prediction).

Fusion of Acoustic and Optical Signals for Localization and Tracking

Session organizer: Kristian Kroschel

Special-Session Abstract:

Both types of signals, acoustical and optical, have their specific pros and cons when applied to localization and tracking. Acoustical signals may be picked up by microphones from all directions, whereas optical signals can be sensed only from the direction in which the camera looks.
On the other hand, the distance of an object can be determined much more precisely by optical sensors if the same baseline is used for the stereo camera and the microphone array. In this session, microphone arrays for the fusion of acoustical signals will be presented; the fusion of optical and acoustical signals will be investigated to improve the measurement of an object's position and its tracking; and, finally, the use of acoustics to improve the interpretation of an optical sensor system will be discussed. Furthermore, applications of these techniques in industrial environments are welcome.
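As a toy illustration of the complementarity described above (not part of the session material), the following Python sketch fuses two hypothetical 2-D position estimates, one from a microphone array (good cross-range from bearing, poor range) and one from a stereo camera (good range), by inverse-covariance weighting. The sensors, coordinates and accuracy values are invented for illustration only.

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Inverse-covariance (weighted least squares) fusion of two
    independent position estimates x1, x2 with covariances P1, P2."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)       # fused covariance
    x = P @ (P1i @ x1 + P2i @ x2)      # fused estimate
    return x, P

# Hypothetical estimates of an object at roughly (4 m, 3 m):
# microphone array: accurate in y (bearing axis), poor in x (range)
x_mic, P_mic = np.array([4.4, 3.0]), np.diag([1.0, 0.05])
# stereo camera: accurate in x (range), poorer in y
x_cam, P_cam = np.array([4.0, 3.2]), np.diag([0.05, 1.0])

x, P = fuse(x_mic, P_mic, x_cam, P_cam)
# The fused covariance is tighter than either input along both axes,
# and the fused estimate leans toward the more accurate sensor per axis.
```

The same weighting falls out of a single Kalman-filter measurement update when both estimates are treated as independent measurements of the true position.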

 

Multisensor Fusion for Cognitive Automobiles

Session organizer: Dr.-Ing. Sören Kammel

Special-Session Abstract:

Robust and accurate perception of the environment is considered a prerequisite for a variety of next-generation driver assistance systems.
For this reason, the fusion of data from complementary and/or spatially distributed sensors is essential. In this session, novel approaches to information fusion at various levels of abstraction will be presented, ranging from fusion of low-level signal data up to fusion on a conceptual level.

Fusion of Heterogeneous Sensors in Robotics

Session organizer: Helge-Björn Kuntze

Special-Session Abstract:

The increasing complexity of automation tasks in public, private and industrial areas will require the introduction of a new generation of multi-sensor robots that are able to interact in a smart manner with their environment (e.g. with humans). Compared with the implementation of single visual or force sensors, more comprehensive and accurate information about the robot and its environment can be achieved by combining sensory data (sensor fusion). The contributions of this session report both new research results and the latest experiences from industrial implementations.

Fusion of Multivariate Sensor Data

Session organizer: Michael Heizmann

Special-Session Abstract:

The focus of this special session lies on multivariate sensor data and their fusion. Multivariate sensor data result from varying more than one degree of freedom available in the data acquisition. This special session discusses methods and applications for multivariate sensor fusion. Using different sensor principles (e.g. camera-based imaging systems, eddy-current sensors and GPS receivers) and fusion approaches (e.g. Kalman filtering, local Bayesian approaches), the authors show the properties and advantages of multivariate data and fusion.
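To make the role of Kalman filtering in such sensor fusion concrete, here is a minimal, self-contained 1-D sketch (not drawn from the session papers): a constant-velocity Kalman filter integrates noisy position measurements, such as those of a GPS receiver, over time. The motion model, noise levels and measurement values are illustrative assumptions.

```python
import numpy as np

# Constant-velocity model: state [position, velocity], time step dt.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 0.01 * np.eye(2)                    # process noise (assumed)
R = np.array([[4.0]])                   # measurement noise (assumed)

x = np.array([[0.0], [0.0]])            # initial state estimate
P = 10.0 * np.eye(2)                    # initial uncertainty

def kalman_step(x, P, z):
    """One predict/update cycle for a scalar position measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Object moving at ~1 m/s, measured with noise (fixed seed for repeatability).
rng = np.random.default_rng(0)
for t in range(1, 21):
    z = np.array([[t * 1.0 + rng.normal(0, 2.0)]])
    x, P = kalman_step(x, P, z)
# After 20 steps the estimated velocity should be near 1 m/s, and the
# position uncertainty well below the raw measurement variance.
```

Varying more than one degree of freedom, e.g. adding a second sensor channel, amounts to extending the measurement matrix H and the noise covariance R accordingly.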

 

Multisensor Fusion for Human-Activity Analysis

Session organizer: Rainer Stiefelhagen

Special-Session Abstract:

This special session will focus on research related to the audio-visual analysis of human activities and interaction. Human interaction is inherently multimodal: we use speech and language, but also facial expressions, body language and gestures, and we look at each other. In order to fully understand human activities and interaction, multimodal perception and sensor fusion are therefore necessary.
The topics addressed in this session include, but are not limited to:

  • audio-visual analysis of activities and interaction
  • distributed sensor networks for tracking people and activities
  • audio-visual person tracking
  • audio-visual person identification
  • audio-visual speech recognition.

 

Multi-Component Sensors for Enhanced Information, Selectivity and Reliability

Session organizer: H. Ruser

Special-Session Abstract:

In many practical applications, low-cost, autonomous and maintenance-free sensors are increasingly desired. ‘Multi-component sensors’ are understood here as the entirety of sensor elements and model-based signal-processing tools combined in the system, particularly including built-in monitoring functions.
Examples of these smart sensor functionalities, based on advanced methods of sensor modelling, are the observation and classification of model parameters for automatic self-validation, and the derivation of situation-based behavioural patterns. As a consequence, multi-component sensors have the potential to gain more plant-relevant information and to achieve greatly improved reliability and selectivity over a long maintenance-free operation time, even in harsh environments.
In this special session, important aspects related to the design, development and illustrative practical applications of multi-component sensors are addressed.

All prospective MFI 2006 authors are encouraged to consider submitting to a special session whose theme is appropriate to their work.

 