P1: L2S-Training with Continuous Sensor System Parameters and Irregular Data - Prof. Michael Moeller
Our vision of jointly training network and sensor system design parameters poses several fundamental challenges on the machine learning side:
1) If the layout of a sensor is part of the optimization, the spatial distribution of the data the sensor produces changes continuously, which requires network architectures that can handle such changing, irregular data.
2) An efficient joint optimization should exploit the strong knowledge we have about the effects certain sensor system design parameters have on the recorded data. This kind of prior (physical) knowledge needs to be incorporated into the network architectures.
3) As continuously changing real sensor systems will be too slow to provide real-time training data, our training processes will largely rely on simulations, so the resulting machine learning systems need to bridge the domain gap and generalize to real data.
4) A particular focus needs to be put on the formulation of the cost function for the joint optimization of sensor system and network parameters: splitting the data set into two parts that are used to optimize different sets of parameters has the potential to further improve generalization.
This project aims at tackling the four challenges above using concrete examples from 3D microscopy and THz imaging applications.
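The splitting idea in challenge 4 can be illustrated with a deliberately minimal sketch (all names and values below are hypothetical, not the project's actual method): a scalar "sensor" gain and a linear "network" weight are trained jointly, each on its own half of the data, in the spirit of bilevel optimization schemes.

```python
import numpy as np

# Toy model: the sensor measures m = theta * x, the network predicts w * m,
# and the task is to recover y = 2 * x. The network weight is updated on
# split A, the sensor parameter on split B.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x                       # ground-truth task
xa, ya = x[:100], y[:100]         # split A: used for the network weight
xb, yb = x[100:], y[100:]         # split B: used for the sensor parameter

theta, w, lr = 0.5, 0.1, 0.05     # sensor gain, network weight, step size

def loss(w, theta, x, y):
    return np.mean((w * theta * x - y) ** 2)

init = loss(w, theta, x, y)
for _ in range(200):
    r = w * theta * xa - ya                    # residual on split A
    w -= lr * np.mean(2.0 * r * theta * xa)    # gradient step on the network
    r = w * theta * xb - yb                    # residual on split B
    theta -= lr * np.mean(2.0 * r * w * xb)    # gradient step on the sensor
final = loss(w, theta, x, y)
```

Alternating the two parameter groups over disjoint splits mirrors, in miniature, how held-out data can regularize the sensor parameters against overfitting to the network's training set.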
P2: Interpretable and Robust L2S Optimization - Prof. Margret Keuper
In this part of the project, we address the joint optimization of neural network design and sensor design, with the objective of improving not only the overall prediction accuracy but also the network's generalization ability and robustness.
The subproject will therefore investigate, for example:
- neural architecture search in the context of joint network and sensor optimization
- adversarial training to improve robustness and generalization
- explainability for optimized architectures and sensor systems
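As a minimal illustration of the second point, the sketch below applies FGSM-style adversarial training to a toy logistic regression model with analytic gradients; the model and data are hypothetical stand-ins for the deep networks the subproject targets.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy 2-D binary classification data: two Gaussian blobs at +/-(1, 1)
rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, n).astype(float)
X = rng.normal(scale=0.5, size=(n, 2)) + (2.0 * y - 1.0)[:, None]

w, b, eps, lr = np.zeros(2), 0.0, 0.2, 0.1
for _ in range(100):
    p = sigmoid(X @ w + b)
    gx = (p - y)[:, None] * w          # dLoss/dInput for the logistic loss
    X_adv = X + eps * np.sign(gx)      # FGSM: one signed-gradient step
    p = sigmoid(X_adv @ w + b)         # train on the perturbed inputs
    g = p - y
    w -= lr * X_adv.T @ g / n
    b -= lr * g.mean()

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
```

Training on the worst-case perturbed inputs rather than the clean ones is the core mechanism by which adversarial training trades a small amount of clean accuracy for robustness.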
P3: Architectures for Non-Image-Generating L2S Vision Systems - Prof. Volker Blanz
The subproject Architectures for Non-Image-Generating L2S Vision Systems studies machine learning approaches that help to develop and optimize new types of sensors for visible light and THz radiation.
The goal is to develop sensors that are optimized for specific machine vision tasks and that do not record an image in the classical sense. Instead, they rely on differential signals such as spatial contrast and motion, and on foveated vision. Many of these techniques are motivated by the human visual system.
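One of the differential signals mentioned above, spatial contrast, can be sketched as a centre-surround (discrete Laplacian) response that reports intensity differences rather than absolute intensities; the example below is a toy illustration, not a concrete sensor design from the project.

```python
import numpy as np

def center_surround(img):
    """Discrete Laplacian: sum of the 4 neighbours minus 4x the centre pixel."""
    return (img[:-2, 1:-1] + img[2:, 1:-1] +
            img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])

img = np.zeros((8, 8))
img[:, 4:] = 1.0                  # vertical step edge in a flat scene
resp = center_surround(img)       # zero in flat regions, nonzero at the edge
```

A sensor emitting such a signal never outputs absolute brightness at all: uniform regions produce zero response, and only spatial change is transmitted, loosely analogous to retinal centre-surround receptive fields.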
P4: L2S for CMOS Image Sensor Design - Prof. Bhaskar Choubey
CMOS image sensors have been key to the recent transformation of our daily lives. Images produced by these digital cameras are also a leading contributor to the recent growth of AI technologies. However, traditional digital cameras have been built to match displays such as computer monitors and paper, rather than being optimised for various machine learning tasks. Although a number of AI algorithms are inspired by the animal brain, the image sensor takes little inspiration from the animal eye and performs almost no image processing. As a result, most digital cameras are mere data-producing inputs with limited information extraction. In this project, we will develop and build novel image sensor architectures that are co-designed with intelligent algorithms. To do so, on the one hand, we will revisit the pixel geometry and layout, redesigning them to provide optimal inputs to several image processing tasks developed by our collaborators in other L2S projects. Simultaneously, we will embed early analogue signal processing close to the pixel and the array, in the form of mathematical operators such as derivatives and convolutions as well as the input stages of typical neural networks. These will be designed to reduce the complexity of the succeeding computational tasks of AI systems in software. The collaborative research between sensor developers and AI researchers will ensure that neither optical nor computational efficiency is lost in this process.
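As a toy illustration of embedding mathematical operators such as derivatives near the pixel array (a hypothetical sketch, not the project's actual circuit design), the following emulates an in-sensor horizontal derivative as a fixed central-difference convolution, so the readout delivers edge-like differences instead of raw intensities.

```python
import numpy as np

deriv = np.array([-1.0, 0.0, 1.0])           # central-difference kernel

def pixel_level_derivative(row):
    # np.convolve flips its kernel, so pass the reversed kernel to
    # obtain a plain correlation with `deriv`, i.e. row[i+2] - row[i]
    return np.convolve(row, deriv[::-1], mode="valid")

row = np.array([0.0, 0.0, 1.0, 3.0, 6.0, 6.0, 6.0])   # ramp, then plateau
out = pixel_level_derivative(row)
```

Because the operator is fixed and local, it could in principle be computed in analogue hardware before digitisation, handing the downstream network a derivative signal rather than raw pixel values.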
P5: Forward and Differentiable Simulation of L2S Sensor Data - Prof. Andreas Kolb
The overall goal of the Learning to Sense (L2S) research unit, i.e., the joint optimization of the design parameters of a sensor system and of the associated neural network that analyzes the resulting data in an end-to-end machine learning fashion, requires a large amount of training data for different sensor and scene configurations. Since collecting training and test data with real sensors is costly or, in part, impossible, simulation of the sensor data formation process is a key success factor in establishing the link between sensor system parameters and the given application task.
This subproject focuses on the efficient simulation of the sensor data formation process, which includes the simulation of physical, real-world effects occurring in the scene at different wavelengths. This also covers the simulation of coherent radiation and its interaction with materials, synthetic, i.e., unfocused, imaging methods, and aspects of sensor system design, e.g., pixel and spectral filter layouts.
Technically, the focus of this project is on forward as well as differentiable simulation of sensor data, enabling application and hardware development based on machine learning for arbitrary sensor and scene parameters.
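A minimal sketch of this idea, assuming a toy 1-D sensor whose only design parameter is the width of a Gaussian point spread function: the forward simulation maps a scene to a measurement, and a gradient with respect to the sensor parameter, taken here by central finite differences, drives its optimization. In practice an autodiff framework would supply this gradient; all names and values below are illustrative.

```python
import numpy as np

def simulate(scene, sigma):
    """Toy forward model: blur the scene with a Gaussian PSF of width sigma."""
    xs = np.arange(scene.size)
    psf = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / sigma) ** 2)
    psf /= psf.sum(axis=1, keepdims=True)    # normalised discrete PSF
    return psf @ scene

def loss(sigma, scene, target):
    return 0.5 * np.sum((simulate(scene, sigma) - target) ** 2)

scene = np.zeros(16)
scene[8] = 1.0                               # point source
target = simulate(scene, 1.0)                # data "recorded" with sigma = 1

sigma, h, lr = 2.0, 1e-5, 5.0                # wrong initial sigma
start = loss(sigma, scene, target)
for _ in range(50):
    g = (loss(sigma + h, scene, target)
         - loss(sigma - h, scene, target)) / (2.0 * h)
    sigma -= lr * g                          # gradient step on the sensor parameter
```

The gradient steps recover the PSF width that produced the recorded data, which is exactly the mechanism that lets sensor design parameters participate in end-to-end training once the simulation is differentiable.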
Image from Benjamin Rappaz, Pierre Marquet, Etienne Cuche, Yves Emery, Christian Depeursinge, and Pierre J. Magistretti, "Measurement of the integral refractive index and dynamic cell morphometry of living cells with digital holographic microscopy," Opt. Express 13, 9361-9373 (2005).
P6: 3D Microscopy for Unstained Cell Clusters - Prof. Ivo Ihrke
Motivated by the goal of creating tools for the time-resolved 3D observation of clusters of cancer cells in vitro, i.e., under normal living conditions but outside the body, the subproject "3D Microscopy for Unstained Cell Clusters" investigates the potential for improving 3D refractive index microscopy, aiming to increase its speed, accuracy, and/or resolution using machine learning tools and by designing novel hardware configurations.
P7: Learning to Sense for Advanced Coherent THz Imaging Systems - Prof. Peter Haring-Bolivar
Coherent imaging in the mm-wave and THz frequency ranges opens up a huge application range, providing innovative imaging and sensing capabilities in optically inaccessible situations: among others, remote sensing under arbitrary environmental conditions, autonomous vehicle vision systems resilient to fog and rain, subsurface imaging for material analysis, quality control, and non-destructive testing, and security-related imaging systems for applications such as the detection of hidden explosives at airport checkpoints. This project plans to use machine-learning-based approaches to learn to cope with fundamental limitations of coherent imaging systems, and to train and validate their adequacy in the mm-wave and THz frequency ranges. The experimental realization will concentrate on sparse multiple-input multiple-output (MIMO) synthetic aperture radar (SAR) mm-wave and THz imaging approaches (from 300 GHz into the THz range), given their wide-ranging application relevance and intrinsic advantages.
This objective is guided by the following interrelated goals:
- learning how network architectures that incorporate physical knowledge can be used to develop adaptive synthetic image generation approaches that enhance image quality and correct scene-dependent interference artifacts in coherent 3D imaging;
- evaluating and understanding the robustness of machine-learning-based segmentation of reconstructed 3D THz imaging data originating from sparse illumination and sensor arrangements, including differential imaging modes;
- assessing whether segmentation can be attained directly from raw sensor data, without an intermediate 3D image generation step via synthetic reconstruction;
- learning how a task-dependent optimization of the sensor system can fundamentally maximize imaging and recognition capabilities while at the same time minimizing hardware and data acquisition effort.
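For context, the intermediate synthetic reconstruction step mentioned in the goals above can be sketched, under purely illustrative geometry and sampling assumptions, as coherent backprojection of point-scatterer echoes recorded along a small 1-D aperture at 300 GHz.

```python
import numpy as np

# Toy 1-D synthetic-aperture sketch: echoes of a single point scatterer are
# focused by coherent backprojection. Geometry, aperture, and sampling are
# illustrative only, not an actual L2S system configuration.
c = 3.0e8
f = 300.0e9                                   # carrier frequency (300 GHz)
k = 2.0 * np.pi * f / c                       # free-space wavenumber

ant = np.linspace(-0.05, 0.05, 401)           # antenna positions along the aperture (m)
scat_x, scat_z = 0.01, 0.30                   # point scatterer position (m)
R = np.hypot(ant - scat_x, scat_z)            # antenna-to-scatterer distances
echo = np.exp(-2j * k * R)                    # two-way phase of each echo

# backproject onto candidate lateral positions at the scatterer's depth
grid = np.linspace(-0.05, 0.05, 201)
Rg = np.hypot(ant[:, None] - grid[None, :], scat_z)
image = np.abs(np.sum(echo[:, None] * np.exp(2j * k * Rg), axis=0))
peak = grid[np.argmax(image)]                 # focus lands at the scatterer
```

Skipping this reconstruction, i.e., learning segmentation directly from the complex-valued echoes, is precisely the alternative the third goal sets out to assess.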