P1: L2S-Training with Continuous Sensor System Parameters and Irregular Data - Prof. Michael Moeller
Our vision of jointly training network and sensor system design parameters poses several fundamental challenges on the machine learning side:
1) If the layout of a sensor is part of the optimization, the data the sensor produces continuously changes its spatial distribution, requiring network architectures that can handle such changing, irregular data.
2) An efficient joint optimization should exploit the strong prior knowledge we have about the effect certain sensor system design parameters have on the recorded data. This kind of (physical) prior knowledge needs to be incorporated into the network architectures.
3) As continuously changing real sensor systems will be too slow to provide real-time training data, our training processes will largely rely on simulations, such that the resulting machine learning systems need to find ways to bridge the domain gap and generalize to real data.
4) A particular focus needs to be put on the formulation of the cost function for the joint optimization of sensor system and network parameters: splitting the data set into two parts that are used to optimize the two different sets of parameters has the potential to further improve generalization.
This project aims at tackling the four challenges above using specific examples from 3D microscopy and THz imaging applications.
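The data-splitting idea in challenge 4 can be illustrated with a deliberately tiny toy model. In this hedged sketch (all names and the scalar "sensor gain" parameter are hypothetical, not part of the project), a sensor parameter `theta` and a network weight `w` are updated alternately, each on its own disjoint data split:

```python
import random

random.seed(0)

# Toy latent scene values; the task is to reconstruct them.
data = [random.uniform(-1, 1) for _ in range(200)]
split_net, split_sensor = data[:100], data[100:]  # two disjoint splits

theta = 0.2   # hypothetical sensor design parameter (e.g. a gain)
w = 1.0       # network weight
lr = 0.05

def grad_w(x, theta, w):
    # prediction w * (theta * x), target x; d/dw of the squared error
    return 2 * (w * theta * x - x) * theta * x

def grad_theta(x, theta, w):
    # same loss, differentiated with respect to the sensor parameter
    return 2 * (w * theta * x - x) * w * x

for epoch in range(200):
    # network step on the first split
    g = sum(grad_w(x, theta, w) for x in split_net) / len(split_net)
    w -= lr * g
    # sensor step on the second, held-out split
    g = sum(grad_theta(x, theta, w) for x in split_sensor) / len(split_sensor)
    theta -= lr * g

# After training, w * theta should approach 1 (identity reconstruction).
print(round(w * theta, 3))
```

The point of the sketch is purely structural: each parameter group only ever sees gradients computed on "its" split, which is the mechanism the cost-function formulation in challenge 4 would exploit to improve generalization.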
P2: Interpretable and Robust L2S Optimization - Prof. Margret Keuper
P3: Architectures for Non-Image-Generating L2S Vision Systems - Prof. Volker Blanz
The subproject Architectures for Non-Image-Generating L2S Vision Systems studies machine learning approaches that help to develop and optimize new types of sensors for visible light and THz radiation.
P4: L2S for CMOS Image Sensor Design - Prof. Bhaskar Choubey
CMOS image sensors have been key to the recent transformation of our lives, and the images produced by digital cameras are also a leading contributor to the recent growth in AI technologies. However, traditional digital cameras have been built to match displays such as computer monitors and paper, rather than being optimised for machine learning tasks. Although a number of AI algorithms are inspired by the animal brain, the image sensor takes little inspiration from the animal eye and undertakes almost no image processing. As a result, most digital cameras are mere data-producing inputs with limited information extraction. In this project, we will develop and build novel image sensor architectures that are co-designed with intelligent algorithms. To do so, on the one hand we will revisit pixel geometry and layout, redesigning them to provide optimal inputs to several image processing tasks developed by our collaborators in other L2S projects. On the other hand, we will embed early analogue signal processing close to the pixel and the array, in the form of mathematical operators such as derivatives and convolutions, as well as the input stages of typical neural networks. These will be designed to reduce the complexity of the succeeding computational tasks of AI systems in software. This collaborative research between sensor developers and AI researchers will ensure that neither optical nor computational efficiency is lost in the process.
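As a rough illustration of the kind of near-pixel operator mentioned above, the following hedged sketch (the function name and kernel choice are our own, not the project's design) applies a horizontal central-difference derivative, one of the mathematical operators an analogue front-end might compute before any digital processing:

```python
# A minimal sketch of near-pixel preprocessing, assuming the sensor array
# exposes its raw values as a 2D list of floats (hypothetical interface).

def derivative_x(image):
    """Central horizontal difference, as an in-pixel analogue stage might compute."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(1, w - 1):
            out[r][c] = 0.5 * (image[r][c + 1] - image[r][c - 1])
    return out

# A flat region yields zero response, while a step edge yields a strong one,
# so the digital back-end only needs to process sparse edge activations.
img = [[0, 0, 0, 1, 1, 1] for _ in range(4)]
edges = derivative_x(img)
print(edges[0])  # -> [0.0, 0.0, 0.5, 0.5, 0.0, 0.0]
```

The design motivation matches the text: moving such cheap, fixed operators into the analogue domain shrinks the data volume and the complexity of the succeeding software stages.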
P5: Forward and Differentiable Simulation of L2S Sensor Data - Prof. Andreas Kolb
The overall goal of the Learning to Sense (L2S) research unit, i.e., the joint optimization of the design parameters of a sensor system and of the associated neural network that analyzes the resulting data in an end-to-end machine learning fashion, requires a large amount of training data for different sensor and scene configurations. Since collecting training and test data with real sensors is costly or, in some cases, not possible at all, simulating the sensor data formation process is a key success factor in establishing the link between sensor system parameters and the given application task.
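The role of a differentiable simulation can be sketched with a toy forward model. In this hedged example (the blur-mixing model and all names are illustrative assumptions, not the project's actual simulator), a design parameter `sigma` controls a simulated optical blur, and a finite-difference gradient of the task loss with respect to `sigma` stands in for true automatic differentiation:

```python
def simulate(scene, sigma):
    # Toy forward model: sigma mixes each sample with its neighbours,
    # mimicking an optical blur controlled by a sensor design parameter.
    n = len(scene)
    return [(1 - sigma) * scene[i]
            + 0.5 * sigma * (scene[i - 1] + scene[(i + 1) % n])
            for i in range(n)]

def task_loss(sigma, scene, target):
    y = simulate(scene, sigma)
    return sum((a - b) ** 2 for a, b in zip(y, target)) / len(y)

scene = [0, 0, 1, 0, 0, 1, 1, 0]
target = scene            # task: preserve the sharp scene
sigma, lr, eps = 0.6, 0.5, 1e-4

losses = []
for _ in range(50):
    losses.append(task_loss(sigma, scene, target))
    # central finite difference as a stand-in for autodiff through the simulator
    grad = (task_loss(sigma + eps, scene, target)
            - task_loss(sigma - eps, scene, target)) / (2 * eps)
    sigma -= lr * grad

print(round(sigma, 3), round(losses[-1], 5))
```

Because the simulator links the design parameter to the task loss, gradient descent drives `sigma` toward the task-optimal value (here, no blur). This is the mechanism, in miniature, by which a differentiable simulation makes sensor design parameters trainable end to end.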
Image from Benjamin Rappaz, Pierre Marquet, Etienne Cuche, Yves Emery, Christian Depeursinge, and Pierre J. Magistretti, "Measurement of the integral refractive index and dynamic cell morphometry of living cells with digital holographic microscopy," Opt. Express 13, 9361-9373 (2005).
P6: 3D Microscopy for Unstained Cell Clusters - Prof. Ivo Ihrke
Motivated by the goal of creating tools for the time-resolved 3D observation of clusters of cancer cells in vitro, i.e. under normal living conditions but outside the body, the subproject "3D Microscopy for Unstained Cell Clusters" investigates the improvement potential of 3D refractive index microscopy, aiming to increase its speed, accuracy, and/or resolution by means of machine learning tools and novel hardware configurations.