The past decade has shown that the vast majority of visual computing problems admit significantly higher-quality solutions if the entire processing of the visual data is learned jointly: the era of Deep Learning has largely replaced earlier sequential approaches, such as first hand-designing features for a specific task and subsequently learning to analyze or classify the data using these features. Yet this so-called end-to-end learning paradigm treats the image data a sensor system records as the beginning of the learning pipeline. It neglects the fact that the image data itself is the result of an upstream process with many design choices in developing and dynamically adapting the sensor system, and thus remains a sequential approach in which the sensor system is designed separately from the data processing pipeline.

The goal of the "Learning to Sense" (L2S) research unit is the joint optimization of the design parameters of the sensor system together with the neural network that analyzes the resulting data, i.e., a true end-to-end machine learning methodology yielding systems optimized for an application-specific task. Consequently, the L2S project will conduct fundamental research both on making sensor systems adaptive, so that they provide promising degrees of freedom, and on a machine learning methodology that allows the joint optimization of the resulting sensor system and network parameters. In the long run, the L2S paradigm will provide a new methodology for the integral development of adaptive sensor systems alongside neural networks with optimal task-specific characteristics, resulting in substantially more efficient and more precise scene analysis with minimal manual intervention in the sensor system design.
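The joint-optimization idea described above can be illustrated with a minimal toy sketch: a "sensor" with one differentiable design parameter (here, a hypothetical exposure gain applied before additive read noise) is trained by gradient descent together with a downstream classifier, so the loss on the analysis task drives both the sensor parameter and the network weights. The sensor model, signal model, and all names below are illustrative assumptions, not the project's actual methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n=256, d=32):
    """Synthetic 1-D 'scenes': the class is encoded in the signal frequency."""
    t = np.arange(d)
    y = rng.integers(0, 2, n)
    freq = np.where(y == 1, 4.0, 8.0)
    x = np.sin(2 * np.pi * freq[:, None] * t[None, :] / d)
    x += 0.3 * rng.standard_normal((n, d))
    return x, y

def forward(x, gain, w, b):
    """Toy sensor (learnable gain + fixed read noise) followed by a linear classifier."""
    sensed = gain * x + 0.2 * rng.standard_normal(x.shape)
    logits = sensed @ w + b
    p = 1.0 / (1.0 + np.exp(-logits))
    return sensed, p

def train(steps=300, lr=0.5, d=32):
    """Jointly optimize the sensor parameter `gain` and the classifier (w, b)."""
    w = 0.01 * rng.standard_normal(d)
    b, gain = 0.0, 0.1
    x, y = make_batch(256, d)
    losses = []
    for _ in range(steps):
        sensed, p = forward(x, gain, w, b)
        eps = 1e-9
        losses.append(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))
        g = p - y                           # dL/dlogits (per sample, up to 1/n)
        grad_w = sensed.T @ g / len(y)      # network-weight gradient
        grad_b = g.mean()
        grad_gain = np.mean(g * (x @ w))    # sensor-parameter gradient: dlogits/dgain = x @ w
        w -= lr * grad_w
        b -= lr * grad_b
        gain -= lr * grad_gain              # the sensor adapts to the task
    return losses, gain

losses, gain = train()
```

The point of the sketch is only that the gradient of the task loss flows through the sensor model, so the exposure gain is tuned by the same update rule as the network weights rather than being fixed by hand beforehand.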