Area 1: Algorithm and Software Engineering
This area is concerned with high-quality software and is thus an indispensable part of Core Informatics.
Fast, high-quality solutions of computational problems are at the core of Area 1. We address them using a methodology that closely integrates theoretical modelling and analysis with tuned implementations and careful experimental evaluation. We single out two focus activities:
The Basic Algorithmic Toolbox: (PIs: Sanders, Beckert, Tahoori) Some basic algorithmic components of software are reused in almost every program, often in a position relevant for the overall performance. Examples are sorting and data structures managing collections of objects (e.g., dictionaries, priority queues, . . . ). Sanders's group is leading in efficiently realizing such components on modern hardware. A large part of this long-term activity is currently financed by an ERC advanced grant. The extension to future processor-in-memory systems, however, will be pursued in cooperation with Tahoori's group. Moreover, some of these algorithms are so complicated that, before including them in the standard library of a programming language, one should prove their correctness. This will be done in cooperation with Beckert's group.
Massive Scale Route Planning: (PIs: Bläsius, Sanders, Wagner) Finding the shortest path in a graph is a fundamental optimization problem with many applications. We want to use and extend our leading position in this area to scale it to applications where millions of routes have to be computed in a short time or where the network is huge. Examples are traffic steering or controlling a fleet of (autonomous) taxis with various forms of ride-sharing, with the eventual vision of a streamlined software-defined transportation system. Here the sheer number of choices (which car picks up which passenger? Where? When? Split into subtrips?) demands a huge number of distance calculations with real-time requirements. Another example is robot route planning, which takes place in high-dimensional continuous spaces.
As in algorithm engineering, our approach to program correctness closely integrates theory and practice.
Provably Correct Program Construction: (PIs: Beckert, Platzer, Schaefer) Processes, methods, and tools for the construction of provably correct programs are essential for all areas where functional correctness of software is relevant, in particular where cyber-physical systems are used (automotive, aviation, robotics, etc.). So far, most advances in the area of formal methods aim at verification in the small, i.e., correctness of individual programs or components. These methods need to be generalised to be applicable to large software systems and to allow the required combination of different methods for different system properties. Thus, in this activity, we target the following research questions: (1) embedding of verification methods into the software development process (in cooperation with PI Reussner), (2) the scalability of analysis and verification techniques for larger software systems (including parallelisation, in cooperation with PI Sanders), (3) the development of verification techniques that are robust w.r.t. system changes, allow reuse and adaptation of specifications and proof artefacts, and are thus applicable in agile development processes, and (4) methods for the coupling of different verification tools based on a system's architecture and behavioural models, which allows combining light-weight methods (testing, simulation, runtime checking) with formal post-hoc verification or correctness-by-construction development.
In this subtopic, we study methods and tools that improve the quality of large software systems.
Quality Engineering for Research Software: (PIs: Koziolek, Reussner, Stamatakis) Software plays a substantial role in essentially every scientific discipline. Challenges encompass gathering, analyzing, persisting, and managing rapidly growing data volumes. As a consequence, scientific software quality is pivotal to generating novel insights and knowledge. Software quality is a multi-dimensional term comprising numerous specific quality attributes, such as performance, efficiency, scalability, reliability, availability, and reproducibility, but also correctness and maintainability. Researchers in this field will extend current tools and methods for white-box algorithm performance models (cooperation with the algorithm engineering subtopic) with sophisticated hardware and infrastructure models to more accurately assess research software quality. Examples of targeted application areas are software for material sciences, but also medical, biological, or bioinformatics analysis software. Such tools and models will yield rapid feedback during the design and evolution of software on quality attributes such as performance, scalability, reliability, availability, and maintainability. The correctness of research software will be assessed via formal verification tools (cooperation with the formal methods subtopic), in particular light-weight methods favoring automation and scalability over comprehensibility. These tools need to be offered as a service and extended in terms of scalability and detectable error classes, for example, with regard to checking correct resource de-allocation by means of model-checking approaches.
- Integration of KeY and KeYmaeraX
- Refinement for Algorithm Verification
- Engineering the Basic Algorithmic Toolbox (PIs: Bläsius, Sanders)
- Logical Refinement Reasoning for Dynamical Systems
- Towards Software Defined Public Transportation