We consider safety and security from a systems engineering perspective: concepts, models, languages, architectures, components, patterns, tools, methods, and processes for safer and more secure software systems. In the Munich Center for Internet Research, we study the application of our technologies and methods to the Internet in interdisciplinary research collaborations.
A first focus is on testing. At the level of code, we design and implement methods for generating defect-based tests and for reducing large test suites, specifically for regression testing of embedded systems. We are especially interested in driver assistance systems and (partly) autonomous driving.
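To give an idea of test-suite reduction, a classic greedy set-cover heuristic keeps only tests that contribute new coverage. This is a minimal sketch, not the group's actual tooling; the suite, requirement names, and coverage map are illustrative assumptions.

```python
def reduce_suite(coverage):
    """Greedy set-cover heuristic: repeatedly pick the test that covers
    the most still-uncovered requirements until all are covered."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # Test contributing the most remaining requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining requirements are not coverable
        selected.append(best)
        uncovered -= coverage[best]
    return selected

# Hypothetical regression suite: test name -> requirements it covers.
suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r1"},
    "t4": {"r4"},
}
print(reduce_suite(suite))  # ['t2', 't1']
```

The reduced suite preserves the coverage of the original four tests with only two, which is the kind of saving that matters when regression tests run on embedded hardware.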
We have a strong record in model-based testing. The idea is to generate tests from a model of the system under test (SUT) and its environment: sequences or trees of inputs and expected outputs. Since the model must be more abstract than the SUT, the different levels of abstraction must be bridged, which usually accounts for as much as 50% of the model-based testing effort. We currently work on property-driven (i.e., not purely structural) and random test case generation as well as on generation mechanisms for bridge components. Recent dissertations include Holling's on the defect-based generation of tests for embedded systems (2016) and Büchler's on the generation of automated security tests for web applications (2015). Current work focuses on testing advanced driver assistance systems and fault localization/failure clustering.
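The core idea, deriving input/expected-output sequences from an abstract model, can be sketched with a toy Mealy machine. The model, inputs, and depth-bounded enumeration below are illustrative assumptions; real model-based testing uses far richer models and selection criteria.

```python
from itertools import product

# Toy Mealy-machine model of the SUT: (state, input) -> (next_state, output).
MODEL = {
    ("idle", "start"): ("running", "ack"),
    ("running", "stop"): ("idle", "done"),
    ("running", "ping"): ("running", "pong"),
}
INPUTS = ["start", "stop", "ping"]

def generate_tests(depth, initial="idle"):
    """Enumerate all input sequences up to length `depth` that the model
    accepts, paired with the outputs the SUT is expected to produce."""
    tests = []
    for seq in product(INPUTS, repeat=depth):
        state, outputs = initial, []
        for inp in seq:
            if (state, inp) not in MODEL:
                break  # model does not define this step; discard sequence
            state, out = MODEL[(state, inp)]
            outputs.append(out)
        else:
            tests.append((list(seq), outputs))
    return tests

print(generate_tests(2))
# [(['start', 'stop'], ['ack', 'done']), (['start', 'ping'], ['ack', 'pong'])]
```

The abstraction gap mentioned above shows up when the symbolic inputs ("start", "ping") must be translated into concrete stimuli for the real SUT, which is exactly what bridge components do.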
Modern cyber-physical systems require not only tests but also runtime monitoring. Under the umbrella of accountability and on the basis of our distributed data usage technology (see below), we study frameworks and implementations that can be used to link unwanted events at runtime to responsible components or people. In addition to algorithms for runtime verification and causality analyses, mechanisms for checking software integrity turn out to be crucial in this context. We look at this problem for CPS, web-based applications, and microservices.
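In its simplest form, linking unwanted runtime events to responsible components amounts to scanning an event stream against a policy. This is a deliberately minimal sketch under assumed names (the event-log format and the actions are hypothetical); real runtime verification evaluates temporal properties and causal chains rather than single events.

```python
def monitor(events, forbidden):
    """Scan a runtime event stream and link each unwanted event
    to the component that emitted it (a toy accountability check)."""
    violations = []
    for timestamp, component, action in events:
        if action in forbidden:
            violations.append((timestamp, component, action))
    return violations

# Hypothetical event log: (timestamp, component, action).
log = [
    (1, "sensor-driver", "read"),
    (2, "logger", "write-unencrypted"),
    (3, "uploader", "send-external"),
]
print(monitor(log, forbidden={"write-unencrypted"}))
# [(2, 'logger', 'write-unencrypted')]
```

Note that such a verdict is only trustworthy if the monitored components have not been tampered with, which is why software-integrity checking is crucial in this context.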
One of the focal points of our research is distributed data usage control. Usage control generalizes access control to the future: what happens to data once it has been given away? Relevant requirements include, "delete data after 30 days," "don't delete data within 5 years," "notify me whenever data is accessed," "pictures in my social network profile must not be printed or saved," "no data may leave the system un-anonymized." This is relevant in the areas of data protection, compliance with regulatory frameworks, business processes that are implemented in a distributed way (e.g., via SOAs in the cloud), the general management of intellectual property and secrets and, yes, DRM. The fun part is that requirements of this kind can be enforced at all levels of the software stack: the CPU, a virtualized processor, the OS, the runtime system, infrastructure applications such as X11, application frameworks, services, and business processes. Even better, the topic spans exciting theoretical, conceptual, methodological, economic, and technical challenges. Several demos are available online. Recent dissertations include Lovat's on combining information flow tracking with usage control across abstraction layers (2015), Kelbert's on distributed data usage control (2016), Birnstill's on usage control for privacy-respecting camera surveillance (2016), and Kumari's on turning human-understandable policies into machine-readable configurations for enforcement mechanisms (2015). In his interdisciplinary thesis at the intersection of computer science and law, Bier (2017) has extended the ideas to data provenance and the adverse effects of using data provenance tracking systems.
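A usage-control requirement such as "delete data after 30 days" can be checked as an obligation over a trace of data-handling events. The trace format and event names below are illustrative assumptions; a deployed enforcement mechanism would observe events live and react, rather than audit a finished trace.

```python
from datetime import datetime, timedelta

def check_deletion_obligation(events, max_age=timedelta(days=30)):
    """Check the obligation 'delete data within 30 days of receipt'
    over a finished trace of (time, action, data_id) events."""
    received, deleted = {}, {}
    for t, action, data_id in events:
        if action == "receive":
            received[data_id] = t
        elif action == "delete":
            deleted[data_id] = t
    violations = []
    for data_id, t_recv in received.items():
        t_del = deleted.get(data_id)
        if t_del is None or t_del - t_recv > max_age:
            violations.append(data_id)  # never deleted, or deleted too late
    return violations

# Hypothetical trace: d1 is deleted in time, d2 is never deleted.
t0 = datetime(2024, 1, 1)
trace = [
    (t0, "receive", "d1"),
    (t0, "receive", "d2"),
    (t0 + timedelta(days=10), "delete", "d1"),
]
print(check_deletion_obligation(trace))  # ['d2']
```

The same skeleton generalizes to prohibitions ("don't delete within 5 years") by flipping the comparison, and the check could in principle sit at any of the stack levels listed above.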
In the domain of information security in a narrower sense, we currently work at the interface of software obfuscation and diversity (Banescu 2017) and on machine learning for malware detection and forensics (Wüchner 2016).