Knowledge Reuse: From Threat to Causal Models and Back!

Type:

Master Seminar

Semester:

Summer Semester 2018

Language:

English

Preliminary Meeting:

Thursday 01.02.2018 13:00-14:00 

Room: 00.11.038

Lecturer:

Prof. Dr. Alexander Pretschner

Amjad Ibrahim

SWS:

2

ECTS:

4/5

LvNr:

1123 (IN2107)

Max. Number of participants

12

Rules for participation and registration

  1. Plagiarism of any form (blatant copy-paste, summarizing someone else's ideas/results without reference, etc.) will result in immediate expulsion from the course.
  2. All submissions are mandatory, and each submission must meet a certain level of quality. Submissions that are mere collections of buzzwords/keywords or coarse document structures will not be accepted; such submissions will be graded 5.0.
  3. Late submissions will incur penalties.
  4. Non-adherence to the submission guidelines will incur penalties.
  5. Slides must be discussed with the supervisor at least one week before the presentation. Presentations must be held in English.
  6. Participation and attendance in all seminar presentations is mandatory. Students must read the final submissions of their colleagues and participate in the discussions.
  7. Registration for the seminar takes place via the TUM Online Matching System.
  8. Once successfully registered for the seminar:
    1. Students select up to 3 available individual seminar topics of their choice.
    2. They send the selected topics via email (subject: “Knowledge Reuse seminar”) in order of preference, from 1 (most preferred) to 3, to Amjad Ibrahim.
  9. Once assigned a topic, you will receive a confirmation email.
  10. Students must confirm their acceptance of the topic and participation in the seminar by TBA at the latest.
  11. Students who wish to quit the seminar must send a cancellation email by TBA; failing that, they will be graded 5.0.

Content

Causality is an intuitive notion of the world. Philosophers have struggled to find a non-circular definition of causality [3, 5]. Computer scientists, however, took a more practical route and established a formal foundation of causality [2, 1]. Such a foundation provides a language to reason about causality, enabling system designers to understand the causal relationships between events during the runtime of their systems. This is beneficial in different computer science domains such as software testing, distributed systems, AI, security, and safety.

Causality, at least in computer science, is model-relative. Therefore, creating a model is a crucial step toward effective causality inference. The goal of this seminar is to propose domain-specific causal modeling methodologies related to security goals.
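To make the notion of model-relative causal reasoning concrete, the following is a minimal, hypothetical sketch in Python (not the full Halpern-Pearl definition [1, 2]): a structural causal model over binary variables, together with a simple "but-for" counterfactual test. All variable names and the toy scenario are illustrative assumptions.

```python
def evaluate(equations, exogenous, interventions=None):
    """Evaluate a structural causal model given exogenous values.

    `interventions` fixes variables to given values, overriding both
    exogenous settings and structural equations. Equations must be
    listed in topological (dependency) order."""
    interventions = interventions or {}
    values = {**exogenous, **interventions}
    for var, fn in equations.items():
        values[var] = interventions.get(var, fn(values))
    return values

def is_but_for_cause(equations, exogenous, var, outcome):
    """X=x is a but-for cause of the outcome if flipping X falsifies it."""
    actual = evaluate(equations, exogenous)
    if not actual[outcome]:
        return False
    flipped = evaluate(equations, exogenous, {var: not actual[var]})
    return not flipped[outcome]

# Toy scenario: a breach occurs if the attacker obtains credentials
# (via phishing) AND the server is reachable.
equations = {
    "credentials": lambda v: v["phishing"],
    "breach": lambda v: v["credentials"] and v["reachable"],
}
exogenous = {"phishing": True, "reachable": True}

print(is_but_for_cause(equations, exogenous, "phishing", "breach"))   # True
print(is_but_for_cause(equations, exogenous, "reachable", "breach"))  # True
```

The but-for test is deliberately simpler than the actual-causality definitions studied in the seminar, but it already shows why the model matters: the verdict depends entirely on which equations and variables the modeler chose to include.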

In the threat modeling domain, we already have abstract causal knowledge represented in different forms. For example, there are at least 30 directed acyclic graphical models [4] used for different purposes, such as risk estimation, attack cost approximation, and defense planning. Let us consider an example. Attack trees (AT) [7] are the best-known and perhaps most widely used model for security purposes. Such trees enable engineers to systematically categorize how an attack can be carried out to achieve a high-level goal. Managers find attack trees appealing because of their visual nature, as well as their ability to estimate the cost of defense or attack in a top-down perspective. Researchers appreciate attack trees as well, mainly because of their well-defined syntax and semantics [6]. Although attack trees are widely used to model possible attacks on a system, they are not sufficient for answering accountability-related causal queries.
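The AND/OR decomposition of attack trees can be sketched in a few lines of Python. The example below is a hypothetical illustration in the spirit of [6, 7] (node representation and function names are our own): a tree of AND/OR nodes over basic attack steps, evaluated against the set of steps an attacker has actually performed.

```python
AND, OR, LEAF = "AND", "OR", "LEAF"

def node(kind, label, children=()):
    """Build an attack-tree node as a plain dictionary."""
    return {"kind": kind, "label": label, "children": list(children)}

def attack_succeeds(tree, performed_steps):
    """Check whether the performed basic steps achieve the root goal."""
    if tree["kind"] == LEAF:
        return tree["label"] in performed_steps
    results = [attack_succeeds(c, performed_steps) for c in tree["children"]]
    return all(results) if tree["kind"] == AND else any(results)

# Goal: open a safe, either by picking the lock, or by learning the
# combination AND entering it (freely adapted from the safe example in [7]).
safe = node(OR, "open safe", [
    node(LEAF, "pick lock"),
    node(AND, "use combo", [
        node(LEAF, "learn combo"),
        node(LEAF, "enter combo"),
    ]),
])

print(attack_succeeds(safe, {"learn combo", "enter combo"}))  # True
print(attack_succeeds(safe, {"learn combo"}))                 # False
```

Read this way, an attack tree is essentially a boolean formula over basic steps, which is exactly where the comparison with causal models begins: the tree says which step combinations achieve the goal, but not which performed step counts as the cause of a concrete attack.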

This research will study the mapping between threat models and causal models. Its goals can be summarized as follows:

• Analyzing the interaction between security threat modeling techniques (e.g. attack trees) and causal models. Specifically, reasoning about the value added by creating causal models.

• Proposing algorithms to extract causal models from security models in contexts like malicious insiders.

How it works

  1. Each student will study and analyze the literature around a topic/model.
  2. The literature should provide a general understanding of a specific field, its definitions, and its components, e.g., attack graphs. Taxonomies or meta-models are a good starting point.
  3. This understanding is directed toward a comparison with causal models.
  4. Each student will come up with a clear comparison (differences and commonalities) between the related model and causal models.

Pre-requisites

A background in security, system modeling, or any field specified in the list of topics is desirable, but not required. A basic understanding of algorithms and data structures is expected.

Objective

• Analyzing the interaction between security threat modeling techniques (e.g. attack trees) and causal models. Specifically, reasoning about the value added by creating causal models.

• Proposing algorithms to extract causal models from security models in contexts like malicious insiders.

Working Plan

1. Familiarize with the concepts of actual causality and threat modeling.

2. Write a state-of-the-art survey of threat modeling.

3. Analyze, document, and present the details of how the goals were achieved.

Possible topics

The list of related fields includes (but is not limited to):

Causal modeling +

  • Threat and causal models
  • Safety Models: Fault trees and others
  • Attack Tree and Graph Generation
  • Graph Transformation Systems
  • A theory of malicious insiders 

Organization

Students will survey the literature of one of the research topics assigned to them by their supervisors; they are encouraged to find and read further relevant articles on the topic. At the end of the seminar, students are to submit an exposé that incorporates the knowledge they acquired and the findings of any experiments they conducted while researching the topic. The exposé takes the form of a scientific paper with its own succinct chain of argumentation; merely paraphrasing and augmenting the contents of the original papers is not sufficient. We expect the paper to be at most 15 pages in Springer LNCS style; we will not accept any other formats. All submissions must be PDF files: no other file formats are acceptable. The presentation will be 30 minutes plus 15 minutes of discussion.

Seminar Literature

Causality 

 

[1] Joseph Y. Halpern. A modification of the Halpern-Pearl definition of causality. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), Buenos Aires, Argentina, July 25-31, 2015, pages 3022–3033, 2015.

[2] Joseph Y Halpern. Actual causality. MIT Press, 2016.

[3] David Hume. An Enquiry Concerning Human Understanding. History of Economic Thought Books, 1748.

[4] Barbara Kordy, Ludovic Piètre-Cambacédès, and Patrick Schweitzer. DAG-based attack and defense modeling: Don’t miss the forest for the attack trees. Computer Science Review, 13:1–38, 2014.

[5] David Lewis. Counterfactuals and comparative possibility. Journal of Philosophical Logic, 2(4):418–446, 1973.

[6] Sjouke Mauw and Martijn Oostdijk. Foundations of attack trees. In Information Security and Cryptology - ICISC 2005, 8th International Conference, Seoul, Korea, December 1-2, 2005, Revised Selected Papers, pages 186–198, 2005.

[7] Bruce Schneier. Attack trees: Modeling security threats. Dr. Dobb’s Journal, December 1999.