Software Integrity Protection (IN2106)


Master Praktikum (Lab)


Summer Semester 2017



Class Time and Room:

Tuesday 14:00-16:30

Room: 00.11.038


Mohsen Ahmadvand








TUMonline, Moodle


C programming knowledge

Good understanding of security concepts

Comfortable with command line and basic knowledge of Linux operating system

Basic understanding of algorithms and data structures


Assembly language knowledge

Knowledge of using Linux

Any previous knowledge in program analysis would be a plus

Max. Number of participants:


Misc: All participants of this lab must fill in a disclaimer and hand the original hard copy to the instructors before the end of the first lab session.


Software systems are subject to Man-At-The-End (MATE) attacks. MATE attackers have control over the system on which the software runs, and can therefore manipulate both the software itself and its runtime environment for their own benefit. Attackers' motives include, but are not limited to: using software illegally by bypassing license checks, accessing proprietary data, cheating in games, and extracting confidential information (e.g. encryption keys) from an application. In this lab course, students will learn about different protection measures and their pros and cons, and finally implement a selected set of techniques.

This lab aims to raise the bar against MATE attackers by making their attacks more expensive and labor-intensive. Students will first learn about the state of the art in software-based and hardware-aided tamper-proofing. Later, in groups of two, the participating students will apply the acquired knowledge hands-on by implementing a selected set of software-based measures. Moreover, students will employ Intel SGX (a hardware-assisted integrity protection mechanism) to protect a provided program. The lab ends with a thorough evaluation of the performance and security of the implemented protection schemes.

Phase 1: "Introspection: Self-Checksumming" The goal of this phase is to detect and prevent tampering attacks on a program's static properties. Groups of two students will implement a self-checksumming program in C/C++ and Python. Afterwards, they will use their tool to protect a set of programs provided by the course instructor. Each group will document all the steps taken, including the algorithm and necessary configurations. At the end of this phase, each group will submit their protector's implementation, the protected programs, and the documentation.
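The essence of a checker can be sketched in a few lines of Python, hashing a function's bytecode rather than native machine code. The function name, digest scheme, and check/response wiring below are illustrative assumptions, not the required design:

```python
import hashlib

def licensed_feature(x):
    # Hypothetical asset the checker guards.
    return x * 2

def code_digest(fn):
    # Hash the function's compiled bytecode; any patch to the
    # function body changes this digest.
    return hashlib.sha256(fn.__code__.co_code).hexdigest()

# Recorded once at protection time by the protector.
EXPECTED = code_digest(licensed_feature)

def checked_call(fn, expected, *args):
    # Check()/Response() paradigm: verify integrity, then execute.
    if code_digest(fn) != expected:
        raise RuntimeError("tampering detected")
    return fn(*args)

print(checked_call(licensed_feature, EXPECTED, 21))  # -> 42
```

A real protector would instead hash ranges of machine code in the binary and hide many overlapping checkers in the protected program.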

Phase 2: "Introspection: Self-Encryption" In this phase we will protect the program's static properties and at the same time attempt to conceal its functionality as far as possible. Groups will implement a self-encrypting program and apply it to the provided programs. All source code, along with the documentation, will be submitted by each group.
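A toy Python sketch of the idea, with a hypothetical single-byte XOR "cipher" standing in for a real encryption scheme:

```python
KEY = 0x5A  # hypothetical single-byte XOR key; real schemes derive
            # keys at runtime and use proper ciphers

# Done once at protection time: store the function only encrypted.
PLAIN = "def secret(x):\n    return x + 1\n"
ENC = bytes(b ^ KEY for b in PLAIN.encode())

def run_encrypted(enc, name, *args):
    # Decrypt just before execution, keeping the plaintext's
    # lifetime as short as possible.
    src = bytes(b ^ KEY for b in enc).decode()
    ns = {}
    exec(compile(src, "<protected>", "exec"), ns)
    return ns[name](*args)

print(run_encrypted(ENC, "secret", 41))  # -> 42
```

In a native implementation, the encrypted unit would be a code block that is decrypted in place and re-encrypted after use.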

Phase 3: "State Inspection: Oblivious Hashing" Detecting and preventing tampering attacks on program dynamics are the objectives of this phase. Each group will implement a runtime trace verification program (oblivious hashing) in C/C++ and/or Python. They will again use their tool to protect the set of provided programs. Each group will submit their tool, the protected programs, and the corresponding documentation at the end of the phase.
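A minimal Python sketch of the principle: hash updates are interleaved with the computation so that the final digest witnesses the execution trace. The summation and its test vector are illustrative assumptions:

```python
import hashlib

def protected_sum(values):
    # Oblivious hashing sketch: the hash evolves with intermediate
    # runtime state, not just with the inputs.
    h = hashlib.sha256()
    total = 0
    for v in values:
        total += v
        h.update(repr(total).encode())  # fold intermediate state in
    return total, h.hexdigest()

# At protection time, the expected trace digest is recorded for a
# known input (a hypothetical test vector):
_, EXPECTED_TRACE = protected_sum([1, 2, 3])

# At runtime, a verifier recomputes and compares:
total, trace = protected_sum([1, 2, 3])
assert trace == EXPECTED_TRACE
print(total)  # -> 6
```

Tampering that alters any intermediate value changes the trace digest even if the final result happens to be correct.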

Phase 4: "Hardware-aided Integrity Protection: Intel SGX" The objective of this phase is to protect applications using tamper-resistant hardware. We will use Intel SGX for this purpose and grant all students shell access to a machine with SGX hardware. Each group will receive a program for which they need to identify assets and sensitive code regions (targets for tampering). Once the sensitive regions are identified, they will refactor the program and make all the necessary modifications to leverage hardware-aided tamper protection.
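Partitioning typically starts from an enclave interface definition. A minimal, purely illustrative SGX EDL fragment declaring one trusted call (ECALL) and one untrusted callback (OCALL) might look like the following; the function names are hypothetical, not part of the provided program:

```
enclave {
    trusted {
        /* ECALL: runs inside the enclave */
        public int ecall_check_license([in, string] const char *key);
    };
    untrusted {
        /* OCALL: callback into the untrusted host */
        void ocall_log([in, string] const char *msg);
    };
};
```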

Phase 5: "Evaluation" In this phase every group will carry out a thorough evaluation of their implemented protection measures. Each group will measure and report the protection time and runtime overhead of their protectors and of the SGX-protected program. Further, each group will identify the security guarantees and limitations of the implemented schemes in a security analysis. The outcome of the performance and security analyses will be documented and delivered by each group at the end of this phase.
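One way to approximate runtime overhead is a simple wall-clock comparison of the unprotected and protected variants. The sketch below, with hypothetical baseline/protected functions, is illustrative rather than a prescribed methodology:

```python
import hashlib
import time

def measure_overhead_pct(baseline, protected, runs=200):
    # Rough wall-clock comparison of an unprotected function vs. its
    # protected variant, reported as percentage overhead.
    def total_time(fn):
        start = time.perf_counter()
        for _ in range(runs):
            fn()
        return time.perf_counter() - start
    t_base = total_time(baseline)
    t_prot = total_time(protected)
    return (t_prot - t_base) / t_base * 100.0

# Hypothetical example: the "protected" variant adds a checksum pass.
base = lambda: sum(range(100))
prot = lambda: (sum(range(100)), hashlib.sha256(b"x" * 4096).digest())

print(f"overhead: {measure_overhead_pct(base, prot):.1f}%")
```

For the report, repeated runs and warm-up iterations would be needed to smooth out timing noise.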


Groups must hand in their results in electronic form at the end of each phase. For all phases, each group of students must deliver their software, documents, and screen-casts in a Docker container. Groups will document individual effort (measured in hours per task/activity). This information, together with the delivered documents, screen-casts, and software, will be taken into account for the final individual grades. The outcome of each phase will be graded and will contribute 80% of the final grade.

All groups will present and discuss protection mechanisms and reverse engineering techniques in every phase. Every student must present at least once. Presentation grades are assigned to each individual student based on: speech clarity, organisation and logical flow of the presentation, and the presenter's ability to answer questions and engage the audience. The weight of the presentation is 20% of the final grade.

A grade bonus of 10% is offered to students who use further hardening measures to improve resilience of their protection measures.


Modules taught will include:

Module 0: Introduction and Motivation

  • Software protection scenarios
  • Attack trees
  • Man-at-the-End vs. network attacker
  • Overview of the attacks
    • Disassembly
    • Decompilation
    • Debuggers
    • Symbolic / concolic execution
  • Introspection
  • State inspection
  • Layered and remote protection

Module 1: Protection Process & LLVM

  • Check() and Response() paradigm 

  • Overview of protection process & code transformation 

    • Post-compile
    • Pre-compile
    • Compile-time
    • Load time and runtime
  • Granularity of protection
    • Function
    • Basic block
    • Instruction
  • Control flow integrity
  • LLVM compiler infrastructure & passes

Module 2: Introspection: self-checksumming

  • Self-checking and self-checksumming
  • Network of checkers and cyclic checks
  • Stealth analysis
  • Attacks: memory split and taint analysis

Module 3: Introspection: self-encryption

  • Key derivation and block chaining
  • Whitebox cryptography
  • Process level virtualisation
  • Stealth analysis 

  • Attacks: memory dump and key extraction

Module 4: State inspection

  • Trace authentication

  • Environmental states

  • Oblivious hashing

  • Stealth analysis

  • Attacks: time-of-check vs. time-of-use

Module 5: Intel SGX

  • Running software on untrusted commodity platforms
  • Runtime integrity

  • Trusted and untrusted program domains
  • Enclaves
  • Local and remote attestation
  • Limitations



There is no single literature source for the topics treated in the lab. However, the following is a list of recommended further reading:

  1. Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection, Jasvir Nagra, Christian Collberg, Pearson Education, Jul 24, 2009
  2. Aucsmith, D. Tamper resistant software: An implementation. Proceedings of the First International Workshop on Information Hiding (1996), 317–333.
  3. Banescu, S., Pretschner, A., Battré, D., Cazzulani, S., Shield, R., and Thompson, G. Software-based protection against changeware. In Proceedings of the 5th ACM Conference on Data and Application Security and Privacy (2015), ACM, pp. 231–242.
  4. Chang, H., and Atallah, M. J. Protecting software code by guards. In ACM Workshop on Digital Rights Management (2001), Springer, pp. 160–175.
  5. Chen, Y., Venkatesan, R., Cary, M., Pang, R., Sinha, S., and Jakubowski, M. H. Oblivious hashing: A stealthy software integrity verification primitive. In International Workshop on Information Hiding (2002), Springer, pp. 400–414.
  6. Dewan, P., Durham, D., Khosravi, H., Long, M., and Nagabhushan, G. A hypervisor- based system for protecting software runtime memory and persistent storage. In Proceedings of the 2008 Spring simulation multiconference (2008), Society for Computer Simulation International, pp. 828–835.
  7. Ghosh, S., Hiser, J., and Davidson, J. W. Software protection for dynamically-generated code. In Proceedings of the 2nd ACM SIGPLAN Program Protection and Reverse Engineering Workshop (2013), ACM, p. 1.
  8. Ghosh, S., Hiser, J. D., and Davidson, J. W. A secure and robust approach to software tamper resistance. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6387 LNCS (2010), 33–47.
  9. Horne, B., Matheson, L., Sheehan, C., and Tarjan, R. Dynamic Self-Checking Techniques for Improved Tamper Resistance. Security and Privacy in Digital Rights Management (2002), 141–159.
  10. Ibrahim, A., and Banescu, S. Stins4cs: A state inspection tool for c. In Proceedings of the 2016 ACM Workshop on Software PROtection (2016), ACM, pp. 61–71.
  11. Intel. Intel Software Guard Extensions (Intel SGX) SDK, Jan. 13, 2017.
  12. Jacob, M., Jakubowski, M. H., and Venkatesan, R. Towards integral binary execution: Implementing oblivious hashing using overlapped instruction encodings. In Proceedings of the 9th workshop on Multimedia & security (2007), ACM, pp. 129–140.
  13. Jin, H., and Lotspiech, J. Forensic analysis for tamper resistant software. In Software Reliability Engineering, 2003. ISSRE 2003. 14th International Symposium on (2003), IEEE, pp. 133–142.
  14. Junod, P., Rinaldini, J., Wehrli, J., and Michielin, J. Obfuscator-LLVM: software protection for the masses. In Proceedings of the 1st International Workshop on Software Protection (2015), IEEE Press, pp. 3–9.
  15. The LLVM compiler infrastructure, Jan. 13, 2017.
  16. Madou, M., Anckaert, B., Moseley, P., Debray, S., De Sutter, B., and De Bosschere, K. Software protection through dynamic code mutation. In International Workshop on Information Security Applications (2005), Springer, pp. 194–206.
  17. Malone, C., Zahran, M., and Karri, R. Are Hardware Performance Counters a Cost Effective Way for Integrity Checking of Programs. Proceedings of the sixth ACM workshop on Scalable trusted computing - STC '11 (2011), 71.
  18. Qiu, J., Yadegari, B., Johannesmeyer, B., Debray, S., and Su, X. Identifying and Understanding Self-Checksumming Defenses in Software. 207–218.
  19. Wurster, G., Van Oorschot, P. C., and Somayaji, A. A generic attack on checksumming-based software tamper resistance. Proceedings - IEEE Symposium on Security and Privacy (2005), 127–135.