Microservice Integrity Protection (IN2106)
If you want a higher priority for the matching system, please send your CV and transcript of records / grades list to ahmadvan[at]in.tum.de
Master Praktikum (Lab)
Summer Semester 2018
Class Time and Room:
Prerequisites:
Good understanding of security concepts
Good C/C++ and Python programming skills
Comfortable with the command line and good knowledge of the Linux operating system
Algorithms and data structures
Any previous experience with the Go programming language and/or program analysis is a plus
Misc: All participants of this lab must fill in a disclaimer and hand it to the instructors as an original hard copy before the end of the first lab session.
Microservices expose a wider attack surface, exhibit more dynamic behaviour and change more rapidly, which makes security a major concern in such systems. Worse yet, the new build and deployment paradigms, so-called "DevOps", have granted a larger group of organizational users extended access to production machines. This, in turn, significantly increases the risk of rogue insiders harming system assets. In this setting, protecting system integrity becomes a challenging task.
In this lab, we utilize state-of-the-art protection techniques to mitigate integrity risks in microservices. In pursuit of holistic protection, we combine hardware-aided (particularly Intel SGX) and software-based schemes (the SIP-toolchain) to tackle various threats. Participants of this lab, in groups of two, will apply a multitude of protection mechanisms to a dataset of microservices. The lab ends with a comprehensive performance benchmark and a thorough security evaluation of the protected services.
Phase 1: “Software-based service integrity protection using SIP-toolchain”
In this phase, we raise the bar against insider attackers using software-based protection. Students will utilize the different tools provided in the SIP-toolchain to protect the integrity assets of a provided dataset of microservices. Each group will document all the steps taken, including the rationale behind the choice of tools and the necessary configurations. At the end of this phase, each group will submit their protection script, the protected programs and comprehensive documentation of the entire procedure.
Phase 2: “Sensitive data protection using Intel SGX”
In this phase, we aim at protecting the data assets in the given set of microservices. Students will first identify sensitive data whose tampering would defeat the system’s security. Subsequently, they will utilize the Intel SGX SDK to safeguard access to such data. All source code, along with comprehensive documentation, will be submitted by each group.
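With the Intel SGX SDK, the boundary between trusted and untrusted code is declared in an EDL (Enclave Definition Language) file; the sketch below illustrates the idea only, and the function names (ecall_update_secret, ocall_log) are invented for illustration, not part of any provided service:

```
enclave {
    trusted {
        /* Runs inside the enclave: only this ECALL may touch the secret. */
        public int ecall_update_secret([in, size=len] const uint8_t *buf, size_t len);
    };
    untrusted {
        /* Runs outside the enclave, e.g. for logging; must never receive secrets. */
        void ocall_log([in, string] const char *msg);
    };
};
```

The SDK's edger8r tool generates bridge code from such a file, so that sensitive data is only ever accessed through the declared ECALLs.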
Phase 3: “Behavioural integrity protection via Intel SGX”
In this phase, students will gain hands-on experience with protecting the integrity of sensitive operations. As in Phase 2, they will first identify sensitive operations in the given services and then use Intel SGX to protect them. All source code, along with comprehensive documentation, will be submitted by each group.
Phase 4: “Trusted infrastructure”
Phase 4 aims at protecting system integrity at the infrastructure level. In this phase, we use state-of-the-art tools to run microservices in trusted containers (powered by SGX). On top of this setting, students will design and implement a secure mediating service to seed in secrets and configurations.
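A container image for such a setup might look like the minimal sketch below; the base image, binary name and port are hypothetical placeholders, and the key point is that secrets are provisioned at start-up rather than baked into the image:

```
# Hypothetical base image and binary name -- illustration only.
FROM ubuntu:16.04
# Copy the protected service binary produced in the earlier phases.
COPY ./protected-service /usr/local/bin/protected-service
# No secrets in the image: the mediating service seeds them in
# at start-up, after verifying the enclave.
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/protected-service"]
```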
Phase 5: “Evaluation”
In this phase, every group will carry out a thorough evaluation of the protected services they implemented. Each group will measure and report the throughput and latency of their protected microservices. Further, the security guarantees and limitations of the utilized schemes will be analysed. The outcome of the performance and security analyses will be documented and delivered by each group at the end of this phase.
Groups must hand in their results in electronic form at the end of each phase. For all phases, each group must deliver their protection script and configuration as a Dockerfile, the protected services, documents and screencasts. Groups will document individual effort (measured in hours per task/activity). This information, together with the delivered documents, screencasts and software, will be taken into account for the final individual grades.
The outcome of each phase will be graded and contributes 80% of the final grade. All groups will present and discuss their protection mechanisms in every phase. All team members must be able to answer questions. Presentation grades are assigned to each individual student based on speech clarity, the organization and logical flow of the presentation, and the presenter's ability to answer questions and engage the audience. The presentation carries 20% of the final grade.
A grade bonus of 10% is offered to students who show outstanding performance by i) contributing to the extension of the protection schemes, ii) presenting automated (scripted) attacks on the protection schemes, or iii) implementing defence mechanisms against attacks beyond what was covered in the class.
Modules taught will include:
Module 0: Crash course on Microservices
Module 2: Integrity threats and protections
Module 3: SIP-toolchain (hands-on)
Module 4: Intel SGX and data protection
Module 5: Intel SGX and behavioural integrity
Module 6: Intel SGX and infrastructure security
There is no single literature source for the topics treated in the lab. However, the following is a list of recommended further reading:
- Collberg, C., and Nagra, J. Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection. Pearson Education, July 24, 2009.
- Aucsmith, D. Tamper resistant software: An implementation. In Proceedings of the First International Workshop on Information Hiding (1996), pp. 317–333.
- Banescu, S., Pretschner, A., Battré, D., Cazzulani, S., Shield, R., and Thompson, G. Software-based protection against changeware. In Proceedings of the 5th ACM Conference on Data and Application Security and Privacy (2015), ACM, pp. 231–242.
- Chang, H., and Atallah, M. J. Protecting software code by guards. In ACM Workshop on Digital Rights Management (2001), Springer, pp. 160–175.
- Chen, Y., Venkatesan, R., Cary, M., Pang, R., Sinha, S., and Jakubowski, M. H. Oblivious hashing: A stealthy software integrity verification primitive. In International Workshop on Information Hiding (2002), Springer, pp. 400–414.
- Dewan, P., Durham, D., Khosravi, H., Long, M., and Nagabhushan, G. A hypervisor-based system for protecting software runtime memory and persistent storage. In Proceedings of the 2008 Spring Simulation Multiconference (2008), Society for Computer Simulation International, pp. 828–835.
- Ghosh, S., Hiser, J., and Davidson, J. W. Software protection for dynamically-generated code. In Proceedings of the 2nd ACM SIGPLAN Program Protection and Reverse Engineering Workshop (2013), ACM, p. 1.
- Ghosh, S., Hiser, J. D., and Davidson, J. W. A secure and robust approach to software tamper resistance. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6387 LNCS (2010), 33–47.
- Horne, B., Matheson, L., Sheehan, C., and Tarjan, R. Dynamic Self-Checking Techniques for Improved Tamper Resistance. Security and Privacy in Digital Rights Management (2002), 141–159.
- Ibrahim, A., and Banescu, S. StIns4CS: A state inspection tool for C#. In Proceedings of the 2016 ACM Workshop on Software PROtection (2016), ACM, pp. 61–71.
- Intel. Intel Software Guard Extensions (Intel SGX) SDK, Jan. 13, 2017. software.intel.com/en-us/sgx-sdk/documentation.
- Jacob, M., Jakubowski, M. H., and Venkatesan, R. Towards integral binary execution: Implementing oblivious hashing using overlapped instruction encodings. In Proceedings of the 9th Workshop on Multimedia & Security (2007), ACM, pp. 129–140.
- Jin, H., and Lotspiech, J. Forensic analysis for tamper resistant software. In Software Reliability Engineering, 2003. ISSRE 2003. 14th International Symposium on (2003), IEEE, pp. 133–142.
- Junod, P., Rinaldini, J., Wehrli, J., and Michielin, J. Obfuscator-LLVM: Software protection for the masses. In Proceedings of the 1st International Workshop on Software Protection (2015), IEEE Press, pp. 3–9.
- LLVM.org. The LLVM compiler infrastructure, Jan. 13, 2017. llvm.org/docs/.
- Madou, M., Anckaert, B., Moseley, P., Debray, S., De Sutter, B., and De Bosschere, K. Software protection through dynamic code mutation. In International Workshop on Information Security Applications (2005), Springer, pp. 194–206.
- Malone, C., Zahran, M., and Karri, R. Are hardware performance counters a cost effective way for integrity checking of programs? In Proceedings of the Sixth ACM Workshop on Scalable Trusted Computing (STC ’11) (2011), ACM, p. 71.
- Qiu, J., Yadegari, B., Johannesmeyer, B., Debray, S., and Su, X. Identifying and understanding self-checksumming defenses in software. In Proceedings of the 5th ACM Conference on Data and Application Security and Privacy (2015), ACM, pp. 207–218.
- Wurster, G., Van Oorschot, P. C., and Somayaji, A. A generic attack on checksumming-based software tamper resistance. In Proceedings of the IEEE Symposium on Security and Privacy (2005), pp. 127–135.