Title - Software based Remote Attestation: measuring integrity of user applications and kernels

Authors: Raghunathan Srinivasan1 (corresponding author), Partha Dasgupta1, Tushar Gohad2

Affiliation: 1. School of Computing, Informatics and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA

2. MontaVista Software LLC

Address:

Email: raghus@asu.edu

Phone: (1) 480-965-5583

Fax: (1)-480-965-2751



Abstract:

This research describes remote attestation, a method for measuring the integrity of a process using a trusted remote entity. Remote attestation has mostly been implemented with hardware support. Our research focuses on implementing these techniques entirely in software, using code injection into a running process to attest its integrity.


A trusted external entity issues a challenge to the client machine, and the client machine has to respond to this challenge. The result of this challenge provides the external entity with assurance of whether or not the software executing on the client machine is compromised. This paper also presents methods to determine the integrity of the operating system on which software based remote attestation occurs.
Keywords: Remote Attestation, Integrity Measurement, Root of Trust, Kernel Integrity, Code Injection.

  1. Introduction

Many consumers use security sensitive applications on a machine (PC) alongside other, vulnerable software. Malware can patch various software on the system by exploiting these vulnerabilities. A regular commodity OS consists of millions of lines of code (LOC) [1]. Device drivers usually range in size from a few lines of code to around 100 thousand lines of code (KLOC), with an average of 1 bug per device driver [2]. Another empirical study showed that bugs in the kernel may have a lifetime of nearly 1.8 years on average [3], and that there may be as many as 1000 bugs in the 2.4.1 Linux kernel. The cumulative lesson of such studies is that it is difficult to prevent errors that can be exploited by malware. Smart malware can render anti-malware detection techniques ineffective by disabling them. Hardware detection schemes are considered non-modifiable by malware. However, mass scale deployment of hardware techniques remains a challenge, and they also have the stigma of digital rights management (DRM) attached. Another issue with hardware measurement schemes is that software updates have to be handled such that only legitimate updates get registered with the hardware. If the hardware device offers an API to update measurements, malware can attempt to use that API to place malicious measurements in the hardware. If the hardware device is not updatable from the OS, then it has to be reprogrammed to reflect updated measurements.
Software based attestation schemes offer flexibility and can be changed quickly to reflect legitimate updates. Due to the ease of use and the potential of mass scale deployment, software based attestation schemes offer significant advantages over hardware counterparts. However, every software based attestation scheme is potentially vulnerable to some corner case attack scenario. In extreme threat model cases and cases where updates are rare, network administrators can switch to using hardware based measurement schemes. For the general consumer, software based schemes offer a lightweight protocol that can detect intrusions prior to serious data losses.
Remote Attestation is a set of methods that allows an external trusted agent to measure the integrity of a system. Software based solutions for Remote Attestation vary in their implementation techniques. Pioneer [4], SWATT [5], Genuinity [6], and TEAS [7] are well known examples. In TEAS, the authors prove mathematically that it is highly difficult for an attacker to determine the response to every integrity challenge, provided the code for the challenge is regenerated for every instance. However, TEAS does not provide any implementation framework.
In Genuinity, a trusted authority sends executable code to the kernel on the un-trusted machine, and the kernel loads the attestation code to perform the integrity measurements. Genuinity has been shown to have some weaknesses by two studies [8], [5]. However, the authors of Genuinity have since claimed that these attacks may work only in the specific cases mentioned in the two works; regenerating the challenge at the server would render the attacks insignificant [9].
This work is quite similar to Genuinity, with certain differences in technique. Like Genuinity, this work focuses on the importance of regenerating the code that performs integrity measurement of an application on the client. We do not utilize operating system support to load the challenge; the application itself has to receive the code and execute it. In addition, this paper also deals with the problem of what we term a ‘redirect’ attack, where an attacker may direct the challenge to a different machine.
The attestation mechanisms presented in this work use the system call interface of the client platform. Due to this, the problem of determining the integrity of an application on a client platform is split into two orthogonal problems. The first involves determining the integrity of the user application in question by utilizing system calls and software interrupts. The second is determining the integrity of the system call table, interrupt descriptors, and the Text section of the kernel that runs on the client platform.
For the first problem, it is assumed that the system calls produce correct results. Rootkits are assumed to be absent from the system. We assume that various other user level applications on the client platform may attempt to tamper with the execution of the challenge. For the second problem, this paper presents a scheme in which an external entity can determine the state of the OS Text section, system call table, and interrupt descriptor table on the client machine. It can be noted that the external entities obtaining the integrity measure for the application and for the OS can be different.
The solution in this paper is designed to detect changes made to the code section of a process. This allows the user (Alice) to determine whether one application on the system is clean. The same technique can be extended to every application on the system to determine whether all installed applications are clean. Trent is a trusted entity who has knowledge of the structure of an un-tampered copy of the process (P) to be verified. Trent may be the application vendor, or an entity that offers attestation services for various applications. It should be noted that Trent only needs to know the contents and behavior of the clean program image of P to generate challenges. Trent provides executable code (C) to Alice (the client/end user), which Alice injects into P. C computes overlapping MD5 hashes over sub-regions of P and returns the results to Trent. Trent has to be a trusted agent because the client downloads program code and performs certain operations based on Trent’s instructions. If Trent is not trusted, then Alice cannot run the required code with certainty that it will not compromise Alice’s machine (MAlice).
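The overlapping-hash step can be sketched as follows; the region size and overlap below are illustrative choices, not values prescribed by the protocol:

```python
import hashlib

def overlapping_md5(image: bytes, region: int = 4096, step: int = 2048):
    """Hash overlapping sub-regions of a code image.

    With step < region, consecutive regions overlap, so a byte
    modified anywhere in the image affects at least one digest,
    and typically more than one.
    """
    digests = []
    for start in range(0, max(len(image) - region, 0) + 1, step):
        chunk = image[start:start + region]
        digests.append(hashlib.md5(chunk).hexdigest())
    return digests
```

Because the regions overlap, an attacker cannot tamper with a byte that falls "between" measured regions; every byte is covered by at least one hashed window.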
C is newly generated, randomized code that executes on the user end to determine the integrity of an application on an x86 based platform. This ensures that an attacker cannot determine the results of the integrity measurement without executing C. Trent places programming constructs in C that make it difficult to execute C in a sandbox or a controlled environment. A software-only protocol means that there is an opportunity for an attacker (Mallory) to forge results. The solution provided in this paper protects itself from the following attacks.
Replay Attack: Mallory may provide Trent with forged results by replaying the response to a previous attestation challenge. To prevent this scenario, Trent changes the operations performed in every instance of C. This is done by placing lines in the source code of C that depend on various constants, and C is recompiled for every attestation request. These constants are generated prior to compilation using random numbers. Consequently, the outputs of the measurements change whenever the constants change. The code produced by Trent forces Mallory to monitor and adapt the attack to suit each challenge. We utilize the observation that program analysis of obfuscated code is complex enough to prevent attacks [7].
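The effect of the per-challenge constants can be sketched as below; folding a fresh random constant into the digest is an illustrative stand-in for the constant-dependent arithmetic compiled into each instance of C:

```python
import hashlib
import secrets

def make_challenge_constant() -> bytes:
    # Trent draws fresh random constants before compiling each instance of C.
    return secrets.token_bytes(16)

def measure(image: bytes, constant: bytes) -> str:
    # The constant is folded into the measurement, so a response
    # recorded for one challenge is useless against the next one.
    return hashlib.md5(constant + image).hexdigest()
```

A replayed response fails because it was computed under a constant that Trent will never reuse.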
Tampering: Mallory may analyze the operations performed by the challenge in order to return forged values. Trent places dummy instructions, randomizes the locations of variables, and inserts some self modifying instructions to prevent static analysis of the challenge. It must be noted that self modifying code is normally not permitted on the Intel x86 architecture, as the code section is protected against writes. However, we use the Linux system call ‘mprotect’ to change the protections on the code section of the process in which C executes to allow this feature. Furthermore, Trent maintains a time threshold by which the results are expected to be received; this reduces Mallory’s window of opportunity to launch a successful attack.
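A minimal sketch of the mprotect step, shown here from Python via ctypes on a Linux-like system; the anonymous mapping stands in for a code page, whereas the paper's C makes the equivalent libc call on its own text pages:

```python
import ctypes
import ctypes.util
import mmap

PROT_READ, PROT_WRITE = 0x1, 0x2  # values from <sys/mman.h>

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

def toggle_write_protection() -> bool:
    """Make a page read-only, then writable again, as C must do
    before patching its own instructions."""
    buf = mmap.mmap(-1, mmap.PAGESIZE)  # page-aligned anonymous mapping
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    # Drop write permission, mimicking a protected text page.
    if libc.mprotect(addr, mmap.PAGESIZE, PROT_READ) != 0:
        return False
    # Restore write permission so the page can be patched.
    if libc.mprotect(addr, mmap.PAGESIZE, PROT_READ | PROT_WRITE) != 0:
        return False
    buf[0:4] = b"\x90\x90\x90\x90"  # the write now succeeds
    return buf[0:4] == b"\x90\x90\x90\x90"
```

In the actual challenge, C would additionally pass PROT_EXEC and target the page containing its own instructions; the sketch only demonstrates toggling protections.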
Redirect: Mallory may redirect the challenge from Trent to a clean machine, or execute it in a sandbox that will provide correct integrity values as the response to Trent. The executable code sent by Trent obtains machine identifiers to determine whether it executed on the correct machine. It also runs certain tests to determine whether it was executed inside a sandbox. C communicates multiple times with Trent while executing tests on P, which makes it harder for Mallory to prevent C from executing. These techniques are discussed in detail in section 5.
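One way of binding a response to the machine it ran on can be sketched as follows; the specific identifiers used here (MAC address, hostname) are illustrative assumptions, since the actual checks are detailed in section 5:

```python
import hashlib
import socket
import uuid

def machine_bound_response(measurement: str, nonce: bytes) -> str:
    """Mix machine identifiers into the response, so a digest computed
    on a different (clean) machine will not match what Trent expects
    for this client."""
    identifiers = f"{uuid.getnode():012x}:{socket.gethostname()}".encode()
    return hashlib.md5(nonce + identifiers + measurement.encode()).hexdigest()
```

Trent, knowing the registered identifiers of MAlice, can recompute the expected response and detect a challenge that was redirected elsewhere.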
For obtaining the integrity measurement of the OS Text section, the attestation service provider Trent′ provides executable code (Ckernel) to the client OS (OSAlice). OSAlice receives the code in a kernel module and executes it. It is assumed that OSAlice has means, such as digital signatures, to verify that Ckernel did originate from Trent′. The details of the implementation of this scheme are in section 7.
The rest of the paper is organized as follows. Section 2 contains a review of the related work. Section 3 describes the problem statement, threat model, and assumptions made in this solution. Section 4 describes the overall design of the system; section 5 describes the obfuscation techniques used in creating C. Section 6 describes the implementation of the application attestation system; section 7 describes the implementation of kernel runtime measurements; and section 8 concludes the paper.


  2. Related Work

Code attestation involves checking whether the program code executing within a process is legitimate or has been tampered with. It has been implemented using hardware, virtual machine, and software based detection schemes. In this section we discuss these schemes, as well as methods for program analysis and obfuscation techniques available in the literature.
2.1 Hardware based integrity checking

Some hardware based schemes operate off the TPM chip provided by the Trusted Computing Group [10], [11], [12], while others use a hardware coprocessor that can be placed into the PCI slot of the platform [13], [14]. In the schemes using the TPM chip, the kernel or an application executing on the client obtains integrity measurements and provides them to the TPM; the TPM signs the values with its private key and may forward them to an external agent for verification. The coprocessor based schemes read measurements on the machine without any assistance from the OS or the CPU on the platform, and compare the measurements to previously stored values. A hardware based scheme can allow a remote (or external) agent to verify whether the integrity of all the programs on the client machine is intact. Hardware based schemes have a stigma of DRM attached to them, may be difficult to reprogram, and are not ideally suited for mass deployment. TPM based schemes also have little backward compatibility, in that they do not work on legacy systems that lack a TPM chip.


Integrity Measurement Architecture (IMA) [15] is a software based integrity measurement scheme that utilizes the underlying TPM on the platform. The verification mechanism does not rely on the trustworthiness of the software on the system. IMA maintains a list of hash values of all possible executable content loaded in the system. When an executable, library, or kernel module is loaded, IMA performs an integrity check prior to executing it. IMA measures values while the system is loading; however, it does not provide means to determine whether a program already in execution has been tampered with in memory. IMA also relies on being called by the OS when any application is loaded; it relies on kernel functions for reading the file system, and on the underlying TPM to maintain an integrity value over the measurement list residing in the kernel. Due to this, each new measurement added to the kernel-held measurement list requires a change to the values stored in the Platform Configuration Register (PCR) of the TPM security chip on the system.
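The PCR update that IMA relies on follows the TPM's extend semantics, which can be sketched as a hash chain (SHA-1 here, matching TPM 1.2-era PCRs):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new PCR value is the hash of the old
    value concatenated with the digest of the new measurement, so
    recorded measurements cannot be rolled back or reordered
    without detection."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def aggregate(measurements) -> bytes:
    """Aggregate a measurement list, starting from the all-zero PCR."""
    pcr = b"\x00" * 20
    for m in measurements:
        pcr = pcr_extend(pcr, m)
    return pcr
```

A verifier replays the kernel-held measurement list through the same chain and compares the result against the signed PCR value reported by the TPM.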
2.2 Virtualization based Integrity checking

Virtualization implemented without hardware support has been used for security applications. This form of virtualization was implemented prior to large scale deployment of platforms containing built-in hardware support for virtualization. Terra uses a trusted virtual machine monitor (TVMM) and partitions the hardware platform into multiple virtual machines that are isolated from one another [16]. Hardware dependent isolation and virtualization are used by Terra to isolate the TVMM from the other VMs. Terra implements a scheme where potentially every class of operation is performed on a separate virtual machine (VM) on the client platform. Terra is installed in one of the VMs and is not exposed to external applications like mail, gaming, and so on. The TVMM is given the role of a Host OS. The root of trust in Terra is present in the hardware TPM; the TPM takes measurements on the boot loader, which in turn takes measurements on the TVMM. The TVMM takes measurements on the VMs prior to loading them. Terra relies on the underlying TPM to take some measurements. Most traditional VMM based schemes are bulky and need significant resources on the platform to appear transparent to the end user; this holds true for Terra, where the authors advocate multiple virtual machines.


2.3 Integrity checking using hardware assisted virtualization

Hardware support for virtualization has recently been deployed in widely used x86 consumer platforms. Intel and AMD have introduced Intel VT-x and AMD-V, processor extensions with which a system administrator can load certain values into the hardware to set up a VMM and execute the operating system in a guest environment. The VMM runs in a mode with higher privileges than the guest OS and can therefore enforce access control between multiple guest operating systems, and also between application programs inside an OS. The system administrator can also set up events in the hardware which cause control to exit from the guest OS to the VMM in a trap-and-emulate model. The VMM can decide, based on local policy, whether to emulate or ignore the instruction.


VIS [17] is a hardware-assisted virtualization scheme that determines the integrity of client programs connecting to a remote server. VIS contains an Integrity Measurement Module (IMM) which reads the cryptographically signed reference measurement (manifest) of a client process. VIS verifies the signature in a scheme similar to X.509 certificate verification and then takes the exact same measurements on the running client process to determine whether it has been tampered with. The OS loader may perform relocation of certain sections of the client program, in which case the IMM reverses these relocations using information provided in the manifest and then obtains the measurement values. VIS requires that the pages of the client programs are pinned in memory (not paged out). VIS restricts network access during the verification phase to prevent any malicious program from bypassing registration, and does not allow client programs unrestricted access to the network before they have been verified.
2.4 Software based integrity measurement schemes

Genuinity [6] implements a remote attestation system in which the client kernel initializes the attestation for a program. It receives executable code and maps it into the execution environment as directed by the trusted authority. The system maps each page of physical memory into multiple pages of virtual memory, creating a one-to-many relationship between physical and virtual pages. The trusted external agent sends a pseudorandom sequence of addresses; the Genuinity system then takes the checksum over the specified memory regions. Genuinity also incorporates various other values, such as the instruction and data TLB miss counts, and counters for the number of branches and instructions executed. The executable code performs various checks on the client kernel and returns the results to a verified location in the kernel on the remote machine, which returns the results back to the server. The server verifies whether the results are in accordance with the checks performed; if so, the client is verified. This protocol requires OS support on the remote machine for many operations, including loading the attestation code into the correct area in memory and obtaining hardware values such as TLB miss counts. Commodity OSes run many applications; requiring OS support or a kernel module for each specific application can be considered a major overhead.


In Pioneer [4] the verification code resides on the client machine. The verifier (server) sends a random number (nonce) as a challenge to the client machine, and the result returned as the response determines whether the verification code has been tampered with. The verification code then performs attestation on some entity within the machine and transfers control to it, forming a dynamic root of trust on the client machine. Pioneer assumes that the challenge cannot be redirected to another machine on a network; however, in many real world scenarios a malicious program can attempt to redirect challenges to another machine that has a clean copy of the attestation code. In its checksum procedure, Pioneer incorporates the values of the Program Counter and Data Pointer, both of which hold virtual memory addresses. An adversary can load another copy of the client code to be executed in a sandbox-like environment and provide it the challenge; this way the adversary can obtain the results of the computation that the challenge produces and return them to the verifier. Pioneer also assumes that the server knows the exact hardware configuration of the client for performing a timing analysis, which places a restriction on the client not to upgrade or change hardware components. In TEAS [7] the authors propose a remote attestation scheme in which the verifier generates program code to be executed by the client machine. Random code is incorporated into the attestation code to make analysis difficult for the attacker. The analysis provided by the authors shows that it is very unlikely that an attacker can clearly determine the actions performed by the verification code; however, an implementation is not described in the research.
A Java Virtual Machine (JVM) based root of trust method has also been implemented to attest code [18]. The authors implement programs in Java and modify the JVM to attest the runtime environment. However, the JVM has known vulnerabilities and is itself software that operates within the operating system, and hence is not a suitable candidate for checking integrity.
SWATT [5] implements a remote attestation scheme for embedded devices. The attestation code resides on the node to be attested. The code contains a pseudorandom number generator (PRG) which receives a seed from the verifier. The attestation code includes the memory areas corresponding to the random numbers generated by the PRG as part of the measurement returned to the verifier. The obtained measurements are passed through a keyed MAC function; the key for each MAC operation is provided by the verifier. The problem with this scheme is that if an adversary obtains the seed and the key to the MAC function, the integrity measurements can be spoofed, as the attacker has access to the MAC function and the PRG code.
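SWATT's measurement loop can be sketched as below; Python's `random` and `hmac`/MD5 stand in for SWATT's own PRG and keyed MAC, which the paper does not reproduce:

```python
import hashlib
import hmac
import random

def swatt_measure(memory: bytes, seed: int, key: bytes,
                  samples: int = 256) -> str:
    """Traverse memory in a pseudorandom order derived from the
    verifier's seed, then MAC the sampled bytes with the verifier's
    key. A fresh seed forces the device to touch unpredictable
    locations on every challenge."""
    prg = random.Random(seed)
    sampled = bytes(memory[prg.randrange(len(memory))]
                    for _ in range(samples))
    return hmac.new(key, sampled, hashlib.md5).hexdigest()
```

The weakness noted above is visible in the sketch: anyone holding `seed`, `key`, and a clean copy of `memory` can compute the same digest without running on the device.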
2.5 Attacks against software based attestation schemes

Genuinity has been shown to have weaknesses by two works [8], [5]. In [8] it is described that Genuinity would fail against a range of attacks known as substitution attacks. The paper suggests placing attack code on the same physical page as the checksum code. The attack code leaves the checksum code unmodified and writes itself to the zero-filled locations in the page. If the pseudorandom traversal maps into the page holding the imposter code, the attack code redirects the read to return byte values from the original code page. The authors of Genuinity countered these findings by stating that the attack scenario does not account for the time required to extract test cases from the network, analyze them, find appropriate places to hide code, and finally produce code to forge the checksum operations [9]. The attacks were constructed against one specific instance of checksum generation, and would require complex re-engineering to succeed against all possible test cases. This would require a large code base to perform the attack, and such a large code base would not be easy to hide.
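The substitution attack described in [8] amounts to a read filter in front of the checksum traversal; a hedged sketch, with the page layout simplified to plain byte ranges:

```python
def substituted_read(addr: int, memory: bytearray,
                     imposter_range: range,
                     clean_page: bytes, page_base: int) -> int:
    """Sketch of the substitution attack: reads that land on bytes
    the attack code overwrote are redirected to a saved clean copy,
    so the checksum still observes the original page contents."""
    if addr in imposter_range:
        return clean_page[addr - page_base]
    return memory[addr]
```

The defense argued in [9] is that this filter must be regenerated for every new challenge instance, which is costly under a tight response deadline.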


In [5] it is suggested that Genuinity has a mobile code problem: an attacker can exploit vulnerabilities of mobile code, since code is sent over the network to be executed on the client platform. In addition, the paper states that Genuinity reads 32-bit words when computing a checksum and hence will be vulnerable if an attack is constructed to avoid the lower 32 bits of memory regions. Both claims are countered by the authors of Genuinity [9]. The first is countered by stating that Genuinity incorporates public key signing, which prevents modification of the mobile code by an attacker; the second by stating that Genuinity reads 32 bits at a time, not the lower 32 bits of an address.
A generic attack on software checksum based operations has been proposed [19]. This attack is based on installing a kernel patch that redirects data accesses of integrity measurement code to a different page in memory containing a clean copy of the code. The attack requires installing a rootkit to change the page table address translation routine in the OS. Although this scheme potentially defeats many software based techniques, the authors themselves note that it is difficult for this attack to work on an x86 based 64-bit machine that does not use segmentation, because the architecture does not provide the ability to use offsets for code and data segments. Moreover, an attack like this requires the installation of a kernel level rootkit that continuously redirects all read accesses to different pages in memory. The attestation scheme presented in this paper for the user application cannot defend itself against this attack; however, the scheme presented in this work to determine the integrity of the kernel is capable of detecting such modifications. In addition, Pioneer [4] suggests a workaround for this class of attacks: multiple virtual address aliases create extra entries in the page table, which leads the OS eventually to flush out the spurious pages.
