A survey of Microarchitectural Side-channel Vulnerabilities, Attacks, and Defenses in Cryptography



Cache bank. The adversary can even obtain finer-grained side-channel information than the cache line. A cache line is divided into multiple cache banks. Concurrent requests to the same line but different banks can be served in parallel, whereas requests to the same bank cause a conflict, resulting in an observable execution delay. This cache bank conflict can reveal the access pattern of the secret within one cache line. Yarom et al. [226] demonstrated such a side-channel attack on the L1 cache targeting RSA in OpenSSL. Moghimi et al. [139] designed a cache attack on the SGX platform based on the false dependency of memory read-after-write (i.e., 4K aliasing). This creates a new timing channel, enabling the adversary to observe memory accesses within the same cache line at different offsets.
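To illustrate the measurement primitive behind such attacks, the following is a minimal sketch (not the attack from [139]) assuming an x86-64 Linux machine and gcc. It times a store immediately followed by a load; when the two addresses share their low 12 bits but lie on different pages (the hypothetical offsets OFF and OFF ^ 0x80 are illustrative choices), the load may suffer the 4K-aliasing false dependency and the pair takes measurably longer. The effect can be small and noisy on some microarchitectures.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>

/* Time a store to *st immediately followed by a load from *ld. */
static inline uint64_t time_store_load(volatile uint8_t *st, volatile uint8_t *ld) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    *st = 1;              /* store */
    (void)*ld;            /* immediately following load */
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void) {
    /* Two buffers on different 4KB pages. */
    uint8_t *a = aligned_alloc(4096, 4096);
    uint8_t *b = aligned_alloc(4096, 4096);
    const int OFF = 0x40;              /* hypothetical offset under test */
    const int N = 100000;
    uint64_t alias = 0, no_alias = 0;

    for (int i = 0; i < N; i++) {
        /* Same low 12 address bits: candidate for 4K aliasing. */
        alias    += time_store_load(&a[OFF], &b[OFF]);
        /* Different low 12 bits: no false dependency expected. */
        no_alias += time_store_load(&a[OFF], &b[OFF ^ 0x80]);
    }
    printf("same page offset:      %lu cycles avg\n", alias / N);
    printf("different page offset: %lu cycles avg\n", no_alias / N);
    return 0;
}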


Memory Page Level


The memory page is the smallest unit for memory management in the OS and computer architecture. It is a contiguous and aligned memory block of a specific size, e.g., 4KB. The microarchitectural components responsible for manipulating memory pages can leak side-channel information at the granularity of the page size, which is coarser than that of instruction-level or cache-level attacks, but still allows the adversary to steal secrets from certain applications.
Page. The TLB is an address translation cache, which is similar to CPU caches in terms of timing channels. Gras et al. [86] introduced a TLB-based side-channel attack, where interference with the TLB is exploited to infer the victim's memory page trace. Canella et al. [40] identified a new attack that exploits interactions with the store buffer to leak information about store addresses.
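The timing difference that TLB channels rely on can be illustrated with a minimal sketch, assuming an x86-64 Linux machine, gcc, and hypothetical parameters (PAGES, PAGE). It compares an access whose translation is still cached in the TLB with one whose TLB entry has been evicted by touching many other pages; the data is flushed from the caches in both cases, so the difference mainly reflects the page-table walk. The walk cost may be partially hidden if the page-table entries remain cached.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <x86intrin.h>

#define PAGES 4096   /* enough pages to evict the target's TLB entry on most CPUs */
#define PAGE  4096

static inline uint64_t time_access(volatile uint8_t *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void) {
    uint8_t *buf = mmap(NULL, (size_t)PAGES * PAGE, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
    volatile uint8_t *target = buf;

    /* Case 1: translation hot in the TLB (target touched just before). */
    (void)*target;
    _mm_clflush((void *)target);     /* drop the data, keep the TLB entry */
    _mm_mfence();
    uint64_t hot = time_access(target);

    /* Case 2: TLB entry evicted by walking over many other pages. */
    for (size_t i = 1; i < PAGES; i++) buf[i * PAGE] = 1;
    _mm_clflush((void *)target);
    _mm_mfence();
    uint64_t cold = time_access(target);

    printf("TLB hit:  %lu cycles\nTLB miss: %lu cycles\n", hot, cold);
    return 0;
}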
Page faults can also be used as side-channel information to capture memory accesses [180, 222]. A malicious OS can allocate a restricted number of physical pages to the victim application. When the application needs to access pages that are not present in memory, a page fault is triggered and reported by the CPU. The OS is thus able to observe the memory pages the application tries to access. This technique, however, can induce huge performance overhead due to the large number of page faults. Researchers then proposed more advanced attacks [198, 207], where the adversary infers the accessed pages from the flags in the page table entries without the need to raise page faults. Moghimi et al. [141] combined the SGX-Step mechanism [197] with the page-fault-based technique to count the number of instructions issued within one page. This reveals finer-grained (instruction-level) information about the victim program inside SGX enclaves for cryptanalysis.
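The principle can be sketched in user space on Linux with gcc: access to the pages of interest is revoked with mprotect(PROT_NONE), and a SIGSEGV handler records which page a secret-dependent access faults on before restoring access. This is only a simplified analogue of the attacks above, in which a malicious OS makes the same observation on an enclave's page tables; the two-page table and the secret bit here are hypothetical.

#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define PAGE 4096
static uint8_t *table;                       /* 2 pages; which one is read depends on a "secret" bit */
static volatile sig_atomic_t observed_page;  /* set by the observer (signal handler) */

static void handler(int sig, siginfo_t *info, void *ctx) {
    (void)sig; (void)ctx;
    uintptr_t page = (uintptr_t)info->si_addr & ~(uintptr_t)(PAGE - 1);
    observed_page = (page - (uintptr_t)table) / PAGE;      /* faulting page index = secret bit */
    mprotect((void *)page, PAGE, PROT_READ | PROT_WRITE);  /* restore access so the victim continues */
}

int main(void) {
    table = mmap(NULL, 2 * PAGE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    int secret = 1;                             /* hypothetical secret bit */
    mprotect(table, 2 * PAGE, PROT_NONE);       /* revoke access to both pages */
    volatile uint8_t v = table[secret * PAGE];  /* victim's secret-dependent access */
    (void)v;

    printf("[observer] victim touched page %d (the secret bit)\n", (int)observed_page);
    return 0;
}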
DRAM bank row. Each DRAM bank has a row buffer that caches the most recently used DRAM row, which normally contains multiple pages. It accelerates memory accesses, but also introduces a timing channel. Pessl et al. [158] designed a DRAM-based attack by reverse engineering the DRAM addressing schemes. This attack is less practical, as it can only recover very coarse-grained information. However, Kwong et al. [123] recently exploited the data-dependent bit flips induced by Rowhammer [118] to reveal an RSA private key stored in adjacent pages bit by bit.
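The underlying measurement can be sketched as follows, assuming an x86-64 Linux machine, gcc, and hypothetical parameters (SIZE, ROUNDS); the DRAM addressing function itself is not known to the sketch. It times alternating, uncached accesses to random address pairs: pairs that happen to map to the same bank but different rows cause row-buffer conflicts and show noticeably higher latency, which is the signal used to reverse engineer the addressing scheme in [158].

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <x86intrin.h>

#define SIZE   (1UL << 28)   /* 256 MiB, pre-faulted; spans many banks and rows */
#define ROUNDS 1000

/* Average latency of accessing both addresses with the caches bypassed. */
static uint64_t pair_latency(volatile uint8_t *a, volatile uint8_t *b) {
    unsigned aux;
    uint64_t total = 0;
    for (int i = 0; i < ROUNDS; i++) {
        _mm_clflush((void *)a);
        _mm_clflush((void *)b);
        _mm_mfence();
        uint64_t t0 = __rdtscp(&aux);
        (void)*a;            /* both loads go to DRAM; if they hit the same   */
        (void)*b;            /* bank but different rows, a row must be reopened */
        uint64_t t1 = __rdtscp(&aux);
        total += t1 - t0;
    }
    return total / ROUNDS;
}

int main(void) {
    uint8_t *mem = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
    srand(1);
    for (int i = 0; i < 16; i++) {
        volatile uint8_t *a = mem + ((size_t)rand() % SIZE);
        volatile uint8_t *b = mem + ((size_t)rand() % SIZE);
        printf("pair %2d: %lu cycles\n", i, pair_latency(a, b));
    }
    return 0;
}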
