PhD Seminar Notice: High Performance and Memory Request Handling For Safety-Critical Multicores

Friday, September 13, 2024 1:00 pm - 2:00 pm EDT (GMT -04:00)

Candidate: Zhuanhao Wu

Date: September 13, 2024

Time: 1:00 PM

Place: EIT 3145

Supervisor(s): Patel, Hiren

Abstract:

Safety-critical embedded systems (SCES) are deployed on hardware platforms that must deliver both high performance and predictability. Multi-core platforms, proven effective in general-purpose computing, are now used in SCES. However, ensuring predictability on multi-cores for SCES is challenging due to contention for shared resources, such as a memory hierarchy with multiple cache levels. A key aspect of predictability is honoring the worst-case execution time of tasks, which requires bounding the worst-case latency (WCL) of memory accesses. Our studies focus on adopting performance-enhancing techniques in SCES multi-cores while managing their impact on the WCL.

We study the sharing of an inclusive last-level cache (LLC) in SCES multi-cores and its impact on the WCL. Our study reveals that back-invalidations (BIs) in an inclusive LLC negatively impact the WCL, potentially leaving it unbounded. When LLC access is constrained to a one-slot Time Division Multiplex (1S-TDM) schedule, our timing analysis shows that the WCL is bounded and can be improved to cubic in the number of cores through a simple hardware extension. Despite being bounded, the WCL with an inclusive LLC remains higher than that of a predictable memory hierarchy without an LLC. We propose an improved LLC design, ZeroCost-LLC (ZCLLC), that eliminates back-invalidations by leveraging the LLC's larger capacity relative to the private caches. ZCLLC achieves a WCL linear in the number of cores, on par with state-of-the-art predictable data-sharing techniques without an LLC.
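To give intuition for why TDM arbitration yields a bounded per-access wait, the following is a minimal sketch of round-robin one-slot TDM, assuming equal-length slots and one slot per core per period; the function and parameter names are illustrative and not taken from the talk.

```python
# Illustrative one-slot TDM arbitration: each of n_cores owns one fixed slot
# per period (slot i belongs to core i), so a request that just missed its
# own slot waits at most n_cores - 1 slots before it may access the shared LLC.

def tdm_wait_slots(n_cores: int, requester: int, arrival_slot: int) -> int:
    """Number of slots a request from core `requester` waits when it
    arrives at `arrival_slot` (slot indices cycle 0 .. n_cores - 1)."""
    return (requester - arrival_slot) % n_cores

# Worst case over all arrival slots for core 2 in a 4-core system:
# arriving just after its own slot means waiting n_cores - 1 = 3 slots.
worst = max(tdm_wait_slots(4, 2, s) for s in range(4))
```

This bounded per-access wait is the starting point of the timing analysis; the cubic and linear WCL results in the abstract account for the additional latency contributed by back-invalidations and data sharing, which this sketch does not model.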

Another performance optimization in the memory hierarchy is buffering multiple outstanding memory requests (MOMR) in private caches. Supporting MOMR requires processing memory requests in compliance with memory consistency models (MCMs). Load-to-load ordering (L2L) is a constraint in common MCMs such as sequential consistency and total store order. General-purpose approaches to enforcing L2L retry memory requests when potentially L2L-violating stores are detected, enlarging the WCL. We propose a technique that enforces L2L by detecting potential violations and delaying the L2L-violating stores, ensuring that each request is issued only once to the memory hierarchy.
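To illustrate the L2L constraint itself (not the proposed hardware technique), the following is a small sketch of the classic message-passing litmus test: core 0 writes data then a flag, core 1 reads the flag then the data. If core 1's two loads may be reordered, the outcome r1 == 1 with r2 == 0 becomes observable, which sequential consistency and total store order both forbid. All names here are illustrative assumptions, not from the talk.

```python
from itertools import combinations

def interleavings(a, b):
    """All merges of sequences a and b that preserve each one's own order."""
    n = len(a) + len(b)
    for idx in combinations(range(n), len(a)):
        ai, bi = iter(a), iter(b)
        yield [next(ai) if i in idx else next(bi) for i in range(n)]

def run(ops):
    """Execute a single interleaving; stores write 1, loads read into registers."""
    mem, regs = {"data": 0, "flag": 0}, {}
    for op in ops:
        if op[0] == "st":
            mem[op[1]] = 1
        else:
            regs[op[2]] = mem[op[1]]
    return (regs["r1"], regs["r2"])

def observable(reorder_loads: bool):
    """Set of (r1, r2) outcomes, optionally allowing core 1's loads to swap."""
    c0 = [("st", "data"), ("st", "flag")]            # core 0: data = 1; flag = 1
    loads = [("ld", "flag", "r1"), ("ld", "data", "r2")]  # core 1: r1 = flag; r2 = data
    orders = [loads, loads[::-1]] if reorder_loads else [loads]
    return {run(ops) for o in orders for ops in interleavings(c0, o)}
```

Enumerating outcomes shows that (1, 0) appears only when the loads may be reordered; hardware that honors L2L, whether by retrying loads or, as in the proposed technique, by delaying the violating stores, must exclude exactly this outcome.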