Performance Characterization of .NET Benchmarks
- Aniket Deshmukh,
- Ruihao Li,
- Rathijit Sen,
- Robert R. Henry,
- Monica Beckwith,
- Gagan Gupta
International Symposium on Performance Analysis of Systems and Software (ISPASS) | Published by IEEE
Managed language frameworks are pervasive today, especially in modern datacenters. .NET is one such framework that is used widely in Microsoft Azure but has not been well studied. Applications built on these frameworks have different characteristics from traditional SPEC-like programs due to the presence of a managed runtime, which affects the tradeoffs involved in designing hardware for such applications. Our goal is to study hardware performance bottlenecks in .NET applications. To find suitable benchmarks, we use Principal Component Analysis (PCA) to identify redundancies in a set of open-source .NET and ASP.NET benchmarks and hierarchical clustering to create representative subsets. We perform microarchitecture- and application-level characterization of these subsets and show that they differ significantly from SPEC CPU2017 benchmarks in branch and memory behavior, and hence merit consideration in architecture research. In-depth analysis using the Top-Down methodology reveals that .NET benchmarks are significantly more frontend bound. We also analyze the effect of managed runtime events such as JIT (Just-in-Time) compilation and GC (Garbage Collection). Among other findings, GC significantly improves cache performance, and JIT compilation could benefit from aggressive prefetching and transformation of hardware microarchitectural state to prevent frequent cold starts. As computing increasingly moves to the cloud and managed languages continue to grow in popularity, it is important to consider .NET-like benchmarks in architecture studies.
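The abstract's benchmark-subsetting step (PCA followed by hierarchical clustering) follows a standard workload-characterization workflow. The sketch below is a minimal illustration of that general workflow, not the paper's actual pipeline: the benchmark names are hypothetical, the feature matrix is placeholder data standing in for measured performance-counter characteristics, and the choice of 90% explained variance and three clusters is arbitrary.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical feature matrix: one row per benchmark, one column per measured
# characteristic (e.g., branch misprediction rate, cache miss rates,
# instruction mix). Names and values are illustrative placeholders.
benchmarks = ["bench_a", "bench_b", "bench_c", "bench_d", "bench_e"]
features = np.random.rand(len(benchmarks), 12)

# Normalize each characteristic so no single metric dominates the analysis.
scaled = StandardScaler().fit_transform(features)

# Keep enough principal components to explain 90% of the variance,
# collapsing redundant (highly correlated) characteristics.
reduced = PCA(n_components=0.9).fit_transform(scaled)

# Hierarchically cluster benchmarks in the reduced space and cut the
# dendrogram into k clusters; picking one member per cluster yields a
# representative subset.
k = 3
labels = fcluster(linkage(reduced, method="ward"), t=k, criterion="maxclust")

representatives = {}
for name, label in zip(benchmarks, labels):
    representatives.setdefault(label, name)  # first member represents its cluster
print(sorted(representatives.values()))
```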