Should we worry about memory loss?
Perks, O. F. J., Hammond, Simon D., Pennycook, Simon J. and Jarvis, Stephen A. (2010) Should we worry about memory loss? In: 1st International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems (PMBS 10), held in conjunction with IEEE/ACM Supercomputing 2010 (SC'10), New Orleans, LA, USA, 13-19 Nov 2010.
Official URL: http://www.dcs.warwick.ac.uk/~sdh/pmbs10/Home.html
In recent years the High Performance Computing (HPC) industry has benefited from the development of higher-density multi-core processors. With recent chips capable of executing up to 32 tasks in parallel, this rate of growth shows no sign of slowing. Alongside this growth in processor density, random access memory (RAM) capacities have improved at a considerably more modest rate. The net effect is that the available memory-per-core has fallen, and current projections suggest it is set to fall further.
In this paper we present three studies into the use and measurement of memory in parallel applications; our aim is to capture, understand and, where possible, reduce the memory-per-core needed by complete, multi-component applications. First, we present benchmarked memory usage and runtimes of six scientific benchmarks, which represent algorithms common to a host of production-grade codes. Memory usage of each benchmark is measured and reported for a variety of compiler toolkits, and we show greater than 30% variation in memory high-water-mark requirements between compilers. Second, we combine this benchmark memory data with runtime data to simulate, via the Maui scheduler simulator, the effect on a multi-science workflow of reducing memory-per-core from 1.5GB to only 256MB. Finally, we present initial results from a new memory profiling tool currently in development at the University of Warwick. This tool is applied to a finite-element benchmark and is able to map high-water-mark memory allocations to individual program functions. This demonstrates a lightweight and accurate method of identifying potential memory problems, a technique we expect to become commonplace as memory capacities decrease.
Item Type: Conference Item (Paper)
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software
Divisions: Faculty of Science > Computer Science
Official Date: 23 November 2010
Access rights to Published version: Restricted or Subscription Access
Conference Paper Type: Paper
Title of Event: 1st International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems (PMBS 10)
Type of Event: Workshop
Location of Event: Held in conjunction with IEEE/ACM Supercomputing 2010 (SC'10), New Orleans, LA, USA
Date(s) of Event: 13-19 Nov 2010