University of Warwick
Publications service & WRAP

Analysing the influence of InfiniBand choice on OpenMPI memory consumption


Perks, O. F. J., Beckingsale, David A., Dawes, A. S., Herdman, J. A., Mazauric, C. and Jarvis, Stephen A. (2013) Analysing the influence of InfiniBand choice on OpenMPI memory consumption. In: International Workshop on High Performance Interconnection Networks, Helsinki, Finland, 1-5 July 2013. Published in: 2013 International Conference on High Performance Computing and Simulation (HPCS) pp. 186-193. ISBN 9781479908363. doi:10.1109/HPCSim.2013.6641412

File: Jarvis_HPIN.pdf (337 KB). Embargoed item; restricted access to Repository staff only.
Official URL: http://dx.doi.org/10.1109/HPCSim.2013.6641412


Abstract

The ever-increasing scale of modern high performance computing platforms poses challenges for system architects and code developers alike. The increase in core count densities and the associated cost of components is having a dramatic effect on the viability of high memory-per-core ratios. Whilst the available memory per core is decreasing, the increased scale of parallel jobs is testing the efficiency of MPI implementations with respect to memory overhead. Scalability issues have always plagued both hardware manufacturers and software developers, and the combined effects can be disabling. In this paper we address the issue of MPI memory consumption with regard to InfiniBand network communications. We reaffirm some widely held beliefs regarding the existence of scalability problems under certain conditions. Additionally, we present results from testing memory-optimised runtime configurations and vendor-provided optimisation libraries. Using Orthrus, a linear solver benchmark developed by AWE, we demonstrate these memory-centric optimisations and their performance implications. We show the growth of OpenMPI memory consumption (demonstrating poor scalability) on both Mellanox and QLogic InfiniBand platforms. With a default OpenMPI configuration on Mellanox, we demonstrate a 616× increase in MPI memory consumption for a 64× increase in core count. Through the use of the Mellanox MXM and QLogic PSM optimisation libraries we observe reductions of 117× and 115×, respectively, in MPI memory at the application's memory high-water mark. This significantly improves the potential scalability of the code.
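The kind of measurement the abstract describes can be approximated with a short MPI program: force every rank to communicate with every other rank (which is where per-peer InfiniBand connection state inflates MPI memory), then read each process's memory high-water mark. The sketch below is an illustration only, not the Orthrus benchmark used in the paper; it assumes a Linux /proc filesystem for the VmHWM counter.

/* Minimal sketch (not the paper's Orthrus benchmark): force pairwise
 * connections with MPI_Alltoall, then report the per-process memory
 * high-water mark (VmHWM) across all ranks. Linux-specific. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Peak resident set size in kB, from /proc/self/status; -1 on failure. */
static long vm_hwm_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;
    if (!f) return -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmHWM:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* An all-to-all makes every rank talk to every other rank; with a
     * default OpenMPI InfiniBand configuration this is where per-peer
     * connection state (queue pairs, buffers) drives memory growth. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) sendbuf[i] = rank;
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    long local = vm_hwm_kb(), max_kb = 0, sum_kb = 0;
    MPI_Reduce(&local, &max_kb, 1, MPI_LONG, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&local, &sum_kb, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d ranks: max VmHWM %ld kB, mean %ld kB\n",
               size, max_kb, sum_kb / size);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Running the same binary at increasing core counts, once with OpenMPI's default transport and once with the vendor library selected (in OpenMPI of that era, selection was via MCA parameters such as --mca pml cm --mca mtl mxm for Mellanox MXM or --mca mtl psm for QLogic PSM; exact component names vary by OpenMPI version), isolates the transport's contribution to the high-water mark.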

Item Type: Conference Item (Paper)
Divisions: Faculty of Science > Computer Science
Journal or Publication Title: 2013 International Conference on High Performance Computing and Simulation (HPCS)
Publisher: IEEE
ISBN: 9781479908363
Book Title: 2013 International Conference on High Performance Computing & Simulation (HPCS)
Official Date: July 2013
Dates: July 2013 (Published)
Page Range: pp. 186-193
DOI: 10.1109/HPCSim.2013.6641412
Status: Peer Reviewed
Publication Status: Published
Access rights to Published version: Restricted or Subscription Access
Conference Paper Type: Paper
Title of Event: International Workshop on High Performance Interconnection Networks
Type of Event: Workshop
Location of Event: Helsinki, Finland
Date(s) of Event: 1-5 July 2013
