University of Warwick
Publications service & WRAP
The Library
Developing a loss prediction-based asynchronous stochastic gradient descent algorithm for distributed training of deep neural networks


Li, Junyu, He, Ligang, Ren, Shenyuan and Mao, Rui (2020) Developing a loss prediction-based asynchronous stochastic gradient descent algorithm for distributed training of deep neural networks. In: 49th International Conference on Parallel Processing (ICPP2020), Virtual conference, 17-20 Aug 2020. Published in: ICPP '20: 49th International Conference on Parallel Processing - ICPP ISBN 9781450388160. doi:10.1145/3404397.3404432

PDF (Accepted Version): WRAP-Developing-loss-prediction-based-asynchronous-stochastic-Li-2020.pdf (1,257 KB)
Official URL: https://doi.org/10.1145/3404397.3404432

Abstract

Training deep neural networks is a computation-intensive and time-consuming task. Asynchronous Stochastic Gradient Descent (ASGD) is an effective way to accelerate training, since it allows the network to be trained in a distributed fashion, but it suffers from delayed gradient updates. A recent notable work, DC-ASGD, improves the performance of ASGD by compensating for the delay using a cheap approximation of the Hessian matrix. DC-ASGD works well with short delays; however, its performance drops considerably as the delay between the workers and the server grows. In real-life large-scale distributed training, the gradient delay experienced by a worker is usually high and volatile. In this paper, we propose a novel algorithm called LC-ASGD that compensates for the delay based on loss prediction, effectively extending the delay duration that the compensation mechanism can tolerate. Specifically, LC-ASGD maintains additional models on the parameter server that predict each worker's loss from its historical losses and use the prediction to compensate for the delay. The algorithm is evaluated on popular networks and benchmark datasets. The experimental results show that LC-ASGD significantly improves over existing methods, especially when the networks are trained with a large number of workers.
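The abstract's baseline, DC-ASGD, compensates a stale gradient with a cheap diagonal approximation of the Hessian; LC-ASGD's own loss-prediction models are detailed only in the full paper. As a minimal sketch of the compensation idea the paper builds on (not the authors' implementation; the rule, learning rate `lr`, and variance-control constant `lam` are illustrative assumptions):

```python
import numpy as np

def dc_asgd_update(w, stale_grad, w_backup, lr=0.1, lam=0.04):
    """One delay-compensated parameter-server update in the style of DC-ASGD.

    stale_grad was computed by a worker at the older weights w_backup; the
    server weights w have moved on in the meantime. The correction term
    lam * g * g * (w - w_backup) is the cheap diagonal Hessian
    approximation mentioned in the abstract; with zero delay
    (w == w_backup) it vanishes and this reduces to plain SGD.
    """
    compensated = stale_grad + lam * stale_grad * stale_grad * (w - w_backup)
    return w - lr * compensated

# Toy quadratic loss L(w) = 0.5 * ||w - target||^2, so grad(w) = w - target.
target = np.array([3.0, -1.0])
w_backup = np.array([0.0, 0.0])    # weights the worker read (now stale)
w_server = np.array([0.5, -0.2])   # server weights after other workers' updates
stale_grad = w_backup - target     # gradient the worker computed at w_backup
w_new = dc_asgd_update(w_server, stale_grad, w_backup)
```

In a real deployment the server would apply such an update per worker message; LC-ASGD replaces the Hessian-based correction with one driven by a loss predictor trained on each worker's historical losses.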

Item Type: Conference Item (Paper)
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software
Divisions: Faculty of Science, Engineering and Medicine > Science > Computer Science
Library of Congress Subject Headings (LCSH): Neural networks (Computer science), Machine learning, Computer-assisted instruction
Journal or Publication Title: ICPP '20: 49th International Conference on Parallel Processing - ICPP
Publisher: ACM
ISBN: 9781450388160
Official Date: 17 August 2020
Dates:
  • 26 May 2020: Accepted
  • 17 August 2020: Published
Article Number: 47
DOI: 10.1145/3404397.3404432
Status: Peer Reviewed
Publication Status: Published
Reuse Statement (publisher, data, author rights): "© ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn"
Access rights to Published version: Restricted or Subscription Access
Date of first compliant deposit: 7 August 2020
Date of first compliant Open Access: 7 August 2020
RIOXX Funder/Project Grant:
  • Project/Grant ID: DBIR2019001; Funder: ACCF-Huawei; Funder ID: unspecified
  • Project/Grant ID: 2018B030325002; Funder: Guangdong Power Grid Company; Funder ID: http://dx.doi.org/10.13039/501100012302
Conference Paper Type: Paper
Title of Event: 49th International Conference on Parallel Processing (ICPP2020)
Type of Event: Conference
Location of Event: Virtual conference
Date(s) of Event: 17-20 Aug 2020
Related URLs:
  • Organisation


Email us: wrap@warwick.ac.uk