University of Warwick
Publications service & WRAP

Bayesian optimisation with multi-task Gaussian processes


Pearce, Michael Arthur Leopold (2019) Bayesian optimisation with multi-task Gaussian processes. PhD thesis, University of Warwick.

PDF (Submitted Version): WRAP_Theses_Pearce_2019.pdf — Download (5 MB)
Official URL: http://webcat.warwick.ac.uk/record=b3520418~S15


Abstract

Gaussian processes are simple, efficient regression models that allow a user to encode abstract prior beliefs, such as smoothness or periodicity, and that provide predictions with uncertainty estimates. Multi-task Gaussian processes extend these methods to model functions with multiple outputs, or functions over joint continuous and categorical domains. Bayesian optimisation is the field that uses a Gaussian process as a surrogate model of an expensive function to guide the search for its peak. Within this field, the Knowledge Gradient is an effective family of methods based on a simple value-of-information derivation, yet there are many problems to which it has not been applied. We consider a variety of such problems and derive new algorithms using the same value-of-information framework, yielding significant improvements over many previous methods. We first propose the Regional Expected Value of Improvement (REVI) method for learning the best of a set of candidate solutions at each point of a domain where the best solution varies across the domain; for example, the best of a set of treatments varies across the domain of patients. We next generalise this method to a range of continuous global optimisation problems, multi-task conditional global optimisation, in which querying one objective or task can inform the optimisation of the others. We then follow with a natural extension of the Knowledge Gradient to the optimisation of functions that are an average over tasks which the user aims to maximise. Finally, we cast simulation optimisation with common random numbers as optimisation of an infinite sum of tasks, where each task is the objective evaluated with a single random number seed. We therefore propose the Knowledge Gradient for Common Random Numbers, which sequentially determines a seed and a solution in order to optimise the unobservable infinite average over seeds.
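As background for the abstract, the value-of-information idea behind the Knowledge Gradient can be sketched in a few lines: fit a Gaussian-process surrogate on a discretised one-dimensional domain, then score each candidate point by a Monte Carlo estimate of how much sampling it would raise the maximum of the posterior mean. This is a minimal illustrative sketch, not any of the thesis's algorithms; the toy objective, kernel, lengthscale, and all function names are assumptions introduced here.

```python
import numpy as np

def f(x):
    # Toy black-box objective standing in for an expensive simulation.
    return np.sin(3.0 * x) + 0.5 * np.cos(7.0 * x)

def rbf(A, B, ls=0.25):
    # Squared-exponential kernel: encodes a smoothness prior belief.
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Posterior mean vector and covariance matrix of the GP on the grid Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    Ks = rbf(X, Xs)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    V = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, rbf(Xs, Xs) - V.T @ V

def knowledge_gradient(mu, cov, noise=1e-6, n_mc=128, rng=None):
    # Monte Carlo value of information: KG(x) = E[max mu_new] - max mu,
    # where mu_new is the posterior mean after a hypothetical sample at x.
    rng = np.random.default_rng(0) if rng is None else rng
    z = rng.standard_normal(n_mc)
    s = np.sqrt(np.maximum(np.diag(cov), 0.0) + noise)
    sigma_tilde = cov / s  # column j: mean update from one sample at grid[j]
    new_max = (mu[None, :, None]
               + sigma_tilde[None, :, :] * z[:, None, None]).max(axis=1)
    return new_max.mean(axis=0) - mu.max()

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 2.0, 101)
X = rng.uniform(0.0, 2.0, 3)          # small initial design
y = f(X)
for _ in range(15):                   # sequential Bayesian optimisation loop
    mu, cov = gp_posterior(X, y, grid)
    j = int(np.argmax(knowledge_gradient(mu, cov)))
    X, y = np.append(X, grid[j]), np.append(y, f(grid[j]))
mu, _ = gp_posterior(X, y, grid)
x_rec = grid[np.argmax(mu)]           # recommend the posterior-mean maximiser
print("recommended x:", x_rec, "f(x):", f(x_rec))
```

The acquisition here values a sample not by its own predicted quality but by how much it is expected to improve the final recommendation, which is the one-step value-of-information reasoning that the thesis extends to multi-task settings.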

Item Type: Thesis (PhD)
Subjects: Q Science > QA Mathematics
Library of Congress Subject Headings (LCSH): Gaussian processes, Bayesian statistical decision theory, Mathematical optimization, Algorithms
Official Date: September 2019
Dates: September 2019 (event: UNSPECIFIED)
Institution: University of Warwick
Theses Department: Centre for Complexity Science
Thesis Type: PhD
Publication Status: Unpublished
Supervisor(s)/Advisor: Branke, Jürgen
Sponsors: Engineering and Physical Sciences Research Council
Extent: x, 188 leaves : illustrations, charts
Language: eng
