
Pareto-frontier Entropy Search with Variational Lower Bound Maximization

Masanori Ishikura and Masayuki Karasuyama

This study considers multi-objective Bayesian optimization (MOBO) through the information gain of the Pareto-frontier. To calculate the information gain, a predictive distribution conditioned on the Pareto-frontier plays a key role; it is defined as a distribution truncated by the Pareto-frontier. However, it is usually impossible to obtain the entire Pareto-frontier in a continuous domain, and therefore the exact truncation cannot be known. We consider approximating the truncated distribution with a mixture of two approximate truncations obtainable from a subset of the Pareto-frontier, which we call over- and under-truncation. Since the optimal balance of the mixture is unknown beforehand, we propose optimizing the balancing coefficient within a variational lower bound maximization framework, which minimizes the approximation error of the information gain. Our empirical evaluation demonstrates the effectiveness of the proposed method, particularly when the number of objective functions is large.
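
To make the mixture construction concrete, below is a minimal one-dimensional sketch, not the paper's implementation: a single Gaussian predictive stands in for the GP posterior, scalar thresholds t_over and t_under stand in for the over- and under-truncation induced by a Pareto-front subset, and the mixture weight is fit by maximizing a Monte Carlo estimate of the expected log-density under the exact truncated distribution, a stand-in for the paper's variational lower bound on the information gain. All numerical values are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

# GP predictive mean/std at a candidate input (illustrative values)
mu, sigma = 0.0, 1.0

# Hypothetical truncation thresholds (maximization setting): the exact
# bound t_star lies between the over-truncation bound t_over and the
# under-truncation bound t_under derived from a Pareto-front subset.
t_over, t_star, t_under = 0.3, 0.6, 1.0

def trunc_dist(upper):
    # Gaussian truncated from above at `upper`.
    a, b = -np.inf, (upper - mu) / sigma
    return truncnorm(a, b, loc=mu, scale=sigma)

p_over, p_under = trunc_dist(t_over), trunc_dist(t_under)
p_star = trunc_dist(t_star)  # exact truncation, unknown in practice

def mixture_logpdf(y, w):
    # log of q_w(y) = w * p_over(y) + (1 - w) * p_under(y);
    # small epsilon avoids log(0) where p_over has no support.
    return np.log(w * p_over.pdf(y) + (1.0 - w) * p_under.pdf(y) + 1e-300)

# Fit the balancing coefficient w by grid search on a Monte Carlo
# estimate of E_{p_star}[log q_w], a simple variational-style objective.
y_samples = p_star.rvs(size=20000, random_state=0)
ws = np.linspace(0.0, 1.0, 101)
scores = [mixture_logpdf(y_samples, w).mean() for w in ws]
w_best = ws[int(np.argmax(scores))]
print(f"optimal mixture weight w = {w_best:.2f}")
```

With these toy thresholds the best weight is typically interior to (0, 1), reflecting the abstract's point that neither pure over- nor pure under-truncation is optimal on its own.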

Code


https://github.com/SheffieldML/GPy

Tasks


Maximizing L ≥ 2 objective functions

Datasets


Gaussian process generated functions and benchmark functions
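
As an illustration of how Gaussian-process-generated test functions can be drawn with the linked GPy library, the following is a sketch under assumed settings (RBF kernel, unit variance, lengthscale 0.2, three objectives); the paper's actual generation settings are not reproduced here.

```python
import numpy as np
import GPy

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))   # candidate inputs in [0, 1]

# RBF kernel with illustrative hyperparameters (not from the paper)
kern = GPy.kern.RBF(input_dim=1, variance=1.0, lengthscale=0.2)
K = kern.K(X) + 1e-8 * np.eye(len(X))      # jitter for numerical stability

# Draw L = 3 independent objective functions from the GP prior
F = rng.multivariate_normal(np.zeros(len(X)), K, size=3).T
print(F.shape)  # (200, 3): one column per objective
```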

Problems


Multi-objective Bayesian optimization (MOBO)

Methods


Pareto-frontier Entropy search with Variational lower bound maximization (PFEV)

Results from the Paper


PFEV performs better than, or comparably to, existing methods, particularly when the number of objective functions is large.