The underlying GaussianProcessRegressor model instance.
This is initialized with the provided parameters and can be accessed
for further customization or inspection.
When the default optimizer method, _optimizer(), is used as the
optimizer, this routine emits different warnings than
sklearn.gaussian_process.GaussianProcessRegressor.fit(). The latter
reports any convergence failure in L-BFGS-B. This implementation reports
only the last convergence failure among the multiple L-BFGS-B runs, and
only if all of the runs fail. The number of optimization runs is
n_restarts_optimizer + 1.
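The restart-and-warn behavior described above can be sketched as follows. This is an illustrative stand-in for _optimizer(), not the library's actual implementation; the function name and signature are assumptions.

```python
import warnings

import numpy as np
from scipy.optimize import minimize


def multistart_lbfgsb(obj, x0_list):
    """Run L-BFGS-B from several starting points; warn only if all fail.

    `obj` returns a (value, gradient) pair. With `n_restarts_optimizer`
    extra starting points, `x0_list` has n_restarts_optimizer + 1 entries.
    """
    best = None
    any_success = False
    last_failure = None
    for x0 in x0_list:
        res = minimize(obj, x0, jac=True, method="L-BFGS-B")
        if res.success:
            any_success = True
        else:
            last_failure = res.message  # keep only the last failure message
        if best is None or res.fun < best.fun:
            best = res
    if not any_success and last_failure is not None:
        # Unlike sklearn's fit(), warn only when every run failed.
        warnings.warn("All L-BFGS-B runs failed to converge: %s" % last_failure)
    return best
```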
Parameters:
Xnew – m-by-d matrix with m point coordinates in a d-dimensional
space.
Filter values by replacing large function values with the median of all
values. This strategy was proposed by [1] based on results from [2]. Use this
strategy to reduce oscillations of the interpolator, especially if the range
of the target function is large. Note that this filter may reduce the quality
of the approximation by the surrogate.
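A minimal sketch of such a median filter (the library's RbfFilter interface may differ):

```python
import numpy as np


def median_filter(fx):
    """Replace function values above the median with the median itself.

    Large values are clipped, which damps oscillations of the interpolator
    at the cost of some approximation quality near high-valued samples.
    """
    fx = np.asarray(fx, dtype=float)
    return np.minimum(fx, np.median(fx))
```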
\(\beta_i\) are the coefficients of the RBF model.
\(\phi\) is the kernel function.
\(p_i\) are the basis functions of the polynomial tail.
\(n\) is the dimension of the polynomial tail.
This implementation focuses on quick successive updates of the model, which
is essential for the good performance of active learning processes.
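Putting the symbols above together, evaluating the model at a point amounts to summing kernel terms over the interpolation centers plus the polynomial tail. The sketch below assumes a cubic kernel \(\phi(r) = r^3\) and a linear tail; the library's RbfModel additionally handles fitting, integer variables, and incremental updates.

```python
import numpy as np


def rbf_eval(x, centers, beta, c):
    """Evaluate a cubic RBF model with a linear polynomial tail at x.

    s(x) = sum_i beta_i * phi(||x - x_i||) + c[0] + c[1:] . x,
    with phi(r) = r**3. Illustrative only.
    """
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(centers - x, axis=1)  # distances to the centers x_i
    tail = c[0] + np.dot(c[1:], x)           # linear polynomial tail
    return float(np.dot(beta, r ** 3) + tail)
```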
Parameters:
kernel (RadialBasisFunction) – Kernel function \(\phi\) used in the RBF model. (default: CubicRadialBasisFunction())
iindex (tuple[int, ...]) – Indices of integer variables in the feature space. (default: ())
filter (Optional[RbfFilter]) – Filter to be used in the target (image) space. (default: None)
The mu measure was first defined in [3] with suggestions of usage for
global optimization with RBF functions. In [4], the authors detail the
strategy to make the evaluations computationally viable.
The current implementation uses a different strategy from that of Björkman
and Holmström (2000): a single LDLt factorization replaces the QR and
Cholesky factorizations. The new algorithm performs about 10 times fewer
operations than the former. Like the former, the new algorithm can also
exploit high-intensity linear algebra operations when the routine is called
with multiple points \(x\) to be evaluated at once.
Note
Before calling this method, the model must be prepared with
prepare_mu_measure().
Parameters:
x (ndarray) – m-by-d matrix with m point coordinates in a d-dimensional
space.
This routine computes the LDLt factorization of the matrix A, which is
used to compute the mu measure. The factorization is computed only once
and can be reused for multiple calls to mu_measure().
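The factor-once, reuse-many-times pattern can be sketched with SciPy's LDLt routine. The function names below are illustrative; the library caches its factors internally, and a production code would use permuted triangular solves rather than the plain solves used here for brevity.

```python
import numpy as np
from scipy.linalg import ldl


def prepare(A):
    """Compute the LDL^T factorization of A once (cf. prepare_mu_measure())."""
    L, D, _perm = ldl(A)  # A == L @ D @ L.T (L is a permuted unit triangle)
    return L, D


def solve_with_factors(L, D, b):
    """Reuse cached factors to solve A y = b for each new right-hand side."""
    # y = L^{-T} D^{-1} L^{-1} b; general solves keep the sketch short.
    return np.linalg.solve(L.T, np.linalg.solve(D, np.linalg.solve(L, b)))
```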
This routine avoids successive dynamic memory allocations with
successive calls of update(). If the input maxeval is smaller
than the current number of sample points, nothing is done.
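The reserve-then-update pattern described above can be sketched as follows. Class and method names are illustrative, not the library's exact API.

```python
import numpy as np


class SampleBuffer:
    """Grow-free sample storage: reserve() preallocates, update() appends."""

    def __init__(self, dim, ntarget=1):
        self.x = np.empty((0, dim))
        self.y = np.empty((0, ntarget))
        self.n = 0  # number of samples currently stored

    def reserve(self, maxeval, dim, ntarget=1):
        # If maxeval is no larger than the current storage, do nothing.
        if maxeval <= len(self.x):
            return
        x = np.empty((maxeval, dim))
        y = np.empty((maxeval, ntarget))
        x[: self.n] = self.x[: self.n]
        y[: self.n] = self.y[: self.n]
        self.x, self.y = x, y

    def update(self, xnew, ynew):
        m = len(xnew)
        if self.n + m > len(self.x):  # fall back to dynamic growth
            self.reserve(max(2 * len(self.x), self.n + m),
                         self.x.shape[1], self.y.shape[1])
        self.x[self.n:self.n + m] = xnew
        self.y[self.n:self.n + m] = ynew
        self.n += m
```

Calling reserve(maxeval, ...) once up front makes every later update() a plain slice assignment, avoiding repeated reallocations in the active-learning loop.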
Parameters:
maxeval (int) – Maximum number of function evaluations.
dim (int) – Dimension of the domain space.
ntarget (int) – Dimension of the target space. (default: 1)
bounds – List with the limits [x_min,x_max] of each direction x in the search
space.
maxeval (int) – Maximum number of function evaluations.
surrogateModel (Optional[Surrogate]) – Gaussian Process surrogate model. The default is GaussianProcess().
On exit, if provided, the surrogate model contains the points used during
the optimization. (default: None)
acquisitionFunc (Optional[MaximizeEI]) – Acquisition function to be used. (default: None)
batchSize (int) – Number of new sample points to be generated per iteration. The default is 1. (default: 1)
disp (bool) – If True, print information about the optimization process. The default
is False. (default: False)
callback (Optional[Callable[[OptimizeResult], None]]) – If provided, the callback function will be called after each iteration
with the current optimization result. The default is None. (default: None)
this step would use a WeightedAcquisition object with a
NormalSampler sampler. The implementation is configured to
use the acquisition proposed by Müller (2016) by default.
Local step (only when useLocalSearch is True): Runs a local
continuous optimization with the true objective using the best point
found so far as initial guess.
The stopping criterion for steps 1 and 2 is based on the number of
consecutive attempts that fail to improve the best solution by at least
improvementTol. The algorithm alternates between steps 1 and 2 until there
is a sequence (CP,TV,CP) in which none of the individual steps meets the
improvement tolerance. In that case, the algorithm switches to
step 3. When the local step finishes, the algorithm goes back to step 1.
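The step-switching logic can be sketched as follows. The step functions, their return values, and the bookkeeping are hypothetical placeholders, not the library's API.

```python
def alternate_steps(run_cp, run_tv, run_local, budget):
    """Alternate CP and TV steps; switch to the local step after a
    (CP, TV, CP) sequence in which no step improved enough."""
    history = []        # whether each recent step met improvementTol
    step_is_cp = True   # step 1 is CP, step 2 is TV
    evals = 0
    while evals < budget:
        improved, cost = (run_cp if step_is_cp else run_tv)()
        evals += cost
        history.append(improved)
        # The last three steps form (CP, TV, CP) when the current one is CP.
        if step_is_cp and len(history) >= 3 and not any(history[-3:]):
            evals += run_local()  # step 3: local continuous search
            history.clear()
            step_is_cp = True     # after the local step, go back to step 1
            continue
        step_is_cp = not step_is_cp
    return evals
```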
Parameters:
fun – The objective function to be minimized.
bounds – List with the limits [x_min,x_max] of each direction x in the
search space.
maxeval (int) – Maximum number of function evaluations.
surrogateModel (Optional[RbfModel]) – Surrogate model to be used. If None is provided, a
RbfModel model with median low-pass filter is used.
On exit, if provided, the surrogate model contains the points used during
the optimization. (default: None)
acquisitionFunc (Optional[WeightedAcquisition]) – Acquisition function to be used. If None is
provided, a WeightedAcquisition is used following what is
described by Müller (2016). (default: None)
improvementTol (float) – Expected improvement in the global optimum per
iteration. (default: 0.001)
consecutiveQuickFailuresTol (int) – Number of times that the CP step or the
TV step fails quickly before the
algorithm stops. The default is 0, which means the algorithm will stop
after maxeval function evaluations. A quick failure is when the
acquisition function in the CP or TV step does not find any significant
improvement. (default: 0)
useLocalSearch (bool) – If True, the algorithm will perform a continuous
local search when a significant improvement is not found in a sequence
of (CP,TV,CP) steps. (default: False)
disp (bool) – If True, print information about the optimization process. (default: False)
callback (Optional[Callable[[OptimizeResult], None]]) – If provided, the callback function will be called after
each iteration with the current optimization result. (default: None)
DYCORS algorithm for single-objective optimization
Implementation of the DYCORS (DYnamic COordinate search using Response
Surface models) algorithm proposed in [7]. That is a wrapper to
surrogate_optimization().
Parameters:
fun – The objective function to be minimized.
bounds – List with the limits [x_min,x_max] of each direction x in the
search space.
maxeval (int) – Maximum number of function evaluations.
surrogateModel (Optional[Surrogate]) – Surrogate model to be used. If None is provided, a
RbfModel model with median low-pass filter is used.
On exit, if provided, the surrogate model contains the points used during
the optimization. (default: None)
acquisitionFunc (Optional[WeightedAcquisition]) – Acquisition function to be used. If None is
provided, the acquisition function is the one used in DYCORS-LMSRBF from
Regis and Shoemaker (2012). (default: None)
batchSize (int) – Number of new sample points to be generated per iteration. (default: 1)
disp (bool) – If True, print information about the optimization process. (default: False)
callback (Optional[Callable[[OptimizeResult], None]]) – If provided, the callback function will be called after
each iteration with the current optimization result. (default: None)
Minimize a scalar function of one or more variables subject to
constraints.
The surrogate models are used to approximate the constraints. The objective
function is assumed to be cheap to evaluate, while the constraints are
assumed to be expensive to evaluate.
gfun – The constraint function. The constraints must be
formulated as g(x) <= 0.
bounds – List with the limits [x_min,x_max] of each direction x in the search
space.
maxeval (int) – Maximum number of function evaluations.
surrogateModel (Optional[Surrogate]) – Surrogate model to be used for the constraints. If None is provided, a
RbfModel model is used. (default: None)
disp (bool) – If True, print information about the optimization process. The default
is False. (default: False)
callback (Optional[Callable[[OptimizeResult], None]]) – If provided, the callback function will be called after each iteration
with the current optimization result. The default is None. (default: None)
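The required g(x) <= 0 form means every inequality must be rewritten with zero on the right-hand side. A small hypothetical example:

```python
import numpy as np


def gfun(x):
    """Feasible set {x : x0^2 + x1^2 <= 4 and x0 + x1 >= 1} in g(x) <= 0 form."""
    x = np.asarray(x, dtype=float)
    return np.array([
        x[0] ** 2 + x[1] ** 2 - 4.0,  # x0^2 + x1^2 <= 4  ->  ... - 4 <= 0
        1.0 - x[0] - x[1],            # x0 + x1 >= 1      ->  1 - ... <= 0
    ])
```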
Minimize a scalar function of one or more variables using a response
surface model approach with restarts.
This implementation generalizes the Multistart LMSRS algorithm from [9].
The general algorithm calls surrogate_optimization() successively
until there are no more function evaluations available. The first call to
surrogate_optimization() uses the trained surrogate model, if one is
given. Subsequent calls use an empty surrogate model. This is
done so that each restart draws a truly different starting sample.
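A sketch of the restart loop described above. `run_once` stands in for surrogate_optimization(), and the result dictionary keys are assumptions, not the library's OptimizeResult fields.

```python
def multistart(fun, bounds, maxeval, run_once, make_empty_model, model=None):
    """Call run_once() until the evaluation budget is exhausted.

    The first call may reuse a trained surrogate model; every later call
    starts from an empty model so each restart samples afresh.
    """
    evals_left = maxeval
    best = None
    while evals_left > 0:
        result = run_once(fun, bounds, evals_left, model)
        evals_left -= result["nfev"]
        if best is None or result["fx"] < best["fx"]:
            best = result
        model = make_empty_model()  # restarts never reuse the old model
    return best
```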
Parameters:
fun – The objective function to be minimized.
bounds – List with the limits [x_min,x_max] of each direction x in the
search space.
maxeval (int) – Maximum number of function evaluations.
surrogateModel (Optional[Surrogate]) – Surrogate model to be used. If None is provided, a
RbfModel model with median low-pass filter is used. (default: None)
batchSize (int) – Number of new sample points to be generated per iteration. (default: 1)
disp (bool) – If True, print information about the optimization process. (default: False)
callback (Optional[Callable[[OptimizeResult], None]]) – If provided, the callback function will be called after
each iteration with the current optimization result. (default: None)
Minimize a multiobjective function using the surrogate model approach
from [10].
Parameters:
fun – The objective function to be minimized.
bounds – List with the limits [x_min,x_max] of each direction x in the search
space.
maxeval (int) – Maximum number of function evaluations.
surrogateModel (Optional[Surrogate]) – Multi-target surrogate model to be used. If None is provided, a
RbfModel model is used. (default: None)
acquisitionFunc (Optional[WeightedAcquisition]) – Acquisition function to be used in the CP step. The default is
WeightedAcquisition(0). (default: None)
acquisitionFuncGlobal (Optional[WeightedAcquisition]) – Acquisition function to be used in the global step. The default is
WeightedAcquisition(Sampler(0), 0.95). (default: None)
disp (bool) – If True, print information about the optimization process. The default
is False. (default: False)
callback (Optional[Callable[[OptimizeResult], None]]) – If provided, the callback function will be called after each iteration
with the current optimization result. The default is None. (default: None)
Minimize a scalar function of one or more variables using a surrogate
model and an acquisition strategy.
This is a more generic implementation of the RBF algorithm described in
[11], using multiple ideas from [12] especially in what concerns
mixed-integer optimization. Briefly, the implementation works as follows:
1. If a surrogate model or initial sample points are not provided,
choose the initial sample using a Symmetric Latin Hypercube design.
2. Evaluate the objective function at the initial sample points.
Repeat steps 3-8 until there are no function evaluations left:
3. Update the surrogate model with the last sample.
4. Acquire a new sample based on the provided acquisition function.
5. Evaluate the objective function at the new sample.
6. Update the optimization solution and best function value if needed.
7. Determine if there is a significant improvement and update counters.
8. Exit after nFailTol successive failures to improve the minimum.
Mind that, when solving mixed-integer optimization, the algorithm may
perform a continuous search whenever a significant improvement is found by
updating an integer variable. In the continuous search mode, the algorithm
executes step 4 only on continuous variables. The continuous search ends
when there are no significant improvements for a number of times as in
Müller (2016).
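The main loop sketched below follows the steps listed above. All callables (surrogate update, acquisition, objective) are hypothetical placeholders for the library's interfaces, and the improvement test is one plausible reading of "significant improvement".

```python
import numpy as np


def surrogate_loop(fun, x0, f0, update_model, acquire, maxeval, nFailTol,
                   improvementTol=1e-3):
    """Generic surrogate-optimization loop (steps 3-8 of the algorithm)."""
    xbest, fbest = x0[np.argmin(f0)], float(np.min(f0))
    x, fx = x0, f0
    nfev, fails = len(f0), 0
    while nfev < maxeval:
        update_model(x, fx)                   # step 3: refit with last sample
        x = acquire()                         # step 4: new batch from acquisition
        fx = np.array([fun(xi) for xi in x])  # step 5: evaluate objective
        nfev += len(fx)
        if fx.min() < fbest - improvementTol * abs(fbest):  # steps 6-7
            xbest, fbest = x[fx.argmin()], float(fx.min())
            fails = 0
        else:
            fails += 1
        if fails >= nFailTol:                 # step 8: stop on repeated failure
            break
    return xbest, fbest, nfev
```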
Parameters:
fun – The objective function to be minimized.
bounds – List with the limits [x_min,x_max] of each direction x in the
search space.
maxeval (int) – Maximum number of function evaluations.
surrogateModel (Optional[Surrogate]) – Surrogate model to be used. If None is provided, a
RbfModel model with median low-pass filter is used.
On exit, if provided, the surrogate model contains the points used during
the optimization. (default: None)
acquisitionFunc (Optional[AcquisitionFunction]) – Acquisition function to be used. If None is
provided, the TargetValueAcquisition is used. (default: None)
batchSize (int) – Number of new sample points to be generated per iteration. (default: 1)
improvementTol – Expected improvement in the global optimum per
iteration.
nSuccTol – Number of consecutive successes before updating the
acquisition when necessary. A zero value means there is no need to
update the acquisition based on the number of successes.
nFailTol – Number of consecutive failures before updating the
acquisition when necessary. A zero value means there is no need to
update the acquisition based on the number of failures.
termination – Termination condition. Possible values: “nFailTol” and
None.
performContinuousSearch – If True, the algorithm will perform a
continuous search when a significant improvement is found by updating an
integer variable.
disp (bool) – If True, print information about the optimization process. (default: False)
callback (Optional[Callable[[OptimizeResult], None]]) – If provided, the callback function will be called after
each iteration with the current optimization result. (default: None)