The EGO tutorial notebook is not behaving as expected. Tested July 6, 2020.
SMT was installed in a Docker container running Ubuntu 18.04 with Python 3.8.3.
Here are the other packages installed by pip during the build. Since it is relevant to this problem, I will highlight scipy==1.5.1:
Package Version
----------------------------- ---------
alabaster 0.7.12
attrs 19.3.0
Babel 2.8.0
backcall 0.2.0
bleach 3.1.5
certifi 2020.6.20
chardet 3.0.4
cycler 0.10.0
Cython 0.29.20
decorator 4.4.2
defusedxml 0.6.0
docutils 0.16
entrypoints 0.3
idna 2.10
imagesize 1.2.0
ipykernel 5.3.1
ipython 7.16.1
ipython-genutils 0.2.0
ipywidgets 7.5.1
jedi 0.17.1
Jinja2 2.11.2
joblib 0.16.0
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.5
jupyter-console 6.1.0
jupyter-core 4.6.3
kiwisolver 1.2.0
MarkupSafe 1.1.1
matplotlib 3.2.2
mistune 0.8.4
nbconvert 5.6.1
nbformat 5.0.7
notebook 6.0.3
numexpr 2.7.1
numpy 1.19.0
numpydoc 1.1.0
packaging 20.4
pandas 1.0.5
pandocfilters 1.4.2
parso 0.7.0
pexpect 4.8.0
pickleshare 0.7.5
pip 20.1.1
prometheus-client 0.8.0
prompt-toolkit 3.0.5
ptyprocess 0.6.0
pyDOE2 1.3.0
Pygments 2.6.1
pyparsing 2.4.7
pyrsistent 0.16.0
python-dateutil 2.8.1
pytz 2020.1
pyzmq 19.0.1
qtconsole 4.7.5
QtPy 1.9.0
requests 2.24.0
scikit-learn 0.23.1
scipy 1.5.1
seaborn 0.10.1
Send2Trash 1.5.0
setuptools 47.1.1
six 1.15.0
smt 0.5.3
snowballstemmer 2.0.0
Sphinx 3.1.2
sphinxcontrib-applehelp 1.0.2
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 1.0.3
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.4
tables 3.6.1
terminado 0.8.3
testpath 0.4.4
threadpoolctl 2.1.0
tornado 6.0.4
tqdm 4.47.0
traitlets 4.3.3
urllib3 1.25.9
wcwidth 0.2.5
webencodings 0.5.1
wheel 0.34.2
widgetsnbextension 3.5.1
SMT was installed after Cython, numpy, and the other packages, and the installation appeared to succeed.
When running the cells in the EGO Tutorial notebook, I received the following error and traceback for any call that involves scipy's minimize function, which is itself wrapped by the optimize method:
ValueError Traceback (most recent call last)
<ipython-input-...> in <module>
15 ego = EGO(n_iter=n_iter, criterion=criterion, xdoe=xdoe, xlimits=xlimits)
16
---> 17 x_opt, y_opt, ind_best, x_data, y_data, x_doe, y_doe = ego.optimize(fun=rosenbrock)
18
19 print('Xopt for Rosenbrock ', x_opt,y_opt, ' obtained using EGO criterion = ', criterion )
/usr/local/lib/python3.8/site-packages/smt/applications/ego.py in optimize(self, fun)
155 # Virtual enrichement loop
156 for p in range(n_parallel):
--> 157 x_et_k, success = self._find_points(x_data, y_data)
158 if not success:
159 self.log(
/usr/local/lib/python3.8/site-packages/smt/applications/ego.py in _find_points(self, x_data, y_data)
251 for ii in range(n_start):
252 opt_all.append(
--> 253 minimize(
254 self.obj_k,
255 x_start[ii, :],
/usr/local/lib/python3.8/site-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
623 return _minimize_cobyla(fun, x0, args, constraints, **options)
624 elif meth == 'slsqp':
--> 625 return _minimize_slsqp(fun, x0, args, jac, bounds,
626 constraints, callback=callback, **options)
627 elif meth == 'trust-constr':
/usr/local/lib/python3.8/site-packages/scipy/optimize/slsqp.py in _minimize_slsqp(func, x0, args, jac, bounds, constraints, maxiter, ftol, iprint, disp, eps, callback, finite_diff_rel_step, **unknown_options)
366
367 # ScalarFunction provides function and gradient evaluation
--> 368 sf = _prepare_scalar_function(func, x, jac=jac, args=args, epsilon=eps,
369 finite_diff_rel_step=finite_diff_rel_step,
370 bounds=new_bounds)
/usr/local/lib/python3.8/site-packages/scipy/optimize/optimize.py in _prepare_scalar_function(fun, x0, jac, args, bounds, epsilon, finite_diff_rel_step, hess)
259 # ScalarFunction caches. Reuse of fun(x) during grad
260 # calculation reduces overall function evaluations.
--> 261 sf = ScalarFunction(fun, x0, args, grad, hess,
262 finite_diff_rel_step, bounds, epsilon=epsilon)
263
/usr/local/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py in __init__(self, fun, x0, args, grad, hess, finite_diff_rel_step, finite_diff_bounds, epsilon)
93
94 self._update_grad_impl = update_grad
---> 95 self._update_grad()
96
97 # Hessian Evaluation
/usr/local/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py in _update_grad(self)
169 def _update_grad(self):
170 if not self.g_updated:
--> 171 self._update_grad_impl()
172 self.g_updated = True
173
/usr/local/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py in update_grad()
89 self._update_fun()
90 self.ngev += 1
---> 91 self.g = approx_derivative(fun_wrapped, self.x, f0=self.f,
92 **finite_diff_options)
93
/usr/local/lib/python3.8/site-packages/scipy/optimize/_numdiff.py in approx_derivative(fun, x0, method, rel_step, abs_step, f0, bounds, sparsity, as_linear_operator, args, kwargs)
386 f0 = np.atleast_1d(f0)
387 if f0.ndim > 1:
--> 388 raise ValueError("`f0` passed has more than 1 dimension.")
389
390 if np.any((x0 < lb) | (x0 > ub)):
ValueError: `f0` passed has more than 1 dimension.
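
For what it's worth, the scipy-side check can be reproduced outside SMT. The snippet below is my own minimal example (not code from the notebook): it passes minimize an objective that returns a (1, 1) array, the shape a surrogate prediction has for a single point, which triggers the same ValueError on scipy 1.5.1, while a scalar return is accepted.

```python
import numpy as np
from scipy.optimize import minimize

def obj_2d(x):
    # Returns a (1, 1) array rather than a scalar -- analogous to the
    # shape of surrogate-model predictions for a single point.
    return np.atleast_2d((x[0] - 1.0) ** 2)

def obj_scalar(x):
    # Same objective, returned as a plain scalar.
    return float((x[0] - 1.0) ** 2)

x0 = np.array([0.0])
bounds = [(-2.0, 2.0)]

# On scipy 1.5.1 this raises:
#   ValueError: `f0` passed has more than 1 dimension.
try:
    minimize(obj_2d, x0, method="SLSQP", bounds=bounds)
except ValueError as err:
    print("2-D return value rejected:", err)

# The scalar return is accepted and converges normally.
res = minimize(obj_scalar, x0, method="SLSQP", bounds=bounds)
print("scalar return converges to", res.x)
```
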
Downgrading to scipy==1.4.1 (the last release prior to the release of the EGO notebook in April) restored the expected behavior for the section that uses fun = x sin(x). I could also make this section work with scipy==1.5.1 by adding .flatten() to all of the return statements in the function definitions for EI, SBO, and UCB (see the sketch below). Note that SBO does not return the global minimum; is that expected behavior?
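
Concretely, the change looks roughly like the following. This is a hypothetical sketch in the spirit of the notebook's EI function (the exact body in the notebook may differ); SBO and UCB were changed the same way.

```python
import numpy as np
from scipy.stats import norm

def EI(gp, points, f_min):
    # Expected improvement; SMT predictions come back with shape (n, 1).
    pred = gp.predict_values(points)
    sig = np.sqrt(gp.predict_variances(points))
    args0 = (f_min - pred) / sig
    ei = (f_min - pred) * norm.cdf(args0) + sig * norm.pdf(args0)
    # Added: scipy >= 1.5 rejects objectives whose return value is 2-D.
    return ei.flatten()
```
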
For the EGO section using the Rosenbrock function, I could not find a way to flatten the function's output that makes it work with scipy 1.5.1. That section does work with scipy==1.4.1 if I reduce the number of iterations to 50 (from the notebook's default of 100). Using 100 iterations leads to the following runtime warning being repeated (this could be a separate issue, but I am recording it here for reference and can break it out into a separate issue if appropriate):
/usr/local/lib/python3.8/site-packages/smt/applications/ego.py:197: RuntimeWarning: divide by zero encountered in true_divide
  args0 = (f_min - pred) / sig
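
The division blows up where the predicted variance sig is (numerically) zero, which can happen at points already in the DOE. Purely as an illustration (my own sketch, not SMT's actual code or a proposed fix), one way such a division could be guarded is:

```python
import numpy as np

def safe_standardized_improvement(f_min, pred, sig):
    # Form (f_min - pred) / sig only where sig > 0; return 0 where the
    # predicted variance is numerically zero (e.g. at existing DOE points).
    sig_safe = np.where(sig > 0.0, sig, 1.0)
    return np.where(sig > 0.0, (f_min - pred) / sig_safe, 0.0)

# Example: the last entry has zero predicted variance and no longer warns.
print(safe_standardized_improvement(0.5,
                                    np.array([0.2, 0.4, 0.5]),
                                    np.array([0.1, 0.05, 0.0])))
```
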
Let me know if I can provide any additional data. Thank you.