
Maximum number of iterations exceeded on toy convex problem, but answer is correct #279

Open
aleksejs-fomins opened this issue Dec 11, 2024 · 2 comments


@aleksejs-fomins

Python 3.10, Ipopt version 3.14.16 installed from conda-forge.

Here is a toy convex optimization problem in 24 dimensions:

import numpy as np
from cyipopt import minimize_ipopt

np.random.seed(42)

nDim = 24
xBase = np.random.normal(0, 1, nDim)
x0 = np.zeros(nDim)

loss = lambda x: np.linalg.norm(x - xBase)

res = minimize_ipopt(loss, x0, tol=1.0E-5, options={'max_iter': 10000})

The answer it produces is correct, but the error message says that it failed to converge. Is this a bug? I have tried different values of tol and max_iter, but the result is still the same. The max_iter argument works in the sense that it changes the number of iterations the algorithm performs. However, I suspect that minimize_ipopt is ignoring the tol argument, because the error in the result it has found is already much smaller than the tolerance I requested, yet the solver keeps iterating until max_iter is exhausted.

message: b'Maximum number of iterations exceeded (can be specified by an option).'
 success: False
  status: -1
     fun: 2.3146770904170615e-08
       x: [ 4.967e-01 -1.383e-01 ...  6.753e-02 -1.425e+00]
     nit: 10000
    info:     status: -1
                   x: [ 4.967e-01 -1.383e-01 ...  6.753e-02 -1.425e+00]
                   g: []
             obj_val: 2.3146770904170615e-08
              mult_g: []
            mult_x_L: [ 0.000e+00  0.000e+00 ...  0.000e+00  0.000e+00]
            mult_x_U: [ 0.000e+00  0.000e+00 ...  0.000e+00  0.000e+00]
          status_msg: b'Maximum number of iterations exceeded (can be specified by an option).'
    nfev: 470774
    njev: 10002
@aleksejs-fomins
Author

Hmm, when I square the objective function, everything works correctly:

loss = lambda x: (x - xBase).dot(x - xBase)

But it's the exact same problem, is it not?

@chrhansk
Contributor

chrhansk commented Jan 4, 2025

@aleksejs-fomins: Your two variants are certainly not the same (even though they have the same optimum). The plain two-norm is not differentiable at the origin, so as the iterates x_k approach xBase, you are guaranteed to approach the point of non-differentiability of your objective.
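
As a rough illustration (reusing the seed and dimension from the snippet above): the analytic gradient of the plain norm is (x - xBase) / ||x - xBase||, whose length stays at 1 no matter how close x gets to xBase, while the gradient of the squared norm, 2 (x - xBase), shrinks smoothly to zero.

import numpy as np

np.random.seed(42)
nDim = 24
xBase = np.random.normal(0, 1, nDim)

# Analytic gradients of the two objectives:
#   d/dx ||x - xBase||   = (x - xBase) / ||x - xBase||   (undefined at x == xBase)
#   d/dx ||x - xBase||^2 = 2 * (x - xBase)               (smooth everywhere)
for eps in [1e-1, 1e-4, 1e-8]:
    x = xBase + eps                     # a point eps away from the optimum in every coordinate
    d = x - xBase
    grad_norm = d / np.linalg.norm(d)   # unit length, no matter how small eps is
    grad_sq = 2 * d                     # shrinks to zero with eps
    print(eps, np.linalg.norm(grad_norm), np.linalg.norm(grad_sq))

So with the plain norm there is no neighbourhood of the optimum in which the gradient becomes small, which is what a gradient-based stopping test looks for.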

Almost any optimization code assumes that the objective is continuously differentiable over the problem domain (in fact, most codes make the even stronger assumption that it is twice continuously differentiable with bounded derivatives). If these conditions are violated, the codes struggle and/or break down. This is not a problem with cyipopt (or even Ipopt).

The remedy is quite simple (as you have discovered). Squaring is a monotone transformation of the norm, so the optima of the two problems coincide, but the squared norm is differentiable everywhere.
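
For reference, a minimal sketch of the squared-norm variant under the same setup as the original snippet; passing the analytic gradient via the jac argument (which minimize_ipopt accepts, mirroring scipy.optimize.minimize) is optional, but it avoids the large number of extra function evaluations visible in the nfev count above.

import numpy as np
from cyipopt import minimize_ipopt

np.random.seed(42)

nDim = 24
xBase = np.random.normal(0, 1, nDim)
x0 = np.zeros(nDim)

# Squared two-norm: same minimizer as the plain norm, but smooth at the optimum.
loss_sq = lambda x: (x - xBase).dot(x - xBase)
grad_sq = lambda x: 2.0 * (x - xBase)  # analytic gradient (optional)

res = minimize_ipopt(loss_sq, x0, jac=grad_sq, tol=1.0E-5,
                     options={'max_iter': 10000})
print(res.success, res.nit)  # expected to converge well before max_iter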
