Started addressing #21 and it seems to have made the module a fraction slower. Only really appreciable at $n = 1000$, but definitely above the variation we had been seeing. For SQP with HiGHS it's only a 1% slowdown, but it's 3% for the non-robust methods and 7% for SQP with Gurobi.
Since I made a few other typing-related changes at the same time, before pressing ahead I want to isolate which of the following are responsible:
- `scipy.sparse.spmatrix` to `scipy.sparse.sparray`. If this is what's responsible, not much we can do there since SciPy are (eventually) deprecating `spmatrix`, though if it makes a difference we could hold off switching until the deprecation is looming (first sketch below).
- `np.int8` and `np.float64` to `np.integer` and `np.floating`. This was a good-practice change, since requiring the end user to work at 64-bit precision isn't ideal, but it's possible that NumPy is now spending some time deciding which C precision to convert the Python values to rather than just being told "this one". If that's the case, this could be exposed as an optional `dtype` argument which defaults to 64 bit (second sketch below).
- `dtype=float` for `dtype=np.floating`. I'd have thought this was a speed-up rather than a slowdown, because before doing an operation NumPy converts `float` to a C precision anyway. This change should just be pre-empting that, but maybe I just implemented it wrong (third sketch below).
- `dtype=np.int8` for `dtype=np.bool` for `M`. The `bool` type still means only storing one byte per entry, but it's possible the conversion NumPy goes through when it's used in an operation is adding some overhead (fourth sketch below).

TL;DR: made some changes and it got slower, but I have an idea of who the culprits are. Will try to minimize the impact before merging.
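On the `sparray` point: one option while the deprecation is still a way off is to accept both classes at the boundary and normalise internally, so the switch itself can be deferred. A minimal sketch (the helper name `as_sparse_array` is made up, not something in the module):

```python
import scipy.sparse as sp

def as_sparse_array(A):
    """Return A as a csr_array, accepting either sparse class."""
    if isinstance(A, sp.sparray):
        return A.tocsr()        # already the new array interface
    if isinstance(A, sp.spmatrix):
        return sp.csr_array(A)  # wrap the legacy matrix without densifying
    raise TypeError(f"expected a scipy.sparse object, got {type(A)!r}")
```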
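For the optional `dtype` argument idea in the second item, a sketch under assumed names (`residual` and its signature are placeholders, not the module's real API): convert inputs once to a concrete caller-chosen dtype, so NumPy isn't left to pick a C precision operation by operation.

```python
import numpy as np

def residual(A, x, b, dtype=np.float64):
    # One up-front cast to a concrete dtype; everything downstream then
    # stays in that precision rather than being re-decided per operation.
    A = np.asarray(A, dtype=dtype)
    x = np.asarray(x, dtype=dtype)
    b = np.asarray(b, dtype=dtype)
    return A @ x - b
```

A user who doesn't need 64-bit could then pass `dtype=np.float32`, and the default keeps the current behaviour.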
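On the third item, one thing worth checking is that `np.floating` is an abstract type: it's meant for `isinstance`/`np.issubdtype` checks, and depending on the NumPy version, passing it as a `dtype=` argument is either deprecated (falling back to `float64` with a warning) or rejected outright, whereas `dtype=float` resolves straight to `float64`. A quick probe:

```python
import warnings
import numpy as np

print(np.dtype(float))                         # float64: concrete, no guessing
print(np.issubdtype(np.float32, np.floating))  # True: the abstract type's real job

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    try:
        np.zeros(3, dtype=np.floating)         # deprecated or rejected as a dtype
    except TypeError as exc:
        print("rejected:", exc)
for w in caught:
    print("warned:", w.message)
```

If that deprecation path is being hit on every array creation, it could account for part of the slowdown on its own.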
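And for the last item, a micro-benchmark sketch (the random `M` here is a stand-in for the module's actual matrix) that checks both the one-byte storage claim and whether the `bool`-to-float conversion in a mixed operation costs anything:

```python
from timeit import timeit
import numpy as np

rng = np.random.default_rng(0)
M_int8 = (rng.random((1000, 1000)) < 0.5).astype(np.int8)
M_bool = M_int8.astype(bool)
x = rng.random(1000)

print(M_int8.itemsize, M_bool.itemsize)        # 1 1: one byte per entry either way
print(timeit(lambda: M_int8 @ x, number=100))  # int8 path
print(timeit(lambda: M_bool @ x, number=100))  # bool path, includes any upcast cost
```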