
ASM1 Property Package Re-Scaling #1519

Merged: 43 commits merged into watertap-org:main on Dec 12, 2024

Conversation

@MarcusHolly (Contributor) commented Nov 6, 2024

Summary/Motivation:

Reworks how the ASM1 property package is scaled according to the new IDAES scaling routine

Changes proposed in this PR:

  • Updates ASM1 property package scaling
  • Updates the corresponding test files
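For context, a rough sketch of how a property package is scaled under the new IDAES Scaler-based routine, as opposed to the older suffix-based calculate_scaling_factors approach. The import path and the ASM1PropertiesScaler name below are assumptions for illustration, not necessarily the exact names introduced in this PR:

```python
# Sketch only: the import path and scaler class name are assumed, not taken from this PR's diff.
from pyomo.environ import ConcreteModel
from idaes.core import FlowsheetBlock
from watertap.property_models.unit_specific.activated_sludge.asm1_properties import (
    ASM1ParameterBlock,
    ASM1PropertiesScaler,  # hypothetical name for the Scaler class this PR adds
)

m = ConcreteModel()
m.fs = FlowsheetBlock(dynamic=False)
m.fs.params = ASM1ParameterBlock()
m.fs.state = m.fs.params.build_state_block(m.fs.time)

# New routine: instantiate the Scaler class and call scale_model() on the block,
# instead of attaching scaling-factor suffixes and then calling
# idaes.core.util.scaling.calculate_scaling_factors().
scaler = ASM1PropertiesScaler()
scaler.scale_model(m.fs.state[0])
```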

Legal Acknowledgement

By contributing to this software project, I agree to the following terms and conditions for my contribution:

  1. I agree my contributions are submitted under the license terms described in the LICENSE.txt file at the top level of this directory.
  2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

@MarcusHolly MarcusHolly added the Priority:Normal (Normal Priority Issue or PR) label Nov 6, 2024
@MarcusHolly MarcusHolly self-assigned this Nov 6, 2024
@adam-a-a (Contributor) left a comment


Unsure of current status on this, but I still would like to see things come in one at a time. My reasoning is that, ideally, if we only add a Scaler for ASM1, it might be easier to troubleshoot failed tests. Additionally, you could focus specifically on ASM1 tests and perhaps particular unit model tests that use ASM1. Moreover, adding a test for ASM1 Scaler could be helpful to gauge whether your default scaler class works better or worse with different settings. From there, we could move on to ADM1 and do the same for that.

@adam-a-a (Contributor) commented

Right now, for example, you might not be able to tell if the ASM1, ADM1 liquid, or ADM1 vapor properties have problematic scaling that causes checks to fail.

@MarcusHolly (Contributor, Author) commented

> Right now, for example, you might not be able to tell if the ASM1, ADM1 liquid, or ADM1 vapor properties have problematic scaling that causes checks to fail.

I agree. I just haven't touched this since our last meeting because we're still waiting on IDAES/idaes-pse#1523 to go through. I'll make sure only one property package is changed at a time.

@MarcusHolly (Contributor, Author) commented Nov 19, 2024

@lbianchi-lbl It seems like "Bumping the IDAES tag of Pyomo to check for issues" (IDAES/idaes-pse#1523) is close to being merged, so just a reminder that this PR needs a tag for a pre-release version of IDAES after that PR goes through. If the IDAES PR just needs one more rubber stamp, I could do that.

@MarcusHolly MarcusHolly changed the title from "BSM2 Property Package Re-Scaling" to "ASM1 Property Package Re-Scaling" Nov 21, 2024
@MarcusHolly MarcusHolly marked this pull request as ready for review November 22, 2024 15:11
@MarcusHolly MarcusHolly requested a review from adam-a-a November 22, 2024 15:11
@MarcusHolly MarcusHolly marked this pull request as draft December 5, 2024 13:50
@MarcusHolly MarcusHolly marked this pull request as ready for review December 6, 2024 16:24
@@ -161,6 +167,121 @@ def check_units(self, model):
assert_units_consistent(model)


class TestASM1ReactionScaler(object):
A Contributor left a comment on this diff:
One thought that comes to mind: I am hoping we can at least track some metric, even if it doesn't adequately represent the shift in model states, to keep some record quantifying the performance of a scaler class (as well as the old calculate_scaling_factors method).

Perhaps the easiest thing to do is to use the DiagnosticToolbox, check the condition number and assert that value for the new Scaler, the old method-based approach, and any new Scaler classes (or revisions to existing ones) that we bring in. This way, we can get some idea of how impactful scaler additions/revisions are as we progress.
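As a minimal sketch of that suggestion, assuming jacobian_cond from idaes.core.util.scaling is used to obtain the condition number that the DiagnosticsToolbox reports, and that m is a flowsheet that has already been built and scaled (with zero degrees of freedom):

```python
# Minimal sketch: obtain the Jacobian condition number before and after
# the scaling is applied, so the values can be asserted in a test.
from idaes.core.util.scaling import jacobian_cond

unscaled_cond = jacobian_cond(m, scaled=False)  # raw Jacobian
scaled_cond = jacobian_cond(m, scaled=True)     # after applying the scaling transformation

# In a test, these could be pinned (e.g., with pytest.approx) against a recorded
# baseline so any scaler revision that shifts them is flagged.
print(f"condition number: unscaled {unscaled_cond:.3e}, scaled {scaled_cond:.3e}")
```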

@MarcusHolly (Contributor, Author) replied:

@adam-a-a I should be able to add this relatively easily. Can you take a look at the AD tests I added here for comparing the old and new scaling methods:

# TODO: Remove test once iscale is deprecated

That PR is still a WIP, but all I need to do is refine the scaling for the bio_P=False case in the BSM2_P flowsheet.

@adam-a-a (Contributor) left a comment


LGTM, and I am ready to approve. @MarcusHolly The one thing I think we should consider adding is some assertions to check the condition number before/after applying the scaler. Ideally, we'd do this for the old calculate_scaling_factors approach too. This way, we can be mindful of model improvements as we go, rather than just depending on the model to solve, which doesn't necessarily tell us whether one scaler is better than the next (which might be a fool's errand in general anyway).

@MarcusHolly (Contributor, Author) commented

> LGTM, and I am ready to approve. @MarcusHolly The one thing I think we should consider adding is some assertions to check the condition number before/after applying the scaler. Ideally, we'd do this for the old calculate_scaling_factors approach too. This way, we can be mindful of model improvements as we go, rather than just depending on the model to solve, which doesn't necessarily tell us whether one scaler is better than the next (which might be a fool's errand in general anyway).

This discussion was continued offline. These tests will be updated in future PRs as we add more UnitModelScalers to WaterTAP. We will also consider using IPOPTConvergenceAnalysis to compare metrics from the old scaling routine to those of the new one.
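For reference, a hedged sketch of the kind of before/after assertion discussed above, comparing the old suffix-based routine against a new Scaler class. The fixture, the scaler import path, and the baseline values are placeholders, not values from this PR:

```python
import pytest
import idaes.core.util.scaling as iscale
from idaes.core.util.scaling import jacobian_cond

# Hypothetical import path for the scaler class added in this PR
from watertap.property_models.unit_specific.activated_sludge.asm1_properties import (
    ASM1PropertiesScaler,
)


@pytest.mark.unit
def test_condition_number_old_vs_new(asm1_model_factory):
    # asm1_model_factory: hypothetical fixture returning a freshly built flowsheet
    # (zero degrees of freedom) with an ASM1 state block at m.fs.state[0]

    # Old routine: suffix-based scaling via calculate_scaling_factors
    m_old = asm1_model_factory()
    iscale.calculate_scaling_factors(m_old)
    cond_old = jacobian_cond(m_old, scaled=True)

    # New routine: Scaler class applied directly to the state block
    m_new = asm1_model_factory()
    ASM1PropertiesScaler().scale_model(m_new.fs.state[0])
    cond_new = jacobian_cond(m_new, scaled=True)

    # Pin both values so future scaler revisions that shift them are visible
    assert cond_old == pytest.approx(1.0e5, rel=1e-3)  # placeholder baseline
    assert cond_new == pytest.approx(1.0e3, rel=1e-3)  # placeholder baseline
```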

@ksbeattie ksbeattie enabled auto-merge (squash) December 12, 2024 21:16
@ksbeattie ksbeattie merged commit fd83d37 into watertap-org:main Dec 12, 2024
18 of 19 checks passed
Labels: IDAES, Priority:Normal (Normal Priority Issue or PR)
6 participants