Integrate Model Variable Renaming Sprint changes into GDASApp yamls and templates #1362
Started from g-w PR #2992 with
Changes to yamls (templates) thus far include
Using these changes, I am puzzled by the current failure in the variational analysis job.
A check of
With this local change in place, the init job ran to completion. The var job successfully ran 3DVar assimilating only sondes. The job failed the reference check since the reference state assimilates amsua_n19 and sondes. Has the default behavior for radiance data assimilation changed? Do we now require numerous surface fields to be available? This makes sense if one wants to accurately compute surface emissivity. Surface conditions can also be used for data filtering and QC. This is a change from previous JEDI hashes.
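As general context for the questions above, a CRTM-based radiance obs operator typically requests a set of surface GeoVaLs when it computes surface emissivity and applies surface-dependent QC. The sketch below is illustrative only; the names are common UFO surface GeoVaLs and are not copied from the actual GDASApp templates or the updated JEDI hashes.

```yaml
# Illustrative sketch: the kind of surface fields a CRTM-based radiance
# obs operator may request from the model (names are examples, not the
# definitive list required by the new hashes).
surface fields commonly requested for radiance assimilation:
  - water_area_fraction
  - land_area_fraction
  - ice_area_fraction
  - surface_snow_area_fraction
  - surface_temperature_where_sea
  - surface_wind_speed
```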
@RussTreadon-NOAA Is the failure in Jedi.remove_redundant()? Just so I know how I can fix #2992
@DavidNew-NOAA Yes, the traceback mentions
If you can fix this in g-w PR #2992, great!
@DavidNew-NOAA : Updated working copy of
@RussTreadon-NOAA That newest commit didn't have a fix yet for this ob issue. I will work on it this morning.
@RussTreadon-NOAA Actually, I just committed the changes you suggested. There's really no reason to mess with
Forgot a line... make sure it's commit 7ac6ccb2bbf88b25fb533185c5d481cd328415ee (latest)
Thank you, @DavidNew-NOAA.
@danholdaway , @ADCollard , and @emilyhcliu : When I update GDASApp JEDI hashes in
Updating the JEDI hashes brings in changes from the Model Variable Renaming Sprint. What changed in fv3-jedi, ufo, or vader which now requires the variables listed on the
This is failing because this if statement is not true when it should be, likely because a variable is not being recognized as being present. Can you point me to your GDASApp and jcb-gdas code?
@danholdaway: Here are the key directories and the job log file (all on Hercules):
Prints added to
There is no
Does the cube history file contain all the information we need to define surface characteristics for radiance assimilation?
In jcb-gdas you changed surface_geopotential_height to hgtsfc and tsea to tmpsfc. Perhaps try changing as follows:
Switching from the old short name to the IO name may have resulted in crossed wires.
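As a rough illustration of the kind of rename being discussed (the surrounding variables and layout are simplified placeholders, not taken from the actual jcb-gdas templates), such a change to a state-variables list might look like:

```yaml
# Illustrative only: rename the two surface fields to the IO names found in
# the gfs cube sphere history files (other variables shown are placeholders).
# before:
#   state variables: [u, v, t, delp, sphum, surface_geopotential_height, tsea]
# after:
state variables: [u, v, t, delp, sphum, hgtsfc, tmpsfc]
```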
I think the sst change is because of https://github.com/JCSDA-internal/fv3-jedi/pull/1258 rather than variable naming conventions.
Thank you @danholdaway for pointing me at fv3-jedi PR #1258. I see there was confusion over the name used for the skin temperature. This confusion remains. Our cube sphere surface history files contain the following fields:
Neither name is present there. Our tiled surface restart files contain the following fields starting with
The restart surface tiles contain the field. Our atmospheric variational and local ensemble yamls now use the new name. The restart tiles have what appear to be fields for temperature over various surface types:
Which temperature, or combination of temperatures, should we pass to CRTM? I sidestepped this question and did a simple test: I renamed the variable in question. I can replace the variable name if needed. Tagging @emilyhcliu, @ADCollard, @CoryMartin-NOAA, and @DavidNew-NOAA. Two questions:
The response to question 1 can be captured in this issue. Resolution of question 2 likely needs a new issue.
@RussTreadon-NOAA the issue might be in the mapping between tmpsfc and the long name in the FieldMetadata file. Do you know where that is coming from? It might be a fix file I guess.
@danholdaway, you are right. I spent the morning wading through code, yamls, parm files, and fix files. I found the spot to make the correct linkage between the fv3-jedi source code and our gfs cube sphere history files. With the change in place, the variational and local ensemble DA jobs passed. The increment jobs failed. I still need to update the yamls for these jobs.
The file I modified is
with
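For readers following along, the linkage described above is a mapping between the long variable name used inside fv3-jedi/ufo and the variable name actually written to the gfs cube sphere history files. The entry below is a hypothetical sketch; the actual fix file, its key names, and the long name involved are not shown in this thread.

```yaml
# Hypothetical field-metadata-style entry (schema and names illustrative):
# map a JEDI long name to the IO name present in the gfs history files.
- long name: skin_temperature_at_surface   # name requested by fv3-jedi/ufo (illustrative)
  io name: tmpsfc                          # variable name in the cube sphere history file
  units: K
```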
Thanks @RussTreadon-NOAA, really nice work digging through. If that fix file came directly from fv3-jedi (and was used in the fv3-jedi tests) there wouldn't have been any work to do, so perhaps we should look into doing that.
Agreed! We've been bitten by this disconnect more than once.
Hercules test
The
@apchoiCMD , do you have a branch with changes that allow these tests to pass? The
@guillaumevernieres, do you know where or what needs to be changed in yamls or fix files to get the marinevar test to pass? The log file for the failed job is
g-w CI for DA: Successfully ran C96C48_ufs_hybatmDA g-w CI on Hercules. C96C48_hybatmaerosnowDA and C48mx500_3DVarAOWCDA fail. The C48mx500_3DVarAOWCDA failure is expected given the ctest failures. The C96C48_hybatmaerosnowDA failure is in the 20211220 18Z
It is not clear from the traceback what the actual error is. Since this installation of GDASApp includes JEDI hashes with changes from the Model Variable Renaming Sprint, one or more yaml or fix file keywords most likely need to be updated. @jiaruidong2017, @ClaraDraper-NOAA: Any ideas what we need to change in JEDI snow DA when moving to JEDI hashes which include changes from the Model Variable Renaming Sprint? The log file for the failed job is
test_gdasapp update: Installed g-w PR #2992 on Hercules. Specifically, g-w branch
Log files for failed marine 3DVar jobs are in
This appears to be a model variable renaming issue. Correcting the bmat job may allow the subsequent marine jobs to successfully run to completion. The log file for the failed marine hyb job contains
This error may indicate that it is premature to run the marine letkf ctest. This test may need updates from g-w PR #3401. If true, this again highlights the problem we face with GDASApp getting several development cycles ahead of g-w. Tagging @guillaumevernieres, @AndrewEichmann-NOAA, and @apchoiCMD for help in debugging the marine DA and bufr2ioda_insitu failures.
g-w CI update: Installed g-w PR #2992 on Hercules (specifically, the same g-w branch as above). The following g-w DA CI was configured and run:
prgsi (1) and prjedi (2) successfully ran to completion
praero (3) and prwcda (4) encountered DEAD jobs which halted each parallel
The log files for the DEAD jobs are
Thank you for this effort, Russ.
@AndrewEichmann-NOAA , I updated
Is this failure possibly related to #1352?
GDASApp PR #1374 modifies
@AndrewEichmann-NOAA , I rolled back the change to
Does failure of
@guillaumevernieres and @AndrewEichmann-NOAA:
There is no such g-w directory. Should
cmake now detects the python version; previously it was hard-coded. This came up in #1362
11/14 status: g-w DA CI testing is complete on Hercules. 63 of the 64 tests pass. Two issues remain to be resolved:
We cannot resume nightly testing until all ctests pass. Given this, we need to answer two questions:
Tagging @guillaumevernieres, @AndrewEichmann-NOAA, @CoryMartin-NOAA, @danholdaway, @DavidNew-NOAA
@RussTreadon-NOAA With regard to gridgen, I deleted that in GDASApp when I refactored the marine bmat, not realizing another code would use it. It exists now as |
@RussTreadon-NOAA Just to answer your question, I'd say that I add the following to #2992
And then we either revert the
gdas_marineanlletkf failure - RESOLVED
gdas_marineanlfinal failure - UPDATE
This is problematic. Variable
Method
However, the
Do we need to change the logic in this method? What do you think, @guillaumevernieres? Who on the Marine DA team should I discuss this issue with?
FYI, making the change suggested above to
works. With this change
Another thought: Is the better solution to add
@RussTreadon-NOAA The letkf problems would be resolved with #1372, which adds back the original gridgen yaml under parm, and adds the localization blocks to the obs space config files.
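For reference, an obs space localization block in a LETKF-style yaml generally looks something like the sketch below; the localization method and length scale shown are illustrative and not necessarily what #1372 adds to the marine obs space configs.

```yaml
# Illustrative obs space localization block (method and value are examples).
obs localizations:
- localization method: Horizontal Gaspari-Cohn
  lengthscale: 2500e3   # metres
```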
@RussTreadon-NOAA, reverting the obs_list.yaml is what we should do.
@AndrewEichmann-NOAA, thank you for pointing me at GDASApp PR #1372. PR #1372 places the gridgen yaml back under parm. We don't need it in two places. It's good to see that #1372 addresses the missing localization blocks.
@RussTreadon-NOAA @AndrewEichmann-NOAA Let's just leave gridgen.yaml in jcb-gdas and point there for now
…ions sections to insitu_profile yamls (#1362)
@RussTreadon-NOAA @DavidNew-NOAA While it does belong under jcb and the letkf task should be converted to using it, that will require a PR to global-workflow, and letkf will be broken until that PR gets merged.
@AndrewEichmann-NOAA I put the jcb-gdas |
@AndrewEichmann-NOAA Sorry, I meant in GW PR #2992
@DavidNew-NOAA Ah, ok
…on of several insitue obs) (#1362)
Thanks @guillaumevernieres for the guidance.
FYI I am manually running g-w DA CI on Hera & Hercules using g-w branch
Several JEDI repositories have been updated with changes from the Model Variable Renaming Sprint. Updating JEDI hashes in
sorc/
requires changes in GDASApp and jcb-gdas yamls and templates. This issue is opened to document these changes.