Releases: ASFHyP3/hyp3
HyP3 v9.5.4
HyP3 v9.5.3
Fixed
- When the API returns an error for an `INSAR_ISCE_BURST` job because the requested scenes have different polarizations, the error message now always includes the requested polarizations in the same order as the requested scenes (previously, the order of the polarizations was not guaranteed). For example, passing two scenes with `VV` and `HH` polarizations, respectively, results in the error message: `The requested scenes need to have the same polarization, got: VV, HH`
- The API validation behavior for the `INSAR_ISCE_MULTI_BURST` job type is now more closely aligned with the CLI validation for the underlying HyP3 ISCE2 container. Currently, this only affects the `hyp3-multi-burst-sandbox` deployment.
- The requested scene names are now validated before DEM coverage for both `INSAR_ISCE_BURST` and `INSAR_ISCE_MULTI_BURST`.
- The `lambda_logging.log_exceptions` decorator (for logging unhandled exceptions in AWS Lambda functions) now returns the wrapped function's return value rather than always returning `None`.
- Ruff now enforces that all functions and methods have type annotations.
- Updated the DIST-S1 entrypoint of the image and changed the job spec accordingly.
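The `lambda_logging.log_exceptions` fix can be illustrated with a minimal sketch. The decorator name is taken from the notes above, but this body is an illustrative assumption, not HyP3's actual implementation:

```python
import functools
import logging

def log_exceptions(func):
    """Toy log-and-re-raise decorator: logs unhandled exceptions, and
    (per the fix) returns the wrapped function's return value."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)  # the fix: propagate the return value
        except Exception:
            logging.exception('Unhandled exception in %s', func.__name__)
            raise
    return wrapper

@log_exceptions
def handler(event, context):
    # A typical Lambda handler result that the caller needs back
    return {'statusCode': 200}

print(handler({}, None))
```

Before the fix, a wrapper like this that omitted the `return` would always yield `None` to the Lambda runtime, discarding the handler's response.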
HyP3 v9.5.2
Added
- The `ARIA_S1_GUNW` job type is now available in the `hyp3-edc-prod` deployment.
Changed
- Increased the OPERA-DIST-S1 runtime limit from 3 to 6 hours for experimentation.
Fixed
- The OPERA-DIST-S1 job spec had the wrong CLI interface (e.g. `--n-lookbacks` should be `--n_lookbacks`).
HyP3 v9.5.1
Added
- Added the `OPERA_DIST_S1` job type to all ARIA Tibet and NISAR JPL deployments.
- Stood up a new `hyp3-tibet-jpl-test` deployment for the ARIA Tibet project at JPL.
Changed
- Increased throughput for `hyp3-cargill` (640 -> 1600 vCPUs) to support their processing needs.
Removed
- Removed the `hyp3-enterprise-test` deployment.
HyP3 v9.5.0
Added
- Added the `ARIA_S1_GUNW` job type to the `hyp3-edc-uat` deployment.
- All jobs now have `sns:Publish` permissions for SNS topics in the same AWS region and account, for the purpose of sending messages to a co-located deployment of https://github.com/ASFHyP3/ingest-adapter.
Changed
- The reserved `bucket_prefix` job spec parameter has been renamed to `job_id` and can be referenced as `Ref::job_id` within each step's `command` field.
- The `job_id` parameter of the `ARIA_RAIDER` job type has been renamed to `gunw_job_id`.
- The `AUTORIFT_ITS_LIVE` job type now accepts Sentinel-1 burst products.
- `ruff` now checks for incorrect docstrings (missing docstrings are still allowed), incomplete type annotations (missing annotations are still allowed), and opportunities to use `pathlib`.
- CloudFormation parameter overrides are now provided via a .json file input to the `deploy-hyp3` GitHub action.
- The `OriginAccessIdentityId` used in EDC deployments has been renamed to `BucketReadPrincipals` and now accepts multiple values.
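As an illustration of the renamed `job_id` parameter, here is a hypothetical sketch of how a `Ref::job_id` placeholder in a step's `command` field might be resolved. The function name and resolution logic are assumptions for illustration, not HyP3's actual templating code:

```python
def resolve_refs(command: list[str], parameters: dict[str, str]) -> list[str]:
    """Replace `Ref::name` placeholders in a step's command with the
    corresponding parameter values, leaving other tokens untouched."""
    resolved = []
    for token in command:
        if token.startswith('Ref::'):
            resolved.append(parameters[token.removeprefix('Ref::')])
        else:
            resolved.append(token)
    return resolved

# A step command that previously used Ref::bucket_prefix now uses Ref::job_id:
command = ['--bucket-prefix', 'Ref::job_id']
print(resolve_refs(command, {'job_id': 'myjob-1234'}))
```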
HyP3 v9.4.0
Changed
- The `OPERA_DISP_TMS` job type is now a fan-out/fan-in workflow.
Fixed
- Previously, fan-out job steps defined using the `map: for item in items` syntax would fail if `items` was an array of non-string values, because AWS Batch SubmitJob expects string parameters. This has been fixed by converting each value to a string before passing it to SubmitJob.
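A minimal sketch of the string-conversion fix (the function name is an illustrative assumption; the real logic lives in HyP3's Batch submission code):

```python
from typing import Any

def to_batch_parameter_values(items: list[Any]) -> list[str]:
    """AWS Batch SubmitJob accepts only string parameter values, so each
    fan-out item is converted to a string before submission."""
    return [item if isinstance(item, str) else str(item) for item in items]

# Numbers from a `map: for item in items` array no longer break SubmitJob:
print(to_batch_parameter_values([1, 2.5, 'three']))
```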
HyP3 v9.3.0
Added
- Added a `velocity` option for the `tile_type` parameter of `OPERA_DISP_TMS` jobs
- Restored the previously deleted `hyp3-opera-disp-sandbox` deployment
- Added a validator to check that the provided bounds do not exceed the maximum size for SRG jobs
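A bounds-size validator along these lines might look like the following sketch. The limit value, function name, and bounds layout are assumptions for illustration, not HyP3's actual implementation:

```python
# Hypothetical maximum extent, in degrees, for an SRG bounds request.
MAX_BOUNDS_EXTENT_DEGREES = 4.5

def validate_bounds_size(bounds: list[float]) -> None:
    """Raise if [min_lon, min_lat, max_lon, max_lat] exceeds the maximum
    allowed width or height."""
    min_lon, min_lat, max_lon, max_lat = bounds
    width, height = max_lon - min_lon, max_lat - min_lat
    if width > MAX_BOUNDS_EXTENT_DEGREES or height > MAX_BOUNDS_EXTENT_DEGREES:
        raise ValueError(
            f'Bounds are {width} x {height} degrees; '
            f'the maximum is {MAX_BOUNDS_EXTENT_DEGREES} degrees per side'
        )

validate_bounds_size([-118.0, 34.0, -117.0, 35.0])  # within limits: no error
```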
Removed
- Removed default bounds option for SRG jobs
HyP3 v9.2.0
Added
- Added `mypy` to the `static-analysis` workflow
- The `OPERA_DISP_TMS` job type is now available in the EDC UAT deployment
Changed
- Upgraded to Python 3.13
Removed
- Removed the `hyp3-opera-disp-sandbox` deployment
HyP3 v9.1.1
Changed
- The `static-analysis` GitHub Actions workflow now uses `ruff` rather than `flake8` for linting.
HyP3 v9.1.0
Added
- Added a new https://hyp3-opera-disp-sandbox.asf.alaska.edu deployment with an `OPERA_DISP_TMS` job type for generating tilesets for the OPERA displacement tool.