
Achievement of platinum on the integration quality scale #226

Open
wants to merge 151 commits into base: main

Conversation

autoSteve
Collaborator

Why anyone of sound mind would do this is beyond me. But it does result in exceptions that want heading off at the pass...

@autoSteve autoSteve requested a review from BJReplay December 5, 2024 08:56
Owner

@BJReplay BJReplay left a comment


Most polite. No "WTAF?".

@autoSteve
Collaborator Author

🤣🤣🤣

@autoSteve
Collaborator Author

Long may the changes in the last two PRs hide dormant. Prepped for next anyways.

@BJReplay
Owner

BJReplay commented Dec 5, 2024

Long may the changes in the last two PRs hide dormant. Prepped for next anyways.

While I was reviewing and approving your PR, I was sitting in the Palais waiting for...

(photo: 20241205_220716)

@autoSteve
Collaborator Author

Here's an example for you of how smart Copilot can be.

Scenario: I've copied a unit test function for invalid API limit, and am modifying it to test for invalid hard limit.
Result: The clever bugger has worked out exactly what I am doing and is suggesting absolutely correct lines of code (in grey italic, just press tab...)

(screenshot: Copilot suggesting the test code)

@autoSteve
Collaborator Author

autoSteve commented Dec 6, 2024

We now meet (well, far exceed) the minimum requirements for becoming a core component. Unit testing of config_flow.py is must-ride.

I worked out how to shoehorn pytests for a custom component into the dev container environment, with minimal alterations required when it transitions to a core component...

In devcontainer.json...

  "mounts": [
    "source=${localEnv:HOME}/Documents/GitHub/ha-solcast-solar/custom_components/solcast_solar,target=${containerWorkspaceFolder}/config/custom_components/solcast_solar,type=bind",
    "source=${localEnv:HOME}/Documents/GitHub/ha-solcast-solar/tests,target=${containerWorkspaceFolder}/tests/components/solcast_solar,type=bind",

Then, from tests/components/solcast_solar, run pytest:

(ha-venv) vscode ➜ /workspaces/…/tests/components/solcast_solar $ pytest
Test session starts (platform: linux, Python 3.12.7, pytest 8.3.3, pytest-sugar 1.0.0)
rootdir: /workspaces/homeAssistant-core
configfile: pyproject.toml
plugins: github-actions-annotate-failures-0.2.0, sugar-1.0.0, pytest_freezer-0.4.8, cov-5.0.0, syrupy-4.7.1, picked-0.5.0, timeout-2.3.1, socket-0.7.0, respx-0.21.1, aiohttp-1.0.5, xdist-3.6.1, asyncio-0.24.0, requests-mock-1.12.1, unordered-0.6.1, anyio-4.6.0
asyncio: mode=Mode.AUTO, default_loop_scope=function
collected 7 items                                                                                                                                                                                                                                                                        

 tests/components/solcast_solar/test_config_flow.py ✓✓✓✓✓✓✓                                                                                                                                                                                                                100% ██████████

Results (0.21s):
       7 passed
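
For flavour, a single config flow test looks roughly like the sketch below (the field names, error key, and import path are assumptions for illustration, not the actual test code):

```python
"""Illustrative config flow test (schema keys and error key are assumed)."""
from homeassistant import config_entries
from homeassistant.core import HomeAssistant
from homeassistant.data_entry_flow import FlowResultType

from custom_components.solcast_solar.const import DOMAIN  # import path as mounted above (assumed)


async def test_duplicate_api_key(hass: HomeAssistant) -> None:
    """A duplicated API key should be rejected with a form error."""
    result = await hass.config_entries.flow.async_init(
        DOMAIN, context={"source": config_entries.SOURCE_USER}
    )
    assert result["type"] is FlowResultType.FORM

    result = await hass.config_entries.flow.async_configure(
        result["flow_id"],
        {"api_key": "key1,key1", "api_quota": "10"},  # hypothetical schema keys
    )
    assert result["type"] is FlowResultType.FORM
    assert result["errors"]["base"] == "api_duplicate"  # hypothetical error key
```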

@autoSteve changed the title from "Catch duplicate API key specified" to "Catch duplicate API key specified (plus scope creep)" on Dec 6, 2024
Owner

@BJReplay BJReplay left a comment


Nice

@BJReplay
Owner

BJReplay commented Dec 6, 2024

We now meet (well, far exceed) the minimum requirements for becoming a core component. Unit testing of config_flow.py is must-ride.

OK, I'll try to have a look over the weekend.

Very tired, now, so off to bed, but maybe we throw it at the wall and see if it sticks.

Will read up on the process.

@autoSteve
Collaborator Author

Sounds like a journey, but I'm betting it's a shorter one than HACS at the mo'...

We got this.

@autoSteve
Collaborator Author

I've removed the self-signed certs to stop GitGuardian complaining about them. The sim creates the cert if required on startup.

@autoSteve
Collaborator Author

The hard limit validation oopsie would have been caught by the unit test had I also looked for an expected positive outcome going wrong instead of expecting the worst from bad input. I did not. The unit test failed everyone.

It sailed through without a care in the world. It does not now.

@autoSteve
Collaborator Author

So I am kind-of guessing re. #227. I am only led by the exception.

The latest commit catches the very first exception regarding 'index' and then bluntly refuses to start the integration, reporting an inferred corrupt solcast.json file. It might prevent issues being raised, but more likely it will improve the sit. rep. and avoid an exploded log with a bunch of knock-on bad stuff happening.

I thought about doing something fancier, but fancier would have required a LOT more input as to the situation, and a LOT more testing. I am not in the mood for testing. And I have no input. So it is what it is for now.

@BJReplay
Owner

BJReplay commented Dec 9, 2024

So I am kind-of guessing re. #227.

I am kind-of guessing that some ancient installs - possibly manual, non-HACS, have been upgraded, and they are the cause of the last few issues like this.

@autoSteve
Collaborator Author

Kind-of not, I'm tipping.

The log indicated a usage cache load, with a count of two for API used, so it must be a relatively recent version. Last reset was within twenty-four hours. This might have been from a prior start attempt, but there are two sites, so if anything API used should be four calls, given the history load on a fresh start. Unless the second site failed to load...

I got nothin' but conjecture.

2024-12-08 19:54:24.881 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Usage cache exists for ******Zr-mpU
2024-12-08 19:54:24.888 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Usage cache for ******Zr-mpU last reset 2024-12-08 01:00:00
2024-12-08 19:54:24.888 INFO (MainThread) [custom_components.solcast_solar.solcastapi] Usage loaded
2024-12-08 19:54:24.889 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] API counter for ******Zr-mpU is 2/10
2024-12-08 19:54:24.889 DEBUG (MainThread) [custom_components.solcast_solar] UTC times are converted to Europe/Berlin

@BJReplay
Owner

BJReplay commented Dec 9, 2024

Kind-of not, I'm tipping.

Ahh, I did squint at the log on my phone after I carefully picked up my specs case, left home, and discovered that I had left me specs at home, but didn't spot that.

@autoSteve
Collaborator Author

Further fun with unit tests. test_solcastapi.py.

Just uncovered a bug: Turning off detailed half-hourly attribute with detailed hourly enabled results in a puff of virtual magic smoke.

(ha-venv) vscode ➜ /workspaces/…/tests/components/solcast_solar $ pytest -o log_cli=true --log-cli-level=DEBUG -vv
Test session starts (platform: linux, Python 3.12.7, pytest 8.3.3, pytest-sugar 1.0.0)
cachedir: .pytest_cache
rootdir: /workspaces/homeAssistant-core
configfile: pyproject.toml
plugins: github-actions-annotate-failures-0.2.0, sugar-1.0.0, pytest_freezer-0.4.8, cov-5.0.0, syrupy-4.7.1, picked-0.5.0, timeout-2.3.1, socket-0.7.0, respx-0.21.1, aiohttp-1.0.5, xdist-3.6.1, asyncio-0.24.0, requests-mock-1.12.1, unordered-0.6.1, anyio-4.6.0
asyncio: mode=Mode.AUTO, default_loop_scope=function
collected 16 items                                                                                                                                                                                                                                                                                                         

 tests/components/solcast_solar/test_config_flow.py::test_create_entry ✓                                                                                                                                                                                                                                       6% ▋         
 tests/components/solcast_solar/test_config_flow.py::test_api_key ✓                                                                                                                                                                                                                                           12% █▍        
 tests/components/solcast_solar/test_config_flow.py::test_api_quota ✓                                                                                                                                                                                                                                         19% █▉        
 tests/components/solcast_solar/test_config_flow.py::test_option_api_key ✓                                                                                                                                                                                                                                    25% ██▌       
 tests/components/solcast_solar/test_config_flow.py::test_option_api_quota ✓                                                                                                                                                                                                                                  31% ███▎      
 tests/components/solcast_solar/test_config_flow.py::test_option_custom_hour_sensor ✓                                                                                                                                                                                                                         38% ███▊      
 tests/components/solcast_solar/test_config_flow.py::test_option_hard_limit ✓                                                                                                                                                                                                                                 44% ████▍     
 tests/components/solcast_solar/test_solcastapi.py::test_forecast_update ✓                                                                                                                                                                                                                                    50% █████     
 tests/components/solcast_solar/test_solcastapi.py::test_build_splines ✓                                                                                                                                                                                                                                      56% █████▋    
 tests/components/solcast_solar/test_solcastapi.py::test_get_total_energy_forecast ✓                                                                                                                                                                                                                          62% ██████▍   
 tests/components/solcast_solar/test_solcastapi.py::test_get_peaks ✓                                                                                                                                                                                                                                          69% ██████▉   
 tests/components/solcast_solar/test_solcastapi.py::test_get_power_n_minutes ✓                                                                                                                                                                                                                                75% ███████▌  
 tests/components/solcast_solar/test_solcastapi.py::test_get_forecast_n_hour ✓                                                                                                                                                                                                                                81% ████████▎ 
 tests/components/solcast_solar/test_solcastapi.py::test_get_forecast_custom_hours ✓                                                                                                                                                                                                                          88% ████████▊ 
 tests/components/solcast_solar/test_solcastapi.py::test_get_forecast_remaining_today ✓                                                                                                                                                                                                                       94% █████████▍
 tests/components/solcast_solar/test_solcastapi.py::test_get_forecast_day ✓                                                                                                                                                                                                                                  100% ██████████

Results (0.63s):
      16 passed

@autoSteve
Collaborator Author

wait > 24 hours for the repair to be triggered

Correct. Two days with missing forecasts. The logic here is that at midnight roll-over the last day will quite reasonably have no forecast data, then on next update that day will be populated. check_data_records() executes at startup, at midnight, plus after each forecast update.

The Opera House should trigger its repair required at midnight tonight and confirm the logic is good in the real world.
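
The shape of that check is roughly as follows (a sketch only; the function name, day span, and data structure are illustrative, not lifted from check_data_records()):

```python
from datetime import date, timedelta


def missing_forecast_days(forecast_days: dict[str, list], today: date, span: int = 8) -> list[str]:
    """Return dates within the expected span that have no forecast intervals.

    The final day is excluded: right after midnight roll-over it will quite
    reasonably be empty until the next forecast update populates it.
    """
    expected = [(today + timedelta(days=offset)).isoformat() for offset in range(span)]
    return [day for day in expected[:-1] if not forecast_days.get(day)]
```

An issue only gets raised when that list is non-empty, and it is re-evaluated at each of the three trigger points above.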

@autoSteve
Collaborator Author

Automagic repair FTW!

There should be no gaps in your forecast history, because 82 hours of estimated actuals fill the void. If there is a gap then we have a problem. A minor one, admittedly, but one that would want correcting. The 3.4 days of missing history in your case probably won't reveal any gap visually. The solcast.json would tell a story, so maybe attach it.

The most past data that would be loaded in a super-stale start situation is 168 hours / 7 days, which would see a gap if things are that stale. The Solcast doco does not state the max for a hobbyist account, but my hunch would be that this is the max.

@BJReplay
Owner

BJReplay commented Feb 4, 2025

Correct. Two days with missing forecasts.

Now Thursday arvo / Friday morning. Ran up dev container today to debug Polestar API changes.

@autoSteve
Collaborator Author

Dev container clock is whacko, because it's on me laptop, which is suspended and resumed, but it fired.

(screenshot)

Opera House is running always, and triggered the repair also, as expected.

(screenshots)

DISAPPOINTED! Did the repair, but it did not apparently clear the issue for the Opera House... Refresh the browser and the issue is gone. Review the log and the setting of auto-update was applied. This is annoying. It should not need a browser refresh. This was the same for both the Opera House and the dev. container.

So the code logic was bang on, but the user experience not so much...

(screenshot)

(After browser refresh...)

(screenshot)

@BJReplay
Owner

BJReplay commented Feb 4, 2025

There is a remove issue call that I assume (commenting from phone, so not able to easily review code) you have called when the user chooses to enable automatic updates, but the UI has cached the old issue until refresh?

I have noticed similar behaviour: an integration that has not started correctly can be reloaded, but it shows in an error state until a browser refresh.

I think it is likely to be over-aggressive caching by the front end.

I will try to repro, and if I can cancel, raise an issue on the front end.

@BJReplay
Owner

BJReplay commented Feb 4, 2025

Also, we are 30 in the HACS default PR queue.

Just sayin

@BJReplay
Owner

BJReplay commented Feb 4, 2025

There is a remove issue call that I assume (commenting from phone, so not able to easily review code) you have called when the user chooses to enable automatic updates, but the UI has cached the old issue until refresh?

I see that you've correctly set is_fixable.

I've reviewed issues on the HA frontend project, and found none that mention is_fixable, which makes me think it is worth raising a bug.

Will repro after 48h has elapsed from my last Dev Container run.
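
For context, "fixable" is just a flag set when the issue is raised, along these lines (the issue id and translation key below are illustrative, not the integration's actual ones):

```python
from homeassistant.helpers import issue_registry as ir

ir.async_create_issue(
    hass,
    "solcast_solar",
    "records_missing_fixable",   # illustrative issue id
    is_fixable=True,             # tells the frontend to offer a repair flow
    is_persistent=False,         # re-evaluated on every start rather than stored
    severity=ir.IssueSeverity.WARNING,
    translation_key="records_missing_fixable",
)
```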

@autoSteve
Collaborator Author

I just set up a synthetic "data missing" test by setting manual updates, deleting forecast data in the cache for UTC 11/2 and 12/2 and re-starting HA. The suggested repair immediately appeared.

Did the repair, and got confirmation of the config being set. A very short while later the repair suggestion went away, which would represent the time to reload the integration. Almost blink-of-the-eye stuff.

It's hit/miss.

What the integration does in the repair flow is set the new option in the config entry, which causes update_config_entry() to issue an integration re-load, which on start runs check_data_records(), which either sets an issue to repair (if needed, and it isn't already in the issue registry) or clears any outstanding issue if there is no data problem. (The issue is not set to persist across HA re-starts, so it checks every time.)

So removal of the issue is on startup, and not when the repair actually happens.

Shouldn't make a diff, but. But might...
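
So the raise-or-clear behaviour at startup boils down to something like this sketch (the helper name and issue id are illustrative):

```python
from homeassistant.core import HomeAssistant
from homeassistant.helpers import issue_registry as ir

DOMAIN = "solcast_solar"
ISSUE_ID = "records_missing_fixable"  # illustrative issue id


def reconcile_data_issue(hass: HomeAssistant, data_ok: bool) -> None:
    """Raise the repair when records are missing, otherwise clear any stale issue."""
    if data_ok:
        # Deleting an issue that is not in the registry is a no-op, so this is safe on every start.
        ir.async_delete_issue(hass, DOMAIN, ISSUE_ID)
    else:
        ir.async_create_issue(
            hass,
            DOMAIN,
            ISSUE_ID,
            is_fixable=True,
            is_persistent=False,  # not persisted across restarts, checked every time
            severity=ir.IssueSeverity.WARNING,
            translation_key=ISSUE_ID,
        )
```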

@autoSteve
Collaborator Author

Okay. On pulling apart what happened in the log:

  1. Initial issue raised
  2. Repaired by enabling auto-update
  3. Integration re-loaded
  4. During startup it reported it was raising another issue because forecast data was still missing. Fair enough. An auto-update has not occurred yet... This was not the same issue. It was the stale-when-auto-update-enabled issue that does not offer a repair flow. At this time the previous issue was cleared.
  5. Decides that the previous auto-update was missed (me initially thinks, "What the?")
  6. Updates forecasts because "stale"
  7. Decides that it now has all data so clears the auto-stale issue

This sequence is interesting.

The steps to arrive at this were to turn off auto-update, and then never update a forecast again until this test. This left the this-update-was-auto-updated flag set in the cache. "Time passed". Load with auto-update enabled and the stale check did the right thing. It was stale.

So yes, interesting, but my head wonders whether the additional goings-on contributed to the issue being removed in this test.

Therefore I'm declaring it an invalid test. Not the same animal.

@autoSteve
Collaborator Author

Next test, valid, but still sub-optimal.

  1. Set to manual
  2. Nobble last two days of forecasts, plus set auto-updated to zero
  3. Restart HA, repair offered, and submitted
  4. Integration re-loads
  5. Issue seen that forecasts are out of date, not repairable
(screenshot)

What should happen, IMHO, is that raising this issue should be suppressed if the current state of the cache/forecasts in memory says the last update was manual. This I am going to code; a sketch of the guard is below.
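
Something along these lines, with made-up flag names:

```python
def should_raise_stale_auto_update_issue(auto_update_enabled: bool, last_update_was_auto: bool) -> bool:
    """Suppress the non-repairable 'stale' issue when the previous update was manual."""
    return auto_update_enabled and last_update_was_auto
```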

@autoSteve
Collaborator Author

Valid test.

  1. Set to manual
  2. Issue a forecast update action
  3. Nobble last two days of forecasts
  4. Restart HA, repair offered, and submitted
  5. Integration re-loads
  6. Enjoy the silence whilst waiting for the next auto-update

@BJReplay
Owner

BJReplay commented Feb 5, 2025

2. Issue a forecast update action
3. Nobble last two days of forecasts

Is step 2 to save the autoupdate = 0 in the config files?

I'm hoping that I can do Step 1 in the config files on Dev Container tomorrow, and then jump to 4 (Start Dev Container) where two days have elapsed already.

@autoSteve
Collaborator Author

Strict type checking restored, 100% test coverage.

Is step 2 to save the autoupdate = 0 in the config files?

It will, should an auto-update have occurred prior. It had not in my case, but I wanted to almost 100% replicate a valid test scenario. It was synthetic given the "nobbling" of the cache.

If your current solcast.json's prior update was not auto-updated, and the dev container has been sitting there doing nothing, then you should have to do nothing but wait.

Watch out: if the prior update was ages ago then the "stale" update test may perform an update on start (i.e. the @gcoan solution) and things get reset for the next two days. This can be avoided by manually changing the solcast.json last updated date, but that creates a synthetic test. The dev container really has to be running with auto-update off but not updating forecasts for a couple of days for a bona fide real-world test, where the suggested repair will pop up at midnight local time.

Explaining that lot makes me wonder whether the @gcoan solution should even exist now. An issue offering a repair would make it obvious to a user that something had broken while away flying an ultralight for a couple of weeks.

@BJReplay
Owner

BJReplay commented Feb 5, 2025

Explaining that lot makes me wonder whether the @gcoan solution should even exist now. An issue offering a repair would make it obvious to a user that something had broken while away flying an ultralight for a couple of weeks.

True, but the offered repair might at least make it easier.

Testing tomorrow. I'm pretty sure last update is auto-update, but migraine time now so turning off.

@autoSteve
Collaborator Author

the offered repair might at least make it easier

Not going to happen. Forecast would be updated on start, and then the configured automation would execute when programmed, keeping the forecast fwesh with a service call action. No issue raised, no repair needed. Old school automation keeps on keeping on.

This makes it harder to test this raised issue/repair scenario because a startup may be involved, but the @gcoan solution may still need to be a valid thing because of history prior to auto-update being a thing. Just putting it out there.

PyTest code synthetically tests all scenarios like a champ.

@gcoan

gcoan commented Feb 5, 2025

Explaining that lot makes me wonder whether the @gcoan solution should even exist now. An issue offering a repair would make it obvious to a user that something had broken while away flying an ultralight for a couple of weeks.

Now that you are flagging repairs to HA, it would seem to me the better solution to do that, so the user knows what is happening and can take action, rather than a silent behind-the-scenes auto-update.

The counter argument is that the more repairs you give the user the more they have to act upon them, support questions get raised as to why these repairs are happening, when in fact the fix is simple and can be automated.

There, just argued it either way round.

And it's a Microlight in the UK. I don't know about the Southern hemisphere, but in America an Ultralight is what we in the UK call an SSDR (single seater de-regulated microlight) - even lighter weight planes that require no annual permit or inspection. Generally in the US, Ultralights are considered high risk because they can be flown with very little oversight.
The terminology is not simple, but I know what you mean...

@autoSteve
Collaborator Author

Microlight in the UK

Bloody pedant. Love it. I am officially a little smarter.

Okay. You were off flying a bloody little plane with good oversight, great care, and great experience. I stand corrected. 😂 (And I love using an Oxford comma.)

My pendulum swings towards leaving the @gcoan's-gone-flying solution based on you pointing out that it lowers the support burden, albeit somewhat silently, with forsaken transparency. It is an automation scenario, so we're just ... automating a little more on the user's behalf...

@gcoan

gcoan commented Feb 5, 2025

My pendulum swings towards leaving the @gcoan's-gone-flying solution based on you pointing out that it lowers the support burden, albeit somewhat silently, with forsaken transparency. It is an automation scenario, so we're just ... automating a little more on the user's behalf...

I don't feel strongly either way, but concur with keeping it in.

@BJReplay
Owner

BJReplay commented Feb 5, 2025

My pendulum swings towards leaving the @gcoan's-gone-flying solution based on you pointing out that it lowers the support burden, albeit somewhat silently, with forsaken transparency. It is an automation scenario, so we're just ... automating a little more on the user's behalf...

I don't feel strongly either way, but concur with keeping it in.

Agreed.

@BJReplay
Owner

BJReplay commented Feb 5, 2025

Would this be a relatively simple drop-in that allows testing under custom_components?

https://github.com/MatthewFlamm/pytest-homeassistant-custom-component

@autoSteve
Collaborator Author

I have seen that, and tried to get it going in my dev container.

I initially failed, so reverted to having dev set up as a core component to achieve my primary aim: get testing happening to aid development instead of wasting time.

This also served the dual purpose before Christmas of ensuring everything (except the manifest) was good for core.

I should re-visit it.
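
If it does pan out, the plugin's documented pattern is just an autouse fixture in conftest.py, roughly (untested against this repo):

```python
import pytest


@pytest.fixture(autouse=True)
def auto_enable_custom_integrations(enable_custom_integrations):
    """Let the test harness load integrations from custom_components."""
    yield
```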

@autoSteve
Collaborator Author

In the most recent commit I give you: Yeah, Steve's bored. Details of up-coming auto-updates as attributes of api_last_polled. Someone might want this.

@autoSteve
Collaborator Author

(The attributes do not flow to recorder, so no storage bloat yellow card.)
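
That relies on the entity declaring which attributes recorder should skip, roughly this shape (the class name and attribute are illustrative; _unrecorded_attributes is the Home Assistant Entity hook):

```python
from homeassistant.components.sensor import SensorEntity


class ApiLastPolledSensor(SensorEntity):  # illustrative class name
    """Sensor whose extra attributes are excluded from recorder history."""

    _unrecorded_attributes = frozenset({"next_auto_update"})  # illustrative attribute name

    @property
    def extra_state_attributes(self) -> dict[str, str]:
        return {"next_auto_update": "2025-02-06T01:30:00+00:00"}
```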

@autoSteve
Collaborator Author

I reviewed the change log for this commit re. fixes vs. enhancements, and it struck me that this lot should probably be a v4.3.0.

Well aside from the masses of strict type checking and PyTest additions/alterations, we actually have a payload that gives solid additional function. Like repairs. Like re-auth flow. Like new attributes. Like migrating from ancient Oziee. Like fetching past data on super-stale start. These are not patches.

It is one heck of a behemoth 12,871 changed lines of code payload in this one PR. It really should be a minor version bump, and not a patch fix. Let's face it, I did not "tweak"... I have honestly smacked it significantly until we both wept at times, resulting in something pure. Something 100% testable. Something that fixed underlying bugs that no one has yet hit but that were waiting to bite them and us in the neck. (I may have over-dramatised this a teensy bit...)

Plus it would give us a new User-Agent string to keep Solcast's perimeter guessing. Not that that has appeared as an issue since setting a dedicated one...
