Achievement of platinum on the integration quality scale #226
base: main
Conversation
Most polite. No "WTAF?".
🤣🤣🤣
Long may the changes in the last two PRs lie dormant. Prepped for next anyways.
We now meet (well, far exceed) the minimum requirements for becoming a core component. Unit testing of config_flow.py is a must. I worked out how to shoehorn pytests for a custom component into the dev container environment, with minimal alterations required when transitioned as a core component... In the dev container "mounts":

```json
"mounts": [
    "source=${localEnv:HOME}/Documents/GitHub/ha-solcast-solar/custom_components/solcast_solar,target=${containerWorkspaceFolder}/config/custom_components/solcast_solar,type=bind",
    "source=${localEnv:HOME}/Documents/GitHub/ha-solcast-solar/tests,target=${containerWorkspaceFolder}/tests/components/solcast_solar,type=bind"
]
```
Nice
OK, I'll try to have a look over the weekend. Very tired now, so off to bed, but maybe we throw it at the wall and see if it sticks. Will read up on the process.
Sounds like a journey, but I'm betting it's a shorter one than HACS at the mo'... We got this.
I've removed the self-signed certs to stop GitGuardian complaining about them. The sim creates the cert if required on startup.
The hard limit validation oopsie would have been caught by the unit test had I also looked for an expected positive outcome going wrong, instead of only expecting the worst from bad input. I did not. The unit test failed everyone. It sailed through without a care in the world. It does not now.
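For illustration only (the real validator and field names live in the integration; this stand-in just shows the testing principle): assert the happy path as well as the bad input, so a validation regression can't sail through again.

```python
# Stand-in validator and tests: the point is the second test, which asserts the
# positive outcome instead of only expecting the worst from bad input.
import pytest


def validate_hard_limit(value: str) -> float:
    """Hypothetical stand-in for the real hard limit validation."""
    limit = float(value)  # raises ValueError for non-numeric input
    if limit <= 0:
        raise ValueError("hard limit must be a positive number")
    return limit


@pytest.mark.parametrize("bad", ["0", "-5", "not a number"])
def test_hard_limit_rejects_bad_input(bad: str) -> None:
    with pytest.raises(ValueError):
        validate_hard_limit(bad)


def test_hard_limit_accepts_good_input() -> None:
    # The assertion that was missing: good input must actually come back valid.
    assert validate_hard_limit("5.0") == 5.0
```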
So I am kind-of guessing re. #227. I am only led by the exception. The latest commit catches the very first exception regarding 'index' and then bluntly refuses to start the integration with an inference of a corrupt cache. I thought about doing something fancier, but fancier would have required a LOT more input as to the situation, and a LOT more testing. I am not in the mood for testing. And I have no input. So it is what it is for now.
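In sketch form, and assuming rather than quoting the actual commit, the guard amounts to something like this; `ConfigEntryError` is a real HA exception, while the load callable is hypothetical:

```python
# Sketch of the guard's shape (not the actual commit): if loading cached data
# raises the 'index' style exception, refuse to start rather than limp along.
from collections.abc import Awaitable, Callable

from homeassistant.exceptions import ConfigEntryError


async def load_cache_or_refuse(load_cache: Callable[[], Awaitable[None]]) -> None:
    """Refuse integration start-up when the cache looks corrupt.

    `load_cache` is a hypothetical callable standing in for whatever reads the
    cached Solcast data.
    """
    try:
        await load_cache()
    except (KeyError, IndexError) as err:
        # Blunt by design: a corrupt cache is not something to guess around.
        raise ConfigEntryError(f"Cached Solcast data appears corrupt: {err}") from err
```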
I am kind-of guessing that some ancient installs (possibly manual, non-HACS) have been upgraded, and they are the cause of the last few issues like this.
Kind-of not, I'm tipping. The log indicated a usage cache load, with a count of two for API used, so must be a relatively recent version. Last reset was within twenty-four hours. This might have been from a prior start attempt, but there are two sites, so if anything API used should be four calls, given history load on fresh start. Unless the second site failed to load... I got nothin' but conjecture.
Ahh, I did squint at the log on my phone after I carefully picked up my specs case, left home, and discovered that I had left me specs at home, but didn't spot that.
Further fun with unit tests. Just uncovered a bug: turning off the detailed half-hourly attribute with detailed hourly enabled results in a puff of virtual magic smoke.
Correct. Two days with missing forecasts. The logic here is that at midnight roll-over the last day will quite reasonably have no forecast data, then on the next update that day will be populated. The Opera House should trigger its "repair required" at midnight tonight and confirm the logic is good in the real world.
There should be no gaps in your forecast history, because 82 hours of estimated actuals fill the void. If there is a gap then we have a problem. A minor one, albeit, but one that would want correcting. 3.4 days of missing history in your case probably won't reveal any gap visually. The most past data that would be loaded in a super-stale start situation is 168 hours / 7 days, which would see a gap if things are that stale. The Solcast doco does not state the max for a hobbyist account, but my hunch would be that this is the max.
Now Thursday arvo / Friday morning. Ran up dev container today to debug Polestar API changes.
There is a remove-issue call that I assume (commenting from phone, so not able to easily review code) you have called on the user choosing to enable automatic updates, but the UI has cached the old issue until refresh? I have noticed similar: an integration that has not started correctly can be reloaded, but it shows in an error state until a browser refresh. I think that it is likely to be over-aggressive caching by the front end. I will try to repro, and if I can, raise an issue on the front end.
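For reference, a hedged sketch of the issue-registry calls being discussed; the issue id and translation key are placeholders, not the integration's actual identifiers:

```python
# Placeholder issue id / translation key; the real identifiers live in the
# integration. The calls themselves are standard HA issue-registry helpers.
from homeassistant.core import HomeAssistant
from homeassistant.helpers import issue_registry as ir

DOMAIN = "solcast_solar"
ISSUE_ID = "forecast_data_missing"  # hypothetical


def raise_missing_data_issue(hass: HomeAssistant) -> None:
    """Create a fixable repair so the user is offered the corrective flow."""
    ir.async_create_issue(
        hass,
        DOMAIN,
        ISSUE_ID,
        is_fixable=True,  # enables the "Fix" repair flow in the frontend
        severity=ir.IssueSeverity.WARNING,
        translation_key="forecast_data_missing",  # hypothetical
    )


def clear_missing_data_issue(hass: HomeAssistant) -> None:
    """Remove the repair; the frontend should drop it without a browser refresh."""
    ir.async_delete_issue(hass, DOMAIN, ISSUE_ID)
```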
Also, we are number 30 in the HACS default PR queue. Just sayin'.
I see that you've correctly set is_fixable. I've reviewed issues on the HA frontend project, and found none that mention is_fixable, which makes me think it is worth raising a bug. Will repro after 48h has elapsed from my last Dev Container run.
I just set up a synthetic "data missing" test by setting manual updates, deleting forecast data in the cache for UTC 11/2 and 12/2, and re-starting HA. The suggested repair immediately appeared. Did the repair, and got confirmation of the config being set. A very short while later the repair suggestion went away, which would represent the time to reload the integration. Almost blink-of-the-eye stuff. It's hit/miss. What the integration does in the repair flow is set the new option in the config entry, which causes a reload. So removal of the issue is on startup, and not when the repair actually happens. Shouldn't make a diff. But might...
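A sketch of that mechanism as I understand it (the option key is an assumption): the repair flow persists the new option, the entry's update listener fires, and the listener reloads the integration:

```python
# Assumed option key "auto_update"; the listener registration shown in the
# docstring is the usual HA pattern, not a quote of this integration's code.
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant


async def _options_updated(hass: HomeAssistant, entry: ConfigEntry) -> None:
    """Update listener: an options change triggers a reload of the entry.

    Typically registered in async_setup_entry with
    entry.async_on_unload(entry.add_update_listener(_options_updated)).
    """
    await hass.config_entries.async_reload(entry.entry_id)


def apply_repair_choice(hass: HomeAssistant, entry: ConfigEntry) -> None:
    """What the repair flow effectively does: persist the new option."""
    hass.config_entries.async_update_entry(
        entry, options={**entry.options, "auto_update": 1}
    )
```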
Okay. On pulling apart what happened in the log:
This sequence is interesting. The steps to arrive at this were to turn off auto-update, but never again update a forecast until this test. This left the this-update-was-auto-updated flag set in the cache. "Time passed". Load with auto-update enabled, and the stale check did the right thing. It was stale. So yes, interesting, but my head wonders whether the additional goings-on contributed to the issue being removed in this test. Therefore I'm declaring it an invalid test. Not the same animal.
Valid test.
Is step 2 to save autoupdate = 0 in the config files? I'm hoping that I can do step 1 in the config files on the Dev Container tomorrow, and then jump to step 4 (start the Dev Container), where two days have elapsed already.
Strict type checking restored, 100% test coverage.
It will, should an auto-update have occurred prior. It had not in my case, but I wanted to almost 100% replicate a valid test scenario. It was synthetic given the "knobbling" of the cache. Watch out: if your current cache shows the prior update was ages ago, then the test for a "stale" update may perform an update on start (i.e. the @gcoan solution) and things get reset for the next two days. This can be avoided by manually changing the cache. Explaining that lot makes me wonder whether the @gcoan solution should even exist now. An issue offering a repair would make it obvious to a user that something had broken while away flying an ultralight for a couple of weeks.
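Roughly the shape of that stale-on-start check, with the threshold and names assumed rather than taken from the code:

```python
# Threshold and argument names are assumptions, not the integration's values.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=24)  # assumed "too old" threshold


def auto_update_is_stale(last_auto_update: datetime | None) -> bool:
    """True when a forecast fetch should be forced at start-up."""
    if last_auto_update is None:
        return True
    return datetime.now(timezone.utc) - last_auto_update > STALE_AFTER
```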
True, but the offered repair might at least make it easier. Testing tomorrow. I'm pretty sure the last update is an auto-update, but migraine time now, so turning off.
Not going to happen. The forecast would be updated on start, and then the configured automation would execute when programmed, keeping the forecast fresh with a service call action. No issue raised, no repair needed. Old school automation keeps on keeping on. This makes it harder to test this raised issue/repair scenario because a startup may be involved, but the @gcoan solution still may need to be a valid thing because of history prior to auto-update being a thing. Just putting it out there. PyTest code synthetically tests all scenarios like a champ.
Now that you are flagging repairs to HA, it would seem to me to be the better solution to do that, so the user knows what is happening and can take action, rather than a silent behind-the-scenes auto-update. The counter-argument is that the more repairs you give the user, the more they have to act upon, and support questions get raised as to why these repairs are happening when in fact the fix is simple and can be automated. There, just argued it either way round. And it's a Microlight in the UK. I don't know about the Southern hemisphere, but in America an Ultralight is what we in the UK call an SSDR (single seater de-regulated microlight): even lighter weight planes that require no annual permit or inspection. Generally in the US, Ultralights are considered high risk because they can be flown with very little oversight.
Bloody pedant. Love it. I am officially a little smarter. Okay. You were off flying a bloody little plane with good oversight, great care, and great experience. I stand corrected. 😂 (And I love using an Oxford comma.) My pendulum swings towards leaving the @gcoan's-gone-flying solution in, based on you pointing out that it lowers the support burden, albeit somewhat silently, with forsaken transparency. It is an automation scenario, so we're just... automating a little more on the user's behalf...
I don't feel strongly either way, but concur with keeping it in.
Agreed.
Would this be a relatively simple drop-in that allows testing under custom_components? https://github.com/MatthewFlamm/pytest-homeassistant-custom-component
I have seen that, and tried to get it going in my dev container. I initially failed, so reverted to having dev set up as a core component to achieve my primary aim: get testing happening to aid development instead of wasting time. This also served the dual purpose before Christmas of ensuring everything (except the manifest) was good for core. I should re-visit it.
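If it does get revisited: as I read that project's docs, the drop-in is mostly a conftest.py fixture that lets custom integrations load in the test hass instance (a sketch, not verified against this repo):

```python
# tests/conftest.py sketch for pytest-homeassistant-custom-component, per that
# project's documentation: without this autouse fixture, custom integrations
# will not be loadable in the test hass instance.
import pytest


@pytest.fixture(autouse=True)
def auto_enable_custom_integrations(enable_custom_integrations):
    """Enable loading of custom integrations in every test."""
    yield
```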
In the most recent commit I give you: Yeah, Steve's bored. Details of upcoming auto-updates as attributes of api_last_polled. Someone might want this.
(The attributes do not flow to recorder, so no storage bloat yellow card.) |
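For anyone curious how that exclusion can work, one mechanism HA offers (whether this integration uses exactly this is an assumption, and the attribute names are placeholders) is `_unrecorded_attributes` on the entity:

```python
# Attribute names are placeholders; the _unrecorded_attributes mechanism is how
# HA lets an entity carry attributes that the recorder should not store.
from homeassistant.components.sensor import SensorEntity


class ApiLastPolledSensor(SensorEntity):
    """Sketch of a sensor whose chunky detail attributes stay out of recorder."""

    _unrecorded_attributes = frozenset({"next_auto_update", "auto_update_queue"})
```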
I reviewed the change log for this commit re. fixes vs. enhancements, and it struck me that this lot should probably be a v4.3.0. Well aside from the masses of strict type checking and PyTest additions/alterations, we actually have a payload that gives solid additional function. Like repairs. Like re-auth flow. Like new attributes. Like migrating from ancient Oziee. Like fetching past data on super-stale start. These are not patches. It is one heck of a behemoth 12,871 changed lines of code payload in this one PR. It really should be a minor version bump, and not a patch fix. Let's face it, I did not "tweak"... I have honestly smacked it significantly until we both wept at times, resulting in something pure. Something 100% testable. Something that fixed underlying bugs that no one has yet hit but that were waiting to bite them and us in the neck. (I may have over-dramatised this a teensy bit...) Plus it would give us a new User-Agent string to keep Solcast's perimeter guessing. Not that that has appeared as an issue since setting a dedicated one...
Why anyone of sound mind would do this is beyond me. But it does result in exceptions that want heading off at the pass...