
Conf Intervals - qnorm approach #16

Closed
mdancho84 opened this issue Aug 4, 2020 · 5 comments

Comments

@mdancho84 (Contributor)

Review if the qnorm() approach should be used.
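For reference, a minimal sketch of what the qnorm()-style interval amounts to (written in Python rather than R so it stands alone; the residuals, forecast value, and level are illustrative, not taken from {modeltime}): multiply the normal quantile for the chosen level by the standard deviation of the test-set residuals to get a symmetric half-width around the point forecast.

```python
from statistics import NormalDist, stdev

# Hypothetical test-set residuals (actual - predicted); values are made up.
residuals = [1.2, -0.8, 0.5, -1.5, 0.9, -0.3, 1.1, -0.7]

def qnorm_interval(point_forecast, residuals, level=0.95):
    """Symmetric prediction interval: point +/- z * sd(residuals),
    the analogue of applying R's qnorm() to test-set error."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # ~1.96 for a 95% level
    half_width = z * stdev(residuals)
    return point_forecast - half_width, point_forecast + half_width

lo, hi = qnorm_interval(100.0, residuals)
print(lo, hi)
```

The obvious caveat, raised in the comments below, is that this half-width is constant regardless of how far ahead the forecast is.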

@brshallo commented Apr 16, 2021

It would be nice if the confidence/prediction intervals increased with the horizon h, e.g. using some of the adjustments to the standard deviation that Hyndman applies in fpp2, section 3.5 (prediction intervals), perhaps with an argument available to specify the method, defaulting to something conservative.
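To illustrate the kind of adjustment meant here (a hypothetical Python sketch, not {modeltime} code): for the naive forecasting method, fpp gives the h-step standard deviation as sigma_h = sigma * sqrt(h), so the interval half-width grows with the horizon. The scaling factor is method-specific; sqrt(h) is just the naive-method benchmark used here for illustration.

```python
from math import sqrt
from statistics import NormalDist

def widening_interval(point, sigma1, h, level=0.95):
    """Prediction interval whose half-width grows with horizon h,
    using the naive-method benchmark sigma_h = sigma1 * sqrt(h)
    (Hyndman fpp2, section 3.5). Other methods scale differently."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half_width = z * sigma1 * sqrt(h)
    return point - half_width, point + half_width

# Width doubles from h=1 to h=4 and triples from h=1 to h=9.
for h in (1, 4, 9):
    print(h, widening_interval(50.0, 2.0, h))
```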

It would also be great to have available some of the bootstrap (and block bootstrap) approaches Hyndman describes. I recently opened tidymodels/parsnip#464, which links to a post I wrote on bootstrapping prediction intervals for regression problems in tidymodels. I'd thought about adjusting the example so a custom resampling object could be passed in, e.g. to set up time-series resampling schemes (though I haven't really thought through what this would entail; I'd also likely need to look more into alternative approaches, e.g. in the field of conformal inference, since the methodology I walk through is so computationally costly). (This point may be more appropriate in {modeltime.resample}, but that package seemed primarily focused on performance evaluation.)
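A toy version of the bootstrap idea (hypothetical Python, one-step-ahead only; the residual values are made up): resample test-set residuals with replacement to simulate possible outcomes, then take empirical quantiles. A block bootstrap would instead resample contiguous runs of residuals to preserve autocorrelation.

```python
import random

# Hypothetical test-set residuals (actual - predicted); values are made up.
residuals = [1.2, -0.8, 0.5, -1.5, 0.9, -0.3, 1.1, -0.7]

def bootstrap_interval(point, residuals, level=0.95, n_boot=2000, seed=42):
    """One-step bootstrap prediction interval: simulate outcomes by
    resampling residuals with replacement, then take empirical quantiles.
    Multi-step versions would resample a residual per step and accumulate."""
    rng = random.Random(seed)
    sims = sorted(point + rng.choice(residuals) for _ in range(n_boot))
    alpha = (1 - level) / 2
    return sims[int(alpha * n_boot)], sims[int((1 - alpha) * n_boot) - 1]

lo, hi = bootstrap_interval(100.0, residuals)
print(lo, hi)
```

Unlike the qnorm() approach, this makes no normality assumption, at the cost of many simulations per forecast.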

In your overview video you briefly mention distributional forecasts; I'd be interested to read the plans for that, if they're documented somewhere. (I'm also curious how this might compare/contrast with {fable}'s approach of creating separate distribution objects that can then handle reconciliation or aggregation schemes.)

I'm just diving into {modeltime}, really awesome stuff! (Apologies if I missed existing documentation covering my questions/comments. I also realize this is a bit of a stretch for one topic, but I didn't want to spam you with a bunch of new issues; feel free to let me know if I should open anything separately or elsewhere.)

@mdancho84 (Contributor, Author)

Hey thanks for this. I'd like to make some improvements here. Mainly to make it more scalable by ID of the series rather than a global confidence interval.

The existing approach is a prediction interval based on test-set error. The docs can certainly be updated to reflect this.

Bandwidth is tight at the moment, so I'll need assistance with any changes you'd like to see in the near term.

@brshallo

Thanks, I'll report back in a few weeks (after I've had time to familiarize myself more with the package). As a starting point from then, I think I could help with adding a few notes to the documentation (per #102), as well as attempting:

make it more scalable by ID of the series rather than a global confidence interval.

I assume that by ID you are referring to the index of the series (as opposed to the model ID, for example) and to how uncertainty scales based on the number of steps ahead (per my first point).

@mdancho84 (Contributor, Author)

Sounds good. And yes, we should have a way of calculating local confidence intervals and local accuracy, now that most of our modeling approaches/algorithms accept panel data with time series that have an ID.
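A sketch of what "local" could mean here (hypothetical Python with made-up panel residuals, not {modeltime} code): group residuals by series ID and compute each group's own half-width, instead of pooling all series into one global one.

```python
from collections import defaultdict
from statistics import NormalDist, stdev

# Hypothetical panel residuals keyed by series ID; values are made up.
panel = [("A", 1.0), ("A", -1.2), ("A", 0.4), ("A", -0.6),
         ("B", 5.0), ("B", -4.0), ("B", 3.5), ("B", -4.5)]

def local_half_widths(panel, level=0.95):
    """Per-series interval half-widths: group residuals by ID and use
    each group's own standard deviation, so a noisy series gets a wide
    interval without inflating the intervals of stable series."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    groups = defaultdict(list)
    for series_id, resid in panel:
        groups[series_id].append(resid)
    return {sid: z * stdev(r) for sid, r in groups.items()}

print(local_half_widths(panel))  # series B is much noisier than series A
```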

mdancho84 reopened this May 27, 2021

@mdancho84 (Contributor, Author)

Local confidence intervals and Conformal Prediction intervals are now being tracked in #173
