Report loss of R^2 when terms are dropped #10

Open
Aariq opened this issue Dec 12, 2024 · 0 comments
Aariq commented Dec 12, 2024

I had previously read Wood 2013, and my main takeaway was "yes, you can trust the p-values output by summary(gam)". However, Wellington et al. 2023 cite this paper and interpret it as saying "the very large dataset size effectively voids the hypothesis testing procedure adapted to GAMs". So I need to re-read the Wood paper to double-check whether that interpretation is correct.

In Wellington et al. 2023, they do not report p-values or effect sizes, but rather the loss in % deviance explained (R^2?) when model terms are dropped. (Importantly, when dropping 'year' they also dropped the interactions with year, because it doesn't make sense to keep interactions in a model without the corresponding main effects.)

We might consider a similar, more "qualitative" approach if the goal is to say "yes, there are temporal trends" or "yes, there are spatial trends".
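As a rough sketch of what this could look like with mgcv (not Wellington et al.'s actual code — the formula, data frame `df`, and column names `y`, `x`, `year` are all hypothetical placeholders), the loss in deviance explained from dropping year could be computed by refitting without the year terms:

```r
library(mgcv)

# Hypothetical full model: main effects plus a year interaction
m_full <- gam(y ~ s(x) + s(year) + ti(x, year), data = df, method = "REML")

# Drop 'year' AND its interaction (no interactions without main effects)
m_no_year <- gam(y ~ s(x), data = df, method = "REML")

# Loss in proportion deviance explained attributable to year
loss <- summary(m_full)$dev.expl - summary(m_no_year)$dev.expl
```

`summary.gam()` reports deviance explained as `dev.expl`, so the difference between the full and reduced fits gives the "qualitative" quantity described above.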

How does this translate to the p-values used to shade out areas in the average slope plots? I don't think it does—I think those p-values are still trustworthy.

Aariq self-assigned this Feb 6, 2025
Aariq changed the title from "What to report (maybe not p-values)" to "Report loss of R^2 when terms are dropped" Feb 7, 2025