Commit 1fd2d62

tune sample size

HanzhangRen committed Sep 15, 2024
1 parent d0ee7f0

Showing 3 changed files with 8 additions and 8 deletions.
description.md (2 changes: 1 addition & 1 deletion)
@@ -10,7 +10,7 @@ XGBoost with the following strategies: (1) Expanded sample size with "time-shift
 
 (2) For households where both partners participated in the survey, we merge in the partner's fertility intentions from 2019 and 2020 (cf19l128 to cf19l130 and cf20m128 to cf20m130), plus the partner's answers to questions about how many kids they have.
 
-(3) We generate "scales" in the feature data by averaging related features together. Our scales are: feelings toward current child, gendered religiosity, attitudes about traditional fertility, attitudes about traditional motherhood, attitudes about traditional fatherhood, attitudes about traditional marriage, attitudes toward working mothers, and sexism.
+(3) We generate "scales" in the feature data by averaging related features together. Our scales are: feelings toward current child, gendered religiosity, attitudes about traditional fertility, attitudes about traditional motherhood, attitudes about traditional fatherhood, attitudes about traditional marriage, attitudes toward working mothers, and sexism. We also construct variables for the age of the youngest child and household income per capita.
 
 We choose the hyperparameters for our XGBoost model via grid search with 5-fold cross-validation.
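
A minimal sketch of strategies (2) and (3), assuming a dplyr pipeline. The question codes cf19l128 to cf19l130 and cf20m128 to cf20m130 come from the text above; every other identifier here (`person_id`, `partner_id`, the sexism items) is a hypothetical placeholder, not something shown in this commit:

```r
library(dplyr)

# (2) Self-join to attach the partner's answers: each row's partner_id
# (hypothetical column) is matched against the partner's own person_id.
partner_vars <- cleaned_train_2021to2023 %>%
  select(partner_id = person_id,
         partner_cf19l128 = cf19l128,
         partner_cf19l129 = cf19l129,
         partner_cf19l130 = cf19l130,
         partner_cf20m128 = cf20m128,
         partner_cf20m129 = cf20m129,
         partner_cf20m130 = cf20m130)

cleaned_train_2021to2023 <- cleaned_train_2021to2023 %>%
  left_join(partner_vars, by = "partner_id")

# (3) Average related items into a scale, ignoring missing responses;
# the item names are placeholders for the related survey questions.
cleaned_train_2021to2023 <- cleaned_train_2021to2023 %>%
  mutate(sexism = rowMeans(
    across(c(sexism_item1, sexism_item2, sexism_item3)),
    na.rm = TRUE
  ))
```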

model.rds (binary file modified; not shown)
training.R (14 changes: 7 additions & 7 deletions)
Expand Up @@ -49,7 +49,7 @@ train_save_model <- function(cleaned_train_2021to2023, outcome_2021to2023,
# Tune an xgboost model using grid search and cross validation
model_to_tune <- boost_tree(
mode = "classification",
mtry = tune(), trees = tune(), tree_depth = tune(), learn_rate = tune()
sample_size = tune(), trees = tune(), tree_depth = tune(), learn_rate = tune()
) %>%
set_engine("xgboost", counts = FALSE)
# Set up cross-validation folds
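
In parsnip's xgboost engine, `sample_size` maps to xgboost's `subsample`: the fraction of training rows drawn for each tree. The `counts = FALSE` engine option tells parsnip to read `mtry` as a proportion of predictors rather than a count; with `mtry` dropped from the tuning grid in this commit, the option is simply left in place.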
@@ -90,10 +90,10 @@ train_save_model <- function(cleaned_train_2021to2023, outcome_2021to2023,
 
   # Grid search for hyperparameter tuning
   grid <- expand.grid(
-    mtry = c(.2, .4, .6, .8, 1),
-    trees = c(5, 10, 50, 100, 200),
-    tree_depth = 1:9,
-    learn_rate = c(.001, .003, .01, .03, .1, .3, 1)
+    sample_size = c(.4, .5, .6, .7, .8, .9, 1),
+    trees = c(1:150),
+    tree_depth = c(1, 2, 4, 6, 8, 10, 12),
+    learn_rate = c(.01, .03, .05, .1, .3, .5)
   )
   best <- tune_grid(model_to_tune, recipe, folds,
     grid = grid,
@@ -102,11 +102,11 @@ train_save_model <- function(cleaned_train_2021to2023, outcome_2021to2023,
   ) %>%
     collect_metrics() %>%
     filter(n == 5) %>%
-    arrange(desc(mean)) %>%
+    arrange(desc(mean), trees) %>%
     head(1)
   model <- boost_tree(
     mode = "classification",
-    mtry = best$mtry,
+    sample_size = best$sample_size,
     trees = best$trees,
     tree_depth = best$tree_depth,
     learn_rate = best$learn_rate
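
The new `trees` tiebreaker in `arrange(desc(mean), trees)` keeps the candidate with the fewest trees whenever mean cross-validated performance is tied, preferring the smaller model. The diff cuts off before the final fit; a minimal sketch of how the function might continue, assuming the usual tidymodels workflow pattern (the `workflow()` step and the training-frame name are assumptions, not shown in this commit):

```r
# Hypothetical continuation of train_save_model(); the code below is an
# assumption about the truncated remainder, not part of this commit.
fitted <- workflow() %>%
  add_recipe(recipe) %>%                # same recipe passed to tune_grid()
  add_model(model) %>%                  # `model` built from `best` above
  fit(data = cleaned_train_2021to2023)  # assumed name of the training frame

saveRDS(fitted, "model.rds")  # would account for the modified model.rds
```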
