---
title: ""
output: github_document
---
[Build Status](https://travis-ci.com/github/SMAC-Group/SWAG-R-Package)
[License: GPL-3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
# `swag` package
**swag** is a package that implements a meta-learning procedure combining screening and wrapper methods to find a set of extremely low-dimensional attribute combinations.
## Installing the package from GitHub
First install the **remotes** package, then install **swag** with the following code:
```{r,echo=FALSE,include=FALSE,eval=TRUE}
# devtools::install_github("SMAC-Group/SWAG-R-Package")
library(swag) #load the new package
```
```{r, eval=F,echo=TRUE}
## if not installed
## install.packages("remotes")
remotes::install_github("SMAC-Group/SWAG-R-Package")
library(swag) #load the new package
```
## Quick start
We use the **BreastCancer** dataset, readily available from the **mlbench** package, to give an overview of **swag**.
```{r BreastCancer, eval=T}
# After having installed the mlbench package
data(BreastCancer, package = "mlbench")
# Pre-processing of the data
y <- BreastCancer$Class # response variable
x <- as.matrix(BreastCancer[setdiff(names(BreastCancer), c("Id", "Class"))]) # features
# remove observations with missing values and convert features to 'numeric'
id <- which(apply(x, 1, function(x) sum(is.na(x))) > 0)
y <- y[-id]
x <- x[-id, ]
x <- apply(x, 2, as.numeric)
# Training and test set (20% of the observations held out for testing)
set.seed(180) # for replication
ind <- sample(1:dim(x)[1], floor(dim(x)[1] * 0.2))
y_test <- y[ind]
y_train <- y[-ind]
x_test <- x[ind, ]
x_train <- x[-ind, ]
```
Now we are ready to train with **swag**! The first step is to define the meta-parameters of the **swag** procedure: $p_{max}$, the maximum dimension of attributes; $\alpha$, a performance quantile which represents the percentage of learners selected at each dimension; and $m$, the maximum number of learners trained at each dimension. We can set all these meta-parameters, together with a seed for replicability and `verbose = TRUE` to get a message as each dimension is completed, thanks to the *swagControl()* function, which behaves similarly to the `trControl = ` argument of **caret**.
```{r control-swag, eval=T}
# Meta-parameters chosen for the breast cancer dataset
swagcon <- swagControl(pmax = 4L,
alpha = 0.5,
m = 20L,
seed = 163L, #for replicability
verbose = T #keeps track of completed dimensions
)
# Given the low dimensional dataset, we can afford a wider search
# by fixing alpha = 0.5 as a smaller alpha may also stop the
# training procedure earlier than expected.
```
Having set up the meta-parameters as explained above, we are now ready to train **swag**. We start with the linear Support Vector Machine learner:
```{r, eval=FALSE, message=FALSE,warning=FALSE,echo=FALSE}
library(caret) # swag is built around caret and uses it to train each learner
```
```{r SVM, eval=TRUE, warning=FALSE,message=FALSE}
### SVM Linear Learner ###
train_swag_svml <- swag(
# arguments for swag
x = x_train,
y = y_train,
control = swagcon,
auto_control = FALSE,
# arguments for caret
trControl = caret::trainControl(method = "repeatedcv", number = 10, repeats = 1, allowParallel = F),
metric = "Accuracy",
method = "svmLinear", # Use method = "svmRadial" to train this alternative learner
preProcess = c("center", "scale")
)
```
The only difference with respect to the classic **caret** `train()` function is the specification of the **swag** arguments explained previously. In the above chunk for the *svmLinear* learner, we define the estimator of the out-of-sample accuracy as 10-fold cross-validation repeated once. For this specific case, we have chosen to center and rescale the data, as is usually done for SVMs, and the cost parameter that controls the margin of the SVM is automatically fixed at its unit value (i.e. $C = 1$).
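If you would rather tune the cost parameter, you can pass a `tuneGrid` through the **caret** arguments as usual. Below is a minimal sketch (not run), where the object name and the grid of `C` values are illustrative choices of ours rather than recommendations:
```{r SVM-tuned, eval=FALSE}
# A minimal sketch (not run): tuning the SVM cost parameter through caret's
# tuneGrid argument. The grid of C values below is purely illustrative.
train_swag_svml_tuned <- swag(
# arguments for swag
x = x_train,
y = y_train,
control = swagcon,
auto_control = FALSE,
# arguments for caret
trControl = caret::trainControl(method = "repeatedcv", number = 10, repeats = 1, allowParallel = FALSE),
metric = "Accuracy",
method = "svmLinear",
preProcess = c("center", "scale"),
tuneGrid = expand.grid(C = c(0.1, 1, 10)) # illustrative grid
)
```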
Let's have a look at the typical output of a **swag** training object for the *svmLinear* learner:
```{r CVs, eval=T}
train_swag_svml$CVs
# A list which contains the cv training errors of each learner explored in a given dimension
```
```{r VarMat, eval=T}
train_swag_svml$VarMat
# A list which contains a matrix, for each dimension, with the attributes tested at that step
```
```{r cv-alpha, eval= T}
train_swag_svml$cv_alpha
# The cut-off cv training error, at each dimension, determined by the choice of alpha
```
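As a small illustration, these three outputs can be combined to locate the best learner found at a given dimension. The sketch below assumes that each column of `VarMat[[d]]` stores the attribute indices of one learner (an assumption about the object layout, which is why the chunk is not run):
```{r best-learner, eval=FALSE}
# A sketch (not run), assuming each column of VarMat[[d]] holds the attribute
# indices of one learner explored at dimension d.
d <- length(train_swag_svml$CVs)            # last dimension explored
best <- which.min(train_swag_svml$CVs[[d]]) # learner with the lowest CV error
train_swag_svml$CVs[[d]][best]              # its CV training error
colnames(x_train)[train_swag_svml$VarMat[[d]][, best]] # its attributes
```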
The other two learners that we have implemented in **swag** are the lasso (**glmnet** package required) and the random forest (**randomForest** package required). The training phase for these learners differs slightly from the SVM one. We can look at the random forest for a practical example:
```{r random-forest, eval=TRUE}
### Random Forest Learner ###
train_swag_rf <- swag(
# arguments for swag
x = x_train,
y = y_train,
control = swagcon,
auto_control = FALSE,
# arguments for caret
trControl = caret::trainControl(method = "repeatedcv", number = 10, repeats = 1, allowParallel = F),
metric = "Accuracy",
method = "rf",
# dynamically modify arguments for caret as the dimension grows
caret_args_dyn = function(list_arg, iter){
# fix mtry to the (floor of the) square root of the current dimension,
# as is common practice for random forests
list_arg$tuneGrid <- expand.grid(.mtry = floor(sqrt(iter)))
list_arg
}
)
```
The newly introduced argument `caret_args_dyn` enables the user to modify the hyper-parameters of a given learner dynamically, since they can change as the dimension grows up to the desired $p_{max}$. In the example above, we adapt the *mtry* hyper-parameter as the dimension grows by fixing it to the (floor of the) square root of the number of attributes at each step, as is usually done in practice.
You can tailor the learning arguments of *swag()* as you like, for example by introducing grids for the hyper-parameters specific to a given learner, or by updating these grids as the dimension increases, similarly to what is usually done with the **caret** package. This gives you a wide range of possibilities and a lot of flexibility in the training phase.
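For instance, the lasso learner mentioned above can be trained along the same lines. The following is a minimal sketch (not run), where the object name and the `lambda` grid are illustrative choices of ours rather than recommendations:
```{r lasso, eval=FALSE}
### Lasso Learner (sketch, not run) ###
# alpha = 1 selects the lasso penalty in caret's glmnet method;
# the lambda grid is purely illustrative.
train_swag_lasso <- swag(
# arguments for swag
x = x_train,
y = y_train,
control = swagcon,
auto_control = FALSE,
# arguments for caret
trControl = caret::trainControl(method = "repeatedcv", number = 10, repeats = 1, allowParallel = FALSE),
metric = "Accuracy",
method = "glmnet",
tuneGrid = expand.grid(alpha = 1,
                       lambda = seq(0.01, 0.5, length.out = 5))
)
```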
To conclude this brief introduction, we present the usual *predict()* function, which can be applied to a **swag**-trained object as in many other R packages. We pick the random forest learner for this purpose.
```{r predictions, eval=T}
# best learner predictions on the training sample
# (if `newdata` is not specified, predict() returns predictions based on
# the training sample)
sapply(predict(object = train_swag_rf), function(x) head(x))
# best learner predictions
best_pred <- predict(object = train_swag_rf,
newdata = x_test)
sapply(best_pred, function(x) head(x))
# predictions for a given dimension
dim_pred <- predict(
object = train_swag_rf,
newdata = x_test,
type = "attribute",
attribute = 4L)
sapply(dim_pred,function(x) head(x))
# predictions below a given CV error
cv_pred <- predict(
object = train_swag_rf,
newdata = x_test,
type = "cv_performance",
cv_performance = 0.04)
sapply(cv_pred,function(x) head(x))
```
Now we can evaluate the performance of the best learner selected by **swag** thanks to the *confusionMatrix()* function of **caret**.
```{r confusion-matrix, eval=T}
# transform predictions into a data.frame of factors with levels of `y_test`
best_learn <- factor(levels(y_test)[best_pred$predictions])
caret::confusionMatrix(best_learn,y_test)
```
Thanks for your attention. You can now definitely say that you have worked with **swag**!