# (PART) Part II: Basics {-}
# Entering and cleaning data #1
The video lectures for this chapter are embedded at relevant places in the text,
with links to download a pdf of the associated slides for each video.
You can also access [a full playlist for the videos for this chapter](https://www.youtube.com/playlist?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl).
## Objectives
After this chapter, you should know, understand, or be able to do the following:
- Understand what a flat file is and how it differs from data stored in a binary file format
- Be able to distinguish between delimited and fixed width formats for flat files
- Be able to identify the delimiter in a delimited file
- Be able to describe a working directory
- Be able to read in different types of flat files
- Be able to read in a few types of binary files (SAS, Excel)
- Understand the difference between relative and absolute file pathnames
- Describe the basics of your computer's directory structure
- Reference files in different locations in your directory structure using relative and absolute pathnames
- Use the basic `dplyr` functions `rename`, `select`, `mutate`, `slice`, `filter`, and `arrange` to work with data in a dataframe object
- Convert a column to a date format using `lubridate` functions
- Extract information from a date object (e.g., month, year, day of week) using `lubridate` functions
- Define a logical operator and know the R syntax for common logical operators
- Use logical operators in conjunction with `dplyr`'s `filter` function to create subsets of a dataframe based on logical conditions
- Use piping to apply multiple `dplyr` functions in sequence to a dataframe
## Overview
<iframe width="768" height="480" src="https://www.youtube.com/embed/HIWxtOK0-DI?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Download](https://github.com/geanders/RProgrammingForResearch/raw/master/slides/CourseNotes_Week2_part_1.pdf)
a pdf of the lecture slides for this video.
There are four basic steps you will often repeat as you prepare to analyze data
in R:
1. Identify where the data is (If it's on your computer, which directory? If
it's online, what's the url?)
2. Read data into R (e.g., `read_delim`, `read_csv` from the `readr` package)
using the file path you figured out in step 1
3. Check to make sure the data came in correctly (`dim`, `head`, `tail`, `str`)
4. Clean the data up
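As a quick illustration of steps 2 and 3, here is a minimal sketch, assuming the Ebola data file used later in this chapter is saved in a `data` subdirectory of your working directory:
```{r eval = FALSE}
# Step 2: read the data in (this file path is an assumption -- adjust it to
# wherever the file lives on your own computer)
library(package = "readr")
ebola <- read_csv(file = "data/country_timeseries.csv")

# Step 3: check that the data came in correctly
dim(x = ebola)       # number of rows and columns
head(x = ebola)      # first few rows
str(object = ebola)  # structure and class of each column
```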
In this chapter, I'll go over the basics of each of these steps, as well as dive a bit
deeper into some related topics you should learn now to make your life easier as
you get started using R for research.
## Reading data into R
Data comes in files of all shapes and sizes. R has the capability to read data
in from many of these, even proprietary files for other software (e.g., Excel
and SAS files). As a small sample, here are some of the types of data files that
R can read and work with:
- Flat files (much more about these in just a minute)
- Files from other statistical packages (SAS, Excel, Stata, SPSS)
- Tables on webpages (e.g., the table on ebola outbreaks near the end of [this
Wikipedia
page](http://en.wikipedia.org/wiki/Ebola_virus_epidemic_in_West_Africa))
- Data in a database (e.g., MySQL, Oracle)
- Data in JSON and XML formats
- Really crazy data formats used in other disciplines (e.g., netCDF files from
climate research, MRI data stored in Analyze, NIfTI, and DICOM formats)
- Geographic shapefiles
- Data through APIs
Often, it is possible to read in and clean up even incredibly messy data, by
using functions like `scan` and `readLines` to read the data in a line at a
time, and then using regular expressions (which I'll cover in the "Intermediate"
section of the course) to clean up each line as it comes in. In over a decade of
coding in R, I think the only time I've come across a data file I couldn't get
into R was for proprietary precision agriculture data collected at harvest by a
combine.
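As a small, hedged taste of that line-by-line approach, `readLines` can show you the first few raw lines of a file before you try to parse it (the file path here assumes the Ebola data file used later in this chapter):
```{r eval = FALSE}
# Read just the first three lines of the file as plain character strings,
# without trying to split them into columns
readLines(con = "data/country_timeseries.csv", n = 3)
```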
### Reading local flat files
Much of the data that you will want to read in will be in flat files. Basically,
these are files that you can open using a text editor; the most common type
you'll work with are probably comma-separated files (often with a `.csv` or
`.txt` file extension). Most flat files come in two general categories:
1. Fixed width files; and
2. Delimited files:
- ".csv": Comma-separated values
- ".tab", ".tsv": Tab-separated values
- Other possible delimiters: colon, semicolon, pipe ("|")
*Fixed width files* are files where a column always has the same width, for all
the rows in the column. These tend to look very neat and easy-to-read when you
open them in a text editor. For example, the first few rows of a fixed-width
file might look like this:
```
Course        Number  Day     Time
Intro to Epi  501     M/W/F   9:00-9:50
Advanced Epi  521     T/Th    1:00-2:15
```
Fixed width files used to be very popular, and they make it easier to look at data
when you open the file in a text editor. However, now it's pretty rare to just use
a text editor to open a file and check out the data, and these files can be a bit
of a pain to read into R and other programs because you sometimes have to specify
exactly how wide each of the columns is. You may come across a fixed width file
every now and then, though, particularly when working with older data sets, so it's
useful to be able to recognize one and to know how to read it in.
*Delimited files* use some *delimiter* (for example, a comma or a tab) to
separate each column value within a row. The first few rows of a delimited file
might look like this:
```
Course, Number, Day, Time
"Intro to Epi", 501, "M/W/F", "9:00-9:50"
"Advanced Epi", 521, "T/Th", "1:00-2:15"
```
Delimited files are very easy to read into R. You just need to be able to figure
out what character is used as a delimiter (commas in the example above) and
specify that to R in the function call to read in the data.
These flat files can have a number of different file extensions. The most
generic is `.txt`, but they will also have ones more specific to their format,
like `.csv` for a comma-delimited file or `.fwf` for a fixed width file.
<iframe width="768" height="480" src="https://www.youtube.com/embed/A5VRVjSQ_Ws?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Download](https://github.com/geanders/RProgrammingForResearch/raw/master/slides/CourseNotes_Week2_part_2.pdf)
a pdf of the lecture slides for this video.
R can read in data from both fixed width and delimited flat files. The only catch
is that you need to tell R a bit more about the format of the flat file,
including whether it is fixed width or delimited. If the file is fixed width,
you will usually have to tell R the width of each column. If the file is
delimited, you'll need to tell R which delimiter is being used.
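For a fixed width file, one option is the `read_fwf` function from the `readr` package. Here is a hedged sketch for a file laid out like the course schedule example above; the file name and column widths are made up for illustration:
```{r eval = FALSE}
library(package = "readr")
# `fwf_widths` says how many characters wide each column is and what to call it;
# the widths and file name below are hypothetical
courses <- read_fwf(file = "data/course_schedule.txt",
                    col_positions = fwf_widths(widths = c(14, 8, 8, 9),
                                               col_names = c("course", "number",
                                                             "day", "time")))
```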
If the file is delimited, you can use the `read_delim` family of functions from
the `readr` package to read it in. This family of functions includes several
specialized functions. All members of the `read_delim` family are doing the same
basic thing. The only difference is what defaults each function has for the
delimiter (`delim`). Members of the `read_delim` family include:
Function | Delimiter
------------ | ----------
`read_csv` | comma
`read_csv2` | semi-colon
`read_table2` | whitespace
`read_tsv` | tab
You can use `read_delim` to read in any delimited file, regardless of the delimiter.
However, you will need to specify the delimiter using the `delim` parameter. If you
remember the more specialized function call (e.g., `read_csv` for a comma-delimited
file), you can therefore save yourself some typing.
For example, to read in the Ebola data, which is comma-delimited, you could
either use `read_delim` with the `delim` argument specified or use `read_csv`, in
which case you don't have to specify `delim`:
```{r, message = FALSE}
library(package = "readr")
# The following two calls do the same thing
ebola <- read_delim(file = "data/country_timeseries.csv", delim = ",")
```
```{r}
ebola <- read_csv(file = "data/country_timeseries.csv")
```
```{block, type = 'rmdtip'}
The message that R prints after this call ("Parsed with column specification: ...")
lets you know what classes were used for each column (this function tries to
guess the appropriate class and typically gets it right). You can suppress the
message using the `col_types = cols()` argument.
If `readr` doesn't correctly guess some of the column classes, you can use the
`type_convert()` function to take another go at guessing them after you've
tweaked the formats of the rogue columns.
```
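As a quick, hedged illustration of both of these tips, using the Ebola file from above:
```{r eval = FALSE}
# Passing an (empty) column specification suppresses the parsing message
ebola <- read_csv(file = "data/country_timeseries.csv",
                  col_types = cols())

# If readr guessed some column classes incorrectly, `type_convert` takes
# another go at guessing the classes of any character columns
ebola <- type_convert(df = ebola)
```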
This family of functions has a few other helpful options you can specify. For example,
if you want to skip the first few lines of a file before you start reading in the data,
you can use `skip` to set the number of lines to skip. If you only want to read in
a few lines of the data, you can use the `n_max` option. For example, if you have a
really long file, and you want to save time by only reading in the first ten lines
as you figure out what other options to use in `read_delim` for that file, you could
include the option `n_max = 10` in the `read_delim` call. Here is a table of some of
the most useful options common to the `read_delim` family of functions:
Option | Description
------- | -----------
`skip` | How many lines of the start of the file should you skip?
`col_names` | What would you like to use as the column names?
`col_types` | What would you like to use as the column types?
`n_max` | How many rows do you want to read in?
`na` | How are missing values coded?
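For example, here is a hedged sketch combining a couple of these options (the values are just for illustration):
```{r eval = FALSE}
# Read in only the first 10 rows of data, treating both empty strings and
# "NA" as missing values
ebola_peek <- read_csv(file = "data/country_timeseries.csv",
                       n_max = 10,
                       na = c("", "NA"))
```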
```{block, type = 'rmdnote'}
Remember that you can always find out more about a function by looking at its
help file. For example, check out `?read_delim` and `?read_fwf`. You can also
use the help files to determine the default values of arguments for each
function.
```
So far, we've only looked at functions from the `readr` package for reading in data
files. There is a similar family of functions available in base R, the `read.table`
family of functions. The `readr` functions are very similar to the base R
`read.table` functions, but have some more sensible defaults. Compared to the
`read.table` family of functions, the `readr` functions:
- Work better with large datasets: faster, includes progress bar
- Have more sensible defaults (e.g., characters default to characters, not factors)
I recommend that you always use the `readr` functions rather than their base R
alternatives, given these advantages. However, you are likely to come across code
that someone else has written that uses one of these base R functions, so it's
helpful to know what they are. Functions in the `read.table` family include:
- `read.csv`
- `read.delim`
- `read.table`
- `read.fwf`
```{block, type = 'rmdnote'}
The `readr` package is a member of the tidyverse of packages. The *tidyverse*
describes an evolving collection of R packages with a common philosophy, and
they are unquestionably changing the way people code in R. Many were developed
in part or full by Hadley Wickham and others at RStudio. Many of these packages
are less than ten years old, but have been rapidly adopted by the R community.
As a result, newer examples of R code will often look very different from the
code in older R scripts, including examples in books that are more than a few
years old. In this course, I'll focus on "tidyverse" functions when possible,
but I do put in details about base R equivalent functions or processes at some
points---this will help you interpret older code. You can install all the
tidyverse packages using `install.packages("tidyverse")`; running `library("tidyverse")`
then makes all the tidyverse functions available for use.
```
### Reading in other file types
Later in the course, we'll talk about how to open a variety of other file types
in R. However, you might find it immediately useful to be able to read in files
from other statistical programs.
There are two "tidyverse" packages---`readxl` and `haven`---that help with this.
They allow you to read in files from the following formats:
```{r echo = FALSE}
read_funcs <- data.frame(file_type = c("Excel",
"SAS",
"SPSS",
"Stata"),
func = c("`read_excel`",
"`read_sas`",
"`read_spss`",
"`read_stata`"),
pkg = c("`readxl`",
"`haven`",
"`haven`",
"`haven`"))
knitr::kable(read_funcs, col.names = c("File type", "Function", "Package"))
```
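As a hedged sketch, reading an Excel or SAS file looks very similar to reading a flat file; the file names below are hypothetical, just to show the pattern:
```{r eval = FALSE}
library(package = "readxl")
library(package = "haven")

# Hypothetical file names, to illustrate the calls
lab_data <- read_excel(path = "data/lab_measurements.xlsx", sheet = 1)
trial_data <- read_sas(data_file = "data/trial_results.sas7bdat")
```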
## Directories and pathnames
### Directory structure
<iframe width="768" height="480" src="https://www.youtube.com/embed/Ll5seRpzekY?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Download](https://github.com/geanders/RProgrammingForResearch/raw/master/slides/CourseNotes_Week2_part_3.pdf)
a pdf of the lecture slides for this video.
So far, we've only looked at reading in files that are located in your current
working directory. For example, if you're working in an R Project, by default
the project will open with that directory as the working directory, so you can
read files that are saved in that project's main directory using only the file
name as a reference.
However, you'll often want to read in files that are located somewhere else on
your computer, or even files that are saved on another computer (for example,
data files that are posted online). Doing this is very similar to reading in a
file that is in your current working directory; the only difference is that you
need to give R some directions so it can find the file.
The most common case will be reading in files in a subdirectory of your current
working directory. For example, you may have created a "data" subdirectory in
one of your R Projects directories to keep all the project's data files in the
same place while keeping the structure of the main directory fairly clean. In
this case, you'll need to direct R into that subdirectory when you want to read
one of those files.
To understand how to give R these directions, you need to have some
understanding of the directory structure of your computer. It seems a bit of a
pain and a bit complex to have to think about computer directory structure in
the "basics" part of this class, but this structure is not terribly complex once
you get the idea of it. There are a couple of very good reasons why it's worth
learning now.
First, many of the most frustrating errors you get when you start using R trace
back to understanding directories and filepaths. For example, when you try to
read a file into R using only the filename, and that file is not in your current
working directory, you will get an error like:
```
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") : cannot open file 'Ex.csv': No such file or directory
```
This error is especially frustrating when you're new to R because it happens at
the very beginning of your analysis---you can't even get your data in. Also, if
you don't understand a bit about working directories and how R looks for the
file you're asking it to find, you'd have no idea where to start to fix this
error. Second, once you understand how to use pathnames, especially relative
pathnames, to tell R how to find a file that is in a directory other than your
working directory, you will be able to organize all of your files for a project
in a much cleaner way. For example, you can create a directory for your project,
then create one subdirectory to store all of your R scripts, and another to
store all of your data, and so on. This can help you keep very complex projects
more structured and easier to navigate. We'll talk about these ideas more in the
course sections on Reproducible Research, but it's good to start learning how
directory structures and filepaths work early.
Your computer organizes files through a collection of directories. Chances are,
you are fairly used to working with these in your daily life already (although
you may call them "folders" rather than "directories"). For example, you've
probably created new directories to store data files and Word documents for a
specific project.
Figure \@ref(fig:filedirstructure) gives an example file directory structure for
a hypothetical computer. Directories are shown in blue, and files in green.
```{r filedirstructure, echo = FALSE, fig.cap= "An example of file directory structure.", out.width = "600pt", fig.align = "center"}
knitr::include_graphics("figures/FileDirectoryStructure.png")
```
You can notice a few interesting things from Figure \@ref(fig:filedirstructure).
First, you might notice the structure includes a few of the directories that you
use a lot on your own computer, like `Desktop`, `Documents`, and `Downloads`.
Next, the directory at the very top is the computer's root directory, `/`. For a
PC, the root directory might be something like `C:`; for Unix and Macs, it's
usually `/`. Finally, if you look closely, you'll notice that it's possible to
have different files in different locations of the directory structure with the
same file name. For example, in the figure, there are files named
`heat_mort.csv` in both the `CourseText` directory and in the `example_data`
directory. These are two different files with different contents, but they can
have the same name as long as they're in different directories. This fact---that
you can have files with the same name in different places---should help you
appreciate how useful it is that R requires you to give very clear directions to
describe exactly which file you want R to read in, if you aren't reading in
something in your current working directory.
You will have a home directory somewhere near the top of your structure,
although it's likely not your root directory. In the hypothetical computer in
Figure \@ref(fig:filedirstructure), the home directory is
`/Users/brookeanderson`. I'll describe just a bit later how you can figure out
what your own home directory is on your own computer.
### Working directory
When you run R, it's always running from within some working directory, which
will be one of the directories somewhere in your computer's directory structure.
At any time, you can figure out which directory R is working in by running the
command `getwd()` (short for "get working directory"). For example, my R session
is currently running in the following directory:
```{r}
getwd()
```
This means that, for my current R session, R is working in the
`RProgrammingForResearch` subdirectory of my `brookeanderson` directory (which
is my home directory).
There are a few general rules for which working directory R will start in when
you open an R session. These are not absolute rules, but they're generally true.
If you have R closed, and you open it by double-clicking on an R script, then R
will generally open with, as its working directory, the directory in which that
script is stored. This is often a very convenient convention, because any
of the data you'll be reading in for that script is usually somewhere near where the
script file is saved in the directory structure. If you open R by
double-clicking on the R icon in "Applications" (or something similar on a PC),
R will start in its default working directory. You can find out what this is, or
change it, in RStudio's "Preferences". Finally, if you open
an R Project, R will start in that project's working directory (the directory in
which the `.Rproj` file for the project is stored).
### File and directory pathnames
<iframe width="768" height="480" src="https://www.youtube.com/embed/vutYRvQj36c?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Download](https://github.com/geanders/RProgrammingForResearch/raw/master/slides/CourseNotes_Week2_part_4.pdf)
a pdf of the lecture slides for this video.
Once you get a picture of how your directories and files are organized, you can
use pathnames, either absolute or relative, to read in files from different
directories than your current working directory. Pathnames are the directions
for getting to a directory or file stored on your computer.
When you want to reference a directory or file, you can use one of two types of
pathnames:
- *Relative pathname*: How to get to the file or directory from your current
working directory
- *Absolute pathname*: How to get to the file or directory from anywhere on the
computer
Absolute pathnames are a bit more straightforward conceptually, because they
don't depend on your current working directory. However, they're also a lot
longer to write, and they're much less convenient if you'll be sharing some of
your code with other people who might run it on their own computers. I'll
explain this second point a bit more later in this section.
*Absolute pathnames* give the full directions to a directory or file, starting
all the way at the root directory. For example, the `heat_mort.csv` file in the
`CourseText` directory has the absolute pathname:
```
"/Users/brookeanderson/Desktop/RCourseFall2015/CourseText/heat_mort.csv"
```
You can use this absolute pathname to read this file in using any of the `readr`
functions to read in data. This absolute pathname will *always* work, regardless
of your current working directory, because it gives directions from the
root---it will always be clear to R exactly what file you're talking about.
Here's the code to use to read that file in using the `read_csv` function with
the file's absolute pathname:
```{r eval = FALSE}
heat_mort <- read_csv(file = "/Users/brookeanderson/Desktop/RCourseFall2015/CourseText/heat_mort.csv")
```
The *relative pathname*, on the other hand, gives R the directions for how to
get to a directory or file from the current working directory. If the file or
directory you're looking for is pretty close to your current working directory
in your directory structure, then a relative pathname can be a much shorter way
to tell R how to get to the file than an absolute pathname. However, the
relative pathname depends on your current working directory---the relative
pathname that works perfectly when you're working in one directory will not work
at all once you move into a different working directory.
As an example of a relative pathname, say you're working in the directory
`RCourseFall2015` within the file structure shown in Figure
\@ref(fig:filedirstructure), and you want to read in the `heat_mort.csv` file in
the `CourseText` directory. To get from `RCourseFall2015` to that file, you'd
need to look in the subdirectory `CourseText`, where you could find
`heat_mort.csv`. Therefore, the relative pathname from your working directory
would be:
```
"CourseText/heat_mort.csv"
```
You can use this relative pathname to tell R where to find and read in the file:
```{r eval = FALSE}
heat_mort <- read_csv("CourseText/heat_mort.csv")
```
While this pathname is much shorter than the absolute pathname, it is important
to remember that if you are working in a different working directory, this
relative pathname would no longer work.
There are a few abbreviations that can be really useful for pathnames:
```{r echo = FALSE}
dirpath_shortcuts <- data.frame(abbr = c("`~`", "`.`", "`..`", "`../..`"),
meaning = c("Home directory",
"Current working directory",
"One directory up from current working directory",
"Two directories up from current working directory"))
knitr::kable(dirpath_shortcuts, col.names = c("Shorthand", "Meaning"))
```
These can help you keep pathnames shorter and also help you move "up-and-over"
to get to a file or directory that's not on the direct path below your current
working directory.
For example, my home directory is `/Users/brookeanderson`. You can use the
`list.files()` function to list all the files in a directory. If I wanted to
list all the files in my `Downloads` directory, which is a direct sub-directory
of my home directory, I could use:
```
list.files("~/Downloads")
```
As a second example, say I was working in the working directory `CourseText`,
but I wanted to read in the `heat_mort.csv` file that's in the `example_data`
directory, rather than the one in the `CourseText` directory. I can use the `..`
abbreviation to tell R to look up one directory from the current working
directory, and then down within a subdirectory of that. The relative pathname in
this case is:
```
"../Week2_Aug31/example_data/heat_mort.csv"
```
This tells R to look one directory up from the working directory (`..`) (this is
also known as the **parent directory** of the current directory), which in this
case is `RCourseFall2015`, and then down within that directory to
`Week2_Aug31`, then to `example_data`, and then to look there for the file
`heat_mort.csv`.
The relative pathname to read this file while R is working in the `CourseText`
directory would be:
```
heat_mort <- read_csv("../Week2_Aug31/example_data/heat_mort.csv")
```
Relative pathnames "break" as soon as you try them from a different working
directory---this fact might make it seem like you would never want to use
relative pathnames, and would always want to use absolute ones instead, even if
they're longer. If that were the only consideration (length of the pathname),
then perhaps that would be true. However, as you do more and more in R, there
will likely be many occasions when you want to use relative pathnames instead.
They are particularly useful if you ever want to share a whole directory, with
all subdirectories, with a collaborator. In that case, if you've used relative
pathnames, all the code should work fine for the person you share with, even
though they're running it on their own computer. Conversely, if you'd used
absolute pathnames, none of them would work on another computer, because the
"top" of the directory structure (i.e., for me, `/Users/brookeanderson/Desktop`)
will almost definitely be different for your collaborator's computer than it is
for yours.
If you're getting errors reading in files, and you think it's related to the
relative pathname you're using, it's often helpful to use `list.files()` to make
sure the file you're trying to load is in the directory that the relative
pathname you're using is directing R to.
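For example, a quick check on the relative pathname used above might look like this (a sketch, assuming the same directory structure as in the figure):
```{r eval = FALSE}
# List the files R can see in the directory the relative pathname points to;
# if "heat_mort.csv" isn't in this listing, the pathname (or your working
# directory) is the problem
list.files(path = "../Week2_Aug31/example_data")
```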
### Diversion: `paste`
This is a good opportunity to explain how to use some functions that can be very
helpful when you're using relative or absolute pathnames: `paste()` and
`paste0()`.
As a bit of background, it's important to understand that you can save a
pathname (absolute or relative) as an R object, and then use
that R object in calls to later functions like `list.files()` and `read_csv()`.
For example, to use the absolute pathname to read the `heat_mort.csv` file in
the `CourseText` directory, you could run:
```
my_file <- "/Users/brookeanderson/Desktop/RCourseFall2015/CourseText/heat_mort.csv"
heat_mort <- read_csv(file = my_file)
```
You'll notice from this code that the pathname to get to a directory or file can
sometimes become ungainly and long. To keep your code cleaner, you can address
this by using the `paste` or `paste0` functions. These functions come in handy
in a lot of other applications, too, but this is a good place to introduce them.
The `paste()` function is very straightforward. It takes, as inputs, a series of
different character strings you want to join together, and it pastes them
together in a single character string. (As a note, this means that your result
vector will only be one element long, for basic uses of `paste()`, while the
inputs will be several different character strings.) You separate all the
different things you want to paste together with commas in the function
call. For example:
```{r}
paste("Sunday", "Monday", "Tuesday")
length(x = c("Sunday", "Monday", "Tuesday"))
length(x = paste("Sunday", "Monday", "Tuesday"))
```
The `paste()` function has an option called `sep = `. This tells R what you want
to use to separate the values you're pasting together in the output. The default
is for R to use a space, as shown in the example above. To change the separator,
you can change this option, and you can put in just about anything you want. For
example, if you wanted to paste all the values together without spaces, you
could use `sep = ""`:
```{r}
paste("Sunday", "Monday", "Tuesday", sep = "")
```
As a shortcut, instead of using the `sep = ""` option, you could achieve the
same thing using the `paste0` function. This function is almost exactly like
`paste`, but it uses `""` (i.e., no space) as the separator between
values by default:
```{r}
paste0("Sunday", "Monday", "Tuesday")
```
With pathnames, you will usually not want spaces. Therefore, you could think
about using `paste0()` to write an object with the pathname you want to
ultimately use in commands like `list.files()` and `setwd()`. This will allow
you to keep your code cleaner, since you can now divide long pathnames over
multiple lines:
```
my_file <- paste0("/Users/brookeanderson/Desktop/",
"RCourseFall2015/CourseText/heat_mort.csv")
heat_mort <- read_csv(file = my_file)
```
You will end up using `paste()` and `paste0()` for many other applications, but
this is a good example of how you can start using these functions and get a
feel for them.
### Reading online flat files
So far, I've only shown you how to read in data from files that are saved to
your computer. R can also read in data directly from the web. If a flat file is
posted online, you can read it into R in almost exactly the same way that you
would read in a local file. The only difference is that you will use the file's
url instead of a local file path for the `file` argument.
With the `read_*` family of functions, you can do this both for flat files from
a non-secure webpage (i.e., one that starts with `http`) and for files from a
secure webpage (i.e., one that starts with `https`), including GitHub and
Dropbox.
For example, to read in data from this [GitHub repository of Ebola
data](https://raw.githubusercontent.com/cmrivers/ebola/master/country_timeseries.csv),
you can run:
```{r message = FALSE}
library("dplyr")
url <- paste0("https://raw.githubusercontent.com/cmrivers/",
"ebola/master/country_timeseries.csv")
ebola <- read_csv(file = url)
slice(.data = select(.data = ebola, 1:3), 1:3)
```
## Data cleaning
<iframe width="768" height="480" src="https://www.youtube.com/embed/uimHCVdYgwM?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Download](https://github.com/geanders/RProgrammingForResearch/raw/master/slides/CourseNotes_Week2_part_5.pdf)
a pdf of the lecture slides for this video.
Once you have loaded data into R, you'll likely need to clean it up a little
before you're ready to analyze it. Here, I'll go over the first steps of how to
do that with functions from `dplyr`, another package in the tidyverse. Here are
some of the most common data-cleaning tasks, along with the corresponding
`dplyr` function for each:
```{r echo = FALSE}
library(package = "tibble")
dc_func <- tibble(task = c("Renaming columns",
"Filtering to certain rows",
"Selecting certain columns",
"Adding or changing columns"),
func = c("`rename`",
"`filter`",
"`select`",
"`mutate`"))
knitr::kable(dc_func, col.names = c("Task", "`dplyr` function"))
```
In this section, I'll describe how to do each of these tasks; in later
sections of the course, we'll go much deeper into how to clean messier data.
For the examples in this section, I'll use example data listing guests to the
Daily Show. To follow along with these examples, you'll want to load that data,
as well as load the `dplyr` package (install it using `install.packages` if you
have not already):
```{r message = FALSE}
library("dplyr")
daily_show <- read_csv(file = "data/daily_show_guests.csv", skip = 4)
```
I've used this data in previous examples, but as a reminder, here's what it looks like:
```{r}
head(x = daily_show)
```
### Renaming columns
A first step is often re-naming the columns of the dataframe. It can be hard to
work with a column name that:
- is long
- includes spaces or other special characters
- includes upper case
You can check out the column names for a dataframe using the `colnames`
function, with the dataframe object as the argument. Several of the column names
in `daily_show` have some of these issues:
```{r}
colnames(x = daily_show)
```
To rename these columns, use `rename`. The basic syntax is:
```{r eval = FALSE}
## Generic code
rename(.data = dataframe,
new_column_name_1 = old_column_name_1,
new_column_name_2 = old_column_name_2)
```
The first argument is the dataframe for which you'd like to rename columns. Then
you list each pair of new versus old column names (in that order) for each of
the columns you want to rename. To rename columns in the `daily_show` data using
`rename`, for example, you would run:
```{r}
daily_show <- rename(.data = daily_show,
year = YEAR,
job = GoogleKnowlege_Occupation,
date = Show,
category = Group,
guest_name = Raw_Guest_List)
head(x = daily_show, 3)
```
```{block, type = 'rmdwarning'}
Many of the functions in tidyverse packages, including those in `dplyr`, provide
exceptions to the general rule about when to use quotation marks versus when to
leave them off. Unfortunately, this may make it a bit hard to learn when to use
quotation marks versus when not to. One way to think about this, which is a bit
of an oversimplification but can help as you're learning, is to assume that
anytime you're using a `dplyr` function, every column in the dataframe you're
working with has been loaded to your R session as its own object.
```
### Selecting columns
<iframe width="659" height="412" src="https://www.youtube.com/embed/yAYm_V6Y1Cw?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Download](https://github.com/geanders/RProgrammingForResearch/raw/master/slides/CourseNotes_Week2_part_6.pdf)
a pdf of the lecture slides for this video.
Next, you may want to select only some columns of the dataframe. You can use the
`select` function from `dplyr` to subset the dataframe to certain columns. The
basic structure of this command is:
```{r eval = FALSE}
## Generic code
select(.data = dataframe, column_name_1, column_name_2, ...)
```
In this call, you first specify the dataframe to use and then list all of the
column names to include in the output dataframe, with commas between each column
name. For example, to select all columns in `daily_show` except `year` (since
that information is already included in `date`), run:
```{r}
select(.data = daily_show, job, date, category, guest_name)
```
```{block, type = 'rmdwarning'}
Don't forget that, if you want to keep any of these changes in the saved object, you
must reassign the object to be the output of the function. If you run one of these
cleaning functions without reassigning the object, R will print out the result,
but the object itself won't change. You can take advantage of this, as I've done
in this example, to look at the result of applying a function to a dataframe
without changing the original dataframe. This can be helpful as you're figuring
out how to write your code.
```
The `select` function also provides some time-saving tools. For example, in the
last example, we wanted all the columns except one. Instead of writing out all
the columns we want, we can use `-` with the columns we don't want to save time:
```{r}
daily_show <- select(.data = daily_show, -year)
head(x = daily_show, n = 3)
```
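Another time-saver is the set of "select helpers" that can be used inside `select`. As a hedged sketch (the choice of columns is just for illustration), `ends_with` keeps any column whose name ends in a given string:
```{r eval = FALSE}
# Keep the `date` column plus any column whose name ends in "name"
# (in `daily_show`, that's `guest_name`); `starts_with` and `contains`
# work in the same way
select(.data = daily_show, date, ends_with("name"))
```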
### Extracting and arranging rows
<iframe width="659" height="412" src="https://www.youtube.com/embed/vIxlSmbFsQ0?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Download](https://github.com/geanders/RProgrammingForResearch/raw/master/slides/CourseNotes_Week2_part_7.pdf)
a pdf of the lecture slides for this video.
There are a number of different actions you can take to extract or rearrange rows from a dataset to clean it up for your current analysis, including:
- `slice`
- `sample_n`
- `arrange`
- `filter`
We'll go through what each of these does and how to use them.
### Slicing and sampling
The `slice` function from the `dplyr` package can extract certain rows based on
their position in the dataframe.
We already looked at this a bit in the last chapter, where you learned how to use
`slice` to limit a dataframe to certain rows by row position.
For example, to print the first three rows of the `daily_show` data, you can
run:
```{r}
library("dplyr")
slice(.data = daily_show, 1:3)
```
There are some other functions you can use to extract rows from a tibble dataframe,
all from the `dplyr` package.
For example, if you'd like to extract a random subset of *n* rows, you can use the
`sample_n` function, with the `size` argument set to *n*.
To extract two random rows from the `daily_show` dataframe, run:
```{r}
sample_n(tbl = daily_show, size = 2)
```
### Arranging rows
There is also a function, `arrange`, you can use to re-order the rows in a
dataframe based on the values in one of its columns. The syntax for this
function is:
```{r eval = FALSE}
# Generic code
arrange(.data = dataframe, column_to_order_by)
```
If the column you order by is a character vector, this function will order the
rows alphabetically by the values in that column. If it is a numeric
vector, it will order the rows by the numeric values.
For example, we could reorder the `daily_show` data alphabetically by the values
in the `category` column with the following call:
```{r}
daily_show <- arrange(.data = daily_show, category)
slice(.data = daily_show, 1:3)
```
If you want the ordering to be reversed (e.g., from "z" to "a" for character
vectors, from higher to lower for numeric, latest to earliest for a Date), you
can include the `desc` function.
For example, to reorder the `daily_show` data by job category in descending
alphabetical order, you can run:
```{r}
daily_show <- arrange(.data = daily_show,
desc(x = category))
slice(.data = daily_show, 1:2)
```
### Filtering to certain rows
Next, you might want to filter the dataset down so that it only includes certain
rows. For example, you might want to get a dataset with only the guests from
2015, or only guests who are scientists.
You can use the `filter` function from `dplyr` to filter a dataframe down to a
subset of rows. The syntax is:
```{r eval = FALSE}
## Generic code
filter(.data = dataframe, logical expression)
```
The `logical expression` in this call gives the condition that a row must meet to
be included in the output data frame. For example, if you want to create a data
frame that only includes guests who were scientists, you can run:
```{r}
scientists <- filter(.data = daily_show,
category == "Science")
head(x = scientists)
```
To build a logical expression to use in `filter`, you'll need to know some of R's
logical operators. Some of the most commonly used ones are:
Operator | Meaning | Example
--------- | ------- | ---------------------------------
`==` | equals | `category == "Acting"`
`!=` | does not equal | `category != "Comedy"`
`%in%` | is in | `category %in% c("Academic", "Science")`
`is.na()` | is missing | `is.na(job)`
`!is.na()`| is not missing | `!is.na(job)`
`&` | and | `year == 2015 & category == "Academic"`
`|` | or | `year == 2015 | category == "Academic"`
We'll use these logical operators and expressions a lot more as the course
continues, so they're worth learning by heart.
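As a hedged example of combining these operators, using the `daily_show` dataframe from above:
```{r eval = FALSE}
# Guests from the "Academic" or "Science" categories whose job is not missing
filter(.data = daily_show,
       category %in% c("Academic", "Science") & !is.na(job))
```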
```{block, type = 'rmdwarning'}
Two common errors with logical operators are: (1) Using `=` instead of `==` to
check if two values are equal; and (2) Using `== NA` instead of `is.na` to check
for missing observations.
```
### Add or change columns
<iframe width="659" height="412" src="https://www.youtube.com/embed/_oC0WrrTf5Q?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Download](https://github.com/geanders/RProgrammingForResearch/raw/master/slides/CourseNotes_Week2_part_8.pdf)
a pdf of the lecture slides for this video.
You can change a column or add a new column using the `mutate` function from the
`dplyr` package. That function has the syntax:
```{r eval = FALSE}
# Generic code
mutate(.data = dataframe,
changed_column = function(changed_column),
new_column = function(other arguments))
```
For example, the `job` column in `daily_show` sometimes uses upper case and
sometimes does not (this call uses the `unique` function to list only unique
values in this column):
```{r}
head(x = unique(x = daily_show$job), n = 10)
```
To make all the observations in the `job` column lowercase, use the `str_to_lower` function from the `stringr` package within a `mutate` function:
```{r}
library(package = "stringr")
mutate(.data = daily_show,
job = str_to_lower(string = job))
```
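`mutate` can also add entirely new columns. Here is a hedged sketch that adds a column giving the number of characters in each guest's name; the new column name is made up for illustration:
```{r eval = FALSE}
library(package = "stringr")
# Add a new column, `name_length`, computed from an existing column
mutate(.data = daily_show,
       name_length = str_length(string = guest_name))
```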
## Piping
<iframe width="659" height="412" src="https://www.youtube.com/embed/fSJR0cM1qT0?list=PLuGPtwgRXxqIXVqTKUrnMT9Mhpl7eqCxl" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Download](https://github.com/geanders/RProgrammingForResearch/raw/master/slides/CourseNotes_Week2_part_9.pdf) a pdf of the lecture slides for this video.
So far, I've shown how to use these `dplyr` functions one at a time to clean up
the data, reassigning the dataframe object at each step. However, there's a
trick called "piping" that will let you clean up your code a bit when you're
writing a script to clean data.
If you look at the format of these `dplyr` functions, you'll notice that they
all take a dataframe as their first argument:
```{r eval = FALSE}
# Generic code
rename(.data = dataframe,
new_column_name_1 = old_column_name_1,
new_column_name_2 = old_column_name_2)
select(.data = dataframe,
column_name_1, column_name_2)
filter(.data = dataframe,
logical expression)
mutate(.data = dataframe,
changed_column = function(changed_column),
new_column = function(other arguments))
```
Without piping, you have to reassign the dataframe object at each step of this
cleaning if you want the changes saved in the object:
```{r eval = FALSE, message = FALSE}
daily_show <- read_csv(file = "data/daily_show_guests.csv",
skip = 4)
daily_show <- rename(.data = daily_show,
job = GoogleKnowlege_Occupation,
date = Show,
category = Group,
guest_name = Raw_Guest_List)
daily_show <- select(.data = daily_show,
-YEAR)
daily_show <- mutate(.data = daily_show,
job = str_to_lower(job))
daily_show <- filter(.data = daily_show,
category == "Science")
```
Piping lets you clean this code up a bit. It can be used with any function that
inputs a dataframe as its first argument. It *pipes* the dataframe created right
before the pipe (`%>%`) into the function right after the pipe. With piping,
therefore, the same data cleaning looks like:
```{r message = FALSE}
daily_show <- read_csv(file = "data/daily_show_guests.csv",
skip = 4) %>%
rename(job = GoogleKnowlege_Occupation,
date = Show,
category = Group,
guest_name = Raw_Guest_List) %>%
select(-YEAR) %>%
mutate(job = str_to_lower(job)) %>%
filter(category == "Science")
```