Chapter 4 Cleaning Up Data

Data is not born tidy, so we must learn to clean it up.

4.1 Learning Objectives

  • Describe and use the read_csv function.
  • Describe and use the str_replace function.
  • Describe and use the is.numeric and as.numeric functions.
  • Describe and use the map function and its kin.
  • Describe and use pre-allocation to capture the results of loops.

4.2 What is our starting point?

Here is a sample of data from the original data set raw/infant_hiv.csv, where ... shows values elided to make the segment readable:

"Early Infant Diagnosis: Percentage of infants born to women living with HIV...",,,,,,,,,,,,,,,,,,,,,,,,,,,,,
,,2009,,,2010,,,2011,,,2012,,,2013,,,2014,,,2015,,,2016,,,2017,,,
ISO3,Countries,Estimate,hi,lo,Estimate,hi,lo,Estimate,hi,lo,Estimate,hi,lo,...
AFG,Afghanistan,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,
ALB,Albania,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,
DZA,Algeria,-,-,-,-,-,-,38%,42%,35%,23%,25%,21%,55%,60%,50%,27%,30%,25%,23%,25%,21%,33%,37%,31%,61%,68%,57%,
AGO,Angola,-,-,-,3%,4%,2%,5%,7%,4%,6%,8%,5%,15%,20%,12%,10%,14%,8%,6%,8%,5%,1%,2%,1%,1%,2%,1%,
... many more rows ...
YEM,Yemen,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,
ZMB,Zambia,59%,70%,53%,27%,32%,24%,70%,84%,63%,74%,88%,67%,64%,76%,57%,91%,>95%,81%,43%,52%,39%,43%,51%,39%,46%,54%,41%,
ZWE,Zimbabwe,-,-,-,12%,15%,10%,23%,28%,20%,38%,47%,33%,57%,70%,49%,54%,67%,47%,59%,73%,51%,71%,88%,62%,65%,81%,57%,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
,,2009,,,2010,,,2011,,,2012,,,2013,,,2014,,,2015,,,2016,,,2017,,,
,,Estimate,hi,lo,Estimate,hi,lo,Estimate,hi,lo,Estimate,hi,lo,...
Region,East Asia and the Pacific,25%,30%,22%,35%,42%,29%,30%,37%,26%,32%,38%,27%,28%,34%,24%,26%,31%,22%,31%,37%,27%,30%,35%,25%,28%,33%,24%,
,Eastern and Southern Africa,23%,29%,20%,44%,57%,37%,48%,62%,40%,54%,69%,46%,51%,65%,43%,62%,80%,53%,62%,79%,52%,54%,68%,45%,62%,80%,53%,
,Eastern Europe and Central Asia,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,
... several more rows ...
,Sub-Saharan Africa,16%,22%,13%,34%,46%,28%,37%,50%,30%,43%,57%,35%,41%,54%,33%,50%,66%,41%,50%,66%,41%,45%,60%,37%,52%,69%,42%,
,Global,17%,23%,13%,33%,45%,27%,36%,49%,29%,41%,55%,34%,40%,53%,32%,48%,64%,39%,49%,64%,40%,44%,59%,36%,51%,67%,41%,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
Indicator definition: Percentage of infants born to women living with HIV... ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
Note: Data are not available if country did not submit data...,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
Data source: Global AIDS Monitoring 2018 and UNAIDS 2018 estimates,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
"For more information on this indicator, please visit the guidance:...",,,,,,,,,,,,,,,,,,,,,,,,,,,,,
"For more information on the data, visit data.unicef.org",,,,,,,,,,,,,,,,,,,,,,,,,,,,,

This is a mess—no, more than that, it is an affront to decency. There are comments mixed with data, values’ actual indices have to be synthesized by combining column headings from two rows (two thirds of which have to be carried forward from previous columns), and so on. We want to create the tidy data found in results/infant_hiv.csv:

country,year,estimate,hi,lo
AFG,2009,NA,NA,NA
AFG,2010,NA,NA,NA
AFG,2011,NA,NA,NA
AFG,2012,NA,NA,NA
...
ZWE,2016,0.71,0.88,0.62
ZWE,2017,0.65,0.81,0.57

To bring this data to a state of grace will take some trial and effort, which we shall do in stages.

4.3 How do I inspect the raw data?

We will begin by reading the data into a tibble:

raw <- read_csv("data/infant_hiv.csv")
Warning: Missing column names filled in: 'X2' [2], 'X3' [3], 'X4' [4],
'X5' [5], 'X6' [6], 'X7' [7], 'X8' [8], 'X9' [9], 'X10' [10], 'X11' [11],
'X12' [12], 'X13' [13], 'X14' [14], 'X15' [15], 'X16' [16], 'X17' [17],
'X18' [18], 'X19' [19], 'X20' [20], 'X21' [21], 'X22' [22], 'X23' [23],
'X24' [24], 'X25' [25], 'X26' [26], 'X27' [27], 'X28' [28], 'X29' [29],
'X30' [30]
Parsed with column specification:
cols(
  .default = col_character(),
  X30 = col_logical()
)
See spec(...) for full column specifications.
head(raw)
# A tibble: 6 x 30
  `Early Infant D… X2    X3    X4    X5    X6    X7    X8    X9    X10  
  <chr>            <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 <NA>             <NA>  2009  <NA>  <NA>  2010  <NA>  <NA>  2011  <NA> 
2 ISO3             Coun… Esti… hi    lo    Esti… hi    lo    Esti… hi   
3 AFG              Afgh… -     -     -     -     -     -     -     -    
4 ALB              Alba… -     -     -     -     -     -     -     -    
5 DZA              Alge… -     -     -     -     -     -     38%   42%  
6 AGO              Ango… -     -     -     3%    4%    2%    5%    7%   
# … with 20 more variables: X11 <chr>, X12 <chr>, X13 <chr>, X14 <chr>,
#   X15 <chr>, X16 <chr>, X17 <chr>, X18 <chr>, X19 <chr>, X20 <chr>,
#   X21 <chr>, X22 <chr>, X23 <chr>, X24 <chr>, X25 <chr>, X26 <chr>,
#   X27 <chr>, X28 <chr>, X29 <chr>, X30 <lgl>

All right: R isn’t able to infer column names, so it uses the entire first comment string as a very long column name and then makes up names for the other columns. Looking at the file, the second row has years (spaced at three-column intervals), and the row below it has the ISO3 country code, the country’s name, and then “Estimate”, “hi”, and “lo” repeated for every year. We are going to have to combine what’s in the second and third rows, so we’re going to have to do some work no matter which rows we skip or keep. Since we want the ISO3 code and the country name, let’s skip the first two rows.
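
Combining the two header rows is mechanical once the pattern is clear: each year spans three columns, and the Estimate/hi/lo labels repeat beneath it. As a sketch (hypothetical helper code, not used in what follows), the combined names could be built like this:

```r
# Hypothetical sketch: build combined column names for 2009-2017,
# three fields per year, plus the two leading identifier columns.
years <- as.character(2009:2017)
fields <- c("Estimate", "hi", "lo")
header <- c("ISO3", "Countries",
            paste(rep(years, each = 3), rep(fields, times = length(years)), sep = "_"))
head(header, 5)
# [1] "ISO3" "Countries" "2009_Estimate" "2009_hi" "2009_lo"
```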

raw <- read_csv("data/infant_hiv.csv", skip = 2)
Warning: Missing column names filled in: 'X30' [30]
Warning: Duplicated column names deduplicated: 'Estimate' =>
'Estimate_1' [6], 'hi' => 'hi_1' [7], 'lo' => 'lo_1' [8], 'Estimate' =>
'Estimate_2' [9], 'hi' => 'hi_2' [10], 'lo' => 'lo_2' [11], 'Estimate' =>
'Estimate_3' [12], 'hi' => 'hi_3' [13], 'lo' => 'lo_3' [14], 'Estimate' =>
'Estimate_4' [15], 'hi' => 'hi_4' [16], 'lo' => 'lo_4' [17], 'Estimate' =>
'Estimate_5' [18], 'hi' => 'hi_5' [19], 'lo' => 'lo_5' [20], 'Estimate' =>
'Estimate_6' [21], 'hi' => 'hi_6' [22], 'lo' => 'lo_6' [23], 'Estimate' =>
'Estimate_7' [24], 'hi' => 'hi_7' [25], 'lo' => 'lo_7' [26], 'Estimate' =>
'Estimate_8' [27], 'hi' => 'hi_8' [28], 'lo' => 'lo_8' [29]
Parsed with column specification:
cols(
  .default = col_character(),
  X30 = col_logical()
)
See spec(...) for full column specifications.
head(raw)
# A tibble: 6 x 30
  ISO3  Countries Estimate hi    lo    Estimate_1 hi_1  lo_1  Estimate_2
  <chr> <chr>     <chr>    <chr> <chr> <chr>      <chr> <chr> <chr>     
1 AFG   Afghanis… -        -     -     -          -     -     -         
2 ALB   Albania   -        -     -     -          -     -     -         
3 DZA   Algeria   -        -     -     -          -     -     38%       
4 AGO   Angola    -        -     -     3%         4%    2%    5%        
5 AIA   Anguilla  -        -     -     -          -     -     -         
6 ATG   Antigua … -        -     -     -          -     -     -         
# … with 21 more variables: hi_2 <chr>, lo_2 <chr>, Estimate_3 <chr>,
#   hi_3 <chr>, lo_3 <chr>, Estimate_4 <chr>, hi_4 <chr>, lo_4 <chr>,
#   Estimate_5 <chr>, hi_5 <chr>, lo_5 <chr>, Estimate_6 <chr>,
#   hi_6 <chr>, lo_6 <chr>, Estimate_7 <chr>, hi_7 <chr>, lo_7 <chr>,
#   Estimate_8 <chr>, hi_8 <chr>, lo_8 <chr>, X30 <lgl>

That’s a bit of an improvement, but why are all the columns character instead of numbers? This happens because:

  1. our CSV file uses - (a single dash) to show missing data, and
  2. all of our numbers end with %, which means those values actually are character strings.

We will tackle the first problem by setting na = c("-") in our read_csv call (since we should never do ourselves what a library function will do for us):

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
... warnings as before ...
head(raw)
# A tibble: 6 x 30
  ISO3  Countries Estimate hi    lo    Estimate_1 hi_1  lo_1  Estimate_2
  <chr> <chr>     <chr>    <chr> <chr> <chr>      <chr> <chr> <chr>     
1 AFG   Afghanis… <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>      
2 ALB   Albania   <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>      
3 DZA   Algeria   <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  38%       
4 AGO   Angola    <NA>     <NA>  <NA>  3%         4%    2%    5%        
5 AIA   Anguilla  <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>      
6 ATG   Antigua … <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>      
# … with 21 more variables: hi_2 <chr>, lo_2 <chr>, Estimate_3 <chr>,
#   hi_3 <chr>, lo_3 <chr>, Estimate_4 <chr>, hi_4 <chr>, lo_4 <chr>,
#   Estimate_5 <chr>, hi_5 <chr>, lo_5 <chr>, Estimate_6 <chr>,
#   hi_6 <chr>, lo_6 <chr>, Estimate_7 <chr>, hi_7 <chr>, lo_7 <chr>,
#   Estimate_8 <chr>, hi_8 <chr>, lo_8 <chr>, X30 <lgl>
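
The na argument deserves a quick aside. readr will also parse literal CSV text passed as a string, which makes it easy to check on a toy input (made up for illustration) that the dash really does become NA:

```r
library(readr)

# "-" in the text becomes NA, so y parses as a number rather than a character.
toy <- read_csv("x,y\n1,-\n2,3", na = c("-"))
toy$y
# [1] NA  3
```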

That’s progress. We now need to strip the percentage signs and convert what’s left to numeric values. To simplify our lives, let’s get the ISO3 and Countries columns out of the way. We will save the ISO3 values for later use (and because it will illustrate a point about data hygiene that we want to make later, but which we don’t want to reveal just yet).

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
... warnings as before ...
countries <- raw$ISO3
body <- raw %>%
  filter(-ISO3, -Countries)
Error in -ISO3: invalid argument to unary operator

In the Hollywood version of this lesson, we would sigh heavily at this point as we realize that we should have called select, not filter. Once we make that change, we can move forward once again:

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
... warnings as before ...
countries <- raw$ISO3
body <- raw %>%
  select(-ISO3, -Countries)
head(body)
# A tibble: 6 x 28
  Estimate hi    lo    Estimate_1 hi_1  lo_1  Estimate_2 hi_2  lo_2 
  <chr>    <chr> <chr> <chr>      <chr> <chr> <chr>      <chr> <chr>
1 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>       <NA>  <NA> 
2 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>       <NA>  <NA> 
3 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  38%        42%   35%  
4 <NA>     <NA>  <NA>  3%         4%    2%    5%         7%    4%   
5 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>       <NA>  <NA> 
6 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>       <NA>  <NA> 
# … with 19 more variables: Estimate_3 <chr>, hi_3 <chr>, lo_3 <chr>,
#   Estimate_4 <chr>, hi_4 <chr>, lo_4 <chr>, Estimate_5 <chr>,
#   hi_5 <chr>, lo_5 <chr>, Estimate_6 <chr>, hi_6 <chr>, lo_6 <chr>,
#   Estimate_7 <chr>, hi_7 <chr>, lo_7 <chr>, Estimate_8 <chr>,
#   hi_8 <chr>, lo_8 <chr>, X30 <lgl>
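
The underlying distinction is worth remembering: filter chooses rows that satisfy a condition, while select chooses columns by name. A toy tibble (made up for illustration) shows the difference:

```r
library(dplyr)
library(tibble)

toy <- tibble(iso = c("AFG", "ALB"), est = c(NA, 0.5))
filter(toy, !is.na(est))  # keeps rows: only the ALB row survives
select(toy, -iso)         # keeps columns: drops iso, leaving est
```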

But wait. Weren’t there some aggregate lines of data at the end of our input? What happened to them?

tail(countries, n = 25)
 [1] "YEM"                                                                                                                                                       
 [2] "ZMB"                                                                                                                                                       
 [3] "ZWE"                                                                                                                                                       
 [4] ""                                                                                                                                                          
 [5] ""                                                                                                                                                          
 [6] ""                                                                                                                                                          
 [7] "Region"                                                                                                                                                    
 [8] ""                                                                                                                                                          
 [9] ""                                                                                                                                                          
[10] ""                                                                                                                                                          
[11] ""                                                                                                                                                          
[12] ""                                                                                                                                                          
[13] ""                                                                                                                                                          
[14] ""                                                                                                                                                          
[15] ""                                                                                                                                                          
[16] "Super-region"                                                                                                                                              
[17] ""                                                                                                                                                          
[18] ""                                                                                                                                                          
[19] ""                                                                                                                                                          
[20] ""                                                                                                                                                          
[21] "Indicator definition: Percentage of infants born to women living with HIV receiving a virological test for HIV within two months of birth"                 
[22] "Note: Data are not available if country did not submit data to Global AIDS Monitoring or if estimates of pregnant women living with HIV are not published."
[23] "Data source: Global AIDS Monitoring 2018 and UNAIDS 2018 estimates"                                                                                        
[24] "For more information on this indicator, please visit the guidance: http://www.unaids.org/sites/default/files/media_asset/global-aids-monitoring_en.pdf"    
[25] "For more information on the data, visit data.unicef.org"                                                                                                   

Once again the actor playing our part on screen sighs heavily. How are we to trim this? Since there is only one file, we can manually count the number of rows we are interested in (or rather, open the file with an editor or spreadsheet program, scroll down, and check the line number), and then slice there. This is a very bad idea if we’re planning to use this script on other files—we should instead look for the first blank line or the entry for Zimbabwe or something like that—but let’s revisit the problem once we have our data in place.
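
If we did want to find the cutoff programmatically (a sketch only, using a hypothetical helper; the script below still hard-codes the count), we could rely on the fact that only the country rows have a non-empty ISO3 entry before the first blank:

```r
# Hypothetical helper: count leading rows that have a real ISO3 code.
# Assumes the country block ends at the first missing or empty entry.
count_country_rows <- function(iso3) {
  first_blank <- which(is.na(iso3) | iso3 == "")[1]
  if (is.na(first_blank)) length(iso3) else first_blank - 1
}

count_country_rows(c("AFG", "ALB", "ZWE", "", "Region"))
# [1] 3
```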

num_rows <- 192
raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
... warnings as before ...
sliced <- slice(raw, 1:num_rows)
countries <- sliced$ISO3
tail(countries, n = 5)
[1] "VEN" "VNM" "YEM" "ZMB" "ZWE"

Notice that we’re counting rows not including the two we’re skipping, which means that the 192 in the call to slice above corresponds to row 195 of our original data: 195, not 194, because we’re using the first row of unskipped data as headers. And yes, you are in fact making that faint whimpering sound you now hear. You will hear it often when dealing with real-world data…

And notice also that we are slicing, then extracting the column containing the countries. We did, in a temporary version of this script, peel off the countries, slice those, and then wonder why our main data table still had unwanted data at the end. Vigilance, my friends—vigilance shall be our watchword, and in light of that, we shall first test our plan for converting our strings to numbers:

fixture <- c(NA, "1%", "10%", "100%")
result <- as.numeric(str_replace(fixture, "%", "")) / 100
result
[1]   NA 0.01 0.10 1.00

And as a further check:

is.numeric(result)
[1] TRUE
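
A side note on is.numeric, which is easy to trip over: it reports on the vector as a whole, so a numeric vector containing NAs still counts as numeric, while a bare NA (which is logical) does not:

```r
is.numeric(c(NA, 0.01))  # TRUE: a numeric vector may contain NAs
is.numeric(NA)           # FALSE: a lone NA is logical, not numeric
```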

The function is.numeric returns TRUE for a numeric vector even when that vector contains NAs, so it is doing the right thing here, and so are we. Our updated conversion script is now:

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
... warnings as before ...
sliced <- slice(raw, 1:192)
countries <- sliced$ISO3
body <- raw %>%
  select(-ISO3, -Countries)
numbers <- as.numeric(str_replace(body, "%", "")) / 100
Warning in stri_replace_first_regex(string, pattern,
fix_replacement(replacement), : argument is not an atomic vector; coercing
Warning: NAs introduced by coercion
is.numeric(numbers)
[1] TRUE

Oh dear. It appears that some function that str_replace is calling is expecting an atomic vector, not a tibble. It worked for our test case because that was a character vector, but tibbles have more structure than that.
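
We can confirm that the container is the problem rather than the contents: str_replace is perfectly happy with a plain character vector (a quick check on hand-made values), and NA passes through untouched:

```r
library(stringr)

# An atomic character vector works fine; a tibble (a list of such vectors) does not.
str_replace(c("26%", NA, "77%"), "%", "")
# [1] "26" NA   "77"
```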

The second complaint is that NAs were introduced, which is troubling because we didn’t get a complaint when we had actual NAs in our data. However, is.numeric tells us that all of our results are numbers. Let’s take a closer look:

is_tibble(body)
[1] TRUE
is_tibble(numbers)
[1] FALSE

Perdition. After browsing the data, we realize that some entries are ">95%", i.e., there is a greater-than sign as well as a percent sign in the text. We will need to regularize those before we do any conversions.

Before that, however, let’s see if we can get rid of the percent signs. The obvious way is to use str_replace(body, "%", ""), but that doesn’t work: str_replace works on vectors, but a tibble is a list of vectors. Instead, we can use a higher-order function called map to apply str_replace to each column in turn and get rid of the percent signs:

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
... warnings as before ...
sliced <- slice(raw, 1:192)
countries <- sliced$ISO3
body <- raw %>%
  select(-ISO3, -Countries)
trimmed <- map(body, str_replace, pattern = "%", replacement = "")
head(trimmed)
$Estimate
  [1] NA         NA         NA         NA         NA         NA        
  [7] NA         NA         NA         NA         "26"       NA        
 [13] NA         NA         NA         ">95"      NA         "77"      
 [19] NA         NA         "7"        NA         NA         "25"      
 [25] NA         NA         "3"        NA         ">95"      NA        
 [31] "27"       NA         "1"        NA         NA         NA        
 [37] "5"        NA         "8"        NA         "92"       NA        
 [43] NA         "83"       NA         NA         NA         NA        
 [49] NA         NA         NA         "28"       "1"        "4"       
 [55] NA         NA         NA         NA         "4"        NA        
 [61] NA         NA         NA         NA         "61"       NA        
 [67] NA         NA         NA         NA         NA         NA        
 [73] NA         NA         "61"       NA         NA         NA        
 [79] NA         "2"        NA         NA         NA         NA        
 [85] NA         NA         NA         ">95"      NA         NA        
 [91] NA         NA         NA         NA         NA         "43"      
 [97] "5"        NA         NA         NA         NA         NA        
[103] "37"       NA         "8"        NA         NA         NA        
[109] NA         NA         NA         NA         NA         "2"       
[115] NA         NA         NA         NA         "2"        NA        
[121] NA         "50"       NA         "4"        NA         NA        
[127] NA         "1"        NA         NA         NA         NA        
[133] NA         NA         "1"        NA         NA         NA        
[139] ">95"      NA         NA         "58"       NA         NA        
[145] NA         NA         NA         NA         "11"       NA        
[151] NA         NA         NA         NA         NA         NA        
[157] NA         NA         NA         NA         NA         NA        
[163] "9"        NA         NA         NA         NA         "1"       
[169] NA         NA         NA         "7"        NA         NA        
[175] NA         NA         NA         NA         "8"        "78"      
[181] NA         NA         "13"       NA         NA         "0"       
[187] NA         NA         NA         NA         "59"       NA        
[193] ""         "2009"     "Estimate" "25"       "23"       NA        
[199] "24"       "2"        NA         "1"        "8"        NA        
[205] "7"        "72"       "16"       "17"       ""         ""        
[211] ""         ""         ""         ""        

$hi
  [1] NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    "35" 
...

Perdition once again. The problem now is that map produces a raw list as output. The function we want is map_dfr, which applies a function to each element (here, each column) and returns the combined results as a tibble rather than a list. (Its sibling map_dfc also returns a tibble, but stitches its pieces together by columns rather than by rows.)
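
A toy contrast (hand-made values) makes the difference concrete: map hands back a list, while map_dfr binds per-element results, here one-row tibbles, into a single tibble:

```r
library(purrr)
library(tibble)

map(1:3, function(i) i^2)                          # a plain list: 1, 4, 9
map_dfr(1:3, function(i) tibble(n = i, sq = i^2))  # one 3-row tibble
```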

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
... warnings as before ...
sliced <- slice(raw, 1:192)
countries <- sliced$ISO3
body <- raw %>%
  select(-ISO3, -Countries)
trimmed <- map_dfr(body, str_replace, pattern = "%", replacement = "")
head(trimmed)
# A tibble: 6 x 28
  Estimate hi    lo    Estimate_1 hi_1  lo_1  Estimate_2 hi_2  lo_2 
  <chr>    <chr> <chr> <chr>      <chr> <chr> <chr>      <chr> <chr>
1 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>       <NA>  <NA> 
2 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>       <NA>  <NA> 
3 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  38         42    35   
4 <NA>     <NA>  <NA>  3          4     2     5          7     4    
5 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>       <NA>  <NA> 
6 <NA>     <NA>  <NA>  <NA>       <NA>  <NA>  <NA>       <NA>  <NA> 
# … with 19 more variables: Estimate_3 <chr>, hi_3 <chr>, lo_3 <chr>,
#   Estimate_4 <chr>, hi_4 <chr>, lo_4 <chr>, Estimate_5 <chr>,
#   hi_5 <chr>, lo_5 <chr>, Estimate_6 <chr>, hi_6 <chr>, lo_6 <chr>,
#   Estimate_7 <chr>, hi_7 <chr>, lo_7 <chr>, Estimate_8 <chr>,
#   hi_8 <chr>, lo_8 <chr>, X30 <chr>

Now to tackle those ">95%" values. It turns out that str_replace uses regular expressions, not just direct string matches, so we can get rid of the > at the same time as we get rid of the %. We will check by looking at the first Estimate column, which earlier inspection informed us had at least one ">95%" in it:

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
... warnings as before ...
sliced <- slice(raw, 1:192)
countries <- sliced$ISO3
body <- raw %>%
  select(-ISO3, -Countries)
trimmed <- map_dfr(body, str_replace, pattern = ">?(\\d+)%", replacement = "\\1")
trimmed$Estimate
  [1] NA         NA         NA         NA         NA         NA        
  [7] NA         NA         NA         NA         "26"       NA        
 [13] NA         NA         NA         "95"       NA         "77"      
 [19] NA         NA         "7"        NA         NA         "25"      
 [25] NA         NA         "3"        NA         "95"       NA        
 [31] "27"       NA         "1"        NA         NA         NA        
 [37] "5"        NA         "8"        NA         "92"       NA        
 [43] NA         "83"       NA         NA         NA         NA        
 [49] NA         NA         NA         "28"       "1"        "4"       
 [55] NA         NA         NA         NA         "4"        NA        
 [61] NA         NA         NA         NA         "61"       NA        
 [67] NA         NA         NA         NA         NA         NA        
 [73] NA         NA         "61"       NA         NA         NA        
 [79] NA         "2"        NA         NA         NA         NA        
 [85] NA         NA         NA         "95"       NA         NA        
 [91] NA         NA         NA         NA         NA         "43"      
 [97] "5"        NA         NA         NA         NA         NA        
[103] "37"       NA         "8"        NA         NA         NA        
[109] NA         NA         NA         NA         NA         "2"       
[115] NA         NA         NA         NA         "2"        NA        
...
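
As a cross-check on the pattern itself, the same kind of fixture test as before: >? makes the greater-than sign optional, (\d+) captures the digits, and \1 in the replacement keeps only what was captured:

```r
library(stringr)

str_replace(c("28%", ">95%"), ">?(\\d+)%", "\\1")
# [1] "28" "95"
```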

Excellent. We can now use map_dfr to convert the columns to numeric percentages using an anonymous function that we define inside the map_dfr call itself:

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
... warnings as before ...
sliced <- slice(raw, 1:192)
countries <- sliced$ISO3
body <- raw %>%
  select(-ISO3, -Countries)
trimmed <- map_dfr(body, str_replace, pattern = ">?(\\d+)%", replacement = "\\1")
percents <- map_dfr(trimmed, function(col) as.numeric(col) / 100)
Warning in .f(.x[[i]], ...): NAs introduced by coercion
Warning in .f(.x[[i]], ...): NAs introduced by coercion
... the same warning repeated 27 times in all ...
head(percents)
# A tibble: 6 x 28
  Estimate    hi    lo Estimate_1  hi_1  lo_1 Estimate_2  hi_2  lo_2
     <dbl> <dbl> <dbl>      <dbl> <dbl> <dbl>      <dbl> <dbl> <dbl>
1       NA    NA    NA      NA    NA    NA         NA    NA    NA   
2       NA    NA    NA      NA    NA    NA         NA    NA    NA   
3       NA    NA    NA      NA    NA    NA          0.38  0.42  0.35
4       NA    NA    NA       0.03  0.04  0.02       0.05  0.07  0.04
5       NA    NA    NA      NA    NA    NA         NA    NA    NA   
6       NA    NA    NA      NA    NA    NA         NA    NA    NA   
# … with 19 more variables: Estimate_3 <dbl>, hi_3 <dbl>, lo_3 <dbl>,
#   Estimate_4 <dbl>, hi_4 <dbl>, lo_4 <dbl>, Estimate_5 <dbl>,
#   hi_5 <dbl>, lo_5 <dbl>, Estimate_6 <dbl>, hi_6 <dbl>, lo_6 <dbl>,
#   Estimate_7 <dbl>, hi_7 <dbl>, lo_7 <dbl>, Estimate_8 <dbl>,
#   hi_8 <dbl>, lo_8 <dbl>, X30 <dbl>

Twenty-seven warnings is rather a lot, so let’s see what running warnings() produces right after the as.numeric call:

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
Warning: Missing column names filled in: 'X30' [30]
Warning: Duplicated column names deduplicated: 'Estimate' =>
'Estimate_1' [6], 'hi' => 'hi_1' [7], 'lo' => 'lo_1' [8], 'Estimate' =>
'Estimate_2' [9], 'hi' => 'hi_2' [10], 'lo' => 'lo_2' [11], 'Estimate' =>
'Estimate_3' [12], 'hi' => 'hi_3' [13], 'lo' => 'lo_3' [14], 'Estimate' =>
'Estimate_4' [15], 'hi' => 'hi_4' [16], 'lo' => 'lo_4' [17], 'Estimate' =>
'Estimate_5' [18], 'hi' => 'hi_5' [19], 'lo' => 'lo_5' [20], 'Estimate' =>
'Estimate_6' [21], 'hi' => 'hi_6' [22], 'lo' => 'lo_6' [23], 'Estimate' =>
'Estimate_7' [24], 'hi' => 'hi_7' [25], 'lo' => 'lo_7' [26], 'Estimate' =>
'Estimate_8' [27], 'hi' => 'hi_8' [28], 'lo' => 'lo_8' [29]
Parsed with column specification:
cols(
  .default = col_character(),
  X30 = col_logical()
)
See spec(...) for full column specifications.
sliced <- slice(raw, 1:192)
countries <- sliced$ISO3
body <- raw %>%
  select(-ISO3, -Countries)
trimmed <- map_dfr(body, str_replace, pattern = ">?(\\d+)%", replacement = "\\1")
percents <- map_dfr(trimmed, function(col) as.numeric(col) / 100)
Warning in .f(.x[[i]], ...): NAs introduced by coercion
Warning in .f(.x[[i]], ...): NAs introduced by coercion
... the same warning repeated 27 times in all ...
warnings()

Something is still not right. The first Estimate column looks all right, so let’s have a look at the second column, hi:

trimmed$hi
  [1] NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   "35" NA   NA   NA  
 [15] NA   "95" NA   "89" NA   NA   "10" NA   NA   "35" NA   NA   "5"  NA  
 [29] "95" NA   "36" NA   "1"  NA   NA   NA   "6"  NA   "12" NA   "95" NA  
 [43] NA   "95" NA   NA   NA   NA   NA   NA   NA   "36" "1"  "4"  NA   NA  
 [57] NA   NA   "6"  NA   NA   NA   NA   NA   "77" NA   NA   NA   NA   NA  
 [71] NA   NA   NA   NA   "74" NA   NA   NA   NA   "2"  NA   NA   NA   NA  
 [85] NA   NA   NA   "95" NA   NA   NA   NA   NA   NA   NA   "53" "7"  NA  
 [99] NA   NA   NA   NA   "44" NA   "9"  NA   NA   NA   NA   NA   NA   NA  
[113] NA   "2"  NA   NA   NA   NA   "2"  NA   NA   "69" NA   "7"  NA   NA  
[127] NA   "1"  NA   NA   NA   NA   NA   NA   "1"  NA   NA   NA   "95" NA  
[141] NA   "75" NA   NA   NA   NA   NA   NA   "13" NA   NA   NA   NA   NA  
[155] NA   NA   NA   NA   NA   NA   NA   NA   "11" NA   NA   NA   NA   "1" 
[169] NA   NA   NA   "12" NA   NA   NA   NA   NA   NA   "9"  "95" NA   NA  
[183] "16" NA   NA   "1"  NA   NA   NA   NA   "70" NA   ""   ""   "hi" "30"
[197] "29" NA   "32" "2"  NA   "2"  "12" NA   "9"  "89" "22" "23" ""   ""  
[211] ""   ""   ""   ""  

Where are those empty strings coming from? Let’s backtrack by examining the hi column of each of our intermediate variables interactively in the console…

…and there’s our bug. We are creating a variable called sliced that has only the rows we care about, but then using the full table in raw to create body. It’s a simple mistake, and one that could easily have slipped by us. Here is our revised script:

raw <- read_csv("data/infant_hiv.csv", skip = 2, na = c("-"))
Warning: Missing column names filled in: 'X30' [30]
Warning: Duplicated column names deduplicated: 'Estimate' =>
'Estimate_1' [6], 'hi' => 'hi_1' [7], 'lo' => 'lo_1' [8], 'Estimate' =>
'Estimate_2' [9], 'hi' => 'hi_2' [10], 'lo' => 'lo_2' [11], 'Estimate' =>
'Estimate_3' [12], 'hi' => 'hi_3' [13], 'lo' => 'lo_3' [14], 'Estimate' =>
'Estimate_4' [15], 'hi' => 'hi_4' [16], 'lo' => 'lo_4' [17], 'Estimate' =>
'Estimate_5' [18], 'hi' => 'hi_5' [19], 'lo' => 'lo_5' [20], 'Estimate' =>
'Estimate_6' [21], 'hi' => 'hi_6' [22], 'lo' => 'lo_6' [23], 'Estimate' =>
'Estimate_7' [24], 'hi' => 'hi_7' [25], 'lo' => 'lo_7' [26], 'Estimate' =>
'Estimate_8' [27], 'hi' => 'hi_8' [28], 'lo' => 'lo_8' [29]
Parsed with column specification:
cols(
  .default = col_character(),
  X30 = col_logical()
)
See spec(...) for full column specifications.
sliced <- slice(raw, 1:192)
countries <- sliced$ISO3
body <- sliced %>%
  select(-ISO3, -Countries)
trimmed <- map_dfr(body, str_replace, pattern = ">?(\\d+)%", replacement = "\\1")
percents <- map_dfr(trimmed, function(col) as.numeric(col) / 100)

and here are the checks on the head:

head(percents)
# A tibble: 6 x 28
  Estimate    hi    lo Estimate_1  hi_1  lo_1 Estimate_2  hi_2  lo_2
     <dbl> <dbl> <dbl>      <dbl> <dbl> <dbl>      <dbl> <dbl> <dbl>
1       NA    NA    NA      NA    NA    NA         NA    NA    NA   
2       NA    NA    NA      NA    NA    NA         NA    NA    NA   
3       NA    NA    NA      NA    NA    NA          0.38  0.42  0.35
4       NA    NA    NA       0.03  0.04  0.02       0.05  0.07  0.04
5       NA    NA    NA      NA    NA    NA         NA    NA    NA   
6       NA    NA    NA      NA    NA    NA         NA    NA    NA   
# … with 19 more variables: Estimate_3 <dbl>, hi_3 <dbl>, lo_3 <dbl>,
#   Estimate_4 <dbl>, hi_4 <dbl>, lo_4 <dbl>, Estimate_5 <dbl>,
#   hi_5 <dbl>, lo_5 <dbl>, Estimate_6 <dbl>, hi_6 <dbl>, lo_6 <dbl>,
#   Estimate_7 <dbl>, hi_7 <dbl>, lo_7 <dbl>, Estimate_8 <dbl>,
#   hi_8 <dbl>, lo_8 <dbl>, X30 <dbl>

and tail:

tail(percents)
# A tibble: 6 x 28
  Estimate    hi    lo Estimate_1  hi_1  lo_1 Estimate_2  hi_2  lo_2
     <dbl> <dbl> <dbl>      <dbl> <dbl> <dbl>      <dbl> <dbl> <dbl>
1    NA     NA   NA         NA    NA    NA         NA    NA    NA   
2    NA     NA   NA         NA    NA    NA         NA    NA    NA   
3    NA     NA   NA         NA    NA    NA          0.31  0.37  0.26
4    NA     NA   NA         NA    NA    NA         NA    NA    NA   
5     0.59   0.7  0.53       0.27  0.32  0.24       0.7   0.84  0.63
6    NA     NA   NA          0.12  0.15  0.1        0.23  0.28  0.2 
# … with 19 more variables: Estimate_3 <dbl>, hi_3 <dbl>, lo_3 <dbl>,
#   Estimate_4 <dbl>, hi_4 <dbl>, lo_4 <dbl>, Estimate_5 <dbl>,
#   hi_5 <dbl>, lo_5 <dbl>, Estimate_6 <dbl>, hi_6 <dbl>, lo_6 <dbl>,
#   Estimate_7 <dbl>, hi_7 <dbl>, lo_7 <dbl>, Estimate_8 <dbl>,
#   hi_8 <dbl>, lo_8 <dbl>, X30 <dbl>

Comparing this to the raw data file convinces us that yes, we are now converting the percentages properly, which means we are halfway home.
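Before moving on, it is worth spot-checking the pattern we used. A toy example (separate from the script) confirms that str_replace handles both plain and ">"-prefixed values:

```r
library(stringr)

# ">?" makes a leading ">" optional, and the replacement "\\1"
# keeps only the digits captured by the parenthesized group.
str_replace(c("26%", ">95%"), ">?(\\d+)%", "\\1")
# [1] "26" "95"
```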

4.4 How do I tidy the data?

We now have numeric values in percents and corresponding ISO3 codes in countries. What we do not have is tidy data: countries are not associated with records, years are not recorded at all, and the column headers for percents have mostly been manufactured for us by R. We must now sew these parts together like Dr. Frankenstein’s trusty assistant Igor (who, like so many lab assistants, did most of the actual work but was given only crumbs of credit).

We could write a loop to grab three columns at a time and relabel them, but a more concise solution makes use of two functions called gather and separate. gather takes multiple columns and collapses them into key-value pairs. To show how it works, let’s create a small tibble by hand using the function tribble. The first few arguments use ~ as a prefix operator to define column names, and all of the other values are then put into a tibble with those columns:

small <- tribble(
  ~ISO, ~est, ~hi, ~lo,
  'ABC', 0.25, 0.3, 0.2,
  'DEF', 0.55, 0.6, 0.5
)
small
# A tibble: 2 x 4
  ISO     est    hi    lo
  <chr> <dbl> <dbl> <dbl>
1 ABC    0.25   0.3   0.2
2 DEF    0.55   0.6   0.5

and then gather the three columns est, hi, and lo:

small %>%
  gather(key = "kind", value = "reported", est, hi, lo)
# A tibble: 6 x 3
  ISO   kind  reported
  <chr> <chr>    <dbl>
1 ABC   est       0.25
2 DEF   est       0.55
3 ABC   hi        0.3 
4 DEF   hi        0.6 
5 ABC   lo        0.2 
6 DEF   lo        0.5 

The key and value parameters tell gather to create new columns with the specified names (in this case, kind and reported). The first of these columns (kind in our case) is filled by repeating the column headings from the original tibble; the second column (reported in our case) is then filled with the original tibble’s values.
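For completeness, gather has an inverse called spread, which turns key-value pairs back into columns. A quick round trip on small (a sketch using the same tidyverse functions) shows that the two are complementary:

```r
library(tidyverse)

small <- tribble(
  ~ISO, ~est, ~hi, ~lo,
  'ABC', 0.25, 0.3, 0.2,
  'DEF', 0.55, 0.6, 0.5
)

# Round trip: gather collapses the three columns into key-value pairs,
# and spread turns those pairs back into columns.
small %>%
  gather(key = "kind", value = "reported", est, hi, lo) %>%
  spread(key = kind, value = reported)
# recovers the original wide table
```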

separate splits one column into two. For example, if we have the year and the heading type in a single column:

single <- tribble(
  ~combined, ~value,
  '2009-est', 123,
  '2009-hi',  456,
  '2009-lo',  789,
  '2010-est', 987,
  '2010-hi',  654,
  '2010-lo',  321
)
single
# A tibble: 6 x 2
  combined value
  <chr>    <dbl>
1 2009-est   123
2 2009-hi    456
3 2009-lo    789
4 2010-est   987
5 2010-hi    654
6 2010-lo    321

we can get the year and the heading into separate columns by separating on the - character:

single %>%
  separate(combined, sep = "-", c("year", "kind"))
# A tibble: 6 x 3
  year  kind  value
  <chr> <chr> <dbl>
1 2009  est     123
2 2009  hi      456
3 2009  lo      789
4 2010  est     987
5 2010  hi      654
6 2010  lo      321
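Note that separate leaves both new columns as character. If we want year to be numeric right away, separate’s convert argument will re-type the pieces for us; here is a small variation on the call above (a sketch, not part of our script):

```r
library(tidyverse)

single <- tribble(
  ~combined, ~value,
  '2009-est', 123,
  '2009-hi',  456
)

# convert = TRUE runs type.convert on the new columns,
# so year comes back as an integer instead of a character string.
single %>%
  separate(combined, c("year", "kind"), sep = "-", convert = TRUE)
```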

Our strategy is therefore going to be:

  1. Replace the double column headers in the existing data with a single header that combines the year with the kind.
  2. Gather the data so that the year-kind values are in a single column.
  3. Split that column.

We’ve seen the tools we need for the second and third step; the first involves a little bit of list manipulation. Let’s start by repeating "est", "hi", and "lo" as many times as we need them:

num_years <- 1 + 2017 - 2009
kinds <- rep(c("est", "hi", "lo"), num_years)
kinds
 [1] "est" "hi"  "lo"  "est" "hi"  "lo"  "est" "hi"  "lo"  "est" "hi" 
[12] "lo"  "est" "hi"  "lo"  "est" "hi"  "lo"  "est" "hi"  "lo"  "est"
[23] "hi"  "lo"  "est" "hi"  "lo" 

As you can probably guess from its name, rep repeats things a specified number of times, and as noted previously, a vector of vectors is flattened into a single vector.
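The flattening is easy to see in isolation:

```r
# c() flattens nested vectors: there is no vector-of-vectors in R,
# only one longer vector.
c(c("est", "hi"), c("lo"))
# [1] "est" "hi"  "lo"
```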

What about the years? We want to wind up with:

c("2009", "2009", "2009", "2010", "2010", "2010", ...)

i.e., with each year repeated three times. rep(2009:2017, 3) would repeat the whole vector end-to-end rather than repeating each element, but we can get there with map:

years <- map(2009:2017, rep, 3)
years
[[1]]
[1] 2009 2009 2009

[[2]]
[1] 2010 2010 2010

[[3]]
[1] 2011 2011 2011

[[4]]
[1] 2012 2012 2012

[[5]]
[1] 2013 2013 2013

[[6]]
[1] 2014 2014 2014

[[7]]
[1] 2015 2015 2015

[[8]]
[1] 2016 2016 2016

[[9]]
[1] 2017 2017 2017

That’s almost right, but map hasn’t flattened the list for us. Luckily, we can use unlist to do that:

years <- map(2009:2017, rep, 3) %>% unlist()
years
 [1] 2009 2009 2009 2010 2010 2010 2011 2011 2011 2012 2012 2012 2013 2013
[15] 2013 2014 2014 2014 2015 2015 2015 2016 2016 2016 2017 2017 2017
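As an aside, base R’s rep can also produce this pattern directly through its each argument; map is still worth learning because it generalizes to arbitrary functions:

```r
# each = 3 repeats every element three times before moving on
# to the next, rather than repeating the whole vector.
rep(2009:2017, each = 3)
# [1] 2009 2009 2009 2010 2010 2010 ...
```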

We can now combine the years and kinds by pasting the two vectors together with "-" as a separator:

headers <- paste(years, kinds, sep = "-")
headers
 [1] "2009-est" "2009-hi"  "2009-lo"  "2010-est" "2010-hi"  "2010-lo" 
 [7] "2011-est" "2011-hi"  "2011-lo"  "2012-est" "2012-hi"  "2012-lo" 
[13] "2013-est" "2013-hi"  "2013-lo"  "2014-est" "2014-hi"  "2014-lo" 
[19] "2015-est" "2015-hi"  "2015-lo"  "2016-est" "2016-hi"  "2016-lo" 
[25] "2017-est" "2017-hi"  "2017-lo" 

Let’s use this to relabel the columns of percents (which holds our data without the ISO country codes):

names(percents) <- headers
Warning: The `names` must have length 28, not 27.
This warning is displayed once per session.
percents
# A tibble: 192 x 28
   `2009-est` `2009-hi` `2009-lo` `2010-est` `2010-hi` `2010-lo` `2011-est`
        <dbl>     <dbl>     <dbl>      <dbl>     <dbl>     <dbl>      <dbl>
 1         NA        NA        NA      NA        NA        NA         NA   
 2         NA        NA        NA      NA        NA        NA         NA   
 3         NA        NA        NA      NA        NA        NA          0.38
 4         NA        NA        NA       0.03      0.04      0.02       0.05
 5         NA        NA        NA      NA        NA        NA         NA   
 6         NA        NA        NA      NA        NA        NA         NA   
 7         NA        NA        NA      NA        NA        NA          0.13
 8         NA        NA        NA      NA        NA        NA         NA   
 9         NA        NA        NA      NA        NA        NA         NA   
10         NA        NA        NA      NA        NA        NA         NA   
# … with 182 more rows, and 21 more variables: `2011-hi` <dbl>,
#   `2011-lo` <dbl>, `2012-est` <dbl>, `2012-hi` <dbl>, `2012-lo` <dbl>,
#   `2013-est` <dbl>, `2013-hi` <dbl>, `2013-lo` <dbl>, `2014-est` <dbl>,
#   `2014-hi` <dbl>, `2014-lo` <dbl>, `2015-est` <dbl>, `2015-hi` <dbl>,
#   `2015-lo` <dbl>, `2016-est` <dbl>, `2016-hi` <dbl>, `2016-lo` <dbl>,
#   `2017-est` <dbl>, `2017-hi` <dbl>, `2017-lo` <dbl>, NA <dbl>

Uh oh: the warning message tells us that percents has the wrong number of columns. Inspecting the tibble in the console, we see that the last column is full of NAs, which we can prove like this:

all(is.na(percents[,ncol(percents)]))
[1] TRUE

Let’s relabel our data again and then drop the empty column. (There are other ways to do this, but I find the steps easier to read after the fact when they are spelled out like this.)

headers <- c(headers, "empty")
names(percents) <- headers
percents <- select(percents, -empty)
percents
# A tibble: 192 x 27
   `2009-est` `2009-hi` `2009-lo` `2010-est` `2010-hi` `2010-lo` `2011-est`
        <dbl>     <dbl>     <dbl>      <dbl>     <dbl>     <dbl>      <dbl>
 1         NA        NA        NA      NA        NA        NA         NA   
 2         NA        NA        NA      NA        NA        NA         NA   
 3         NA        NA        NA      NA        NA        NA          0.38
 4         NA        NA        NA       0.03      0.04      0.02       0.05
 5         NA        NA        NA      NA        NA        NA         NA   
 6         NA        NA        NA      NA        NA        NA         NA   
 7         NA        NA        NA      NA        NA        NA          0.13
 8         NA        NA        NA      NA        NA        NA         NA   
 9         NA        NA        NA      NA        NA        NA         NA   
10         NA        NA        NA      NA        NA        NA         NA   
# … with 182 more rows, and 20 more variables: `2011-hi` <dbl>,
#   `2011-lo` <dbl>, `2012-est` <dbl>, `2012-hi` <dbl>, `2012-lo` <dbl>,
#   `2013-est` <dbl>, `2013-hi` <dbl>, `2013-lo` <dbl>, `2014-est` <dbl>,
#   `2014-hi` <dbl>, `2014-lo` <dbl>, `2015-est` <dbl>, `2015-hi` <dbl>,
#   `2015-lo` <dbl>, `2016-est` <dbl>, `2016-hi` <dbl>, `2016-lo` <dbl>,
#   `2017-est` <dbl>, `2017-hi` <dbl>, `2017-lo` <dbl>

It’s time to put the country codes back on the table, move the year and kind from column headers to a column with gather, and then split that column with separate:

final <- percents %>%
  mutate(country = countries) %>%
  gather(key = "year_kind", value = "value", -country) %>%
  separate(year_kind, c("year", "kind"))
final
# A tibble: 5,184 x 4
   country year  kind  value
   <chr>   <chr> <chr> <dbl>
 1 AFG     2009  est      NA
 2 ALB     2009  est      NA
 3 DZA     2009  est      NA
 4 AGO     2009  est      NA
 5 AIA     2009  est      NA
 6 ATG     2009  est      NA
 7 ARG     2009  est      NA
 8 ARM     2009  est      NA
 9 AUS     2009  est      NA
10 AUT     2009  est      NA
# … with 5,174 more rows

Here’s everything in one function:

clean_infant_hiv <- function(filename, num_rows) {
  # Read the raw data and keep only the country rows.
  raw <- read_csv(filename, skip = 2, na = c("-")) %>%
    slice(1:num_rows)
  
  # Save the country codes to reattach later.
  countries <- raw$ISO3
  
  # Convert data values to numeric percentages.
  percents <- raw %>%
    select(-ISO3, -Countries) %>%
    map_dfr(str_replace, pattern = ">?(\\d+)%", replacement = "\\1") %>%
    map_dfr(function(col) as.numeric(col) / 100)

  # Change the headers on the percentages.
  num_years <- 1 + 2017 - 2009
  kinds <- rep(c("est", "hi", "lo"), num_years)
  years <- map(2009:2017, rep, 3) %>% unlist()
  headers <- c(paste(years, kinds, sep = "-"), "empty")
  names(percents) <- headers
  
  # Drop the empty column and stitch everything back together.
  percents %>%
    select(-empty) %>%
    mutate(country = countries) %>%
    gather(key = "year_kind", value = "value", -country) %>%
    separate(year_kind, c("year", "kind"))
}
clean_infant_hiv("data/infant_hiv.csv", 192)
Warning: Missing column names filled in: 'X30' [30]
Warning: Duplicated column names deduplicated: 'Estimate' =>
'Estimate_1' [6], 'hi' => 'hi_1' [7], 'lo' => 'lo_1' [8], 'Estimate' =>
'Estimate_2' [9], 'hi' => 'hi_2' [10], 'lo' => 'lo_2' [11], 'Estimate' =>
'Estimate_3' [12], 'hi' => 'hi_3' [13], 'lo' => 'lo_3' [14], 'Estimate' =>
'Estimate_4' [15], 'hi' => 'hi_4' [16], 'lo' => 'lo_4' [17], 'Estimate' =>
'Estimate_5' [18], 'hi' => 'hi_5' [19], 'lo' => 'lo_5' [20], 'Estimate' =>
'Estimate_6' [21], 'hi' => 'hi_6' [22], 'lo' => 'lo_6' [23], 'Estimate' =>
'Estimate_7' [24], 'hi' => 'hi_7' [25], 'lo' => 'lo_7' [26], 'Estimate' =>
'Estimate_8' [27], 'hi' => 'hi_8' [28], 'lo' => 'lo_8' [29]
Parsed with column specification:
cols(
  .default = col_character(),
  X30 = col_logical()
)
See spec(...) for full column specifications.
# A tibble: 5,184 x 4
   country year  kind  value
   <chr>   <chr> <chr> <dbl>
 1 AFG     2009  est      NA
 2 ALB     2009  est      NA
 3 DZA     2009  est      NA
 4 AGO     2009  est      NA
 5 AIA     2009  est      NA
 6 ATG     2009  est      NA
 7 ARG     2009  est      NA
 8 ARM     2009  est      NA
 9 AUS     2009  est      NA
10 AUT     2009  est      NA
# … with 5,174 more rows

We’re done, and we have learned a lot of R, but what we have also learned is that we make mistakes, and that those mistakes can easily slip past us. If people are going to use our cleaned-up data in their analyses, we need a better way to develop and check our scripts.
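A first step toward that better way is to make our expectations explicit. Here is one hedged sketch (check_clean and toy are names invented here, not part of the script above) that asserts a few properties the cleaned table must have:

```r
# A minimal post-cleaning sanity check using base R's stopifnot:
# any violated assertion halts the script with an error.
check_clean <- function(df) {
  stopifnot(
    identical(names(df), c("country", "year", "kind", "value")),
    all(df$kind %in% c("est", "hi", "lo")),
    all(is.na(df$value) | (df$value >= 0 & df$value <= 1))
  )
  invisible(df)
}

# A toy record in the final layout passes the checks silently.
toy <- data.frame(country = "ABC", year = "2009", kind = "est",
                  value = 0.25, stringsAsFactors = FALSE)
check_clean(toy)
```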

4.5 Key Points

  • Develop data-cleaning scripts one step at a time, checking intermediate results carefully.
  • Use read_csv to read CSV-formatted tabular data into a tibble.
  • Use the skip and na parameters of read_csv to skip rows and interpret certain values as NA.
  • Use str_replace to replace portions of strings that match patterns with new strings.
  • Use is.numeric to test if a value is a number and as.numeric to convert it to a number.
  • Use map to apply a function to every element of a vector in turn.
  • Use map_dfc and map_dfr to map a function over a collection and combine the results column-wise or row-wise into a single tibble.
  • Pre-allocate storage in a list for each result from a loop and fill it in rather than repeatedly extending the list.