5 Mutating data
One of the most common data analysis techniques is to look at change over time. The most common way of comparing change over time is through percent change. The math behind calculating percent change is very simple, and you should know it off the top of your head. The easy way to remember it is:
(new - old) / old
Or: new minus old, divided by old. Your new number minus the old number, with the result divided by the old number. To do that in R, we can use dplyr and mutate to calculate new metrics in a new field using existing fields of data.
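To make that concrete with made-up numbers: say attendance was 50,000 one season and 55,000 the next. The figures are hypothetical, just to show the arithmetic.
# hypothetical attendance totals for two seasons
old_total <- 50000
new_total <- 55000

(new_total - old_total) / old_total
That comes out to 0.1, or a 10 percent increase.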
So first we’ll import the tidyverse so we can read in our data and begin to work with it.
library(tidyverse)
Now you’ll need a common and simple dataset of total attendance at NCAA football games over the last few seasons.
For this walkthrough:
You’ll import it something like this.
attendance <- read_csv('data/attendance.csv')
Rows: 146 Columns: 14
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (2): Institution, Conference
dbl (12): 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023, ...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
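That message is just read_csv telling you how it guessed each column’s type. If you’d rather not see it every time, the message itself points to one option: set show_col_types = FALSE when you read the file, like this sketch.
attendance <- read_csv('data/attendance.csv', show_col_types = FALSE)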
If you want to see the first six rows – handy to take a peek at your data – you can use the function head.
head(attendance)
# A tibble: 6 × 14
Institution Conference `2013` `2014` `2015` `2016` `2017` `2018` `2019` `2020`
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Air Force MWC 228562 168967 156158 177519 174924 166205 162505 5600
2 Akron MAC 107101 55019 108588 62021 117416 92575 107752 NA
3 Alabama SEC 710538 710736 707786 712747 712053 710931 707817 97120
4 Appalachia… FBS Indep… 149366 NA NA NA NA NA NA NA
5 Appalachia… Sun Belt NA 138995 128755 156916 154722 131716 166640 NA
6 Arizona Big 12 285713 354973 308355 338017 255791 318051 237194 NA
# ℹ 4 more variables: `2021` <dbl>, `2022` <dbl>, `2023` <dbl>, `2024` <dbl>
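One thing to notice before we start mutating: the year columns are named 2013, 2014 and so on. Because those names start with a digit, you have to wrap them in backticks whenever you refer to them in code, which is exactly what we’ll do below. A quick, purely illustrative peek at just the columns we’re about to use:
# backticks let us refer to columns whose names start with a number
attendance |> select(Institution, `2023`, `2024`)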
The code to calculate percent change is pretty simple. Remember, with summarize, we used n() to count things. With mutate, we use very similar syntax to calculate a new value using other values in our dataset. So in this case, we’re trying to do (new-old)/old, but we’re doing it with fields. If we look at what we got when we did head, you’ll see there’s `2024` as the new data, and we’ll use `2023` as the old data. So we’re looking at one year. Then, to help us, we’ll use arrange again to sort it, so we get the fastest-growing school over one year.
attendance |> mutate(
  change = (`2024` - `2023`)/`2023`
)
# A tibble: 146 × 15
Institution Conference `2013` `2014` `2015` `2016` `2017` `2018` `2019`
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Air Force MWC 228562 168967 156158 177519 174924 166205 162505
2 Akron MAC 107101 55019 108588 62021 117416 92575 107752
3 Alabama SEC 710538 710736 707786 712747 712053 710931 707817
4 Appalachian St. FBS Indepen… 149366 NA NA NA NA NA NA
5 Appalachian St. Sun Belt NA 138995 128755 156916 154722 131716 166640
6 Arizona Big 12 285713 354973 308355 338017 255791 318051 237194
7 Arizona St. Big 12 501509 343073 368985 286417 359660 291091 344161
8 Arkansas SEC 431174 399124 471279 487067 442569 367748 356517
9 Arkansas St. Sun Belt 149477 149163 138043 136200 119538 119001 124017
10 Army West Point The American 169781 171310 185946 163267 185543 190156 185935
# ℹ 136 more rows
# ℹ 6 more variables: `2020` <dbl>, `2021` <dbl>, `2022` <dbl>, `2023` <dbl>,
# `2024` <dbl>, change <dbl>
What do we see right away? Do those numbers look like we expect them to? No. They’re decimals, not percentages. So let’s fix that by multiplying by 100.
attendance |> mutate(
  change = ((`2024` - `2023`)/`2023`)*100
)
# A tibble: 146 × 15
Institution Conference `2013` `2014` `2015` `2016` `2017` `2018` `2019`
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Air Force MWC 228562 168967 156158 177519 174924 166205 162505
2 Akron MAC 107101 55019 108588 62021 117416 92575 107752
3 Alabama SEC 710538 710736 707786 712747 712053 710931 707817
4 Appalachian St. FBS Indepen… 149366 NA NA NA NA NA NA
5 Appalachian St. Sun Belt NA 138995 128755 156916 154722 131716 166640
6 Arizona Big 12 285713 354973 308355 338017 255791 318051 237194
7 Arizona St. Big 12 501509 343073 368985 286417 359660 291091 344161
8 Arkansas SEC 431174 399124 471279 487067 442569 367748 356517
9 Arkansas St. Sun Belt 149477 149163 138043 136200 119538 119001 124017
10 Army West Point The American 169781 171310 185946 163267 185543 190156 185935
# ℹ 136 more rows
# ℹ 6 more variables: `2020` <dbl>, `2021` <dbl>, `2022` <dbl>, `2023` <dbl>,
# `2024` <dbl>, change <dbl>
Now, does this ordering do anything for us? No. Let’s fix that with arrange.
attendance |> mutate(
  change = ((`2024` - `2023`)/`2023`)*100
) |>
  arrange(desc(change))
# A tibble: 146 × 15
Institution Conference `2013` `2014` `2015` `2016` `2017` `2018` `2019`
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 SMU ACC 112347 129169 147301 142272 139609 116299 141798
2 Ohio St. Big Ten 734528 744075 750705 750944 752464 713630 723679
3 Colorado St. MWC 111598 159450 149500 165598 192369 177025 140025
4 Indiana Big Ten 354823 249941 310195 301190 263715 286753 247463
5 Texas SEC 593857 564618 540210 587283 556667 586277 577834
6 Central Mich. MAC 66119 97838 94029 104447 67520 77038 81386
7 North Texas The American 126182 115627 68155 119269 134174 140131 128150
8 Pittsburgh ACC 348188 289204 288900 322531 254062 250178 303606
9 Northern Ill. MAC 103344 67813 83649 55095 67748 52019 42590
10 Vanderbilt SEC 249728 274063 192802 187451 219390 196313 184016
# ℹ 136 more rows
# ℹ 6 more variables: `2020` <dbl>, `2021` <dbl>, `2022` <dbl>, `2023` <dbl>,
# `2024` <dbl>, change <dbl>
So who had the most growth in 2024 compared to the year before? SMU, followed by Ohio State and Colorado State. How about Central Michigan at #6!
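One optional tweak, purely for readability: the change column carries a long string of decimal places. A sketch of how you could round it as you create it:
# same calculation as above, rounded to one decimal place
attendance |> mutate(
  change = round(((`2024` - `2023`)/`2023`)*100, 1)
) |>
  arrange(desc(change))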
5.1 A more complex example
There’s a metric in basketball that’s easy to understand – shooting percentage. It’s the number of shots made divided by the number of shots attempted. Simple, right? Except it’s a little too simple. Because what about three-point shooters? They tend to be more valuable because the three-point shot is worth more. What about players who get to the line? In shooting percentage, free throws are nowhere to be found.
Basketball nerds, because of these weaknesses, have created a new metric called True Shooting Percentage. True shooting percentage takes into account all aspects of a player’s shooting to determine who the real shooters are.
Using dplyr and mutate, we can calculate true shooting percentage. So let’s look at a new dataset, one of every college basketball player’s season stats from the 2024-25 season. It’s a dataset of 5,818 players, and we’ve got 63 variables – one of them is True Shooting Percentage (the TS% column), but we’re going to ignore that.
For this walkthrough:
Import it like this:
players <- read_csv("data/players25.csv")
Rows: 5818 Columns: 63
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (14): Team, Player, Class, Pos.x, Height, Hometown, High School, Summary...
dbl (49): #, Weight, Rk.x, G, GS, MP, FG, FGA, FG%, 3P, 3PA, 3P%, 2P, 2PA, 2...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
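If you want to double-check what you just imported, two quick base R functions report the size of the data; the results should match the Rows and Columns figures in the message above.
# how many players (rows) and variables (columns) we have
nrow(players)
ncol(players)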
The basic true shooting percentage formula is (Points / (2*(FieldGoalAttempts + (.44 * FreeThrowAttempts)))) * 100. Let’s talk that through. Points divided by a lot. The “a lot” is really field goal attempts plus 44 percent of the free throw attempts. Why? Because that’s about what a free throw is worth, compared to other ways to score. After adding those things together, you double it. And after you divide points by that number, you multiply the whole lot by 100.
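To see the formula in action with made-up numbers (a hypothetical player, not anyone in this dataset): say someone scored 500 points on 350 field goal attempts and 150 free throw attempts.
# hypothetical season totals, just to walk through the formula
points <- 500
fga <- 350
fta <- 150

(points / (2 * (fga + (.44 * fta)))) * 100
That works out to roughly 60.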
In our data, we need to be able to find the fields so we can complete the formula. To do that, one way is to use the Environment tab in RStudio. In the Environment tab is a listing of all the data you’ve imported, and if you click the triangle next to it, it’ll list all the field names, giving you a bit of information about each one.
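If you’d rather do that from the console than by clicking around, listing the column names or glimpsing the data gets you the same information. colnames comes with base R, and glimpse comes with the tidyverse you already loaded.
# every column name in the players data
colnames(players)

# a compact overview: each column, its type and its first few values
glimpse(players)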
So what does True Shooting Percentage look like in code?
Let’s think about this differently. Who had the best true shooting season last year?
players |>
  mutate(trueshooting = (PTS/(2*(FGA + (.44*FTA))))*100) |>
  arrange(desc(trueshooting))
# A tibble: 5,818 × 64
Team Player `#` Class Pos.x Height Weight Hometown `High School` Summary
<chr> <chr> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr> <chr>
1 Wiscon… Isaac… 15 JR G 6-2 165 Oregon,… Oregon (WI) 0.2 Pt…
2 TCU Ho… Cole … 35 SR G 6-3 180 San Jos… Bellarmine C… 3.0 Pt…
3 TCU Ho… Drew … 30 FR G 6-2 180 Dallas,… Highland Par… 1.5 Pt…
4 Stanfo… Derin… 1 SO G 6-4 190 Istanbu… The Ashevill… 0.6 Pt…
5 St. Bo… Jack … 15 SO G 6-0 172 Olean, … Olean (NY) 3.0 Pt…
6 Seattl… Eric … 19 FR G 5-10 167 Beijing… Ruamrudee In… 3.0 Pt…
7 Samfor… Corey… 10 FR G 6-4 175 Fairbur… Landmark Chr… 1.5 Pt…
8 Queens… Aneek… 85 SO G 5-10 145 Foster … San Mateo (C… 3.0 Pt…
9 Old Do… CJ Pa… 15 FR F 6-6 182 Norfolk… Maury (VA) 1.0 Pt…
10 Oklaho… Jake … 30 SR G 6-3 187 Norman,… Loyola Acade… 0.4 Pt…
# ℹ 5,808 more rows
# ℹ 54 more variables: Rk.x <dbl>, Pos.y <chr>, G <dbl>, GS <dbl>, MP <dbl>,
# FG <dbl>, FGA <dbl>, `FG%` <dbl>, `3P` <dbl>, `3PA` <dbl>, `3P%` <dbl>,
# `2P` <dbl>, `2PA` <dbl>, `2P%` <dbl>, `eFG%` <dbl>, FT <dbl>, FTA <dbl>,
# `FT%` <dbl>, ORB <dbl>, DRB <dbl>, TRB <dbl>, AST <dbl>, STL <dbl>,
# BLK <dbl>, TOV <dbl>, PF <dbl>, PTS <dbl>, Rk.y <dbl>, Pos <chr>,
# PER <dbl>, `TS%` <dbl>, `3PAr` <dbl>, FTr <dbl>, PProd <dbl>, …
You’ll be forgiven if you’ve never heard of the Wisconsin guard at the top of this list. His season summary shows 0.2 points per game. A player like that barely shoots at all, so the handful of shots he does make sends his true shooting percentage through the roof. So props to him. But does that mean he had the best true shooting season in college basketball in 2024-25?
Not hardly.
We’ll talk about how to narrow the pile and filter out data in the next chapter.