19  Testing functions in R with testthat

19.0.1 Example of workflow for writing functions and tests for data science

Let’s say we want to write a function for a task we repeatedly perform in our data analysis, for example, summarizing the number of observations in each class. This is a common task in almost every classification problem: it tells us how many classes there are (so we know whether we are facing a binary or multi-class classification problem), and it reveals any class imbalances that we may need to deal with before tuning our models.
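
As a point of reference, here is what this task might look like done ad hoc with dplyr, using the built-in mtcars data set, where am is a binary class label; our goal in this chapter is to wrap this logic in a reusable, tested function:

library(dplyr)

# count the number of observations for each class in `am`
mtcars |>
  group_by(am) |>
  summarize(count = n())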

1. Write the function specifications and documentation - but do not implement the function:

The first thing we should do is write the function specifications and documentation. These can be effectively represented by an empty function and roxygen2-style documentation in R, as shown below:

#' Count class observations
#'
#' Creates a new data frame with two columns,
#' listing the classes present in the input data frame,
#' and the number of observations for each class.
#'
#' @param data_frame A data frame or data frame extension (e.g. a tibble).
#' @param class_col unquoted column name of column containing class labels
#'
#' @return A data frame with two columns.
#'   The first column (named class) lists the classes from the input data frame.
#'   The second column (named count) lists the number of observations
#'   for each class from the input data frame.
#'   It will have one row for each class present in input data frame.
#'
#' @export
#' @examples
#' count_classes(mtcars, am)
count_classes <- function(data_frame, class_col) {
  # returns a data frame with two columns: class and count
}

2. Plan the test cases and document them:

Next, we should plan out our test cases and start to document them. At this point we can sketch out a skeleton for our test cases with code, but we are not yet ready to write them, as we first need to reproducibly create test data that is useful for assessing whether our function works as expected. Considering our function specifications, some kinds of input our function may receive, and what it should return for each, are listed below:

Simple expected use test case #1
  • Dataframe with 2 classes, with 2 observations per class

Function input:

  1. dataframe
  class_labels values
1       class1    0.2
2       class2    0.5
3       class1    0.8
4       class2    0.5
  2. unquoted column name
class_labels

Expected function output:

Dataframe (or tibble)

   class count
1 class1     2
2 class2     2

Simple expected use test case #2
  • Dataframe with 2 classes, with 2 observations for one class, and only one observation in the other

Function input:

  1. dataframe
  class_labels values
1       class1    1.0
2       class1    0.9
3       class2    0.9
  2. unquoted column name
class_labels

Expected function output:

Dataframe (or tibble)

   class count
1 class1     2
2 class2     1

Edge test case #1
  • Dataframe with 1 class, with 2 observations for that class

Function input:

  1. dataframe
  class_labels values
1       class1    0.7
2       class1    0.5
  2. unquoted column name
class_labels

Expected function output:

Dataframe (or tibble)

   class count
1 class1     2

Edge test case #2
  • Dataframe with no class observations

Function input:

  1. dataframe
  class_labels values
  2. unquoted column name
class_labels

Expected function output:

Dataframe (or tibble)

   class count

Error test case #1

  • A list with 2 classes, with 2 observations for each class

Function input:

  1. list
$class_labels
[1] "class1" "class2" "class1" "class2"

$values
[1] 0.4 0.7 0.0 0.6
  2. unquoted list element name
class_labels

Expected function output:

Error

Error :
  `data_frame` should be a dataframe or dataframe extension (e.g. a tibble)

Next, we sketch out a skeleton for the unit tests. In R, we will use the well-maintained and popular testthat package for writing our tests. For extra resources on testthat beyond what is demonstrated here, we recommend reading:

  • the testthat documentation
  • the Testing chapter of the R packages book

With testthat we create a test_that statement for each related group of tests for a function. For our example, we will create the three test_that statements shown below:

library(testthat)

test_that("`count_classes` should return a data frame, or tibble,
with the number of rows corresponding to the number of unique classes
in the `class_col` from the original dataframe. The new dataframe
will have a `class column` whose values are the unique classes,
and a `count` column, whose values will be the number of observations
for each  class", {
  # "expected use cases" tests to be added here
})

test_that("`count_classes` should return an empty data frame, or tibble,
if the input to the function is an empty data frame", {
  # "edge cases" test to be added here
})

test_that("`count_classes` should throw an error when incorrect types
are passed to the `data_frame` argument", {
  # "error" tests to be added here
})

3. Create test data that is useful for assessing whether our function works as expected:

Now that we have a plan, we can create reproducible test data for that plan! When we do this, we want to keep our data as small and tractable as possible. We want to test things we know the answer to, or can at a minimum calculate by hand. We will use R code to reproducibly create the test data. We will need to do this for the data we will feed in as inputs to our function in the tests, as well as the data we expect our function to return.

library(dplyr)
set.seed(2024)

# test input data
two_classes_2_obs <- tibble(class_labels = rep(c("class1", "class2"), 2),
                            values = round(runif(4), 1))
two_classes_2_and_1_obs <- tibble(class_labels = c(rep("class1", 2), "class2"),
                                  values = round(runif(3), 1))
one_class_2_obs <- tibble(class_labels = c("class1", "class1"),
                          values = round(runif(2), 1))
empty_df <- tibble(class_labels = character(0),
                   values = double(0))
two_classes_two_obs_as_list <- list(class_labels = rep(c("class1", "class2"), 2),
                                    values = round(runif(4), 1))

# expected test outputs
two_classes_2_obs_output <- tibble(class = c("class1", "class2"),
                                   count = c(2, 2))
two_classes_2_and_1_obs_output <- tibble(class = c("class1", "class2"),
                                         count = c(2, 1))
one_class_2_obs_output <- tibble(class = "class1",
                                 count = 2)
empty_df_output <- tibble(class = character(0),
                          count = numeric(0))

4. Write the tests to evaluate our function based on the planned test cases and test data:

Now that we have the skeletons for our tests, and our reproducible test data, we can actually write the internals for our tests! We will do this by using expect_* functions from the testthat package. The table below shows some of the most commonly used expect_* functions. However, there are many more that can be found in the testthat expectations reference documentation.

testthat test structure:

test_that("Message to print if test fails", expect_*(...))

Common expect_* statements for use with test_that:

Is the object equal to a value?

  • expect_identical - test two objects for being exactly equal
  • expect_equal - compare R objects x and y testing ‘near equality’ (can set a tolerance)
  • expect_equivalent - compare R objects x and y testing ‘near equality’ (can set a tolerance) without assessing attributes

Does code produce an output/message/warning/error?

  • expect_error - tests if an expression throws an error
  • expect_warning - tests whether an expression outputs a warning
  • expect_output - tests that print output matches a specified value

Is the object true/false?

These are fall-back expectations that you can use when none of the other more specific expectations apply. The disadvantage is that you may get a less informative error message.

  • expect_true - tests if the object returns TRUE
  • expect_false - tests if the object returns FALSE
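
For example, here is a quick standalone illustration of some of these expectations (the values are arbitrary, chosen purely for demonstration):

library(testthat)

# `expect_equal` passes, as it tests for near equality (with a tolerance)
expect_equal(0.1 + 0.2, 0.3)

# `expect_identical` would fail here, since in floating point arithmetic
# 0.1 + 0.2 is not exactly 0.3:
# expect_identical(0.1 + 0.2, 0.3)

# `expect_error` passes, as the expression throws an error
expect_error(log("a"))

Now we can fill in the internals of the three test_that statements we sketched earlier:
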
test_that("`count_classes` should return a data frame, or tibble,
with the number of rows corresponding to the number of unique classes
in the `class_col` from the original dataframe. The new dataframe
will have a `class column` whose values are the unique classes,
and a `count` column, whose values will be the number of observations
for each  class", {
  expect_s3_class(count_classes(two_classes_2_obs, class_labels),
                  "data.frame")
  expect_equal(count_classes(two_classes_2_obs, class_labels),
                    two_classes_2_obs_output, ignore_attr = TRUE)
  expect_equal(count_classes(two_classes_2_and_1_obs, class_labels),
                    two_classes_2_and_1_obs_output, ignore_attr = TRUE)
})
── Failure: `count_classes` should return a data frame, or tibble,
with the number of rows corresponding to the number of unique classes
in the `class_col` from the original dataframe. The new dataframe
will have a `class` column whose values are the unique classes,
and a `count` column, whose values will be the number of observations
for each class ──
count_classes(two_classes_2_obs, class_labels) is not an S3 object

── Failure: `count_classes` should return a data frame, or tibble,
with the number of rows corresponding to the number of unique classes
in the `class_col` from the original dataframe. The new dataframe
will have a `class` column whose values are the unique classes,
and a `count` column, whose values will be the number of observations
for each class ──
count_classes(two_classes_2_obs, class_labels) not equal to `two_classes_2_obs_output`.
target is NULL, current is tbl_df

── Failure: `count_classes` should return a data frame, or tibble,
with the number of rows corresponding to the number of unique classes
in the `class_col` from the original dataframe. The new dataframe
will have a `class` column whose values are the unique classes,
and a `count` column, whose values will be the number of observations
for each class ──
count_classes(two_classes_2_and_1_obs, class_labels) not equal to `two_classes_2_and_1_obs_output`.
target is NULL, current is tbl_df
Error:
! Test failed
test_that("`count_classes` should return an empty data frame, or tibble,
if the input to the function is an empty data frame", {
  expect_equal(count_classes(one_class_2_obs, class_labels),
                    one_class_2_obs_output, ignore_attr = TRUE)
  expect_equal(count_classes(empty_df, class_labels),
                    empty_df_output, ignore_attr = TRUE)
})
── Failure: `count_classes` should work on edge cases: data frames
with a single class, and data frames with no observations ──
count_classes(one_class_2_obs, class_labels) not equal to `one_class_2_obs_output`.
target is NULL, current is tbl_df

── Failure: `count_classes` should work on edge cases: data frames
with a single class, and data frames with no observations ──
count_classes(empty_df, class_labels) not equal to `empty_df_output`.
target is NULL, current is tbl_df
Error:
! Test failed
test_that("`count_classes` should throw an error when incorrect types
are passed to the `data_frame` argument", {
  expect_error(count_classes(two_classes_two_obs_as_list, class_labels))
})
── Failure: `count_classes` should throw an error when incorrect types
are passed to the `data_frame` argument ──
`count_classes(two_classes_two_obs_as_list, class_labels)` did not throw an error.
Error:
! Test failed

Wait, what??? All of our tests fail…

Yes, we expect that: we haven’t written our function body yet!

5. Implement the function by writing the needed code in the function body to pass the tests:

FINALLY!! We can write the function body for our function! And then call our tests to see if they pass!

#' Count class observations
#'
#' Creates a new data frame with two columns,
#' listing the classes present in the input data frame,
#' and the number of observations for each class.
#'
#' @param data_frame A data frame or data frame extension (e.g. a tibble).
#' @param class_col unquoted column name of column containing class labels
#'
#' @return A data frame with two columns.
#'   The first column (named class) lists the classes from the input data frame.
#'   The second column (named count) lists the number of observations for each class from the input data frame.
#'   It will have one row for each class present in input data frame.
#'
#' @export
#'
#' @examples
#' count_classes(mtcars, am)
count_classes <- function(data_frame, class_col) {
    if (!is.data.frame(data_frame)) {
        stop("`data_frame` should be a data frame or data frame extension (e.g. a tibble)")
    }
    data_frame |>
        dplyr::group_by({{ class_col }}) |>
        dplyr::summarize(count = dplyr::n()) |>
        dplyr::rename("class" = {{ class_col }})
}

Note
  • We recommend using the syntax PACKAGE_NAME::FUNCTION() when writing functions that will be sourced into other files in R, to make it explicitly clear what external packages they depend on. This becomes even more important when we create R packages from our functions later.

  • group_by will throw a fairly useful error message if class_col is not found in data_frame, so we can let group_by handle that error case instead of writing our own check and error to throw.
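
As a quick sanity check before re-running the test suite, we can call the function interactively on one of our test inputs; given how two_classes_2_obs was created, we expect output like:

count_classes(two_classes_2_obs, class_labels)

# A tibble: 2 × 2
#   class  count
#   <chr>  <int>
# 1 class1     2
# 2 class2     2

Now we can re-run our tests: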

test_that("`count_classes` should return a data frame, or tibble,
with the number of rows corresponding to the number of unique classes
in the `class_col` from the original dataframe. The new dataframe
will have a `class column` whose values are the unique classes,
and a `count` column, whose values will be the number of observations
for each  class", {
  expect_s3_class(count_classes(two_classes_2_obs, class_labels),
                  "data.frame")
  expect_equal(count_classes(two_classes_2_obs, class_labels),
                    two_classes_2_obs_output, ignore_attr = TRUE)
  expect_equal(count_classes(two_classes_2_and_1_obs, class_labels),
                    two_classes_2_and_1_obs_output, ignore_attr = TRUE)
})
Test passed 😸
test_that("`count_classes` should return an empty data frame, or tibble,
if the input to the function is an empty data frame", {
  expect_equal(count_classes(one_class_2_obs, class_labels),
                    one_class_2_obs_output, ignore_attr = TRUE)
  expect_equal(count_classes(empty_df, class_labels),
                    empty_df_output, ignore_attr = TRUE)
})
Test passed 🥇
test_that("`count_classes` should throw an error when incorrect types
are passed to the `data_frame` argument", {
  expect_error(count_classes(two_classes_two_obs_as_list, class_labels))
})
Test passed 🎉

The tests passed!

Are we done? For the purposes of this demo, yes! However, in practice you would usually cycle through steps 2-5 two to three more times to further improve your tests and function!

Discussion: Does test-driven development afford testability? How might it do so? Let’s discuss controllability, observability, isolatability, and automatability in our case study of test-driven development of count_classes.

19.0.2 Where do the function and test files go?

In the workflow above, we skipped over where we should put the functions we use in our data analyses, where we should put the tests for those functions, and how we call those tests!

We summarize the answer to these questions below, but highly recommend you explore and test out our demonstration GitHub repository that has a minimal working example of this: https://github.com/ttimbers/demo-tests-ds-analysis

Where does the function go?

In R, functions should be abstracted to R scripts (plain text files that end in .R) which live in the project’s R directory. Commonly we name the R script with the same name as the function (however, we might choose a more general name if the R script contains many functions).

In the analysis file where we call the function (e.g. eda.ipynb), we need to call source("PATH_TO_FILE_CONTAINING_FUNCTION") before we are able to use the function(s) contained in that R script.
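
For example, if our function lived in a file named R/count_classes.R (a hypothetical path following the naming convention above), the top of our analysis file might contain:

# load the function(s) defined in R/count_classes.R
source("R/count_classes.R")

# now the function can be used in the analysis
count_classes(mtcars, am)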

Where do the tests go?

The tests for the function should live in tests/testthat/test-FUNCTION_NAME.R, and the code to reproducibly generate helper data for the tests lives in tests/testthat/helper-FUNCTION_NAME.R. The test suite can be run via testthat::test_dir("tests/testthat"), which first runs any files that begin with helper and then any files that begin with test.
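
Putting this together, a minimal project layout for our example might look like the following (the file names assume our function is called count_classes):

project/
├── R/
│   └── count_classes.R
├── tests/
│   └── testthat/
│       ├── helper-count_classes.R
│       └── test-count_classes.R
└── eda.ipynb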

Convenience functions for setting this up

Several usethis R package functions can be used to set up the file and directory structure needed for this:

  • usethis::use_r("FUNCTION_NAME") can be used to create the R script file the function should live in, inside the R directory
  • usethis::use_testthat() can be used to create the necessary test directories to use testthat’s automated test suite execution function (testthat::test_dir("tests/testthat"))
  • usethis::use_test("FUNCTION_NAME") can be used to create the test file for each function

Note: tests/testthat/helper-FUNCTION_NAME.R needs to be created manually, as there is no usethis function to automate this.
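
For our count_classes example, that setup might look like the following (run from the project root; the helper file is then created by hand):

usethis::use_r("count_classes")     # creates R/count_classes.R
usethis::use_testthat()             # creates the tests/testthat directory structure
usethis::use_test("count_classes")  # creates tests/testthat/test-count_classes.R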

19.1 Reproducibly generating test data

As highlighted above, wherever possible we should use code to generate reproducible, simple, and tractable helper data for our tests. When using the testthat R package to automate the running of the test suite, the convention is to put such code in a file named helper-FUNCTION_NAME.R, which lives in the tests/testthat directory.
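
For our example, this file would be tests/testthat/helper-count_classes.R, and would contain the test data code from step 3, e.g.:

# tests/testthat/helper-count_classes.R
library(dplyr)
set.seed(2024)

two_classes_2_obs <- tibble(class_labels = rep(c("class1", "class2"), 2),
                            values = round(runif(4), 1))
# ... the remaining test inputs and expected outputs from step 3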

19.2 Common types of test levels in data science

  1. Unit tests - exercise individual components, usually methods or functions, in isolation. This kind of testing is usually quick to write, and the tests incur low maintenance effort since they touch such small parts of the system. They typically ensure that the unit fulfills its contract, making test failures more straightforward to understand. This is the kind of test we wrote for our count_classes example above.

  2. Integration tests - exercise groups of components to ensure that their contained units interact correctly together. Integration tests touch much larger pieces of the system and are more prone to spurious failure. Since these tests validate many different units in concert, identifying the root cause of a specific failure can be difficult. In data science, this might be testing whether several functions that call each other, or that run in sequence, work as expected (e.g., tests for a tidymodels workflow).

Source: CPSC 310 class notes from Reid Holmes, UBC
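
To make the contrast with our unit tests concrete, below is a minimal sketch of what such an integration test could look like. Here clean_labels is a hypothetical second function (invented purely for illustration) that trims whitespace from class labels before count_classes is applied:

test_that("`clean_labels` and `count_classes` work correctly in sequence", {
  messy <- tibble::tibble(class_labels = c(" class1", "class2 ", "class1"))
  # run the two units together, in the same sequence the analysis would
  result <- count_classes(clean_labels(messy, class_labels), class_labels)
  expect_equal(result, tibble::tibble(class = c("class1", "class2"),
                                      count = c(2, 1)), ignore_attr = TRUE)
})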

19.3 Attribution