import_actigraph_csv_chunked imports raw multi-channel accelerometer data stored in the Actigraph raw csv format, chunk by chunk. It supports files from the following devices: GT3X, GT3X+, GT3X+BT, GT9X, and GT9X-IMU.

import_actigraph_csv_chunked(
  filepath,
  in_voltage = FALSE,
  has_ts = TRUE,
  header = TRUE,
  chunk_samples = 180000
)

Arguments

filepath

string. The filepath of the input data.

in_voltage

boolean. Set to TRUE only when the input Actigraph csv file stores analog quantized values that need to be converted into \(g\) units. See Details.

has_ts

boolean. Set to TRUE only when a timestamp is provided as the first column of the input csv file.

header

boolean. If TRUE, the input csv file is expected to have column names in the first row.

chunk_samples

number. The number of samples in each chunk. Default is 180000.
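As a worked example, at the 30 Hz sampling rate of the sample file used in the examples below, chunk_samples = 2000 corresponds to 2000 / 30 ≈ 66.7 seconds of data per chunk, and the default of 180000 samples covers 180000 / 30 = 6000 seconds, i.e., 100 minutes.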

Value

list. The list contains two items. The first item is a generator function; each time it is called, it returns a data.frame holding the next imported chunk. The second item is a close function, which you can call at any time to stop loading the file.
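Because the generator keeps a file connection open, it is good practice to guarantee the close function runs even if an error occurs while processing a chunk. A minimal sketch, assuming the same sample file as in the Examples below; process_all_chunks is a hypothetical wrapper, not part of MIMSunit:

# Hypothetical wrapper that applies `fun` to every chunk and always
# closes the file, even if `fun` throws an error.
process_all_chunks <- function(filepath, fun, chunk_samples = 180000) {
  results <- import_actigraph_csv_chunked(
    filepath, has_ts = FALSE, chunk_samples = chunk_samples
  )
  next_chunk <- results[[1]]
  close_connection <- results[[2]]
  on.exit(close_connection(), add = TRUE)  # runs on normal exit and on error
  repeat {
    df <- next_chunk()
    if (nrow(df) == 0) break  # an empty data.frame signals end of file
    fun(df)
  }
  invisible(NULL)
}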

Details

For old devices (e.g., GT3X) that store accelerometer values as analog quantized voltage, the function converts the values to \(g\) units using the following equation.

$$x_g = \frac{x_{voltage}r}{(2 ^ r) - \frac{v}{2}}$$

Here \(v\) is the maximum voltage, corresponding to the maximum accelerometer value, and \(r\) is the resolution level, i.e., the number of bits used to store the voltage values. Both \(v\) and \(r\) can be found in the meta section of the csv file.
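For illustration, the equation can be transcribed into R as follows. voltage_to_g is a hypothetical helper shown only to make the formula concrete; it is not an exported MIMSunit function, and the \(v\) and \(r\) values must be read from the file's meta section.

# Hypothetical helper transcribing the equation above; not part of MIMSunit.
voltage_to_g <- function(x_voltage, v, r) {
  # v: max voltage, from the meta section of the csv file
  # r: resolution level (number of bits), also from the meta section
  (x_voltage * r) / (2^r - v / 2)
}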

How is it used in MIMS-unit algorithm?

This function is a file I/O utility used to import data from Actigraph devices during algorithm validation.

See also

import_actigraph_csv for importing the whole table at once.

Examples

default_ops = options()
options(digits.secs=3)

# Use the actigraph csv file shipped with the package
filepath = system.file('extdata', 'actigraph.csv', package='MIMSunit')

# Check original file format
readLines(filepath)[1:15]
#> [1] "------------ Data File Created By ActiGraph GT3X ActiLife v6.13.3 Firmware v4.4.0 date format M/d/yyyy at 30 Hz Filter Normal -----------" #> [2] "Serial Number: MAT2A16099981" #> [3] "Start Time 11:21:00" #> [4] "Start Date 6/14/2018" #> [5] "Epoch Period (hh:mm:ss) 00:00:00" #> [6] "Download Time 15:17:47" #> [7] "Download Date 6/14/2018" #> [8] "Current Memory Address: 2545464" #> [9] "Current Battery Voltage: 4.21 Mode = 12" #> [10] "--------------------------------------------------" #> [11] "Axis1,Axis2,Axis3" #> [12] "-0.08,0.004,-1.052" #> [13] "-0.08,0.004,-1.056" #> [14] "-0.075,0.004,-1.056" #> [15] "-0.075,0.004,-1.052"
# Example 1: Load chunks every 2000 samples
results = import_actigraph_csv_chunked(filepath, has_ts=FALSE, chunk_samples=2000)
next_chunk = results[[1]]
close_connection = results[[2]]

# Read the data chunk by chunk; the chunk time range shifts forward at each iteration.
n = 1
repeat {
  df = next_chunk()
  if (nrow(df) > 0) {
    print(paste('chunk', n))
    print(paste("df:", df[1, 1], '-', df[nrow(df), 1]))
    n = n + 1
  } else {
    break
  }
}
#> [1] "chunk 1" #> [1] "df: 2018-06-14 11:21:00.033 - 2018-06-14 11:22:06.667" #> [1] "chunk 2" #> [1] "df: 2018-06-14 11:22:06.700 - 2018-06-14 11:23:13.333" #> [1] "chunk 3" #> [1] "df: 2018-06-14 11:23:13.367 - 2018-06-14 11:24:20.000" #> [1] "chunk 4" #> [1] "df: 2018-06-14 11:24:20.033 - 2018-06-14 11:25:26.667" #> [1] "chunk 5" #> [1] "df: 2018-06-14 11:25:26.700 - 2018-06-14 11:26:31.500"
# Close connection after reading all the data
close_connection()

# Example 2: Close loading early
results = import_actigraph_csv_chunked(filepath, has_ts=FALSE, chunk_samples=2000)
next_chunk = results[[1]]
close_connection = results[[2]]

# Only the first chunk is read, because the connection is closed inside the loop.
n = 1
repeat {
  df = next_chunk()
  if (nrow(df) > 0) {
    print(paste('chunk', n))
    print(paste("df:", df[1, 1], '-', df[nrow(df), 1]))
    n = n + 1
    close_connection()
  } else {
    break
  }
}
#> [1] "chunk 1" #> [1] "df: 2018-06-14 11:21:00.033 - 2018-06-14 11:22:06.667"
# Restore default options
options(default_ops)