aggregate_for_mims returns a dataframe with values integrated over each epoch (by the trapezoidal method, by default) for each column. The epoch start time is used as the timestamp in the first column.

aggregate_for_mims(df, epoch, method = "trapz", rectify = TRUE, st = NULL)

Arguments

df

dataframe of accelerometer data in mhealth format. The first column should be timestamps in POSIXt format.

epoch

string. Any format accepted by the breaks argument of cut.POSIXt. For example, "1 sec", "1 min", "5 secs", "10 mins".

method

string. Integration method. Supported values: "trapz", "power", "sum", "meanBySecond", "meanBySize". Default is "trapz".

rectify

logical. If TRUE, input data will be rectified before integration. Default is TRUE.

st

character or POSIXct timestamp. An optional start time used as the reference point when generating epochs. If NULL, the function uses the first timestamp in the timestamp column as the start time. This is useful when you are processing a stream of data and want a common start time for segmenting data across batches. Default is NULL.

Value

dataframe. The returned dataframe has the same format as the input dataframe.

Details

This function accepts a dataframe (in mhealth accelerometer data format) and computes aggregated values over each fixed epoch for each value column, using the selected integration method (the default is the trapezoidal method; the other methods are not used by the mims-unit algorithm). The returned dataframe has the same number of columns as the input dataframe, and the same datetime format in the timestamp column. The trapezoidal integration used in the function is based on trapz.
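Conceptually, the epoch-wise trapezoidal aggregation can be sketched as follows. This is a simplified illustration, not the package's implementation; it assumes the pracma package for trapz() and a dataframe with a POSIXct column ts and one numeric column x:

library(pracma)  # provides trapz() for trapezoidal integration

# Simplified sketch: integrate one value column over fixed epochs.
aggregate_sketch <- function(df, epoch = "5 secs") {
  breaks <- cut(df$ts, breaks = epoch)     # assign each sample to an epoch
  segments <- split(df, breaks)
  values <- sapply(segments, function(seg) {
    t <- as.numeric(seg$ts)                # timestamps as seconds
    trapz(t, abs(seg$x))                   # rectify, then integrate
  })
  data.frame(ts = as.POSIXct(names(values)), value = values)
}

The real function additionally handles multiple value columns, the alternative integration methods, the optional st reference time, and the minimum-sample validity check described below.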

Note

If the epoch argument is not provided or is NULL, the function treats the input dataframe as a single epoch.

If a segment contains fewer than 90 samples, the aggregation result for that segment will be -1 (a marker of invalid values).
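When post-processing the result, these -1 markers can be screened out before further analysis. A small illustrative snippet (the AGGREGATED_X column name follows the output shown in the Examples section):

res <- aggregate_for_mims(df, epoch = "1 sec", method = "sum")
# Keep only epochs whose aggregated values are valid (not the -1 marker)
valid <- res[res$AGGREGATED_X != -1, ]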

How is it used in mims-unit algorithm?

This function is used in the mims-unit algorithm after filtering (iir). The filtered signal is rectified and integrated by this function to produce mims unit values for each axis.
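A hedged sketch of this stage of the pipeline (the epoch length is illustrative, and the arguments to the iir filtering step are omitted rather than guessed):

# Within the mims-unit algorithm, aggregation follows IIR filtering:
filtered <- iir(df, ...)     # filtering step; see the iir documentation
mims_per_axis <- aggregate_for_mims(
  filtered,
  epoch   = "1 min",         # illustrative epoch length
  method  = "trapz",         # the method used by the mims-unit algorithm
  rectify = TRUE             # rectify before integration
)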

See also

aggregate_for_orientation for aggregating accelerometer data to obtain an orientation estimation for each epoch.

Other aggregate functions: aggregate_for_orientation()

Examples

# sample data
df = sample_raw_accel_data
head(df)
#>     HEADER_TIME_STAMP     X      Y      Z
#> 1 2016-01-15 11:00:00 0.148 -0.438  0.016
#> 2 2016-01-15 11:00:00 0.215 -0.418 -0.023
#> 3 2016-01-15 11:00:00 0.266 -0.402 -0.012
#> 4 2016-01-15 11:00:00 0.336 -0.430  0.012
#> 5 2016-01-15 11:00:00 0.430 -0.320  0.000
#> 6 2016-01-15 11:00:00 0.535 -0.258  0.004
# epoch set to 5 seconds, and method set to "trapz"
aggregate_for_mims(df, epoch = '5 sec', method = 'trapz')
#>     HEADER_TIME_STAMP AGGREGATED_X AGGREGATED_Y AGGREGATED_Z
#> 1 2016-01-15 11:00:00     6.359663     2.420398      1.62992
#> 2 2016-01-15 11:00:05    -1.000000    -1.000000     -1.00000
# epoch set to 1 second, method set to "sum"
aggregate_for_mims(df, epoch = '1 sec', method = 'sum')
#>     HEADER_TIME_STAMP AGGREGATED_X AGGREGATED_Y AGGREGATED_Z
#> 1 2016-01-15 11:00:00       99.542       34.291       22.296
#> 2 2016-01-15 11:00:01       97.332       36.783       25.970
#> 3 2016-01-15 11:00:02      104.054       42.027       27.141
#> 4 2016-01-15 11:00:03      104.749       39.400       26.702
#> 5 2016-01-15 11:00:04      103.642       41.445       28.518
#> 6 2016-01-15 11:00:05      100.172       28.005       21.359
#> 7 2016-01-15 11:00:06       -1.000       -1.000       -1.000
# epoch set to 1 second, and st set to 1 second before the start time of the
# data, so the first segment only includes data for 1 second; the aggregated
# value for the first segment is therefore -1 (invalid) because there are not
# enough samples. The second segment then starts from 11:00:01, instead of
# 11:00:02 as shown in the prior example.
aggregate_for_mims(df, epoch = '1 sec', method = 'sum', st = df[1, 1] - 1)
#>     HEADER_TIME_STAMP AGGREGATED_X AGGREGATED_Y AGGREGATED_Z
#> 1 2016-01-15 11:00:00       99.542       34.291       22.296
#> 2 2016-01-15 11:00:01       97.332       36.783       25.970
#> 3 2016-01-15 11:00:02      104.054       42.027       27.141
#> 4 2016-01-15 11:00:03      104.749       39.400       26.702
#> 5 2016-01-15 11:00:04      103.642       41.445       28.518
#> 6 2016-01-15 11:00:05      100.172       28.005       21.359
#> 7 2016-01-15 11:00:06       -1.000       -1.000       -1.000