Commit

Merge pull request #941 from wadpac/issues_940_930_877
Prepare for 3.0-0 release by fixing issues 940 930 and 877
vincentvanhees authored Oct 12, 2023
2 parents 46c633e + 31c9bad commit 659394d
Showing 10 changed files with 102 additions and 20 deletions.
4 changes: 2 additions & 2 deletions DESCRIPTION
@@ -1,8 +1,8 @@
Package: GGIR
Type: Package
Title: Raw Accelerometer Data Analysis
-Version: 2.10-4
-Date: 2023-10-05
+Version: 3.0-0
+Date: 2023-10-20
Authors@R: c(person("Vincent T","van Hees",role=c("aut","cre"),
email="[email protected]"),
person("Jairo H","Migueles",role="aut",
9 changes: 8 additions & 1 deletion NEWS.md
@@ -1,4 +1,6 @@
-# CHANGES IN GGIR VERSION 2.10-5
+# CHANGES IN GGIR VERSION 3.0-0

+- Part 1 and 2: Change default value for nonwear_approach to 2023

- Part 2: Move cosinor analysis code to its own function in order to ease re-using it in both part 2 and part 6.

@@ -7,6 +9,11 @@
- Part 2: Arguments hrs.del.start and hrs.del.end when combined with strategy = 3 and strategy = 5 now count
relative to start and end of the most active time window as identified. #905

+- Part 5: Change default for segmentDAYSPTcrit.part5 from c(0, 0) to c(0.9, 0) and
+prohibit the use of c(0, 0), as it gives biased estimates. This was already known, but some users
+adopted the default without attempting to understand it, which made it necessary
+to enforce a sensible selection. #940

- Part 5: Added option "OO" to argument timewindow, which defines windows from
sleep Onset to sleep Onset. #931

9 changes: 9 additions & 0 deletions R/check_params.R
@@ -397,6 +397,15 @@ check_params = function(params_sleep = c(), params_metrics = c(),
"fraction of the day between zero and one, please change."),
call. = FALSE)
}
+if (length(params_cleaning[["segmentDAYSPTcrit.part5"]]) != 2) {
+  stop("\nArgument segmentDAYSPTcrit.part5 is expected to be a numeric vector of length 2", call. = FALSE)
+}
+if (sum(params_cleaning[["segmentDAYSPTcrit.part5"]]) < 0.5 |
+    0 %in% params_cleaning[["segmentDAYSPTcrit.part5"]] == FALSE) {
+  stop(paste0("\nArgument segmentDAYSPTcrit.part5 needs to include one zero",
+              " and one value of at least 0.5 as mixing incomplete windows with complete windows",
+              " biases the estimates"), call. = FALSE)
+}
}
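The intent of the added check can be illustrated outside of GGIR. The sketch below is a simplified standalone re-implementation of its logic; the function name `check_segment_crit` is made up for illustration and is not part of the package:

```r
# Simplified standalone version of the new segmentDAYSPTcrit.part5 check
# (mirrors the logic added to check_params, for illustration only)
check_segment_crit = function(x) {
  if (length(x) != 2) {
    stop("segmentDAYSPTcrit.part5 is expected to be a numeric vector of length 2")
  }
  if (sum(x) < 0.5 || !(0 %in% x)) {
    stop("segmentDAYSPTcrit.part5 needs one zero and one value of at least 0.5")
  }
  invisible(TRUE)
}

check_segment_crit(c(0.9, 0))  # passes: focus on waking-hour segments
check_segment_crit(c(0, 0.9))  # passes: focus on SPT segments
# check_segment_crit(c(0, 0))  # errors: mixing incomplete and complete windows
```

The check deliberately rejects values such as `c(0.3, 0)` as well, since a criterion below 0.5 still mixes substantially incomplete windows into the averages.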


8 changes: 7 additions & 1 deletion R/g.calibrate.R
@@ -324,7 +324,13 @@ g.calibrate = function(datafile, params_rawdata = c(),
nomovement = which(meta_temp[,5] < sdcriter & meta_temp[,6] < sdcriter & meta_temp[,7] < sdcriter &
abs(as.numeric(meta_temp[,2])) < 2 & abs(as.numeric(meta_temp[,3])) < 2 &
abs(as.numeric(meta_temp[,4])) < 2) #the latter three are to reduce chance of including clipping periods
-meta_temp = meta_temp[nomovement,]
+if (length(nomovement) < 10) {
+  # take only one row to trigger that autocalibration is skipped
+  # with the QCmessage that there is not enough data
+  meta_temp = meta_temp[1, ]
+} else {
+  meta_temp = meta_temp[nomovement,]
+}
dup = which(rowSums(meta_temp[1:(nrow(meta_temp) - 1), 2:7] == meta_temp[2:nrow(meta_temp), 2:7]) == 3) # remove duplicated values
if (length(dup) > 0) meta_temp = meta_temp[-dup,]
rm(nomovement, dup)
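The effect of the new guard can be sketched in isolation. The data frame below is fabricated purely for illustration; in g.calibrate, meta_temp holds epoch-level metadata and nomovement indexes the windows usable as sphere data:

```r
# Fabricated stand-in for meta_temp, just to illustrate the guard
meta_temp = data.frame(matrix(seq_len(35), nrow = 5, ncol = 7))
nomovement = c(2, 4)  # hypothetical: only two no-movement windows found

if (length(nomovement) < 10) {
  # too little sphere data: keep a single row so that autocalibration is
  # skipped downstream with the "not enough data" QC message
  meta_temp = meta_temp[1, , drop = FALSE]
} else {
  meta_temp = meta_temp[nomovement, ]
}
nrow(meta_temp)  # 1
```

Note the `drop = FALSE`, which keeps the single-row result a data frame rather than collapsing it to a vector.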
4 changes: 2 additions & 2 deletions R/load_params.R
@@ -97,9 +97,9 @@ load_params = function(group = c("sleep", "metrics", "rawdata",
includenightcrit = 16, #<= to cleaning
excludefirst.part4 = FALSE, # => to cleaning
excludelast.part4 = FALSE, max_calendar_days = 0,
-nonWearEdgeCorrection = TRUE, nonwear_approach = "2013",
+nonWearEdgeCorrection = TRUE, nonwear_approach = "2023",
segmentWEARcrit.part5 = 0.5,
-segmentDAYSPTcrit.part5 = c(0,0))
+segmentDAYSPTcrit.part5 = c(0.9, 0))
}
if ("output" %in% group) {
params_output = list(epochvalues2csv = FALSE, save_ms5rawlevels = FALSE,
4 changes: 2 additions & 2 deletions man/GGIR-package.Rd
@@ -21,8 +21,8 @@
\tabular{ll}{
Package: \tab GGIR\cr
Type: \tab Package\cr
-Version: \tab 2.10-4\cr
-Date: \tab 2023-10-05\cr
+Version: \tab 3.0-0\cr
+Date: \tab 2023-10-20\cr
License: \tab Apache License (== 2.0)\cr
Discussion group: \tab https://groups.google.com/forum/#!forum/rpackageggir\cr
}
30 changes: 24 additions & 6 deletions man/GGIR.Rd
@@ -831,12 +831,26 @@ GGIR(mode = 1:5,
0.3 indicates that at least 30 percent of the time should be valid.}
\item{segmentDAYSPTcrit.part5}{
-Numeric vector or length 2 (default = c(0, 0)).
+Numeric vector of length 2 (default = c(0.9, 0)).
Inclusion criteria for the proportion of the segment that should be
classified as day (awake) and spt (sleep period time) to be considered
-valid. Usually, one of the two numbers is 0, and the other defines the
-proportion of the segment that should be classified as day or spt.}
+valid. One of the two numbers should be 0, and the other defines the
+proportion of the segment that should be classified as day or spt, respectively.
+The default setting focuses on waking-hour
+segments and includes all segments that overlap for at least 90 percent
+with waking hours. In order to shift the focus to the SPT you could use
+c(0, 0.9), which ensures that all segments that overlap for at least
+90 percent with the SPT are included.
+Setting both to zero is not allowed, as that would introduce
+bias in the behavioural estimates for the following reason: a complete segment
+would be averaged with an incomplete segment (someone going to bed or waking up
+in the middle of a segment), by which it is no longer clear whether the person
+is less active or sleeps more during that segment. Similarly, it is not
+clear whether the person has more wakefulness during SPT for a segment or
+woke up or went to bed during the segment. Therefore, a
+minimum value of 0.5 is required, and any value closer to 1
+is preferable.
+}
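As an illustration of how such a criterion could be applied to one segment (a simplification for this documentation, not GGIR internals):

```r
# Illustrative check of a single segment against segmentDAYSPTcrit.part5
# crit[1]: minimum fraction classified as day; crit[2]: minimum fraction as SPT
segment_is_valid = function(frac_day, crit = c(0.9, 0)) {
  frac_spt = 1 - frac_day
  frac_day >= crit[1] && frac_spt >= crit[2]
}

segment_is_valid(0.95)             # TRUE: segment is >= 90% waking hours
segment_is_valid(0.60)             # FALSE: too much overlap with SPT
segment_is_valid(0.05, c(0, 0.9))  # TRUE when the focus is shifted to SPT
```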
\item{includedaycrit}{
Numeric (default = 16).
Minimum required number of valid hours in day specific analysis
@@ -857,7 +871,7 @@
}
\item{nonwear_approach}{
-Character (default = "2013").
+Character (default = "2023").
Whether to use the traditional version of the non-wear detection algorithm
(nonwear_approach = "2013") or the new version (nonwear_approach = "2023").
The 2013 version would use the longsize window (windowsizes[3], one hour
@@ -1153,7 +1167,11 @@ GGIR(mode = 1:5,
no need to have a column with the date followed by a column with the next
date. If times in the activity diary are not multiple of the short window
size (epoch length), the next epoch is considered (e.g., with epoch of 5
-seconds, 8:00:02 will be redefined as 8:00:05 in the activity log).}
+seconds, 8:00:02 will be redefined as 8:00:05 in the activity log).
+When using the qwindow functionality in combination with GGIR part 5,
+make sure that arguments \code{segmentWEARcrit.part5} and
+\code{segmentDAYSPTcrit.part5} are specified to your research needs.
+}
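The shift of diary times to the next epoch boundary amounts to a ceiling operation on seconds since midnight; a minimal sketch (not the actual GGIR implementation):

```r
# Shift a clock time (in seconds since midnight) to the next epoch boundary
round_up_to_epoch = function(sec, epoch = 5) {
  ceiling(sec / epoch) * epoch
}

t = 8 * 3600 + 2                 # 8:00:02 as seconds since midnight
round_up_to_epoch(t, epoch = 5)  # 28805, i.e. 8:00:05
```

A time that already falls on an epoch boundary is left unchanged by the ceiling.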
\item{qwindow_dateformat}{
Character (default = "%d-%m-%Y").
9 changes: 7 additions & 2 deletions tests/testthat/test_lightPart5.R
@@ -13,7 +13,7 @@ test_that("lux_per_segment is correctly calculated", {

# run part 1
GGIR(mode = 1, datadir = fn, outputdir = getwd(), studyname = "test",
-do.report = c(), dayborder = 23, verbose = FALSE)
+do.report = c(), dayborder = 23, verbose = FALSE, nonwear_approach = "2013")

# add lightmean and lightpeak to metalong
meta_fn = paste(getwd(), "output_test", "meta", "basic",
@@ -37,7 +37,7 @@
LUX_day_segments = c(9, 15, 24),
dayborder = 23, part5_agg2_60seconds = TRUE,
save_ms5rawlevels = TRUE, save_ms5raw_without_invalid = FALSE,
-save_ms5raw_format = "RData")
+save_ms5raw_format = "RData", nonwear_approach = "2013")

# Only segment 9 to 15hr is calculated because it is the only segment
# containing awake data in the file generated by create_test_acc_csv
@@ -65,4 +65,9 @@
expect_equal(diff(mdat[1:2, "timenum"]), 60) #epoch = 60
pm11 = grep("23:00:00", as.character(mdat$timestamp))[1]
expect_equal(diff(mdat[(pm11 - 1):pm11, "window"]), 1) #dayborder = 23 (change in window at 23:00)

+outfolder = paste(getwd(), "output_test", sep = .Platform$file.sep)
+if (file.exists(outfolder)) unlink(outfolder, recursive = TRUE)
+if (file.exists(dn)) unlink(dn, recursive = TRUE)

})
2 changes: 1 addition & 1 deletion vignettes/CutPoints.Rmd
Expand Up @@ -27,7 +27,7 @@ The physical activity research field has used so called cut-points to segment
accelerometer time series based on level of intensity. In this vignette we have
compiled a list of published cut-points with instructions on how to use them with GGIR.
As newer cut-points are frequently published the list below may not be up to date.
-**Please let us know you if know of any cut-points we missed!**
+**Please let us know if you are aware of any published cut-points that we missed!**

## Cut-points expressed in gravitational units (this vignette)

43 changes: 40 additions & 3 deletions vignettes/GGIR.Rmd
@@ -767,6 +767,44 @@ in summary.csv
| ig\_intercept_ENMO_0-24hr | Intercept from intensity gradient analysis proposed by [Rowlands et al. 2018](https://journals.lww.com/acsm-msse/Fulltext/2018/06000/Beyond_Cut_%20Points__Accelerometer_Metrics_that.25.aspx) based on metric ENMO for the time segment 0 to 24 hours |
| ig\_rsquared_ENMO_0-24hr | r squared from intensity gradient analysis proposed by [Rowlands et al. 2018](https://journals.lww.com/acsm-msse/Fulltext/2018/06000/Beyond_Cut_%20Points__Accelerometer_Metrics_that.25.aspx) based on metric ENMO for the time segment 0 to 24 hours |

+### Data_quality_report
+
+The data_quality_report.csv is stored in subfolder results/QC.
+
+| (Part of) variable name | Description |
+|--------------------------|----------------------------------------------|
+| filename | file name |
+| file.corrupt | Is file corrupt? TRUE or FALSE (mainly tested for GENEActiv bin files) |
+| file.too.short | File too short for processing? ([definition](#Minimum_recording_duration)) TRUE or FALSE |
+| use.temperature | Temperature used for auto-calibration? TRUE or FALSE |
+| scale.x | Auto-calibration scaling coefficient for x-axis (same for y and z axis, not shown here) |
+| offset.x | Auto-calibration offset coefficient for x-axis (same for y and z axis, not shown here) |
+| temperature.offset.x | Auto-calibration temperature offset coefficient for x-axis (same for y and z axis, not shown here) |
+| cal.error.start | Calibration error prior to auto-calibration |
+| cal.error.end | Calibration error after auto-calibration |
+| n.10sec.windows | Number of 10 second epochs used as sphere data in auto-calibration |
+| n.hours.considered | Number of hours of data considered for auto-calibration |
+| QCmessage | Character QC message at the end of the auto-calibration |
+| mean.temp | Mean temperature in sphere data |
+| device.serial.number | Device serial number |
+| NFilePagesSkipped | (Only for Axivity .cwa format) Number of data blocks skipped |
+| filehealth_totimp_min | (Only for Axivity .cwa format) Total number of minutes of data imputed |
+| filehealth_checksumfail_min | (Only for Axivity .cwa format) Total number of minutes of data where the checksum failed |
+| filehealth_niblockid_min | (Only for Axivity .cwa format) Total number of minutes of data with non-incremental block ids |
+| filehealth_fbias0510_min | (Only for Axivity .cwa format) Total number of minutes with a sampling frequency bias between 5 and 10% |
+| filehealth_fbias1020_min | (Only for Axivity .cwa format) Total number of minutes with a sampling frequency bias between 10 and 20% |
+| filehealth_fbias2030_min | (Only for Axivity .cwa format) Total number of minutes with a sampling frequency bias between 20 and 30% |
+| filehealth_fbias30_min | (Only for Axivity .cwa format) Total number of minutes with a sampling frequency bias higher than 30% |
+| filehealth_totimp_N | (Only for Axivity .cwa format) Total number of data blocks that were imputed |
+| filehealth_checksumfail_N | (Only for Axivity .cwa format) Total number of blocks where the checksum failed |
+| filehealth_niblockid_N | (Only for Axivity .cwa format) Total number of data blocks with non-incremental block ids |
+| filehealth_fbias0510_N | (Only for Axivity .cwa format) Total number of data blocks with a sampling frequency bias between 5 and 10% |
+| filehealth_fbias1020_N | (Only for Axivity .cwa format) Total number of data blocks with a sampling frequency bias between 10 and 20% |
+| filehealth_fbias2030_N | (Only for Axivity .cwa format) Total number of data blocks with a sampling frequency bias between 20 and 30% |
+| filehealth_fbias30_N | (Only for Axivity .cwa format) Total number of data blocks with a sampling frequency bias higher than 30% |
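The report can be inspected like any other csv file. The sketch below fabricates a two-row example (file names and values are made up) rather than reading real GGIR output:

```r
# Write a fabricated two-row data_quality_report.csv and inspect it;
# in practice, read results/QC/data_quality_report.csv from your output folder
qc_file = tempfile(fileext = ".csv")
writeLines(c("filename,cal.error.start,cal.error.end",
             "sub01.cwa,0.042,0.006",
             "sub02.cwa,0.051,0.023"), qc_file)
qc = read.csv(qc_file)

# flag recordings where auto-calibration did not reach e.g. a 0.01 g target
subset(qc, cal.error.end >= 0.01)$filename  # "sub02.cwa"
```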



## Output part 4 {.tabset}

Part 4 generates the following output:
@@ -824,7 +862,6 @@ The .csv files contain the variables as shown below.
| nonwear_perc_spt | Non-wear percentage during the spt hours of this day. This is a copy of the nonwear_perc_spt calculated in [part 5](#output5), only included in part 4 reports if part 5 has been run with timewindow = WW |



#### Non-default variables in part 4 csv report

These additional variables are only stored if you used a sleeplog that captures
@@ -2013,14 +2050,14 @@ which provides the R code and detailed instructions on how to make the radar
plots using your own data.


-## Minimum recording duration
+## Minimum recording duration {#Minimum_recording_duration}

GGIR has been designed to process multi-day recordings. The minimum recording duration
considered by GGIR depends on the type of analysis:

**Running part 1 and 2**

-- File size; At least 2MB, where 2MB can be adjusted with argument minimumFileSize.
+- File size: at least 2MB, where 2MB can be adjusted with argument minimumFileSizeMB.
This should not be changed unless you have good reason to believe that a smaller
file size is also acceptable.

