
pull in updates from master branch #96

Merged 14 commits on Nov 19, 2024
Binary file modified .DS_Store
Binary file not shown.
2 changes: 2 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -19,3 +19,5 @@ table-of-features.xlsx
.DS_Store
.DS_Store
docs
/doc/
/Meta/
6 changes: 3 additions & 3 deletions CRAN-SUBMISSION
@@ -1,3 +1,3 @@
Version: 0.5.1
Date: 2024-04-12 20:01:33 UTC
SHA: 10ab14103dea87ac69287d72d299938c0b3b0ffc
Version: 1.0.2
Date: 2024-10-17 14:47:16 UTC
SHA: 24e277cde428c3af5fc3884966c5bb16d0fb46aa
11 changes: 6 additions & 5 deletions DESCRIPTION
@@ -1,7 +1,7 @@
Type: Package
Package: konfound
Title: Quantify the Robustness of Causal Inferences
Version: 1.0.0
Version: 1.0.3
Authors@R: c(
person(c("Joshua", "M"), "Rosenberg", , "[email protected]", role = c("aut", "cre")),
person("Ran", "Xu", , "[email protected]", role = "ctb"),
@@ -20,15 +20,16 @@ Description: Statistical methods that quantify the conditions necessary to
alter inferences, also known as sensitivity analysis, are becoming
increasingly important to a variety of quantitative sciences. A series
of recent works, including Frank (2000)
<doi:10.1177/0049124100029002001> and Frank et al. (2013)
<doi:10.1177/0049124100029002001> and Frank et al. (2013)
<doi:10.3102/0162373713493129> extend previous sensitivity analyses by
considering the characteristics of omitted variables or unobserved
cases that would change an inference if such variables or cases were
observed. These analyses generate statements such as "an omitted
variable would have to be correlated at xx with the predictor of
interest (e.g., treatment) and outcome to invalidate an inference of a
interest (e.g., the treatment) and outcome to invalidate an inference of a
treatment effect". Or "one would have to replace pp percent of the
observed data with null hypothesis cases to invalidate the inference".
observed data with cases for which the treatment had no effect to invalidate the
inference".
We implement these recent developments of sensitivity analysis and
provide modules to calculate these two robustness indices and generate
such statements in R. In particular, the functions konfound(),
@@ -68,6 +69,6 @@ VignetteBuilder:
knitr
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.3.1
RoxygenNote: 7.3.2
Roxygen: list(roclets = c("collate", "rd", "namespace",
"doctest::dt_roclet"), packages = "doctest")
Binary file modified Meta/vignette.rds
Binary file not shown.
13 changes: 13 additions & 0 deletions NEWS.md
@@ -1,3 +1,16 @@
# konfound 1.0.3 (IN DEVELOPMENT)

* fix sign problems in print output for some cases

# konfound 1.0.2

* edits to README and vignette
* small edit to DESCRIPTION

# konfound 1.0.1

* minor edits in advance of CRAN submit

# konfound 1.0.0

* major release of changes that were being made in the "newitcv" branch:
107 changes: 54 additions & 53 deletions R/helper_output_print.R

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions R/pkonfound.R
@@ -59,6 +59,7 @@
#' \item{starting_table}{Observed 2 by 2 table before replacement and switching. Implied table for logistic regression.}
#' \item{final_table}{The 2 by 2 table after replacement and switching.}
#' \item{user_SE}{user entered standard error. Only applicable for logistic regression.}
#' \item{needtworows}{whether double row switches are needed.}
#' \item{analysis_SE}{the standard error used to generate a plausible 2 by 2 table. Only applicable for logistic regression.}
#' \item{Fig_ITCV}{figure for ITCV.}
#' \item{Fig_RIR}{figure for RIR.}
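
The return elements documented above, including the new `needtworows` entry, are available from the raw output. A hedged sketch using the 2-by-2 example table from the vignette (assuming `to_return = "raw_output"` is accepted for the two-by-two form, as the documented element list suggests):

``` r
library(konfound)
# 2 by 2 example from the vignette; raw output returns a list of elements.
out <- pkonfound(a = 35, b = 17, c = 17, d = 38, to_return = "raw_output")
out$final_table   # the 2 by 2 table after replacement and switching
out$needtworows   # whether double row switches are needed
```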
2 changes: 1 addition & 1 deletion R/test_sensitivity.R
@@ -350,7 +350,7 @@ test_sensitivity <- function(est_eff,
} else if (to_return == "corr_plot") {
return(plot_correlation(r_con = r_con, obs_r = obs_r, critical_r = critical_r))
} else if (to_return == "print") {
return(output_print(n_covariates, est_eff, beta_threshold, bias, sustain, nu, eff_thr, recase, obs_r, critical_r, r_con, itcv, alpha, index, far_bound, R2))
return(output_print(n_covariates, est_eff, beta_threshold, bias, sustain, nu, eff_thr, recase, obs_r, critical_r, r_con, itcv, alpha, index, far_bound, sdx, sdy, R2, uncond_rxcv, uncond_rycv, rxcvGz, rycvGz))
} else if (to_return == "table") {
return(output_table(model_object, tested_variable))
} else {
12 changes: 11 additions & 1 deletion README.Rmd
@@ -42,7 +42,7 @@ devtools::install_github("konfound-project/konfound")

## pkonfound() for published studies

`pkonfound()`, for published studies, calculates (1) how much bias there must be in an estimate to invalidate/sustain an inference, and interprets in terms of how much data would need to be replaced to nullify an inference; (2) the impact of an omitted variable necessary to invalidate/sustain an inference for a regression coefficient:
`pkonfound()`, for published studies, calculates (1) how much bias there must be in an estimate to invalidate/sustain an inference, and interprets in terms of how much data would need to be replaced to nullify an inference (Robustness of an Inference to Replacement, RIR); (2) the impact of an omitted variable necessary to invalidate/sustain an inference for a regression coefficient (Impact Threshold of a Confounding Variable, ITCV). RIR is reported by default. ITCV can be generated by specifying `index = "IT"`.

```{r, eval = T}
library(konfound)
Expand All @@ -53,6 +53,11 @@ pkonfound(est_eff = 2,
std_err = .4,
n_obs = 100,
n_covariates = 3)
pkonfound(est_eff = 2,
std_err = .4,
n_obs = 100,
n_covariates = 3,
index = "IT")
```

## konfound() for models fit in R
@@ -71,6 +76,11 @@ Sensitivity analysis for the effect of `wt` on `mpg` can be carried out as follows:
konfound(m1, wt)
```

As with `pkonfound()`, ITCV can be generated by specifying `index = "IT"`.
```{r}
konfound(m1, wt, index = "IT")
```

## mkonfound for meta-analyses including sensitivity analysis

`mkonfound()` supports sensitivity analyses that can be compared or synthesized across multiple studies. We can use an existing (and built-in) dataset, such as `mkonfound_ex`.
71 changes: 69 additions & 2 deletions README.md
@@ -40,8 +40,11 @@ devtools::install_github("konfound-project/konfound")
`pkonfound()`, for published studies, calculates (1) how much bias there
must be in an estimate to invalidate/sustain an inference, and
interprets in terms of how much data would need to be replaced to
nullify an inference; (2) the impact of an omitted variable necessary to
invalidate/sustain an inference for a regression coefficient:
nullify an inference (Robustness of an Inference to Replacement, RIR);
(2) the impact of an omitted variable necessary to invalidate/sustain an
inference for a regression coefficient (Impact Threshold of a
Confounding Variable, ITCV). RIR is reported by default. ITCV can be
generated by specifying `index = "IT"`.

``` r
library(konfound)
@@ -76,6 +79,39 @@ pkonfound(est_eff = 2,
#> For other forms of output, run
#> ?pkonfound and inspect the to_return argument
#> For models fit in R, consider use of konfound().
pkonfound(est_eff = 2,
std_err = .4,
n_obs = 100,
n_covariates = 3,
index = "IT")
#> Impact Threshold for a Confounding Variable (ITCV):
#>
#> The minimum impact of an omitted variable to invalidate an inference for
#> a null hypothesis of an effect of nu (0) is based on a correlation of 0.566
#> with the outcome and 0.566 with the predictor of interest (conditioning
#> on all observed covariates in the model; signs are interchangeable). This is
#> based on a threshold effect of 0.2 for statistical significance (alpha = 0.05).
#>
#> Correspondingly the impact of an omitted variable (as defined in Frank 2000) must be
#> 0.566 X 0.566 = 0.321 to invalidate an inference for a null hypothesis of an effect of nu (0).
#>
#> For calculation of unconditional ITCV using pkonfound(), additionally include
#> the R2, sdx, and sdy as input, and request raw output.
#>
#> See Frank (2000) for a description of the method.
#>
#> Citation:
#> Frank, K. (2000). Impact of a confounding variable on the inference of a
#> regression coefficient. Sociological Methods and Research, 29 (2), 147-194
#>
#> Accuracy of results increases with the number of decimals reported.
#>
#> The ITCV analysis was originally derived for OLS standard errors. If the
#> standard errors reported in the table were not based on OLS, some caution
#> should be used to interpret the ITCV.
#> For other forms of output, run
#> ?pkonfound and inspect the to_return argument
#> For models fit in R, consider use of konfound().
```

## konfound() for models fit in R
@@ -141,6 +177,37 @@ konfound(m1, wt)
#> NULL
```

As with `pkonfound()`, ITCV can be generated by specifying
`index = "IT"`.

``` r
konfound(m1, wt, index = "IT")
#> Impact Threshold for a Confounding Variable (ITCV):
#>
#> The minimum (in absolute value) impact of an omitted variable to invalidate
#> an inference for a null hypothesis of an effect of nu (0) is based on
#> a correlation of -0.425 with the outcome and 0.425 with the predictor of
#> interest (conditioning on all observed covariates in the model; signs are
#> interchangeable). This is based on a threshold effect of -0.355 for statistical
#> significance (alpha = 0.05).
#>
#> Correspondingly the impact of an omitted variable (as defined in Frank 2000) must be
#> -0.425 X 0.425 = -0.18 to invalidate an inference for a null hypothesis of an effect of nu (0).
#>
#> See Frank (2000) for a description of the method.
#>
#> Citation:
#> Frank, K. (2000). Impact of a confounding variable on the inference of a
#> regression coefficient. Sociological Methods and Research, 29 (2), 147-194
#>
#> Accuracy of results increases with the number of decimals reported.
#>
#> The ITCV analysis was originally derived for OLS standard errors. If the
#> standard errors reported in the table were not based on OLS, some caution
#> should be used to interpret the ITCV.
#> NULL
```

## mkonfound for meta-analyses including sensitivity analysis

`mkonfound()` supports sensitivity analyses that can be compared or synthesized
32 changes: 31 additions & 1 deletion _pkgdown.yml
@@ -1,4 +1,34 @@
url: https://konfound-it.org/konfound/
template:
bootstrap: 5

reference:
- title: "Main"
contents:
- konfound
- pkonfound
- mkonfound
- title: "Auxiliary"
contents:
- binary_dummy_data
- cal_delta_star
- cal_rxy
- cal_rxz
- cal_ryz
- chisq_p
- concord1
- get_kr_df
- konfound_glm
- konfound_glm_dichotomous
- konfound_lm
- konfound_lmer
- mkonfound_ex
- output_df
- output_print
- output_table
- plot_correlation
- plot_threshold
- tkonfound
- tkonfound_fig
- verify_reg_Gzcv
- verify_reg_uncond
- zzz
75 changes: 0 additions & 75 deletions doc/introduction-to-konfound.R
@@ -4,78 +4,3 @@ collapse = TRUE,
comment = "#>"
)

## ----gh-installation, eval = FALSE--------------------------------------------
# install.packages("konfound")

## ----eval = TRUE--------------------------------------------------------------
library(konfound)

## -----------------------------------------------------------------------------
pkonfound(est_eff = 2, std_err = .4, n_obs = 100, n_covariates = 3)

## -----------------------------------------------------------------------------
pkonfound(est_eff = 2, std_err = .4, n_obs = 100, n_covariates = 3, index = "IT")

## ----fig.width = 6, fig.height = 6--------------------------------------------
pkonfound(est_eff = 2, std_err = .4, n_obs = 100, n_covariates = 3, to_return = "thresh_plot")

## ----fig.width = 6, fig.height = 6--------------------------------------------
pkonfound(est_eff = 2, std_err = .4, n_obs = 100, n_covariates = 3, to_return = "corr_plot")

## ----fig.width = 6, fig.height = 6--------------------------------------------
pkonfound(est_eff = 2, std_err = .4, n_obs = 100, n_covariates = 3, to_return = "raw_output")

## -----------------------------------------------------------------------------
pkonfound(a = 35, b = 17, c = 17, d = 38)

## -----------------------------------------------------------------------------
my_table <- tibble::tribble(
~unsuccess, ~success,
35, 17,
17, 38,
)
pkonfound(two_by_two_table = my_table)

## -----------------------------------------------------------------------------
pkonfound(est_eff = 0.4, std_err = 0.103,
n_obs = 20888, n_covariates = 3,
n_treat = 17888, model_type = 'logistic')

## -----------------------------------------------------------------------------
m1 <- lm(mpg ~ wt + hp + qsec, data = mtcars)
m1

konfound(model_object = m1,
tested_variable = hp)

## -----------------------------------------------------------------------------
konfound(model_object = m1, tested_variable = wt, to_return = "table")

## -----------------------------------------------------------------------------
# View summary stats for condition variable
table(binary_dummy_data$condition)
# Fit the logistic regression model
m4 <- glm(outcome ~ condition + control,
data = binary_dummy_data, family = binomial)
# View the summary of the model
summary(m4)

## -----------------------------------------------------------------------------
konfound(model_object = m4,
tested_variable = condition,
two_by_two = TRUE, n_treat = 55)

## -----------------------------------------------------------------------------
if (requireNamespace("lme4")) {
library(lme4)
m3 <- fm1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
konfound(m3, Days)
}

## ----eval = TRUE--------------------------------------------------------------
mkonfound_ex
mkonfound(mkonfound_ex, t, df)

## ----eval = TRUE--------------------------------------------------------------
mkonfound(mkonfound_ex, t, df, return_plot = TRUE)
