TMatrixSampler Error Comparison #6

Open
alexlafleur opened this issue Oct 6, 2016 · 6 comments

@alexlafleur
Collaborator

[Attached: error_comparison plots]

@greenTara
Collaborator

There is no text here, just plots. Can this issue be closed?

@alexlafleur
Collaborator Author

These plots compare the errors from our own error calculation (left) with the errors based on the count matrix from TransitionMatrixSampler(C=C1bayes).
I think it can be closed.
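
For anyone revisiting this later, here is a minimal, self-contained sketch of the comparison these plots were making, assuming row-wise Dirichlet posteriors with a uniform prior (the non-reversible case). The actual TransitionMatrixSampler, the C1bayes count matrix, and the prior used in the project may differ; T_true and the chain length below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth transition matrix (illustrative only).
T_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])

# Simulate a count matrix C by running a chain of length N from T_true.
N = 10_000
n = T_true.shape[0]
C = np.zeros((n, n))
s = 0
for _ in range(N):
    s_next = rng.choice(n, p=T_true[s])
    C[s, s_next] += 1
    s = s_next

# Error that uses the ground truth: deviation of the maximum-likelihood
# estimate from the known model.
T_mle = C / C.sum(axis=1, keepdims=True)
err_true = np.abs(T_mle - T_true)

# Bayesian error estimate from the count matrix alone: sample transition
# matrices row-wise from Dirichlet posteriors (uniform prior) and take the
# per-element standard deviation across samples.
n_samples = 1000
samples = np.stack([
    np.vstack([rng.dirichlet(C[i] + 1) for i in range(n)])
    for _ in range(n_samples)
])
err_bayes = samples.std(axis=0)

print(err_true)
print(err_bayes)
```

The question in the plots is then whether err_bayes (computed without knowing T_true) tracks err_true (computed with it).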

@greenTara
Collaborator

There was something being tested in this check, and it doesn't look like we have brought this test to a satisfactory conclusion, unless that is documented elsewhere.

@greenTara
Collaborator

As I recall, the question was how the error estimate that comes from the Bayesian approach, without taking advantage of our knowledge of the ground-truth transition matrix, compares to the error estimate that does take advantage of that knowledge. Looking at these graphs, I recall that we found a bug in the test, because the behavior was not as expected. But I don't recall where the rest of the development was documented.

@alexlafleur
Collaborator Author

alexlafleur commented Nov 11, 2016

There is a script called "test_tmatrix_sampler", but as far as I remember we did not document anything for it.

@greenTara
Collaborator

I checked the script test_tmatrix_sampler. It basically checks that the Bayesian uncertainty estimate coming from the count matrix is roughly the same as the actual error that we calculate from the known model. I think the plots in this issue are out of date, but in the end the test was succeeding: there is an assertion in there checking that the errors are not too different (ratio between 1/4 and 4).
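
For reference, the check described above (errors agreeing to within a factor of 4 in either direction) might look like the following sketch. assert_errors_comparable is a hypothetical name, and the real assertion in test_tmatrix_sampler may be written differently.

```python
import numpy as np

def assert_errors_comparable(err_true, err_bayes, low=0.25, high=4.0):
    # Element-wise ratio of the Bayesian uncertainty estimate to the
    # ground-truth-based error; both arrays are assumed strictly positive.
    ratio = err_bayes / err_true
    assert np.all((ratio > low) & (ratio < high)), (
        f"error ratio outside [{low}, {high}]:\n{ratio}"
    )
```

Called with the two error arrays from the earlier sketch, e.g. assert_errors_comparable(err_true, err_bayes), this passes when the count-matrix-based uncertainty and the ground-truth error stay within a factor of 4 of each other.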
