In a face-to-face discussion, @kokbent noted that a large proportion of the NUTS sampling time occurs during early sampling. This contributes to very long overall sampling times (many hours to reach 4000 samples).
This suggests that much of the time is spent in the adaptation phase, probably in identifying a good metric (mass matrix) for the momentum refresh step in NUTS.
A possible mitigation would be to serialise the metric from a successful run and reuse it in future runs on the same state, or to do the same using an estimate of the inverse correlation matrix of posterior samples. A major limitation, of course, is that the number of parameters grows with the number of variants, but this could still be a useful way of pre-warming the NUTS process.
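As a rough sketch of this idea: estimate a dense inverse metric from posterior draws of a previous run, serialise it, and reload it later. This is a minimal numpy illustration with hypothetical function names; a real sampler (e.g. Stan or PyMC) would have its own mechanism for accepting an initial inverse mass matrix, and the synthetic draws below just stand in for posterior samples.

```python
import numpy as np

def estimate_inverse_metric(samples):
    """Estimate a dense inverse mass matrix from posterior draws.

    samples: array of shape (n_draws, n_params). The posterior covariance
    is the usual choice of inverse metric for NUTS momentum refresh.
    """
    return np.cov(samples, rowvar=False)

def save_metric(path, inv_metric):
    # Serialise the metric so a future run on the same state can reuse it.
    np.save(path, inv_metric)

def load_metric(path):
    return np.load(path)

# Synthetic stand-in for posterior draws: a correlated 2-parameter Gaussian.
rng = np.random.default_rng(0)
true_cov = np.array([[2.0, 0.6], [0.6, 1.0]])
draws = rng.multivariate_normal(np.zeros(2), true_cov, size=20000)

inv_metric = estimate_inverse_metric(draws)
save_metric("inv_metric.npy", inv_metric)
reloaded = load_metric("inv_metric.npy")
```

The reloaded matrix could then be passed to the sampler as its starting metric, skipping (or shortening) the expensive window-adaptation phase on subsequent runs.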