While I don't understand retention time correction algorithms well enough to implement something like Obiwarp, a basic version using peak groups doesn't seem that hard. Basically I'd need the RTs for a couple of peaks on a per-file basis and then apply a linear (or loess?) interpolation that lines those peaks up at the same RT in every file.
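A minimal sketch of what I'm imagining, assuming the `feature`/`filename`/`peak_rt` input described at the bottom of this issue, and taking the median RT across files as the alignment target (that target choice is just one option):

```r
library(dplyr)

# Consensus RT per peak group: the median across all files
anchor_rts <- peak_rts %>%
  group_by(feature) %>%
  mutate(ref_rt = median(peak_rt)) %>%
  ungroup()

# Warp a vector of RTs from one file onto the consensus RTs.
# approx() is the linear version; rule = 2 holds the correction flat
# beyond the first/last anchor. loess() could be swapped in here if
# there are enough anchor peaks to support it.
correct_rt <- function(rts, file) {
  anchors <- anchor_rts[anchor_rts$filename == file, ]
  approx(x = anchors$peak_rt, y = anchors$ref_rt, xout = rts, rule = 2)$y
}
```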
Option 1 would be integrating a couple of peaks per file, which is a painful process. Option 2 would be applying some kind of heuristic like "match up the maximum-intensity points" (maximize correlation?) within a large RT range, as sketched below. Knowing myself, I'd actually use the second one and would probably never do the first (except maybe on a subset?).
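A sketch of that heuristic, assuming the MS1 data is a long data frame `ms1_data` with `filename`, `rt`, and `int` columns (all hypothetical names):

```r
library(dplyr)

# For one broad RT window, take each file's maximum-intensity scan as
# the anchor point for that "peak"
find_anchor <- function(ms1_data, rt_min, rt_max) {
  ms1_data %>%
    filter(rt >= rt_min, rt <= rt_max) %>%
    group_by(filename) %>%
    slice_max(int, n = 1, with_ties = FALSE) %>%
    ungroup() %>%
    select(filename, peak_rt = rt)
}
```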
You could even pick a couple of high-intensity peaks from the BPC/TIC if they were consistent across files, or apply the heuristic to a single chromatogram. That would also likely get a better spread of RTs for the interpolation and avoid the temptation to use six different internal standards that all elute between minutes 5 and 6.
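Continuing the sketch, the same heuristic could run over a handful of windows spread across a long-format BPC to build the anchor table (`bpc_data` and the window boundaries below are purely illustrative):

```r
library(dplyr)

# Purely illustrative window boundaries (minutes); pick regions where
# a big peak shows up reliably in every file
windows <- list(c(1, 3), c(5, 7), c(9, 11), c(13, 15))

# `bpc_data`: an assumed long-format BPC with filename, rt, int columns
peak_rts <- bind_rows(lapply(seq_along(windows), function(i) {
  find_anchor(bpc_data, windows[[i]][1], windows[[i]][2]) %>%
    mutate(feature = paste0("anchor_", i))
}))
```

That produces exactly the `feature`/`filename`/`peak_rt` shape the first sketch takes as input.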
Also, this could be done on a subset of the files (e.g. our Pooled samples) and then interpolated between them using the run timestamps in the metadata.
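A sketch of that variant, assuming the anchors were only picked in the pooled files and a `metadata` data frame maps each `filename` to a run `timestamp` (both hypothetical):

```r
library(dplyr)

# Anchor peaks measured only in the pooled files, tagged with run time
pooled_anchors <- anchor_rts %>%
  inner_join(metadata, by = "filename") %>%  # adds the timestamp column
  arrange(timestamp)

# Estimate where each anchor would fall in any given file by linear
# interpolation of its pooled RTs against run timestamp (assumes at
# least two pooled runs)
estimate_anchors <- function(file) {
  file_time <- metadata$timestamp[metadata$filename == file]
  pooled_anchors %>%
    group_by(feature, ref_rt) %>%
    summarise(
      peak_rt = approx(timestamp, peak_rt, xout = file_time, rule = 2)$y,
      .groups = "drop"
    ) %>%
    mutate(filename = file)
}
```

The estimated anchors come back in the same shape as measured ones, so the `approx()` warp from the first sketch applies unchanged.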
Expected outputs would be a data frame of `filename`, `init_rt`, and `corr_rt` that could be joined back to the original MS1 data. Inputs would have to be something like `feature`, `filename`, and `peak_rt`.
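Something like this for the output, reusing the hypothetical `correct_rt()` from the first sketch:

```r
library(dplyr)

# One init_rt/corr_rt pair per distinct scan time in each file
rt_corrections <- ms1_data %>%
  distinct(filename, init_rt = rt) %>%
  group_by(filename) %>%
  mutate(corr_rt = correct_rt(init_rt, first(filename))) %>%
  ungroup()

# Join the corrected RTs back onto the original MS1 data
ms1_corrected <- ms1_data %>%
  left_join(rt_corrections, by = c("filename", "rt" = "init_rt"))
```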