
Using TTM with sparse Time Series Data #212

Open
kiran-vj opened this issue Nov 26, 2024 · 3 comments

Comments

@kiran-vj

Hi Folks!

We've been wondering what the ideal way would be to deal with sparse time series data, given the fixed-length input requirement. Should we pad with zeros or duplicate the initial/last events to reach the maximum length?

Has there been any analysis around this?

@vijaye12
Collaborator

Yes. If you are planning to fine-tune the model, you can append zeros or repeat the initial timepoints as you suggested, and the model will learn the augmented pattern easily.

For zero-shot use, repeating works slightly better than zero-padding. However, both approaches might still incur some drop in performance depending on the datasets under consideration.
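The two options discussed above (zero-padding vs. repeating the initial timepoints) can be sketched as a small helper. This is illustrative only: `pad_to_context` and its arguments are not part of the TTM API, and the target context length depends on the model variant you use.

```python
import numpy as np

def pad_to_context(series, context_length, mode="repeat"):
    """Left-pad a short 1-D series to a model's fixed context length.

    mode="zeros"  -> prepend zeros
    mode="repeat" -> repeat the first observed value
    """
    series = np.asarray(series, dtype=float)
    n_missing = context_length - len(series)
    if n_missing <= 0:
        # Series already long enough: keep the most recent window.
        return series[-context_length:]
    if mode == "zeros":
        pad = np.zeros(n_missing)
    elif mode == "repeat":
        pad = np.full(n_missing, series[0])
    else:
        raise ValueError(f"unknown mode: {mode}")
    return np.concatenate([pad, series])
```

Padding on the left keeps the real observations at the most recent positions of the context window, which is where forecasting models weight them.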

@mrlucasrib

> Yes. If you are planning to fine-tune the model, you can append zeros or repeat the initial timepoints as you suggested, and the model will learn the augmented pattern easily.
>
> For zero-shot use, repeating works slightly better than zero-padding. However, both approaches might still incur some drop in performance depending on the datasets under consideration.

Hello, I plan to work with time series data that have relatively few time points (an average of 12) compared to the minimum context length required by most foundation models. As I’m starting my master’s, I aim to compare state-of-the-art (SOTA) foundation models using this dataset.

Is there a standard method to address or adapt datasets with fewer time points, especially for meeting the minimum context length? This applies not only to TTM but also to other SOTA approaches.

I’d greatly appreciate any guidance on this, including references to papers or resources that could help substantiate this approach.

@vg11072001

vg11072001 commented Dec 23, 2024

@mrlucasrib you can check out the augmentation technique combining TSMixup and KernelSynth from the paper "Chronos: Learning the Language of Time Series."


This type of challenge is common with real-world datasets that have limited context. Can we design a solution for TTM in this setting? Please share your thoughts: @vijaye12 @wgifford
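For reference, the core TSMixup idea from the Chronos paper is to generate synthetic training series as convex combinations of existing ones, with mixing weights drawn from a Dirichlet distribution. The sketch below is a simplification: the function name and parameters are illustrative, and the actual Chronos pipeline additionally applies mean scaling and handles series of unequal lengths.

```python
import numpy as np

def tsmixup(series_bank, k=3, alpha=1.5, rng=None):
    """Produce one augmented series as a convex combination of k
    randomly chosen, equal-length series (TSMixup-style sketch)."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(series_bank), size=k, replace=False)
    # Dirichlet weights sum to 1, so the mix stays in the data's range.
    weights = rng.dirichlet(alpha * np.ones(k))
    picked = np.stack([np.asarray(series_bank[i], dtype=float) for i in idx])
    return weights @ picked
```

Because the weights form a convex combination, each augmented point lies within the range of the source series at that timestep, which keeps the synthetic data plausible.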
