The paper says it "trained and tested PInet on the DBD5 and older DBD3 benchmarks using leave-one-out cross-validation". However, the DBD dataset at https://www.dropbox.com/sh/qqi9op061mfxbmo/AADibYuDdMF4n2bDS3uqiEVha?dl=0 is split into folders "tt0" through "tt4", each containing a "shuffled_train_file_list_l.json" and a "shuffled_test_file_list_l.json", which suggests the training and testing sets have already been partitioned. Which strategy should be used to reproduce the results reported in the paper: leave-one-out cross-validation, or the data partitioning provided in the dataset?
Thank you so much!
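For reference, here is a minimal sketch of how I am currently reading the provided split files, treating the tt0-tt4 folders as five pre-defined folds. The `DATA_ROOT` path is just my local copy of the Dropbox folder, and the five-fold interpretation is only my assumption, not something stated in the paper:

```python
import json
from pathlib import Path

# Hypothetical local path to the downloaded Dropbox folder; adjust as needed.
DATA_ROOT = Path("data/dbd")

def load_fold(fold_dir: Path):
    """Read the ligand-side train/test file lists stored in one ttN folder."""
    with open(fold_dir / "shuffled_train_file_list_l.json") as f:
        train_files = json.load(f)
    with open(fold_dir / "shuffled_test_file_list_l.json") as f:
        test_files = json.load(f)
    return train_files, test_files

# Assumption: tt0-tt4 are five cross-validation folds, so evaluation would mean
# training on each fold's train list and averaging metrics over the five test lists.
for fold_dir in sorted(DATA_ROOT.glob("tt[0-4]")):
    train_files, test_files = load_fold(fold_dir)
    print(fold_dir.name, len(train_files), "train /", len(test_files), "test complexes")
```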
Hi!