Data preparation code providing consistent, high-performance preprocessing
Dependencies: numpy, tqdm, numba, joblib, scipy
- UAV-Human: Skeleton --> N C T V M
- NTU RGB+D: Skeleton [ST-GCN] --> N C T V M
- NTU RGB+D: Skeleton [CTR-GCN]
- NW-UCLA: Skeleton --> N C T V M
- MIMII: Audio --> [SNR [MFCC, device, label]]
- SHL-2024: Sequence --> Modal Channel Num sample
- ECG5000: Sequence
- KuaiRec
- Tenrec
- Memory: minimize memory consumption as much as possible
- Time: run as fast as possible
- Data preprocessing applies many performance optimizations, aiming to strike a reasonable balance between speed and memory demand
- If you can afford the extra memory consumption, you can replace `open_memmap` with `np.load`/`np.save` to speed up processing
- This project is I/O-intensive; run it on storage with high I/O throughput to ensure it works well
- Some datasets are preprocessed differently in different projects; to avoid ambiguity, the source project is indicated in "[]"
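A minimal sketch of the kind of parallel per-sample processing pattern described above, using `joblib` (one of the listed dependencies). The `process_sample` transform here (per-channel centering) is purely hypothetical, standing in for whatever the real preprocessing step is:

```python
import numpy as np
from joblib import Parallel, delayed

def process_sample(x):
    # hypothetical per-sample transform: center each channel over time
    return x - x.mean(axis=-1, keepdims=True)

def preprocess(samples, n_jobs=2):
    # fan samples out to worker processes; result order matches input order
    return Parallel(n_jobs=n_jobs)(delayed(process_sample)(s) for s in samples)

samples = [np.ones((3, 16), dtype=np.float32) for _ in range(4)]
out = preprocess(samples)
```

Because `joblib` pickles each task to a worker process, this pays off mainly when `process_sample` is expensive relative to the per-sample data size.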
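The `open_memmap` vs `np.save` trade-off mentioned above can be sketched as follows; the shape and random data here are hypothetical placeholders for the real preprocessed output:

```python
import os
import tempfile
import numpy as np
from numpy.lib.format import open_memmap

shape = (8, 3, 16, 25, 2)  # hypothetical N C T V M layout
path = os.path.join(tempfile.mkdtemp(), "data.npy")

# low-memory path: stream samples one at a time into a disk-backed array
fp = open_memmap(path, mode="w+", dtype=np.float32, shape=shape)
for i in range(shape[0]):
    fp[i] = np.random.rand(*shape[1:])  # placeholder per-sample result
fp.flush()
del fp

# faster path (costs RAM): build the full array in memory, save it once
data = np.random.rand(*shape).astype(np.float32)
np.save(path, data)

loaded = np.load(path)  # or np.load(path, mmap_mode="r") to read lazily
```

Both paths produce the same on-disk `.npy` file, so downstream loaders are unaffected by the choice.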
Axis legend: N = number of samples, C = channels, T = frames, V = joints, M = bodies
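To make the N C T V M layout concrete, here is a small sketch of allocating and indexing such a tensor; the sizes are hypothetical (e.g. 25 joints and 2 bodies match the NTU skeleton convention, but your dataset may differ):

```python
import numpy as np

# hypothetical sizes: 4 samples, 3 coordinate channels, 64 frames,
# 25 joints, up to 2 bodies per sample
N, C, T, V, M = 4, 3, 64, 25, 2
data = np.zeros((N, C, T, V, M), dtype=np.float32)

# trajectory of joint 0 of body 0 in sample 0: all channels over all frames
traj = data[0, :, :, 0, 0]  # shape (C, T)
```

Keeping every dataset in this one layout is what lets downstream models (ST-GCN, CTR-GCN, etc.) consume the preprocessed files interchangeably.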