IndexError when processing (old?) GENEActiv .bin files #120
Comments
@cWam-zz Hi. Any chance the file was empty? Do you know the size of the file? Is there a way you can share the file for me to debug?
@chanshing Thank you for your message.
@cWam-zz Thanks! We will investigate and get back to you. In the meantime, a workaround for you could be to first convert your file to a CSV (using GENEActiv's own parser) and then use our tool. Sorry for the inconvenience!
@chanshing Thank you for your suggestion.
Thank you @cWam-zz, maybe you can try exporting your file to CSV using that tool.
Hello, Here is the full output message:
I uploaded the .csv files I used into the same Zenodo repository.
Hi @cWam-zz It seems that the file only has second-level summaries, which makes it impossible for our models to work: they require at least 15 Hz, ideally more, but yours is 1 Hz.
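A quick way to check whether a CSV meets that requirement is to estimate the sampling rate from the timestamp gaps. This is a sketch, not part of stepcount: the `"time"` column name follows the format discussed in this thread, the file path is a placeholder, and the 15 Hz figure comes from the comment above.

```python
# Sketch: estimate a CSV's sampling rate before running stepcount.
# Assumes a "time" column holding parseable timestamps.
import pandas as pd

def estimate_hz(csv_path, time_col="time"):
    """Estimate the sampling frequency (Hz) from the median timestamp gap."""
    t = pd.to_datetime(pd.read_csv(csv_path, usecols=[time_col])[time_col])
    # Median gap between consecutive samples, in seconds
    dt = t.diff().dt.total_seconds().median()
    return 1.0 / dt
```

A second-level summary file like the one in this thread would come out at roughly 1.0 Hz, well below the ~15 Hz minimum the models need.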
Hi @chanshing However, I noted that the files must give the seconds with 2- or 3-digit precision on each row. The accepted format can be "time","x","y","z","id" Or like this with 3-digit precision: But I got an error with file contents like: Here is the error encountered with this type of file: Thanks again for your help.
Thanks @cWam-zz that's a good diagnosis. The following modification should probably work (note that I added .00 to the first timestamp):
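The same fix can be applied in bulk by rewriting the CSV so that every timestamp carries an explicit fractional-seconds part. This is a sketch of that idea, not the maintainer's actual snippet: the `"time"` column name follows the format discussed above, and the regex is a simple heuristic, not stepcount's own parsing logic.

```python
# Sketch: append ".00" to timestamps that end in whole seconds,
# leaving timestamps that already have fractional seconds untouched.
import pandas as pd

def add_fractional_seconds(in_csv, out_csv, time_col="time"):
    df = pd.read_csv(in_csv)
    # Only "...HH:MM:SS" timestamps match; "...HH:MM:SS.123" is unchanged
    df[time_col] = df[time_col].astype(str).str.replace(
        r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})$", r"\1.00", regex=True)
    df.to_csv(out_csv, index=False)
```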
Hello,
I work with Python 3.8.19 on Windows 10 (64-bit).
In some cases, an error appeared after running the following command to process GENEActiv .bin files: stepcount "E:\file\directory\GENEActiv_file.bin" -o "E:\output\directory"
Here is the output message:
```
java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
        at GENEActivReader.parseBinFileHeader(GENEActivReader.java:221)
        at GENEActivReader.main(GENEActivReader.java:75)
Reading file... Done! (0.16s)
Error: C:\Users\***\AppData\Local\Temp\tmphxr_w9yo\data.npy - Le processus ne peut pas accéder au fichier car ce fichier est utilisé par un autre processus.
[The process cannot access the file because it is being used by another process.]
Traceback (most recent call last):
  File "C:\Users\***\Anaconda3\envs\stepcount\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\***\Anaconda3\envs\stepcount\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\***\Anaconda3\envs\stepcount\Scripts\stepcount.exe\__main__.py", line 7, in <module>
  File "C:\Users\***\Anaconda3\envs\stepcount\lib\site-packages\stepcount\stepcount.py", line 58, in main
    data, info = read(
  File "C:\Users\***\Anaconda3\envs\stepcount\lib\site-packages\stepcount\stepcount.py", line 730, in read
    data, info = actipy.read_device(
  File "C:\Users\***\Anaconda3\envs\stepcount\lib\site-packages\actipy\reader.py", line 50, in read_device
    data, info = _read_device(input_file, verbose)
  File "C:\Users\***\Anaconda3\envs\stepcount\lib\site-packages\actipy\reader.py", line 220, in _read_device
    info['StartTime'] = t.iloc[0].strftime(strftime)
  File "C:\Users\***\Anaconda3\envs\stepcount\lib\site-packages\pandas\core\indexing.py", line 1103, in __getitem__
    return self._getitem_axis(maybe_callable, axis=axis)
  File "C:\Users\***\Anaconda3\envs\stepcount\lib\site-packages\pandas\core\indexing.py", line 1656, in _getitem_axis
    self._validate_integer(key, axis)
  File "C:\Users\***\Anaconda3\envs\stepcount\lib\site-packages\pandas\core\indexing.py", line 1589, in _validate_integer
    raise IndexError("single positional indexer is out-of-bounds")
IndexError: single positional indexer is out-of-bounds
```
The first error does not matter: it appears every time, but the files can still be processed. The IndexError, however, stops the process.
I noted that the error did not appear for recent files (collected in 2023), but it did appear for older files (collected in 2018), even though the same devices were used to record the data in both years.
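The Java frame in the traceback (parseBinFileHeader) suggests one plausible, unconfirmed cause: GENEActiv .bin headers are "Key:Value" lines, and in Java, "Key:".split(":") drops the trailing empty token and returns a length-1 array, so reading index [1] throws exactly "ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1". An older file with an empty header field would trigger this. Below is a hypothetical Python sketch of a defensive version of that parsing step; it is not the actual GENEActivReader code.

```python
# Hypothetical sketch: parse a "Key:Value" header line without ever
# indexing past the end of the split result. Unlike Java's split(":"),
# Python's split(":", 1) keeps the empty value after a trailing colon.

def parse_header_line(line):
    parts = line.split(":", 1)   # split on the first colon only
    if len(parts) < 2:           # no colon at all: not a key/value line
        return None
    # The value may legitimately be "" for empty header fields
    return parts[0].strip(), parts[1].strip()
```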