Streamed writes #9
This is certainly doable - I don't have time to work on it right now, but these steps would be required:
It is certainly preferable to do this knowing ahead of time how much data will be stored in the HDF5 dataset. If that is not known, then additionally:
Your conversion program would then write the data one chunk at a time, by choosing an appropriate slice that corresponds to the chunk size at a whole-chunk offset.
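To make the chunk-at-a-time idea concrete, here is a minimal sketch of the slicing logic only (the helper name `chunkRanges` is ours, not part of h5wasm): it computes ranges so that every write starts at a whole-chunk offset and covers at most one chunk.

```javascript
// Hypothetical helper (not part of h5wasm): given a total length along one
// dimension and a chunk size, produce [start, stop] ranges where each write
// begins at a whole-chunk offset and spans at most one chunk.
function chunkRanges(totalLength, chunkSize) {
  const ranges = [];
  for (let start = 0; start < totalLength; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, totalLength)]);
  }
  return ranges;
}

// A conversion program would loop over these ranges and issue one slice-write
// per range; the slice-write call itself is the capability being requested here.
console.log(chunkRanges(10, 4)); // [[0, 4], [4, 8], [8, 10]]
```

The last range may be shorter than a chunk, which is fine: it still starts on a chunk boundary, so no read-modify-write of a partially filled chunk from a previous write is needed.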
I can't tell based on a quick reading of what you sent, but if your datasets use Compound datatypes, the ability to initialize and write those would also have to be added to h5wasm (currently it can read, but not write, Compound types).
Hi Brian, as mentioned by email earlier, we are working on streamed writes. We need them to convert large binary files in a web app with memory limitations.
We have already made a version in Python, where we parse the data in blocks, but we are planning to use JavaScript/TypeScript.
Attached you can find our example using the NETCDF4 library: we append new data from the stream, extending the netCDF4 file. Most of the logic is in main(), but some other functions are included.
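The append pattern described above can be sketched without any file library at all. This is our own illustrative sketch, not code from the attached archive (the class name `BlockAccumulator` is hypothetical): it buffers an incoming stream and emits fixed-size blocks, each of which would then be appended to the growing file.

```javascript
// Hypothetical sketch (names are ours, not from the attached source): buffer
// streamed samples and hand back complete fixed-size blocks, the unit that
// gets appended to the file's unlimited dimension.
class BlockAccumulator {
  constructor(blockSize) {
    this.blockSize = blockSize;
    this.pending = [];
  }

  // Add incoming samples; return any complete blocks now ready to append.
  push(samples) {
    this.pending.push(...samples);
    const blocks = [];
    while (this.pending.length >= this.blockSize) {
      blocks.push(this.pending.splice(0, this.blockSize));
    }
    return blocks;
  }

  // At end of stream, return the leftover samples as one final short block.
  flush() {
    const rest = this.pending;
    this.pending = [];
    return rest;
  }
}
```

In the conversion loop, each block returned by `push()` (and the final `flush()` remainder) would be appended by extending the file along its unlimited dimension and writing the block at the new offset, which keeps peak memory bounded by one block regardless of the input file size.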
createNetCDF4.zip
Kind Regards, Jason