Hi, we ran "rapidsai/notebooks-contrib/../cugraph/multi_gpu_pagerank.ipynb" on the 'twitter-2010.csv' dataset using 4 Tesla T4 GPUs (16 GB each); the GPUs are not connected with NVLink. The process throws a warning during the "Read the data from disk" step:

"Memory use is high but worker has no data to store to disk. Perhaps some other process is leaking memory? Process memory: 5.83GB -- Worker memory limit: 8.29GB"

Our code is the same as in this example.
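For reference, a minimal sketch of the cluster setup and read step involved (the memory-related kwargs and the CSV layout below are illustrative assumptions, not taken from the notebook; the "Worker memory limit" in the warning corresponds to the per-worker memory_limit set when the cluster is created):

```python
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
import dask_cudf

# One Dask-CUDA worker per GPU. memory_limit caps host RAM per worker
# (this is the "Worker memory limit" printed in the warning), while
# device_memory_limit controls when GPU data spills to host memory.
cluster = LocalCUDACluster(
    n_workers=4,
    memory_limit="16GB",         # assumed value, for illustration only
    device_memory_limit="14GB",  # assumed value, for illustration only
)
client = Client(cluster)

# Read the edge list as a distributed cuDF DataFrame; the column names,
# delimiter, and dtypes are assumptions about twitter-2010.csv.
e_list = dask_cudf.read_csv(
    "twitter-2010.csv",
    delimiter=" ",
    names=["src", "dst"],
    dtype=["int32", "int32"],
)
```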
Besides, we found that the RAPIDS docs say the needed library is imported as `import dask_cugraph.pagerank as dcg`, not `import cugraph.dask.pagerank as dcg` as in this example, yet we can't find dask_cugraph in the Anaconda repo. Why?
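For context, this is the call pattern the example uses (a minimal sketch; the pagerank argument names and values are assumptions we have not verified against the 0.12 API):

```python
# Import as used in the notebooks-contrib example
# (not dask_cugraph as in the docs)
import cugraph.dask.pagerank as dcg

# e_list is the dask_cudf edge-list DataFrame read above;
# alpha and max_iter values are illustrative assumptions.
pr = dcg.pagerank(e_list, alpha=0.85, max_iter=50)
```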
Our environment: RAPIDS 0.12.0, CUDA 10.1, CentOS 7.6, Python 3.7.
Could you please give us some tips on this issue?
Sincerely.