Why does indexUpdates take so much time? #1529
Replies: 2 comments
-
@Liuhaoxian you should ask such questions in the Google group: https://groups.google.com/forum/#!forum/janusgraph-users This isn't the best place to get answers; developers use GitHub issues only for bug tracking and feature requests. It is also hard to know what your problem is with so little detail.

What I can recommend is to play with the properties and the batch size. Try enabling batch loading and committing each transaction with about 10,000 entities, for example, and compare the resulting performance. You also didn't say how large the entities you are uploading are or how many properties each entity has. In my experience, with batch loading enabled and 10k entities per transaction across about 5 threads, I load about 9,000-10,000 entities per second, though in my case I am using es+scylla.

It is hard to tell what your exact concern is. Try to provide as much information about your case as possible and ask the question in the Google group.
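In case it helps, here is a minimal sketch of the kind of loading loop I mean, assuming the graph is opened from a properties file with batch loading enabled; the vertex label, property key, and the 10,000-entity commit interval are only illustrative:

```java
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.JanusGraphTransaction;

public class BatchLoader {
    public static void main(String[] args) {
        // Open the graph; storage.batch-loading=true should be set in the
        // properties file to relax consistency checks during bulk imports.
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-hbase-es.properties");

        final int batchSize = 10_000;          // commit every ~10k entities
        JanusGraphTransaction tx = graph.newTransaction();
        try {
            for (int i = 0; i < 1_000_000; i++) {
                // Illustrative entity: one vertex with a single property.
                tx.addVertex("row").property("rowId", i);

                if ((i + 1) % batchSize == 0) {
                    tx.commit();               // flush this batch to the backends
                    tx = graph.newTransaction();
                }
            }
            tx.commit();                       // commit the final partial batch
        } finally {
            if (tx.isOpen()) {
                tx.rollback();
            }
            graph.close();
        }
    }
}
```

Committing larger batches amortizes the per-transaction write and index-update overhead, which is usually where prepareCommit time goes during bulk loads.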
-
Thank you, sir. I have imported 500 tables, and each table contains about 20 columns. Each thread commits one table entity at a time, and the size of addedRelations is about 600 per transaction.
-
Hi everyone, I have recently been using JanusGraph to import entities, but the throughput is only about 35/s, and we find that indexUpdates in prepareCommit takes too much time.
The details:
PC: 4vCPU+16GB
Backend: es+hbase
Import: 8 threads importing entities.
Could anyone give me some suggestions?
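For reference, a minimal sketch of the kind of properties file an es+hbase setup like this might use, with the batch loading suggested in the reply above enabled; the hostnames and the larger ID block size are placeholders and assumptions, not taken from the actual setup:

```properties
# Storage backend: HBase (hostname is a placeholder)
storage.backend=hbase
storage.hostname=hbase-host

# Mixed index backend: Elasticsearch (hostname is a placeholder)
index.search.backend=elasticsearch
index.search.hostname=es-host

# Relax consistency checks for bulk imports, as suggested in the reply above
storage.batch-loading=true

# Larger ID blocks reduce ID-allocation round trips during heavy parallel writes
ids.block-size=1000000
```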