I appreciate your great work, and I have some questions.
I want to train the model on my custom data.
Here are my steps:
I captured pictures while moving around a stationary object with an iPhone 14.
I used foreground masks in COLMAP's feature extraction step to restrict the sparse point cloud to the foreground object.
I ran bundle adjustment to refine the focal length and principal point.
I applied bitwise_and between each image and its mask, so the inputs are also masked images (background removed).
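The masking step above can be sanity-checked without OpenCV; this is a minimal sketch assuming an HxW mask where nonzero means foreground, mirroring cv2.bitwise_and(img, img, mask=mask):

```python
import numpy as np

def apply_mask(img: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out background pixels of an HxWx3 uint8 image.

    `mask` is HxW with nonzero = foreground; for a binary mask this is
    equivalent to cv2.bitwise_and(img, img, mask=mask).
    """
    return np.where(mask[..., None] > 0, img, np.uint8(0))
```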
I used colmap2mvsnet to create the cam.txt and pair.txt files.
For the ground-truth depth maps, I used depth maps rendered by Agisoft Metashape (grayscale images in [0, 255]); I then used MinMaxScaler from sklearn to rescale each one to [depth_min, depth_max] per image, taken from the corresponding cam.txt file.
[I wanted to use the geometric depth map files from COLMAP, but their depth values vary over a large range, from negative to positive, and I don't know whether I should use that range or the depth_min and depth_max from the cam.txt file.]
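The per-image rescaling described above can be sketched as follows. Note the big assumption here: that the 8-bit Metashape render is linear in depth. If Metashape exported inverse depth or per-image-normalized depth, this mapping would be wrong, so that is worth verifying first:

```python
import numpy as np

def gray_to_depth(gray: np.ndarray, depth_min: float, depth_max: float) -> np.ndarray:
    """Linearly map an 8-bit rendered depth image in [0, 255] to
    [depth_min, depth_max] taken from the per-view cam.txt.

    Assumption: gray values are linear in depth. If the render encodes
    inverse depth or a per-image normalization, this mapping is invalid.
    """
    g = gray.astype(np.float32) / 255.0
    return depth_min + g * (depth_max - depth_min)
```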
Are these steps correct? I tried to train the network; the loss decreases, but not by much. And after saving a checkpoint, when I evaluate, I don't get a good reconstructed output.
Can you help me figure out how to get a good result for my problem?
Thank you so much.
Based on your description, I am curious to know if the sparse reconstruction process in COLMAP was successful. Could you possibly share the sparse point cloud and your input images for further analysis?
I did two experiments: 1) without applying the mask, COLMAP found points on the background as well during sparse reconstruction; 2) with the mask applied in COLMAP's feature extraction step, COLMAP found sparse points only on the object.
Likewise, the points3D.bin file is smaller in the second case, and if I plot those points via matplotlib, they are also limited to my object.
My question is about depth_gt, depth_min, and depth_max: how should they be specified correctly?
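For checking the sparse cloud programmatically, points3D.bin can be parsed directly. This is a minimal sketch following the binary layout used by COLMAP's read_write_model.py script; for real use, the official script is the safer choice:

```python
import struct
import numpy as np

def read_points3d_bin(path: str) -> np.ndarray:
    """Read XYZ coordinates from a COLMAP points3D.bin file.

    Layout per point (little-endian): point3D_id (uint64), xyz (3 x float64),
    rgb (3 x uint8), reprojection error (float64), track length (uint64),
    then the track as (image_id, point2D_idx) uint32 pairs.
    """
    xyz = []
    with open(path, "rb") as f:
        (num_points,) = struct.unpack("<Q", f.read(8))
        for _ in range(num_points):
            rec = struct.unpack("<QdddBBBd", f.read(43))
            xyz.append(rec[1:4])
            (track_len,) = struct.unpack("<Q", f.read(8))
            f.read(8 * track_len)  # skip the track elements
    return np.asarray(xyz, dtype=np.float64)
```

The returned (N, 3) array can then be scatter-plotted with matplotlib to confirm the points lie only on the object.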