question about training on my custom dataset #6

Open
fariba87 opened this issue Jun 6, 2024 · 3 comments

Comments

fariba87 commented Jun 6, 2024

I appreciate your great work, and I have some questions.
I want to train the model on my custom data. Here are my steps:

  1. I captured some pictures with an iPhone 14 while moving around a stationary object.
  2. I use foreground masks in COLMAP's feature extraction step to restrict the sparse point cloud to the foreground object.
  3. I run bundle adjustment with refinement of the focal length and principal point.
  4. I apply bitwise_and between each image and its mask, so the inputs are masked images (without background).
  5. I use colmap2mvsnet to create the cam.txt and pair.txt files.
  6. For the ground-truth depth maps, I use depth maps rendered by Agisoft Metashape (gray images in [0, 255]) and then rescale them with sklearn's MinMaxScaler to [depth_min, depth_max] per image (taken from the cam.txt files). A rough sketch of steps 4 and 6 follows at the end of this comment.
  • [I wanted to use the geometric depth-map files from COLMAP instead, but their depth range varies from negative to positive values over a large range of numbers, and I don't know whether I should use that depth range or the depth_min/depth_max from the cam.txt file.]

Are these steps correct? I tried to train the network; the loss decreases, but not by much. And after saving a checkpoint, when I evaluate, I don't get a good reconstructed output.
Can you help me figure out how to get a good result for my problem?
Thank you so much.
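
For reference, here is roughly what steps 4 and 6 look like in code (a minimal sketch; the file names and the depth_min/depth_max values are placeholders taken from the matching cam.txt):

```python
import cv2
import numpy as np

def mask_image(image_path, mask_path):
    """Step 4: zero out the background using a binary foreground mask."""
    img = cv2.imread(image_path)                        # BGR image
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)  # 0 = background, 255 = foreground
    return cv2.bitwise_and(img, img, mask=mask)

def rescale_rendered_depth(depth_png_path, depth_min, depth_max):
    """Step 6: map an 8-bit depth render from Metashape into [depth_min, depth_max]."""
    d8 = cv2.imread(depth_png_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    # Per-image linear min-max mapping, i.e. what MinMaxScaler with
    # feature_range=(depth_min, depth_max) does on the flattened image.
    return depth_min + (d8 - d8.min()) / (d8.max() - d8.min()) * (depth_max - depth_min)

# Placeholder file names; depth_min/depth_max come from the corresponding cam.txt.
masked = mask_image("images/00000000.jpg", "masks/00000000.png")
depth_gt = rescale_rendered_depth("metashape_depth/00000000.png", depth_min=4.2, depth_max=9.4)
```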
KaiqiangXiong (Owner) commented

Based on your description, I am curious to know if the sparse reconstruction process in COLMAP was successful. Could you possibly share the sparse point cloud and your input images for further analysis?

fariba87 (Author) commented Jun 8, 2024

> Based on your description, I am curious to know if the sparse reconstruction process in COLMAP was successful. Could you possibly share the sparse point cloud and your input images for further analysis?

I did two experiments: 1) without applying the mask, COLMAP also found points on the background during sparse reconstruction; 2) with the mask applied in COLMAP's feature extraction step, COLMAP only found sparse points on the object.
Likewise, I can see that the points3D.bin file is smaller in the latter case. If I plot those points via matplotlib, they are also limited to my object (roughly as in the sketch below).
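
This is roughly how I plot them (a minimal sketch, assuming pycolmap is available; the model path is a placeholder):

```python
import matplotlib.pyplot as plt
import numpy as np
import pycolmap  # assumed to be installed; COLMAP's read_write_model.py would also work

# Load the sparse model (cameras.bin, images.bin, points3D.bin); "sparse/0" is a placeholder path.
rec = pycolmap.Reconstruction("sparse/0")

# Collect the 3D coordinates of every sparse point into an (N, 3) array.
xyz = np.array([p.xyz for p in rec.points3D.values()])

# With the masks enabled, the scatter plot should cover only the foreground object.
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2], s=1)
plt.show()
```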
My question is regarding depth_gt, depth_min, and depth_max: how should they be specified correctly?

KaiqiangXiong (Owner) commented

Sorry, I haven't tried Agisoft Metashape before. Maybe it's better to use the depth_min and depth_max from the cam.txt files generated from COLMAP.
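
For reference, colmap2mvsnet writes the depth range on the last line of each cam.txt, so reading it back looks roughly like this (a minimal sketch, assuming the usual MVSNet-style cam.txt layout; the 192-hypothesis fallback is just an assumption):

```python
def read_depth_range(cam_txt_path, num_depths=192):
    """Read depth_min / depth_max from an MVSNet-style cam.txt.

    The last non-empty line is usually either
        DEPTH_MIN DEPTH_INTERVAL
    or  DEPTH_MIN DEPTH_INTERVAL DEPTH_NUM DEPTH_MAX
    depending on the colmap2mvsnet version.
    """
    with open(cam_txt_path) as f:
        lines = [line.strip() for line in f if line.strip()]
    vals = [float(v) for v in lines[-1].split()]
    depth_min, depth_interval = vals[0], vals[1]
    # If depth_max is not stored explicitly, derive it from the number of depth hypotheses.
    depth_max = vals[3] if len(vals) >= 4 else depth_min + depth_interval * num_depths
    return depth_min, depth_max
```

Whatever ground-truth depth map you use should then be expressed in the same COLMAP-scaled units, so that valid pixels fall inside this [depth_min, depth_max] range.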
