
Bad results of Spin-Nerf dataset | INF and NAN in the depth estimation #25

Open
jingwu2121 opened this issue Jan 2, 2025 · 2 comments

Comments


jingwu2121 commented Jan 2, 2025

Hi, thank you for your great work. I am trying to reproduce the results, but I have run into a problem. After stage 1, the masked GS contains no Gaussians in the masked region, so the rendered depth is 0 in those regions (the bag region in the figure).

[screenshots: stage-1 masked Gaussians and the rendered depth, showing zero values in the bag region]

This leads to INF in the disparity and finally NaN in the depth estimation.

So I changed these 0 values in the rendered depth to a small number (1e-6), which produces very large values in the same region of the disparity. However, in https://github.com/ali-vilab/Infusion/blob/main/depth_inpainting/inference/depth_inpainting_pipeline_half.py?plain=1#L311, the disparity is normalized using the max and min values obtained from the unmasked region, so after normalization there are still extremely large values in the disparity, which still cause NaN in the final results.
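(For reference, one way to avoid the INF/NaN chain described above is to compute and normalize the disparity only over pixels with valid depth, instead of substituting a tiny epsilon into the holes. This is not the repository's code, just a minimal NumPy sketch of that idea; the function name `safe_disparity` is made up for illustration.)

```python
import numpy as np

def safe_disparity(depth: np.ndarray, eps: float = 1e-6):
    """Convert depth to normalized disparity, skipping zero-depth holes.

    Pixels with depth <= eps (e.g. regions with no Gaussians) are left at 0
    and excluded from the min/max normalization, so they can never blow up
    into INF or NaN.
    """
    valid = depth > eps                      # holes stay False
    disparity = np.zeros_like(depth, dtype=np.float64)
    disparity[valid] = 1.0 / depth[valid]    # no division by zero possible

    if valid.any():
        d_min = disparity[valid].min()
        d_max = disparity[valid].max()
        # normalize to [0, 1] using statistics from valid pixels only
        disparity[valid] = (disparity[valid] - d_min) / max(d_max - d_min, eps)

    return disparity, valid
```

The returned `valid` mask can then be passed along so downstream steps (e.g. the loss or the inpainting conditioning) know which pixels carry real depth.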

Could you please help me with this? Any ideas on how to reach results similar to those shown in the paper?


jingwu2121 commented Jan 2, 2025

I also tried changing the 0 values in the rendered depth to 1. The depth estimation then finally outputs something instead of a pure black image, but the estimated depth information is wrong; see the figure below.

[screenshot: incorrect depth estimated when zero-depth pixels are replaced with 1]

@jingwu2121 jingwu2121 changed the title INF and NAN in the depth estimation Bad results of Spin-Nerf dataset | INF and NAN in the depth estimation Jan 2, 2025
Johanan528 (Collaborator) commented

If you intend to use Infusion as a depth completion model, we recommend enabling color augmentation during training. We have observed that this helps reduce the number of holes. In fact, you might notice that the reconstructed point cloud appears better than what is visible on the depth map. Additionally, we suggest trying DepthLab, as this model allows for inputs with a depth value of 0. It is also important to note that you should expand the mask slightly to prevent erroneous depth values at the edges of the mask from being treated as ground truth (GT).
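(The mask-expansion suggestion above can be done with a simple binary dilation. The sketch below is a dependency-free NumPy version using 4-connected neighbors; in practice `cv2.dilate` or `scipy.ndimage.binary_dilation` would do the same job. The function name and iteration count are illustrative, not from the repository.)

```python
import numpy as np

def dilate_mask(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Grow a binary inpainting mask by one pixel per iteration (4-connected).

    Expanding the mask keeps unreliable depth values at the mask border from
    being treated as ground truth during depth completion.
    """
    m = mask.astype(bool)
    for _ in range(iterations):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]   # shift down
        grown[:-1, :] |= m[1:, :]   # shift up
        grown[:, 1:] |= m[:, :-1]   # shift right
        grown[:, :-1] |= m[:, 1:]   # shift left
        m = grown
    return m
```

A few iterations (or a small structuring element in OpenCV) are usually enough to cover the thin ring of noisy depth around the mask edge.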
