Hi, thank you for your great work. I am trying to reproduce the results, but I have run into a problem. After stage 1, the obtained masked GS has no Gaussian spheres in the masked region, so the rendered depth is 0 in those areas (the bag region in the figure).
This leads to INF values in the disparity and finally NaN values in the depth estimation.
I also tried changing the 0s in the rendered depth to a small number, 1. The depth estimation then outputs something instead of a pure black image, but in this case the estimated depth information is wrong; see the figure below.
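The failure mode above can be reproduced in a few lines: a zero in the rendered depth becomes INF in the disparity, which then poisons any downstream computation with NaN. A minimal sketch (the arrays and the validity-mask workaround are illustrative, not the repo's code):

```python
import numpy as np

# Toy rendered depth: the masked (inpainted) region has no Gaussians,
# so the rasterizer returns exactly 0 depth there.
depth = np.array([[2.0, 4.0],
                  [0.0, 8.0]], dtype=np.float32)

with np.errstate(divide="ignore"):
    disparity = 1.0 / depth  # 1/0 -> inf in the hole

print(np.isinf(disparity).any())  # inf appears, and later turns into NaN

# Instead of substituting a tiny epsilon (which just yields a huge
# disparity), keep an explicit validity mask and only invert valid pixels.
valid = depth > 0
safe_disparity = np.zeros_like(depth)
safe_disparity[valid] = 1.0 / depth[valid]
print(np.isfinite(safe_disparity).all())  # no inf/NaN left
```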
jingwu2121 changed the title from "INF and NAN in the depth estimation" to "Bad results of Spin-Nerf dataset | INF and NAN in the depth estimation" on Jan 2, 2025.
If you intend to use Infusion as a depth completion model, we recommend enabling color augmentation during training; we have observed that this helps reduce the number of holes. In fact, you may notice that the reconstructed point cloud looks better than what is visible in the depth map. Additionally, we suggest trying DepthLab, as that model accepts inputs with a depth value of 0. It is also important to expand the mask slightly so that erroneous depth values at the edges of the mask are not treated as ground truth (GT).
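The mask expansion suggested above is a simple morphological dilation. A minimal sketch, assuming a binary NumPy mask where True marks the inpainted region (the `expand_mask` helper and the dilation radius are illustrative, not part of the repo):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def expand_mask(mask: np.ndarray, pixels: int = 5) -> np.ndarray:
    """Dilate a binary inpainting mask by `pixels` so that unreliable
    depth values along the mask boundary are excluded from the pixels
    treated as ground truth."""
    structure = np.ones((3, 3), dtype=bool)  # 8-connected neighborhood
    return binary_dilation(mask.astype(bool), structure=structure,
                           iterations=pixels)

# A single masked pixel grows into a 3x3 block after one iteration.
mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True
grown = expand_mask(mask, pixels=1)
print(int(grown.sum()))  # 9
```

The dilation radius is a trade-off: too small and noisy edge depths leak into the supervision, too large and valid depth near the hole is discarded.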
So I changed these 0s in the rendered depth to a small number, 1e-6, which produces very large values in the same region of the disparity map. However, in https://github.com/ali-vilab/Infusion/blob/main/depth_inpainting/inference/depth_inpainting_pipeline_half.py?plain=1#L311, the disparity is normalized using the max_value and min_value obtained from the unmasked region. So even after normalization, there are still extremely large values in the disparity, which still cause NaN in the final results. Could you please kindly help me with this? Any ideas on how to reach results similar to those shown in the paper?
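One way around the normalization problem described above is to compute min_value/max_value from the known (unmasked) pixels only and then clip the whole disparity map into that range, so that any epsilon-filled pixel cannot exceed the normalization bounds. A sketch under that assumption (not the pipeline's actual code; here `mask` is True where the depth was invalid/inpainted):

```python
import numpy as np

def normalize_disparity(disp: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Normalize disparity to [0, 1] using statistics from the known
    (unmasked) region only, clipping the masked region into that range
    so filled-in values cannot blow past the normalization bounds."""
    known = disp[~mask]
    min_value, max_value = known.min(), known.max()
    clipped = np.clip(disp, min_value, max_value)
    return (clipped - min_value) / (max_value - min_value)

disp = np.array([[0.5, 1.0],
                 [1e6, 2.0]])          # 1e6 came from a 1e-6 depth fill
mask = np.array([[False, False],
                 [True,  False]])
out = normalize_disparity(disp, mask)
print(out.max() <= 1.0)  # the huge filled value is clipped into range
```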