
Failed to build Superpoint engine. #174

Open
antithing opened this issue Nov 29, 2024 · 25 comments

@antithing

Hi, and thank you for this code! I am compiling on Windows with CUDA 12.1, TensorRT 8.6, and an RTX 4090 GPU.

When running the test_features application, I get "Error in SuperPoint building", triggered here:

std::cout << "Error in SuperPoint building" << std::endl;

Digging in further, the failure occurs at:

auto builder = TensorRTUniquePtr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(gLogger.getTRTLogger()));

What could this problem be?

I have tried upgrading to TensorRT 10, but I get a lot of compile errors.
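In case it helps to narrow this down, here is a minimal standalone check, a sketch independent of this repo (the StdoutLogger class and the program itself are not AirSLAM code), that verifies the TensorRT library found at run time matches the headers and that the builder can be created at all:

#include <NvInfer.h>
#include <iostream>

// Minimal logger: forward TensorRT warnings and errors to stdout.
class StdoutLogger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
  }
};

int main() {
  std::cout << "TensorRT header version:  " << NV_TENSORRT_VERSION << std::endl;
  std::cout << "TensorRT library version: " << getInferLibVersion() << std::endl;

  StdoutLogger logger;
  nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
  if (!builder) {
    std::cout << "createInferBuilder returned null" << std::endl;
    return 1;
  }
  std::cout << "Builder created successfully" << std::endl;
  delete builder;  // TensorRT 8.x interface objects can be released with delete
  return 0;
}

If the two version numbers disagree, the Windows PATH may be picking up a different nvinfer DLL than the one you compiled against, which is one possible cause of a failure this early.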

Thanks!

@xukuanHIT
Collaborator

Hello, I haven't developed programs on Windows before. The provided source code relies on APIs from libraries used on Ubuntu, and I'm not sure if they work the same way on Windows.

@antithing
Author

Hi, it actually seems to be related to the 40-series GPUs (I am on a 4090). I think these cards require TensorRT 10.

Is that possible?

@xukuanHIT
Collaborator

I have tested it on a 4080, and TensorRT 8.6 works properly.

@antithing
Author

Strange. I wonder if it's a Windows thing. With AirVO I had no issues building on Windows.

@antithing
Author

Okay @xukuanHIT, I rebuilt everything and the engine is built! Yay!

Now I have a new crash.

Running the test_features application, the code crashes here:

host_data_buffer[row_shift + col] = float(ptr[0]) / 255.0;

If I comment out that line, the function completes; if I leave it in, it crashes. I am using the EuRoC dataset, everything default. What might be happening here?
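For reference, here is a generic hedged sketch of the checks that usually catch a crash on a line like this (this is not the repo's actual loop; only host_data_buffer, row_shift, col, and ptr appear in the code above, everything else is illustrative). The usual culprits are an empty cv::Mat (e.g. a wrong dataset path) or a buffer allocated for a different image resolution:

#include <opencv2/core.hpp>
#include <cassert>
#include <vector>

// Copy a grayscale image into a float buffer, normalizing to [0, 1],
// with assertions on the usual failure modes.
void CopyImageToHostBuffer(const cv::Mat& image, std::vector<float>& host_data_buffer) {
  assert(!image.empty() && image.type() == CV_8UC1);  // image actually loaded, grayscale
  assert(host_data_buffer.size() >= static_cast<size_t>(image.rows) * image.cols);
  for (int row = 0; row < image.rows; ++row) {
    const uchar* ptr = image.ptr<uchar>(row);
    const size_t row_shift = static_cast<size_t>(row) * image.cols;
    for (int col = 0; col < image.cols; ++col) {
      host_data_buffer[row_shift + col] = float(ptr[col]) / 255.0f;
    }
  }
}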

Thanks again!

@antithing
Author

Ah, there was a problem with my Windows conversion. I have test_features running now! Next step: visual odometry!

@antithing
Author

Hi again, I have visual odometry running great; however, when I run map refinement or relocalization, I get a crash while loading the vocabulary file.


void Database::LoadVocabulary(const std::string voc_path){
  SuperpointVocabulary voc_load;
  std::cout << "loading voc: " << voc_path << std::endl;
  std::ifstream ifs(voc_path, std::ios::binary);
  // crashes below
  boost::archive::binary_iarchive ia(ifs);
  ia >> voc_load;

  _voc = std::make_shared<SuperpointVocabulary>(voc_load);

  if(_inverted_file.empty()){
    _inverted_file.resize(_voc->size());
  }
}
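One way to narrow this down (a hedged sketch reusing only the names already shown above; TryLoadVocabulary itself is not part of the repo): confirm the stream actually opened, and catch boost::archive::archive_exception so a header/format mismatch produces a readable message instead of an unhandled exception:

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/archive_exception.hpp>
#include <fstream>
#include <iostream>
#include <string>

bool TryLoadVocabulary(const std::string& voc_path, SuperpointVocabulary& voc) {
  std::ifstream ifs(voc_path, std::ios::binary);
  if (!ifs.is_open()) {
    std::cout << "cannot open " << voc_path << std::endl;
    return false;
  }
  try {
    boost::archive::binary_iarchive ia(ifs);  // reads and validates the archive header
    ia >> voc;
  } catch (const boost::archive::archive_exception& e) {
    std::cout << "boost archive error: " << e.what() << std::endl;
    return false;
  }
  return true;
}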

What version of Boost should I be using? Can you think of anything that might cause this?

Thank you.

@xukuanHIT
Collaborator

@antithing Hi, we use Boost 1.71.0 on Ubuntu. Can you confirm that the dictionary path is correct?

@antithing
Author

antithing commented Dec 13, 2024

Hi, yes, the path is correct; I am actually hardcoding it just before the boost::archive::binary_iarchive ia(ifs); line to debug.
It seems to be an issue with the actual bin file. Could this be a Linux-to-Windows thing?

Is the DBoW SuperPoint training code included here? Perhaps I can try training the vocabulary again.

@xukuanHIT
Collaborator

@antithing Hi, you can refer to and modify this code to train the dictionary.
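For reference, a hedged sketch of what DBoW2-style vocabulary training generally looks like, assuming SuperpointVocabulary exposes the usual DBoW2 TemplatedVocabulary interface (constructor taking branching factor, depth, weighting and scoring, plus create() and save()). SuperpointDescriptor and ExtractSuperpointDescriptors below are placeholders, not repo code:

#include <string>
#include <vector>

// Placeholder for the real SuperPoint descriptor type (a 256-D float vector here).
using SuperpointDescriptor = std::vector<float>;

// Hypothetical helper: run the SuperPoint engine on one image and return its descriptors.
std::vector<SuperpointDescriptor> ExtractSuperpointDescriptors(const std::string& image_path);

void TrainVocabulary(const std::vector<std::string>& image_paths, const std::string& out_path) {
  // One descriptor set per training image.
  std::vector<std::vector<SuperpointDescriptor>> training_features;
  for (const std::string& path : image_paths) {
    training_features.push_back(ExtractSuperpointDescriptors(path));
  }

  // k = 10 branches, L = 5 levels, TF-IDF weighting, L1 scoring: common DBoW2 defaults.
  SuperpointVocabulary voc(10, 5, DBoW2::TF_IDF, DBoW2::L1_NORM);
  voc.create(training_features);
  voc.save(out_path);
}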

@antithing
Author

Thank you! What dataset did you train your dictionary on?

@xukuanHIT
Collaborator

@antithing Hi, please refer to Section VI-A of the paper.

@antithing
Author

Hi, I have looked into this more, and it looks like Boost binary archives are not portable between Linux and Windows.
I will look at creating a Windows file, but I was wondering if you have a txt version of the created DBoW2 vocabulary?
Txt would be portable to Windows, and I could then serialize it to bin myself.

If so, would you be able to upload it?

Thank you!

@xukuanHIT
Collaborator

@antithing I have uploaded a txt version of the vocabulary. However, in the offline optimization stage, AirSLAM builds a scene-specific binary vocabulary, and I am not sure whether that will work on Windows.

@antithing
Author

I am still struggling with this. Now it's the map loading: it saves fine from visual odometry, but I can't load either txt or binary without a crash! Do you have any thoughts on where I could look to debug this? Thank you!

@antithing
Author

antithing commented Jan 5, 2025

@xukuanHIT

Aha! By adding the BOOST_SERIALIZATION_SHARED_PTR(Map) macro and copying the _map data to a new Map object before I save it, I can now load the map bin!

I am still stuck on loading the voc file, however; I get this error:

incompatible native format - size of long

Are you able to share the exact code to create this file?
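For context, that error is Boost reporting that binary archives are not portable: the archive header records the native size of long, which is 8 bytes on Linux x86-64 but 4 bytes on 64-bit Windows, so a binary archive written on one cannot be read on the other. A hedged sketch of a one-off converter to run on the Linux side (assuming the SuperpointVocabulary type and its serialization code are available), re-saving the vocabulary as a text archive, which is portable and can then be read on Windows with boost::archive::text_iarchive:

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <fstream>
#include <string>

// Run this on the platform that wrote the binary vocabulary (Linux here).
void ConvertVocToTextArchive(const std::string& bin_path, const std::string& txt_path) {
  SuperpointVocabulary voc;
  {
    std::ifstream ifs(bin_path, std::ios::binary);
    boost::archive::binary_iarchive ia(ifs);
    ia >> voc;  // fine here: same platform that created the archive
  }
  {
    std::ofstream ofs(txt_path);
    boost::archive::text_oarchive oa(ofs);
    oa << voc;  // text archives are cross-platform
  }
}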

Thank you!

@xukuanHIT
Collaborator

Sorry, I am currently busy with some tasks and may be unable to re-organize the code in the short term. Are you still unable to load the txt vocabulary?

@antithing
Author

The txt voc gives the same error, unfortunately. I am building AirSLAM inside Docker on WSL, so I may be able to use boost::portable_binary_archive, but I am struggling with ROS as I have never used it before. Will keep testing!

@antithing
Author

@xukuanHIT Sorry to bother you again; I have been fighting with this for weeks. Are you able to share the vocabulary training code you used, so I can just build a new voc file from scratch?

Thank you very much!

@antithing
Author

Aha! After all that digging, I was able to save the vocabulary on Linux using the built-in DBoW:

voc_load.save(txt_path);

which writes to cv::FileStorage, and load it on Windows using:


SuperpointVocabulary voc_load;
voc_load.load(voc_path);

_voc = std::make_shared<SuperpointVocabulary>(voc_load);

if (_inverted_file.empty()) {
  _inverted_file.resize(_voc->size());
}
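(For anyone hitting the same issue: DBoW2's built-in save()/load() used above go through cv::FileStorage, i.e. a YAML/XML text file, which is why this round-trips cleanly between Linux and Windows, unlike the Boost binary archive.)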

@antithing
Author

antithing commented Jan 8, 2025

One more thing... :)

When I run visual odometry on EuRoC, I get a frame time of 25-30 ms (aside from keyframe creation).

Once I refine the map and run the relocalization application, I see a frame time of around 50 ms.

Relocalization should be faster than VO, right?

Is there a way to run very fast localization in the optimized map?

Thanks!

@xukuanHIT
Collaborator

xukuanHIT commented Jan 9, 2025

Well done!
Relocalization is usually slower than VO. In VO, for non-keyframes, we only detect and track keypoints in the left image, so for most frames only the features of a single image are processed. In relocalization, by contrast, we detect both point and line features of the image and retrieve three keyframes for feature matching.

If you are particularly concerned about the speed of relocalization, you can try the following methods to accelerate it:

  1. Use only keypoints for relocalization. Since TensorRT currently does not support some operators in PLNet, the acceleration for PLNet is much less effective than for SuperPoint, so using only point features for relocalization is much faster. To achieve this, you need to modify three places (taking the EuRoC dataset as an example):
  • Change "use_superpoint" in this line to "1".
  • Change this line to "feature_detector->Detect(image_rect, features);" to detect only keypoints.
  • Comment out the code from this line to that line. (optional)

We provide an ablation study on this; see Section VII.E and Table 3 of the paper.

  2. Early stopping during feature matching. Add this line of code (a conceptual sketch of the idea follows this list).
  3. Reduce the number of retrieved keyframes by lowering the value on this line, but this may significantly reduce the recall rate.
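For reference, a conceptual sketch of the early-stopping idea in point 2 (this is not the AirSLAM code; all names are illustrative): stop scanning match candidates once enough good correspondences for a reliable pose estimate have been collected:

#include <cstddef>
#include <vector>

struct Match {
  int query_idx;
  int train_idx;
  float distance;
};

// Accept candidate matches until `enough` good ones are found, then stop early
// instead of scanning every remaining candidate.
std::vector<Match> MatchWithEarlyStop(const std::vector<Match>& candidates,
                                      float good_distance_thr, std::size_t enough) {
  std::vector<Match> good;
  for (const Match& m : candidates) {
    if (m.distance < good_distance_thr) {
      good.push_back(m);
      if (good.size() >= enough) break;  // early stop
    }
  }
  return good;
}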

@antithing
Author

Thank you! I will take a look at those. What I want is to run odometry in localization mode, so tracking runs on an existing map without adding any new keyframes.

@antithing
Author

antithing commented Jan 9, 2025

Also, one more question :)

I am now trying to run live on my own camera. I have a factory-calibrated stereo camera that gives me the following data:

R1
{
0.99994695186615,-0.0102110998705029,0.00133434741292149,0.0102130826562643,0.999946713447571,-0.00148762774188071,-0.00131908606272191,0.00150117673911154,0.999997973442078
}
T1
{
-0.00536647392436862,0.000118494710477535,0.00380634469911456
}
Intrinsic1 type
{
3
}
K1
{
574.008239746094,574.008239746094,636.957580566406,407.77783203125,636.047668457031,407.978424072266,0.587792456150055,1.15076863765717,1280,800,0,0,0
}
R2
{
0.999991655349731,0.000332564493874088,-0.00407201470807195,-0.000381104851840064,0.999928832054138,-0.0119255064055324,0.00406775902956724,0.0119269592687488,0.999920606613159
}
T2
{
0.0744005963206291,0.00029618720873259,0.00490261800587177
}
Intrinsic2 type
{
3
}
K2
{
570.449096679688,570.449096679688,629.68212890625,404.168212890625,629.231506347656,404.288238525391,0.573584794998169,1.18674337863922,1280,800,0,0,0
}

I have added this to a camera config like so:

%YAML:1.0

image_height: 800
image_width: 1280
use_imu: 0

depth_lower_thr: 0.1
depth_upper_thr: 2.0
max_y_diff: 5

# Calibration
distortion_type: 1  # 0 for undistorted inputs, 1 for radial-tangential: [k1, k2, p1, p2, k3], 2 for equidistant/fisheye:  [k1, k2, k3, k4, 0].
cam0:
  intrinsics: [574.008239746094, 574.008239746094, 636.957580566406,407.77783203125] # fx, fy, cx, cy
  distortion_coeffs: [0.587792456150055,1.15076863765717,0,0,0]
  T_type: 0           # 0 for Euroc format, the following T is Tbc. 1 for Kalibr format, the following T is Tcb
  T: 
  - [0.99994695186615,-0.0102110998705029,0.00133434741292149, -0.00536647392436862]
  - [0.0102130826562643,0.999946713447571,-0.00148762774188071, 0.000118494710477535]
  - [-0.00131908606272191,0.00150117673911154,0.999997973442078, 0.00380634469911456]
  - [0.0, 0.0, 0.0, 1.0]
cam1:
  intrinsics: [570.449096679688, 570.449096679688, 629.68212890625,404.168212890625] # fx, fy, cx, cy
  distortion_coeffs: [0.573584794998169,1.18674337863922,0,0,0]
  T_type: 0           
  T: 
  - [0.999991655349731,0.000332564493874088,-0.00407201470807195, 0.0744005963206291]
  - [-0.000381104851840064,0.999928832054138,-0.0119255064055324, 0.00029618720873259]
  - [0.00406775902956724,0.0119269592687488,0.9999206066131591, 0.00490261800587177]
  - [0.0, 0.0, 0.0, 1.0]

But I think I need a transpose on the T matrix. I get:

good_stereo_point = 0
Not enough stereo points to initialize!

@xukuanHIT
Collaborator

You can refer to this issue #154 to verify the stereo rectification.
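For reference, a minimal OpenCV-only sketch (independent of AirSLAM) that takes the numbers from the config above, derives the cam0-to-cam1 transform assuming each T really is Tbc as the T_type: 0 comment states, and runs cv::stereoRectify so the baseline sign and magnitude can be sanity-checked against the physical camera spacing:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>

int main() {
  // Intrinsics and distortion from the cam0/cam1 entries in the config above.
  cv::Mat K0 = (cv::Mat_<double>(3, 3) << 574.008239746094, 0, 636.957580566406,
                                          0, 574.008239746094, 407.77783203125,
                                          0, 0, 1);
  cv::Mat D0 = (cv::Mat_<double>(1, 5) << 0.587792456150055, 1.15076863765717, 0, 0, 0);
  cv::Mat K1 = (cv::Mat_<double>(3, 3) << 570.449096679688, 0, 629.68212890625,
                                          0, 570.449096679688, 404.168212890625,
                                          0, 0, 1);
  cv::Mat D1 = (cv::Mat_<double>(1, 5) << 0.573584794998169, 1.18674337863922, 0, 0, 0);

  // Tbc for each camera (camera pose in the body frame), copied from the config.
  cv::Mat Tbc0 = (cv::Mat_<double>(4, 4) <<
      0.99994695186615, -0.0102110998705029, 0.00133434741292149, -0.00536647392436862,
      0.0102130826562643, 0.999946713447571, -0.00148762774188071, 0.000118494710477535,
      -0.00131908606272191, 0.00150117673911154, 0.999997973442078, 0.00380634469911456,
      0, 0, 0, 1);
  cv::Mat Tbc1 = (cv::Mat_<double>(4, 4) <<
      0.999991655349731, 0.000332564493874088, -0.00407201470807195, 0.0744005963206291,
      -0.000381104851840064, 0.999928832054138, -0.0119255064055324, 0.00029618720873259,
      0.00406775902956724, 0.0119269592687488, 0.999920606613159, 0.00490261800587177,
      0, 0, 0, 1);

  // Relative transform cam0 -> cam1: T_c1_c0 = Tbc1^-1 * Tbc0.
  cv::Mat T_c1_c0 = Tbc1.inv() * Tbc0;
  cv::Mat R = T_c1_c0(cv::Rect(0, 0, 3, 3)).clone();
  cv::Mat t = T_c1_c0(cv::Rect(3, 0, 1, 3)).clone();
  std::cout << "baseline [m]: " << cv::norm(t) << std::endl;

  cv::Mat R_rect0, R_rect1, P_rect0, P_rect1, Q;
  cv::stereoRectify(K0, D0, K1, D1, cv::Size(1280, 800), R, t,
                    R_rect0, R_rect1, P_rect0, P_rect1, Q, cv::CALIB_ZERO_DISPARITY);
  // For a left/right pair, P_rect1(0,3)/fx is roughly the negative baseline; an
  // unexpected sign or magnitude here usually points at the Tbc vs Tcb convention.
  std::cout << "P_rect1(0,3)/fx = "
            << P_rect1.at<double>(0, 3) / P_rect1.at<double>(0, 0) << std::endl;
  return 0;
}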
