usage: ns-render [-h] {camera-path,interpolate,spiral,dataset}
╭─ options ──────────────────────────────────────────────────────────────────╮
│ -h, --help show this help message and exit │
╰────────────────────────────────────────────────────────────────────────────╯
╭─ subcommands ──────────────────────────────────────────────────────────────╮
│ {camera-path,interpolate,spiral,dataset} │
│ camera-path Render a camera path generated by the viewer or blender │
│ add-on. │
│ interpolate Render a trajectory that interpolates between training │
│ or eval dataset images. │
│ spiral Render a spiral trajectory (often not great). │
│ dataset Render all images in the dataset. │
╰────────────────────────────────────────────────────────────────────────────╯
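Each subcommand takes its own options; append --help to any of them to print its option list, e.g.:

  ns-render camera-path --help
  ns-render dataset --help
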
usage: ns-render camera-path [-h] [CAMERA-PATH OPTIONS]
Render a camera path generated by the viewer or blender add-on.
╭─ options ──────────────────────────────────────────────────────────────────╮
│ -h, --help │
│ show this help message and exit │
│ --load-config PATH │
│ Path to config YAML file. (required) │
│ --output-path PATH │
│ Path to output video file. (default: renders/output.mp4) │
│ --image-format {jpeg,png} │
│ Image format (default: jpeg) │
│ --jpeg-quality INT │
│ JPEG quality (default: 100) │
│ --downscale-factor FLOAT │
│ Scaling factor to apply to the camera image resolution. (default: 1.0) │
│ --eval-num-rays-per-chunk {None}|INT │
│ Specifies number of rays per chunk during eval. If None, use the value │
│ in the config file. (default: None) │
│ --rendered-output-names [STR [STR ...]] │
│ Name of the renderer outputs to use. rgb, depth, etc. concatenates │
│ them along y axis (default: rgb) │
│ --depth-near-plane {None}|FLOAT │
│ Closest depth to consider when using the colormap for depth. If None, │
│ use min value. (default: None) │
│ --depth-far-plane {None}|FLOAT │
│ Furthest depth to consider when using the colormap for depth. If None, │
│ use max value. (default: None) │
│ --render-nearest-camera {True,False} │
│ Whether to render the nearest training camera to the rendered camera. │
│ (default: False) │
│ --check-occlusions {True,False} │
│ If true, checks line-of-sight occlusions when computing camera │
│ distance and rejects cameras not visible to each other (default: │
│ False) │
│ --camera-path-filename PATH │
│ Filename of the camera path to render. (default: camera_path.json) │
│ --output-format {images,video} │
│ How to save output data. (default: video) │
╰────────────────────────────────────────────────────────────────────────────╯
╭─ colormap-options options ─────────────────────────────────────────────────╮
│ Colormap options. │
│ ────────────────────────────────────────────────────────────────────────── │
│ --colormap-options.colormap │
│ {default,turbo,viridis,magma,inferno,cividis,gray,pca} │
│ The colormap to use (default: default) │
│ --colormap-options.normalize {True,False} │
│ Whether to normalize the input tensor image (default: False) │
│ --colormap-options.colormap-min FLOAT │
│ Minimum value for the output colormap (default: 0) │
│ --colormap-options.colormap-max FLOAT │
│ Maximum value for the output colormap (default: 1) │
│ --colormap-options.invert {True,False} │
│ Whether to invert the output colormap (default: False) │
╰────────────────────────────────────────────────────────────────────────────╯
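A minimal camera-path invocation might look like the following; the config and camera-path locations are placeholders for your own trained model's config.yml and an exported camera path:

  ns-render camera-path \
    --load-config outputs/my-scene/nerfacto/config.yml \
    --camera-path-filename camera_path.json \
    --output-path renders/my-scene.mp4
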
usage: ns-render interpolate [-h] [INTERPOLATE OPTIONS]
Render a trajectory that interpolates between training or eval dataset images.
╭─ options ──────────────────────────────────────────────────────────────────╮
│ -h, --help │
│ show this help message and exit │
│ --load-config PATH │
│ Path to config YAML file. (required) │
│ --output-path PATH │
│ Path to output video file. (default: renders/output.mp4) │
│ --image-format {jpeg,png} │
│ Image format (default: jpeg) │
│ --jpeg-quality INT │
│ JPEG quality (default: 100) │
│ --downscale-factor FLOAT │
│ Scaling factor to apply to the camera image resolution. (default: 1.0) │
│ --eval-num-rays-per-chunk {None}|INT │
│ Specifies number of rays per chunk during eval. If None, use the value │
│ in the config file. (default: None) │
│ --rendered-output-names [STR [STR ...]] │
│ Name of the renderer outputs to use. rgb, depth, etc. concatenates │
│ them along y axis (default: rgb) │
│ --depth-near-plane {None}|FLOAT │
│ Closest depth to consider when using the colormap for depth. If None, │
│ use min value. (default: None) │
│ --depth-far-plane {None}|FLOAT │
│ Furthest depth to consider when using the colormap for depth. If None, │
│ use max value. (default: None) │
│ --render-nearest-camera {True,False} │
│ Whether to render the nearest training camera to the rendered camera. │
│ (default: False) │
│ --check-occlusions {True,False} │
│ If true, checks line-of-sight occlusions when computing camera │
│ distance and rejects cameras not visible to each other (default: │
│ False) │
│ --pose-source {eval,train} │
│ Pose source to render. (default: eval) │
│ --interpolation-steps INT │
│ Number of interpolation steps between eval dataset cameras. (default: │
│ 10) │
│ --order-poses {True,False} │
│ Whether to order camera poses by proximity. (default: False) │
│ --frame-rate INT │
│ Frame rate of the output video. (default: 24) │
│ --output-format {images,video} │
│ How to save output data. (default: video) │
╰────────────────────────────────────────────────────────────────────────────╯
╭─ colormap-options options ─────────────────────────────────────────────────╮
│ Colormap options. │
│ ────────────────────────────────────────────────────────────────────────── │
│ --colormap-options.colormap │
│ {default,turbo,viridis,magma,inferno,cividis,gray,pca} │
│ The colormap to use (default: default) │
│ --colormap-options.normalize {True,False} │
│ Whether to normalize the input tensor image (default: False) │
│ --colormap-options.colormap-min FLOAT │
│ Minimum value for the output colormap (default: 0) │
│ --colormap-options.colormap-max FLOAT │
│ Maximum value for the output colormap (default: 1) │
│ --colormap-options.invert {True,False} │
│ Whether to invert the output colormap (default: False) │
╰────────────────────────────────────────────────────────────────────────────╯
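A sketch of an interpolate run, again with a placeholder config path, stepping through the eval cameras and writing a 24 fps video:

  ns-render interpolate \
    --load-config outputs/my-scene/nerfacto/config.yml \
    --pose-source eval \
    --interpolation-steps 10 \
    --frame-rate 24 \
    --output-path renders/interpolated.mp4
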
usage: ns-render spiral [-h] [SPIRAL OPTIONS]
Render a spiral trajectory (often not great).
╭─ options ──────────────────────────────────────────────────────────────────╮
│ -h, --help show this help message and exit │
│ --load-config PATH Path to config YAML file. (required) │
│ --output-path PATH Path to output video file. (default: │
│ renders/output.mp4) │
│ --image-format {jpeg,png} │
│ Image format (default: jpeg) │
│ --jpeg-quality INT JPEG quality (default: 100) │
│ --downscale-factor FLOAT │
│ Scaling factor to apply to the camera image │
│ resolution. (default: 1.0) │
│ --eval-num-rays-per-chunk {None}|INT │
│ Specifies number of rays per chunk during eval. If │
│ None, use the value in the config file. (default: │
│ None) │
│ --rendered-output-names [STR [STR ...]] │
│ Name of the renderer outputs to use. rgb, depth, │
│ etc. concatenates them along y axis (default: rgb) │
│ --depth-near-plane {None}|FLOAT │
│ Closest depth to consider when using the colormap │
│ for depth. If None, use min value. (default: None) │
│ --depth-far-plane {None}|FLOAT │
│ Furthest depth to consider when using the colormap │
│ for depth. If None, use max value. (default: None) │
│ --render-nearest-camera {True,False} │
│ Whether to render the nearest training camera to │
│ the rendered camera. (default: False) │
│ --check-occlusions {True,False} │
│ If true, checks line-of-sight occlusions when │
│ computing camera distance and rejects cameras not │
│ visible to each other (default: False) │
│ --seconds FLOAT How long the video should be. (default: 3.0) │
│ --output-format {images,video} │
│ How to save output data. (default: video) │
│ --frame-rate INT Frame rate of the output video (only for │
│ interpolate trajectory). (default: 24) │
│ --radius FLOAT Radius of the spiral. (default: 0.1) │
╰────────────────────────────────────────────────────────────────────────────╯
╭─ colormap-options options ─────────────────────────────────────────────────╮
│ Colormap options. │
│ ────────────────────────────────────────────────────────────────────────── │
│ --colormap-options.colormap │
│ {default,turbo,viridis,magma,inferno,cividis,gray,pca} │
│ The colormap to use (default: default) │
│ --colormap-options.normalize {True,False} │
│ Whether to normalize the input tensor image │
│ (default: False) │
│ --colormap-options.colormap-min FLOAT │
│ Minimum value for the output colormap (default: 0) │
│ --colormap-options.colormap-max FLOAT │
│ Maximum value for the output colormap (default: 1) │
│ --colormap-options.invert {True,False} │
│ Whether to invert the output colormap (default: │
│ False) │
╰────────────────────────────────────────────────────────────────────────────╯
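A sketch of a spiral render that also requests a depth output (rgb and depth are concatenated along the y axis, as noted above); the config path is a placeholder:

  ns-render spiral \
    --load-config outputs/my-scene/nerfacto/config.yml \
    --rendered-output-names rgb depth \
    --seconds 3.0 \
    --radius 0.1 \
    --output-path renders/spiral.mp4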