diff --git a/docs/_config.yml b/docs/_config.yml
deleted file mode 100644
index 277f1f2..0000000
--- a/docs/_config.yml
+++ /dev/null
@@ -1 +0,0 @@
-theme: jekyll-theme-cayman
diff --git a/docs/index.html b/docs/index.html
new file mode 100644
index 0000000..165dfd2
--- /dev/null
+++ b/docs/index.html
@@ -0,0 +1,171 @@
+ Hannah Schieber
+
+
+

NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing Diverse Intrinsic and Extrinsic Camera Parameters

+
+ +
+ +
+
+
+
+ Hannah Schieber 1,4, Fabian Deuser 2, Bernhard Egger 3, Norbert Oswald 2, and Daniel Roth 4
+
+
+
+
+ 1 Human-Centered Computing and Extended Reality, Friedrich-Alexander University (FAU) Erlangen-Nürnberg, Erlangen, Germany
+
+ 2 Institute for Distributed Intelligent Systems, University of the Bundeswehr Munich, Munich, Germany
+
+ 3 Lehrstuhl für Graphische Datenverarbeitung (LGDV), Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Erlangen, Germany
+
+ 4 Lehrstuhl für Graphische Datenverarbeitung (LGDV), Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Erlangen, Germany
+
+
+
+ +
+ + +

arXiv

+
+
+ + +

Code

+
+
+ + +

Dataset

+
+
+
+
+

Abstract

+
+ Novel view synthesis using neural radiance fields (NeRF) is the state-of-the-art technique for generating high-quality images from novel viewpoints. Existing methods require a priori knowledge of the extrinsic and intrinsic camera parameters. This limits their applicability to synthetic scenes or to real-world scenarios that require a preprocessing step. Current research on the joint optimization of camera parameters and NeRF focuses on refining noisy extrinsic camera parameters and often relies on preprocessed intrinsic camera parameters. Other approaches are limited to a single set of camera intrinsics. To address these limitations, we propose a novel end-to-end trainable approach called NeRFtrinsic Four. We use Gaussian Fourier features to estimate extrinsic camera parameters and dynamically predict varying intrinsic camera parameters, supervised by the projection error. Our approach outperforms existing joint optimization methods on LLFF and BLEFF. In addition to these existing datasets, we introduce a new dataset called iFF with varying intrinsic camera parameters. NeRFtrinsic Four is a step forward in the joint optimization of camera parameters and NeRF-based view synthesis, and it enables more realistic and flexible rendering in real-world scenarios with varying camera parameters.
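The abstract describes the core idea: camera extrinsics and varying intrinsics are regressed from a Gaussian Fourier feature embedding and refined jointly with the radiance field through the projection error. The snippet below is a minimal, hypothetical PyTorch sketch of such an embedding plus a small pose/intrinsics head; the class name `GaussianFourierPoseNet`, the layer sizes, the axis-angle pose parameterization, and the camera-index normalization are illustrative assumptions, not the authors' released implementation.

```python
import math
import torch
import torch.nn as nn


class GaussianFourierPoseNet(nn.Module):
    """Sketch: map a per-camera index to Gaussian Fourier features, then regress pose + focals."""

    def __init__(self, num_cams: int, mapping_size: int = 128, scale: float = 10.0):
        super().__init__()
        self.num_cams = num_cams
        # Fixed random projection B ~ N(0, scale^2), as in Gaussian Fourier feature mappings.
        self.register_buffer("B", torch.randn(1, mapping_size) * scale)
        self.mlp = nn.Sequential(
            nn.Linear(2 * mapping_size, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.pose_head = nn.Linear(256, 6)   # axis-angle rotation (3) + translation (3)
        self.focal_head = nn.Linear(256, 2)  # per-camera fx, fy for varying intrinsics

    def forward(self, cam_idx: torch.Tensor):
        # Normalize the camera index, then encode it with sin/cos of a random projection.
        x = cam_idx.float().unsqueeze(-1) / self.num_cams      # (N, 1)
        proj = 2.0 * math.pi * x @ self.B                      # (N, mapping_size)
        feat = torch.cat([proj.sin(), proj.cos()], dim=-1)     # Gaussian Fourier features
        h = self.mlp(feat)
        return self.pose_head(h), self.focal_head(h)


# Example with 8 cameras: the predicted poses and focal lengths would be optimized
# jointly with the NeRF by backpropagating the photometric / projection error.
net = GaussianFourierPoseNet(num_cams=8)
pose, focal = net(torch.arange(8))
print(pose.shape, focal.shape)  # torch.Size([8, 6]) torch.Size([8, 2])
```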
+
+
+
+
+

Architecture

+
+ +
+
+
+
+
+

Results

+
+
+
LLFF
+
+ +
+
+
+
BLEFF
+
+ +
+
+
+
iFF
+
+ +
+
+
+
+
+

iFF Dataset

+
+ +
+
+

Citation

+
+
+

+ @misc{schieber2023nerftrinsic,
+   title={NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing Diverse Intrinsic and Extrinsic Camera Parameters},
+   author={Hannah Schieber and Fabian Deuser and Bernhard Egger and Norbert Oswald and Daniel Roth},
+   year={2023},
+   eprint={2303.09412},
+   archivePrefix={arXiv},
+  primaryClass={cs.CV}
+ }

+
+
+
diff --git a/docs/index.md b/docs/index.md
deleted file mode 100644
index a20a197..0000000
--- a/docs/index.md
+++ /dev/null
@@ -1,60 +0,0 @@
-
-**Hannah Schieber (1), Fabian Deuser (2), Bernhard Egger (3),**
-**Norbert Oswald (2) and Daniel Roth (1)**
-
-- (1) Human-Centered Computing and Extended Reality, Friedrich-Alexander University (FAU) Erlangen-Nurnberg, Erlangen, Germany
-- (2) Institute for Distributed Intelligent Systems University of the Bundeswehr Munich Munich, Germany
-- (3) Lehrstuhl fur Graphische Datenverarbeitung (LGDV) Friedrich-Alexander Universität (FAU) Erlangen-Nürnberg Erlangen, Germany
-
-contact e-mail: hannah.schieber[at]fau.de
-
-# Overview
-
-[Paper](https://arxiv.org/pdf/2303.09412.pdf) | [iFF](https://drive.google.com/file/d/1deYczPDEcsInCD4MkSKeH_ZMbq_TGGi4/view)
-
-# Visual improvements
-
-Our NeRFtrinsic Four can handle divers camera intrinsics which leads to a better result.
-
-![image](https://user-images.githubusercontent.com/22636930/231704734-de5774b9-7af6-4f77-ade1-92b5431bfe0a.png)
-
-# Architecture
-
-![image](https://user-images.githubusercontent.com/22636930/231704527-8c070d6b-0ac8-4432-9bd2-17725a04d191.png)
-
-# Results
-## LLFF
-
-
-
-## BLEFF
-![image](https://user-images.githubusercontent.com/22636930/231705973-028f4b1e-27c3-4d3e-ba24-038afd04ce6c.png)
-
-## iFF
-
-![image](https://user-images.githubusercontent.com/22636930/231706040-cb0ef15e-f923-419c-a71c-4d910c5220b4.png)
-
-You want to work with varying intrinsics as well, check out [iFF](https://drive.google.com/file/d/1deYczPDEcsInCD4MkSKeH_ZMbq_TGGi4/view).
-
-
-### iFF Example Images
-
-![image](https://user-images.githubusercontent.com/22636930/231706690-bfa8a920-4800-48aa-9104-6dc0c33d4c4b.png)
-
-# Citation
-
-You do something similar? or use parts from our code here you can cite our paper:
-
-```
-@misc{schieber2023nerftrinsic,
-  title={NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing Diverse Intrinsic and Extrinsic Camera Parameters},
-  author={Hannah Schieber and Fabian Deuser and Bernhard Egger and Norbert Oswald and Daniel Roth},
-  year={2023},
-  eprint={2303.09412},
-  archivePrefix={arXiv},
-  primaryClass={cs.CV}
-}
-```
-
diff --git a/figures/0000.png b/figures/0000.png
new file mode 100644
index 0000000..ad17154
Binary files /dev/null and b/figures/0000.png differ
diff --git a/figures/code.png b/figures/code.png
new file mode 100644
index 0000000..eb8fee1
Binary files /dev/null and b/figures/code.png differ
diff --git a/figures/paper.png b/figures/paper.png
new file mode 100644
index 0000000..8f42fe1
Binary files /dev/null and b/figures/paper.png differ
diff --git a/figures/teaser.png b/figures/teaser.png
new file mode 100644
index 0000000..a51e3fb
Binary files /dev/null and b/figures/teaser.png differ