InfiniTAM is an open-source, multi-platform framework for real-time, large-scale depth fusion and tracking, released under an Oxford University Innovation Academic License.
We support sparse volumes, using an implementation of our ISMAR 2015 paper, optionally with loop closure, using an implementation of our ECCV 2016 paper. A preliminary surfel-based version of the pipeline is also included, as detailed in our 2017 Technical Report.
We are part of the Oxford Active Vision Library.
If you use our framework, please cite the relevant paper(s), as explained here.
16/08/2017 - InfiniTAM v3 released. We’ve added:
- an improved camera tracking module;
- an implementation of Glocker et al.’s keyframe-based random ferns camera relocaliser;
- a novel approach to globally-consistent TSDF-based reconstruction, based on dividing the scene into rigid submaps and optimising the relative poses between them;
- an implementation of Keller et al.’s surfel-based reconstruction approach;
- a new build script for Windows and Unix/MacOS that makes building InfiniTAM a lot easier;
- many other fixes and improvements.
06/04/2016 - The hhash_v2 branch of InfiniTAM v2 released, implementing our ICRA 2016 paper.
18/09/2015 - InfiniTAM is now part of the Oxford Active Vision Library.
30/07/2015 - InfiniTAM v2 released: over 10 times faster than v1; iOS and Android versions; export to STL; many other fixes and improvements.
Why use it?
InfiniTAM was designed with a focus on extensibility, reusability and portability:
Depending on the scene, processing runs at over 1000fps on a single NVIDIA Titan X graphics card, and in real time on iOS (over 25fps) and NVIDIA K1-based Android devices (over 40fps).
The world is captured sparsely using small voxel blocks indexed by a hash-table or surfels.
The relocalisation module can recover from tracking failure and detect loops.
A loop closure optimisation can be enabled to compensate for tracking drift (not supported for the surfel representation).
InfiniTAM swaps memory between CPU and GPU in real time, which allows virtually infinite environments to be reconstructed (not supported for the surfel representation or in combination with loop closure).
We provide C++ code for both CPU and GPU implementations (NVIDIA CUDA and Apple Metal) and most of it is reused between the various implementations.
The framework allows for easy integration of components that either replace existing ones or extend the capabilities of the system (e.g. 3D object detection, new tracker, etc.).
The code compiles natively on Windows, Linux, Mac OS X, iOS and Android.
The core processing library has no dependencies for the CPU version and requires only CUDA for the GPU one. The user interface requires only OpenGL and GLUT. Depth can be sourced from image files or, optionally, using OpenNI 2.