Breakthrough in Initialization of Vision-based Spacecraft Shape Estimation
I am pleased to report ground-breaking results in speeding up vision-based monocular 3D model abstraction for spacecraft rendezvous, proximity operations, and docking (RPOD). Our approach uses a super-quadric prior predicted by a Convolutional Neural Network (CNN) to warm-start a sequential 3D Gaussian Splatting (3DGS) algorithm.
The video below shows a side-by-side comparison between the vanilla initialization (bottom left: random sampling within the bounding box) and our new CNN-based super-quadric initialization of 3DGS (bottom right). For this test, sparse, low-resolution (256x256) 2D images of the Deep Space 1 spacecraft from our SPE3R dataset are used for online initialization and sequential training of 3DGS.
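To illustrate the difference between the two initialization schemes, here is a minimal sketch contrasting uniform sampling inside a bounding box with sampling on a super-quadric surface. This uses the standard super-quadric parameterization (semi-axes a1, a2, a3 and shape exponents eps1, eps2); the specific parameter values and function names are hypothetical, not taken from our implementation:

```python
import numpy as np

def sample_bbox(n, bbox_min, bbox_max, rng):
    """Vanilla-style init: uniform random points inside an axis-aligned box."""
    return rng.uniform(bbox_min, bbox_max, size=(n, 3))

def sample_superquadric(n, a, eps, rng):
    """Warm-start-style init: random points on a super-quadric surface.

    Standard parameterization with semi-axes a = (a1, a2, a3) and shape
    exponents eps = (eps1, eps2); eta in [-pi/2, pi/2], omega in [-pi, pi].
    """
    eta = rng.uniform(-np.pi / 2, np.pi / 2, n)
    omega = rng.uniform(-np.pi, np.pi, n)

    def f(w, e):
        # Signed power keeps the surface continuous across quadrants.
        return np.sign(w) * np.abs(w) ** e

    x = a[0] * f(np.cos(eta), eps[0]) * f(np.cos(omega), eps[1])
    y = a[1] * f(np.cos(eta), eps[0]) * f(np.sin(omega), eps[1])
    z = a[2] * f(np.sin(eta), eps[0])
    return np.stack([x, y, z], axis=1)

rng = np.random.default_rng(0)
# Hypothetical super-quadric parameters, standing in for a CNN prediction.
a, eps = (1.0, 0.6, 0.4), (0.3, 0.8)
means_sq = sample_superquadric(2000, a, eps, rng)
means_bbox = sample_bbox(2000, np.array([-1.0, -0.6, -0.4]),
                         np.array([1.0, 0.6, 0.4]), rng)
```

The intuition is that the bounding-box samples fill empty space that training must prune away, whereas the super-quadric samples already lie on a coarse approximation of the spacecraft hull, so the Gaussians start close to the true surface.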
The overall approach and its evaluation on the complete SPE3R dataset will be documented in a paper to be presented at the upcoming AAS conference this summer.
Acknowledgements go to Redwire Space (Al Tadros) for sponsoring this work, and to the research team at the Stanford Space Rendezvous Laboratory, including Pol Francesch Huc and Emily Bates. Great job!