Saturday, 01/21/2017 7:48:58 AM

T-LESS: An RGB-D Dataset for 6D Pose Estimation of Texture-less Objects

Tomas Hodan, Pavel Haluza, Stepan Obdrzalek, Jiri Matas, Manolis Lourakis, Xenophon Zabulis
(Submitted on 19 Jan 2017)

We introduce T-LESS, a new public dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size. Compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images that were captured with three synchronized sensors, specifically a structured-light and a time-of-flight RGB-D sensor and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object, i.e. a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes having varying complexity, which increases from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and with a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene, and are annotated with accurate ground truth 6D poses of all modeled objects. Initial evaluation results indicate that the state of the art in 6D object pose estimation has ample room for improvement, especially in difficult cases with significant occlusion. The T-LESS dataset is available online at cmp.felk.cvut.cz/t-less.

https://arxiv.org/abs/1701.05498
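For orientation, a 6D pose as used in the paper is a 3x3 rotation matrix R plus a 3x1 translation vector t that map points from the object's model frame into the camera frame. The short Python sketch below illustrates just that mapping; it is not code from the T-LESS toolkit, and the variable names and example values are illustrative assumptions.

import numpy as np

def transform_points(points, R, t):
    # Map Nx3 model points into the camera frame: X_cam = R @ X_model + t
    points = np.asarray(points, dtype=np.float64)      # (N, 3) model vertices
    R = np.asarray(R, dtype=np.float64)                # (3, 3) rotation
    t = np.asarray(t, dtype=np.float64).reshape(1, 3)  # (1, 3) translation
    return points @ R.T + t

# Example: identity rotation, object placed 0.5 m in front of the camera.
model_points = np.array([[0.00, 0.00, 0.00],
                         [0.01, 0.02, 0.03]])
print(transform_points(model_points, np.eye(3), np.array([0.0, 0.0, 0.5])))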

Figure 5. Examples of 3D object models. Top: Manually created CAD models. Bottom: Semi-automatically reconstructed models, which also include surface color. Surface normals at model vertices are included in both model types.
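If the models are distributed as standard mesh files (e.g. PLY), a generic mesh library can be used to inspect the vertices and per-vertex normals mentioned in the caption. The snippet below is a hedged sketch using the third-party trimesh package; the file name is hypothetical and not taken from the dataset, and trimesh recomputes vertex normals from the face geometry when needed.

import trimesh

# Hypothetical file name; the actual model files ship with the dataset.
mesh = trimesh.load_mesh("obj_01_cad.ply")
print(mesh.vertices.shape)        # (N, 3) vertex positions
print(mesh.vertex_normals.shape)  # (N, 3) per-vertex surface normals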
3.6. Ground Truth Poses

To obtain ground truth 6D object poses for images of a test scene, a dense 3D model of the scene was first reconstructed with the system of Steinbrücker et al. [44]. This was accomplished using all 504 RGB-D images of the scene along with the sensor poses estimated using the turntable markers. The CAD object models were then manually aligned to the scene model. To increase accuracy, the object models were rendered into several selected high-resolution scene images from Canon, …
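The alignment check described above amounts to rendering an object model under a candidate pose into a high-resolution image and comparing the overlay with the photographed object. A minimal sketch of the underlying pinhole projection is given below; the intrinsics and pose values are made-up placeholders, not parameters from the dataset.

import numpy as np

def project_points(points, R, t, K):
    # Transform Nx3 model points to the camera frame, then project: x ~ K (R X + t)
    cam = np.asarray(points) @ np.asarray(R).T + np.asarray(t).reshape(1, 3)
    uvw = cam @ np.asarray(K).T          # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide -> (u, v)

# Placeholder intrinsics for a high-resolution camera and a placeholder pose.
K = np.array([[4000.0,    0.0, 1296.0],
              [   0.0, 4000.0,  972.0],
              [   0.0,    0.0,    1.0]])
pts = np.array([[0.00, 0.00, 0.00],
                [0.02, 0.00, 0.00]])
print(project_points(pts, np.eye(3), np.array([0.0, 0.0, 0.7]), K))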