Shafeeq Elanattil


I am a PhD student at the Speech, Audio, Image and Video Technologies (SAIVT) research program at QUT and at Data61, CSIRO, under the supervision of Dr Peyman Moghadam, Dr Simon Denman, Prof. Sridha Sridharan and Prof. Clinton Fookes. I completed a Master's degree in Electronic Systems at the Indian Institute of Technology Bombay (IITB).

Resume

Contact

Email: Shafeeq.Elanattil@Data61.csiro.au / eshafeeqe@gmail.com

Address: N3 Block, 1 Technology Ct, Pullenvale QLD 4069

Research Interest

My interests are in computer vision, computer graphics, and machine learning. My current research focuses on single-view non-rigid 3D reconstruction of subjects such as humans and animals using RGB-D sensors. Animals are interesting subjects because they are highly deformable and because their ground-truth 3D data is impractical to acquire in unconstrained environments. This motivates interesting questions like "how can we learn a model that infers the 3D shape of non-rigid objects from a single image, just from observing 2D images and videos?" Also, animal pictures are a lot of fun to work with.

I believe the key to solving this problem is having a strong prior on object pose and shape, which can resolve many of the ambiguities. I am also interested in exploring how deep learning methods can be used for building such models.

Under Review

Robust RGB-D based Non-rigid 3D Reconstruction of Human Bodies using Skeleton Detection

Currently Under Review (CVIU)
Shafeeq Elanattil, Peyman Moghadam, Simon Denman, Sridha Sridharan, Clinton Fookes

We propose a 3D human performance capture method using a single RGB-D camera. Single-view 3D reconstruction of human performance is challenging under fast motion and self-occlusion. Our approach effectively utilises skeleton detections from the images to reconstruct human subjects.

Please see the results from this work: results link

Papers

Non-rigid Reconstruction with a Single Moving RGB-D Camera

Selected for Oral (Acceptance ratio < 10%)
Shafeeq Elanattil, Peyman Moghadam, Sridha Sridharan, Clinton Fookes, Mark Cox
ICPR 2018, H5 Index: 34
[project page] [arXiv preprint] [video] [bibtex]

Skeleton Driven Non-rigid Motion Tracking and 3D Reconstruction
Shafeeq Elanattil, Peyman Moghadam, Simon Denman, Sridha Sridharan, Clinton Fookes
DICTA 2018
[project page] [arXiv preprint] [poster] [bibtex]

New Feature Detection Mechanism for Extended Kalman Filter Based Monocular SLAM with 1-point RANSAC
Shafeeq Elanattil, Agniva Sengupta
MIKE 2015
[arXiv preprint] [bibtex]

Datasets Published

Synthetic Data for Non-rigid 3D Reconstruction using a Moving RGB-D camera

This dataset was recently added to the SLAMBench 3.0 suite.
SLAMBench is a software suite for systematic, automated and reproducible evaluation of SLAM systems for robot vision challenges and scene understanding.
The SLAMBench 3.0 publication: Link

Published along with our ICPR work

The dataset consists of

  1. Complete scene geometry at the first frame, for evaluating canonical reconstruction.
  2. Live scene geometry at each frame in world coordinates.
  3. Ground-truth camera trajectory.
  4. Ground-truth foreground masks.
Please see the dataset document for more details.
Download Link

[bibtex]

Synthetic Human Model Dataset for Skeleton Driven Non-rigid Motion Tracking and 3D Reconstruction
Published along with our DICTA work

The dataset consists of

  1. Ground-truth human 3D geometry at each frame in the world coordinate frame.
  2. Ground-truth skeleton points at each frame in the world coordinate frame.
  3. Extrinsic parameters of the RGB and depth cameras.
  4. RGB and depth images.
Please see the dataset document for more details.
Download Link

[bibtex]
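As a quick-start sketch for datasets like those above, depth frames can be back-projected into camera-frame 3D points and then mapped into the world frame with the provided extrinsics. This is a minimal illustration assuming a standard pinhole camera model; the intrinsic values (`fx`, `fy`, `cx`, `cy`) and the 4x4 extrinsic matrix are hypothetical placeholders, not values from the datasets themselves.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into camera-frame 3D points.

    Assumes a pinhole camera: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop invalid pixels (zero depth is a common sensor convention for "no reading").
    return points[points[:, 2] > 0]

def to_world(points_cam, T_world_cam):
    """Map camera-frame points into the world frame via a 4x4 extrinsic transform."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_world_cam @ homo.T).T[:, :3]
```

For example, `to_world(depth_to_pointcloud(depth, fx, fy, cx, cy), T_world_cam)` would yield a world-frame point cloud that can be compared against the ground-truth scene geometry per frame.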