Archives

Pose estimation for human-robot interaction

Human-robot interaction using gesture recognition typically requires that the 3D pose of a human be tracked in real time, but this can be challenging, particularly when only a single, potentially moving, camera is available. We use a mixture of random walks model for human motion that allows for fast Rao-Blackwellised tracking, and provides a useful mechanism to map from 2D to 3D pose when only a few joint measurements are available.
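
To give a rough flavour of why the Rao-Blackwellised formulation is fast, here is a minimal Python sketch (not the published implementation; class names and parameters are invented for illustration). Each mixture component is a linear-Gaussian random walk, so its posterior can be updated in closed form by a Kalman filter, and only the mixture weights need to be handled separately.

```python
import numpy as np


class RandomWalkKalman:
    """Kalman filter for a single random-walk motion component.

    The random walk is linear-Gaussian (F = I), so the per-component posterior
    stays Gaussian and can be updated in closed form; this is the part of the
    state that Rao-Blackwellisation marginalises analytically.
    """

    def __init__(self, dim, process_std, meas_std):
        self.x = np.zeros(dim)                      # joint position estimate
        self.P = np.eye(dim)                        # estimate covariance
        self.Q = (process_std ** 2) * np.eye(dim)   # random-walk process noise
        self.R = (meas_std ** 2) * np.eye(dim)      # measurement noise

    def predict(self):
        # Random walk: x_k = x_{k-1} + w, w ~ N(0, Q), so only P grows.
        self.P = self.P + self.Q

    def update(self, z):
        # Direct joint position measurements: z = x + v, v ~ N(0, R).
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        innovation = z - self.x
        self.x = self.x + K @ innovation
        self.P = (np.eye(len(self.x)) - K) @ self.P
        # Gaussian measurement likelihood, used to reweight mixture components.
        _, logdet = np.linalg.slogdet(2.0 * np.pi * S)
        return np.exp(-0.5 * (innovation @ np.linalg.solve(S, innovation) + logdet))


def mixture_step(filters, weights, z):
    """One tracking step for a mixture of random-walk components: each
    component runs a Kalman predict/update, and the mixture weights are
    re-normalised using the resulting measurement likelihoods."""
    likelihoods = []
    for f in filters:
        f.predict()
        likelihoods.append(f.update(z))
    weights = weights * np.array(likelihoods)
    return weights / weights.sum()


# Example: three components with increasingly aggressive motion assumptions.
filters = [RandomWalkKalman(dim=3, process_std=s, meas_std=0.02)
           for s in (0.01, 0.05, 0.2)]
weights = np.ones(3) / 3.0
weights = mixture_step(filters, weights, z=np.array([0.1, 0.5, 1.2]))
```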

Pose estimation code is available here and here. As a useful byproduct, a simplified motion model proves quite effective at estimating missing marker positions for motion capture applications. Code is available here.
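
The missing-marker estimation described in the Journal of Biomechanics paper below smooths a low-dimensional representation of the marker data. The sketch below is a simplified, hypothetical version of that idea (PCA plus a random-walk Kalman filter and RTS smoother); it is not the released code linked above, and the parameter values are placeholders.

```python
import numpy as np


def rts_smooth_1d(z, q, r):
    """Random-walk Kalman filter followed by a Rauch-Tung-Striebel smoother
    for a single 1D signal (process noise q, measurement noise r)."""
    n = len(z)
    x_p = np.zeros(n); P_p = np.zeros(n)   # predicted mean / covariance
    x_f = np.zeros(n); P_f = np.zeros(n)   # filtered mean / covariance
    x, P = z[0], r
    for k in range(n):
        x_p[k], P_p[k] = x, P + q                  # predict (random walk)
        K = P_p[k] / (P_p[k] + r)                  # Kalman gain
        x = x_p[k] + K * (z[k] - x_p[k])           # update
        P = (1.0 - K) * P_p[k]
        x_f[k], P_f[k] = x, P
    x_s = x_f.copy()                               # backward (RTS) pass
    for k in range(n - 2, -1, -1):
        G = P_f[k] / P_p[k + 1]
        x_s[k] = x_f[k] + G * (x_s[k + 1] - x_p[k + 1])
    return x_s


def fill_missing_markers(X, n_components=10, process_std=0.05, meas_std=0.01):
    """Estimate missing motion-capture marker positions.

    X: (frames, 3 * n_markers) array with NaNs where markers are occluded.
    The marker data are projected onto a few principal components, each
    component trajectory is Kalman smoothed, and the reconstruction is used
    to fill in only the missing entries.
    """
    missing = np.isnan(X)
    mean = np.nanmean(X, axis=0)
    Xc = np.where(missing, mean, X) - mean         # crude initial fill, centred

    # PCA via SVD; V spans the low-dimensional marker subspace.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T
    Z = Xc @ V                                     # low-dimensional trajectories

    Z_smooth = np.column_stack(
        [rts_smooth_1d(Z[:, j], process_std ** 2, meas_std ** 2)
         for j in range(Z.shape[1])]
    )
    X_hat = Z_smooth @ V.T + mean                  # reconstruct full marker set
    return np.where(missing, X_hat, X)             # keep observed values as-is
```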

Publications

Burke, M. and Lasenby, J., Estimating missing marker positions using low dimensional Kalman smoothing, Journal of Biomechanics, Volume 49, Issue 9, 1854–1858 (2016).

Burke, M. G. (2015). Fast upper body pose estimation for human-robot interaction (doctoral thesis).

Burke, M. and Lasenby, J., Single camera pose estimation using Bayesian filtering and Kinect motion priors (2014).

Burke, M. and Lasenby, J., Fast upper body joint tracking using Kinect pose priors, International Conference on Articulated Motion and Deformable Objects (Best paper award), 94–105, 2014.

Human (and cereal) following robots

This project focused on feature-based object recognition and tracking using a single camera for a target-following mobile robot. Robot controls were generated to maximise the chances of successfully detecting and tracking the object of interest while navigating.
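
As a rough illustration of the feature-based recognition step, here is a minimal Python/OpenCV sketch. The original 2010 system did not use ORB or this exact pipeline, and the file names and thresholds are placeholders; the sketch just shows how local feature matching can localise a known target in a camera frame and produce a simple error signal for a following controller.

```python
import cv2
import numpy as np

# Placeholder images: a reference view of the target and a camera frame.
target = cv2.imread("target_template.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
assert target is not None and frame is not None

# Detect and describe local features in both images.
orb = cv2.ORB_create(nfeatures=1000)
kp_t, des_t = orb.detectAndCompute(target, None)
kp_f, des_f = orb.detectAndCompute(frame, None)

# Brute-force Hamming matching with a ratio test to prune ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_t, des_f, k=2)
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

if len(good) >= 10:
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Robustly estimate where the target appears in the frame.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is not None:
        h, w = target.shape
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        box = cv2.perspectiveTransform(corners, H)
        centre = box.reshape(-1, 2).mean(axis=0)
        # Horizontal offset from the image centre: a simple input for a
        # controller that keeps the target in view while following it.
        print("target offset (px):", centre[0] - frame.shape[1] / 2.0)
else:
    print("target not detected in this frame")
```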

This was a particular challenge at the time (2010, before convnets rose to fame), as object recognition approaches were extremely unreliable and moving cameras struggled with motion blur. This work was combined with LIDAR-based target tracking and obstacle avoidance to build the CSIR Autonomous Mule. Videos of the work and related publications are listed below.


Publications

Burke, Michael, and Willie Brink. “Estimating target orientation with a single camera for use in a human-following robot.” Proceedings of the 21st Annual Symposium of the Pattern Recognition Association of South Africa (2010).

Burke, Michael. “Laser-Based Target Tracking using Principal Component Descriptors.” Proceedings of the 21st Annual Symposium of the Pattern Recognition Association of South Africa (2010).

Burke, Michael, and Willie Brink. “Gain-scheduling control of a monocular vision-based human-following robot.” IFAC Proceedings Volumes 44.1 (2011): 8177–8182.

Burke, Michael Glen. Visual servo control for a human-following robot. Diss. Stellenbosch University, 2011.