Inducing explainable robot programs

End-to-end learning can solve a wide range of control problems in robotics. Unfortunately, the resulting systems lack interpretability and are difficult to reconfigure when the task changes even slightly. For example, a robot trained to inspect a set of objects must be retrained if only the order of inspection changes.

We address this by inducing a program from an end-to-end model, using a generative model composed of multiple proportional controllers. Inference under this model is challenging, so we use sensitivity analysis to extract controller goals and gains from the original model. The inferred controller trace (a sequence of controller goal states) is then simplified, and controller-specific grounding networks are trained to predict controller goals from visual inputs. The result is an interpretable and reconfigurable program describing the original learned behaviour.
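To make the controller-trace idea concrete, here is a minimal sketch of executing such a trace: each segment is a proportional controller defined by a goal state and a gain, and the program switches to the next controller once the current goal is reached. The function names, the Euler integration step, and the convergence-based switching rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def p_control(state, goal, gain):
    """One proportional control step: action is proportional to goal error."""
    return gain * (goal - state)

def execute_trace(state, trace, dt=0.05, tol=1e-3, max_steps=2000):
    """Run a sequence of (goal, gain) proportional controllers.

    Switches to the next controller when the current goal is reached,
    mirroring the idea of a program as a sequence of controller goal states.
    """
    path = [state.copy()]
    for goal, gain in trace:
        for _ in range(max_steps):
            state = state + dt * p_control(state, goal, gain)
            path.append(state.copy())
            if np.linalg.norm(goal - state) < tol:
                break
    return np.array(path)

# Hypothetical two-step inspection program in a 2D workspace:
trace = [(np.array([1.0, 0.0]), 2.0),   # move to first object
         (np.array([1.0, 1.0]), 2.0)]   # then to the second
path = execute_trace(np.zeros(2), trace)
```

Because the behaviour is now a list of (goal, gain) pairs, reordering the inspection sequence amounts to reordering the list, rather than retraining the end-to-end model.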

Michael Burke, Svetlin Penkov, Subramanian Ramamoorthy, "From explanation to synthesis: Compositional program induction for learning from demonstration", Robotics: Science and Systems (R:SS), 2019. arXiv link