A multimodal classifier generative adversarial network for carry and place tasks from ambiguous language instructions

A Magassouba, K Sugiura… - IEEE Robotics and Automation Letters, 2018 - ieeexplore.ieee.org
This letter focuses on a multimodal language understanding method for carry-and-place tasks with domestic service robots. We address the case of ambiguous instructions, that is, instructions in which the target area is not specified. For instance, “put away the milk and cereal” is a natural instruction that leaves the target area ambiguous in typical daily-life environments. Conventionally, such an instruction can be disambiguated through a dialogue system, but at the cost of time and cumbersome interaction. Instead, we propose a multimodal approach in which the instructions are disambiguated using the robot's state and the environment context. We develop the Multi-Modal Classifier Generative Adversarial Network (MMC-GAN) to predict the likelihood of different target areas, taking into account the robot's physical limitations and the clutter in the candidate areas. Our approach, MMC-GAN, significantly improves accuracy compared with baseline methods that use instructions only or simple deep neural networks.
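The abstract does not give architectural details, but a classifier-GAN of this kind can be pictured as a discriminator that scores each candidate target area (plus an extra "generated" class) from fused linguistic and scene features, while a generator produces fake multimodal features. The sketch below is a minimal, hypothetical PyTorch illustration; the feature dimensions, encoders, and layer structure are assumptions for illustration only, not the architecture reported in the paper.

```python
# Hypothetical sketch of a multimodal classifier-GAN for target-area prediction.
# Assumed (not from the abstract): instruction/context feature dims, number of
# candidate areas, and the generator/discriminator layer structure.
import torch
import torch.nn as nn

INSTR_DIM, CTX_DIM, NOISE_DIM, N_AREAS = 128, 64, 32, 5

class Generator(nn.Module):
    """Maps noise, conditioned on the instruction embedding, to fake multimodal features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + INSTR_DIM, 256), nn.ReLU(),
            nn.Linear(256, INSTR_DIM + CTX_DIM),
        )
    def forward(self, z, instr):
        return self.net(torch.cat([z, instr], dim=1))

class Discriminator(nn.Module):
    """Classifies multimodal features into N_AREAS target areas plus one 'fake' class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(INSTR_DIM + CTX_DIM, 256), nn.ReLU(),
            nn.Linear(256, N_AREAS + 1),  # last index = generated/fake class
        )
    def forward(self, x):
        return self.net(x)

# Toy forward pass: "real" features are the concatenated instruction and context vectors.
instr = torch.randn(8, INSTR_DIM)   # e.g. embedding of "put away the milk and cereal"
ctx = torch.randn(8, CTX_DIM)       # e.g. robot-state and clutter features
real = torch.cat([instr, ctx], dim=1)
fake = Generator()(torch.randn(8, NOISE_DIM), instr)

D = Discriminator()
area_logits = D(real)[:, :N_AREAS]  # scores for each candidate target area
probs = area_logits.softmax(dim=1)  # likelihood of each target area
print(probs.shape)                  # torch.Size([8, 5])
```

In a classifier-GAN setup of this sort, adversarial training against the generator's fake features can act as a regularizer for the area classifier; how closely this matches the actual MMC-GAN training procedure is not stated in the snippet above.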