A robotic system that learns the rules and relations of ownership based on interaction with objects and agents in its environment. Capable of ownership prediction through perceptual heuristics, ownership inference through Bayesian logic, and norm learning through incremental rule induction.
Pre-print (accepted at AAAI 2019): *That's Mine! Learning Ownership Relations and Norms for Robots*
Developed primarily at the Yale Social Robotics Lab.
Prerequisites marked with an asterisk (*) are necessary even for a simulation-only compile.
- `scikit-learn`*: for ownership prediction via logistic regression
- `numpy`*: for inference-related math operations
- `PyAudio`: for accessing audio input
- `pocketsphinx`: for speech-to-text transcription
- `aruco_ros`: for recognition and tracking of QR codes
- `cv_bridge`: for computer vision via OpenCV
- `svox_tts`: for text-to-speech synthesis through SVOX
- `human_robot_collaboration`: for arm control
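On a typical setup, the Python prerequisites can be installed through pip. The following is only a sketch, since package names and ROS distributions vary across platforms; the `<distro>` placeholder and the apt usage are assumptions, not part of this repo. The remaining ROS packages (`aruco_ros`, `svox_tts`, `human_robot_collaboration`) are built from source in your Catkin workspace, as described in the compile steps below.

```
# Python dependencies (PyPI names assumed; verify for your platform).
# PyAudio additionally requires the PortAudio system library.
pip install scikit-learn numpy PyAudio pocketsphinx

# cv_bridge ships with ROS; substitute your ROS distribution for <distro>
sudo apt-get install ros-<distro>-cv-bridge
```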
We follow `human_robot_collaboration` in using `catkin_tools`.
- Make sure you're on the correct branch/version of both `human_robot_collaboration_lib` and `aruco_ros`
- Compile `human_robot_collaboration_lib`, `aruco_ros` and `svox_tts` if necessary
- Compile `ownage_bot`: `catkin build ownage_bot`
If you have a ROS installation but don't have `human_robot_collaboration`, `aruco_ros` or `svox_tts`, you can still compile and run the learning algorithm in simulated mode.
- Set the `OWNAGE_BOT_SIMULATION` variable: `export OWNAGE_BOT_SIMULATION=1`
  - Switch back to a full compile by calling `unset OWNAGE_BOT_SIMULATION`
- Compile `ownage_bot`: `catkin build ownage_bot`
- You may have to delete `build/CMakeCache.txt` in your Catkin workspace for changes in the environment variables to be noticed.
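Putting the steps above together, a simulation-only build might look like this, run from the root of your Catkin workspace (the `rm` step is only needed if you previously compiled with different environment settings):

```
export OWNAGE_BOT_SIMULATION=1
rm -f build/CMakeCache.txt   # force CMake to re-read the environment variable
catkin build ownage_bot
```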
(Mainly for ScazLab researchers.)
- Turn on the robot. Wait for the robot to finish its start-up phase.
- Be sure that the system you're running the code on has access to the Baxter robot. This is usually done by running the `baxter.sh` script that should be provided in your Baxter installation.
- Untuck the robot. @ScazLab students → we have an alias for this, so you just have to type `untuck`.
Running `roslaunch ownage_bot.launch` brings up a command prompt for text-based user input. Input can consist of atomic actions, higher-level tasks, permission-based instruction (i.e. forbidding actions on specific objects), rule-based instruction (i.e. forbidding actions based on object properties), listing and clearing various databases, etc. A detailed list of input commands is given below, followed by a short example session.
- `list <database>`: Lists available actions, tracked objects, learned rules, etc.
  - `list objects [simulated] <fields>...`: Lists objects as perceived by the tracker
    - If `simulated` is present, lists all objects in the simulated environment, including those not tracked
    - Lists all specified fields, defaulting to id, color, position and ownership
  - `list agents [simulated]`: Lists all agents and their names
    - If `simulated` is present, lists all agents in the simulated environment, including those not tracked
  - `list predicates`: Lists the names of all available predicates
  - `list rules`: Lists all currently active rules
  - `list actions`: Lists all actions that the robot can take
  - `list tasks`: Lists all higher-level tasks
- `reset <database>`: Resets the specified database
  - `reset perms`: Resets the permission database
  - `reset rules`: Resets the active rule database
  - `reset claims`: Resets the database of ownership claims
  - `reset objects`: Resets the database of tracked objects
  - `reset agents`: Resets the database of tracked agents
  - `reset simulation`: Resets and regenerates the simulated environment
  - `reset all`: Resets all of the above
- `(freeze|unfreeze) <database>`: Freezes or unfreezes changes to databases
  - `freeze perms`: Freezes the permission database (unfrozen by default)
  - `freeze rules`: Freezes the rules database (unfrozen by default)
- `(disable|enable) <function>`: Disables or enables certain learning capabilities
  - `disable inference`: Disables rule-based inference of ownership
  - `disable extrapolate`: Disables percept-based prediction of ownership
- `i am <agent>`: Makes `<agent>` the current user and updates the agent database accordingly
- `<action>`: Calls the corresponding action
- `<task>`: Calls the corresponding task
- `ownedBy <oid> <aid>`: Claims that object `<oid>` is owned by agent `<aid>`
- `(forbid|allow) <action> on <oid>`: Gives object-specific permission for `<action>` on object `<oid>`
- `(forbid|allow) <action> if <predicate> <args> [and] ...`: Gives a rule forbidding or allowing a certain action under the specified conditions
- `?<predicate> [pre-args] ? [post-args]`: Queries for the arguments in the `?` slot and their corresponding truth values
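As an illustration, a short session at the command prompt might look like the following. The object ID `3` and the agent name `alice` are hypothetical stand-ins used to show the grammar above, `pickUp` is one of the actions listed under manual arm control below, and the `#` comments are annotations for the reader, not part of the input.

```
i am alice                      # identify yourself as the current user
ownedBy 3 alice                 # claim that object 3 is owned by alice
forbid pickUp on 3              # object-specific permission
forbid pickUp if ownedBy alice  # property-based rule
?ownedBy 3 ?                    # query which agents own object 3
list rules                      # inspect the resulting rule database
```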
For testing and debugging the arm control service provided by `action_provider`:

- Run `roslaunch ownage_bot.launch manual:=true`
- For actions that have no targets, call `rosservice call /action_provider/service_left "{action: 'action_name'}"`
- For actions with objects as targets, call `rosservice call /action_provider/service_left "{action: 'action_name', object: {id: object_id}}"`
- For actions with locations as targets, call `rosservice call /action_provider/service_left "{action: 'action_name', location: {x: x, y: y, z: z}}"`
The available actions are:

- `goHome`: moves the arm to a position above its home area
- `release`: turns off the vacuum gripper at the current height and location
- `moveTo`: moves the arm to a specified location in 3D space (requires a location)
- `find`: moves the arm over the location of a specified object (requires an object ID)
- `pickUp`: picks up an object with the vacuum gripper (requires an object ID)
- `putDown`: puts down an object gently at the current x-y location
- `replace`: puts an object back in its last pick-up location
- `wait`: waits 3 seconds; can be interrupted by feedback from the cuff button
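For instance, combining the call templates above with these actions (the object ID `1` and the coordinates are hypothetical values for illustration):

```
rosservice call /action_provider/service_left "{action: 'goHome'}"
rosservice call /action_provider/service_left "{action: 'pickUp', object: {id: 1}}"
rosservice call /action_provider/service_left "{action: 'moveTo', location: {x: 0.5, y: 0.3, z: 0.2}}"
```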
OwnageBot is composed of many different ROS nodes, each providing a certain piece of functionality. They are roughly organized by function below.
- `agent_tracker`: tracks the properties of all agents encountered, as well as the identity of the current user
- `object_tracker`: contains the abstract ObjectTracker class for tracking objects
- `aruco_tracker`: inherits from `object_tracker` to implement object tracking through ArUco tags
- `endpoint_tracker`: inherits from `object_tracker` to implement tracking of objects gripped by an endpoint manipulator
- `ownership_tracker`: tracks and updates the ownership probabilities of each object
- `baxter_tracker`: inherits from the above three nodes to combine their functionality for real-world object tracking
- `simulated_tracker`: inherits from `ownership_tracker` and implements zero-noise tracking in a simulated environment
- `rule_manager`: manages and updates the rules learned through interaction with the environment
- `task_manager`: carries out assigned actions and tasks, first checking whether they are forbidden
- `rule_instructor`: automatically trains and evaluates the rule learning and ownership prediction capabilities
- `world_simulator`: generates and stores a simulated environment, from which `simulated_tracker` gets its data
- `world_display`: shows all currently tracked objects and their x-y locations in a 2D graphical display
- `dialog_manager`: handles both text and speech input, relaying the appropriate messages to and from other nodes
- `command_prompt`: provides a command prompt for text input and output via curses
- `screen_manager`: displays the camera feed and other relevant information on the Baxter screen
- `speech_processor`: handles speech recognition and synthesis for (quasi-)natural dialog
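Once `roslaunch ownage_bot.launch` is up, the standard ROS introspection tools can help verify which of these nodes are running and how they are wired together. The node name below follows the list above but may be namespaced differently in practice:

```
rosnode list                 # show all running nodes
rosnode info /rule_manager   # topics and services of one node (name assumed)
rosservice list              # enumerate the services offered
```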