(7) Orion - Visualization, Control Utilities, and System Architecture
a) Control Utilities and Setup
Created a plugin for setting the initial position of Valkyrie in Gazebo:
Plugin for adding simple balancing capability:
I wrote a custom plugin that uses the x-position of the root body (the pelvis) as feedback for a PD joint position controller, moving the arms in the direction opposite the CoM offset so that the CoM stays over the support polygon. This maintains balance indefinitely in the absence of disturbances, but even a small disturbance force will cause Valkyrie to fall. All joints besides the shoulders hold their initial positions.
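As a rough illustration, the feedback law amounts to a PD position loop whose shoulder setpoints are shifted opposite the pelvis offset. The gains and the CoM-to-setpoint mapping below are hypothetical placeholders, not the values used in the actual plugin:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical gains for illustration only; the real plugin runs per-joint
// with its own tuned values.
const double Kp = 100.0, Kd = 20.0, Kcom = 2.0;

// PD torque driving a joint toward a desired position.
double pd_torque(double q_des, double q, double qdot) {
  return Kp * (q_des - q) - Kd * qdot;
}

// Shoulder setpoint shifted opposite the pelvis x-offset, so the arms swing
// away from the CoM drift and pull it back over the support polygon.
double shoulder_setpoint(double q_home, double pelvis_x_err) {
  return q_home - Kcom * pelvis_x_err;
}
```

A forward pelvis drift (positive x error) lowers the setpoint, swinging the arms back; all other joints would just track their initial positions with the same PD law.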
Hand XYZ Task
// Jacobian is returned as a 6 x NUM_QDOT matrix even though this task is just for position
Matrix temp(6, NUM_QDOT);
Matrix J(3, NUM_QDOT);
// q is a dummy variable - the state has already been updated in Val_Kin_Model.cpp
Vector q;
// id_ is the link number as defined in the enum in Valkyrie_Definition.cpp
ValkyrieModel::GetValkyrieModel()->getFullJacobian(q, id_, temp);
// just take the position half of the Jacobian (the bottom three rows)
J = temp.block(3, 0, 3, NUM_QDOT);
return J;
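For reference, the block(3, 0, 3, NUM_QDOT) call starts at row 3 and copies three rows, i.e. rows 3-5 of the 6-row Jacobian. A dependency-free sketch of that slicing (toy dimensions here, not Valkyrie's real NUM_QDOT):

```cpp
#include <cassert>
#include <vector>

// Toy dimension for illustration only.
const int NUM_QDOT_TOY = 4;

// Mirror of Eigen's temp.block(3, 0, 3, NUM_QDOT): copy rows
// [startRow, startRow + nRows) of a matrix stored as rows of vectors.
std::vector<std::vector<double>> block_rows(
    const std::vector<std::vector<double>>& M, int startRow, int nRows) {
  std::vector<std::vector<double>> out;
  for (int r = startRow; r < startRow + nRows; ++r)
    out.push_back(M[r]);
  return out;
}
```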
Head Gaze Task
From Dr. Sentis' thesis, pp. 64-66:
- E is a 3 × 4 representation transformation from quaternions to cartesian-space angles
- λ_head is the quaternion representing the current head orientation
- λ_goal is the quaternion representing the desired orientation
Although φ_error is a 3 × 1 vector, only 2 DOFs are needed to orient the gaze toward the desired point. So,
- J_head(φ) is the 3 × (n + 6) Jacobian corresponding to the head rotation coordinates,
- R_h is a rotation transformation from head coordinates to global coordinates,
- R_0 is the transpose of R_h
Finally, the control policy for the joint is given by:
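The orientation error above can be sketched with plain quaternion algebra. Note that the small-angle form φ_error ≈ 2 · vec(λ_goal ⊗ λ_head⁻¹) used here is a common stand-in, not necessarily the exact E-based expression from the thesis:

```cpp
#include <cassert>
#include <cmath>

// Minimal quaternion (w, x, y, z) for illustration; the real code uses the
// sejong / Eigen types.
struct Quat { double w, x, y, z; };

Quat conj(const Quat& q) { return {q.w, -q.x, -q.y, -q.z}; }

Quat mul(const Quat& a, const Quat& b) {  // Hamilton product
  return {a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
          a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
          a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
          a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w};
}

// Orientation error: delta = lambda_goal * lambda_head^{-1};
// phi ~= 2 * vector part of delta (valid for small rotations).
void gaze_error(const Quat& head, const Quat& goal, double phi[3]) {
  Quat d = mul(goal, conj(head));
  if (d.w < 0) d = {-d.w, -d.x, -d.y, -d.z};  // take the short way around
  phi[0] = 2.0 * d.x;
  phi[1] = 2.0 * d.y;
  phi[2] = 2.0 * d.z;
}
```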
Visualization of Trajectories in Rviz:
Added a singleton class that automatically handles publishing position markers in Rviz.
Example syntax is as follows for the end user:
// Declare a ROS_Pub pointer and some sejong MarkerChannels in your .h file:
ROS_Pub* rp_;
sejong::MarkerChannel mc1_;
sejong::MarkerChannel mc2_;
Then you get the static instance of the publisher in the constructor of your controller and create some markers:
// Get a static instance of the ROS_Pub class:
rp_ = ROS_Pub::get_ROS_Pub();
// Call this method to get a marker for a specific color or purpose:
rp_->get_new_pos_marker("actual", BLUE, mc1_, 500, 10.);
rp_->get_new_pos_marker("desired", LIME, mc2_, 500, 10.);
You can pass in a channel name, the channel's marker color, the number of markers in this channel that will remain visible, and the max update rate.
Whenever you want to publish a position, all you have to do is call the pub_pos_marker() method, which takes the marker channel and a 3D vector with your position. It handles all the publishing in the background and is much cleaner than trying to publish manually.
rp_->pub_pos_marker(mc1_,act);
rp_->pub_pos_marker(mc2_,des);
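Internally, a channel like this only needs two behaviors: keep the most recent maxNumMrks points visible and drop calls that arrive faster than the rate limit. A hypothetical sketch of that bookkeeping (not the actual ROS_Pub internals):

```cpp
#include <array>
#include <cassert>
#include <deque>

// Hypothetical channel bookkeeping; the real sejong::MarkerChannel lives in
// the ROS_Pub implementation.
struct MarkerChannelSketch {
  std::deque<std::array<double, 3>> pts;  // currently visible marker positions
  unsigned max_num_mrks;
  double min_period_ms;
  double last_pub_ms;

  // Returns true if the point was accepted (i.e. a publish would happen).
  bool push(double now_ms, const std::array<double, 3>& p) {
    if (now_ms - last_pub_ms < min_period_ms) return false;  // rate limit
    last_pub_ms = now_ms;
    pts.push_back(p);
    if (pts.size() > max_num_mrks) pts.pop_front();  // keep only the last N
    return true;
  }
};
```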
In the following screenshot, Valkyrie is using WBOSC for a hand position task.
You can see the tracking is fairly good between Desired (GREEN) and Actual (BLUE).
Next steps are to add quaternion and eventually ghosting of entire meshes.
Also, by using other marker types, specifically points instead of spheres, we could gain a lot of efficiency.
Rendering efficiency improvements for large arrays of position markers for Rviz:
I changed the marker type for position markers in the ROS_Pub class from SPHERE to SPHERE_LIST. This allows much faster rendering of large numbers of spheres marking link positions in Rviz. I was surprised by the size of the performance improvement. Nothing changes for the client user of the ROS_Pub class.
Visualization of Orientations in Rviz:
I finally completed the orientation visualization for Rviz. The idea is to be able to visualize an X, Y, and Z unit vector in the local frame of a particular body. Orientations are definitely trickier than I thought they would be. Here's a quick overview of my method:
To obtain the local frame's X, Y, and Z unit vectors I am using:
// with i = 0, 1, and 2 for X, Y, and Z
quaternion._transformVector( sejong::Vect3::Unit(i) );
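Eigen's _transformVector applies the standard q v q⁻¹ rotation. A dependency-free sketch of the same operation, assuming a unit quaternion in (w, x, y, z) order:

```cpp
#include <cassert>
#include <cmath>

// Rotate v by unit quaternion (w, x, y, z) via the rotation-matrix form of
// q v q^{-1}; this is what _transformVector computes for us.
void rotate(const double q[4], const double v[3], double out[3]) {
  double w = q[0], x = q[1], y = q[2], z = q[3];
  out[0] = (1 - 2*(y*y + z*z)) * v[0] + 2*(x*y - w*z) * v[1] + 2*(x*z + w*y) * v[2];
  out[1] = 2*(x*y + w*z) * v[0] + (1 - 2*(x*x + z*z)) * v[1] + 2*(y*z - w*x) * v[2];
  out[2] = 2*(x*z - w*y) * v[0] + 2*(y*z + w*x) * v[1] + (1 - 2*(x*x + y*y)) * v[2];
}
```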
This syntax is the same for position markers, and the same static ROS_Pub class handles both position and orientation publishing:
// Get a static instance of the ROS_Pub class:
rp_ = ROS_Pub::get_ROS_Pub();
Next, you ask ROS_Pub for orientation marker channels. It initializes the channels to your specified color. Notice that since we typically do not want orientation trails, we set maxNumMrks = 1, which means each new update replaces the previous orientation marker. Also, in this case, the channel will update at most once every 20 ms, which prevents the publishing from overloading your processor and stealing resources from the controller.
rp_->get_new_ori_marker("act_hand_ori", BLUE, mc3_, 1, 20.);
rp_->get_new_ori_marker("des_hand_ori", LIME, mc4_, 1, 20.);
Now that the client has an orientation marker channel, publishing an orientation only requires a quaternion. Pass the quaternion to the pub_ori_marker() method along with a root position where you want the axes' origin to be located, and it will plot a Red, Green, and Blue arrow on the local X, Y, and Z axes designating the body-local orientation specified by the passed quaternion. The color specified in the get_new_ori_marker() method identifies the three axes in that channel with a colored sphere on the tip of the axis arrows.
rp_->pub_ori_marker(mc3_,pos, act_quaternion );
rp_->pub_ori_marker(mc4_,pos, des_quaternion );
The following video shows some example output. Axes with green dots display the desired orientation (which is a sinusoidal oscillation about global x-axis), and the axes with blue dots display the actual orientation. The controller is not implemented which is why the hand is not following the orientation motion plan.
Publishing alternate robot models to Rviz
Now, using the MissionCtrl.launch file, we can publish multiple Valkyries at once to Rviz and control their poses independently. We can also adjust the opacity of each model, allowing one to be a "ghost". Here are some potential uses:
- Display the actual (Gazebo) Valkyrie pose as a ghost alongside the state-estimated Valkyrie pose, in order to tune the state estimator and train the operator to anticipate when and how the estimated pose may differ from the actual pose, so that they can give conservative commands or request updated state and sensor information
- "Loop play" forward kinematic plans in Rviz. The operator should be able to send a high-level command and current state information to the Mission Control planner in order to preview and approve the planner's result prior to sending the same high-level command across the UDP or TCP connection to the Valkyrie_System_Manager in the Gazebo Plugin. One way of previewing the plan would be to use a ghost of Valkyrie in Rviz that plays a loop of the planned trajectory for the links going forward in time. Possible additional functionalities would be to pause the plan and drag a slider to show different future frames in time and to slow down playback for closer inspection. Note that this is only a kinematic preview.
- "Loop play" forward dynamics in Rviz. This functionality would take a planned trajectory and simulate the forward dynamics as well as the controller response. This could then be viewed in Rviz as a ghost, similar to the way that kinematic plans could be viewed. Donghyun calls this the "Dynamically Consistent Media Player." If the actual Gazebo simulation environment is too noisy, this may not be of any use because we are using torque control on the actual robot and the dynamics in the actual gazebo simulation may diverge relatively quickly from our internally simulated dynamics within Mission Control.
Alternate RobotModels are published to Rviz by remapping to alternate robot_description parameters within the <node> tags in the launch file for both the joint_state_publisher and the robot_state_publisher, and by adding a tf_prefix to one of the robot_state_publisher's transform trees. Here is an excerpted example from the launch file:
First you have to load a URDF and an alternate tf_prefix onto the ROS parameter server:
<param name="robot_description_2" textfile="$(find r5_src_simulation)/urdf_model/r5_urdf.urdf" />
<param name="tf_prefix_2" value="ghost" />
Then you can remap parameter names for the second joint_state_publisher and robot_state_publisher so that these do not conflict with or overload the URDF and tf_prefix used for the first robot.
<node name="joint_state_publisher_2" pkg="joint_state_publisher" type="joint_state_publisher" >
<remap from="robot_description" to="robot_description_2" />
<remap from="joint_states" to="joint_states_2" />
</node>
<node name="robot_state_publisher_2" pkg="robot_state_publisher" type="state_publisher">
<remap from="robot_description" to="robot_description_2" />
<remap from="joint_states" to="joint_states_2" />
<remap from="tf_prefix" to="tf_prefix_2" />
</node>
If you are using the same URDF for both models, then you do not actually need a separate robot_description for each publisher, but you do need separate joint_states topics and tf_prefix values.
Here are some fun screenshots of the results:
Wrote a tf publisher in C++ for Valkyrie's root
We can now publish arbitrary virtual joint transformations to Rviz from C++. This was one of the prerequisites to streaming continuous trajectory forecasts from Donghyun's controller to Rviz for ghost visualization.
This weekend I will be writing a C++ version of the ROS joint_state_publisher. To see how the original works, I started looking at the Python joint_state_publisher, which is mostly just a GUI, so it was not especially helpful.
Brainstorming about interface for new Visualization_Tools features
The Mission Control package needs the ability to save and play back trajectories in Rviz. We brainstormed several ideas for the interface; much of this functionality can be built into the ROS_Pub class. Here is a potential interface:
// Get a new joint_states_channel by specifying a name and a max necessary publishing rate (in this case 10 Hz)
rp_->new_JS_channel(js1, 10.);
Then to publish a set of joint_states for Valkyrie to a file, stamped with a simulation time, use the following format:
// Pass in the simulation time and the joint states
rp_->save_joint_states(js1, sim_time, model->Q);
Finally, you can load a particular channel with a previously saved continuous trajectory of joint states as follows:
// Pass in the previously populated joint states trajectory to begin playback
rp_->play_robot_traj(js1);
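Behind an interface like this, the channel only needs to append time-stamped joint vectors on save, then step through them with wrap-around on playback. A sketch with illustrative names (not the final API):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative trajectory buffer; the real data would live inside the
// sejong::JS_Channel / ROS_Pub implementation.
struct JointTraj {
  std::vector<double> times;
  std::vector<std::vector<double>> frames;  // one joint-state vector per time

  void save(double t, const std::vector<double>& Q) {
    times.push_back(t);
    frames.push_back(Q);
  }

  // Frame to display at playback time t, looping over the recorded span.
  const std::vector<double>& frame_at(double t) const {
    double span = times.back() - times.front();
    double tl = times.front() + std::fmod(t - times.front(), span);
    std::vector<double>::size_type i = 0;
    while (i + 1 < times.size() && times[i + 1] <= tl) ++i;
    return frames[i];
  }
};
```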
Brainstorming with the team about the GUI
This discussion resulted in attempting to lay out the specifications for interactions between separate parts of the software. The specifications are here.
Building joint state publishing capabilities into ROS_Pub Class
This allows for quick and easy publishing of a robot from anywhere in your C++ code where you can import ROS_Pub.
The API is as follows:
void new_js_channel(sejong::JS_Channel & jsc, std::string name, double minSleepTime=1000./30.);
void pub_js(sejong::JS_Channel & jsc, sejong::Vector & Q);
Here is some example code that shows how to do it:
// As usual, you have to declare the ROS_Pub pointer and a joint state channel:
ROS_Pub *rp;
rp = ROS_Pub::get_ROS_Pub();
sejong::JS_Channel jsc;
// new_js_channel() is just called once to set things up:
rp->new_js_channel(jsc, "ghost");
Note that the name of the js_channel must be the same as the tf_prefix specified in the launch file.
Later when you're ready to publish to a ghost in Rviz, you use the following:
// Q is a vector of joint states. It must be of size NUM_Q
rp->pub_js(jsc, Q);
You can pass in joint states and they are plotted in Rviz in real time:
The other part of the code that you would need to recreate this is in the launch file. First, you have to create a separate tf_prefix for the robot_state_publisher to read. Note that this is the same name as the "name" parameter given to the js_channel when it is created. Then you have to remap from "joint_states" to "joint_states_" + name and from "tf_prefix" to "tf_prefix_2".
In the launch file, it looks something like this:
In this case, it produces a ghosted spinning Valkyrie, with a second Valkyrie watching from afar:
Virtual Joint for Easily Moving Valkyrie without Walking/Balancing Controller:
This virtual joint was designed to allow us to proceed with project development while we were waiting for NASA to release the software with ihmc's walking controller.
Assisted in shaping architecture of the software for the challenge
This structure addresses bandwidth limitations, visualization requirements, internal dynamics simulation (Donghyun calls this the "Dynamically Consistent Media Player") etc.
Here are the results of our initial brainstorming session:
Made a more official version of the Software Architecture Schematic:
Implementing Ye's Phase Space Planner Paper using analytical solutions to find switching time and determine lateral plane footstep locations:
$ rename 's/default_Third\-Person_Camera_link_Third\-Person_Camera\(1\)\-/img/' /tmp/gazebo_cam_images/*.jpg
$ ffmpeg -framerate 30 -start_number 200 -i /tmp/gazebo_cam_images/img%04d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p /tmp/out2.mp4
$ rm /tmp/gazebo_cam_images/*.jpg
transfer_time = 0.5
swing_time = 0.8
execution_mode = 1
Speed measurement for Valkyrie's various walking modes
I created a speedometer node for measuring walking speed of Valkyrie. It reports the total speed of a complete set of footstep commands, and it also reports the incremental average speed and footstep placement error each time the swing foot lands.
This will be used to formulate the cost function for our Reinforcement Learning optimization of footstep plans for ihmc's walking controller. The proposed idea is to use an actor-critic learning method, in which we compute a gradient of the cost function based on a single iteration of the simulation and then use gradient descent to perturb the walking parameters. Through iteration, we should arrive at a local minimum for the footstep plan. We will minimize over the following variables:
sagittal_footstep_placement
lateral_footstep_placement
double_support_time
swing_time
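Setting the actor-critic machinery aside, the core update can be sketched as finite-difference gradient descent over these four parameters. The quadratic cost below is a hypothetical stand-in for a real simulation rollout scored by the speedometer:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Stand-in cost: in practice this would be one simulation rollout scored by
// the speedometer (speed, footstep placement error). The optimum is a toy value.
double rollout_cost(const std::vector<double>& p) {
  const double opt[4] = {0.35, 0.15, 0.5, 0.8};  // hypothetical optimum
  double c = 0;
  for (int i = 0; i < 4; ++i) c += (p[i] - opt[i]) * (p[i] - opt[i]);
  return c;
}

// One finite-difference gradient-descent step over the four walking parameters.
std::vector<double> descend_step(std::vector<double> p, double eps, double lr) {
  std::vector<double> g(4);
  for (int i = 0; i < 4; ++i) {
    std::vector<double> ph = p;
    ph[i] += eps;
    g[i] = (rollout_cost(ph) - rollout_cost(p)) / eps;  // forward difference
  }
  for (int i = 0; i < 4; ++i) p[i] -= lr * g[i];
  return p;
}
```

Each step costs five rollouts (one nominal plus one per perturbed parameter), so in simulation the iteration budget, not the math, is the limiting factor.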