In this section we consider the following unicycle:
This robot has 2 dof: $(v, \omega)$, the translational and rotational velocities that are applied at point E, considered as the end-effector. A camera is rigidly attached to the robot at point C. The homogeneous transformation between C and E is given by cMe. This transformation is constant.
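The following sketch shows how this simulated robot can be instantiated and how the constant cMe transformation can be read back. It assumes the get_cMe() accessor that vpSimulatorPioneer inherits from vpUnicycle:

#include <iostream>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/robot/vpSimulatorPioneer.h>

int main()
{
  vpSimulatorPioneer robot;                  // unicycle with a rigidly attached camera
  vpHomogeneousMatrix cMe = robot.get_cMe(); // constant transformation between C and E
  std::cout << "cMe:\n" << cMe << std::endl;
}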
The robot position evolves with respect to a world frame and is given by wMe. When a new joint velocity is applied to the robot using setVelocity(), the position of the camera wrt the world frame, wMc, is also updated.
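Continuing the previous sketch, applying a velocity and reading back the updated camera position could look as follows; the 2-dimensional control vector is assumed to be ordered as $(v, \omega)$:

robot.setSamplingTime(0.04);             // integration step used to update the odometry

vpColVector v(2);
v[0] = 0.1;                              // translational velocity v in m/s
v[1] = 0.0;                              // rotational velocity omega in rad/s
robot.setVelocity(vpRobot::ARTICULAR_FRAME, v);

vpHomogeneousMatrix wMc;
robot.getPosition(wMc);                  // updated position of the camera wrt the world frame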
To control the robot by visual servoing we need to introduce two visual features. If we consider a 3D point at position O as the target, to position the robot relative to the target we can use as visual features the coordinate $x$ of the point in the image plane and $\log(Z/Z^*)$, with $Z$ the depth of the point in the camera frame and $Z^*$ its desired value. The first feature, implemented in vpFeaturePoint, is used to control the rotational velocity $\omega$, while the second one, implemented in vpFeatureDepth, controls the translational velocity $v$. The position of the target in the world frame is given by the wMo transformation. Thus the current visual feature is ${\bf s} = (x, \log(Z/Z^*))$ and the desired feature is ${\bf s}^* = (x^*, 0)$.
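A sketch of how these two features could be built, assuming the current cMo transformation is available and that the desired depth is $Z^* = 0.5$:

#include <cmath>
#include <visp3/core/vpPoint.h>
#include <visp3/visual_features/vpFeatureBuilder.h>
#include <visp3/visual_features/vpFeatureDepth.h>
#include <visp3/visual_features/vpFeaturePoint.h>

vpPoint point(0, 0, 0);                  // target: 3D point at position O
point.track(cMo);                        // project the point for the current camera position

double Z  = point.get_Z();               // current depth of the point in the camera frame
double Zd = 0.5;                         // desired depth Z*

vpFeaturePoint s_x, s_xd;                // current and desired feature x
vpFeatureBuilder::create(s_x, point);
s_xd.buildFrom(0, 0, Zd);                // point centered in the image: x* = 0

vpFeatureDepth s_Z, s_Zd;                // current and desired feature log(Z/Z*)
s_Z.buildFrom(s_x.get_x(), s_x.get_y(), Z, log(Z / Zd));
s_Zd.buildFrom(0, 0, Zd, 0);             // desired value of log(Z/Z*) is 0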
We now provide a line-by-line explanation of the code.
Firstly we define cdMo, the desired position the camera has to reach wrt the target. The translation along the camera optical axis, which corresponds to the desired depth $Z^*$, should be different from zero, otherwise the task is singular. Here the camera has to keep a distance of 0.5 meter from the target.
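A sketch of such a definition, where only the translation along the optical axis is set:

vpHomogeneousMatrix cdMo;                // desired camera position wrt the target
cdMo[2][3] = 0.5;                        // Z* = 0.5 m; must be non zero to avoid a singularity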
With the next line, we specify the kind of visual servoing control law that will be used to control our mobile robot. Since the camera is mounted on the robot, we consider the case of an eye-in-hand visual servo. The robot controller provided in vpSimulatorPioneer allows sending the velocities $(v, \omega)$. This controller also implements the robot Jacobian ${^e}{\bf J}_e$ that links the end-effector velocity skew vector to the control velocities $(v, \omega)$. The velocity twist matrix ${^c}{\bf V}_e$, also provided, transforms a velocity skew vector expressed in the end-effector frame into the camera frame.
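The corresponding task setup could look like the following sketch; it assumes the get_cVe() and get_eJe() accessors that vpSimulatorPioneer inherits from vpUnicycle:

#include <visp3/vs/vpServo.h>

vpServo task;
task.setServo(vpServo::EYEINHAND_L_cVe_eJe); // eye-in-hand law built from cVe and eJe

vpVelocityTwistMatrix cVe = robot.get_cVe(); // twist from end-effector frame to camera frame
task.set_cVe(cVe);

vpMatrix eJe = robot.get_eJe();              // jacobian linking (v, omega) to the end-effector velocity
task.set_eJe(eJe);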
We then specify that the interaction matrix is computed from the visual features at the desired position. The constant gain $\lambda$, which ensures an exponential decrease of the feature error, is set to 0.2.
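With the vpServo API these two settings correspond to:

task.setInteractionMatrixType(vpServo::DESIRED, vpServo::PSEUDO_INVERSE);
task.setLambda(0.2);                     // constant gain for an exponential decrease of the error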
Then comes the material used to plot in real-time the curves that show the evolution of the velocities, the visual error and the estimation of the depth. The corresponding lines are not explained in this tutorial, but should be easily understood after reading Tutorial: Real-time curves plotter tool.
In the visual servo loop we retrieve the robot position and compute the new position of the camera wrt the target:
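A sketch of this update at the beginning of each iteration, assuming wMe, wMo, cMe and cMo are vpHomogeneousMatrix variables introduced earlier:

robot.getPosition(wMe);                    // current end-effector position in the world frame
cMo = cMe.inverse() * wMe.inverse() * wMo; // new position of the camera wrt the target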
The loop is stopped as soon as the feature error becomes small. A possible form of this test, assuming the vpServo instance is named task:

if (task.getError().sumSquare() < 0.0001) {
  std::cout << "Reached a small error. We stop the loop... " << std::endl;
  break;
}
Unicycle with a moving camera
In this section we consider the following unicycle:
This robot has 3 dof: $(v, \omega, \dot{q})$, where as previously $v$ and $\omega$ are the translational and rotational velocities, applied here at point M, and $\dot{q}$ is the pan velocity of the head. The position of the end-effector E depends on the pan position $q$. The camera at point C is rigidly attached to the robot at point E. The homogeneous transformation between C and E is given by cMe. This transformation is constant.
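This robot is simulated by vpSimulatorPioneerPan. A minimal usage sketch, assuming the 3-dimensional control vector is ordered as $(v, \omega, \dot{q})$:

#include <visp3/robot/vpSimulatorPioneerPan.h>

vpSimulatorPioneerPan robot;             // unicycle with a panning head
vpColVector v(3);
v[0] = 0.1;                              // translational velocity v
v[1] = 0.0;                              // rotational velocity omega
v[2] = 0.05;                             // pan velocity of the head
robot.setVelocity(vpRobot::ARTICULAR_FRAME, v);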
Notice here that the control law is computed using the current interaction matrix, i.e. the one computed from the current visual feature values.
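With the vpServo API this corresponds to selecting the current interaction matrix type:

task.setInteractionMatrixType(vpServo::CURRENT, vpServo::PSEUDO_INVERSE);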