TVM  0.9.2
How to write a problem

Preliminaries

In this tutorial, we look at how to write and solve a control problem in TVM, through an Inverse Kinematics (IK) example.

We consider a 2d 3-link robot, and we want to bring its end effector onto a circle (see the figure above). Let's call \( q \in \mathbb{R}^3 \) the configuration of the robot, and \( g(q) \) the position of its end effector. For a given initial \( q_0 \), we are interested in computing a trajectory \( q(t) \) converging to a value where

\( \left\| g(q) - c \right\|^2 - r^2 = 0 \qquad (1) \)

where \( c \) and \( r \) are the center and radius of the circle.

Note that the constraint above can be rewritten equivalently by introducing an intermediate variable \( x \in \mathbb{R}^2 \), and requiring that

\( \begin{align} g(q) - x = 0 \qquad (2) \\ \left\| x - c \right\|^2 - r^2 = 0 \qquad (3) \end{align} \)

The constraint of having the end effector of a robot on a circle is very specific. So, rather than writing a dedicated function for it and using expression (1), and for the purpose of illustrating features of TVM, we will use formulation (2)-(3).

Let us denote \( e_1(q,x) = g(q) - x\) and \( e_2(x) = \left\| x - c \right\|^2 - r^2\), and further assume that each joint must stay within the bounds \( [-\pi/2, \pi/2]\). The target configuration must be such that

\( \begin{align} e_1(q,x) = 0 & \\ e_2(x) = 0 & \\ -\pi/2 \leq q_i \leq \pi/2&, \ i=1..3 \end{align} \)

This is a static target. If we want to control the robot toward such a configuration, we need to describe how it should converge to it from a configuration that does not respect the constraints. This behavior description is what we call a task dynamics in TVM. For the two equality constraints, we will specify the desired rate of reduction of the functions \( e_i \) with a simple proportional behavior: \( \dot{e}_i^* = -k_p e_i \). The control should then try to achieve \( \dot{e}_i = \dot{e}_i^* \). For the bounds, we will use a velocity damper of the form

\( \begin{equation} \dot{d}^* = \left\{ \begin{array}{ll} -\xi \frac{d-d_s}{d_{int}-d_s} &\mbox{if}\ d \leq d_{int}\\ -\infty & \mbox{otherwise} \end{array}\right. \qquad (4) \end{equation} \)

for \( d \) defined as \( d_i^- = q_i + \pi/2 \) or \( d_i^+ = \pi/2 - q_i \) (see also tvm::task_dynamics::VelocityDamper for more details).
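
For illustration, with \( d_s = 0.01 \), \( d_{int} = 1 \) and a purely illustrative damping gain \( \xi = 10 \), a joint at distance \( d = 0.5 \) from its bound gets

\( \dot{d}^* = -10\, \frac{0.5 - 0.01}{1 - 0.01} \approx -4.95 \)

i.e. the distance to the bound is allowed to decrease at a rate of at most about 4.95, and this allowed rate shrinks to 0 as \( d \) approaches \( d_s \).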

The trajectory of \( q \) is not uniquely defined by the above specifications (we are only constraining 2 of the 3 degrees of freedom of the robot). As an additional requirement that helps specify the trajectory completely, we would like the velocity to be minimal, which is achieved for \( \dot{q} = 0 \). This, however, will have a lower priority than fulfilling the other tasks.

What we wish to achieve (our IK problem) is thus

\( \begin{align} e_1(q,x) = 0 &,\ \ \dot{e}_1^* = -k_{p1} e_1 & \mbox{(high priority)} \quad (5.a)\\ e_2(x) = 0 &,\ \ \dot{e}_2^* = -k_{p2} e_2 & \mbox{(high priority)} \quad (5.b)\\ -\pi/2 \leq q_i \leq \pi/2&,\ \ \dot{d}_i^{-*},\ \dot{d}_i^{+*} \mbox{ as in (4)}, \ i=1..3 & \mbox{(high priority)} \quad (5.c)\\ \dot{q} = 0 & & \mbox{(low priority)} \quad (5.d) \end{align} \)

This is the problem we will write in TVM.

A triplet (function, comparison to 0, task dynamics), such as \( (e_1, =, \dot{e}_1^*) \), constitutes a task.

The chosen task dynamics specify the desired derivatives of the error functions. The instantaneous (linearized) control problem we will end up solving is thus

\( \begin{align} \min_{\dot{q}, \dot{x}}. &\ \frac{1}{2}\left\| \dot{q} \right\|^2 &\quad (6)\\ \mbox{s.t.} &\ \frac{\partial e_1}{\partial q}(x,q) \dot{q} + \frac{\partial e_1}{\partial x}(x,q) \dot{x} = -k_{p1} e_1(x,q)\\ &\ \frac{\partial e_2}{\partial x}(x) \dot{x} = -k_{p2} e_2(x)\\ &\ \dot{d}_i^{-*} \leq \dot{q}_i \leq -\dot{d}_i^{+*}, \ i=1..3 \end{align} \)

However, we do not need to write it ourselves: TVM takes care of it automatically.

Solving this problem for some values \( (q_k, x_k) \), we get the optimal values \( (\dot{q}^*, \dot{x}^*) \). We can then integrate with a given timestep \( dt \):

\( \begin{align} q_{k+1} = q_k + \dot{q}^* dt\\ x_{k+1} = x_k + \dot{x}^* dt \end{align} \)

The process can be repeated until convergence.

Implementation outline

To define and solve the problem above, we need the following steps:

  1. Declare and initialize the variables of our problem
  2. Define the functions \( e_1 \) and \( e_2 \) (the other functions are trivial and can be written directly)
  3. Write the control problem
  4. Create a resolution scheme
  5. Solve and integrate until convergence

To define the functions, we assume that we have the following classes derived from tvm::function::abstract::Function:

  • Simple2dRobotEE, which computes the end-effector position of a 2d n-link robot (i.e. the function \( g(q) \))
  • SphereFunction, which computes the distance of a point to a sphere (that is, a circle in 2d)
  • IdentityFunction, which is self-explanatory
  • DifferenceFunction, which computes the difference of two functions

The implementation of these functions is not the topic of this example. The way to write functions is described in How to create new TVM functions.
Note
Except for IdentityFunction, these functions are example functions, not part of the TVM library.

Implementation details

Note
In the following, the code snippets are written with the assumption of these declarations:
using namespace Eigen;
using namespace tvm;
using namespace tvm::requirements;
using namespace tvm::task_dynamics;
using std::make_shared;
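
For reference, here is a minimal sketch of the includes behind these snippets, based only on the headers mentioned in this tutorial (the example-specific functions Simple2dRobotEE, SphereFunction and Difference come from the example file itself):

#include <tvm/Variable.h>                        // tvm::Space and tvm::Variable
#include <tvm/LinearizedControlProblem.h>        // tvm::ControlProblem and tvm::LinearizedControlProblem
#include <tvm/scheme/WeightedLeastSquares.h>     // the resolution scheme used below
#include <tvm/solver/defaultLeastSquareSolver.h> // tvm::solver::DefaultLSSolverOptions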

Variable creation

Variables do not exist on their own. They are points in some mathematical spaces. To create a variable, we thus need to have a space first. The objects tvm::Space and tvm::Variable are available through the inclusion of tvm/Variable.h. Here our variables live in Euclidean spaces ( \( \mathbb{R}^2 \) and \( \mathbb{R}^3 \)), so simply giving the dimension of the space is enough to characterize it. Non-Euclidean manifolds can be described as well with tvm::Space (to e.g. use SO(3) or SE(3)). tvm::Space acts as a factory for tvm::Variable, requiring a name to create a variable:

Space R2(2); // Creating a 2-dimensional space
VariablePtr x = R2.createVariable("x");
Space R3(3); // Creating a 3-dimensional space
VariablePtr q = R3.createVariable("q");

This is one of only two ways to create variables (the other being to get a derivative or primitive of an already existing variable). tvm::Variable keeps a copy of the tvm::Space it was created from, so there is no need to pay attention to the lifetime of that tvm::Space.
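
As a small illustration of the second way, the derivative variables used later in this tutorial are obtained with tvm::dot:

VariablePtr dq = dot(q);     // first time derivative of q
VariablePtr ddq = dot(q, 2); // second time derivative of q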

It is good practice to initialize the value of the variables immediately. This can be done in one of the following two ways:

x << 0.5, 0.5; // Eigen-like initialization
q->value(Vector3d(0.4, -0.6, -0.1)); // Change value through setter.

Task function definitions

We create the function \( e_1 \) as follows:

auto g = make_shared<Simple2dRobotEE>(q, Vector2d(-3, 0), Vector3d(1, 1, 1)); // g(q)
auto idx = make_shared<function::IdentityFunction>(x); // I x
auto e1 = make_shared<Difference>(g, idx); // e_1(q,x) = g(q) - x

The first line creates the function g computing the position of the end-effector of a robot whose configuration is given by the variable q, which is rooted at \( (-3,0) \), and whose 3 links have unit length (Vector3d(1,1,1)).
The second line creates an identity function of the variable x.
The third line defines \( e_1(q,x) = g(q) - x \). Note that Difference automatically detects its variables; there is no need to specify them.

For \( e_2 \), we take a circle with center \( c = (0, 0) \) and radius 1:

auto e2 = make_shared<SphereFunction>(x, Vector2d(0, 0), 1); // e_2(x)

Writing the control problem

To describe problem (5), we need to populate an instance of tvm::ControlProblem. This is done by adding tasks (tvm::Task) paired with the way we want them to be solved, e.g. the priority level, possible weights, etc., which is known in TVM as solving requirements and is implemented by the classes found in tvm::requirements. To do so, we simply use the method tvm::ControlProblem::add.
While we can construct a tvm::Task explicitly from a triplet (function, operator, task dynamics), tvm::ControlProblem::add in conjunction with some utility functions of TVM offers a lighter syntax that we will demonstrate.

ControlProblem pb;
auto t1 = pb.add(e1 == 0., Proportional(2), PriorityLevel(0));
auto t2 = pb.add(e2 == 0., Proportional(2), PriorityLevel(0));
auto t3 = pb.add(-b <= q <= b, VelocityDamper({ 1, 0.01, 0, 0.1 }), PriorityLevel(0));
auto t4 = pb.add(dot(q) == 0., { PriorityLevel(1), AnisotropicWeight(Vector3d(10,2,1)) });

The first line creates a tvm::ControlProblem instance. The last four lines are a direct transcription of (5.a) - (5.d):

  • Line 2 creates the task based on the error function \( e_1 \), which must be an equality task, with a task dynamics of type tvm::task_dynamics::Proportional whose gain \( k_p \) is 2. This task is required to be solved with a priority level of 0 (the highest priority).
    Note
    The dot after the 0 in e1==0. is important. If absent, the compiler will attempt a pointer comparison.
  • Line 3 does exactly the same for the task based on \( e_2 \).
  • Line 4 creates the task on the bounds. Here, b is the vector Vector3d b = Vector3d::Constant(tvm::constant::pi/2);. We can directly write the comparison of the variable q to its bounds -b and b without explicitly creating a function (more generally, linear expressions of the variables can be written directly). The task dynamics for this task is a velocity damper, whose constructor takes a tvm::task_dynamics::VelocityDamperConfig. The parameters read as follows: the first one (1) is \( d_{int}\), the second one (0.01) is \( d_s\). Since the third one is 0, the parameter \( \xi \) will be computed automatically. The fourth parameter is an offset for this automatic computation. See tvm::task_dynamics::VelocityDamper and tvm::task_dynamics::VelocityDamperConfig::VelocityDamperConfig for more details. This task also needs to be solved at priority level 0.
  • Line 5 introduces the task on \( \dot{q} \). It relies on the tvm::dot function, which returns the time derivative of a tvm::Variable (dot(q,2) would return \( \ddot{q} \)). The implicit task dynamics here is \( \dot{q}^* = 0 \), which is materialized by an instance of tvm::task_dynamics::None that we don't need to write explicitly. This task needs to be solved at priority level 1. We also give an example of weight: here we require that the weight on minimizing the velocity of the first joint is 5 times the weight for the second joint and 10 times the weight for the third one (note the use of brackets around the list of requirements).

So far, we have described the problem. Now we need to specify how we want to solve it. First, we will make a linearized version of it:

LinearizedControlProblem lpb(pb);

tvm::ControlProblem and tvm::LinearizedControlProblem are available through the inclusion of tvm/LinearizedControlProblem.h.

Then we choose a resolution scheme to solve it:

scheme::WeightedLeastSquares solver(solver::DefaultLSSolverOptions{});

Here we choose a scheme, available by inclusion of tvm/scheme/WeightedLeastSquares.h, that transforms our problem into a constrained least-squares problem, where the constraints come from the tasks at priority 0 and the objective is a weighted sum of the tasks at priority 1 (or higher, but this is out of scope), with the possible weights specified in the solving requirements. This scheme is built by passing options which in particular indicate the underlying solver to be used.

Note
TVM makes the distinction between a solver, which is essentially a routine to solve a type of problem, and a resolution scheme, which assembles the problem data in a specific way and calls one or several solvers on them. For example, tvm::scheme::WeightedLeastSquares "interprets" our tvm::LinearizedControlProblem as (6), and calls one of the possible QP solvers as specified by the options chosen.

By including tvm/solver/defaultLeastSquareSolver.h, and using tvm::solver::DefaultLSSolverOptions (or tvm::solver::DefaultLSSolverFactory), we are choosing the first available solver supported by TVM (see the header for more details on which solver gets chosen). If you want to choose a specific solver, you can simply include its header and use the corresponding option class. For example, QLD is available by including tvm/solver/QLDLeastSquareSolver.h and using tvm::solver::QLDLSSolverOptions. Choosing a specific solver offers finer control through its options.
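
For example, assuming QLD is available in your installation, the scheme creation above would become:

// requires the inclusion of tvm/solver/QLDLeastSquareSolver.h
scheme::WeightedLeastSquares solver(solver::QLDLSSolverOptions{});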

Now, we can simply call solver.solve(lpb). The variables of our problem have been automatically detected as being \( \dot{x} \) and \( \dot{q} \). Their counterparts in the code are dot(x) and dot(q), whose values after the call to solve contain the solution of the problem.

If we consider as a stopping criterion for the IK that we should have \( \left\| \dot{x} \right\| \leq 10^{-8} \) and \( \left\| \dot{q} \right\| \leq 10^{-8} \), then we can write the IK as

do
{
  solver.solve(lpb);
  x->value(x->value() + dot(x)->value() * dt);
  q->value(q->value() + dot(q)->value() * dt);
} while (dot(q)->value().norm() > 1e-8 || dot(x)->value().norm() > 1e-8);

with dt the value of the time step for integration (taken here as 0.1).

Here is the complete code for this example:

 
VectorXd IKExample()
{
  // dt (the integration time step, here 0.1) and maxIt (a limit on the number
  // of iterations) are assumed to be defined elsewhere.
  // Creating variable x in R^2 and initializing it to [0.5, 0.5]
  Space R2(2);
  VariablePtr x = R2.createVariable("x");
  x << 0.5, 0.5;
  // Creating variable q in R^3 and initializing it to [0.4, -0.6, -0.1]
  Space R3(3);
  VariablePtr q = R3.createVariable("q");
  q->value(Vector3d(0.4, -0.6, -0.1));
  // Creating the functions
  auto g = make_shared<Simple2dRobotEE>(q, Vector2d(-3, 0), Vector3d(1, 1, 1)); // g(q)
  auto idx = make_shared<function::IdentityFunction>(x);                        // I x
  auto e1 = make_shared<Difference>(g, idx);                                    // e_1(q,x) = g(q) - x
  auto e2 = make_shared<SphereFunction>(x, Vector2d(0, 0), 1);                  // e_2(x)
  Vector3d b = Vector3d::Constant(tvm::constant::pi / 2);
  // Creating the problem
  ControlProblem pb;
  auto t1 = pb.add(e1 == 0., Proportional(2), PriorityLevel(0));
  auto t2 = pb.add(e2 == 0., Proportional(2), PriorityLevel(0));
  auto t3 = pb.add(-b <= q <= b, VelocityDamper({1, 0.01, 0, 0.1}), PriorityLevel(0));
  auto t4 = pb.add(dot(q) == 0., {PriorityLevel(1), AnisotropicWeight(Vector3d(10, 2, 1))});
  // Linearization
  LinearizedControlProblem lpb(pb);
  // Creating the resolution scheme
  scheme::WeightedLeastSquares solver(solver::DefaultLSSolverOptions{});
  // IK loop
  int i = 0;
  do
  {
    solver.solve(lpb);
    x->value(x->value() + dot(x)->value() * dt);
    q->value(q->value() + dot(q)->value() * dt);
    ++i;
  } while((dot(q)->value().norm() > 1e-8 || dot(x)->value().norm() > 1e-8) && i < maxIt);
  std::cout << "At q = " << q->value().transpose() << ",\n e1 = " << e1->value().transpose() << "\n"
            << " convergence in " << i << " iterations" << std::endl;
  return q->value();
}

Remarks

  • We never have to define an optimization variable as an aggregate of \( \dot{x} \) and \( \dot{q} \). This is done automatically and internally by TVM. The user just needs to manipulate the variables independently, including to retrieve the solution.
  • In the definition of a task, the right hand side of the comparison operator (e.g. == in the first task) need not be 0. It can be any scalar or vector with the correct dimension (a scalar is interpreted as a vector of the same dimension as the left hand side, with all elements equal to the scalar). For example, if we wanted the robot end-effector to go to [-1;1], we could simply write g == Vector2d(-1,1) (see the sketch after these remarks).
  • As a shortcut, we can directly declare pb to be a tvm::LinearizedControlProblem and use the add method on it.
  • If you want to see the matrices and vectors assembled by the solver, you can pass solver::DefaultLSSolverOptions().verbose(true) at its creation.
  • Thanks to the introduction of the variable \( x \), it is easy to change the shape we want the robot end-effector to reach. For example, \( x \) could be constrained to a line \( n^T x = a \). This can be done by simply replacing e2 == 0. with n.transpose() * x == a, where n is an Eigen::VectorXd and a is a double, as sketched below.
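
As a small sketch of the second and last remarks above (the values of n and a, and the reuse of the Proportional(2) dynamics, are illustrative choices):

// Alternative to the circle tasks: bring the end-effector directly to [-1; 1]
auto tp = pb.add(g == Vector2d(-1, 1), Proportional(2), PriorityLevel(0));
// Alternative to the task on e2: constrain x to the line n^T x = a
VectorXd n = Vector2d(1, 1); // illustrative line normal
double a = 0.5;              // illustrative offset
auto tl = pb.add(n.transpose() * x == a, Proportional(2), PriorityLevel(0));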

To go further

Substitutions

The introduction of the variable \( x \) is useful to write the problem simply, without having to implement a specific function for our particular example. But it also makes the problem bigger, and that could be a concern for computation time.

However, the variable \( x \) appears in a very simple way in \( e_1 \), and thus in the linearization of problem (6), \( \frac{\partial e_1}{\partial q}(x,q) \dot{q} + \frac{\partial e_1}{\partial x}(x,q) \dot{x} = -k_{p1} e_1(x,q) \): the matrix \( \frac{\partial e_1}{\partial x}(x,q) \) is simply \( -I \), i.e. there is a simple way to express \( \dot{x} \) from \( \dot{q} \) and the other quantities of the problem.
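
Concretely, since \( \frac{\partial e_1}{\partial x} = -I \), the first constraint of (6) rearranges into

\( \dot{x} = \frac{\partial e_1}{\partial q}(x,q) \dot{q} + k_{p1} e_1(x,q) \)

which gives \( \dot{x} \) directly as a function of \( \dot{q} \).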

We can tell the scheme that it can use the constraint derived from the task \( e_1 \) to pre-solve the problem in \( \dot{x} \), by adding

lpb.add(hint::Substitution(lpb.constraint(t1.get()), dot(x)));

Doing so, the scheme will reduce the problem to an optimization on \( \dot{q} \) only, by substituting \( \dot{x} \) with the expression deduced from the specified constraint. After solving the reduced problem, the value of \( \dot{x} \) is computed as well. In the end, the problem solved is exactly the same as what we would have obtained by manually implementing equation (1), i.e. a composition of Simple2dRobotEE and SphereFunction, but it was obtained with less coding and in a more generic way.

Substitutions work even when the matrix in front of the variable to substitute is not simple, and even if that matrix is not invertible. However, they are useful only when the matrix has a structure or properties that can help speed up the computations. Otherwise, using them is at best unnecessary and can even degrade the performance of the resolution.

Formulation in acceleration

So far, we have used task dynamics of order one (or zero for the velocity task), and thus the variables were automatically deduced to be \( \dot{x} \) and \( \dot{q} \). We may want to use task dynamics of order two instead, in which case the linearized problem will have \( \ddot{x} \) and \( \ddot{q} \) as variables. This can be done by changing the definition of the control problem, e.g.

auto t1 = pb.add(e1 == 0., ProportionalDerivative(50), PriorityLevel(0));
auto t2 = pb.add(e2 == 0., ProportionalDerivative(50), PriorityLevel(0));
auto t3 = pb.add(-b <= q <= b, VelocityDamper(dt, { 1., 0.01, 0, 0.1 }), PriorityLevel(0));
auto t4 = pb.add(dot(q, 2) == 0., { PriorityLevel(1), AnisotropicWeight(Vector3d(10,2,1)) });

The rest of the code is left unchanged, except for the need to initialize the values of \( \dot{x} \) and \( \dot{q} \), and to perform the integration from \( \ddot{x} \) and \( \ddot{q} \).
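
A minimal sketch of these changes, assuming we keep the same simple Euler integration with the same dt:

// Initialization of the velocity variables, before the loop
dot(x)->value(Vector2d::Zero());
dot(q)->value(Vector3d::Zero());
// Inside the loop, after solver.solve(lpb): integrate accelerations into
// velocities, then velocities into positions
dot(x)->value(dot(x)->value() + dot(x, 2)->value() * dt);
dot(q)->value(dot(q)->value() + dot(q, 2)->value() * dt);
x->value(x->value() + dot(x)->value() * dt);
q->value(q->value() + dot(q)->value() * dt);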

Example file

example/ProblemWritingExample.cpp
