
Comments (27)

cdellin avatar cdellin commented on May 5, 2024

Before I review the API (looks good at first glance!) I wanted to throw up a couple of thoughts on the vector-valued vs. scalar-valued constraint question that @mkoval referenced. I'm sure @siddhss5 can chime in and correct me (-:

  1. Vector-valued constraints are a more natural way to combine multiple constraints. This of course happens all the time (e.g. a TSR restricting the z-position and pitch of an object). These constraints can simply be stacked into a larger vector. Combining constraints into a scalar-valued function requires you to commit to a particular method (e.g. vector norm? squared norm? what weights?).
  2. Vector-valued constraints are faster to project onto. This is based a bit on my own fuzzy intuition, but a projection operator considering a vector-valued constraint (and its Jacobian) can consider all sub-constraints simultaneously, whereas a scalarized version only operates on a single gradient vector. I threw together a quick PDF comparing vector vs. scalarized when using Newton's method. It's interesting that whereas the vector version does Jacobian-pseudoinverse, the scalarized version does Jacobian-transpose (a small sketch of the two update rules follows this list).
  3. Vector-valued functions can naturally handle multiple inequality constraints. Representing these via a scalarized function requires ugly non-differentiable functions.
  4. Vector-valued constraints give planners useful derivative information on the constraint surface itself. All reasonable scalarized versions are not really constraint functions, but error/loss functions (everywhere non-negative, 0 on the constraint surface). As a consequence, they are 0 on the constraint itself, and their gradient at that point may not even exist (sub-gradients only). CHOMP, for example, uses the constraint Jacobian to project its steps onto the linearization of the constraint, which is not possible with a scalarized constraint.
  5. It's what other software packages do. For example, trajopt and the Goal Set CHOMP paper.
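To make point 2 concrete, here is a minimal sketch of the two update rules, assuming Eigen and a constraint f: R^n -> R^m with Jacobian J evaluated at the current q; the helper names are made up for illustration and are not from the PDF mentioned above:

#include <Eigen/Dense>

// Vector-valued case: one projection step q <- q - J^+ f(q), where J^+ is the
// Moore-Penrose pseudoinverse (applied here via an SVD least-squares solve).
Eigen::VectorXd vectorValuedStep(const Eigen::VectorXd& q,
                                 const Eigen::VectorXd& f,   // f(q), length m
                                 const Eigen::MatrixXd& J)   // m x n Jacobian at q
{
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(
      J, Eigen::ComputeThinU | Eigen::ComputeThinV);
  return q - svd.solve(f);  // svd.solve(f) == J^+ f
}

// Scalarized case: one gradient step on c(q) = 0.5 * ||f(q)||^2, whose
// gradient is J^T f(q); note the extra step size alpha that must be chosen.
Eigen::VectorXd scalarizedStep(const Eigen::VectorXd& q,
                               const Eigen::VectorXd& f,
                               const Eigen::MatrixXd& J,
                               double alpha)
{
  return q - alpha * J.transpose() * f;
}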


mkoval avatar mkoval commented on May 5, 2024

Trajopt

Now, let's look at the user-defined costs and constraints. We define a vector-valued error function f(x), and the Jacobian of this function dfdx(x). f(x) returns the x and y components of the transformed vector (which both equal zero if the vector points up).

# From the trajopt tutorial; robot, tool_link, arm_inds, arm_joints, and
# local_dir are defined earlier in that tutorial's setup code.
import numpy as np

def f(x):
    robot.SetDOFValues(x, arm_inds, False)
    return tool_link.GetTransform()[:2,:3].dot(local_dir)

def dfdx(x):
    robot.SetDOFValues(x, arm_inds, False)
    world_dir = tool_link.GetTransform()[:3,:3].dot(local_dir)
    return np.array([np.cross(joint.GetAxis(), world_dir)[:2] for joint in arm_joints]).T.copy()

Question for @psigen: f(x) and dfdx(x) seem to operate on poses. What frame are these in? Does Trajopt compute the Jacobian internally?

DART

Objective

  • double eval (const Eigen::VectorXd &_x): Evaluate and return the objective function at the point x.
  • void evalGradient (const Eigen::VectorXd &_x, Eigen::Map< Eigen::VectorXd > _grad): Evaluate the gradient of the objective function at the point _x and store it in _grad.

pr-constraint

HolonomicConstraint

  • int eval_dim(int * const n_config, int * const n_value): report the configuration dimension and the constraint-value dimension
  • int eval(const double * const config, double * const value): evaluate the constraint value at config
  • int eval_jacobian(const double * const config, double * const jac): evaluate the constraint Jacobian at config
  • int project(const double * const config, double * new_config): project config onto the constraint, writing the result to new_config
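To make the semantics concrete, here is a toy sketch of how those four functions might be implemented for a trivial constraint. It is illustrative only, does not use the real pr-constraint headers, and assumes a 7-DOF configuration and a return-0-on-success convention:

#include <cstring>

// Toy constraint: the configuration must satisfy config[2] == 0.
int eval_dim(int * const n_config, int * const n_value)
{
  *n_config = 7;  // configuration dimension (assumed)
  *n_value = 1;   // constraint-value dimension
  return 0;
}

int eval(const double * const config, double * const value)
{
  value[0] = config[2];  // zero exactly on the constraint
  return 0;
}

int eval_jacobian(const double * const config, double * const jac)
{
  (void)config;  // Jacobian is constant for this constraint
  for (int i = 0; i < 7; ++i)
    jac[i] = (i == 2) ? 1.0 : 0.0;  // 1 x 7 row: d(value) / d(config)
  return 0;
}

int project(const double * const config, double * new_config)
{
  std::memcpy(new_config, config, 7 * sizeof(double));
  new_config[2] = 0.0;  // closest configuration satisfying the constraint
  return 0;
}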


psigen avatar psigen commented on May 5, 2024

f(x) and dfdx(x) seem to operate on poses

@mkoval: I don't think so... why do you think this?
In the example you showed (and the way I usually write them), they are operating over Active DOFs.


mkoval avatar mkoval commented on May 5, 2024

@psigen Look at the example I posted above. They seem to return a 3 x 1 vector.


psigen avatar psigen commented on May 5, 2024

f(dof_values) -> [costs]

They can return any single cost or vector of costs: each is either minimized or maximized, depending on your objective direction or metadata on how to handle constraints (HINGE, etc).

If multiple values are returned, they are treated kind of like multiple constraint functions that just happen to get evaluated together (which helps to find useful Jacobian steps when you have something that needs to minimize errors along multiple orthogonal directions, but the computation is more efficient in bulk).


cdellin avatar cdellin commented on May 5, 2024

Note for the uninitiated that the trajopt docs are here and the docs for the underlying constraint definition are here (from the underlying Sequential Convex Optimization library). The pr-constraint definition is here.

Note also that there's a distinction between constraints and costs.

The trajopt and pr-constraint examples are constraints -- the constraint is satisfied if the returned vector is zero-valued. The example from the trajopt tutorial uses the x and y coordinates of the object's z axis (in the world frame). These aren't costs to be minimized or maximized -- although you can always turn a constraint into a cost if you'd like (e.g. c(x) = norm(f(x))).
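As a trivial illustration of that constraint-to-cost conversion, here is a hypothetical helper (the name makeCost is made up) that wraps a vector-valued constraint as a scalar cost:

#include <functional>
#include <Eigen/Dense>

// Wrap a vector-valued constraint f(x) = 0 as a scalar cost c(x) = ||f(x)||.
std::function<double(const Eigen::VectorXd&)> makeCost(
    std::function<Eigen::VectorXd(const Eigen::VectorXd&)> f)
{
  return [f](const Eigen::VectorXd& x) { return f(x).norm(); };
}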

From the wording of the kido example, it looks like it's intended as a cost (since it uses the terms "objective" and "objective function").

Of course in both of these cases, it's useful to be able to compute Jacobians. When used as a cost, it is often sufficient to use a scalar-valued function (in which case, the Jacobian is just a gradient). This is what kido appears to do. When used as a constraint, it is often necessary to do root-finding (aka projection).


mkoval avatar mkoval commented on May 5, 2024

Of course in both of these cases, it's useful to be able to compute Jacobians. When used as a cost, it is often sufficient to use a scalar-valued function (in which case, the Jacobian is just a gradient). This is what kido appears to do. When used as a constraint, it is often necessary to do root-finding (aka projection).

Good point. I didn't see the distinction before you pointed this out. I think ErrorMethod returns a vector of constraint deviations. It's hard-coded to Vector6d for twists:

const Eigen::Vector6d& evalError(const Eigen::VectorXd& _q);

I believe this gets converted to a scalar-valued optimizer::Function that is passed to the solver. I'm having some trouble following the code path here, so maybe @jslee02 can clarify?


In summary, DART provides: (1) the scalar-valued cost function optimizer::Function and (2) a 6-DOF constraint, dynamics::InverseKinematics::ErrorMethod. I think @cdellin is right - neither of these is what we are looking for.


mkoval avatar mkoval commented on May 5, 2024

I took a shot at an API:

class ProjectableRegion
{
public:
  virtual size_t getInputDimension() = 0;
  virtual double contains(const Eigen::VectorXd& _q) = 0;
  virtual Eigen::VectorXd project(const Eigen::VectorXd& _q) = 0;
};


class JacobianRegion
{
public:
  virtual size_t getInputDimension() = 0;
  virtual size_t getConstraintDimension() = 0;
  virtual Eigen::VectorXd getValue(const Eigen::VectorXd& _q) = 0;
  virtual Eigen::MatrixXd getJacobian(const Eigen::VectorXd& _q) = 0;
};


class ProjectableJacobianRegionAdaptor : public ProjectableRegion
{
public:
  explicit ProjectableJacobianRegionAdaptor(
    const std::unique_ptr<JacobianRegion>& _jacobianRegion);

  size_t getInputDimension() override;
  double contains(const Eigen::VectorXd& _q) override;
  Eigen::VectorXd project(const Eigen::VectorXd& _q) override;
};
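For illustration, here is a rough sketch of how ProjectableJacobianRegionAdaptor::project could be implemented by iterating Newton steps on the wrapped JacobianRegion. The member name mJacobianRegion, the iteration limit, and the tolerance are all assumptions, not part of the proposal:

Eigen::VectorXd ProjectableJacobianRegionAdaptor::project(const Eigen::VectorXd& _q)
{
  Eigen::VectorXd q = _q;
  for (int iteration = 0; iteration < 50; ++iteration)  // made-up iteration limit
  {
    const Eigen::VectorXd value = mJacobianRegion->getValue(q);
    if (value.norm() < 1e-6)  // made-up tolerance
      break;

    // Newton step: q <- q - J(q)^+ value, with J^+ applied via an SVD solve.
    const Eigen::MatrixXd jacobian = mJacobianRegion->getJacobian(q);
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(
        jacobian, Eigen::ComputeThinU | Eigen::ComputeThinV);
    q -= svd.solve(value);
  }
  return q;
}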

It's essentially an Eigen version of pr-constraint. The major difference is that I split ProjectableRegion from JacobianRegion. Sample-based methods (e.g. CBiRRT) would use ProjectableRegion and optimization techniques (e.g. CHOMP, Trajopt) would use JacobianRegion.

I am against grouping the two classes together, like they are in pr-constraint, because it requires committing to a default projection method (Newton's method in the case of pr-constraint). It makes more sense to use the same projection code, via ProjectableJacobianRegionAdaptor, everywhere.

@cdellin @psigen @lgw903 What were you thinking?


cdellin avatar cdellin commented on May 5, 2024

I like this (and I'm +1 on separating projectable objects from differentiable objects).

My main concern is calling these things Regions. Mathematically, a region [0] usually denotes an open subset. But the types of constraints that we typically consider (and indeed the ones where Jacobians are most useful and Newton's method is fast) are actually measure-zero sub-manifolds (e.g. keep the glass upright), which aren't regions of the full C-space in that sense. But maybe I'm too picky?

Also, what does double ProjectableRegion::contains(const Eigen::VectorXd& _q) do?

[0] https://en.wikipedia.org/wiki/Region_%28mathematics%29


cdellin avatar cdellin commented on May 5, 2024

BTW I realize that my naming criticism applies equally to TSRs ... though those can define regions, whereas I don't think anyone's actually implemented a CHOMP that works with inequality constraints, which would be required to handle such regions properly. Which I suppose raises another point: how to deal with inequality constraints in this API? How does TrajOpt do it? I definitely side-stepped that issue with pr-constraint.


mkoval avatar mkoval commented on May 5, 2024

Thanks for the comments @cdellin!

My main concern is calling these things Regions. Mathematically, a region usually denotes an open subset.

Good point. I'm not sure if this is a serious problem because the two quotes on that Wikipedia page ignore the open/closed distinction (emphasis mine):

According to Kreyszig,

A region is a set consisting of a domain plus, perhaps, some or all of its boundary points. (The reader is warned that some authors use the term "region" for what we call a domain [following standard terminology], and others make no distinction between the two terms.)

According to Yue Kuen Kwok,

An open connected set is called an open region or domain. ...to an open region we may add none, some, or all its limit points, and simply call the new set a region.

I'm open to renaming Region if you can come up with a better name. I don't like Manifold because it implies that the region is lower-dimensional. Maybe Constraint? Perhaps @siddhss5 has a better suggestion?

Which I suppose raises another point: how to deal with inequality constraints in this API? How does TrajOpt do it?

Trajopt's Constraint class has a type() function that returns a ConstraintType of EQ or INEQ. The API is otherwise identical. I assume the only difference is whether the constraint is satisfied when value() is zero or positive.

@psigen @cdellin What do you think about using the same API in Aikido?


cdellin avatar cdellin commented on May 5, 2024

I'm +1 on using the same style of definition as trajopt. I would perhaps rename ProjectableJacobianRegionAdaptor to something with NewtonsMethod in it, as there are other ways to do the projection. Also, it would need to know whether the underlying JacobianRegion is an EQ or INEQ when doing a projection.

One thing that tripped me up when attempting to implement CBiRRT using an API like this was that I couldn't find a clean way to implement Dmitry's IK handshaking process for TSR chains. That's something we should look into at some point.

An alternative view on all of this is that perhaps the JacobianRegion can just be called a DifferentiableFunction, and then the same class could be used as a cost as well. (Calling it a Region implies a strong tie to a state space representation, but the way it's implemented it only works for real vector spaces, right?)

Also, what does that contains() method do? (-:


psigen avatar psigen commented on May 5, 2024

I like the Projectable/Differentiable split.

It would be nice to also have a mechanism for only specifying an objective function and have numerical differentiation handle the rest. I think we can probably do that using @mkoval's Adaptor mechanism with various numerical differentiation schemes, although a shorter naming convention would be nice.
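A minimal sketch of what such a finite-difference adaptor might look like (central differences; the class and member names here are hypothetical, not part of any proposal in this thread):

#include <functional>
#include <utility>
#include <Eigen/Dense>

// Hypothetical adaptor: wraps a scalar cost f(q) and provides a numerical
// gradient via central differences, so the user never writes a gradient.
class FiniteDifferenceCost
{
public:
  explicit FiniteDifferenceCost(std::function<double(const Eigen::VectorXd&)> _f,
                                double _stepSize = 1e-6)
    : mF(std::move(_f)), mStepSize(_stepSize) {}

  double getValue(const Eigen::VectorXd& _q) const { return mF(_q); }

  Eigen::VectorXd getGradient(const Eigen::VectorXd& _q) const
  {
    Eigen::VectorXd gradient(_q.size());
    for (int i = 0; i < _q.size(); ++i)
    {
      Eigen::VectorXd qPlus = _q, qMinus = _q;
      qPlus[i] += mStepSize;
      qMinus[i] -= mStepSize;
      gradient[i] = (mF(qPlus) - mF(qMinus)) / (2.0 * mStepSize);
    }
    return gradient;
  }

private:
  std::function<double(const Eigen::VectorXd&)> mF;
  double mStepSize;
};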

Note that I just ran into the brick wall where I need to represent a projection constraint (TSR) in TrajOpt, which only handles differentiable constraints as far as I can tell, and the performance hit for emulating one as the other seems pretty severe.
personalrobotics/or_trajopt#20


psigen avatar psigen commented on May 5, 2024

Hmm, it almost sounds like Projectable* stuff is defining a Region while Differential* is defining a Field, from the physics definitions, at least. @cdellin is that accurate?

I don't like Manifold because it implies that the region is lower-dimensional.

I think this might be a bias based on your usage. AFAIK the definition of a manifold doesn't require lower dimensionality, you can embed the same or higher dimensionality, as long as you can define the homeomorphism to a Euclidean-ish real vector space of that dimension. Now, as a separate issue, that homeomorphism is definitely not a requirement of our constraint function, so we might want to avoid the name for that reason.


psigen avatar psigen commented on May 5, 2024

After all this, maybe we should just go back to Constraint and Cost, with the definition that

  • a constraint somehow restricts the valid solution space
    (maybe defining a Region or a Field, we don't technically care)
  • a cost defines a field over the space (that can be optimized)

then we can go to town with things like:

Base classes

  • ProjectableConstraint
  • DifferentiableConstraint
  • ScalarCost
  • DifferentiableCost

Converters

  • NewtonsMethod(DifferentiableConstraint) -> ProjectableConstraint
  • FiniteDifferencer(ScalarCost) -> DifferentiableCost

Trajopt exposes several Cost (ABS, HINGE, SQUARED) and Constraint (INEQ, EQ) types. I am not sure if it is really necessary to expose this at all, as I don't really know what the optimizer can do with this information.


mkoval avatar mkoval commented on May 5, 2024

Hmm, it almost sounds like Projectable* stuff is defining a Region.

I disagree. As @cdellin pointed out, a Region is generally an open set. We project onto closed sets - otherwise the projection would result in an infimum that is not in the set!

I think this might be a bias based on your usage. AFAIK the definition of a manifold doesn't require lower dimensionality, you can embed the same or higher dimensionality, as long as you can define the homeomorphism to a Euclidean-ish real vector space of that dimension.

Fair enough. R^n is a manifold in R^n, even if we don't typically refer to it as such. I agree with your broader point that Manifold requires more structure than we wish to enforce; i.e. that it is locally homeomorphic to Euclidean space.


I like @psigen's proposal. @cdellin @lgw903 What do you think?

We do need to specify the ConstraintType of each constraint as INEQ or EQ - the optimizer needs to know whether it is trying to satisfy constraint->getValue(q) >= 0 or constraint->getValue(q) = 0!
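As a concrete (and hypothetical) illustration of that distinction, a planner-side satisfaction check might look like this, with the tolerance chosen arbitrarily and the INEQ convention taken from the sentence above:

#include <Eigen/Dense>

enum class ConstraintType { EQ, INEQ };

// EQ constraints are satisfied when getValue(q) is (approximately) zero;
// INEQ constraints when every component of getValue(q) is non-negative.
bool isSatisfied(ConstraintType type, const Eigen::VectorXd& value,
                 double tolerance = 1e-6)
{
  if (type == ConstraintType::EQ)
    return value.lpNorm<Eigen::Infinity>() < tolerance;
  return (value.array() >= -tolerance).all();
}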

Let's tackle Costs in another discussion. I'd rather not open another can of worms here. 😓


mkoval avatar mkoval commented on May 5, 2024

@cdellin and I had a discussion offline. He re-iterated this point:

One thing that tripped me up when attempting to implement CBiRRT using an API like this was that I couldn't find a clean way to implement Dmitry's IK handshaking process for TSR chains. That's something we should look into at some point.

It's worthwhile to mock up a TSRChain end-effector constraint (which requires IK handshaking) in the proposed API before we commit to anything. It's a tricky use-case to get right and something that we frequently use on HERB.


mkoval avatar mkoval commented on May 5, 2024

@lgw903, @siddhss5, and I had a detailed discussion about this in person. There were two main points of discussion.

What is the equivalent of DifferentiableConstraint for an Isometry3d?

Ideally, we would define a DifferentiablePoseConstraint and provide an adaptor that uses the manipulator Jacobian to lift a DifferentiablePoseConstraint into a DifferentiableConstraint using the chain rule. Unfortunately, SE(3) is not a real vector space, so we cannot simply shoehorn this into the existing DifferentiableConstraint class without committing to a particular parameterization.

@siddhss5 suggested forcing the user to define an error metric that always returns a scalar for pose constraints. The gradient is an element of the tangent space se(3), which we return as a Vector3d. We would provide one or more specialized root finder classes that specifically convert a DifferentiablePoseConstraint to a ProjectableConstraint<Isometry3d>.

This could be implemented by committing to a specific parameterization to convert SE(3) to a real vector space, running Newton's method on the real vector space, and projecting back into SE(3) after each step. For example, QuaternionNewtonsMethod would convert SE(3) to R^7 and re-normalize the quaternion after each iteration.

The class hierarchy would be the same as I proposed above, with two additions:

class DifferentiablePoseConstraint {
  virtual double getValue(const Eigen::Isometry3d&) = 0;
  virtual Eigen::Vector3d getGradient(const Eigen::Isometry3d&) = 0;
};

class QuaternionNewtonsMethod : public ProjectableConstraint<Eigen::Isometry3d> {
  QuaternionNewtonsMethod(DifferentiablePoseConstraintPtr);
  virtual Eigen::Isometry3d project(Eigen::Isometry3d);
};
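To make the R^7 parameterization concrete, here is a minimal sketch of the pack/unpack step such a root finder would perform around each Newton iteration; the helper names toVector and fromVector are illustrative only:

#include <Eigen/Geometry>

// Pack an Isometry3d as [qw, qx, qy, qz, x, y, z].
Eigen::Matrix<double, 7, 1> toVector(const Eigen::Isometry3d& pose)
{
  Eigen::Matrix<double, 7, 1> x;
  const Eigen::Quaterniond q(pose.rotation());
  x << q.w(), q.x(), q.y(), q.z(), pose.translation();
  return x;
}

// Unpack, re-normalizing the quaternion to project the iterate back onto
// SE(3) after a step taken in R^7.
Eigen::Isometry3d fromVector(const Eigen::Matrix<double, 7, 1>& x)
{
  Eigen::Quaterniond q(x[0], x[1], x[2], x[3]);
  q.normalize();

  Eigen::Isometry3d pose = Eigen::Isometry3d::Identity();
  pose.linear() = q.toRotationMatrix();
  pose.translation() = x.tail<3>();
  return pose;
}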

I'd like to draw attention to two important points:

  1. Nothing in the core API commits to a parameterization; this is all inside the root finder.
  2. In most cases, we still go through the route DifferentiablePoseConstraint -> JacobianAdaptor -> DifferentiableConstraint -> NewtonsMethod for motion planning. These SE(3) root finders are only necessary for visualization and other purposes, e.g. sampling object placements from a TSR.

The key limitation of this API is that getValue() must return a scalar on DifferentiablePoseConstraint, whereas we support a vector on DifferentiableConstraint. This brings me to...

Should a DifferentiableConstraint be scalar-valued or vector-valued?

@cdellin suggested that we support a vector-valued function. This is fine for VectorXd, but it is unclear what the DifferentiablePoseConstraint API would look like for a vector-valued function since the input is not a real vector space. There are three possibilities:

  1. DifferentiableConstraint and DifferentiablePoseConstraint both return scalars.
  2. DifferentiableConstraint returns VectorXd and DifferentiablePoseConstraint returns a scalar
  3. Figure out a unified API that supports constraints over both R^n and SE(3).

I initially was in favor of (2), but am becoming more convinced of (1) after talking to @siddhss5. Since @cdellin is in favor of vector-valued functions, can you clarify a few points:

  • Why is a vector-valued function better than a scalar-valued function here?
  • How would we define a termination condition for the vector-valued function?
  • Should we support a different equality/inequality tag for each dimension of the vector?
  • Is there a difference between n real-valued constraints and one R^n constraint?


mkoval avatar mkoval commented on May 5, 2024

We just hashed this out on the whiteboard. This is the API we decided on:

[Image: projectable_whiteboard (whiteboard photo of the agreed-upon API)]


cdellin avatar cdellin commented on May 5, 2024

Regarding the API, we might consider splitting the DifferentiablePoseConstraint into a DifferentiableOrientationConstraint and a regular DifferentiableConstraint on R^3 for the position component of the pose. That way, the magic SO(3) projection logic could be reused if only orientation matters (without having to be duplicated). We would also consider two additions:

  • CompoundConstraint, a type of DifferentiableConstraint which stacks two input constraints.
  • A convenience DifferentiablePoseConstraint adaptor which subclasses CompoundConstraint and operates on Isometry3d objects.

... that is, unless you were considering writing customized pose projectors that don't just operate independently on the position and orientation errors.

The other advantage of this structure is that it more closely matches the way OMPL treats SO(n) / SE(3).


psigen avatar psigen commented on May 5, 2024

I am in agreement with @cdellin here. Optimizers should be able to do much more with a vector-valued function. Forcing users to push things through an arbitrary error metric loses a lot of information and can mess up techniques that switch between or selectively enforce constraints during the optimization.


mkoval avatar mkoval commented on May 5, 2024

Originally, I thought our goal was to avoid committing to any single SE(3) or se(3) parameterization in the constraint API. I now see that this was misguided: the important thing is that users, e.g. a root finder or constrained motion planner, interpret the values as having SE(3) instead of R^n topology. The key advantage of adding a separate DifferentiablePoseConstraint class is that the compiler prevents the user from accidentally treating the constrained variables as a real vector space.


I wanted to throw up a couple of thoughts on the vector-valued vs. scalar-valued constraint question...

@cdellin Thanks for the explanation. Part of my confusion was whether a constrained planner should accept a single DifferentiableConstraint (possibly a CompoundDifferentiableConstraint) or a collection of DifferentiableConstraints that the planner combines. From your post, and our discussion on Friday, I am now convinced that the latter option is correct.

This raises an important question: How do we combine multiple DifferentiablePoseConstraints that constrain multiple Frames? For example, consider constraining HERB's left end-effector to be at a fixed transform relative to his right end-effector. I suggest modifying getValue and getJacobian to accept a vector<Isometry3d> instead of a single Isometry3d. This fixes the immediate problem, but punts on the fundamental issue (see below).


Regarding the API, we might consider splitting the DifferentiablePoseConstraint into a DifferentiableOrientationConstraint and a regular DifferentiableConstraint on R^3 for the position component of the pose. That way, the magic SO(3) projection logic could be reused if only orientation matters (without having to be duplicated).

I am in favor of re-using the R^n logic to handle the translational degrees of freedom. I couldn't figure out how to make this work from an API perspective. An SE(3) constraint would be a compound of a DifferentiableSO3Constraint and a DifferentiableRealConstraint. In that case:

  1. getValue and getJacobian would have to accept both R^n and SO(3) inputs.
  2. How do I compound n real-vector constraints with m SO(3) constraints?
  3. How do I use CompoundConstraint to compound two other types of constraints?

This is a more fundamental problem than it may first appear. @cdellin and @siddhss5 both raised the point that we can't run the current pr-constraint implementation of CBiRRT on ADA because it does not support joints with SO(2) topology.

The best solution I can come up with is similar to OMPL's StateSpace concept. We would commit to a parameterization and represent all constraints using DifferentiableConstraint. Any methods that interact with the constraint would receive a StateSpace object that encodes the topology of the space, e.g. which dimensions should be interpreted as a quaternion parameterization of SO(3).

I am not sure what minimal set of operations (e.g. projection, distance metric, etc.) we would need to provide to satisfy our use cases (i.e. CHOMP, Trajopt, direct projection). I'm not even sure if these algorithms have been formulated for anything other than real vector spaces.

Hopefully @cdellin can inform us.


Forcing users to push things through an arbitrary error metric loses a lot of information and can mess up techniques that switch between or selectively enforce constraints during the optimization.

I am not sure what you mean by "switch between" constraints or "selectively enforcing constraints during the optimization." I am not sure what condition you intend to use to decide whether or not the optimizer should enforce a constraint, but I don't think any of the proposals we have discussed allows for this.


psigen avatar psigen commented on May 5, 2024

I'm not referring to anything the user is doing. I'm referring to the body of optimizer techniques that use unconstrained optimization sub-steps or gradient projection methods on an active set to improve performance.


cdellin avatar cdellin commented on May 5, 2024

I have to dive into the algorithms a bit more deeply -- my understanding is that current implementations of both CBiRRT and CHOMP require real-vector configuration spaces, but that the proofs (at least for CBiRRT) should work for C-spaces which are manifolds as long as a suitable projection operator is defined. From what I remember, Kyle actually adapted Dmitry's CBiRRT code to handle robots with circular joints, because we needed it for CHIMP.

If you want to tackle this constraint discussion over arbitrary state spaces, here's a very first pass (brain dump) at an OMPL-like class structure that might work:

// maps arbitrary State * -> vec<double> (all constraints are vector-valued)
// implements some general methods as @mkoval alluded to (projection?) TBD
// perhaps CBiRRT could take this
class Constraint {};

// maps RealVectorState * -> vec<double>
// defines methods such as getJacobian()
// CHOMP (as implemented) needs one of these
class RealVectorConstraint : public Constraint {};

// maps SO3State * -> vec<double>
// exposes fancy SO(3)-specific tangent space stuff?
class SO3Constraint : public Constraint {};

// maps arbitrary State * -> RealVectorState * (dim=3), SO3State *
// computes forward kinematics
class FK {};

// input:
// (a) a FK (implementing the forward kinematics)
// (b) a RealVectorConstraint (dim=3, for position constraint)
// (c) a SO3Constraint (for orientation constraint)
// this could be used for constraint-aware planners over general manifold C-spaces
// maps arbitrary State * -> vec<double> (all constraints are vector-valued)
class FKConstraintAdaptor : public Constraint {};

// maps RealVectorState * -> RealVectorState * (dim=3), SO3State *
// can be used for HERB (not for ADA)
class RealVectorFK : public FK {};

// input:
// (a) a RealVectorFK (implementing the forward kinematics)
// (b) a RealVectorConstraint (dim=3, for position constraint)
// (c) a SO3Constraint (for orientation constraint)
// this could be used for TSRs on HERB for both CHOMP and CBiRRT (as implemented)
class RealVectorFKConstraintAdaptor : public RealVectorConstraint {};

The interesting thing here is that even though HERB + TSRs use an SO(3) constraint object internally (e.g. constructed by a TSR factory), once married with the kinematics function, the result is just a RealVectorConstraint object (with getJacobian()) that can be passed to CHOMP. But no, this doesn't fix the ADA problem. I need to think about that some more.


gilwoolee avatar gilwoolee commented on May 5, 2024

Thanks for the great inputs, Chris. Here's a slightly generalized version of the above class structure:

We have a State class, with SO3State, SE3State, SO2State, SE2State, RealVectorState, and CompoundState inheriting from State. For CompoundState, we keep only a flattened version (a vector of States) and do not allow nested structure.

class Differentiable {
  Eigen::VectorXd getValue(State& s);
  Eigen::MatrixXd getJacobian(State& s);
};

class Projectable {
  bool contains(State& s);
  State project(State& s);
};

// bodyNode's transform W should satisfy innerConstraint.getValue(W) = 0 
class FKConstraintAdaptor : public Differentiable {

  // innerConstraint is a SE3 constraint; its getValue takes SE3State.
  FKConstraintAdaptor(dart::dynamics::BodyNodePtr& bodyNode, 
                      DifferentiablePtr& innerConstraint);

  // input: RealVectorState s
  Eigen::VectorXd getValue(State& s){
   bodyNode.setPositions(s.value);
   return innerConstraint.getValue(bodyNode.getTransform());
  }

  // Returns Jacobian using ChainRule
  Eigen::MatrixXd getJacobian(State& s);
};

class ProjectableByDifferentiable : public Projectable {
  ProjectableByDifferentiable(DifferentiablePtr& _differentiable);

  // Use Newton's method on _differentiable to project s to this constraint.
  State project(State& s);
};

I don't think we'd need RealVectorFK if we use dart's FK functionality.

With CompoundState, we can handle the ADA problem (which I assume you mean having both RealVectorState and SO2State) as well as any other combination of different state types.

It'd be the responsibility of the user (e.g. an optimizer) using Differentiable to update states (including SE3State, etc.) in a valid manner. We will be providing one version of it as ProjectableByDifferentiable (or any better name appreciated), which will use an SO(3) update method based on a modified version of this paper. I'm afraid any optimizer that doesn't handle SO(3) or SE(3) properly would operate only on RealVectorState or need some glue code.

One concern I have is the return type of Differentiable::getJacobian(). I think the mapping between a CompoundState and its Jacobian would be clearer if the return type were std::vector<Eigen::MatrixXd>, but a vector is unnecessary for all other State types.


mkoval avatar mkoval commented on May 5, 2024

For those of you who weren’t in lab over the last few days, our core realization is that:

It is fundamentally impossible for us to provide a common API to R^n and SO(n) state spaces. Mapping SO(n) to R^n either introduces singularities (Euler angles, spatial twists) or implicit constraints (unit quaternions, rotation matrices). The user (e.g. a root finding algorithm) must be cognizant of the topology of the space to handle this correctly. The simplest way to represent this is an opaque State struct similar to OMPL.

To reply to @lgw903’s questions:

With CompoundState, we can handle the ADA problem (which I assume you mean having both RealVectorState and SO2State) as well as any other combination of different state types.

Yes, a mixture of SO(2) and R dimensions is what I mean by the “ADA problem.” I agree that CompoundState is an elegant solution to this.

I'm afraid any optimizer that doesn't handle SO(3) or SE(3) properly would operate only on RealVectorState or need some glue code.

I don’t see any way around this for SO(3). This space fundamentally differs from R^n and requires special treatment. In the worst case, the user can always parameterize SO(3) to Euler angles and treat those dimensions as a real vector space. This is not a great solution because of singularities, but it will at least achieve a basic level of functionality.

There shouldn’t be any special treatment for SE(2) and SE(3) since these are simply compound states containing a RealVectorState and, respectively, a SO2State or SO3State.

One concern I have is the return type of Differentiable::getJacobian(). I think the mapping between CompoundState and its Jacobian would be clearer if we have the return type to be std::vector<Eigen::MatrixXd>. But vector is unnecessary for all other State types.

If we’re using an opaque State class, then I think it’s natural to also use an opaque class to represent an element of the tangent space (perhaps this should be called a Chart, Jacobian, or TangentSpace?). This would be a MatrixXd for primitive states - or a thin wrapper around a MatrixXd that provides convenient accessor methods - and a vector for CompoundState.


I generally agree with @lgw903’s modifications to @cdellin’s API proposal. My only major concern is that the adaptor classes are not sufficiently flexible to support the closed-chain and TSR chain constraints we use on HERB.

For example, I would expect the ChainRuleAdapter to be used like this:

// This is a constraint that operates on this CompoundConstraint: 
//
// SE3State <--------- HERB’s base pose 
// SE3State <--------- HERB’s right EE pose 
// SE3State <--------- HERB’s left EE pose 
//
DifferentiableConstraintPtr my_constraint = /* ... */;

// This uses the manipulator Jacobian to convert the above CompoundState into
// this CompoundState for configuration space planning:
//
// SE3State <--------- HERB’s base pose 
// SO2State        \
// SO2State         |- HERB's right arm configuration
// RealVectorState /
// SO2State        \
// SO2State         |- HERB's left arm configuration
// RealVectorState /
//
ChainRuleAdaptor my_adaptor(my_constraint, {
  // This entry tells the ChainRuleAdaptor to use the Jacobian of the
  // "left_ee_bodynode" frame to map the "left_arm" SE(3) component of
  // "my_constraint" to the DOFs contained in the "left_arm_metaskeleton".
  {"left_arm",  left_arm_metaskeleton,  left_ee_bodynode},

  // This entry does the same thing for the right arm.
  {"right_arm", right_arm_metaskeleton, right_ee_bodynode},
});

The key missing API component to implement this is how we can refer to a subset of the dimensions of a CompoundState, e.g. the placeholder "left_arm" and "right_arm" strings in this example. OMPL uses the SubstateLocation struct for this purpose. I am not thrilled about this, because the API is rather awkward, but I see no alternative. At least we can get away with a single index if we disallow nested CompoundStates.

This is obviously complicated to implement, so we can start by handling the simpler case where my_constraint only has one SE(3) component.


egordon avatar egordon commented on May 5, 2024

Addressed by #32

