
Project SparseMatrices, Mapping & Mask : general discussion #26

Open
ChristianDuriez opened this issue Sep 30, 2016 · 17 comments
Labels
project-main Main issue of a GitHub project


@ChristianDuriez
Contributor

ChristianDuriez commented Sep 30, 2016

Referee: @matthieu-nesme @ChristianDuriez

Members: @JeremieA @francoisfaure @courtecuisse, Eulalie Coevoet, Igor Peterlik

Main objective: build or compute the mechanical system when forcefields, constraints, etc. are under a mapping

One implementation is available, using the Compliance plugin (and EigenMatrix) and masks.
One implementation is still to do, using SOFA's sparse matrices without masks. For that, removing the particular case of InteractionForceField could greatly simplify the solution.

There are many different cases depending on the number of dofs concerned by the mapped values... It is difficult to have an ideal implementation for all cases, but we need to allow several strategies.

Maybe it is possible to avoid the "explicit" use of masks, given knowledge of the sparsity of the matrices...

Subtasks:

  • Make the "use Mask" option false by default
  • Tag "InteractionForceField" as deprecated
  • First implementation for SOFA's sparse matrices using applyJt(SparseMatrixDeriv) => not ideal for all cases
  • See if we could define a common strategy without the use of masks.
@guparan guparan added this to the [WG] Mapping & Mask milestone Sep 30, 2016
@guparan guparan added issue: discussion Open topic of discussion workgroup labels Sep 30, 2016
@hugtalbot hugtalbot changed the title Mapping & Mask Project Mapping & Mask : general discussion Feb 24, 2017
@hugtalbot
Contributor

Hey @ChristianDuriez ,

Any news about a new implementation through a mapping mechanism?
Cheers,

Hugo

@ChristianDuriez
Contributor Author

The project has been renamed to "Sparse Matrix" and "Sparse Vector" representation, because we have similar issues with masks, mappings, constraints, solvers, etc., due to the lack of a unified "sparse" representation of vectors and matrices in SOFA.

@ChristianDuriez
Contributor Author

Several lines of thought:

  • Proposal for a new way of adding contributions in addKToMatrix in SOFA that avoids a virtual call per contribution
  • Merge MatrixDeriv with a sparse matrix representation
  • SparseVectors in SOFA
  • Simplify the (quite complex) multi-matrix machinery, as many of the initial intentions have never really been implemented and can (maybe) be handled more simply using efficient sparse matrices and sparse vectors.

@maxime-tournier
Contributor

Hello @ChristianDuriez,

I've been discussing the sparse matrix issue with @matthieu-nesme for some time now. Here are some thoughts on the subject.

The biggest issue with sparse matrices is that there is no silver bullet representation that covers everyone's needs: some people like it compressed (row/column), others like to have small dense chunks instead of single floating points, and so on.

In particular, I see two major orthogonal uses of sparse matrices:

  1. getting matrix data out of components
  2. working with sparse matrices (linear algebra, factorization, assembly)

It is not at all obvious that the two operations should use the same representation, and in fact I would argue against it. For instance, in the Compliant plugin, we use Eigen sparse matrices for everything, and end up doing a lot of work to shift matrix blocks around, which is tedious and costly.

I've been toying around with alternate designs, and the simplest I found so far is to use a plain old vector of triplets (row, column, value) for fetching matrix data. More precisely, mappings/forcefields directly push_back matrix data into a std::vector<Eigen::Triplet<SReal> > through a std::back_insert_iterator.

With this design, the caller is then responsible for structuring the sparse data further (sorting/converting to CSR, shifting rows/columns, handing over to another library, etc.). Of course, this approach is tailored to our needs and might not fit others, and performance-wise it needs thorough benchmarking anyway, but I think that using separate data structures for getting the data and for working with the data, instead of a single structure, is the way to go.

@fjourdes
Contributor

fjourdes commented Apr 7, 2017

@maxime-tournier just to make things clear for me.
You use an intermediate data structure to store the matrix values (wherever they come from: mapping, forcefield...), and the mask data structure stores the sparsity pattern. Since Flexible and Compliant rely extensively on the Mapping API (conceptually, everything can be broken down into a combination of application + linearisation around a given configuration), the only requirement in that case is to be able to express the sparsity pattern of each mapping, am I correct?

That being said, I agree that it would be ideal to have an intermediate data structure superseding the BaseMatrix API to fetch the matrix data (and maybe the sparsity pattern at the same time?), one that could fit with any linear algebra library with minimal overhead.

@maxime-tournier
Contributor

(hey salut :-)

You are correct, the only requirement for us regarding mapping/forcefields is to get the Jacobian/Hessian sparsity pattern + values.

What I have been experimenting with is having the mappings push sparse data into a dumb vector of triplets (row, column, value) instead of a full-blown sparse matrix structure. It's up to the caller to further structure the data as needed, to manage the triplet vector memory, etc.

This is in contrast to the current approach where the mappings may store and manage BaseSparseMatrix themselves (getJs), or where the MechanicalStates store MatrixDeriv chunks that are aggregated upwards by mappings. It also gets rid of the virtual calls in the BaseMatrix API.

@hugtalbot hugtalbot changed the title Project Mapping & Mask : general discussion Project SparseMatrices, Mapping & Mask : general discussion Apr 7, 2017
@francoisfaure
Contributor

francoisfaure commented Apr 9, 2017 via email

@fjourdes
Contributor

fjourdes commented Apr 9, 2017

To support the discussion, here is the link to the sparse matrix filling tutorial in the Eigen library documentation: https://eigen.tuxfamily.org/dox-devel/group__TutorialSparse.html#TutorialSparseFilling

@maxime-tournier
Contributor

maxime-tournier commented Apr 10, 2017

@francoisfaure It is true that having dense matrix blocks is a must-have for some applications, but I wonder about the API:

  • for efficiency reasons, we need the DOF types to be available if we want to implement this proposal
  • it is quite difficult to recover actual DOF types from outside the mapping/forcefield, except when working with a component at the same graph level as the mapping/forcefield, so it seems restrictive to expect the caller to know the DOF types (or is it not?)

Which leaves us with the following: back-inserting typed triplets into a container with template/overloaded insertion methods, one for each data chunk type. This way the mapping/forcefield DOF types are not part of the API (only the container type is), yet the mapping/forcefield calls the appropriate container method knowing its own DOF types.

Of course, the overloaded insertion method must not be virtual, which would defeat the whole point. The container must know how to push typed chunks into its internal state (easy), but also how to use/recover it (harder since it needs to remember the types, e.g. for sorting typed triplets later on).

This is not infeasible, but it is not straightforward either. It also adds some complexity/overhead compared to scalar-only back-insertion. Is this worth it?

In order to remember "typed contexts" easily, we can draw inspiration from C++17's std::variant, where a small integer stores a type index into a variadic argument list, and this index is used to jump into a static dispatch table. (I can expand on that if needed)

@francoisfaure
Contributor

francoisfaure commented Apr 10, 2017 via email

@fjourdes
Contributor

fjourdes commented Apr 10, 2017

For my own sake: having an implementation of std::variant for the chunk types would allow supporting different types of chunks (if we need to handle a variety of chunk types simultaneously, like scalar and Mat3x3) with minimal overhead, while being usable in the API declarations of ForceField and Mapping.

So there would be a single method in BaseForceField, something like:

```cpp
class BaseForceField
{
public:
    virtual std::vector< MatrixChunkType > getMatrixChunks() = 0;
};
```

which would replace the addKToMatrix method.

Similarly in BaseMapping:

```cpp
class BaseMapping
{
public:
    virtual std::vector< MatrixChunkType > getMatrixChunks() = 0;
};
```

which would replace the getJ method.

@maxime-tournier
Contributor

@francoisfaure

E.g. Mat<3,3,SReal>

This just shifts the same issue to chunk types, then :-)

I probably misunderstood your proposal: you mentioned having the sparse matrix type passed as a template parameter for efficiency reasons. I was simply pointing out that this template type cannot be known outside of the component, and so cannot appear in the back_insert_iterator type, which is part of the hypothetical API I hinted at. Maybe you had this in mind from the start and I did not get it.

@fjourdes Maybe it would be preferable not to return a std::vector, but instead to push to an existing one like so:

```cpp
class BaseForceField
{
public:
    virtual void getMatrixChunks(std::vector<MatrixChunkType>& chunks) const = 0;
};
```

This way you make no assumption as to who should manage the memory, and leave the opportunity to optimize memory allocations.

I assume the MatrixChunkType holds the type index, so you can std::visit it and dispatch using its actual type? Having a type index per chunk means having one dispatch per chunk, and I am afraid this would somewhat bring us back to the "one virtual call per element" issue :-/ But maybe the compiler would be clever enough to optimize this?

More problematic, each chunk would have the size of the largest element in the tagged union, unless we use an extra indirection.

I was thinking more of having one std::vector per chunk type in the chunk container, like so:

```cpp
struct chunk_container {
  // add more as needed
  using chunk_vector = std::variant< std::vector< chunk<1, 1> >,
                                     std::vector< chunk<2, 2> > >;

  chunk_vector storage[2]; // size can be inferred automatically

  template<int I, int J>
  void push(chunk<I, J> c) {
      static constexpr int index = chunk_index<I, J>(); // correct index in storage
      std::get< std::vector< chunk<I, J> > >(storage[index]).push_back(c);
  }
};
```

@francoisfaure
Contributor

francoisfaure commented Apr 10, 2017 via email

@fjourdes
Contributor

@maxime-tournier: indeed, it makes a lot more sense to do as you suggested. I just wrote something down to emphasize what you mentioned above: the concrete chunk type that will be used in the end cannot be inferred beforehand at the API level, since it depends on the template type.

@maxime-tournier
Contributor

For those interested in std::variant, here is the core of it: variant.cpp. This idiom can come in pretty handy in some situations.

@hugtalbot
Contributor

Could some of you make a short summary of the discussion at the next SOFA meeting (Wednesday meetings)?

@hugtalbot
Contributor

A first implementation is proposed in PR #276. This work aims at handling sparse matrices in all components (mappings, forcefields and so on). It is based on the existing functions applyJ and applyJT. The idea is to handle sparse matrices at the solver level, with the sparsity information found within the forcefield (addKToMatrix for assembled cases). Work remains to be done.

The PR adds a new function to the MechanicalObject (buildIdentityBlocksInJacobian), but this is work in progress to build a proof of concept. The PR will therefore be merged (after 17.06), but an "experimental" mention must first be added.

Since the implementation and the concept are open to discussion while a POC is being implemented, it would be nice to have more updates in the associated GitHub discussion. A final POC will be presented at STC#4.

It seems to me that the most important things are to discuss the technical aspects and the global implementation here, and to keep people updated on the progress.

@JeremieA I'm adding you since the topic was of interest to you as well. Thank you all for discussing it today, let's carry on the work! Nice work @olivier-goury

@guparan guparan added project-main Main issue of a GitHub project and removed issue: discussion Open topic of discussion labels Dec 11, 2020