Project SparseMatrices, Mapping & Mask: general discussion #26
Hey @ChristianDuriez, any news about a new implementation through a mapping mechanism? Hugo
The project has been renamed to "Sparse Matrix" and "Sparse Vector" representation because we have similar issues with Mask, mapping, constraints, solvers, etc., due to the lack of a unified "sparse" representation of vectors and matrices in SOFA.
Several lines of thought:
Hello @ChristianDuriez, I've been discussing the sparse matrix issue with @matthieu-nesme for some time now. Here are some thoughts on the subject. The biggest issue with sparse matrices is that there is no silver-bullet representation that covers everyone's needs: some people like it compressed (row/column), others like to have small dense chunks instead of single floating points, and so on. In particular, I see two major orthogonal uses of sparse matrices:

- getting matrix data out of components
- working with sparse matrices (linear algebra, factorization, assembly)
It is not at all obvious that the two operations should use the same representation, and in fact I would argue against it. For instance, in the Compliant plugin we use Eigen sparse matrices for everything, and end up doing a lot of work to shift matrix blocks around, which is tedious and costly. I've been toying with alternate designs, and the simplest I have found so far is to use a plain old vector of triplets (row, column, value) for fetching matrix data. More precisely, mappings/forcefields directly push_back matrix data into a std::vector<Eigen::Triplet<SReal>> through a std::back_insert_iterator. With this design the caller is then responsible for structuring the sparse data further (sorting/converting to CSR, shifting rows/columns, handing it over to another library, etc.). Of course this approach is tailored to our needs and might not fit others, and performance-wise it needs thorough benchmarking anyway, but I think that using separate data structures for getting the data and for working with the data, instead of a single structure, is the way to go.
@maxime-tournier just to make things clear for me. That being said, I agree that it would be ideal to have an intermediate data structure to supersede the current one.
(hey, hi :-) You are correct: the only requirement for us regarding mappings/forcefields is to get the Jacobian/Hessian sparsity pattern + values. What I have been experimenting with is having the mappings push sparse data into a dumb vector of triplets (row, column, value) instead of a full-blown sparse matrix structure. It is up to the caller to further structure the data as needed, to manage the triplet vector memory, etc. This is in contrast to the current approach, where the mappings store and manage their own sparse matrix objects.
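A minimal sketch of this triplet back-insertion design, assuming a hypothetical mapping function (some_mapping_jacobian is not a SOFA name) and using Eigen's real Triplet class; only push_back through a std::back_insert_iterator is required of the caller's container:

```cpp
#include <Eigen/SparseCore>
#include <iterator>
#include <vector>

using SReal = double;
using Triplet = Eigen::Triplet<SReal>;

// Hypothetical mapping: writes its Jacobian entries through an output
// iterator, without knowing (or owning) the final sparse matrix structure.
template<class OutputIt>
void some_mapping_jacobian(OutputIt out) {
    // example entries: J(0, 0) = 1, J(0, 3) = -1
    *out++ = Triplet(0, 0, 1.0);
    *out++ = Triplet(0, 3, -1.0);
}

int main() {
    std::vector<Triplet> triplets;
    some_mapping_jacobian(std::back_inserter(triplets));
    // The caller now structures the data as needed: sort, convert to CSR,
    // shift rows/columns, hand over to another library, ...
}
```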
+1
Back-inserting triplets combines flexibility and efficiency; it is used in Eigen.
However, I think we have to seriously consider the demand for small dense chunks rather than scalars, since a significant number of applications use only 3D points as DOFs, and sorting such chunks can be much faster than sorting scalars.
This could easily be handled using a choice of triplet types (such as Triplet<Mat3x3>), provided that we restrict the choice of chunk types to a reasonable number. All the square sizes from 1 to 12 should be more than enough, and more could be added if necessary. (I am not sure that sizes other than 3x3 would be used in practice.)
The sparse matrix type could be a compile-time parameter to avoid virtual calls. The back_inserter of the default type could decompose all the chunks into scalars, while implementations dedicated to 3x3 chunks could push them into a vector<Triplet<Mat3x3>> (see the sketch below).
Pr. Francois Faure
https://team.inria.fr/imagine/francois-faure/
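A minimal sketch of the default decomposition described above, with Eigen's matrix type standing in for SOFA's Mat<3,3,SReal> and a hypothetical push_chunk helper: a scalar-only consumer receives each 3x3 chunk as nine scalar triplets, while a 3x3-dedicated consumer could keep the chunk whole.

```cpp
#include <Eigen/SparseCore>
#include <vector>

using SReal = double;
using Mat3x3 = Eigen::Matrix<SReal, 3, 3>; // stand-in for SOFA's Mat<3,3,SReal>

// Default behavior: decompose a 3x3 chunk anchored at (row, col) into
// nine scalar triplets (hypothetical helper, for illustration only).
void push_chunk(std::vector< Eigen::Triplet<SReal> >& out,
                int row, int col, const Mat3x3& block) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out.emplace_back(row + i, col + j, block(i, j));
}

// A 3x3-dedicated implementation would instead push the chunk unchanged
// into something like a std::vector< Triplet<Mat3x3> >, and sort/merge
// whole chunks rather than scalars.
```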
To support the discussion, here is the link to the sparse-matrix filling tutorial in the Eigen library documentation: https://eigen.tuxfamily.org/dox-devel/group__TutorialSparse.html#TutorialSparseFilling
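For reference, the filling pattern from that tutorial looks like this (standard Eigen API; duplicate entries are summed by setFromTriplets):

```cpp
#include <Eigen/SparseCore>
#include <vector>

int main() {
    std::vector< Eigen::Triplet<double> > triplets;
    triplets.emplace_back(0, 0, 4.0);
    triplets.emplace_back(1, 1, 4.0);
    triplets.emplace_back(0, 1, -1.0);
    triplets.emplace_back(0, 1, -0.5); // duplicate: summed with the entry above

    Eigen::SparseMatrix<double> A(2, 2);
    // sorts the triplets and compresses them into the final sparse storage
    A.setFromTriplets(triplets.begin(), triplets.end());
}
```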
@francoisfaure It is true that having dense matrix blocks is a must-have for some applications, but I wonder about the API:

- for efficiency reasons, we need the DOF types to be available if we want to implement this proposal

Which leaves us with the following: back-inserting typed triplets into a container that provides one insertion overload per chunk type. Of course, the overloaded insertion method must not be virtual if we want to keep the compile-time efficiency. This is not unfeasible, but it is not straightforward either. It also adds some complexity/overhead compared to scalar-only back-insertion. Is this worth it? In order to remember "typed contexts" easily, we can draw inspiration from c++14.
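For concreteness, a hedged sketch (all names hypothetical) of the trade-off being weighed here: since the set of chunk types is finite, a type-erased sink could expose one plain overload per chunk type, and plain overloads can be virtual where member templates cannot, at the price of one virtual call per inserted chunk:

```cpp
using SReal = double;
struct Mat3x3 { SReal v[3][3]; }; // stand-in for SOFA's Mat<3,3,SReal>

// Hypothetical type-erased insertion interface: one overload per chunk type.
struct ChunkSink {
    virtual void push(int row, int col, SReal value) = 0;          // scalar
    virtual void push(int row, int col, const Mat3x3& block) = 0;  // 3x3 chunk
    // ... one overload per supported size, e.g. up to 12x12 as proposed above
    virtual ~ChunkSink() = default;
};
```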
> for efficiency reasons, we need the DOF types to be available if we want to implement this proposal

Why? The dense matrix blocks could be of arbitrary compile-time types, the same way as SReal, e.g. Mat<3,3,SReal>. That may remove most of your objections.

I don't know if it is worth it. I am just raising the question, since we (Anatoscope) do not use this.
It is time for potential users to speak up.
FF
For my own sake: having an implementation of std::variant for the type of chunks would allow supporting different chunk types (if we need to support a variety of chunk types simultaneously, like scalar and Mat3x3) with minimal overhead, while being able to use it in the API declaration of the base classes. So there would be a single method in BaseForceField:

```cpp
class BaseForceField
{
public:
    virtual std::vector< MatrixChunkType > getMatrixChunks() = 0;
};
```

which would replace the existing matrix-assembly accessors. Similarly in BaseMapping:

```cpp
class BaseMapping
{
public:
    virtual std::vector< MatrixChunkType > getMatrixChunks() = 0;
};
```

which would replace the existing Jacobian accessors.
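A minimal sketch of what such a MatrixChunkType could look like (C++17 std::variant; the chunk layout here is a hypothetical assumption, not an existing SOFA type):

```cpp
#include <variant>

using SReal = double;

// A chunk carries its anchor position plus a dense block of fixed size.
template<int I, int J>
struct Chunk {
    int row, col;
    SReal value[I][J];
};

// One alternative per supported chunk type; extend as needed.
using MatrixChunkType = std::variant< Chunk<1, 1>, Chunk<3, 3> >;
```

Note that every element of a std::vector<MatrixChunkType> then occupies the size of the largest alternative, which is precisely the drawback raised in the next comment.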
This just shifts the same issue to chunk types, then :-) I probably misunderstood your proposal: you mentioned having the sparse matrix type passed as a template parameter for efficiency reasons. I was simply pointing out that this template type cannot be known outside of the component, and so cannot appear in the API of the base classes.

@fjourdes Maybe it would be preferable not to return the vector by value, but to fill a caller-provided one:

```cpp
class BaseForceField
{
public:
    virtual void getMatrixChunks(std::vector<MatrixChunkType>& chunks) const = 0;
};
```

This way you make no assumption as to who should manage the memory, and leave the opportunity to optimize memory allocations. More problematic, each chunk would have the size of the largest element in the tagged union, unless we use an extra indirection.

I was more thinking of having one std::vector per chunk type in the chunk container, like so:

```cpp
struct chunk_container {
    // add more as needed
    using chunk_vector = std::variant< std::vector< chunk<1, 1> >,
                                       std::vector< chunk<2, 2> > >;

    chunk_vector storage[2]; // size can be inferred automatically

    template<int I, int J>
    void push(chunk<I, J> c) {
        static constexpr int index = chunk_index<I, J>(); // correct index in chunk_storage
        std::get< std::vector< chunk<I, J> > >(storage[index]).push_back(c);
    }
};
```
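For concreteness, a compilable version of this sketch, with hypothetical stand-ins for the undefined chunk and chunk_index pieces, and with the storage slots initialized to their matching variant alternatives so that std::get does not throw:

```cpp
#include <variant>
#include <vector>

// Hypothetical stand-in: a dense I x J block anchored at (row, col).
template<int I, int J>
struct chunk {
    int row, col;
    double value[I][J];
};

// Hypothetical stand-in mapping each chunk type to its storage slot.
template<int I, int J>
constexpr int chunk_index() { return (I == 1 && J == 1) ? 0 : 1; }

struct chunk_container {
    using chunk_vector = std::variant< std::vector< chunk<1, 1> >,
                                       std::vector< chunk<2, 2> > >;

    // Each slot is initialized with its own alternative; otherwise the
    // std::get below would throw std::bad_variant_access.
    chunk_vector storage[2] = { std::vector< chunk<1, 1> >{},
                                std::vector< chunk<2, 2> >{} };

    template<int I, int J>
    void push(chunk<I, J> c) {
        constexpr int index = chunk_index<I, J>();
        std::get< std::vector< chunk<I, J> > >(storage[index]).push_back(c);
    }
};

int main() {
    chunk_container c;
    c.push(chunk<1, 1>{0, 0, {{1.0}}});
    c.push(chunk<2, 2>{3, 3, {{1.0, 0.0}, {0.0, 1.0}}});
    // consumers can then sort/merge each per-type vector independently
}
```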
Hi, yes, I was definitely thinking about something along these lines.
The std::variant (which I am not familiar with) seems to elegantly solve the only issue I had in mind, namely, the need for multiple vectors with their associated push functions.
@maxime-tournier: indeed, it makes a lot more sense to do it as you suggested. I just wrote something down to emphasize what you mentioned above, which is that the concrete chunk type used in the end cannot be inferred beforehand at the API level, since it depends on the template type.
Could some of you make a short summary of the discussion at the next SOFA meeting (the Wednesday meeting)?
A first implementation is proposed in PR #276. This work aims at handling sparse matrices in all components: mappings, forcefields and so on. It is based on the existing functions applyJ and applyJT. The idea is to handle sparse matrices at the solver level, which could find the sparsity information within the forcefield (addKToMatrix for assembled cases). Work remains to be done: the PR adds a new function to the MechanicalObject (buildIdentityBlocksInJacobian), but this is a work in progress meant as a proof of concept. The PR will therefore be merged (after 17.06), but an "experimental" mention must be added first.

Since the implementation and the concept are open to discussion while a POC is implemented, it would be nice to have more updates in the associated GitHub discussion. A final POC will be presented at the STC#4. It seems to me that the most important thing is to discuss the technical aspects and the global implementation here, and to keep people updated on the progress.

@JeremieA I am adding you since the topic was of interest for you as well. Thank you all for discussing it today, let's carry on the work! Nice work @olivier-goury
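To illustrate the solver-level idea (not the actual PR code): with applyJ/applyJT available, a mapped stiffness contribution can be applied matrix-free, without ever assembling J^T K J. A hedged sketch with simplified, hypothetical types and signatures:

```cpp
#include <cstddef>
#include <vector>

using Vec = std::vector<double>; // hypothetical flat DOF vector

// Simplified stand-ins for the mechanisms discussed above.
struct Mapping {
    void applyJ (Vec& out, const Vec& in) const;  // out  = J   * in
    void applyJT(Vec& out, const Vec& in) const;  // out += J^T * in
};
struct ForceField {
    void addKdx(Vec& out, const Vec& dx) const;   // out += K * dx (mapped space)
};

// Matrix-free product parent_out += J^T K J x, as a solver could evaluate
// it inside an iterative solve instead of assembling the mapped stiffness.
void add_mapped_Kdx(const Mapping& m, const ForceField& ff,
                    Vec& parent_out, const Vec& parent_x,
                    std::size_t mapped_size) {
    Vec mapped_x(mapped_size, 0.0), mapped_f(mapped_size, 0.0);
    m.applyJ(mapped_x, parent_x);    // push motion down through the mapping
    ff.addKdx(mapped_f, mapped_x);   // the forcefield acts in mapped space
    m.applyJT(parent_out, mapped_f); // pull the resulting forces back up
}
```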
Referee: @matthieu-nesme @ChristianDuriez
Members: @JeremieA @francoisfaure @courtecuisse, Eulalie Coevoet, Igor Peterlik
Main objective: build or compute the mechanical system when forcefields, constraints, etc. are under a mapping
- One implementation available, using the Compliant plugin (and EigenMatrix) and masks
- One implementation to do, using SOFA's sparse matrices without masks. For that, removing the particular case of InteractionForceField could greatly simplify the solution.
- There are many different cases depending on the number of DOFs concerned by the mapped values. It is difficult to have one ideal implementation for all cases, so we need to allow several strategies.
- It may be possible to avoid the "explicit" use of masks given knowledge of the sparsity of the matrices.
Subtasks: