Improvements in OpSum to TTN conversion #117

Open · 17 tasks

mtfishman (Member) commented Jan 9, 2024

Follow-up to #116:

  • Replace MatElem and QNArrElem with FillArrays.OneElement (see the first sketch after this list).
  • Rename determine_val_type to coefficient_type.
  • Default the OpSum coefficient type to Float64 and require users to specify OpSum{ComplexF64} if they want complex coefficients (see the second sketch after this list).
  • Check and improve compatibility with the feature set of OpSum to MPO conversion in ITensors: support multi-site operators, ensure sorting comparisons work and are implemented consistently with the ITensors implementation, and perform all relevant sorting with respect to the traversal order of the tree instead of site labels to ensure compatibility with arbitrary vertex types.
  • Copy the ITensors functions used in ttn_svd, such as ITensors.determineValType, ITensors.posInLink!, and ITensors.MatElem, to ITensorNetworks.jl and update their style. Functions like ITensors.which_op, ITensors.params, ITensors.site, and ITensors.argument, which come from the Ops module related to OpSum, shouldn't be copied over.
  • Split off logic for building symbolic representation of TTNO into a separate function.
  • Move calc_qn outside of ttn_svd.
  • Use sparse matrix/array data structures or metagraphs for symbolic representation of TTNO (for example NDTensors.SparseArrayDOKs may be useful for that).
  • Split off logic of grouping terms by QNs.
  • Factor out logic for building link indices, make use of IndsNetwork.
  • Refactor code logic to first work without merged blocks/QNs and then optionally merge and compress as needed.
  • Support other compression schemes, like rank-revealing sparse QR.
  • Implement sequential compression to improve performance, as opposed to the current method of parallel compression (i.e. right now each link index is compressed effectively independently).
  • Allow compression to take into account operator information (perhaps by preprocessing by expanding in an orthonormal operator basis), not just coefficients.
  • Handle starting and ending blocks in a more elegant way, for example as part of a sparse matrix.
  • Handle vertices without any site indices (internal vertices, such as for hierarchical TTN).
  • Make sure the fermion signs of the tensors being constructed are correct and work with the automatic fermion sign system.
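
As a rough sketch of the OneElement item above: FillArrays.OneElement lazily represents an array with a single nonzero entry, which is the role MatElem and QNArrElem currently play. The value, position, and size below are placeholders:

```julia
using FillArrays: OneElement

# A 4×5 matrix whose only nonzero entry is 0.5 at position (2, 3),
# analogous to what MatElem encodes as (row, col, val).
m = OneElement(0.5, (2, 3), (4, 5))

m[2, 3] == 0.5  # true; every other entry is zero
```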
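And a sketch of the proposed coefficient-type default (the spin-operator terms are just an example):

```julia
using ITensors: OpSum

# Currently `OpSum()` defaults to complex coefficients; under this
# proposal it would be equivalent to `OpSum{Float64}()`.
os = OpSum{Float64}()
os += 0.5, "S+", 1, "S-", 2

# Users who need complex coefficients would opt in explicitly:
os_c = OpSum{ComplexF64}()
os_c += 0.5im, "S+", 1, "S-", 2
```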
mtfishman (Member, Author) commented:
A comment on the representation of the symbolic TTN object:

Seems like this data structure could be a DataGraph with a graph structure matching the IndsNetwork/TTN graph structure and a SparseArrayDOK stored on each vertex, where the number of dimensions is the degree of that vertex and the elements are Scaled{coefficient_type,Prod{Op}}. Does that sound right to you?

I suppose one thing that needs to be stored is the meaning of each dimension of the SparseArrayDOK on the vertices since you want to know which dimension corresponds to which neighbor. So interestingly the best representation may be an ITensor, or maybe a NamedDimsArray wrapping a SparseArrayDOK, where the dimension names are the edges of the graph.

Originally posted by @mtfishman in #166 (comment)
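
A minimal sketch of that structure, with a plain Dict keyed by index tuples standing in for SparseArrayDOK (the graph construction and the DataGraphs.jl indexing details here are assumptions for illustration):

```julia
using DataGraphs: DataGraph
using Graphs: path_graph
using NamedGraphs: NamedGraph
using ITensors.Ops: Op

# A named path graph standing in for the TTN graph structure.
g = NamedGraph(path_graph(3))
dg = DataGraph(g)

# On each vertex, store a sparse DOK-style map from virtual-index
# tuples to symbolic operator terms. The tuple length equals the
# vertex degree (vertex 2 has two neighbors), and each value is a
# Scaled{Float64,Prod{Op}}; SparseArrayDOK would replace the Dict.
dg[2] = Dict(
  (1, 1) => 0.5 * Op("S+", 2) * Op("S-", 3),
)
```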

mtfishman (Member, Author) commented May 3, 2024

Regarding the data structure used in the svd_bond_coefs(...) function:

This could be a DataGraph with that data on the edges of the graph.

I also wonder if Dict{QN,Matrix{coefficient_type}} could be a block diagonal BlockSparseMatrix where those matrices are the diagonal blocks and the QNs are the sector labels of the graded axes.

Originally posted by @mtfishman in #166 (comment)
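
A minimal sketch of that block-diagonal picture using only Base Julia, with cat along both dimensions standing in for an actual BlockSparseMatrix (which would additionally carry the QNs as sector labels on graded axes):

```julia
using ITensors: QN

coefficient_type = Float64
qn_blocks = Dict{QN,Matrix{coefficient_type}}(
  QN("Sz", 0) => [1.0 0.0; 0.0 2.0],
  QN("Sz", 2) => [3.0;;],
)

# Fix an ordering of the QN sectors, then place each matrix as a
# diagonal block; cat with dims = (1, 2) builds a block-diagonal matrix.
sectors = sort(collect(keys(qn_blocks)); by = string)
bd = cat((qn_blocks[q] for q in sectors)...; dims = (1, 2))
```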
