
Releases: Gaius-Augustus/learnMSA

v2.0.10

06 Feb 11:47
e80596a

Added the --plm_cache_dir argument to specify the pLM cache directory. This is where learnMSA downloads language model weights on first use with --use_language_model.

Changed default behavior: sequence weights are now used by default and the --sequence_weights option was removed. Instead, a --no_sequence_weights option allows aligning without sequence weights (not recommended).
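
A minimal invocation sketch combining these options (assuming learnMSA's usual -i/-o arguments for the input and output files; all paths are placeholders):

    # Align with protein language model support; model weights are downloaded
    # to the given cache directory on first use.
    learnMSA -i proteins.fasta -o proteins.aln.a2m --use_language_model --plm_cache_dir /path/to/plm_cache

    # Optionally disable the new default sequence weighting (not recommended).
    learnMSA -i proteins.fasta -o proteins.aln.a2m --no_sequence_weights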

v2.0.9

18 Nov 15:07
  • Fixed missing sequence descriptions in output files
  • Documented the output format correctly as A2M instead of FASTA
  • Added a command line option for plain FASTA output
  • FASTA output now has a line limit of 80 characters per sequence line

v2.0.8

15 Nov 13:35
  • Fixed an issue when aligning very similar sequences with a language model
  • Improved runtime without affecting alignment quality by reducing the default number of model surgery steps from 3 to 1 and by cropping sequences more aggressively during model training.

v2.0.7

27 Oct 10:17

Bug fixes and TensorFlow 2.17 support for easier installation.

v2.0.4

10 Sep 12:31

Maintenance release that removes tensorflow-probability as a dependency and eases installation of learnMSA via pip.

v2.0.3

03 Jul 18:31

Fixed some compatibility and installation issues.

v2.0.1

06 Mar 08:37

Added language model support (--use_language_model) for significantly improved accuracy.

With this option, learnMSA aligns about 6 percentage points more columns correctly on average on the HomFam benchmark than state-of-the-art tools, including learnMSA without language model support.

v1.3.4

03 Feb 09:47

Added sequence cropping to accommodate very long outliers.
Previously, learnMSA could be very slow if the input included sequences many times longer than the average. This release provides a fix via the command line option --crop x, which crops input sequences longer than x (defaulting to 3 times the average length). Cropping can be disabled (the original behavior) with --crop disable.
Cropping only affects model training, not the decoded MSA, which always respects the full input.
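
A brief sketch of the cropping option (again assuming the usual -i/-o arguments; the threshold value is a placeholder):

    # Crop sequences longer than 1000 residues during model training only.
    learnMSA -i proteins.fasta -o proteins.aln.a2m --crop 1000

    # Restore the original behavior (no cropping).
    learnMSA -i proteins.fasta -o proteins.aln.a2m --crop disable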

v1.3.2

24 Aug 08:11
050e734

Fixed missing package data for the --use_language_model option.

v1.3.1

23 Aug 10:19
  • Improved numerical stability across TensorFlow versions
  • Minor bug fixes