Releases: DES-Lab/AALpy
AALpy v.1.4.2
Add PAPNI - passive learning of deterministic context-free grammars
Fix typing bug that was breaking backwards compatibility
AALpy v.1.4.1
- Minor quality of life improvements
- Significant speedup in loading of models
AALpy v.1.4.0
New features
- Context-free grammar learning with KV
- Visualization of classification tree for KV
- Random generation of CFGs
- Added AutomataSUL, which can be used in place of all SULs found in AutomataSUL.py
- Top-level imports, e.g. from aalpy import run_Lstar
- Added eq operator for DeterministicAutomata based on bisimilarity
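For illustration, a bisimilarity-based equivalence check for complete DFAs can be sketched as a breadth-first search over reachable state pairs. The dict-based DFA representation below is a standalone toy, not AALpy's internal one:

```python
from collections import deque

def bisimilar(dfa1, dfa2, alphabet):
    """Check language equivalence of two complete DFAs by exploring
    reachable state pairs; any acceptance mismatch refutes bisimilarity."""
    (t1, acc1, q1), (t2, acc2, q2) = dfa1, dfa2  # (transitions, accepting, initial)
    seen = {(q1, q2)}
    queue = deque([(q1, q2)])
    while queue:
        p, q = queue.popleft()
        if (p in acc1) != (q in acc2):
            return False  # the pair disagrees on acceptance
        for ch in alphabet:
            pair = (t1[(p, ch)], t2[(q, ch)])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True  # no reachable pair disagrees
```

This runs in time linear in the product of the two state spaces, which is why an eq operator based on it is cheap enough to expose on every deterministic automaton.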
v.1.3.3
- Optimize Alergia (50% memory reduction while keeping all statistical guarantees)
- Minor bug fixes
- Addition of 2 new deterministic oracles
v.1.3.2
- Fix compatibility bug in Alergia
- Add copy operator for deterministic and stochastic automata
v.1.3.1
- Speed up RPNI implementation by up to 100 times
- Various small bug fixes
- Minor quality improvements
v.1.3.0
Major note: our implementation of KV with 'rs' counterexample processing requires, on average, far fewer system interactions than L*
Major changes
- Added KV
- Optimized and rewrote non-deterministic learning
Minor additions
- minimize method for deterministic automata
- small bug fixes
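The 'rs' (Rivest-Schapire) counterexample processing noted above is one source of the query savings: instead of adding every suffix of a counterexample, it binary-searches for a single distinguishing suffix using O(log |cex|) membership queries. A self-contained sketch with a toy dict-based DFA representation (not AALpy's API):

```python
from collections import deque

def run(dfa, word):
    """Membership query on a complete DFA given as (transitions, accepting, initial)."""
    trans, accepting, q = dfa
    for ch in word:
        q = trans[(q, ch)]
    return q in accepting

def access_sequences(dfa, alphabet):
    """Shortest access word for each reachable state, via BFS."""
    trans, _, init = dfa
    acc = {init: ""}
    queue = deque([init])
    while queue:
        q = queue.popleft()
        for ch in alphabet:
            nxt = trans[(q, ch)]
            if nxt not in acc:
                acc[nxt] = acc[q] + ch
                queue.append(nxt)
    return acc

def rs_suffix(hypothesis, membership, cex, alphabet):
    """Binary-search the counterexample for a distinguishing suffix.

    alpha(i) asks the SUL about access(hypothesis state after cex[:i]) + cex[i:].
    alpha(0) and alpha(len(cex)) differ, so some adjacent pair flips; the
    suffix after the flip point distinguishes two hypothesis states.
    """
    trans, _, init = hypothesis
    acc = access_sequences(hypothesis, alphabet)

    def alpha(i):
        q = init
        for ch in cex[:i]:
            q = trans[(q, ch)]
        return membership(acc[q] + cex[i:])

    lo, hi = 0, len(cex)
    out_lo = alpha(lo)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if alpha(mid) == out_lo:
            lo = mid
        else:
            hi = mid
    return cex[hi:]
```

The logarithmic query count per counterexample is what makes KV with 'rs' processing interact with the system far less than L*, which refines its observation table with many more queries per counterexample.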
v.1.2.9
- add option to ensure minimality of randomly generated automata
- minor bug fixes and optimizations
v.1.2.7
Algorithm updates
added RPNI, a passive deterministic automata learning algorithm for DFAs, Moore, and Mealy machines
non-deterministic learning no longer relies on the all-weather assumption (table shrinking and dynamic observation table update)
Features updates
the following functions were added to all model types
model.save()
model.visualize()
model.make_input_complete()
refactored file handler
v.1.1.13
Added passive learning of Stochastic Mealy Machines (SMMs)
An experimental setting that adapts Alergia to the learning of SMMs. Active SMM learning is for the most part more sample-efficient than active MDP learning, but in the passive setting we cannot compare sample efficiency, only the quality of the learned model. Initial experiments suggest that passive SMM learning is for the most part as precise as passive MDP learning, and in some cases even less precise. However, if the system used to generate the data has many input/output pairs originating from the same state, or can be encoded efficiently as an SMM, passive SMM learning seems to be more precise. Note that these conclusions are based on only a few experiments.
Other Changes
- minor usability tweaks
- implicit deletion of Alergia's internal data structures
- optimization of FPTA creation
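For context, the FPTA (frequency prefix tree acceptor) is the tree of trace prefixes that Alergia builds and then merges. A minimal dict-based sketch of its construction (illustrative only, not AALpy's optimized implementation):

```python
def build_fpta(traces):
    """Build a frequency prefix tree acceptor from a list of traces.

    Each node records how many traces passed through it ("count") and
    its successor nodes keyed by symbol ("children").
    """
    root = {"count": 0, "children": {}}
    for trace in traces:
        node = root
        node["count"] += 1
        for sym in trace:
            node = node["children"].setdefault(sym, {"count": 0, "children": {}})
            node["count"] += 1
    return root
```

Since the FPTA holds every prefix of every trace, its construction and memory footprint dominate passive learning on large samples, which is why the optimizations above target it.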