Implementation of Stanford's transition-based neural dependency parser, based on https://www.emnlp2014.org/papers/pdf/EMNLP2014082.pdf.
For more info:
https://nlp.stanford.edu/software/nndep.html
The feature set follows the paper. In detail, S_w contains n_w = 18 elements:
(1) The top 3 words on the stack and buffer: s1, s2, s3, b1, b2, b3.
(2) The first and second leftmost/rightmost children of the top two words on the stack: lc1(si), rc1(si), lc2(si), rc2(si), i = 1, 2.
(3) The leftmost of the leftmost / rightmost of the rightmost children of the top two words on the stack: lc1(lc1(si)), rc1(rc1(si)), i = 1, 2.
(4) The corresponding POS tags are used for S_t (n_t = 18), and the corresponding arc labels of the words, excluding the 6 words on the stack/buffer, for S_l (n_l = 12).
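The template above can be summarised with a short sketch. This is a minimal illustration of the 18 + 18 + 12 feature layout, not the code used in the scripts below; the stack/buffer/children representation and the NULL padding index are assumptions made for the example.

```python
NULL = -1  # padding index when a stack/buffer slot or child is missing (assumed convention)

def template_positions(stack, buffer, leftmost, rightmost):
    """Collect the 18 token positions that feed S_w and S_t.

    stack, buffer : lists of sentence indices (top of stack = stack[-1])
    leftmost, rightmost : dicts mapping a head index to its child indices,
                          ordered nearest-first
    """
    def stk(i):
        return stack[-1 - i] if i < len(stack) else NULL

    def buf(i):
        return buffer[i] if i < len(buffer) else NULL

    def lc(h, k):
        kids = leftmost.get(h, [])
        return kids[k - 1] if h != NULL and len(kids) >= k else NULL

    def rc(h, k):
        kids = rightmost.get(h, [])
        return kids[k - 1] if h != NULL and len(kids) >= k else NULL

    pos = [stk(0), stk(1), stk(2), buf(0), buf(1), buf(2)]        # (1) s1..s3, b1..b3
    for i in range(2):                                            # top two stack words
        s = stk(i)
        pos += [lc(s, 1), rc(s, 1), lc(s, 2), rc(s, 2)]           # (2)
        pos += [lc(lc(s, 1), 1), rc(rc(s, 1), 1)]                 # (3)
    return pos                                                    # 18 positions in total
```

S_w and S_t look up the word and POS tag at all 18 positions; S_l takes the arc labels of the 12 child positions only (the last 12 entries of the returned list).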
Neural net with one hidden layer:
DependencyParser.py
run - python DependencyParser.py
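As a rough sketch of this architecture (the cube activation and the 48 concatenated embeddings come from the paper; the dimensions, vocabulary size, and label count below are assumptions, not values read from DependencyParser.py):

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM, HIDDEN = 50, 200          # embedding and hidden sizes from the paper
N_FEATS = 18 + 18 + 12             # word + POS + label features
N_LABELS = 40                      # assumed number of dependency labels
N_CLASSES = 2 * N_LABELS + 1       # LEFT-ARC(l) and RIGHT-ARC(l) per label, plus SHIFT
VOCAB = 10000                      # assumed joint vocabulary of words, POS tags, labels

E  = rng.normal(0, 0.01, (VOCAB, EMB_DIM))                # embedding matrix
W1 = rng.normal(0, 0.01, (HIDDEN, N_FEATS * EMB_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.01, (N_CLASSES, HIDDEN))

def score_transitions(feature_ids):
    """Score every transition for one parser state (48 feature ids)."""
    x = E[feature_ids].reshape(-1)   # concatenate the 48 embeddings
    h = (W1 @ x + b1) ** 3           # cube activation, as in the paper
    return W2 @ h                    # unnormalised transition scores

scores = score_transitions(rng.integers(0, VOCAB, N_FEATS))
print(scores.shape)                  # (81,) with the assumed label count
```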
Neural net with two hidden layers:
DependencyParser_hidden_layer_2.py
run - python DependencyParser_hidden_layer_2.py
Neural net with three hidden layers:
DependencyParser_hidden_layer_3.py
run - python DependencyParser_hidden_layer_3.py
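The deeper variants stack additional hidden layers between the feature embeddings and the output. The sketch below shows the general shape only; the layer widths and the activations used above the first layer are assumptions, not values taken from the *_hidden_layer_* scripts.

```python
import numpy as np

rng = np.random.default_rng(0)

INPUT_DIM, N_CLASSES = 48 * 50, 81        # 48 embeddings of size 50; 81 transitions (assumed)
layer_sizes = [200, 200, 200]             # three hidden layers; widths are assumed

dims = [INPUT_DIM] + layer_sizes
hidden = [(rng.normal(0, 0.01, (d_out, d_in)), np.zeros(d_out))
          for d_in, d_out in zip(dims[:-1], dims[1:])]
W_out = rng.normal(0, 0.01, (N_CLASSES, layer_sizes[-1]))

def forward(x):
    h = x
    for i, (W, b) in enumerate(hidden):
        z = W @ h + b
        h = z ** 3 if i == 0 else np.maximum(z, 0.0)   # cube first, then ReLU (assumed)
    return W_out @ h                                   # transition scores

print(forward(rng.normal(size=INPUT_DIM)).shape)       # (81,)
```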
Neural net with three parallel hidden layers for words, POS tags, and arc labels:
DependencyParser_parallel.py
run - python DependencyParser_parallel.py
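A rough sketch of the parallel variant, assuming each feature group (words, POS tags, arc labels) gets its own hidden layer and the three hidden vectors are concatenated before the output layer; the exact combination used in DependencyParser_parallel.py may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM, HIDDEN, N_CLASSES = 50, 200, 81       # sizes are illustrative
group_sizes = {"word": 18, "pos": 18, "label": 12}

W = {g: rng.normal(0, 0.01, (HIDDEN, n * EMB_DIM)) for g, n in group_sizes.items()}
b = {g: np.zeros(HIDDEN) for g in group_sizes}
W_out = rng.normal(0, 0.01, (N_CLASSES, HIDDEN * len(group_sizes)))

def forward(x_word, x_pos, x_label):
    """Each x_* is the concatenated embedding vector of one feature group."""
    hs = [(W[g] @ x + b[g]) ** 3                # one cube hidden layer per group
          for g, x in zip(("word", "pos", "label"), (x_word, x_pos, x_label))]
    return W_out @ np.concatenate(hs)           # combine groups, then score transitions

scores = forward(rng.normal(size=18 * EMB_DIM),
                 rng.normal(size=18 * EMB_DIM),
                 rng.normal(size=12 * EMB_DIM))
print(scores.shape)                             # (81,)
```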
DependencyParser_fixed.py
run - python DependencyParser_fixed.py