Commit 304bae8: Merge branch 'release/1.3.1'
lukostaz committed Mar 18, 2020
2 parents: cf7d62d + 946045f
Showing 8 changed files with 33 additions and 14 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -116,7 +116,7 @@ pip install -e .
```python
>> import ampligraph
>> ampligraph.__version__
-'1.3.0'
+'1.3.1'
```


2 changes: 1 addition & 1 deletion ampligraph/__init__.py
@@ -12,7 +12,7 @@
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)

-__version__ = '1.3.0'
+__version__ = '1.3.1'
__all__ = ['datasets', 'latent_features', 'discovery', 'evaluation', 'utils']

logging.config.fileConfig(pkg_resources.resource_filename(__name__, 'logger.conf'), disable_existing_loggers=False)
7 changes: 3 additions & 4 deletions ampligraph/evaluation/protocol.py
@@ -492,6 +492,7 @@ def evaluate_performance(X, model, filter_triples=None, verbose=False, filter_un
* We compute the rank of the test triple by comparing against ALL the corruptions.
* We then compute the number of false negatives that are ranked higher than the test triple, and
subtract this value from the above computed rank to yield the final filtered rank.
**Execution Time:** This method takes ~4 minutes on FB15K using ComplEx
(Intel Xeon Gold 6142, 64 GB Ubuntu 16.04 box, Tesla V100 16GB)
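The filtered-rank procedure described in the bullets above can be sketched in plain NumPy. This is an illustrative sketch only, not AmpliGraph's actual implementation; the function and argument names here are hypothetical.

```python
import numpy as np

def filtered_rank(test_score, corruption_scores, is_true_triple):
    """Sketch of the filtered-rank computation: rank against ALL corruptions,
    then subtract the false negatives ranked above the test triple."""
    corruption_scores = np.asarray(corruption_scores)
    is_true_triple = np.asarray(is_true_triple)

    # Raw rank: 1 + number of corruptions scored higher than the test triple.
    raw_rank = 1 + np.sum(corruption_scores > test_score)

    # Count false negatives (corruptions that are actually true facts)
    # that ranked above the test triple, and remove them from the rank.
    false_neg_above = np.sum((corruption_scores > test_score) & is_true_triple)
    return raw_rank - false_neg_above

# Two corruptions outscore the test triple; one of them is a true fact,
# so the filtered rank drops from 3 to 2.
print(filtered_rank(0.7, [0.9, 0.8, 0.5, 0.2], [True, False, False, False]))  # prints 2
```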
@@ -510,17 +511,15 @@ def evaluate_performance(X, model, filter_triples=None, verbose=False, filter_un
- 's': corrupt only subject.
- 'o': corrupt only object.
- 's+o': corrupt both subject and object.
-    - 's,o': corrupt subject and object sides independently and return 2 ranks. This corresponds to the
-      evaluation protocol used in literature, where head and tail corruptions are evaluated
-      separately.
+    - 's,o': corrupt subject and object sides independently and return 2 ranks. This corresponds to the \
+      evaluation protocol used in literature, where head and tail corruptions are evaluated separately.
.. note::
When ``corrupt_side='s,o'`` the function will return 2*n ranks as a [n, 2] array.
The first column of the array represents the subject corruptions.
The second column of the array represents the object corruptions.
Otherwise, the function returns n ranks as [n] array.
use_default_protocol: bool
Flag to indicate whether to use the standard protocol used in literature defined in
:cite:`bordes2013translating` (default: False).
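The `[n, 2]` return shape described in the note above can be illustrated with fabricated ranks (illustration only, not real model output):

```python
import numpy as np

# Fabricated ranks for n = 3 test triples, shaped [n, 2] as returned
# when corrupt_side='s,o' (made-up numbers, not real model output).
ranks = np.array([[1, 4],
                  [2, 1],
                  [10, 3]])

subject_ranks = ranks[:, 0]  # first column: subject-side corruptions
object_ranks = ranks[:, 1]   # second column: object-side corruptions

# The standard protocol pools all 2*n ranks when computing metrics such as MRR.
mrr = np.mean(1.0 / ranks)
print(mrr)
```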
6 changes: 3 additions & 3 deletions ampligraph/latent_features/models/ConvE.py
@@ -554,7 +554,7 @@ def fit(self, X, early_stopping=False, early_stopping_params={}):
"""Train a ConvE (with optional early stopping).
The model is trained on a training set X using the training protocol
-    described in :cite:`Dettmers2016`.
+    described in :cite:`DettmersMS018`.
Parameters
----------
@@ -737,7 +737,7 @@ def fit(self, X, early_stopping=False, early_stopping_params={}):
raise e

def _initialize_eval_graph(self, mode='test'):
-        """ Initialize the 1-N evaluation graph with the set protocol.
+        """ Initialize the evaluation graph with the set protocol.
Parameters
----------
@@ -956,7 +956,7 @@ def get_ranks(self, dataset_handle):
logger.error(msg)
raise RuntimeError(msg)

-        eval_protocol = self.eval_config.get('corrupt_side', constants.DEFAULT_PROTOCOL_EVAL)
+        eval_protocol = self.eval_config.get('corrupt_side', constants.DEFAULT_CORRUPT_SIDE_EVAL)

if 'o' in eval_protocol:
object_ranks = self._get_object_ranks(dataset_handle)
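The constant fix above feeds the `'o' in eval_protocol` / `'s' in eval_protocol` dispatch that follows it. A minimal sketch of that substring-dispatch pattern (hypothetical helper names, not AmpliGraph internals):

```python
def ranks_for_protocol(eval_protocol, get_subject_ranks, get_object_ranks):
    """Dispatch rank computation based on a corrupt_side string such as
    's', 'o', or 's,o'. Hypothetical helper mirroring the pattern in
    ConvE.get_ranks; the callables stand in for the real rank computations."""
    ranks = {}
    if 'o' in eval_protocol:   # both 'o' and 's,o' trigger object corruptions
        ranks['o'] = get_object_ranks()
    if 's' in eval_protocol:   # both 's' and 's,o' trigger subject corruptions
        ranks['s'] = get_subject_ranks()
    return ranks

# 's,o' computes both sides; 'o' alone computes only object-side ranks.
print(ranks_for_protocol('s,o', lambda: [3, 1], lambda: [2, 5]))
```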
6 changes: 6 additions & 0 deletions docs/changelog.md
@@ -1,5 +1,11 @@
# Changelog

+## 1.3.1
+**18 Mar 2020**
+
+- Minor bug fix in ConvE (#189)
+
+
## 1.3.0
**9 Mar 2020**

2 changes: 1 addition & 1 deletion docs/index.rst
@@ -62,7 +62,7 @@ Modules
AmpliGraph includes the following submodules:

* **Datasets**: helper functions to load datasets (knowledge graphs).
-* **Models**: knowledge graph embedding models. AmpliGraph contains **TransE**, **DistMult**, **ComplEx**, **HolE**, **ConvKB** (More to come!)
+* **Models**: knowledge graph embedding models. AmpliGraph contains **TransE**, **DistMult**, **ComplEx**, **HolE**, **ConvE**, **ConvKB** (More to come!)
* **Evaluation**: metrics and evaluation protocols to assess the predictive power of the models.
* **Discovery**: High-level convenience APIs for knowledge discovery (discover new facts, cluster entities, predict near duplicates).

2 changes: 1 addition & 1 deletion docs/install.md
@@ -66,5 +66,5 @@ pip install -e .
```python
>> import ampligraph
>> ampligraph.__version__
-'1.3.0'
+'1.3.1'
```
20 changes: 17 additions & 3 deletions tests/ampligraph/utils/test_model_utils.py
@@ -12,8 +12,9 @@
import numpy.testing as npt
from ampligraph.utils import save_model, restore_model, create_tensorboard_visualizations, \
write_metadata_tsv, dataframe_to_triples
+from ampligraph.latent_features import TransE
import pytest
import pickle


def test_save_and_restore_model():

@@ -72,14 +73,27 @@ def test_restore_model_errors():


def test_create_tensorboard_visualizations():
-    # TODO: This
-    pass
+    # test if tensorflow API are still operative
+
+    X = np.array([['a', 'y', 'b'],
+                  ['b', 'y', 'a'],
+                  ['a', 'y', 'c'],
+                  ['c', 'y', 'a'],
+                  ['a', 'y', 'd'],
+                  ['c', 'y', 'd'],
+                  ['b', 'y', 'c'],
+                  ['f', 'y', 'e']])
+    model = TransE(batches_count=1, seed=555, epochs=20, k=10, loss='pairwise',
+                   loss_params={'margin': 5})
+    model.fit(X)
+    create_tensorboard_visualizations(model, 'tensorboard_files')


def test_write_metadata_tsv():
# TODO: This
pass


def test_dataframe_to_triples():
X = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
schema = [('species', 'has_sepal_length', 'sepal_length')]
