
Releases: explosion/spaCy

v2.0.0 alpha: Neural network models, Pickle, better training & lots of API improvements

05 Jun 19:05

PyPI last update: 2.0.0rc2, 2017-11-07

This is an alpha pre-release of spaCy v2.0.0, available on pip as spacy-nightly. It's not intended for production use. The alpha documentation is available at alpha.spacy.io. Please note that the docs reflect the library's intended state on release, not the current state of the implementation. For bug reports, feedback and questions, see the spaCy v2.0.0 alpha thread.

Before installing v2.0.0 alpha, we recommend setting up a clean environment.
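For example, using Python's built-in venv module (a minimal sketch; the environment name .env is arbitrary):

python -m venv .env
source .env/bin/activate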

pip install spacy-nightly

The models are still under development and will keep improving. For more details, see the benchmarks below. There will also be additional models for German, French and Spanish.

| Name                      | Lang  | Capabilities        | Size  | spaCy      |
| ------------------------- | ----- | ------------------- | ----- | ---------- |
| en_core_web_sm-2.0.0a4    | en    | Parser, Tagger, NER | 42MB  | >=2.0.0a14 |
| en_vectors_web_lg-2.0.0a0 | en    | Vectors (GloVe)     | 627MB | >=2.0.0a10 |
| xx_ent_wiki_sm-2.0.0a0    | multi | NER                 | 12MB  | <=2.0.0a9  |

You can download a model by using its name or shortcut. To load a model, use spaCy's loader, e.g. nlp = spacy.load('en_core_web_sm'), or import it as a module (import en_core_web_sm) and call its load() method, e.g. nlp = en_core_web_sm.load().

python -m spacy download en_core_web_sm
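Assuming the package is installed, both loading routes give you the same pipeline:

import spacy
import en_core_web_sm

nlp = spacy.load('en_core_web_sm')  # via spaCy's loader
nlp = en_core_web_sm.load()         # via the model package itself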

📈 Benchmarks

The evaluation was conducted on raw text with no gold standard information. Speed and accuracy are currently comparable to the v1.x models: speed on CPU is slightly lower, while accuracy is slightly higher. We expect performance to improve quickly between now and the release date, as we run more experiments and optimise the implementation.

| Model                  | spaCy | Type   | UAS  | LAS  | NER F | POS  | Words/s |
| ---------------------- | ----- | ------ | ---- | ---- | ----- | ---- | ------- |
| en_core_web_sm-2.0.0a4 | v2.x  | neural | 91.9 | 90.0 | 85.0  | 97.1 | 10,000  |
| en_core_web_sm-2.0.0a3 | v2.x  | neural | 91.2 | 89.2 | 85.3  | 96.9 | 10,000  |
| en_core_web_sm-2.0.0a2 | v2.x  | neural | 91.5 | 89.5 | 84.7  | 96.9 | 10,000  |
| en_core_web_sm-1.1.0   | v1.x  | linear | 86.6 | 83.8 | 78.5  | 96.6 | 25,700  |
| en_core_web_md-1.2.1   | v1.x  | linear | 90.6 | 88.5 | 81.4  | 96.7 | 18,800  |

✨ Major features and improvements

  • NEW: Neural network model for English (comparable performance to the >1GB v1.x models) and multi-language NER (still experimental).
  • NEW: GPU support via Chainer's CuPy module.
  • NEW: Strings are now resolved to hash values instead of being mapped to integer IDs. This means that the string-to-int mapping no longer depends on the vocabulary state.
  • NEW: Trainable document vectors and contextual similarity via convolutional neural networks.
  • NEW: Built-in text classification component.
  • NEW: Built-in displaCy visualizers with Jupyter notebook support (see the sketch after this list).
  • NEW: Alpha tokenization for Danish, Polish and Indonesian.
  • Improved language data, support for lazy loading and simple, lookup-based lemmatization for English, German, French, Spanish, Italian, Hungarian, Portuguese and Swedish.
  • Improved language processing pipelines and support for custom, model-specific components.
  • Improved and consistent saving, loading and serialization across objects, plus Pickle support.
  • Revised matcher API to make it easier to add and manage patterns and callbacks in one step (see the sketch after this list).
  • Support for multi-language models and new MultiLanguage class (xx).
  • New spacy command-line entry point, which can be used instead of python -m spacy.
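As an illustration of the revised matcher API, here is a minimal sketch against the v2.0 alpha (the pattern name HELLO_WORLD is arbitrary):

import spacy
from spacy.matcher import Matcher

nlp = spacy.load('en_core_web_sm')
matcher = Matcher(nlp.vocab)

# add a pattern and an optional on_match callback in one step
matcher.add('HELLO_WORLD', None, [{'LOWER': 'hello'}, {'LOWER': 'world'}])

doc = nlp(u'Hello world!')
matches = matcher(doc)  # list of (match_id, start, end) tuples

Similarly, a quick sketch of the built-in displaCy visualizer (style 'dep' renders the dependency parse):

from spacy import displacy

doc = nlp(u'This is a sentence.')
displacy.render(doc, style='dep', jupyter=True)  # renders inline in a Jupyter notebook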

🚧 Work in progress (not yet implemented)

  • NEW: Neural network models for German, French and Spanish.
  • NEW: Binder, a container class for serializing collections of Doc objects.

🔴 Bug fixes

  • Fix issue #125, #228, #299, #377, #460, #606, #930: Add full Pickle support (see the sketch after this list).
  • Fix issue #152, #264, #322, #343, #437, #514, #636, #785, #927, #985, #992, #1011: Fix and improve serialization and deserialization of Doc objects.
  • Fix issue #512: Improve parser to prevent it from returning two ROOT objects.
  • Fix issue #524: Improve parser and handling of noun chunks.
  • Fix issue #621: Prevent double spaces from changing the parser result.
  • Fix issue #664, #999, #1026: Fix bugs that would prevent loading trained NER models.
  • Fix issue #671, #809, #856: Fix importing and loading of word vectors.
  • Fix issue #753: Resolve bug that would tag OOV items as personal pronouns.
  • Fix issue #905, #1021, #1042: Improve parsing model and allow faster accuracy updates.
  • Fix issue #995: Improve punctuation rules for Hebrew and other non-Latin languages.
  • Fix issue #1008: train command finally works correctly if used without dev_data.
  • Fix issue #1012: Improve documentation on model saving and loading.
  • Fix issue #1043: Improve NER models and allow faster accuracy updates.
  • Fix issue #1051: Improve error messages if functionality needs a model to be installed.
  • Fix issue #1071: Correct typo of "whereve" in English tokenizer exceptions.
  • Fix issue #1088: Emoji are now split into separate tokens wherever possible.
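With full Pickle support, a pipeline can now round-trip through pickle; a minimal sketch:

import pickle

import spacy

nlp = spacy.load('en_core_web_sm')
data = pickle.dumps(nlp)   # serializes the pipeline, vocab included
nlp2 = pickle.loads(data)  # restores a working Language object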


📖 Documentation and examples

🚧 Work in progress (not yet implemented)

⚠️ Backwards incompatibilities

Note that the old v1.x models are not compatible with spaCy v2.0.0. If you've trained your own models, you'll have to re-train them to be able to use them with the new version. For a full overview of changes in v2.0, see the alpha documentation and guide on migrating from spaCy 1.x.

Loading models

spacy.load() is now only intended for loading models – if you need an empty language class, import it directly instead, e.g. from spacy.lang.en import English. If the model you're loading is a shortcut link or package name, spaCy will expect it to be a model package, import it and call its load() method. If you supply a path, spaCy will expect it to be a model data directory and use the meta.json to initialise a language class and call nlp.from_disk() with the data path.

nlp = spacy.load('en')
nlp = spacy.load('en_core_web_sm')
nlp = spacy.load('/model-data')
nlp = English().from_disk('/model-data')
# OLD: nlp = spacy.load('en', path='/model-data')

Hash values instead of integer IDs

The StringStore now resolves all strings to hash values instead of integer IDs. This means that the string-to-int mapping no longer depends on the vocabulary state, making a lot of workflows much simpler, especially during training. However, you still need to make sure all objects have access to the same Vocab. Otherwise, spaCy won't be able to resolve hashes back to their string values.

nlp.vocab.strings[u'coffee']       # 3197928453018144401
other_nlp.vocab.strings[u'coffee'] # 3197928453018144401
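Continuing the snippet above, a hash can be resolved back to its string as long as the vocab has seen that string:

doc = nlp(u'I love coffee')
coffee_hash = nlp.vocab.strings[u'coffee']    # 3197928453018144401
coffee_text = nlp.vocab.strings[coffee_hash]  # u'coffee'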


v1.8.2: French model and small improvements

26 Apr 18:51

We've been delighted to see spaCy growing so much over the last few months. Before the v1.0 release, we asked for your feedback, which has been incredibly helpful in improving the library. As we're getting closer to v2.0 we hope you'll take a few minutes to fill out the survey, to help us understand how you're using the library, and how it can be better.

📊 Take the survey!


✨ Major features and improvements

  • Move model shortcuts to shortcuts.json to allow adding new ones without updating spaCy.
  • NEW: The first official French model (~1.3 GB) including vocab, syntax and word vectors.
python -m spacy download fr_depvec_web_lg
import fr_depvec_web_lg

nlp = fr_depvec_web_lg.load()
doc = nlp(u'Parlez-vous Français?')

🔴 Bug fixes

  • Fix reporting if train command is used without dev_data.
  • Fix issue #1019: Make Span hashable.


👥 Contributors

Thanks to @raphael0202 and @julien-c for the contributions!

v1.8.1: Saving, loading and training bug fixes

23 Apr 20:00



🔴 Bug fixes

  • Fix issue #988: Ensure noun chunks can't be nested.
  • Fix issue #991: convert command now uses Python 2/3 compatible json.dumps.
  • Fix issue #995: Use regex library for non-Latin characters to simplify punctuation rules.
  • Fix issue #999: Fix parser and NER model saving and loading.
  • Fix issue #1001: Add SPACE to Spanish tag map.
  • Fix issue #1008: train command now works correctly if used without dev_data.
  • Fix issue #1009: Language.save_to_directory() now converts strings to pathlib paths.


👥 Contributors

Thanks to @dvsrepo, @beneyal and @oroszgy for the pull requests!

v1.8.0: Better NER training, saving and loading

16 Apr 21:33



✨ Major features and improvements

  • NEW: Add experimental Language.save_to_directory() method to make it easier to save user-trained models (see the sketch after this list).
  • Add spacy.compat module to handle platform and Python version compatibility.
  • Update package command to read from an existing meta.json and allow a custom location for the meta file.
  • Fix various compatibility issues and improve error messages in spacy.cli.
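A minimal sketch of the new experimental method (the output path is illustrative):

import spacy

nlp = spacy.load('en')
# ... update the pipeline, e.g. train the entity recognizer on new examples ...
nlp.save_to_directory('/path/to/my_model')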

🔴 Bug fixes

  • Fix issue #701, #822, #937, #959: Updated docs for NER training and saving/loading.
  • Fix issue #968: spacy.load() now prints warning if no model is found.
  • Fix issue #970, #978: Use correct unicode paths for symlinks on Python 2 / Windows.
  • Fix issue #973: Make token.lemma and token.lemma_ attributes writeable (see the sketch after this list).
  • Fix issue #983: Add spacy.compat to handle compatibility.
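For example, a lemma can now be corrected in place; a tiny sketch:

import spacy

nlp = spacy.load('en')
doc = nlp(u'I am flying')
doc[2].lemma_ = u'fly'  # overwrite the lemma for 'flying'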


👥 Contributors

Thanks to @tsohil and @oroszgy for the pull requests!

v1.7.5: Bug fixes and new CLI commands

07 Apr 17:02



✨ Major features and improvements

  • NEW: Experimental convert and model commands to convert files to spaCy's JSON format for training, and initialise a new model and its data directory.
  • Updated language data for Spanish and Portuguese.

🔴 Bug fixes

  • Error messages now show the new download commands if no model is loaded.
  • The package command now works correctly and doesn't fail when creating files.
  • Fix issue #693: Improve rules for detecting noun chunks.
  • Fix issue #758: Adding labels no longer causes an EntityRecognizer transition bug.
  • Fix issue #862: label keyword argument is now handled correctly in doc.merge().
  • Fix issue #891: Tokens containing / infixes are now split by the tokenizer.
  • Fix issue #898: Dependencies are now deprojectivized correctly.
  • Fix issue #910: NER models with new labels are now saved correctly, preventing memory errors.
  • Fix issue #934, #946: Symlink paths are now handled correctly on Windows, preventing invalid switch error.
  • Fix issue #947: Hebrew module is now added to setup.py and __init__.py.
  • Fix issue #948: Contractions are now lemmatized correctly.
  • Fix issue #957: Use regex module to avoid back-tracking on URL regex.


👥 Contributors

Thanks to @ericzhao28, @Gregory-Howard, @kinow, @jreeter, @mamoit, @kumaranvpl and @dvsrepo for the pull requests!

v1.7.3: Alpha support for Hebrew, new CLI commands and bug fixes

26 Mar 15:08

✨ Major features and improvements

  • NEW: Alpha tokenization for Hebrew.
  • NEW: Experimental train and package commands to train a model and convert it to a Python package.
  • Enable experimental support for L1-regularized regression loss in dependency parser and named entity recognizer. Should improve fine-tuning of existing models.
  • Fix high memory usage in download command.

🔴 Bug fixes

  • Fix issue #903, #912: Base forms are now correctly protected from lemmatization.
  • Fix issue #909, #925: Use mklink to create symlinks in Python 2 on Windows.
  • Fix issue #910: Update config when adding label to pre-trained model.
  • Fix issue #911: Delete old training scripts.
  • Fix issue #918: Use --no-cache-dir when downloading models via pip.
  • Fix infinite recursion in spacy.info.
  • Fix initialisation of languages when no model is available.


👥 Contributors

Thanks to @raphael0202, @pavlin99th, @iddoberger and @solresol for the pull requests!

v1.7.2: Small fixes to beam parser and model linking

20 Mar 12:37

🔴 Bug fixes

  • Success message in link is now displayed correctly when using local paths.
  • Decrease beam density and fix Python 3 problem in beam_parser.
  • Fix issue #894: Model packages now install and compile paths correctly on Windows.


v1.7.1: Fix data download for system installation

19 Mar 10:42

🔴 Bug fixes

  • Fix issue #892: Data now downloads and installs correctly on system Python.

v1.7.0: New 50 MB model, CLI, better downloads and lots of bug fixes

18 Mar 19:24

✨ Major features and improvements

  • NEW: Improved English model.
  • NEW: Additional smaller English model (50 MB, only 2% less accurate than the larger model).
  • NEW: Command line interface to download and link models, view debugging info and print Markdown info for easy copy-pasting to GitHub (see the commands after this list).
  • NEW: Alpha support for Finnish and Bengali.
  • Updated language data for Swedish and French.
  • Simplified import of lemmatizer data to make it easier to add lemmatization for other languages.
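For example, the new info command prints details about your installation; per the description above it can also emit Markdown for pasting into GitHub issues (the --markdown flag name is an assumption):

# print details about your spaCy installation and models
python -m spacy info

# print the same info as Markdown (flag name assumed)
python -m spacy info --markdown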

Improved model download and installation

To increase transparency and make it easier to use spaCy with your own models, all data is now available as direct downloads, organised in individual releases. spaCy v1.7 also supports installing and loading models as Python packages. You can now choose how and where you want to keep the data files, and set up "shortcut links" to load models by name from within spaCy. For more info on this, see the new models documentation.

# out-of-the-box: download best-matching default model
python -m spacy download en

# download best-matching version of specific model for your spaCy installation
python -m spacy download en_core_web_md

# pip install .tar.gz archive from path or URL
pip install /Users/you/en_core_web_md-1.2.0.tar.gz
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_md-1.2.0/en_core_web_md-1.2.0.tar.gz

# set up shortcut link to load installed package as "en_default"
python -m spacy link en_core_web_md en_default

# set up shortcut link to load local model as "my_amazing_model"
python -m spacy link /Users/you/data my_amazing_model

nlp1 = spacy.load('en')
nlp2 = spacy.load('en_core_web_md')
nlp3 = spacy.load('my_amazing_model')

⚠️ Backwards incompatibilities

  • IMPORTANT: Due to fixes to the lemmatizer, the previous English model (v1.1.0) is not compatible with spaCy v1.7.0. When upgrading to this version, you need to download the new model (en_core_web_md v1.2.0). The German model is still valid and will be linked to the de shortcut automatically.
  • spaCy's package manager sputnik is now deprecated. For now, we will keep maintaining our download server to support the python -m spacy.{en|de}.download all command in older versions, but it will soon re-route to download the models from GitHub instead.
  • English lemmatizer data is now stored in Python files in spacy/en and the WordNet data previously stored in corpora/en has been removed. This should not affect your code, unless you have added functionality that relies on these data files.

This will be the last major release before v2.0, which will introduce a few breaking changes to allow native deep learning integration. If you're using spaCy in production, don't forget to pin your dependencies:

# requirements.txt
spacy>=1.7.0,<2.0.0

# setup.py
install_requires=['spacy>=1.7.0,<2.0.0']

🔴 Bug fixes

  • Fix issue #401: Contractions with 's are now lemmatized correctly.
  • Fix issue #507, #711, #798: Models are now available as direct downloads.
  • Fix issue #669: Span class now has lower_ and upper_ properties.
  • Fix issue #686: Pronouns now lemmatize to -PRON-.
  • Fix issue #704: Sentence boundary detection improved with new English model.
  • Fix issue #717: Contracted verbs now have the correct lemma.
  • Fix issue #730, #763, #880, #890: A smaller English model (en_core_web_sm) is now available.
  • Fix issue #755: Add missing import to prevent an exception when using --force.
  • Fix issue #759: All available NUM_WORDS are now recognised correctly as like_number.
  • Fix issue #766: Add operator to matcher and make sure open patterns are closed at doc end.
  • Fix issue #768: Allow zero-width infix tokens in tokenizer exceptions.
  • Fix issue #771: Version numbers for ujson and plac are now specified correctly.
  • Fix issue #775: "Shell" and "shell" are now excluded from English tokenizer exceptions.
  • Fix issue #778: spaCy is now available on conda via conda-forge.
  • Fix issue #781: Lemmatizer is now correctly applied to OOV words.
  • Fix issue #791: Environment variables are now passed to subprocess calls in cythonize.
  • Fix issue #792: Trailing whitespace is now handled correctly by the tokenizer.
  • Fix issue #801: Update global infix rules to prevent attached punctuation in complex cases.
  • Fix issue #805: Swedish tokenizer exceptions are now imported correctly.
  • Fix issue #834: load_vectors() now accepts arbitrary space characters as word tokens.
  • Fix issue #840: Use better regex for matching URL patterns in tokenizer exceptions.
  • Fix issue #847: "Shed" and "shed" are now excluded from English tokenizer exceptions.
  • Fix issue #856: Vocab now adds missing words when importing vectors.
  • Fix issue #859: Prevent extra spaces from being added after applying token_match regex.
  • Fix issue #868: Model data can now be downloaded to any directory.
  • Fix issue #886: token.idx now matches original index when text contains newlines.


👥 Contributors

This release is brought to you by @honnibal and @ines. Thanks to @magnusburton, @jktong, @JasonKessler, @sudowork, @oiwah, @raphael0202, @latkins, @ematvey, @Tpt, @wallinm1, @knub, @wehlutyk, @vaulttech, @nycmonkey, @jondoughty, @aniruddha-adhikary, @badbye, @shuvanon, @rappdw, @ericzhao28, @juanmirocks and @rominf for the pull requests!

v1.6.0: Improvements to tokenizer and tests

16 Jan 13:14

✨ Major features and improvements

  • Updated the token exception handling mechanism to allow arbitrary functions as token exception matchers.
  • Improve how tokenizer exceptions for English contractions and punctuation are generated.
  • Update language data for Hungarian and Swedish tokenization.
  • Update to use Thinc v6 to prepare for spaCy v2.0.

🔴 Bug fixes

  • Fix issue #326: Tokenizer is now more consistent and handles abbreviations correctly.
  • Fix issue #344: Tokenizer now handles URLs correctly.
  • Fix issue #483: Period after two or more uppercase letters is split off in tokenizer exceptions.
  • Fix issue #631: Add richcmp method to Token.
  • Fix issue #718: Contractions with She are now handled correctly.
  • Fix issue #736: Times are now tokenized with correct string values.
  • Fix issue #743: Token is now hashable.
  • Fix issue #744: "were" and "Were" are now correctly excluded from contractions.

📋 Tests

  • Modernise and reorganise all tests and remove model dependencies where possible.
  • Improve test speed to ~20s for basic tests (from previously >80s) and ~100s including models (from previously >200s).
  • Add fixtures for spaCy components and test utilities, e.g. to create Doc objects manually.
  • Add documentation for tests to explain conventions and organisation.

👥 Contributors

Thanks to @oroszgy, @magnusburton, @guyrosin and @danielhers for the pull requests!