This repository contains releases of models for the spaCy NLP library. For more info on how to download, install and use the models, see the models documentation.
⚠️ Important note: Because the models can be very large and consist mostly of binary data, we can't simply provide them as files in a GitHub repository. Instead, we've opted for adding them to releases as `.whl` and `.tar.gz` files. This allows us to still maintain a public release history.
To install a specific model, run the following command with the model name (for example `en_core_web_sm`):

```bash
python -m spacy download [model]
```
- spaCy v3.x models directory
- spaCy v3.x model comparison
- spaCy v2.x models directory
- spaCy v2.x model comparison
- Individual release notes
For the spaCy v1.x models, see here.
In general, spaCy expects all model packages to follow the naming convention of `[lang]_[name]`. For our provided pipelines, we divide the name into three components (illustrated in the sketch after this list):

- type: Model capabilities:
  - `core`: a general-purpose model with tagging, parsing, lemmatization and named entity recognition
  - `dep`: only tagging, parsing and lemmatization
  - `ent`: only named entity recognition
  - `sent`: only sentence segmentation
- genre: Type of text the model is trained on (e.g. `web` for web text, `news` for news text)
- size: Model size indicator:
  - `sm`: no word vectors
  - `md`: reduced word vector table with 20k unique vectors for ~500k words
  - `lg`: large word vector table with ~500k entries
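As a rough illustration of this convention, the following sketch (not part of spaCy; the helper name is purely hypothetical) splits a package name into its components:

```python
# Illustrative helper (not part of spaCy) that splits a package name into the
# components described in the list above.
def parse_package_name(package_name):
    lang, name = package_name.split("_", 1)    # e.g. "en", "core_web_md"
    model_type, genre, size = name.split("_")  # e.g. "core", "web", "md"
    return {"lang": lang, "type": model_type, "genre": genre, "size": size}

print(parse_package_name("en_core_web_md"))
# {'lang': 'en', 'type': 'core', 'genre': 'web', 'size': 'md'}
```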
For example, `en_core_web_md` is a medium-sized English model trained on written web text (blogs, news, comments) that includes a tagger, a dependency parser, a lemmatizer, a named entity recognizer and a word vector table with 20k unique vectors.
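If you want to confirm what such a pipeline actually ships with, a quick check like the one below works. It assumes `en_core_web_md` is already installed; the exact component names and vector counts vary between model versions.

```python
import spacy

nlp = spacy.load("en_core_web_md")  # assumes the package is installed
print(nlp.pipe_names)               # e.g. tagger, parser, lemmatizer, ner, ...
print(nlp.vocab.vectors.shape)      # (number of unique vectors, vector width)
```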
Additionally, the model versioning reflects both compatibility with spaCy and the model version itself. A model version `a.b.c` translates to:

- `a`: spaCy major version. For example, `2` for spaCy v2.x.
- `b`: spaCy minor version. For example, `3` for spaCy v2.3.x.
- `c`: Model version. A different model config: e.g. trained on different data, with different parameters, for different numbers of iterations, with different vectors, etc.
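The sketch below illustrates the idea behind this scheme by comparing the first two components of an installed model's version with the installed spaCy version. It is a simplification of the real check, which is driven by `compatibility.json` (see below), and it assumes `en_core_web_sm` is installed.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
model_major, model_minor = nlp.meta["version"].split(".")[:2]
spacy_major, spacy_minor = spacy.__version__.split(".")[:2]

if (model_major, model_minor) != (spacy_major, spacy_minor):
    # The a.b part of the model version doesn't match the installed spaCy
    # major.minor release, so the model was built for a different spaCy line.
    print("Model was trained for a different spaCy release.")
```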
For a detailed compatibility overview, see `compatibility.json`. This is also the source of spaCy's internal compatibility check, performed when you run the `download` command.
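If you want to query the file yourself, a sketch like the following works, assuming `compatibility.json` keeps its current layout (spaCy version → model name → list of compatible model versions) and is served from the repository's master branch:

```python
import json
import urllib.request

import spacy

COMPAT_URL = (
    "https://raw.githubusercontent.com/explosion/spacy-models/"
    "master/compatibility.json"
)

# Fetch the compatibility table and look up model versions that match the
# installed spaCy version. Layout assumption:
# {"spacy": {"<spacy version>": {"<model name>": ["<model versions>", ...]}}}
with urllib.request.urlopen(COMPAT_URL) as response:
    compat = json.load(response)["spacy"]

versions = compat.get(spacy.__version__, {}).get("en_core_web_sm", [])
print(f"en_core_web_sm versions compatible with spaCy {spacy.__version__}: {versions}")
```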
If you're using an older version (v1.6.0 or below), you can still download and install the old models from within spaCy using `python -m spacy.en.download all` or `python -m spacy.de.download all`. The `.tar.gz` archives are also attached to the v1.6.0 release.

To download and install the models manually, unpack the archive, drop the contained directory into `spacy/data` and load the model via `spacy.load('en')` or `spacy.load('de')`.
To increase transparency and make it easier to use spaCy with your own models, all data is now available as direct downloads, organised in individual releases. spaCy 1.7 also supports installing and loading models as Python packages. You can now choose how and where you want to keep the data files, and set up "shortcut links" to load models by name from within spaCy. For more info on this, see the new models documentation.
```bash
# download best-matching version of specific model for your spaCy installation
python -m spacy download en_core_web_sm

# pip install .whl or .tar.gz archive from path or URL
pip install /Users/you/en_core_web_sm-3.0.0.tar.gz
pip install /Users/you/en_core_web_sm-3.0.0-py3-none-any.whl
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0-py3-none-any.whl
```
To load a model, use `spacy.load()` with the model name, a shortcut link or a path to the model data directory.
```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp(u"This is a sentence.")
```
You can also `import` a model directly via its full name and then call its `load()` method with no arguments. This should also work for older models in previous versions of spaCy.
```python
import spacy
import en_core_web_sm

nlp = en_core_web_sm.load()
doc = nlp(u"This is a sentence.")
```
In some cases, you might prefer downloading the data manually, for example to place it into a custom directory. You can download the model via your browser from the latest releases, or configure your own download script using the URL of the archive file. The archive consists of a model directory that contains another directory with the model data.
```
├── en_core_web_md-3.0.0.tar.gz       # downloaded archive
    ├── setup.py                      # setup file for pip installation
    ├── meta.json                     # copy of pipeline meta
    └── en_core_web_md                # 📦 pipeline package
        ├── __init__.py               # init for pip installation
        └── en_core_web_md-3.0.0      # pipeline data
            ├── config.cfg            # pipeline config
            ├── meta.json             # pipeline meta
            └── ...                   # directories with component data
```
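Once the archive is unpacked, `spacy.load()` also accepts a path to the innermost data directory shown above (the one containing `config.cfg` and `meta.json`). The path in this sketch is purely illustrative:

```python
import spacy

# Load the pipeline straight from the extracted data directory.
# Replace the path with wherever you unpacked the archive.
nlp = spacy.load("/path/to/en_core_web_md/en_core_web_md-3.0.0")
doc = nlp("This is a sentence.")
```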
📖 For more info and examples, check out the models documentation.
- type: Model capabilities (e.g. `core` for a general-purpose model with vocabulary, syntax, entities and word vectors, or `depent` for only vocab, syntax and entities)
- genre: Type of text the model is trained on (e.g. `web` for web text, `news` for news text)
- size: Model size indicator (`sm`, `md` or `lg`)
For example, `en_depent_web_md` is a medium-sized English model trained on written web text (blogs, news, comments) that includes vocabulary, syntax and entities.
To report an issue with a model, please open an issue on the spaCy issue tracker. Please note that no model is perfect. Because models are statistical, their expected behaviour will always include some errors. However, particular errors can indicate deeper issues with the training, feature extraction or optimisation code. If you come across patterns in the model's performance that seem suspicious, please do file a report.