
NLP Server with FastText

NLP Server is a Python 3 Flask web service for easy access to multilingual Natural Language Processing tasks such as language detection, article extraction, entity extraction, sentiment analysis, summarization and more.

NLP Server provides a simple API that lets non-Python programming languages access some of the great NLP libraries available in Python.

The server is simple to set up and easy to integrate with your programming language of choice.

DigitalDogsbody Fork

This fork adds a second language prediction service, using the FastText library, in support of the fantastic work of The Archipelago Team.

An extra endpoint has been added at /fasttext and is documented below. Additionally, the web interface, requirements file and license file have been updated. Everything else is left as upstream and the extra functionality of this fork is only for the Python version herein - ports to the PHP and Laravel versions are welcome :-)

PHP & Laravel clients

A PHP library and a Laravel package are available.

Step 1: Core Installation

The upstream NLP Server project has been tested on Ubuntu, and this fork has been tested on Debian Buster (via the Archipelago nlpserver Dockerfile) but should work on other versions of Linux.

git clone https://github.com/digitaldogsbody/nlpserver-fasttext.git
cd nlpserver-fasttext

sudo apt-get install -y libicu-dev python3-pip python3-icu
pip3 install -r requirements.txt

Step 2: Download Polyglot models for human languages

Polyglot is used for entity extraction, sentiment analysis and embeddings (neighbouring words).

You'll need to download the models for the languages you want to use.

# For example: English and Norwegian
python3 -m polyglot download LANG:en
python3 -m polyglot download LANG:no

The /status API endpoint lists the installed Polyglot language models: http://localhost:6400/status

Step 3: Download SpaCy models for entity extraction (NER)

If you want to use the /spacy/entities endpoint for entity extraction (NER), you need to download the models for the languages you want to use.

# Install Spacy if not already installed
pip3 install -U spacy

# For example English, Spanish and Multi-Language
python3 -m spacy download en
python3 -m spacy download es
python3 -m spacy download xx

Step 4: Download the FastText language classification model

curl -L "https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin" -O
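Once downloaded, the model is loaded with the fasttext Python package, whose predictions carry a `__label__` prefix. The sketch below shows the shape of that output and a small helper that strips the prefix; `clean_predictions` is an illustrative name, not a function in this repository:

```python
# Sketch: turning raw fasttext predictions into (code, score) pairs like
# those returned by the /fasttext endpoint. Loading the model needs the
# fasttext package and lid.176.bin on disk, so that part is shown as comments:
#
#   import fasttext
#   model = fasttext.load_model("lid.176.bin")
#   labels, scores = model.predict("what language is this", k=3)
#
# model.predict returns labels such as "__label__en"; this helper cleans them up.

def clean_predictions(labels, scores):
    """Convert fasttext labels such as '__label__en' to plain language codes."""
    return [(label.replace("__label__", ""), float(score))
            for label, score in zip(labels, scores)]

print(clean_predictions(("__label__en", "__label__bn"), (0.9485, 0.0090)))
```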

Detailed Installation

If you have any problems installing from requirements.txt you can instead install the libraries one by one.

sudo apt-get install -y libicu-dev
sudo apt-get install -y python3-pip

sudo pip3 install pyicu	
sudo pip3 install numpy
sudo pip3 install Flask
sudo pip3 install polyglot
sudo pip3 install morfessor
sudo pip3 install langid
sudo pip3 install newspaper3k
sudo pip3 install pycld2
sudo pip3 install gensim
sudo pip3 install spacy
sudo pip3 install readability-lxml
sudo pip3 install BeautifulSoup4
sudo pip3 install afinn
sudo pip3 install textblob
sudo pip3 install git+https://github.com/facebookresearch/fastText.git

The /status API endpoint lists any missing Python modules: http://localhost:6400/status

Install Recipe for forge.laravel.com servers

Add this recipe on Forge and run it as root to install NLP Server as a service under Supervisor. Note that the recipe clones the upstream web64/nlpserver repository; substitute this fork's URL (and adjust the paths) if you want the /fasttext endpoint.

# Install NLPserver
cd /home/forge/
git clone https://github.com/web64/nlpserver.git
chown -R forge:forge /home/forge/nlpserver
cd /home/forge/nlpserver

# Install pkg-config. This package is used to find the ICU version
sudo apt install pkg-config

# python packages
apt-get install -y python-numpy libicu-dev
apt-get install -y python3-pip
pip3 install -r requirements.txt

# English Language models - add other models you might require
polyglot download LANG:en
python3 -m spacy download en

# Supervisor - update paths in nlpserver.conf if different
cp nlpserver.conf /etc/supervisor/conf.d
supervisorctl reread
supervisorctl update
supervisorctl start nlpserver

Start NLP Server web service:

To start the server manually run:

$ mkdir -p logs
$ nohup python3 nlpserver.py >logs/nlpserver_out.log 2>logs/nlpserver_errors.log &

You can now access the web console and test that the NLP Server is working: http://localhost:6400/

API Endpoints

| Endpoint | Method | Parameters | Info | Library |
|---|---|---|---|---|
| /status | GET | | Lists installed Polyglot language models and missing Python packages | |
| /newspaper | GET | url | Article extraction for the provided URL | newspaper |
| /newspaper | POST | html | Article extraction for the provided HTML | newspaper |
| /readability | GET | url | Article extraction for the provided URL | readability-lxml |
| /readability | POST | html | Article extraction for the provided HTML | readability-lxml |
| /polyglot/entities | POST | text, lang | Entity extraction and sentiment analysis for the provided text | polyglot |
| /polyglot/sentiment | POST | text, lang | Sentiment analysis for the provided text | polyglot |
| /polyglot/neighbours | GET | word, lang | Embeddings: neighbouring words | polyglot |
| /langid | GET, POST | text | Language detection for the provided text with langid | langid |
| /fasttext | GET, POST | text, predictions | Language detection for the provided text with FastText | fasttext |
| /gensim/summarize | POST | text, word_count | Summarization of long text | gensim |
| /spacy/entities | POST | text, lang | Entity extraction for the provided text in the given language | SpaCy |
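All of the endpoints above accept plain query or form parameters and return JSON, so they are easy to call from any language. Below is a minimal Python sketch using only the standard library; the base URL assumes the default port used throughout this README, and `build_url` / `build_post_body` are illustrative helpers, not part of the repository:

```python
from urllib.parse import urlencode

BASE = "http://localhost:6400"  # default NLP Server address

def build_url(endpoint, **params):
    """Build a GET URL for an NLP Server endpoint."""
    query = urlencode(params)
    return f"{BASE}/{endpoint}?{query}" if query else f"{BASE}/{endpoint}"

def build_post_body(**params):
    """Form-encode parameters for the POST endpoints (e.g. text, word_count)."""
    return urlencode(params).encode()

print(build_url("fasttext", text="what language is this", predictions=3))

# With the server running, fetch and decode a response with the standard library:
#   import json, urllib.request
#   result = json.load(urllib.request.urlopen(build_url("langid", text="hello")))
```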

Usage

For example API responses, see the /response_examples/ directory.

/newspaper - Article & Metadata Extraction

Returns article text, authors, main image, publish date, and metadata for the given URL or HTML.

From URL:

GET /newspaper?url=http://...

curl http://localhost:6400/newspaper?url=https://github.com/web64/nlpserver

Example JSON response: https://raw.githubusercontent.com/web64/nlpserver/master/response_examples/newspaper.json

From HTML:

POST /newspaper [html="<html>....</html>"]

curl -d "html=<html>...</html>" http://localhost:6400/newspaper

Language Detection with langid

GET|POST /langid?text=what+language+is+this

curl http://localhost:6400/langid?text=what+language+is+this

Returns the language code of the provided text. Note that langid reports an unnormalised log-probability, so the score is negative; values closer to zero indicate higher confidence.

"langid": {
  "language": "en",
  "score": -42.31864953041077
}

Language Detection with FastText

GET|POST /fasttext?text=what+language+is+this

curl http://localhost:6400/fasttext?text=what+language+is+this

Returns the language code of the provided text together with a confidence score between 0 and 1.

"fasttext": {
  "language": "en",
  "score": 0.9485139846801758
}

An optional predictions parameter allows FastText to return more than one candidate language (note that the & requires the URL to be quoted in the shell):

curl "http://localhost:6400/fasttext?text=what+language+is+this&predictions=3"
"fasttext": {
    "language": "en",
    "score": 0.9485139846801758,
    "results": [
      [
        "en",
        0.9485139846801758
      ],
      [
        "bn",
        0.009047050029039383
      ],
      [
        "ru",
        0.005073812324553728
      ]
    ]
  }

Polyglot Entity Extraction & Sentiment Analysis

POST /polyglot/entities [params: text]

curl -d "text=The quick brown fox jumps over the lazy dog" http://localhost:6400/polyglot/entities

SpaCy Entity Extraction (NER)

POST /spacy/entities [params: text, lang]

Note: You'll need to have downloaded the language models for the language you are using.

# For example for English:
python3 -m spacy download en
curl -d "text=President Donald Trump says dialogue with North Korea is productive" http://localhost:6400/spacy/entities
"entities": {
    "GPE": {
      "0": "North Korea"
    },
    "PERSON": {
      "0": "Donald Trump"
    }
  }
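The nested response shape above (entity label → numeric index → entity text) can be built from spaCy's entity list in a few lines. A sketch, where `group_entities` is an illustrative helper rather than code from this repository:

```python
# Sketch: building the nested response shown above from (text, label) pairs.
# With spaCy installed and a model downloaded, the pairs come from:
#   import spacy
#   nlp = spacy.load("en_core_web_sm")
#   pairs = [(ent.text, ent.label_) for ent in nlp(text).ents]

def group_entities(pairs):
    """Group (text, label) pairs into {label: {"0": text, "1": ...}}."""
    out = {}
    for text, label in pairs:
        bucket = out.setdefault(label, {})
        bucket[str(len(bucket))] = text
    return out

pairs = [("Donald Trump", "PERSON"), ("North Korea", "GPE")]
print(group_entities(pairs))
```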

Sentiment Analysis

POST /polyglot/sentiment [params: text, lang (optional)]

curl -d "text=This is great!" http://localhost:6400/polyglot/sentiment
{
  "message": "Sentiment Analysis API - POST only",
  "sentiment": 1.0
}

Text summarization

POST /gensim/summarize [params: text, word_count (optional)]

Generates a summary of long text. Control the length of the summary with the optional word_count parameter, which sets the maximum number of words in the summary.

Neighbouring words

GET /polyglot/neighbours?word=WORD [&lang=en ]

Uses Polyglot's embeddings to return neighbouring (related) words for the provided word.

curl http://localhost:6400/polyglot/neighbours?word=obama
"neighbours": [
    "Bush",
    "Reagan",
    "Clinton",
    "Ahmadinejad",
    "Nixon",
    "Karzai",
    "McCain",
    "Biden",
    "Huckabee",
    "Lula"
  ]

/readability - Article Extraction

Note: In most cases Newspaper performs better than Readability.

From URL:

GET /readability?url=https://github.com/web64/nlpserver

curl http://localhost:6400/readability?url=https://github.com/web64/nlpserver

From HTML:

POST /readability [html="<html>....</html>"]

curl -d "html=<html>...</html>" http://localhost:6400/readability

Run as a service:

First, install Supervisor if not already installed

sudo apt-get update && sudo apt-get install -y python3-setuptools
sudo apt install supervisor

Copy nlpserver.conf to /etc/supervisor/conf.d/ and edit the paths to match your installation. Then start the NLP Server:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start nlpserver
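The repository ships its own nlpserver.conf. If you need to write the Supervisor entry by hand, a typical program section looks roughly like the sketch below; the paths and user are examples and must match your installation:

```ini
[program:nlpserver]
command=python3 /home/forge/nlpserver/nlpserver.py
directory=/home/forge/nlpserver
user=forge
autostart=true
autorestart=true
stdout_logfile=/home/forge/nlpserver/logs/nlpserver_out.log
stderr_logfile=/home/forge/nlpserver/logs/nlpserver_errors.log
```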

Contribute

If you are familiar with NLP or Python, please let us know how this project can be improved!

