Building and Deploying A Text Classification Web App


Web App: https://docwebapp-j3zdo3lhcq-uc.a.run.app/

About


In this project, over a series of blog posts, I'll be building a document classification (also known as text classification) model and deploying it as part of a web application that predicts the topic of research papers from their abstracts.

1st Blog Post: Dealing With Imbalanced Data


In the first blog post I will be working with the Scikit-learn library and an imbalanced dataset (corpus) that I will create from summaries of papers published on arXiv. The topic of each paper is already labeled as its category, alleviating the need for me to label the dataset. The imbalance in the dataset comes from the imbalance in the number of samples in each of the categories we are trying to predict. Imbalanced data occurs quite frequently in classification problems and makes developing a good model more challenging. Often it is too expensive, or simply not possible, to get more data on the classes that have too few samples. Developing strategies for dealing with imbalanced data is therefore paramount for creating a good classification model. I will cover some of the basics of dealing with imbalanced data using the Imbalanced-Learn library, as well as building a Naive Bayes classifier and a Support Vector Machine using Scikit-learn. I will also cover the basics of term frequency-inverse document frequency (TF-IDF) and visualize it using the Plotly library.
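
As a rough sketch of what the two common remedies look like in code, here is TF-IDF featurization followed by oversampling with Imbalanced-Learn and, alternatively, class weighting in Scikit-learn's SVM. The toy abstracts, labels, and variable names below are placeholders, not the notebook's actual code or data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from imblearn.over_sampling import RandomOverSampler

# Placeholder corpus: in the post this is built from arXiv paper summaries
abstracts = ["We prove a new bound on the spectral gap ...",
             "We train a convolutional network to classify images ..."]
labels = ["math", "cs"]

# Turn the raw abstracts into TF-IDF weighted term vectors
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Remedy 1: resample so every category has as many samples as the largest
X_resampled, y_resampled = RandomOverSampler(random_state=0).fit_resample(X, labels)

# Remedy 2: keep the data as-is but weight classes inversely to frequency
clf = LinearSVC(class_weight="balanced")
clf.fit(X, labels)
```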

2nd Blog Post: Using The Natural Language Toolkit


In this blog post I picked up from the last one and went over using the Natural Language Toolkit (NLTK) to improve the performance of our text classification models. Specifically, we covered removing stopwords, stemming, and lemmatization. I applied each of these to the weighted Support Vector Machine model and performed a grid search to find the optimal parameters for our models. Finally, I persisted the model to disk using Joblib so that we can use it as part of a REST API.
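
A minimal sketch of that workflow, assuming an NLTK-based preprocessor plugged into a Scikit-learn pipeline; the parameter grid and the file name model.joblib are illustrative assumptions, not the post's actual choices:

```python
import joblib
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

nltk.download("stopwords")
nltk.download("wordnet")

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    # Remove stopwords and reduce each remaining word to its lemma
    return " ".join(lemmatizer.lemmatize(word)
                    for word in text.lower().split()
                    if word not in stop_words)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(preprocessor=preprocess)),
    ("svm", LinearSVC(class_weight="balanced")),
])

# Grid search over a small, illustrative set of hyperparameters
grid = GridSearchCV(pipeline,
                    {"svm__C": [0.1, 1, 10],
                     "tfidf__ngram_range": [(1, 1), (1, 2)]})
# grid.fit(abstracts, labels)  # the corpus built in the first post

# Persist the best model so the REST API can load it later
# joblib.dump(grid.best_estimator_, "model.joblib")
```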

3rd Blog Post: A Machine Learning Powered Web App


In this post we'll build out a serverless web app using a few technologies. The advantage of a serverless framework for me is cost effectiveness: I pay very little unless the web app gets heavy traffic, and I don't expect people to visit this app very often. The trade-off of going serverless is higher latency, which I can live with. I'll first go over how to convert the text classification model from the last post into a REST API using FastAPI and Joblib. Using the model this way allows us to send paper abstracts as JSON through an HTTP request and get back the predicted topic label for each abstract. After this I'll build out a web application using FastAPI and Bootstrap; Bootstrap gives us a beautiful, responsive website without having to write much HTML or JavaScript by hand. Finally, I'll go over deploying both the model API and the web app with Docker and Google Cloud Run to build out a serverless web application!
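
A minimal sketch of what the model side of that could look like with FastAPI; the file name model.joblib, the /predict route, and the field names are assumptions for illustration, not necessarily what the repository uses:

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # pipeline persisted in the second post

class Abstract(BaseModel):
    text: str

@app.post("/predict")
def predict(abstract: Abstract):
    # The pipeline handles preprocessing and TF-IDF, so raw text goes in
    label = model.predict([abstract.text])[0]
    return {"topic": label}
```

Served with, for example, `uvicorn main:app`, this accepts a JSON body like {"text": "..."} and returns the predicted topic as JSON.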

How To Run This:


To use the notebooks in this project, first install Docker; then you can start the notebook server with the command:

docker-compose up

and go to the posted URL. To recreate the REST API and web app, use the commands listed in the modelapi and webapp directories, respectively.
