Package Developed as a Subproject of: Golino, H., & Teles, M. UVAi Vanguard: Transforming UVA’s Academic Landscape with AI Large Language Models. Funded by the Jefferson Trust.
With transforEmotion
you can use cutting-edge transformer models for zero-shot emotion classification of text, image, and video in R, all without the need for a GPU, subscriptions, paid services, or using Python.
- How to install the package?
- How to run sentiment analysis on text?
- How to run facial expression recognition on images?
- How to run facial expression recognition on videos?
You can find the latest stable version on CRAN. Install it in R with:
install.packages("transforEmotion")
If you want to use the latest development version, you can install it from GitHub using the devtools
package.
if(!"devtools" %in% row.names(installed.packages())){
install.packages("devtools")
}
devtools::install_github("atomashevic/transforEmotion")
After installing the package, load it in R.
# Load package
library(transforEmotion)
After loading the package for the first time, you need to set up the Python virtual environment. This will download the necessary Python packages and models. This step can take a few minutes, but it is only required once after installing the package on a new system.
# Run Python setup
setup_miniconda()
Warning
If you are using the radian console in VSCode or in a terminal emulator, you won't be able to set up the transforEmotion package. Radian is written in Python and (in most cases) already runs in your default Python environment. This prevents the transforEmotion package from setting up the new virtual environment and installing the correct versions of the necessary Python packages. Switch to the default R console and everything should work fine.
Next, load some data with text for analysis. The example below uses item descriptions for the personality trait extraversion in the NEO-PI-R inventory, found on the IPIP website.
# Load data
data(neo_ipip_extraversion)
For the example, the positively worded item descriptions will be used.
# Example text
text <- neo_ipip_extraversion$friendliness[1:5]
Next, pass the text to the transformer_scores() function
to obtain the probability that each item description corresponds to a given class. The classes defined below are the facets of extraversion in the NEO-PI-R. The example text data draws from the friendliness facet.
# Cross-Encoder DistilRoBERTa
transformer_scores(
text = text,
classes = c(
"friendly", "gregarious", "assertive",
"active", "excitement", "cheerful"
)
)
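To work with the scores rather than just print them, assign the result to an object. The short sketch below assumes (as an assumption, not a documented guarantee) that transformer_scores() returns a list with one named vector of class scores per input text.
# Hedged sketch: assumes the return value is a list of named score vectors,
# one per input text
scores <- transformer_scores(
  text = text,
  classes = c(
    "friendly", "gregarious", "assertive",
    "active", "excitement", "cheerful"
  )
)
# Under that assumption, pick the highest-scoring class for each item description
sapply(scores, function(s) names(s)[which.max(s)])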
The default transformer model is DistilRoBERTa, which is fast and accurate.
Another model that can be used is BART, a much larger and more computationally intensive model (slower prediction times). The BART model tends to be more accurate, but the accuracy gains over DistilRoBERTa are often negligible.
# Facebook BART Large
transformer_scores(
text = text,
classes = c(
"friendly", "gregarious", "assertive",
"active", "excitement", "cheerful"
),
transformer = "facebook-bart"
)
Any Text Classification Model with a Pipeline on huggingface
Any zero-shot text classification model hosted on huggingface can be used, as long as a pipeline is available for it. Below is an example using Typeform's DistilBERT model.
# Directly from huggingface: typeform/distilbert-base-uncased-mnli
transformer_scores(
text = text,
classes = c(
"friendly", "gregarious", "assertive",
"active", "excitement", "cheerful"
),
transformer = "typeform/distilbert-base-uncased-mnli"
)
The rag
function is designed to enhance text generation using Retrieval-Augmented Generation (RAG) techniques. This function allows users to input text data or specify a path to local PDF files, which are then used to retrieve relevant documents.
The rag function supports several large language models (LLMs), including TinyLLAMA, LLAMA-2, Mistral-7B, Orca-2, and Phi-2, each offering a different trade-off between computational efficiency and quality. The default is TinyLLAMA, the fastest of these models (a sketch of selecting a different model and retrieving from local PDFs appears at the end of this section).
Here's an example based on the description of this package. First, we specify the text data.
text <- "With `transforEmotion` you can use cutting-edge transformer models for zero-shot emotion
classification of text, image, and video in R, *all without the need for a GPU,
subscriptions, paid services, or using Python. Implements sentiment analysis
using [huggingface](https://huggingface.co/) transformer zero-shot classification model pipelines.
The default pipeline for text is
[Cross-Encoder's DistilRoBERTa](https://huggingface.co/cross-encoder/nli-distilroberta-base)
trained on the [Stanford Natural Language Inference](https://huggingface.co/datasets/snli) (SNLI) and
[Multi-Genre Natural Language Inference](https://huggingface.co/datasets/multi_nli) (MultiNLI) datasets.
Using similar models, zero-shot classification transformers have demonstrated superior performance
relative to other natural language processing models
(Yin, Hay, & Roth, [2019](https://arxiv.org/abs/1909.00161)).
All other zero-shot classification model pipelines can be implemented using their model name
from https://huggingface.co/models?pipeline_tag=zero-shot-classification."
And then we run the rag
function.
rag(text, query = "What is the use case for transforEmotion package?")
This code will produce output similar to the following:
The use case for transforEmotion package is to use cutting-edge transformer
models for zero-shot emotion classification of text, image, and video in R,
without the need for a GPU, subscriptions, paid services, or using Python.
This package implements sentiment analysis using the Cross-Encoder's DistilRoBERTa
model trained on the Stanford Natural Language Inference (SNLI) and MultiNLI datasets.
Using similar models, zero-shot classification transformers have demonstrated
superior performance relative to other natural language processing models
(Yin, Hay, & Roth, [2019](https://arxiv.org/abs/1909.00161)).
The transforEmotion package can be used to implement these models and other
zero-shot classification model pipelines from the HuggingFace library.
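To retrieve documents from local PDF files or to use one of the larger LLMs instead of the default TinyLLAMA, you can point rag() at a folder and name the model. The snippet below is only a hedged sketch: the argument names path and transformer, the model identifier "LLAMA-2", and the ./papers folder are assumptions based on the description above, not confirmed from the package documentation.
# Hedged sketch: `path`, `transformer`, and "LLAMA-2" are assumptions,
# and ./papers is a hypothetical folder of local PDF files
rag(
  path = "./papers",
  transformer = "LLAMA-2",
  query = "Summarize the main findings of these documents."
)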
For the Facial Expression Recognition (FER) task on images, we use OpenAI's CLIP transformer model. Two input arguments are needed: the path to the image and a list of emotion labels.
The path can be either a local filepath or a URL. Here's an example using a URL of an image of the Mona Lisa.
# Image URL or local filepath
image <- 'https://cdn.mos.cms.futurecdn.net/xRqbwS4odpkSQscn3jHECh-650-80.jpg'
# Array of emotion labels
emotions <- c("excitement", "happiness", "pride", "anger", "fear", "sadness", "neutral")
# Run FER
image_scores(image, emotions)
You can define up to 10 emotions. The output is a data frame with one row and columns corresponding to the emotions; the values are the FER scores for each emotion.
If no face is detected in the image, the output will be a 0x0 data frame.
If multiple faces are detected in the image, by default the function returns the FER scores for the largest (focal) face. Alternatively, you can select the face on the left or the right side of the image by specifying the face_selection argument, as sketched below.
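The following is a minimal sketch of selecting a specific face and checking for an empty result. It assumes that face_selection accepts values such as "left" and "right" (with the largest face as the default) and uses a hypothetical local file group-photo.jpg.
# Hedged sketch: `face_selection = "left"` and group-photo.jpg are assumptions
left_scores <- image_scores("group-photo.jpg", emotions, face_selection = "left")
# A 0x0 data frame means no face was detected
if (nrow(left_scores) == 0) {
  message("No face detected in the image.")
}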
Video processing works by extracting frames from the video and then running the image processing function on each frame. Two input arguments are needed: the path to the video and a list of emotion labels.
The path can be either a local filepath or a YouTube URL. Support for other video hosting platforms is not yet implemented.
# Video URL or local filepath
video_url <- "https://www.youtube.com/watch?v=hdYNcv-chgY&ab_channel=Conservatives"
# Array of emotion labels
emotions <- c("excitement", "happiness", "pride", "anger", "fear", "sadness", "neutral")
# Run FER on `nframes` of the video
result <- video_scores(video_url, classes = emotions,
                       nframes = 10, save_video = TRUE,
                       save_frames = TRUE, video_name = 'boris-johnson',
                       start = 10, end = 120)
Working with videos is more computationally demanding. This example extracts only 10 frames from the video, and it shouldn't take longer than a few minutes on an average laptop without a GPU (depending on the internet connection needed to download the video and the CLIP model). In research applications, we usually extract 100-300 frames per video. This can take much longer, so patience is advised while waiting for the results.
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., ... & Zettlemoyer, L. (2019). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., & Sutskever, I. (2021). Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020.
Yin, W., Hay, J., & Roth, D. (2019). Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. arXiv preprint arXiv:1909.00161.