
FAQ and Troubleshooting for PyABSA [使用方法和常见问题] #189

Closed · yangheng95 opened this issue May 5, 2022 · 28 comments

@yangheng95 (Owner) commented May 5, 2022

Here are the most frequently asked questions and troubleshooting advice:

About ABSADataset

We strongly suggest you share your dataset in ABSADatasets, which helps the community provide better checkpoints and develop better models. The datasets are released under their authors' licenses and are for research only.
Thanks to the contributors, we have collected many datasets in ABSADatasets, enough to train the universal checkpoints that are now available on Hugging Face Spaces.

We provide a data annotation tool so you can annotate your own dataset: download it and open the page in a browser to annotate.

Meanwhile, PyABSA provides tutorials on generating an inference set for aspect-based sentiment classification and on converting APC datasets to ATEPC datasets.

Put your dataset in the same location as 'integrated_datasets' (run any training script to download this folder); PyABSA auto-detects your training set, test set, and valid (dev) set (if any).

You can use the path or a keyword as the dataset param to locate your dataset; refer to ABSADatasets for how to use your own dataset in PyABSA. If you run into any problems, please report them. Make sure your dataset is encoded in UTF-8.
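For example, a minimal training sketch assuming the PyABSA v2 API (the folder name "my_dataset" is only a placeholder; the dataset argument also accepts the keyword of an integrated dataset):

from pyabsa import AspectTermExtraction as ATEPC

config = ATEPC.ATEPCConfigManager.get_atepc_config_english()
trainer = ATEPC.ATEPCTrainer(
    config=config,
    dataset="my_dataset",  # placeholder: a folder containing UTF-8 *.train/*.test/*.valid files, or a dataset keyword
)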

About Checkpoint

This is a personal project without hosting-server support, so I have to use public services to distribute checkpoints, e.g., Google Drive, Baidu Netdisk, and the Hugging Face Hub.

Generally, available_checkpoints() shows the checkpoints available for your version, and the checkpoints are downloaded from Google Drive automatically. However, if a checkpoint gets downloaded too frequently, Google disables the automatic download; in that case you can download it manually via a browser.

If you have no access to Google Drive, please check Baidu Netdisk for available checkpoints and download them manually.
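A minimal sketch of listing and loading a checkpoint, assuming the PyABSA v2 API (the name "multilingual" is the checkpoint used later in this thread; the available names depend on your version):

from pyabsa import available_checkpoints
from pyabsa import AspectTermExtraction as ATEPC

checkpoint_map = available_checkpoints()  # lists the checkpoints available for your PyABSA version
aspect_extractor = ATEPC.AspectExtractor(checkpoint="multilingual")  # downloads the checkpoint on first use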

About Config

The config implementations for aspect-based sentiment classification (ABSC/ASC), aspect term extraction & sentiment classification (ATEPC), and sentence-level text classification (TC) are similar; here is an example config:

transformers_based_config = {
    'model': LCF_ATEPC,  # Model class; check the available models in APCModelList, ATEPCModelList and TCModelList
    'optimizer': "adamw",  # Optimizer; both a class and a string name (from PyTorch) are acceptable
    'learning_rate': 0.00003,  # The learning rate of transformers-based models generally ranges in [1e-5, 5e-5]
    'pretrained_bert': "microsoft/deberta-v3-base",  # Accepts a Hugging Face Hub model or a local model, loaded via the AutoModel implementation
    'cache_dataset': True,  # Don't cache the dataset during development; changing a config param usually triggers a new caching process
    'warmup_step': -1,  # Default: no warmup steps; this is an experimental feature
    'use_bert_spc': False,  # Use the [CLS] + context + [SEP] + aspect + [SEP] input format, which is helpful in ABSA
    'show_metric': False,  # Display the classification report during/after training
    'max_seq_len': 80,  # The max input length in modeling; longer texts are truncated
    'patience': 5,  # Tell the trainer to stop after `patience` epochs without improvement
    'SRD': 3,  # Param of the local context focus (LCF) mechanism; generally no need to change it
    'use_syntax_based_SRD': False,  # Use syntax-based SRD in all models involving the LCF mechanism
    'lcf': "cdw",  # Type of LCF mechanism; accepts 'cdm' and 'cdw'
    'window': "lr",  # Only takes effect in LSA models; refer to the LSA paper
    'dropout': 0.5,  # Refer to the original dropout paper
    'l2reg': 0.000001,  # Model-specific; try several values to find the best setting
    'num_epoch': 10,  # If you have enough resources, set it to 30-40
    'batch_size': 16,  # If you have enough resources, set it to 32 or 64
    'initializer': 'xavier_uniform_',  # Not used in transformers-based models
    'seed': 52,  # Accepts an integer or a list/set of integers
    'polarities_dim': 2,  # Deprecated; set automatically according to the labels in the dataset
    'log_step': 50,  # Accepts -1 (evaluate every epoch) or an integer interval
    'gradient_accumulation_steps': 1,  # Unused
    'dynamic_truncate': True,  # Apply aspect-centered truncation instead of head truncation
    'srd_alignment': True,  # Try to align the syntax tree nodes (spaCy) with the tokenization (transformers)
    'evaluate_begin': 0  # Skip evaluation until epoch `evaluate_begin` to save time
}
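In practice, assuming the PyABSA v2 API (also used later in this thread), you usually start from a built-in config and override the fields above as attributes instead of building the dict by hand:

from pyabsa import AspectTermExtraction as ATEPC

config = ATEPC.ATEPCConfigManager.get_atepc_config_english()
config.model = ATEPC.ATEPCModelList.FAST_LCF_ATEPC
config.pretrained_bert = "microsoft/deberta-v3-base"
config.learning_rate = 3e-5
config.max_seq_len = 80
config.num_epoch = 10
config.batch_size = 16
config.seed = 52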

About Tutorial

This repo is mainly developed and maintained by myself, and it is not my main project, so I don't have enough time to prepare documentation.
As an alternative, this repo provides many tutorials in the demos folder to help you discover most of PyABSA's features. If there is anything you can't figure out, please open an issue.

About Model

Some users want to use the best model, but which model is best depends on the dataset. We provide a simple performance table of our models on public datasets; you can compare it with other repos/tools before deciding which one to use. Generally speaking, Fast-LCF is a good choice for most scenarios.

About Task

Currently, we only support aspect-based sentiment classification, aspect term extraction & sentiment classification, and sentence-level text classification. You can develop your own model based on PyABSA and share it with us, or introduce new tasks into PyABSA, even if they are not tightly integrated.

About Documentation

There is no plan to write documentation yet; if someone would like to do it, we could work on it together.

@yangheng95 yangheng95 changed the title FAQ and Troubleshooting for PyABSA FAQ and Troubleshooting for PyABSA [使用方法和常见问题] May 20, 2022
@yangheng95 yangheng95 pinned this issue Jul 3, 2022
@vsf365 commented Jul 14, 2022

Hi there!

I am analyzing tweets about immigration and trying to find the sentiment associated with the tweet, whether positive or negative. However, I was running into some issues with the word "immigrants" being used in a sentence.

For example, I plugged in the sentence "I think that illegal immigrants are detrimental to U.S. society" to the online aspect-based sentiment analysis (https://huggingface.co/spaces/Gradio-Blocks/Multilingual-Aspect-Based-Sentiment-Analysis), and did not get any results for a sentiment associated with "immigrants", when it should be negative. Here is a screenshot of what it looks like on my end.

Screen Shot 2022-07-14 at 11 35 41 AM

However, when changing the sentence to "I think that illegal ice cream parlors are detrimental to U.S. society", there is a negative sentiment associated with "ice cream" with a 0.9999 confidence level. Here is a screenshot of this test.

Screen Shot 2022-07-14 at 11 36 01 AM

I was wondering why this was happening and if there is a way to make it such that the word "immigrants" can be associated with a sentiment.

Thanks!

@yangheng95 (Owner) commented Jul 14, 2022

The result is highly dependent on the training data. Although our dataset contains up to 60K ABSA training examples, which is much more than other repos, no immigration-related corpus is included. So you need to collect and annotate some data (2K+ examples are necessary) and train our models; you can find the training scripts in the demos folder. You can annotate your dataset via this tool: https://github.com/yangheng95/ABSADatasets/tree/v1.2/DPT

@yangheng95 yangheng95 unpinned this issue Jul 23, 2022
@baihuajun24:

Hi there!
image
I am using the checkpoint='multilingual' aspect extractor and I noticed that if a long sentence is used for inference, the result is not shown. Is this caused by the input sentence length? What is the max length, and how can I enable longer sentences to be used as inference input?

@yangheng95 (Owner):

The max_seq_len is fixed at training time; if you want to use the largest max_seq_len = 512, you need to retrain the model, which would be very expensive. So you'd better split your long sentence into shorter texts and combine the outputs.
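A minimal splitting sketch in plain Python (no PyABSA API involved; the limit of 60 words is an arbitrary assumption, pick something safely below the model's max_seq_len). Each chunk can then be passed to the aspect extractor and the outputs merged:

def split_long_text(text, max_words=60):
    """Greedily pack sentences into chunks of roughly max_words words."""
    chunks, current = [], []
    for sentence in text.replace("!", ".").replace("?", ".").split("."):
        words = sentence.split()
        if not words:
            continue
        if current and len(current) + len(words) > max_words:
            chunks.append(" ".join(current) + ".")
            current = []
        current.extend(words)
    if current:
        chunks.append(" ".join(current) + ".")
    return chunks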

@christianjosef27:

Hello! I would like to train your model with a training dataset containing student reviews. However, I am not sure how to use my dataset since I did not use your annotation/data preparation tool.

The training dataset is an Excel file (see the attached screenshot) with about 6k rows and the columns "sentence" (all reviews split into sentences), "aspect term" (a term that occurs in the sentence), "sentiment" (negative, positive or neutral) and "aspect category" (one of my own defined categories).
train dataset screenshot

Can you tell me how I can transform this dataset into the required format in order to train your model?

Thanks so much, Christian

@yangheng95 (Owner):

Which task do you need to do: APC, ATEPC or ASTE? See the demo https://huggingface.co/spaces/yangheng/PyABSA for details.


@christianjosef27:

I want to do ATEPC. I understand that with this task I can extract one or multiple aspect terms and determine their sentiments. I hope I can use my train dataset somehow; otherwise it would be a lot of work to recreate it with your provided tool.

Thanks for your help!

PS: the "aspect category" column of my train dataset is not a must. I just added it because I might use it later on.

@yangheng95 (Owner):

First, convert your data to the APC format as shown below (you need to write the conversion code yourself):
image

Then, convert it to an ATEPC dataset with convert_apc_set_to_atepc_set.
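A minimal sketch of that second step (the file name is only an example; the helper writes a corresponding .atepc file next to the input, as seen later in this thread):

from pyabsa import convert_apc_set_to_atepc_set

convert_apc_set_to_atepc_set("universities.apc.dataset.txt")  # pass the path of your APC-format text file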

@christianjosef27:

Okay I see, thank you for those hints! I will try that; however, I was wondering about two points:

1. Is it true that this is also a "common" way of doing it, i.e. possible without too big obstacles? Or is it rather hard to convert my format into the APC format? (I have a coding background, but not a lot.)

2. And the second step of converting to an ATEPC dataset is basically just using that function of yours, right?

@yangheng95 (Owner):

Yes, I think it is simple to code. The second step only calls an API.

@christianjosef27:

Okay, I'm gonna try that, thanks for your quick help! :)

@christianjosef27:

Hello, I now have the APC format of my training set (>8k) in the form of one text file like this:
example_APC-format

However, I am not sure how to perform the next steps in order to prepare my training data and finally fit it to your model.
I tried to do the second step of converting to an ATEPC dataset, but it didn't work because the module was not found
(I passed my local txt file as the argument):
import_problem_convert_to-atepc

I have not done any train/test split yet (therefore I have not renamed the txt file to *test.apc.data etc.), and I have not registered my text file.

I would appreciate it if you could help me out with the next steps.

@yangheng95 (Owner):

Try: from pyabsa import convert_apc_set_to_atepc_set

@christianjosef27:

If I try 'from pyabsa import convert_apc_set_to_atepc_set' I get the following error:
import_error

It might have worked yesterday, but I uninstalled and reinstalled pyabsa. It seems to be installed correctly via pip, but this error shows I am missing something. I will try to reinstall again.

@yangheng95 (Owner):

Try pip install datasets

@christianjosef27:

Thank you! That worked now. I only saw that 1.8k tuples raised the IgnoreError because their aspects were "NULL".
However, now I have a file "universities.apc.dataset.txt.atepc".

Any hints on how to proceed? I'm not sure how to do the train/test/valid split on that ATEPC file now.
But assuming I then have 3 files (*.train.data.atepc, *.test.data.atepc, *.val.data.atepc), I have to upload them into your atepc_datasets folder, right?

And the step "Register your dataset in PyABSA" afterwards is also necessary, I assume?

@yangheng95 (Owner):

Please show me some examples of your annotated data so I can find what is wrong. Registering your dataset is optional; do it if you would like to share your dataset with the community.

@christianjosef27:

Example of train data in excel:
EX_train_excel

Example of train data in APC format:
EX_train_APC

I also annotated NULL aspects (for every sentence where I didn't find an exact aspect for my use case). I might recode them to an overall category in the end, but I am not sure if NULL aspects are supported in PyABSA. In that case I might just not use them.

Okay, I will be able to share it, yes.

@yangheng95 (Owner):

The NULL label is not supported in the APC and ATEPC subtasks. However, you can write conversion code yourself to adapt your data to the ASTE or ACOS subtasks.

@yangheng95 (Owner):

Please refer to https://github.com/yangheng95/ABSADatasets/tree/v2.0/datasets for annotation.

@christianjosef27:

Okay, and is there any function for train/test splits in PyABSA? What did you use to make your train/test splits?

I'm not sure whether sklearn's train_test_split works for that, or whether I can split the one ATEPC file I have.

@yangheng95 (Owner):

I don't quite follow what you mean by the ATEPC object, but generally you can split the data using sklearn.
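A minimal splitting sketch with sklearn, assuming the APC-format file stores each example as three consecutive lines (the sentence with $T$, the aspect, and the polarity); the file names follow the ones mentioned above:

from sklearn.model_selection import train_test_split

with open("universities.apc.dataset.txt", encoding="utf-8") as f:
    lines = [line.rstrip("\n") for line in f if line.strip()]

# group every three lines (text with $T$, aspect, polarity) into one example
examples = [lines[i:i + 3] for i in range(0, len(lines), 3)]
train, test = train_test_split(examples, test_size=0.2, random_state=42)

def write_split(path, split):
    with open(path, "w", encoding="utf-8") as f:
        for example in split:
            f.write("\n".join(example) + "\n")

write_split("universities.train.apc.dataset.txt", train)
write_split("universities.test.apc.dataset.txt", test)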

@christianjosef27:

Hello, I have a question about combining the tasks ATE and APC:
I trained different models on my custom data.

I have one model_1 that performs well in APC but poorly in ATE.
I have another model_2 that performs well in ATE but poorly in APC.

Is there a way to combine those two models to enhance the performance of both tasks?
E.g. using model_2 to get the ATEPC results, and then passing the extracted aspects to model_1?

Many thanks for any hints to a newbie in ML...

@HMUniversalis:

Hi there,

I'm trying to run ABSA using the triplet (ASTE) code, but it is giving me the following results. Can you tell me what the problem is?

code

!pip install pyabsa -U
from pyabsa import ModelSaveOption
from pyabsa import DeviceTypeOption
from pyabsa import DatasetItem
from pyabsa import AspectSentimentTripletExtraction as ASTE

results

Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: pyabsa in /usr/local/lib/python3.10/dist-packages (1.16.27)
Collecting pyabsa
Using cached pyabsa-2.3.1-py3-none-any.whl (526 kB)
Collecting boostaug>=2.3.5
Using cached boostaug-2.3.5-py3-none-any.whl (16 kB)
[... "Requirement already satisfied" lines for the remaining dependencies trimmed ...]
Installing collected packages: boostaug, pyabsa
Attempting uninstall: boostaug
Found existing installation: boostaug 2.2.5
Uninstalling boostaug-2.2.5:
Successfully uninstalled boostaug-2.2.5
Attempting uninstall: pyabsa
Found existing installation: PyABSA 1.16.27
Uninstalling PyABSA-1.16.27:
Successfully uninstalled PyABSA-1.16.27
Successfully installed boostaug-2.3.5 pyabsa-2.3.1
:914: ImportWarning: APICoreClientInfoImportHook.find_spec() not found; falling back to find_module()
:914: ImportWarning: _PyDriveImportHook.find_spec() not found; falling back to find_module()
:914: ImportWarning: _OpenCVImportHook.find_spec() not found; falling back to find_module()
:914: ImportWarning: _BokehImportHook.find_spec() not found; falling back to find_module()
:914: ImportWarning: _AltairImportHook.find_spec() not found; falling back to find_module()
[... the five ImportWarning lines above repeat many times ...]
[2023-05-09 17:23:59] (2.3.1) PyABSA(2.3.1): If your code crashes on Colab, please use the GPU runtime. Then run "pip install pyabsa[dev] -U" and restart the kernel.
Or if it does not work, you can use v1.16.27

[New Feature] Aspect Sentiment Triplet Extraction since v2.1.0 (https://github.com/yangheng95/PyABSA/tree/v2/examples-v2/aspect_sentiment_triplet_extration)
[New Feature] Aspect CategoryOpinion Sentiment Quadruple Extraction since v2.2.0 (https://github.com/yangheng95/PyABSA/tree/v2/examples-v2/aspect_opinion_sentiment_category_extraction)

I am using Colab with GPU. I tried versions 2.3.1, 2.0.27, 1.16.27. None of them worked.

@dantaninecz:

I am having an issue running the ATEPCTrainer.

`warnings.filterwarnings("ignore")

config = (
ATEPC.ATEPCConfigManager.get_atepc_config_english()
) # this config contains 'pretrained_bert', it is based on pretrained models

config.model = ATEPC.ATEPCModelList.FAST_LCF_ATEPC # FAST_LCF_ATEPC improved version of LCF_ATEPC, base version BERT_BASE_ATEPC

config.get_atepc_config_english
config.
config.batch_size = 8
config.patience = 2
config.log_step = -1
config.seed = [1]
config.verbose = False # If verbose == True, PyABSA will output the model structure and several processed data examples
config.notice = (
"A training example for aspect term extraction" # for memos usage
)

trainer = ATEPC.ATEPCTrainer(
config=config,
dataset=train_set_path,
from_checkpoint="english", # if you want to resume training from our pretrained checkpoints, you can pass the checkpoint name here
auto_device=DeviceTypeOption.AUTO, # use cuda if available
checkpoint_save_mode=ModelSaveOption.SAVE_MODEL_STATE_DICT, # save state dict only instead of the whole model
load_aug=False, # there are some augmentation dataset for integrated datasets, you use them by setting load_aug=True to improve performance
)`

produces:
image

Thanks in advance for looking!

@SupritYoung:

same question...

@yangheng95 (Owner):

Please try 1.16.28 or the latest v2. Otherwise, you can clone the source code, search for load_state_dict, and add strict=False to that call.
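A minimal sketch of that workaround (illustrative only, not PyABSA's actual source; model stands for whatever model instance the checkpoint is being loaded into):

import torch

def load_state_dict_loosely(model, checkpoint_path):
    """Load a checkpoint while ignoring missing/unexpected keys via strict=False."""
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(state_dict, strict=False)
    return model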
