
ModuleNotFoundError after calling pythainlp.sentiment(string, 'ulmfit') #172

Closed
LXZE opened this issue Jan 22, 2019 · 6 comments

@LXZE
Contributor

LXZE commented Jan 22, 2019

Describe the bug
After installing PyThaiNLP 1.7 inside Google Colab, I called pythainlp.sentiment(string, 'ulmfit') and got the error shown in the example below.

To Reproduce
Steps to reproduce the behaviour:

  1. Create a new Google Colab notebook, set it to use Python 3, and set the hardware accelerator to GPU
  2. Run the code below
!pip install pythainlp
!pip install torchvision
!pip install https://github.com/fastai/fastai/archive/1.0.22.zip
import pythainlp
test_string = 'ประโยคทดสอบ'
print(pythainlp.sentiment(test_string, 'ulmfit'))

Note
After trying to install torchvision and fastai at both the latest version and v1.0.22 (according to this issue), the fastai library still raises ModuleNotFoundError: No module named 'fastai.lm_rnn', so I am raising this issue.

Expected behaviour
According to the PyThaiNLP 1.7 documentation on sentiment analysis,
this method should return a prediction for the given string, and calling sentiment(string, 'ulmfit') should not raise an error or require any additional code to import the required libraries.

Example
Here is a Google Colab notebook link

Desktop

  • OS: macOS High Sierra 10.13.4
  • Python Version: 3.6 (Google Colab)
  • Version: pythainlp=1.7

Suggestion
Since this issue may be a fastai problem, I suggest that this method either install the torch library and the specific fastai version it needs automatically, or that the model be made compatible with the latest fastai release.
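
To illustrate the suggestion: the 'ulmfit' engine could guard its optional dependencies and fail with an actionable message instead of a bare ModuleNotFoundError. This is an illustrative sketch only, not PyThaiNLP's actual code:

# Illustrative sketch, not PyThaiNLP's code: guard the optional dependency
# and tell the user what to install instead of letting the import error escape.
try:
    import fastai  # only needed for the 'ulmfit' engine
except ImportError as exc:
    raise ImportError(
        "The 'ulmfit' sentiment engine requires the fastai library; "
        "please install a version compatible with your PyThaiNLP release."
    ) from exc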

@wannaphong
Member

Please use fastai 0.7 (PyThaiNLP 1.7). We will update to fastai 1.0 in the next release.
We will remove sentiment in the next release (PyThaiNLP 2.0).

@LXZE
Contributor Author

LXZE commented Jan 24, 2019

I've tried fastai version 0.7 with the same code and this error occurred:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-3f17b1579a73> in <module>()
      3 test_string = 'ประโยคทดสอบ'
      4 
----> 5 print(pythainlp.sentiment(test_string, 'ulmfit'))

/usr/local/lib/python3.6/dist-packages/pythainlp/sentiment/__init__.py in sentiment(text, engine)
     36                 return classifier.classify(featurized_test_sentence)
     37         elif engine=='ulmfit':
---> 38                 from pythainlp.sentiment import ulmfit_sent
     39                 tag=ulmfit_sent.get_sentiment(text)
     40                 sa=""
...
...
...
/usr/local/lib/python3.6/dist-packages/torchtext/data/field.py in Field()
    116     # numeric type.
    117     dtypes = {
--> 118         torch.float32: float,
    119         torch.float: float,
    120         torch.float64: float,

AttributeError: module 'torch' has no attribute 'float32'

P.S. So are you going to remove the sentiment analysis function in the next release and move this task to another library, or something along those lines?
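
A quick way to check whether this is a PyTorch version mismatch (diagnostic sketch only): the dtype attributes such as torch.float32 were introduced in PyTorch 0.4.0, so an older torch in the runtime would produce exactly this AttributeError.

import torch

print(torch.__version__)           # torch.float32 and the other dtype attributes exist from PyTorch 0.4.0 onward
print(hasattr(torch, 'float32'))   # False here would explain the AttributeError above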

@wannaphong
Member

wannaphong commented Jan 24, 2019

@LXZE fastai 0.7 needs PyTorch 0.4.1.
In the next release (PyThaiNLP 2.0) you can train the sentiment analysis yourself from https://github.com/PyThaiNLP/pythainlp/blob/dev/notebooks/ulmfit_sentiment_example.ipynb, because we don't have enough contributors to maintain this function.
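
For anyone hitting this in Colab, pinning the stack along these lines should line the versions up (a sketch based only on the versions mentioned in this thread; wheel availability in Colab may vary):

# Versions taken from this thread: fastai 0.7 needs PyTorch 0.4.1, with PyThaiNLP 1.7.
!pip install torch==0.4.1
!pip install fastai==0.7.0
!pip install "pythainlp>=1.7,<1.8"
# Restart the Colab runtime afterwards so the pinned torch is the one that gets imported.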

@AlgoBeach

AlgoBeach commented Jan 24, 2019 via email

@cstorm125
Member

@LXZE It seems like there is a bug in the pre-1.0 versions of fastai. I've tried combing through the commits, but still no luck. We are removing sentiment analysis because

  1. It is redundant with text classification.
  2. We trained the sentiment model on product reviews, which can hardly generalize to other types of sentiment. It would be better if you train on your own domain-dependent datasets (a minimal baseline sketch follows below). We are also working on releasing a few diverse training datasets for this purpose.
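
For a starting point while waiting for the PyThaiNLP 2.0 ULMFiT notebook route, a minimal domain-specific baseline could look like the sketch below. This is a bag-of-words classifier using PyThaiNLP tokenization with scikit-learn, not the ULMFiT approach; the example texts and labels are made up:

from pythainlp.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up dataset; replace with your own domain-dependent texts and labels.
texts = ['สินค้าดีมาก', 'บริการแย่มาก', 'ชอบมาก', 'ไม่ประทับใจเลย']
labels = ['pos', 'neg', 'pos', 'neg']

# Tokenize Thai text with PyThaiNLP and fit a TF-IDF + logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(tokenizer=word_tokenize),
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(['ประโยคทดสอบ']))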

@LXZE
Contributor Author

LXZE commented Jan 28, 2019

Understood. I will train the model on my own later, following the given example. Closing this issue now.

@LXZE LXZE closed this as completed Jan 28, 2019