
feat(semanticTextSim): Semantic Text sim algorithm using doc2vec #60

Draft
hastagAB wants to merge 1 commit into master from feat/semanticTextSim

Conversation

@hastagAB (Member)

Description

New open source license scanning algorithm: Semantic Text Similarity finds the similarity between documents according to their semantics.
The Gensim implementation of Doc2Vec converts the whole document (unlike word2vec, which works word by word) into a vector together with its label.
The Doc2Vec model is trained using the file name as the label and the license text as the document.
The current training dataset is the txt format of the license-list-data provided by SPDX.
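
For reference, a minimal sketch of this training setup, assuming gensim's Doc2Vec API (the directory layout, tokenization, and hyperparameters below are illustrative and may differ from what train.py actually does):

import os
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

TEXT_DIR = "text"  # folder holding the SPDX license .txt files

# Each license text becomes one TaggedDocument, labelled with its file name
# (i.e. the license identifier), as described above.
documents = []
for name in os.listdir(TEXT_DIR):
    with open(os.path.join(TEXT_DIR, name), errors="ignore") as f:
        tokens = simple_preprocess(f.read())
    documents.append(TaggedDocument(words=tokens, tags=[os.path.splitext(name)[0]]))

# Hyperparameters are placeholders, not necessarily the ones used in train.py.
model = Doc2Vec(documents, vector_size=100, min_count=2, epochs=40)
model.save("spdxDoc2Vec.model")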

Files

  • semanticTextSim.py (implementation of the algorithm)
  • spdxDoc2Vec.model (the trained model, in binary form)
  • train.py (code to train the model)
  • text (folder containing the SPDX license dataset)

Test

  • Test the agent by scanning any file for license statements (see the sketch after this list):

atarashi -a semanticTextSim <pathToFile>
Currently, it returns the license name with the highest cosine similarity score.

Note: the agent can also return the top ten most similar license names with their similarity scores.

  • Train the model (Optional)

cd to the semanticTextSim folder.
Run: python train.py
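
For reference, a minimal sketch of the scanning step (assuming gensim >= 4.0, where document vectors live under model.dv; file names are placeholders and the actual code in semanticTextSim.py may differ):

from gensim.models.doc2vec import Doc2Vec
from gensim.utils import simple_preprocess

model = Doc2Vec.load("spdxDoc2Vec.model")

# "scanned_file.txt" stands in for the <pathToFile> passed to the agent.
with open("scanned_file.txt", errors="ignore") as f:
    tokens = simple_preprocess(f.read())

# Infer a vector for the scanned text and rank license labels by cosine similarity.
vector = model.infer_vector(tokens)
top_matches = model.dv.most_similar([vector], topn=10)  # list of (license_name, score)
print(top_matches[0])  # the license with the highest cosine similarity score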

@hastagAB requested a review from GMishx on August 24, 2019 09:48
@hastagAB added the GSOC-19 label (label to tag pull requests which are part of the GSOC 2019 activities) on Aug 24, 2019
@hastagAB force-pushed the feat/semanticTextSim branch from 0bf4d1a to 97b7df6 on November 25, 2019 15:03
@amanjain97 (Collaborator)

@hastagAB Can you please share results on the time taken to find a license and on the precision of the algorithm?

@amanjain97 (Collaborator) commented Nov 26, 2019

@hastagAB When are these license files used? It seems that a lot of files may slow down the build process.
If they are only used once (I think only for creating the doc2vec model), we could download them during the build process, create the model, and delete them afterwards. That would make the process cleaner (I am not sure). @GMishx Please review the process.

@hastagAB (Member, Author) commented Jan 2, 2020

> @hastagAB Can you please share results on the time taken to find a license and on the precision of the algorithm?

I have tested all the algorithms using our evaluator and created a report to check accuracy and time.

As of now, semanticTextSim has very low accuracy because the dataset used and the code are at a primitive stage.
This PR adds the initial doc2vec ML model; the model will be improved gradually by manually curating a better training dataset.

@hastagAB (Member, Author) commented Jan 2, 2020

> @hastagAB When are these license files used? It seems that a lot of files may slow down the build process.
> If they are only used once (I think only for creating the doc2vec model), we could download them during the build process, create the model, and delete them afterwards. That would make the process cleaner (I am not sure). @GMishx Please review the process.

Yes, I totally agree with your suggested method. An alternative would be to re-use only the model's binary file: once the model is created, we don't need to train it again, so we can skip both the training step and the files used for training.
What do you suggest? @amanjain97 @GMishx
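
To illustrate the re-use idea (a sketch only, assuming gensim's standard load API; the packaging details are exactly what is up for discussion here):

from gensim.models.doc2vec import Doc2Vec

# Only the shipped binary is needed at scan time; neither train.py nor the
# text/ dataset has to be present once the model exists.
model = Doc2Vec.load("spdxDoc2Vec.model")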

@hastagAB marked this pull request as draft on August 28, 2020 09:36
Labels
GSOC-19: label to tag pull requests which are part of the GSOC 2019 activities
2 participants