Don't let your resume be a roadblock to landing your next job. Use Resume Matcher!
The Resume Matcher takes your resume and job descriptions as input, parses them using Python, and mimics the functionalities of an ATS, providing you with insights and suggestions to make your resume ATS-friendly.
The process is as follows:
- **Parsing:** The system uses Python to parse both your resume and the provided job description, just like an ATS would.
- **Keyword Extraction:** The tool uses machine learning algorithms to extract the most relevant keywords from the job description. These keywords represent the skills, qualifications, and experiences the employer seeks.
- **Key Terms Extraction:** Beyond keyword extraction, the tool uses textacy to identify the main key terms or themes in the job description. This step helps in understanding the broader context of what the job description is about.
- **Vector Similarity Using FastEmbed:** The tool uses FastEmbed, a highly efficient embedding system, to measure how closely your resume matches the job description. The more similar they are, the higher the likelihood that your resume will pass the ATS screening.
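The matching step above can be sketched in miniature. The snippet below is a simplified, standard-library illustration of vector similarity over bag-of-words counts; the real tool uses FastEmbed's neural sentence embeddings, so treat this only as a conceptual sketch, not the project's actual scoring code.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" for illustration only; Resume Matcher
    # itself uses FastEmbed's neural sentence embeddings.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

resume = "python developer with machine learning experience"
job = "looking for a python developer experienced in machine learning"
score = cosine_similarity(embed(resume), embed(job))
print(f"match score: {score:.2f}")  # → match score: 0.54
```

The higher the score, the more the resume's vocabulary overlaps with the job description, which is the same intuition the embedding-based comparison applies at the semantic level.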
Follow these steps to set up the environment and run the application.
- Fork the repository here.

- Clone the forked repository:

  ```bash
  git clone https://github.com/<YOUR-USERNAME>/Resume-Matcher.git
  cd Resume-Matcher
  ```
- Create a Python Virtual Environment:

  - Using virtualenv:

    Note: Check how to install virtualenv on your system here.

    ```bash
    virtualenv env
    ```

    OR

  - Using the built-in venv module:

    ```bash
    python -m venv env
    ```
- Activate the Virtual Environment:

  - On Windows:

    ```bash
    env\Scripts\activate
    ```

  - On macOS and Linux:

    ```bash
    source env/bin/activate
    ```
OPTIONAL (For pyenv users)

Run the application with pyenv (refer to this article):

- Install build dependencies (on Ubuntu):

  ```bash
  sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev openssl
  sudo apt-get install -y python-tk python3-tk tk-dev
  ```

- Install pyenv:

  ```bash
  curl https://pyenv.run | bash
  ```

- Install the desired Python version:

  ```bash
  pyenv install -v 3.11.0
  ```

- Create a virtual environment with pyenv:

  ```bash
  pyenv virtualenv 3.11.0 venv
  ```

- Activate the virtual environment with pyenv:

  ```bash
  pyenv activate venv
  ```
- Install Dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Prepare Data:

  - Resumes: Place your resumes in PDF format in the `Data/Resumes` folder. Remove any existing contents in this folder.
  - Job Descriptions: Place your job descriptions in PDF format in the `Data/JobDescription` folder. Remove any existing contents in this folder.
- Parse Resumes to JSON:

  ```bash
  python run_first.py
  ```
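Roughly speaking, this step turns each PDF in `Data/Resumes` into a JSON file. The sketch below is a hypothetical, standard-library-only illustration of that shape; the actual script first extracts text from the PDFs with a PDF parsing library, and the field names here are assumptions for illustration, not the real schema.

```python
import json
from pathlib import Path

def save_parsed_resume(text: str, source: Path, out_dir: Path) -> Path:
    # Hypothetical sketch of a resume-to-JSON step: in the real project,
    # `text` would come from a PDF text-extraction library.
    parsed = {
        "source_file": source.name,          # which PDF this came from
        "clean_data": text,                  # assumed field name
        "keywords": text.lower().split(),    # naive tokenization, for illustration
    }
    out_path = out_dir / f"{source.stem}.json"
    out_path.write_text(json.dumps(parsed, indent=2))
    return out_path
```

For example, `save_parsed_resume("Python developer", Path("jane_doe.pdf"), Path("Data/Processed"))` would write `Data/Processed/jane_doe.json` with those fields.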
- Run the Application:

  ```bash
  streamlit run streamlit_app.py
  ```
Note: For local runs, you do not need to run `streamlit_second.py`; it is specifically for deploying to Streamlit servers.
Additional Note: The Vector Similarity part is precomputed to optimize performance, because sentence encoders are resource-intensive and require significant GPU and RAM. If you are interested in leveraging this feature in a Google Colab environment for free, refer to the upcoming blog (link to be provided) for further guidance.
- Build the image and start the application:

  ```bash
  docker-compose up
  ```

- Open `localhost:80` in your browser.
The full-stack web application (a Next.js/React frontend with a FastAPI backend) allows users to interact with the Resume Matcher tool via a web browser.
Warning
The results returned from the web app are currently entirely mocked/faked: they are not real and are for demonstration purposes only. Real results will be implemented in a future release.
To run the full stack web application (frontend client and backend api servers), follow the instructions over on the webapp README file.
- Create an account on ngrok and get your token.
- Go to `archive/resume_matcher_colab.ipynb`.
- Enter your ngrok token and run the notebook.
- Copy the URL and open it in your browser.
- Visit the Cohere registration page and create an account.
- Go to API keys and copy your Cohere API key.
- Visit the Qdrant website and create an account.
- Get your API key and cluster URL.
- Open the dashboard in Qdrant and enter your API key (needed only the first time).
Note: Please make sure that the qdrant-client version is higher than v1.1.
Note: This part needs updating with respect to the new FastEmbed changes.
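One way to check the version note above from Python, sketched with the standard library. The package name and the v1.1 floor come from the note; the helper itself is a hypothetical convenience, not part of the project:

```python
from importlib.metadata import version, PackageNotFoundError

def qdrant_client_at_least(minimum=(1, 1)):
    # Returns True if qdrant-client is installed at a version >= `minimum`
    # (the note above requires higher than v1.1), False otherwise.
    try:
        installed = version("qdrant-client")
    except PackageNotFoundError:
        return False
    major_minor = tuple(int(p) for p in installed.split(".")[:2])
    return major_minor >= minimum
```

Alternatively, `pip show qdrant-client` in the terminal reports the installed version directly.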
This project uses Black for code formatting. We believe this helps to keep the code base consistent and reduces the cognitive load when reading code.
Before submitting your pull request, please make sure your changes are in accordance with the Black style guide. You can format your code by running the following command in your terminal:

```bash
black .
```
We also use pre-commit to automatically check for common issues before commits are submitted. This includes checks for code formatting with Black.
If you haven't already, please install the pre-commit hooks by running the following command in your terminal:
```bash
pip install pre-commit
pre-commit install
```
Now, the pre-commit hooks will automatically run every time you commit your changes. If any of the hooks fail, the commit will be aborted.
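For reference, a minimal `.pre-commit-config.yaml` wiring in Black might look like the fragment below. The `rev` pin is illustrative and this repository's committed config may include additional hooks, so check the actual file rather than copying this verbatim:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 23.3.0  # illustrative pin; use the rev committed in this repo
    hooks:
      - id: black
```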
Pull Requests & Issues are not just welcomed, they're celebrated! Let's create together.
- Join our lively Discord community and discuss away!
- Spot a problem? Create an issue!
- Dive in and help resolve existing issues.
- Share your thoughts in our Discussions & Announcements.
- Explore and improve our Landing Page. PRs always welcome!
- Contribute to the Resume Matcher Docs and help people get started with using the software.
Your support means the world to us. We're nurturing this project with an open-source community spirit, and we have an ambitious roadmap ahead! Here are some ways you could contribute and make a significant impact:
- Transform our Streamlit dashboard into something more robust.
- Improve our parsing algorithm, making data more accessible.
- Share your insights and experiences in a blog post to help others.
Take the leap, contribute, and let's grow together!