A web application designed to help users identify the most in-demand skills, education, and experience requirements for various job titles. This project leverages a mix of modern technologies including Next.js, React, TypeScript, .NET, Python, and PostgreSQL. It scrapes job postings from Indeed, processes the data, and presents insights through a dynamic, user-friendly interface.
Features:
Job Market Trend Analysis:
Allows users to search for job titles and discover trending skills, educational qualifications, and experience levels.
Customizable Filters:
Users can filter results based on recency, company, location, and job level.
Dynamic SQL Queries:
Backend built with .NET, dynamically generating SQL queries to fetch data from PostgreSQL.
Ethical Data Scraping:
Python/Scrapy web scraper collects data from Indeed while adhering to ethical scraping practices (a sketch of typical settings follows this list).
Machine Learning Integration (Planned):
Future enhancements to include ML algorithms for standardized data extraction from varied job posting formats.
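As a rough illustration of what ethical scraping means in practice, a Scrapy project controls its crawling behavior through settings like the ones below. This is a minimal sketch with assumed values, not the project's actual settings.py:

# settings.py (illustrative values, not the project's actual configuration)
ROBOTSTXT_OBEY = True                # respect the site's robots.txt rules
DOWNLOAD_DELAY = 2                   # pause between requests to limit server load
CONCURRENT_REQUESTS_PER_DOMAIN = 1   # avoid parallel hits on the same domain
AUTOTHROTTLE_ENABLED = True          # back off automatically if responses slow down
USER_AGENT = "skills-scope-bot (contact: you@example.com)"  # identify the crawler honestly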
Tech Stack:
Frontend:
Next.js, React, TypeScript
Backend:
.NET Web API
Data Scraping:
Python, Scrapy
Database:
PostgreSQL
Deployment:
Azure Static Web Apps (Frontend), Azure App Service (Backend), Azure Database for PostgreSQL Flexible Server (Database)
Getting Started:
Frontend:
- Navigate to the frontend directory:
cd skills-scope
- Install and update the dependencies:
npm install
npm update
- Start the local development server:
npm run dev
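Note that npm requires Node.js. If the commands above fail, first check that Node.js is installed (the minimum version required depends on the Next.js release in use):
node --version
If it is not installed, download Node.js from https://nodejs.org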
Backend:
- Navigate to the backend directory from the root:
cd skills-scope-backend
- Check .NET 8.0 SDK installation:
dotnet --version
- If it is not installed, download the .NET SDK from https://dotnet.microsoft.com/download
- Compile the API and restore its dependencies:
dotnet build
- Start the backend server:
dotnet run
Web Scraper:
- Navigate to the web scraper from the root:
cd skills-scope-data/job_listing_scraper
- Check Python 3.8+ installation:
python --version
- If it is not installed, download Python from https://www.python.org/downloads/
- Create a virtual environment:
python -m venv venv
- This keeps the project's dependencies isolated from other Python projects
- Activate the virtual environment:
- Windows:
.\venv\Scripts\activate
- macOS/Linux:
source venv/bin/activate
- Install Scrapy:
pip install scrapy
- Run a scraper:
scrapy crawl [spider_name]
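If the scraper directory includes a requirements.txt file, installing from it instead picks up any additional dependencies:
pip install -r requirements.txt

The spider name passed to scrapy crawl is the name attribute defined in a spider class under the project's spiders package. Below is a minimal sketch of what such a spider looks like; the spider name, URL, and CSS selectors are placeholders, not the project's actual code:

import scrapy

class JobListingSpider(scrapy.Spider):
    # "scrapy crawl job_listings" would run this spider
    name = "job_listings"
    start_urls = ["https://example.com/jobs?q=software+engineer"]

    def parse(self, response):
        # Yield one item per posting; selectors depend on the page markup
        for posting in response.css("div.job-card"):
            yield {
                "title": posting.css("h2::text").get(),
                "company": posting.css(".company::text").get(),
                "location": posting.css(".location::text").get(),
            }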
Database:
- Install PostgreSQL
- Create a local database
- Update the connection string in appsettings.Development.json to match your local database settings.
- Run the SQL scripts provided in skills-scope-backend/Data/SqlScripts using the PostgreSQL command line or a database management tool.
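For reference, an Npgsql-style connection string for a local setup typically looks like the following; the database name, user, and password are placeholders for whatever you created locally:
Host=localhost;Port=5432;Database=skillsscope;Username=postgres;Password=your_password

The scripts can be run from the command line with psql, substituting your own user, database name, and the actual script file names:
psql -U postgres -d skillsscope -f skills-scope-backend/Data/SqlScripts/[script].sql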
Simple and Intuitive Interface:
SkillsScope is designed with a user-friendly interface, making it accessible for both technical and non-technical users. Follow these easy steps to navigate and utilize the application:
Accessing the Application:
Open your web browser and go to skillsscope.com to access the production version of SkillsScope, or use http://localhost:3000 for the development version after starting the local development server as outlined in the Getting Started section.
Conducting a Search:
Enter a job title in the search bar to view the most in-demand skills and requirements.
Applying Filters:
Optionally, use the filters to narrow results by recency, company, location, and job level.
Viewing Results:
The results are displayed in a horizontal bar chart, where the percentage for each requirement represents how frequently it appears in job postings for the searched job title. For example, if 600 of 1,000 matching postings mention SQL, SQL is shown at 60%. Use the tabs to switch between results for skills, education, and experience.
Troubleshooting:
Errors related to npm or webpack can typically be resolved with the following steps:
- Delete the node_modules folder:
rm -rf node_modules
- Delete the package-lock.json file:
rm -f package-lock.json
- Clean the npm cache:
npm cache clean --force
- Clean the Next.js cache:
rm -rf .next/cache
- Reinstall all packages:
npm install
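On a Unix-like shell, these steps can also be combined into a single command:
rm -rf node_modules package-lock.json .next/cache && npm cache clean --force && npm install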
© 2024 Jacob Kerames