AlieyaZawanie/Python-big-data

Python and Pandas are known to have issues around scalability and efficiency. You will learn how to use libraries such as Modin, Dask, Ray and Vaex to overcome the problems faced by Pandas.

Don't forget to hit the ⭐ if you like this repo.

About Us

The information in this GitHub repository is part of the materials for the subject High Performance Data Processing (SECP3133). This folder contains general big data information as well as big data case studies using Malaysian datasets. The case study was created by a Bachelor of Computer Science (Data Engineering) student at Universiti Teknologi Malaysia.

📚 Big data processing

Big data processing involves the systematic handling and analysis of vast and complex datasets that exceed the capabilities of traditional data processing methods. It encompasses the storage, retrieval, and manipulation of massive volumes of information to extract valuable insights. Key steps include data ingestion, where large datasets are collected from various sources, and preprocessing, involving cleaning and transformation to ensure data quality. Advanced analytics, machine learning, and data mining techniques are then applied to uncover patterns, trends, and correlations within the data. Big data processing is integral to informed decision-making, enabling organizations to derive meaningful conclusions from their data, optimize operations, and gain a competitive edge in today's data-driven landscape.
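To make these stages concrete, here is a minimal sketch of an ingest, clean, and analyze pipeline using Pandas; the sales.csv file and its region and amount columns are hypothetical placeholders, not part of this repository.

```python
import pandas as pd

# Ingestion: load raw data from a source (a hypothetical CSV file)
df = pd.read_csv("sales.csv")

# Preprocessing: clean and transform to ensure data quality
df = df.drop_duplicates()
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df.dropna(subset=["amount"])

# Analysis: uncover patterns, e.g. sales statistics per region
summary = df.groupby("region")["amount"].agg(["count", "mean", "sum"])
print(summary)
```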

Notes

Big Data: Pandas

Big Data processing with Pandas, a powerful Python library for data manipulation and analysis, involves strategies for handling large datasets efficiently. Scaling to sizable datasets means processing data in smaller pieces using the chunksize parameter of Pandas' read_csv function, which reads large files in manageable portions and prevents memory overload. To further reduce memory usage, specify memory-efficient data types (for example, category or float32 instead of the defaults) via the dtype parameter. Skipping unneeded rows with skiprows and loading only the required columns with usecols during import can also significantly improve performance. Mastering these strategies lets you manage and analyze vast datasets in Python with Pandas while keeping both computation time and memory usage under control.
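The following sketch combines chunksize, dtype, and usecols to aggregate a file too large to fit in memory; the file name and column names are hypothetical.

```python
import pandas as pd

# Memory-efficient dtypes for the columns we actually need
dtypes = {"region": "category", "amount": "float32"}

totals = {}
# Process the file in 100,000-row chunks instead of loading it all at once
for chunk in pd.read_csv("sales.csv",
                         usecols=["region", "amount"],
                         dtype=dtypes,
                         chunksize=100_000):
    # Aggregate each chunk, then combine the partial results
    partial = chunk.groupby("region", observed=True)["amount"].sum()
    for region, value in partial.items():
        totals[region] = totals.get(region, 0.0) + value

print(totals)
```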

Big Data: Alternatives to Pandas for Processing Large Datasets

• Modin

• Dask

• Datatable

Minimal usage sketches for each of these libraries are shown below.
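Each snippet below is an independent, illustrative sketch and assumes the corresponding library is installed; the large_dataset.csv file and its region and amount columns are hypothetical.

```python
# Modin: a drop-in replacement for Pandas that parallelises operations
# across CPU cores; typically only the import line changes.
import modin.pandas as mpd
df = mpd.read_csv("large_dataset.csv")

# Dask: builds lazy, partitioned dataframes; compute() triggers execution.
import dask.dataframe as dd
ddf = dd.read_csv("large_dataset.csv")
result = ddf.groupby("region")["amount"].sum().compute()

# Datatable: very fast fread() for loading, convertible back to Pandas.
import datatable as dt
frame = dt.fread("large_dataset.csv")
pdf = frame.to_pandas()
```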

🎖️ Comparison between libraries
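One simple way to compare such libraries is to time the same operation with each. The harness below is an illustrative sketch, not the benchmark used in the case studies; the benchmark file is hypothetical.

```python
import time
import pandas

def time_read(module, path):
    # Time a simple CSV load with whichever dataframe library is passed in
    start = time.perf_counter()
    module.read_csv(path)
    return time.perf_counter() - start

path = "large_dataset.csv"  # hypothetical benchmark file

print("pandas:", time_read(pandas, path))

import modin.pandas  # same call, swapping in Modin
print("modin :", time_read(modin.pandas, path))
```

Note that Dask's read_csv is lazy, so a fair comparison there would time the subsequent compute() call rather than the read itself.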

Big Data: Case study

Lab

• Pandas

• Modin

• Dask

• Comparison between libraries

Contribution 🛠️

Please create an Issue for any improvements, suggestions or errors in the content.

You can also reach me on LinkedIn for any other queries or feedback.
