- Table of Contents
- Overview
- Features
- Repository Structure
- Modules
- Getting Started
- Roadmap
- Contributing
- License
- Acknowledgments
Shellbox is a versatile software development utility that bundles robust script orchestration features to improve efficiency and standardization across projects. It provides facilities for Dockerized application builds, PyPI uploads, and environment setup with Micromamba and PyFlink. Shellbox also covers routine project maintenance with cleaning, testing, and running scripts, and streamlines filesystem operations such as filename normalization and directory moves. Additionally, it includes an intuitive template for bootstrapping Python projects. In short, it encapsulates essential development operations in a set of focused, value-added scripts.
| | Feature | Description |
|---|---|---|
| ⚙️ | Architecture | A collection of automation and management scripts targeting Linux Bash environments, covering script packaging, software installation, maintenance, testing tooling, and file manipulation. |
| 📄 | Documentation | The codebase lacks comments and per-script README explanations of purpose and workflow, falling short of best practices for rich in-code documentation. |
| 🔗 | Dependencies | Most scripts are standalone and rely on standard Bash/shell tools. Specific scripts depend on Docker, DeepSource, and Python tooling (pip, PyFlink). |
| 🧩 | Modularity | The codebase is organized into distinct directories: builds, common, files, install, and templates. Each script performs a single dedicated task, strongly supporting the modularity principle. |
| 🧪 | Testing | Test management for a Python project is encapsulated in test.sh, which leverages pytest and coverage tooling. There are no tests for the shell scripts themselves. |
| ⚡️ | Performance | The scripts perform well in a Linux Bash environment; micromamba.sh and pyflink.sh in particular streamline installations. |
| 🛡️ | Security | No explicit security measures are in place; safe execution relies on the user's awareness and correct file permissions. |
| 🔀 | Version Control | Not applicable to individual scripts; in the broader sense, version control is handled by Git, as is standard for GitHub repositories. |
| 🔌 | Integrations | Strong ties to Python tooling (pip, PyFlink), packaging (Docker), and the DeepSource static code analysis platform. |
| 📶 | Scalability | Scale-friendly: each script independently serves a single dedicated use case, from scaffolding a Python project structure to supporting Docker deployments. |
```sh
└── shellbox/
    ├── .deepsource.toml
    ├── builds/
    │   ├── docker.sh
    │   └── pypi.sh
    ├── common/
    │   ├── clean.sh
    │   ├── run.sh
    │   └── test.sh
    ├── files/
    │   ├── modify_filenames.sh
    │   └── move_directory.sh
    ├── install/
    │   ├── micromamba.sh
    │   └── pyflink.sh
    └── templates/
        └── create_py_project.sh
```
Root
| File | Summary |
|---|---|
| .deepsource.toml | Configures DeepSource static code analysis for the repository's shell scripts. The surrounding root directory houses the script collection itself: Docker and PyPI build helpers, clean/run/test maintenance scripts, filesystem utilities, Micromamba and PyFlink installers, and a Python project template. |
Install
| File | Summary |
|---|---|
| micromamba.sh | Installs the latest version of Micromamba for Linux or macOS. It detects the OS, downloads Micromamba, grants execute permission, and relocates the binary for global access. It then initializes Micromamba, configures conda-forge as the default channel, and sets channel priority to strict, finishing with a completion message. |
| pyflink.sh | Automates environment setup for PyFlink. It checks for and installs Java 11 and Python 3.7 if they're not present, downloads and extracts PyFlink from its official source, moves it to the pyflink directory, and sets the necessary environment variables. It also defines related aliases for zsh, leaving the shell ready for PyFlink development. |
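The install flow described for micromamba.sh can be sketched as follows. This is a hypothetical reconstruction based only on the summary above: the `detect_platform` helper, download URL, and install paths are assumptions, not the script's actual contents.

```sh
#!/usr/bin/env bash
# Hypothetical sketch of install/micromamba.sh; URL and paths are illustrative.
set -euo pipefail

detect_platform() {
  # Map uname output to the platform identifiers used by micromamba releases.
  case "$(uname)" in
    Linux)  echo "linux-64" ;;
    Darwin) echo "osx-64" ;;
    *)      echo "unsupported OS" >&2; return 1 ;;
  esac
}

install_micromamba() {
  local platform url
  platform="$(detect_platform)"
  url="https://micro.mamba.pm/api/micromamba/${platform}/latest"
  curl -Ls "$url" | tar -xvj bin/micromamba   # download and extract the binary
  chmod +x bin/micromamba                     # assign execution permission
  sudo mv bin/micromamba /usr/local/bin/      # relocate for global access
  micromamba shell init -s bash               # initialize shell integration
  micromamba config append channels conda-forge
  micromamba config set channel_priority strict
  echo "Micromamba installation complete."
}
```

Only the function definitions are shown; the real script presumably runs the install unconditionally.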
Builds
| File | Summary |
|---|---|
| pypi.sh | Cleans previous Python package artifacts, builds a new package, and deploys it to PyPI (the Python Package Index). The deployment settings include the package name, repository URL, username, and API key for the upload; a success message is printed once the upload completes. |
| docker.sh | Simplifies Docker image management. It first composes a FULL_IMAGE_NAME variable from user-defined components and creates a Docker Buildx builder, then calls three primary functions: build_image builds an image from the local context, publish_image pushes the new image to a Docker registry, and buildx_image constructs multi-platform images. The script finishes by echoing a completion message with the full image name. |
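The shape of docker.sh, as described above, might look like the sketch below. The registry, username, image name, and platform list are placeholder assumptions; only the three function names and the FULL_IMAGE_NAME composition come from the summary.

```sh
#!/usr/bin/env bash
# Hypothetical sketch of builds/docker.sh; all values are placeholders.
set -euo pipefail

REGISTRY="${REGISTRY:-docker.io}"
USERNAME="${USERNAME:-example-user}"
IMAGE_NAME="${IMAGE_NAME:-shellbox}"
TAG="${TAG:-latest}"
FULL_IMAGE_NAME="${REGISTRY}/${USERNAME}/${IMAGE_NAME}:${TAG}"

build_image() {
  docker build -t "$FULL_IMAGE_NAME" .   # build from the local context
}

publish_image() {
  docker push "$FULL_IMAGE_NAME"         # publish to the Docker registry
}

buildx_image() {
  docker buildx create --use --name multiarch 2>/dev/null || true
  docker buildx build --platform linux/amd64,linux/arm64 \
    -t "$FULL_IMAGE_NAME" --push .       # multi-platform build and push
}

echo "Done: ${FULL_IMAGE_NAME}"
```

The functions are defined but not invoked here, so the sketch can be sourced without Docker installed.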
Common
| File | Summary |
|---|---|
| run.sh | Activates the Conda environment my_env and upgrades pip within it. It sets strict Bash options to stop on errors and propagate failures through pipelines, and prints the start and completion times of these operations. |
| clean.sh | A maintenance script that removes build, test, and temporary files from a Python project. Dedicated functions target distinct artifacts: build output, Python file artifacts, test and coverage output, and backup and Python cache files. The argument passed determines which function runs, keeping the working environment tidy. |
| test.sh | Activates the readmeai Conda environment and uses the coverage utility to run pytest against the readmeai project's source code, ignoring `__init__.py` files and test directories. The coverage report shows missed lines, and execution fails if coverage dips below 90%. |
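The argument-based dispatch described for clean.sh can be sketched as below. The function names and removal targets are illustrative assumptions consistent with the summary, not the script's actual code.

```sh
#!/usr/bin/env bash
# Hypothetical sketch of the argument dispatch in common/clean.sh.
set -euo pipefail

clean_build()  { rm -rf build/ dist/ ./*.egg-info; }         # build artifacts
clean_pyc() {                                                # Python cache artifacts
  find . -name '*.pyc' -delete
  find . -type d -name '__pycache__' -prune -exec rm -rf {} +
}
clean_test()   { rm -rf .pytest_cache/ .coverage htmlcov/; } # test and coverage artifacts
clean_backup() { find . -name '*~' -delete; }                # editor backup files

case "${1:-}" in
  build)  clean_build ;;
  pyc)    clean_pyc ;;
  test)   clean_test ;;
  backup) clean_backup ;;
  all)    clean_build; clean_pyc; clean_test; clean_backup ;;
  "")     ;;  # no argument given: nothing to do in this sketch
  *)      echo "Usage: $0 {build|pyc|test|backup|all}" >&2; exit 1 ;;
esac
```

Dispatching on `"$1"` keeps each cleanup concern in its own function, matching the one-task-per-script style of the rest of the repository.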
Files
| File | Summary |
|---|---|
| modify_filenames.sh | Finds files within the specified directory (/GitHub/readme-ai/docs) and normalizes their filenames by converting characters to lowercase and replacing underscores with hyphens. Each rename is printed for user confirmation; if the directory is not found, the script simply exits. |
| move_directory.sh | Moves a directory from a specified source location to a destination on the system. The script first checks that both source and destination directories exist; if so, it performs the move, otherwise it logs a warning message. |
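The renaming logic summarized for modify_filenames.sh can be sketched like this. The `rename_files` function is a hypothetical reconstruction; only the two transformations (lowercasing and underscore-to-hyphen) and the missing-directory exit come from the summary.

```sh
#!/usr/bin/env bash
# Hypothetical sketch of the renaming logic in files/modify_filenames.sh.
set -euo pipefail

rename_files() {
  local dir="$1"
  # Exit quietly if the target directory does not exist.
  [ -d "$dir" ] || { echo "Directory not found: $dir" >&2; return 0; }
  local f base new
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    base="$(basename "$f")"
    # Lowercase the name, then replace underscores with hyphens.
    new="$(echo "$base" | tr '[:upper:]' '[:lower:]' | tr '_' '-')"
    if [ "$base" != "$new" ]; then
      mv "$f" "$dir/$new"
      echo "Renamed: $base -> $new"   # print the change for user confirmation
    fi
  done
}
```

For example, `rename_files docs` would turn `My_Page.MD` into `my-page.md` and report the change.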
Templates
| File | Summary |
|---|---|
| create_py_project.sh | Automates setup of a new Python project. It creates the required directory structure (conf, scripts, setup, etc.) and initial files with boilerplate code (logger.py, conf.py, etc.), wiring up logging, command-line argument parsing, configuration, and a testing setup. It also prepares the project for Docker deployment and integration, generates the necessary configuration files, and adds an MIT license file, a configurable .gitignore, and a Makefile with commonly used routines. |
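The scaffolding step of create_py_project.sh might look like the sketch below. Directory names follow the summary above; the stub file contents and the exact layout are illustrative assumptions.

```sh
#!/usr/bin/env bash
# Hypothetical sketch of the scaffolding in templates/create_py_project.sh.
set -euo pipefail

create_py_project() {
  local project="$1"
  mkdir -p "$project"/{conf,scripts,setup,src,tests}  # required directory structure
  touch "$project"/src/{logger.py,conf.py}            # initial source files
  touch "$project"/{Makefile,Dockerfile,.gitignore}   # build, Docker, and VCS config
  printf '# %s\n' "$(basename "$project")" > "$project/README.md"
  echo "Created project skeleton: $project"
}
```

Usage would be along the lines of `create_py_project my_app`, after which the generated Makefile and Dockerfile are filled in with boilerplate by the real script.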
Contributions are welcome! Here are several ways you can contribute:
- Submit Pull Requests: Review open PRs, and submit your own PRs.
- Join the Discussions: Share your insights, provide feedback, or ask questions.
- Report Issues: Submit bugs found or log feature requests for ELI64S.
- Fork the Repository: Start by forking the project repository to your GitHub account.
- Clone Locally: Clone the forked repository to your local machine using a Git client.
  ```sh
  git clone <your-forked-repo-url>
  ```
- Create a New Branch: Always work on a new branch, giving it a descriptive name.
  ```sh
  git checkout -b new-feature-x
  ```
- Make Your Changes: Develop and test your changes locally.
- Commit Your Changes: Commit with a clear and concise message describing your updates.
  ```sh
  git commit -m 'Implemented new feature x.'
  ```
- Push to GitHub: Push the changes to your forked repository.
  ```sh
  git push origin new-feature-x
  ```
- Submit a Pull Request: Create a PR against the original project repository. Clearly describe the changes and their motivations.
Once your PR is reviewed and approved, it will be merged into the main branch.
This project is licensed under the Apache-2.0 License. For more details, refer to the Apache-2.0 License file.