These are utility functions that make day-to-day jobs on the command line easier. All functions follow this format:
Action [Target] [Quantity] [Quality] [Destination]
- Action is any verb that defines the action to be executed, such as get, peek, or replace.
- Target can be any IT object, such as a file, memory, or a connection.
- Quantity can be any quantifiable output, such as 5.
- Quality can show, for example, the direction (back/forth) when applying a replacement.
- Destination can be any destination (such as a file) in which the pattern is searched.
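To make the grammar concrete, here is a purely hypothetical sketch of a function that follows it. Neither get_errors nor its arguments come from utility.sh; they are assumptions used only to illustrate the Action/Target/Quantity/Destination pattern:

```bash
# Hypothetical example only: get_errors is not taken from utility.sh.
# Action = get, Target = errors, Quantity = $1, Destination = $2.
get_errors() {
    local quantity="${1:-10}"   # Quantity: how many matching lines to show
    local destination="$2"      # Destination: the file in which the pattern is searched
    grep -i "error" -- "$destination" | tail -n "$quantity"
}

# Show the last 5 lines containing "error" in app.log:
# get_errors 5 app.log
```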
Vitali Avagyan: @vitali87
Put the utility.sh file into a location of your choice, e.g. into ~/Documents/utility/. Then add this line to your ~/.bashrc file:

source ~/Documents/utility/utility.sh
Note: if you are using the zsh shell, add these two lines (bash completion script compatibility mode) before sourcing the .sh file:
autoload bashcompinit
bashcompinit
source ~/Documents/utility/utility.sh
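After sourcing, you can verify that the functions were loaded. A minimal check, assuming utility.sh defines the extract function used in the usage example below:

```bash
# Reload the shell configuration (or simply open a new terminal)
source ~/.bashrc      # for zsh: source ~/.zshrc

# Confirm that a function from utility.sh is now available
type extract          # should report that extract is a shell function
```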
Install the required packages to run all functions:
make install
Navigate to the folder containing the archive file (thunderbird in this example), or alternatively specify the full path, and execute the following command:
extract thunderbird-91.3.0.tar.bz2
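For context, below is a minimal sketch of how an extract helper of this kind is commonly written in bash. It is not the actual implementation from utility.sh and covers only a few archive types; it is included purely to illustrate what the command does:

```bash
# Simplified illustration; the real extract function in utility.sh may differ
extract() {
    local archive="$1"
    if [ ! -f "$archive" ]; then
        echo "extract: '$archive' is not a file" >&2
        return 1
    fi
    case "$archive" in
        *.tar.bz2) tar xjf "$archive" ;;
        *.tar.gz)  tar xzf "$archive" ;;
        *.tar.xz)  tar xJf "$archive" ;;
        *.zip)     unzip "$archive" ;;
        *.gz)      gunzip "$archive" ;;
        *)         echo "extract: unknown archive type: $archive" >&2; return 1 ;;
    esac
}
```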
To run tests, run the following command:
TBC
It will fail with an informative message.
Here are some related projects:
Optimisations in the code are highly welcome:
- refactors
- performance improvements
- accessibility
👩‍💻 I'm currently working on: https://github.com/vitali87/python-CLI
🧠 I'm currently learning advanced OOP: https://github.com/vitali87/oop_book
👯‍♀️ I'm looking to collaborate on anything related to data science, Linux, Python, and the cloud
🤔 I'm looking for help with new ideas for automation
💬 Ask me about anything tech-related.
📫 How to reach me: eheva87@gmail.com
😄 Pronouns: he/his
⚡️ Fun fact...
Amazon Web Services (AWS), R, Python, MATLAB, MySQL, GNU/Linux, Apache Spark, AIMMS/GAMS, Kubernetes
Highly skilled Data Scientist/ML Engineer specialised in statistics, cloud/Big Data computing, optimisation, simulation, and software engineering.
Hands-on experience with many programming, modelling, and database languages including R, Python, MATLAB, SQL, and AIMMS/GAMS with their various accompanying packages. Proficient in data aggregation, visualization, statistical/machine learning, and optimisation techniques applied to the energy, finance, and supply chain/logistics sectors.
Well-versed with Linux (Kali) and cloud-based programming techniques in Amazon Web Services (AWS), distributed Machine Learning (Apache Spark), and Deep Learning (Keras/TF).
Excellent written and verbal communication skills. Strong interpersonal and transferable skills, demonstrated by the outcome and impact of past work presented both verbally and through infographics and presentations.
Contributions are always welcome!
See contributing.md for ways to get started.
Please adhere to this project's code of conduct.
Coming soon...
To run this project, you will need to add the following environment variables to your .env file.
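The exact variable names are not listed here, but once your .env file is in place, a common way to load it into the current shell session (assuming plain KEY=value lines with no spaces around =) is:

```bash
# Export every KEY=value pair defined in .env into the current shell
set -a        # automatically export all variables assigned from here on
source .env   # read the assignments
set +a        # stop auto-exporting
```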
If you have any feedback, please reach out to us at eheva87@gmail.com
- Additional utility functions
- More things to come
Clone the project
git clone https://github.com/vitali87/utility.git
Go to the project directory
cd utility
Install dependencies
make install
For support, email eheva87@gmail.com or send me a message on LinkedIn.
If you find this project helpful and would like to support my work, you can buy me a coffee: