5. Worker
With the help of the worker, the most important feature of the platform is implemented: distributed execution of program code. What is a worker? In general, it is a device connected to the local network and registered in Workty. More precisely, it is a single processor core on such a device. This means that when a device is registered in Workty, for example a Raspberry Pi 3 Model B with a quad-core ARM Cortex-A53 processor, the user gets access to 4 new workers. Note that the number of processor cores used is regulated only by you: you can add 2 cores or even 1 core (worker) to the database instead of all 4. Of course, the more workers are registered in the system, the more tasks you can perform in parallel. From the platform's point of view, the type of the connected device does not matter, that is, the system does not distinguish a worker based on a Raspberry Pi 3 Model B from a worker based on, say, a Banana Pi Pro. Let's call this property worker agnosticism.

With the worker figured out, let's talk about another important element of the platform: the coderunner. It runs the workty instance code on the worker. The coderunner itself is implemented in Node.js but can execute code in any programming language, i.e. the implementation of a workty is not limited to JavaScript; it could be written in Python, Ruby, or any other popular language. Important! As of version 1.0.0, only a coderunner capable of running JavaScript code is implemented. Now that we know about the worker and the coderunner, here is the list of processors and devices that have already been checked for compatibility:
- Raspberry Pi Model B CPU ARM1176JZ-F 700MHz
- Raspberry Pi 2 Model B CPU 900 MHz 32-bit quad-core ARM Cortex-A7
- Raspberry Pi 3 Model B CPU 1.2 GHz 64-bit quad-core ARM Cortex-A53
- Banana Pi Pro CPU A20 ARM Cortex-A7 Dual-Core
- Parallella CPU Dual-core 32-bit ARM Cortex-A9 with NEON at 1 GHz
- Odroid-C2 CPU 1.5 GHz quad-core ARM Cortex-A53 (ARMv8)
- CPU Intel Core i7-3820
The main idea: if you can run Node.js on the device, then you can also run the coderunner, which means the device is supported and compatible with the Workty platform.
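A quick way to verify this on a candidate device, assuming ssh access (the address below matches the inventory example later in this section):

```sh
# If Node.js and npm respond, the coderunner can run on this device as well
ssh pi@192.168.2.2 'node --version && npm --version'
```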
The recommended method for deploying a worker is a Docker image.
Before you begin, read the Database section. If you have already installed the database, you can proceed to install the worker. Here is the list of utilities needed to get started:
- NodeJS 4.x+ Required
- npmjs Required
- gruntjs Highly recommended
- Docker 1.6.x+ Highly recommended
  - lsb-release Required for Docker
  - python-pip Required for Docker
  - docker-py Required for Docker
- Ansible Optional
  - TC Ansible runner Required for Ansible
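On a Debian-based device such as Raspbian, the Docker-related prerequisites from the list above might be installed like this (a sketch; package names assume a Debian/Raspbian repository):

```sh
sudo apt-get update
# lsb-release and python-pip come from the distribution repository
sudo apt-get install -y lsb-release python-pip
# docker-py is required by Ansible's docker modules
sudo pip install docker-py
```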
The worker supports starting from a Docker image; the Dockerfile can be found in the worker/deployment/dockerfiles folder (Node.js version 4.4.0 is supported), and currently the arm6, arm7 and arm8 architectures are supported. Before building the Docker image, I recommend using the gulp script located at worker/deployment/gulpfile.js to create a zip archive with all the necessary files, including the Dockerfile. The command to run the gulp script:
```sh
gulp --color --gulpfile ./worker/deployment/gulpfile.js --build_version=1.0.0 --arch=arm7 --branch_type=dev --outputlocaldir=/opt/workty-worker-app
```
Let's look at the input parameters:
- build_version The build number. Required. Default value: 1.0.0
- arch The processor architecture type; arm6, arm7 and arm8 are supported. Required. Default value: arm7
- branch_type The branch name for the build type, dev or prod. Required. Default value: dev
- outputlocaldir The name of the output folder for the zip archive. Required
After a successful build (with the default values) you will find the zip archive at /opt/workty-worker-app/dev/arm7/1.0.0/workty-worker-app-dev-arm7-1.0.0.zip. Now you are ready to build the Docker image.
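A minimal build-and-publish sketch might look like this; the registry address 192.168.2.1:5000 and the image name are taken from the Ansible example below, so adjust them to your environment:

```sh
# Unpack the archive produced by the gulp script and build the image
unzip /opt/workty-worker-app/dev/arm7/1.0.0/workty-worker-app-dev-arm7-1.0.0.zip -d workty-worker-app
cd workty-worker-app
docker build -t 192.168.2.1:5000/pi/workty-worker-app-dev-arm7:1.0.0 .

# Publish the image to the private registry so the Ansible playbook can pull it
docker push 192.168.2.1:5000/pi/workty-worker-app-dev-arm7:1.0.0
```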
In this section, Ansible is used to automate the deployment of all parts of the system, in particular the worker. In my example the TeamCity + Ansible pair is used, so you need to download the TC Ansible runner plugin. All the necessary files are located in the worker/ansible folder; currently arm6 and arm7 are supported. Let's review them in detail, starting with the inventory file for arm7:
inventory
```ini
[nodes-arm7-stpete1]
192.168.2.2

[nodes-arm7-stpete1:vars]
ports_range=3000-3003

[all:vars]
ansible_connection=ssh
ansible_user=pi
ansible_ssh_pass=Pi
```
By default, the credentials pi/Pi are used for ssh access to the worker device 192.168.2.2 on which the worker will be installed. Create an account with these values and allow it to connect via ssh. Note that here we use the port range 3000-3003, i.e. all 4 cores of the device will be allocated as workers.
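On a stock Raspbian image the pi account usually already exists; on other systems a minimal setup sketch might be:

```sh
# Create the account expected by the inventory and set the matching password
sudo useradd -m pi
echo 'pi:Pi' | sudo chpasswd

# Make sure the ssh service is enabled and running
sudo systemctl enable ssh
sudo systemctl start ssh
```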
run-latest-docker-image.yml
```yaml
---
- hosts: all
  gather_facts: yes
  tasks:
    - name: Login private registry '{{ docker_private_registry_host }}'
      docker_login:
        registry: '{{ docker_private_registry_host }}'
        username: '{{ docker_private_registry_username }}'
        password: '{{ docker_private_registry_password }}'
        email: pi@workty.com
    - name: Find all running containers
      shell: docker ps | awk '{ print $0 }' | sed -n '2p' | grep -o "{{ docker_private_registry_imageprefixname }}-[^*]*" | awk '{ print $1 }'
      register: running_containers_names
    - name: Stop the running containers
      shell: docker stop {{ item }}
      with_items:
        - "{{ running_containers_names.stdout_lines }}"
      ignore_errors: True
    - name: Remove the containers
      shell: docker rm {{ item }}
      with_items:
        - "{{ running_containers_names.stdout_lines }}"
      ignore_errors: True
    - name: Run the new container '{{ docker_private_registry_imagename }}'
      docker:
        name: '{{ docker_private_registry_imageprefixname }}-{{ docker_private_registry_image_version }}'
        image: '{{ docker_private_registry_host }}/{{ docker_private_registry_username }}/{{ docker_private_registry_imagename }}'
        pull: always
        net: host
        state: reloaded
        restart_policy: always
        memory_limit: 0
        ports:
          - 3000:3000
          - 3001:3001
          - 3002:3002
          - 3003:3003
        volumes:
          - /mnt/workty:/mnt/workty
        env:
          SERVICE_TAGS: ["{{ ansible_default_ipv4.address }}", '{{ docker_private_registry_image_version }}', "{{ ansible_lsb }}"]
          SERVICE_NAME: "{{ ansible_hostname }}-{{ ansible_machine }}-{{ docker_private_registry_branch_type }}-{{ docker_private_registry_image_version }}"
```
As you can see, we also map 4 ports, one for each core. There is also a mapping for the local folder /mnt/workty: all workers use the NFS file system to store the files they need at run time. The playbook uses a private Docker registry to store Docker images. The list of all variables enclosed in {{ }} is passed to the ansible-playbook script via the command line:
```sh
-vvvv --extra-vars "{'docker_private_registry_host':'192.168.2.1:5000', 'docker_private_registry_username':'pi', 'docker_private_registry_password':'pi', 'docker_private_registry_imagename':'workty-worker-app-dev-arm7:1.0.0', 'docker_private_registry_imageprefixname': 'workty-worker-app-dev-arm7', 'docker_private_registry_branch_type':'dev', 'docker_private_registry_image_version':'1.0.0'}"
```
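For a manual run outside TeamCity, the full invocation might look like this (a sketch assuming the command is run from the repository root, with the inventory and playbook files shown above):

```sh
ansible-playbook -i worker/ansible/inventory worker/ansible/run-latest-docker-image.yml -vvvv \
  --extra-vars "{'docker_private_registry_host':'192.168.2.1:5000', 'docker_private_registry_username':'pi', 'docker_private_registry_password':'pi', 'docker_private_registry_imagename':'workty-worker-app-dev-arm7:1.0.0', 'docker_private_registry_imageprefixname':'workty-worker-app-dev-arm7', 'docker_private_registry_branch_type':'dev', 'docker_private_registry_image_version':'1.0.0'}"
```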
If necessary, you can parameterize these values in TeamCity via the Build Configuration Settings -> Parameters tab.
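One more practical detail: the /mnt/workty volume mapping in the playbook assumes the NFS share is already mounted on each worker device. A minimal sketch, assuming a hypothetical NFS server at 192.168.2.1 exporting /mnt/workty:

```sh
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/workty
# Mount the shared folder; add an /etc/fstab entry to make it permanent
sudo mount -t nfs 192.168.2.1:/mnt/workty /mnt/workty
```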