Development
Sirepo is fully open source, as are most of its codes. We are happy to support you, just submit an issue if you have questions.
Sirepo runs on Linux. We use Vagrant and VirtualBox. You will need to install Vagrant and VirtualBox manually before doing anything below.
We deploy using Docker.
We rely heavily on a simple curl installer structure in our download repo. If you are not comfortable with curl installers, please feel free to follow the installer scripts mentioned below.
If you use a Mac, read on. Otherwise, skip to PC Install. We use Macs, so they are the best supported.
Once Vagrant is installed, run the vagrant-sirepo-dev installer on your Mac:
```shell
mkdir v
cd v
curl https://radia.run | vagrant_dev_no_nfs_src=1 vagrant_dev_no_mounts=1 bash -s vagrant-sirepo-dev
vagrant ssh
```
The directory must be named `v`, which is used as the basis for the hostname `v.radia.run`. The rest of this page assumes `v.radia.run` is the hostname.
The `vagrant_dev_no_nfs_src=1` turns off sharing `~/src` between the host (Mac) and guest (VM). This depends on how you develop. If you would like to use an IDE like PyCharm, you might want to share `~/src` with the VM. This way you can edit files locally on your Mac. In this case, you would use the command:

```shell
curl https://radia.run | bash -s vagrant-sirepo-dev
```
If you do this, you may want a symlink on your Mac from `/home/vagrant` to `/Users/<your-user>` so that you can directly reference file names in error messages output by Sirepo. Make sure `/home` on your Mac is `chmod 755`.
The host defaults to `v.radia.run` (IP 10.10.10.10). You can also specify a different host as an argument to `vagrant-sirepo-dev`, e.g.

```shell
curl https://radia.run | bash -s vagrant-sirepo-dev v3.radia.run
```

The host must be of the form `v[1-9].radia.run`.
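The required form can be checked with a simple pattern match. This is only an illustrative sketch (the installer does its own validation):

```shell
# Hypothetical check that a hostname matches the required v[1-9].radia.run form
host=v3.radia.run
if [[ $host =~ ^v[1-9]\.radia\.run$ ]]; then
    echo "ok: $host"
else
    echo "invalid: $host"
fi
```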
Next step: Simple Server Execution.
You can develop on Windows or Linux with Vagrant. You just have to run the install manually.
Linux Note: Always use the repos configured by vagrantup.com and virtualbox.org, and not the default that comes with your distro. We know for sure that Ubuntu's VirtualBox doesn't work properly.
Once you have installed VirtualBox and Vagrant, create a directory, and use this Vagrantfile:
```ruby
# -*-ruby-*-
Vagrant.configure("2") do |config|
  config.vm.box = "generic/fedora36"
  config.vm.hostname = "v.radia.run"
  config.vm.network "private_network", ip: "10.10.10.10"
  config.vm.provider "virtualbox" do |v|
    v.customize ["guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 5000]
    # https://stackoverflow.com/a/36959857/3075806
    v.customize ["setextradata", :id, "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled", "0"]
    # If you see network restart or performance issues, try this:
    # https://github.com/mitchellh/vagrant/issues/8373
    # v.customize ["modifyvm", :id, "--nictype1", "virtio"]
    #
    # Needed for compiling some of the larger codes
    v.memory = 8192
    v.cpus = 4
  end
  config.ssh.forward_x11 = false
  # https://stackoverflow.com/a/33137719/3075806
  # Undo mapping of hostname to 127.0.0.1
  config.vm.provision "shell",
    inline: "sed -i '/127.0.0.1.*v.radia.run/d' /etc/hosts"
end
```
Then install the vbguest plugin:

```shell
> vagrant plugin install vagrant-vbguest
```

This will make sure the time on the machine stays up to date, and also allows you to mount directories from the host. Once the plugin is installed, run:

```shell
> vagrant up
```

Once booted:

```shell
> vagrant ssh
```

And inside the guest VM run the redhat-dev installer:

```shell
$ curl https://radia.run | bash -s redhat-dev
$ exit
```
This sets up a lot of environment, so logging out is a good idea. Then log in again and run the sirepo-dev installer:

```shell
$ vagrant ssh
$ curl https://radia.run | bash -s sirepo-dev
$ exit
```
This installs all the codes used by Sirepo. It's fully automatic, so go have lunch and it will be done. Make sure you `exit`, because you will need to refresh your login environment.
Next step: Simple Server Execution.
Once installed by one of the methods above, you will have a Sirepo development environment. To run Sirepo locally, run:

```shell
$ cd ~/src/radiasoft/sirepo
$ sirepo service http
```
For different modes of simple server execution, see etc/run.sh.
Navigate to v.radia.run:8080 (note: 8080, not the normal 8000) to access Sirepo.
Vagrant sets up a private network, so you can access the server at http://v.radia.run:8000. However, some networks block resolution of private internet addresses, so you may have to visit http://10.10.10.10:8000 instead (this is the case, for example, on Macs with no active internet connection).
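If name resolution is what's blocked, one workaround is to pin the name locally. The snippet below demonstrates the entry format on a scratch file; on a real host you would append the line to /etc/hosts with sudo:

```shell
# The /etc/hosts entry mapping the private dev IP to the hostname
hosts_line='10.10.10.10 v.radia.run'
# Demonstrate on a scratch file; for real use:
#   echo "$hosts_line" | sudo tee -a /etc/hosts
tmp=$(mktemp)
echo "$hosts_line" >> "$tmp"
grep 'v\.radia\.run' "$tmp"
rm -f "$tmp"
```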
The `sirepo service http` setup is used for basic application development using the local job driver. However, you may want to use the `sbatch` or `docker` drivers for multi-node execution environments.
Job execution is handled by four process components:

- Tornado API server (`sirepo service server`), which receives messages from the GUI
- Tornado supervisor (`sirepo job_supervisor`), which brokers messages between the server and agents
- Tornado agents (`sirepo job_agent`), started by job drivers, which allow the supervisor to execute jobs in different environments
- Command line job commands (`sirepo job_cmd`), started by agents, which run the template functions in the execution environment
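Once a server is running, you can see the first three components with `pgrep`; `job_cmd` processes appear only while a simulation is executing. This is just an inspection aid, not part of the setup, and the fallback message is our addition:

```shell
# List sirepo server/supervisor/agent processes, if any are running
pgrep -af 'sirepo (service|job_)' || echo "no sirepo processes running"
```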
The job supervisor environment can support executing codes in a local process, on Docker-enabled nodes, on Slurm clusters, and at NERSC (assuming the user has credentials for accessing NERSC).
We can configure three of these environments for development automatically.
The Tornado API server and job supervisor can be started with this command:

```shell
sirepo service http
```

This is sufficient for single-node execution in development or in a private network environment. Do not run the local driver in public environments or where security is a concern.
To enable Docker execution, you will need to install Docker on your VM:

```shell
sudo su - -c 'radia_run redhat-docker'
```
This will require a reboot and a logout/login. Once you have Docker set up, start the server:

```shell
bash etc/run-server.sh docker
```

and the job supervisor using the docker job driver:

```shell
bash etc/run-supervisor.sh docker
```
You can run Slurm jobs locally, too, but you need to install Slurm:

```shell
radia_run slurm-dev
```

The server can be started with:

```shell
bash etc/run-server.sh sbatch
```

Start the supervisor:

```shell
bash etc/run-supervisor.sh sbatch
```
If you want to run sbatch on another node, you can configure that (on a Mac or Linux host), e.g. create a VM called `v8.radia.run`:

```shell
mkdir ~/v8
cd ~/v8
radia_run vagrant-sirepo-dev
vssh
radia_run slurm-dev
```

Then start the supervisor on `v.radia.run` with:

```shell
bash etc/run-supervisor.sh sbatch v8.radia.run
```
In order to run on cori.nersc.gov, you need a socket open so that Cori can reach the server. This can be accomplished through a reverse proxy or `socat` running on a server with a public IP address.
You start the server basically the same way:

```shell
bash etc/run-server.sh nersc
```

Let's say the public IP address is `1.2.3.4` and the server is running on port 8001 on your VM (`v.radia.run`). On that public server, start `socat` to forward port 8001:

```shell
socat -d TCP-LISTEN:8001,fork,reuseaddr TCP:v.radia.run:8001
```

The supervisor is started with:

```shell
bash etc/run-supervisor.sh nersc 1.2.3.4:8001 <nersc_user>
```
To reach Sirepo running on the remote server from the browser on your computer, you'll want to set up ssh local forwarding. In your `~/.ssh/config` add:

```
Host foo
    HostName 1.2.3.4
    LocalForward 8000 v.radia.run:8000
```

Then go to 127.0.0.1:8000 in your browser and traffic will be forwarded.
The `<nersc_user>` must be a user that has a Sirepo development environment set up on cori.nersc.gov.
To set up the development environment on NERSC you'll need to do a few things.

- SSH into NERSC: `ssh <username>@cori.nersc.gov`
- Install a python environment: `curl radia.run | bash -s nersc-pyenv`
- Install sirepo and pykern:

```shell
$ mkdir -p ~/src/radiasoft/
$ cd $_
$ git clone https://github.com/radiasoft/pykern.git
$ cd pykern
$ pip install -e .
$ cd ..
$ git clone https://github.com/radiasoft/sirepo.git
$ cd sirepo
$ pip install -e .
```
- Pull the shifter image: `shifterimg pull docker:radiasoft/sirepo:dev`
You can run Sirepo without any of the scientific codes:

```shell
$ SIREPO_FEATURE_CONFIG_SIM_TYPES=myapp sirepo service http
```

This runs the demo app, which is available at http://v.radia.run:8000/myapp.
As user vagrant:

```shell
radia_run sirepo-dev
```

If `radia_run` fails, run with debug:

```shell
radia_run debug sirepo-dev
```
First, you need to set up Docker on CentOS/RHEL.
Here's a sample "full stack" server configuration. It runs with a specific IP address (`10.10.10.40`), because it is bound to a specific domain name, `sirepo.v4.radia.run`. It requires that you have Vagrant and VirtualBox installed, and that you are on a Mac or Linux box to execute the initial curl installer.
```shell
mkdir v4
cd v4
curl https://radia.run | bash -s vagrant-centos7
vagrant ssh
sudo su - -c 'radia_run redhat-docker'
# first time disables SELinux; you'll see a message saying this
exit
vagrant reload
vagrant ssh
sudo su - -c 'radia_run redhat-docker'
sudo su -
yum install -y nginx
cd /
curl -s -S -L https://github.com/radiasoft/sirepo/wiki/images/v4-root.tgz | tar xzf -
systemctl daemon-reload
systemctl restart docker
docker pull radiasoft/sirepo:dev
systemctl start sirepo_job_supervisor
systemctl start sirepo
systemctl start nginx
```
You can access the server as https://sirepo.v4.radia.run from the local host.
This assumes you will be viewing notebooks in a web browser on your host machine. You may need to install JupyterLab yourself. To do so, log in to the VM and enter:

```shell
cd ~/src/radiasoft/sirepo
bash etc/run.sh jupyterhub
```

You can visit http://<hostname>:8080/jupyter to get to the Jupyter UI.
Jupyter is a "moderated" sim type; in development you are both the user and the moderator.
At the login prompt enter `vagrant@localhost.localdomain`. Mail is delivered in `~/mail` in the VM.
Follow the links in the mail messages to moderate your user and give permission to access the Jupyter UI.
- We have a dedicated repo for developing JupyterLab extensions; see the rsjupyterlab repo for more.
- JupyterHub has templates that define the JupyterHub UI, which live in `$(pyenv prefix)/share/jupyterhub/templates`.
- To modify the UI in a clean way, you can create child templates that inherit from the ones in `$(pyenv prefix)/share/jupyterhub/templates`. For example:

```
{% extends "templates/page.html" %}
{% block nav_bar_right_items %}
  <li>
    item
  </li>
  {{ super() }}
{% endblock %}
```
The above example inherits from the JupyterHub source templates and adds a list item to the `nav_bar_right_items` block. The call to `super()` ensures that the contents of the parent template's `nav_bar_right_items` block still populate that part of the template.
- Your child templates should live in `sirepo/package_data/jupyterhub_templates/`.
- `jupyterhub_conf.py.jinja` points JupyterHub to your child templates with this line:

```
c.JupyterHub.template_paths = [sirepo.jupyterhub.template_dirs()]
```
- If the JupyterHub dev server crashes due to some error, the other server processes might not exit properly. This can cause issues when you try to restart the JupyterHub dev server.
- To fix this, you may need to kill these processes manually, e.g.:

```shell
pkill -f uwsgi
pkill -f nginx
pkill -f sirepo
```

- Other processes might be left running, in which case use `ps x` to inspect. You can also do `lsof -i :<port no.>` to see what is running on a port, then `pkill -f <that process name>` as well.
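The manual cleanup above can be scripted. This sketch only echoes the commands so you can review them first; drop the `echo` to actually send the signals (the process names are the ones listed above):

```shell
# Print the pkill command for each known stray dev-server process;
# remove `echo` to perform the kills for real
for p in uwsgi nginx sirepo; do
    echo "pkill -f $p"
done
```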
The FLASH code is proprietary. Users must be granted access in order to use it.
In order to work on Sirepo with the FLASH code, developers must either build the proprietary FLASH tarball from source or have access to an already built version (see the sections below).
This method builds the proprietary FLASH tarball from the FLASH and RadiaSoft source code. If you wish to work on Sirepo and don't need to build from source, you only need to follow the instructions in the section below (Developing from tarball).
- You will need a copy of the FLASH source code (FLASH-4.6.2.tar.gz). You can get one yourself from the FLASH website. You need to be authorized by the FLASH Center for Computational Science to use the FLASH code.

```shell
cd ~/src/radiasoft && gcl rsconf && cd rsconf && rsconf build
mv <path-to>/FLASH-4.6.2.tar.gz ~/src/radiasoft/rsconf/proprietary
```

- Follow the radiasoft/download Development Notes to start a development installer server.

```shell
cd ~/src/radiasoft/rsconf && rsconf build
radia_run flash-tarball  # Make sure to run this in the window where you exported the install_server
```
- You should now see `flash-dev.tar.gz` in ~/src/radiasoft/rsconf/proprietary.
- Now follow the steps in the section below for working on Sirepo.
- Make sure you have `flash-dev.tar.gz` in ~/src/radiasoft/rsconf/proprietary. If you don't, follow the instructions above.
- `cd ~/src/radiasoft/sirepo && rm -rf run` (removing the run dir forces the Sirepo dev setup to copy the FLASH tarball into the proper location; this can also be done manually)
- Start the Sirepo server with the FLASH code enabled (e.g. `SIREPO_FEATURE_CONFIG_PROPRIETARY_SIM_TYPES=flash sirepo service http`)
- Visit v.radia.run:8000/flash
License: http://www.apache.org/licenses/LICENSE-2.0.html
Copyright © 2015–2020 RadiaSoft LLC. All Rights Reserved.