
Commit

Merge branch 'main' into extension-docs
pavithraes authored Sep 15, 2023
2 parents 228b4c1 + 71e515f commit 425e472
Showing 11 changed files with 768 additions and 331 deletions.
25 changes: 17 additions & 8 deletions docs/docs/how-tos/domain-registry.md
@@ -61,11 +61,16 @@ Finally, set the token value as an environment variable:
export CLOUDFLARE_TOKEN="cloudflaretokenvalue"
```

Also, add the flag `--dns-provider=cloudflare` to the [Nebari `deploy` command][nebari-deploy].
Also, add a `dns` section to the `nebari-config.yaml` file.

```yaml
dns:
  provider: cloudflare
```
## Using other DNS providers
Currently, Nebari only supports CloudFlare for [automatic DNS registration](link to automatic section below). If an alternate DNS provider is desired, change the `--dns-provider` flag from `cloudflare` to `none` on the Nebari `deploy` command.
Currently, Nebari only supports Cloudflare for [automatic DNS registration](#automatic-dns-provision). If an alternate DNS provider is desired, change the `dns.provider` field from `cloudflare` to `none` in the `nebari-config.yaml` file.
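For example, the `dns` section of your `nebari-config.yaml` would then look like:

```yaml
dns:
  provider: none
```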

Below are the links to detailed documentation on how to create and manage DNS records on a few providers:

@@ -81,18 +86,22 @@ The amount of time this takes varies for each DNS provider. Validate such inform

## Automatic DNS provision

Nebari has an extra flag for deployments that grants management and the creation of the DNS records for you automatically. For automatic DNS provision add `--dns-auto-provision` to your Nebari `deploy` command:
Nebari can also create and manage the DNS records for you automatically. For automatic DNS provisioning, set `dns.auto-provision` to `true` in your Nebari config file:

```bash
nebari deploy -c nebari-config \
--dns-provider cloudflare \
--dns-auto-provision
```yaml
dns:
  provider: cloudflare
  auto-provision: true
```

This will set the DNS provider as Cloudflare and automatically handle the creation or updates to the Nebari domain DNS records on Cloudflare.

:::warning
The usage of `--dns-auto-provision` is restricted to Cloudflare as it is the only fully integrated DNS provider that Nebari currently supports.
The usage of `dns.auto-provision` is restricted to Cloudflare as it is the only fully integrated DNS provider that Nebari currently supports.
:::

:::warning
Earlier versions of Nebari supported DNS settings through the `--dns-provider` and `--dns-auto-provision` flags on the `deploy` command. This feature has been removed in favor of using the `nebari-config.yaml` file.
:::

When you are done setting up the domain name, you can refer back to the [Nebari deployment documentation][nebari-deploy] and continue the remaining steps.
126 changes: 103 additions & 23 deletions docs/docs/how-tos/using-argo.md
@@ -1,48 +1,123 @@
---
id: using-argo
title: Automate workflows with Argo
title: Automate your first workflow with Argo
description: Argo workflow management
---

# Automate workflows with Argo

Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo
workflows comes enabled by default with Nebari deployments.
[Argo Workflows](https://argoproj.github.io/workflows) is an open source container-native
workflow engine for orchestrating parallel jobs on Kubernetes. In other words,
Argo helps you run a sequence of tasks or functions without you having to be
present (it will manage the server resources for you). Argo Workflows
comes enabled by default with Nebari deployments.

## Access Argo Server
Access control for Argo on Nebari is done through Keycloak user groups. All
users in the `admin` or `developer` groups have access to Argo.

If Argo Workflows is enabled, users can access argo workflows server at: `your-nebari-domain.com/argo`. Log in via
Keycloak with your usual credentials.
:::note
Also see the [Set up Argo Workflows documentation](/docs/how-tos/setup-argo).
:::


## Access the Argo Server

If Argo Workflows is enabled, users can access Argo Workflows UI at:
`your-nebari-domain.com/argo`. Log in via Keycloak with your usual credentials.

You can also download the
[Argo CLI](https://github.com/argoproj/argo-workflows/releases) if you prefer
a command line experience.

## Introduction to the Argo UI

Navigate to the Argo UI at `your-nebari-domain.com/argo`.

![Argo Server Landing Page](/img/how-tos/argo_landing_page.png)

From this page, you can see all the Argo servers currently running for each
workflow.

For Kubernetes deployments, it is important to note that these are
active pods. The two workflows shown in the UI above are complete
(the green check), but their servers are still running.

:::warning
We highly recommend setting the default timeout, otherwise the Argo pods will not
be culled on their own!
:::

You can click on each individual workflow to see the DAG and details for each
step in the workflow.

![Argo workflow detail](/img/how-tos/argo_workflow_details.png)

## Submit a workflow via Argo Server
## Submit a workflow

You can submit a workflow by clicking "SUBMIT NEW WORKFLOW" on the landing page assuming you have the appropriate
permissions.
You can submit a workflow through the UI by clicking "+ SUBMIT NEW WORKFLOW" on
the landing page. Argo offers a template for the workflow YAML format.

![Argo Server Landing Page](/img/tutorials/argo_server_landing_page.png)
![Argo UI submit new workflow](/img/how-tos/argo_submit_new_workflow.png)

Click `+ CREATE` when you're ready to submit. YAML is not the only option for
generating workflows: Argo also allows you to create workflows via Python. More
information on how to generate these specifications follows below.
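For reference, a minimal workflow manifest of the kind the UI accepts might look like the sketch below (the names and container image are purely illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-  # Argo appends a random suffix
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: busybox
        command: [echo]
        args: ["hello world"]
```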

## Submit a workflow via Argo CLI

You can submit or manage workflows via the Argo CLI. The Argo CLI can be downloaded from the
[Argo Releases](https://github.com/argoproj/argo-workflows/releases) page. After downloading the CLI, you can get your
token from the Argo Server UI by clicking on the user tab in the bottom left corner and then clicking "Copy To
Clipboard". You'll need to make a few edits to access to what was copied for Argo CLI to work correctly. The base href
should be `ARGO_BASE_HREF=/argo` in the default nebari installation and you need to set the namespace where Argo was
deployed (dev by default) `ARGO_NAMESPACE=dev`. After setting those variables and the others copied from the Argo Server
UI, you can check that things are working by running `argo list`.
You can also submit or manage workflows via the Argo CLI. The Argo CLI can be
downloaded from the
[Argo Releases](https://github.com/argoproj/argo-workflows/releases) page.

You can submit a workflow through the CLI using `argo submit my-workflow.yaml`.

The `argo list` command will list all the running workflows.

If you've just submitted a workflow and you want to check on it, you can run
`argo get @latest` to get the latest submitted workflow.

You can also access the logs for the latest workflow using `argo logs @latest`,
or pass a specific workflow name instead of `@latest`.
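Putting those commands together, a typical CLI session might look like this (the workflow file name is illustrative):

```bash
# Submit a workflow definition
argo submit my-workflow.yaml

# List all running workflows
argo list

# Check on the most recently submitted workflow
argo get @latest

# Stream the logs of the most recently submitted workflow
argo logs @latest
```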

For more information on Argo workflows via the UI or the CLI, you can visit the
[Argo docs](https://argoproj.github.io/argo-workflows/workflow-concepts/).

[Hera](https://hera-workflows.readthedocs.io/) is a framework for building and
submitting Argo workflows in Python. Learn more in the [Argo Workflows walkthrough tutorial](/docs/tutorials/argo-workflows-walkthrough).

## Access your Nebari environments and file system while on an Argo pod (BETA)

![Argo Workflows User Tab](/img/tutorials/argo_workflows_user_tab.png)
Once you move beyond the "Hello World" Argo examples, you may realize that the
conda environments and the persistent storage you have on Nebari would be
really useful in your temporary Argo pods. Luckily, we've solved that problem
for you!

## Jupyterflow-Override (Beta)
Nebari comes with [Nebari Workflow Controller (BETA)](https://github.com/nebari-dev/nebari-workflow-controller), abbreviated as NWC,
which transfers the user's environment variables, home and shared directories,
docker image, and available conda environments to the server where the Workflow
is running. Users can then run a script that loads and saves from their home
directory with a particular conda environment.

All of these things are enabled when users add the `jupyterflow-override` label
to their workflow as in this example using Hera:

```python
from hera.workflows import Workflow

Workflow(
    ...,  # other Workflow arguments
    labels={"jupyterflow-override": "true"},
)
```

Behind the scenes, NWC will override a portion of the workflow spec, mount
directories, and so on. The label can be added to the Workflow in a Kubernetes
manifest, via Hera, via the Argo CLI, or via the Argo Server web UI.

:::note
This feature requires that you have a Jupyter user pod running when the "jupyterflow-override" workflow is submitted. The workflow will not be created if you don't have a Jupyter user pod running.
:::

New users of Argo Workflows are often frustrated because the Argo Workflow pods do not have access to the same conda environments and shared files as the Jupyterlab user pod by default. To help with this use case, Nebari comes with [Nebari Workflow Controller](https://github.com/nebari-dev/nebari-workflow-controller) which overrides a portion of the Workflow spec when the
`jupyterflow-override` label is applied to a workflow. The Jupyterlab user pod's environment variables, home and shared directories, docker image, and more will be added to the Workflow. Users can then e.g. run a script that loads and saves from their home directory with a particular conda environment. This works whether the label is added to the Workflow in a kubernetes manifest, via Hera, the argo CLI, or via the Argo Server Web UI. However, this does require that a Jupyter user pod be running when the workflow is submitted. The Workflow pod will have the same resources (cpu, memory) that the user pod has.

### Example
### YAML Example

```yaml
apiVersion: argoproj.io/v1alpha1
@@ -73,3 +148,8 @@ The jupyterflow-override feature is in beta so please [leave some feedback](http
## Additional Argo Workflows Resources

Refer to the [Argo documentation](https://argoproj.github.io/argo-workflows/) for further details on Argo Workflows.

## Next Steps

Now that you have had an introduction, check out the [more detailed tutorial](/tutorials/argo-workflows-walkthrough.md) on
Argo for some practical examples!
592 changes: 302 additions & 290 deletions docs/docs/references/RELEASE.md

Large diffs are not rendered by default.

340 changes: 340 additions & 0 deletions docs/docs/tutorials/argo-workflows-walkthrough.md
@@ -0,0 +1,340 @@
---
id: argo-workflows-walkthrough
title: Argo Workflows Walkthrough
description: A walk through several example workflows
---

# Argo Workflows Walkthrough

## Introduction

Using a workflow manager can help you automate ETL pipelines, schedule regular
analysis, or just chain together a sequence of functions. Argo is available on
Nebari for workflow management. If you haven't already, check out the
[introductory documentation on using Argo](/how-tos/using-argo.md).

For this tutorial we'll be using the
[Hera](https://hera-workflows.readthedocs.io/) interface to Argo. This will
allow us to write a workflow script in Python.

## The most basic of all examples

The following is perhaps the simplest possible workflow. There are
quite a few pieces here, so we'll step through them.

### Global Configuration

Nebari sets up several environment variables with tokens, etc. that enable us to
use Argo more smoothly. However, there are two global configuration settings
that we'll need to manually add to each workflow script.

```python
from hera.shared import global_config
import os

global_config_host = f"https://{os.environ['ARGO_SERVER'].rsplit(':')[0]}{os.environ['ARGO_BASE_HREF']}/"
global_config.host = global_config_host
global_config.token = os.environ['ARGO_TOKEN']
```

### Workflow Labels

Next, we'll set some labels on our workflows. Because Nebari uses a service
account token by default, we need to tell Argo which user we are. We also need
to tell Argo to use the Nebari Workflow Controller so that we have access to
our Nebari file system and conda environments from within the Argo pod
([more information](/how-tos/using-argo.md#access-your-nebari-environments-and-file-system-while-on-an-argo-pod-beta)).

Workflow label values must be ASCII-safe (alphanumeric characters and `-`),
while usernames have no such constraint, so we use a helper function
`sanitize_label` to convert any other characters to their hexadecimal ASCII
equivalents and keep our label valid for Argo.

```python
import re


def sanitize_label(label: str) -> str:
    """
    Convert all characters that are not alphanumeric or a `-` to their
    hexadecimal ASCII equivalent. This is because Kubernetes will complain
    if certain characters are used. This is the same approach taken by
    Jupyter for sanitizing. On the Nebari Workflow Controller, there is a
    `desanitize_label` function that reverses these changes so we can then
    perform a user look-up.

    >>> sanitize_label("user@email.com")
    'user-40email-2ecom'

    Parameters
    ----------
    label: str
        Username to be sanitized (typically the JupyterHub username)

    Returns
    -------
    Hexadecimal ASCII equivalent of `label`
    """
    label = label.lower()
    pattern = r"[^A-Za-z0-9]"
    return re.sub(pattern, lambda x: "-" + hex(ord(x.group()))[2:], label)


username = os.environ["JUPYTERHUB_USER"]
labels = {
    "workflows.argoproj.io/creator-preferred-username": sanitize_label(username),
    "jupyterflow-override": "true",
}
```

### Time to Live Strategy

By default, Argo does not destroy servers after completion of a workflow.
Because this can cause substantial unexpected cloud costs, we _highly_
recommend _always_ setting the "Time to live strategy", or `TTLStrategy` on
every workflow.

:::note
The Argo UI will only show the workflow details until the
`TTLStrategy` time has elapsed so make sure you have enough time to evaluate
logs, etc. before those details are removed.
:::

```python
from hera.workflows.models import TTLStrategy

DEFAULT_TTL = 90
ttl_strategy = TTLStrategy(
    seconds_after_completion=DEFAULT_TTL,
    seconds_after_success=DEFAULT_TTL,
    seconds_after_failure=DEFAULT_TTL,
)
```

### Extra parameters

We will also set a `node_selector` parameter, although it is optional. If you
do not include it, Argo will run on your current user instance. If you include
it, Argo will run on the server type that you request. These server types
correspond to the names in your `nebari-config.yaml`. Nebari Jupyter instances
are always in the `user` group, so that's a good place to start, but you may
want to use other CPU or GPU configurations that have been specified in your
config. It's also important to note that if the node pool is limited to one node
in the config, Argo will not be able to spin up. Also note that the key in this
dictionary refers to the cloud-specific Kubernetes node selector label. For
example, AWS uses `eks.amazonaws.com/nodegroup` while GCP uses
`cloud.google.com/gke-nodepool`.

The `namespace` parameter is set to `dev` by default, but Nebari sets it up
as part of the environment variables so we'll pull it from there. The
`generate_name` parameter allows us to give our job a prefix and Argo will add
a suffix to ensure uniqueness. Lastly, we'll give the workflow an `entrypoint`.
This parameter needs to match the name of the `Steps` (or `DAG`) you're using.

### Workflow constructor

Let's put this all together and have a closer look.

```python
from hera.workflows import Steps, Workflow, script


@script()
def echo(message: str):
    print(message)


with Workflow(
    generate_name="hello-user",
    entrypoint="steps",
    node_selector={"eks.amazonaws.com/nodegroup": "user"},
    labels=labels,
    namespace=os.environ["ARGO_NAMESPACE"],
    ttl_strategy=ttl_strategy,
) as w:
    with Steps(name="steps"):
        echo(arguments={"message": "hello"})

w.create()
```

Workflows essentially manage a series of tasks for us. There are two basic
mechanisms to construct these in Argo: `Steps` and `DAG`s.

For this example, we've used `Steps`. When you build your workflow with `Steps`
as a context manager, as we've done here, you can add as many calls as you'd
like (for example, duplicating the call to `echo()`) and Argo will separate
these commands out into individual steps run in series.

For example, if you wanted to run two separate functions, it might look like this:

```python
...
with Steps(name="steps"):
    echo(arguments={"message": "hello"})
    echo(arguments={"message": "goodbye"})
```

Each step would have its own resource management and logs within Argo. You can
also tell Hera that the function calls within the `Steps` context manager
should be run in parallel.
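
As a sketch, Hera exposes a `parallel()` helper on the `Steps` context for grouping calls that should run at the same time (treat the exact API and step names here as assumptions and check the Hera docs for your version):

```python
...
with Steps(name="steps") as s:
    echo(name="first", arguments={"message": "runs first"})
    with s.parallel():
        # these two steps run at the same time
        echo(name="branch-a", arguments={"message": "hello"})
        echo(name="branch-b", arguments={"message": "goodbye"})
```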

If you'd like even more control over your workflow, for example diamond
or branching workflows, the `DAG` constructor will allow you to specify that
level of complexity.
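
As a rough sketch, a diamond-shaped workflow built with `DAG` could look like the following, reusing the `echo` script, `labels`, and `ttl_strategy` defined above (the task names and dependencies are illustrative):

```python
from hera.workflows import DAG, Workflow

with Workflow(
    generate_name="dag-diamond-",
    entrypoint="diamond",
    node_selector={"eks.amazonaws.com/nodegroup": "user"},
    labels=labels,
    namespace=os.environ["ARGO_NAMESPACE"],
    ttl_strategy=ttl_strategy,
) as w:
    with DAG(name="diamond"):
        A = echo(name="A", arguments={"message": "A"})
        B = echo(name="B", arguments={"message": "B"})
        C = echo(name="C", arguments={"message": "C"})
        D = echo(name="D", arguments={"message": "D"})
        A >> [B, C] >> D  # B and C run in parallel after A; D runs once both finish

w.create()
```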

## Beyond "Hello World"

We've proven that we can run **something** on Argo, but what about actually
running some Python code? Let's review the requirements.

We've already discussed setting the `jupyterflow-override` label, which tells
Nebari to mount our home directory and the conda environments onto our Argo
pod. We will also need to use a Docker image which has conda set up and
initialized. We'll grab the Nebari Jupyter image. This has the added benefit
of bringing parity between running code on your Nebari instance and running
it on Argo.

### "Argo Assistant" code

As you've seen, we're creating quite a bit of peripheral code and we're about
to add even more. Let's bring some structure in to help us organize things.
For this, we have a little "Argo Assistant" code that will help us out.

```python
import logging
import os
import subprocess
from pathlib import Path

from hera.workflows import Container, Parameter, Steps, Workflow
from hera.workflows.models import TTLStrategy

LOGGER = logging.getLogger()

DEFAULT_TTL = 90  # seconds
DEFAULT_ARGO_NODE_TYPE = "user"
DEFAULT_K8S_SELECTOR_LABEL = "eks.amazonaws.com/nodegroup"


class NebariWorkflow(Workflow):
    """Hera Workflow object with required/reasonable defaults for running
    on Nebari
    """

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

        if "ttl_strategy" in kwargs.keys():
            self.ttl_strategy = kwargs["ttl_strategy"]
        else:
            self.ttl_strategy = TTLStrategy(
                seconds_after_completion=DEFAULT_TTL,
                seconds_after_success=DEFAULT_TTL,
                seconds_after_failure=DEFAULT_TTL,
            )

        if "node_selector" in kwargs.keys():
            self.node_selector = kwargs["node_selector"]
        else:
            self.node_selector = {DEFAULT_K8S_SELECTOR_LABEL: DEFAULT_ARGO_NODE_TYPE}

        self.namespace = os.environ["ARGO_NAMESPACE"]

        # `sanitize_label` and `username` are defined earlier in this tutorial
        self.labels = {
            "workflows.argoproj.io/creator-preferred-username": sanitize_label(username),
            "jupyterflow-override": "true",
        }


def validate_submission(script_path, conda_env):
    """Minimal validation sketch (an assumed helper): check that the script
    exists and that the conda environment appears in `conda env list`.
    """
    if not Path(script_path).is_file():
        LOGGER.error(f"Script not found: {script_path}")
        return False

    result = subprocess.run(
        ["conda", "env", "list"], capture_output=True, text=True, check=False
    )
    if conda_env not in result.stdout:
        LOGGER.error(f"Conda environment not found: {conda_env}")
        return False

    return True


def create_conda_command(
    script_path,
    conda_env,
    stdout_path="stdout.txt",
):
    """Workflows need to be submitted via a bash command that runs a
    python script. This function creates a conda run command that
    will run a script from a given location using a given conda
    environment.

    Parameters
    ----------
    script_path: str
        Path to the python script (including extension) to be run on Argo
    conda_env: str
        Conda environment name in which to run the `script_path`
    stdout_path: str
        Local Nebari path (for your user) for standard output from
        the given script. Defaults to `stdout.txt`.

    Returns
    -------
    String bash command
    """

    conda_command = f'conda run -n {conda_env} python "{script_path}" >> {stdout_path}'
    return conda_command


def create_bash_container(name="bash-container"):
    """Create a workflow container that is able to receive bash commands"""
    bash_container = Container(
        name=name,
        image="thiswilloverridden",  # the jupyterflow-override label replaces this image
        inputs=[
            Parameter(name="bash_command")
        ],  # inform argo that an input called bash_command is coming
        command=["bash", "-c"],
        args=["{{inputs.parameters.bash_command}}"],  # use the input parameter
    )
    return bash_container


def submit_argo_script(script_path, conda_env, stdout_path="stdout.txt"):
    """Submit a script to be run via Argo in a specific environment"""
    validated = validate_submission(script_path, conda_env)

    if not validated:
        raise RuntimeError("Unable to submit Argo workflow")

    conda_command = create_conda_command(script_path, conda_env, stdout_path)

    LOGGER.debug(f"Submitting command {conda_command} to Argo")

    with NebariWorkflow(
        generate_name="workflow-name-",
        entrypoint="steps",
    ) as w:
        bash_container = create_bash_container()
        with Steps(
            name="steps",  # must match Workflow entrypoint
            annotations={"go": "here"},  # example annotation
        ):
            bash_container(
                name="step-name",
                arguments=[Parameter(name="bash_command", value=conda_command)],
            )

    workflow = w.create()
    return workflow
```

Next, you'll need to create a Python script and a conda environment. Then, to
submit the workflow to Argo, you would run the high-level command:

```python
path = '/path/to/pyfile.py'
nebari_conda_env = 'analyst-workflow-env'
submit_argo_script(path, nebari_conda_env)
```

Now you can go to the Argo UI and monitor progress!

## Conclusion

Well done! You've learned how to submit a Python workflow to Argo and have a
few extra tools to help you along.
15 changes: 5 additions & 10 deletions docs/docs/tutorials/run-notebooks-as-a-job.md
@@ -12,7 +12,6 @@ This is a new feature still in beta so please [leave some feedback](https://gith
There is one known issue with the `Update Job Definition` and `Resume` job definition features, related to a Nebari-Workflow-Controller issue [captured here](https://github.com/nebari-dev/nebari-workflow-controller/issues/18). The current workaround for those who need to update (or pause) their job definitions is simply to delete the current job definition and create a new one as and when needed.
:::


A common task that many Nebari users have is submitting their notebooks to run as a script or to run on a predefined schedule. This is now possible with [Jupyter-Scheduler](https://jupyter-scheduler.readthedocs.io/en/latest/index.html#), a JupyterLab extension that has been expanded and integrated into Nebari. This also means that you can view the status of the jobs by visiting the `<nebari-domain>/argo` endpoint.

Notebook jobs are useful in situations where you need no human interaction besides submitting the job, and the results can be efficiently saved to your home directory, the cloud, or other similar storage locations. They are also useful in situations where the notebook might run for a long time and the user needs to shut down their JupyterLab server.
@@ -28,15 +27,14 @@ Jupyter-Scheduler is included by default in the base Nebari JupyterLab image and
When using a conda-store environment, please ensure that the [`papermill` package](https://papermill.readthedocs.io/en/latest/) is included.
:::


## Submitting a Notebook as a Jupyter-Scheduler Job

To submit your notebook as a Jupyter-Scheduler Job, simply click the `Jupyter-Scheduler` icon on the top of your notebook toolbar.


![Jupyter-Scheduler UI - location of the icon on the notebook toolbar](/img/tutorials/jupyter-scheduler-icon.png)

This will open the Jupyter-Scheduler UI. From here you can specify:

- the notebook **job name**
- the **input file** to use (this will default to the file from which the icon was clicked)
- the **environment** to run the notebook with
@@ -48,17 +46,16 @@ This will open the Jupyter-Scheduler UI. From here you can specify:

Once created, the status and output of the notebook job can be viewed from the Jupyter-Scheduler UI:


![Jupyter-Scheduler UI - view the notebook job status](/img/tutorials/jupyter-scheduler-job-status.png)

Click on the notebook job name to view more information about the job. From here, you can view:

- the **job ID**
- when the job was **created**
- the **start time** and **end time**

![Jupyter-Scheduler UI - view the notebook job details](/img/tutorials/jupyter-scheduler-job-details.png)


:::info
As mentioned above, the notebook job will run as an Argo-Workflows workflow. This means these jobs (workflows) are viewable from the Argo-Workflows UI at `<nebari-domain>/argo`. The name of the workflow is prefixed with `job-<job-id>`.

@@ -85,6 +82,7 @@ You can check [crontab.guru](https://crontab.guru) which is a nifty tool that tr
</div>

When a job definition is created, a new job is created at each time interval specified by the schedule. These created jobs can be inspected like a regular notebook job. From here you can:

- **delete** the job definition
- **pause** the job definition
- view details such as the **status** of the job definition
@@ -105,7 +103,6 @@ Unlike a regular notebook job, job definitions create Argo-Workflows cron-workfl
Notebook jobs that run on a schedule will run indefinitely, so it's the responsibility of the job creator to either delete or pause the job when it is no longer needed.
:::


## Debugging failed jobs

Occasionally notebook jobs will fail to run and it's helpful to understand why.
@@ -114,18 +111,16 @@ Occasionally notebook jobs will fail to run and it's helpful to understand why.
<img src="/img/tutorials/jupyter-scheduler-job-failed.png" alt="Jupyter-Scheduler UI - view the details of a failed job." width="60%"/>
</div>


If there is an issue with the notebook code itself, viewing the notebook job logs will help the user get a better idea of what went wrong. These are the steps to get the logs:

1. Document the job ID
2. Launch a terminal session
3. Navigate to `~/.local/share/jupyter/scheduler_staging_area`
4. Find the folder that corresponds to the job ID that failed
5. Open (or `cat`) the `output.ipynb` to view the notebook as it was run


:::info
Notebook job details can also be viewed from the `<nebari-domain>/argo` UI.
:::

Lastly, if the job fails without writing to this `scheduler_staging_area`, or the job status is stuck in `In progress` mode for an extended period of time, have an administrator try and view the specific logs on the user's JupyterLab server pod or on the workflow pod itself.
1 change: 1 addition & 0 deletions docs/sidebars.js
@@ -48,6 +48,7 @@ module.exports = {
"tutorials/kbatch",
"tutorials/cost-estimate-report",
"tutorials/jupyter-scheduler",
"tutorials/argo-workflows-walkthrough",
],
},
{
Binary file added docs/static/img/how-tos/argo_landing_page.png
Binary file not shown.
Binary file not shown.
