
Commit ab1b1c0

Remove gputil dependency and minor cleanup (mlc-ai#15)
1 parent 2cbd64d commit ab1b1c0

File tree

3 files changed, +5 −5 lines changed


README.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -11,6 +11,7 @@ This project takes a step to change that status quo and bring more diversity to
 Building special client apps for those applications is one option (which we also support), but won’t it be even more amazing if we can simply open a browser and directly bring AI natively to your browser tab? There is some level of readiness in the ecosystem. WebAssembly allows us to port more lower-level runtimes onto the web. To solve the compute problem, WebGPU is getting matured lately and enables native GPU executions on the browser.
 
 We are just seeing necessary elements coming together on the client side, both in terms of hardware and browser ecosystem. Still, there are big hurdles to cross, to name a few:
+
 * We need to bring the models somewhere without the relevant GPU-accelerated Python frameworks.
 * Most of the AI frameworks have a heavy reliance on optimized computed libraries that are maintained by hardware vendors. We need to start from zero. To get the maximum benefit, we might also need to produce variants per client environment.
 * Careful planning of memory usage so we can fit the models into memory.
@@ -20,6 +21,7 @@ We do not want to only do it for just one model. Instead, we would like to prese
 ## Get Started
 
 We have a [Jupyter notebook](https://github.com/mlc-ai/web-stable-diffusion/blob/main/walkthrough.ipynb) that walks you through all the stages, including
+
 * elaborate the key points of web ML model deployment and how we do to meet these points,
 * import the stable diffusion model,
 * optimize the model,
@@ -28,6 +30,7 @@ We have a [Jupyter notebook](https://github.com/mlc-ai/web-stable-diffusion/blob
 * deploy the model on web with WebGPU runtime.
 
 If you want to go through these steps in command line, please follow the commands below:
+
 <details><summary>Commands</summary>
 
 * Install TVM Unity. You can either
```

requirements.txt

Lines changed: 0 additions & 1 deletion

```diff
@@ -1,4 +1,3 @@
 accelerate
 diffusers
-gputil
 transformers
```

walkthrough.ipynb

Lines changed: 2 additions & 4 deletions

```diff
@@ -34,8 +34,7 @@
 "!python3 -m pip install --pre torch --upgrade --index-url https://download.pytorch.org/whl/nightly/cpu\n",
 "!python3 -m pip install -r requirements.txt\n",
 "\n",
-"import GPUtil\n",
-"has_gpu = len(GPUtil.getGPUs()) > 0\n",
+"has_gpu = !nvidia-smi -L\n",
 "cudav = \"-cu116\" if has_gpu else \"\" # check https://mlc.ai/wheels if you have a different CUDA version\n",
 "\n",
 "!python3 -m pip install mlc-ai-nightly{cudav} -f https://mlc.ai/wheels"
@@ -54,7 +53,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from typing import Dict, List, Tuple\n",
 "from platform import system\n",
 "\n",
 "import tvm\n",
@@ -1882,7 +1880,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.9"
+"version": "3.8.16"
 }
 },
 "nbformat": 4,
```
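The new GPU check, `has_gpu = !nvidia-smi -L`, relies on IPython's `!` shell capture: the command's output lines come back as a list, so the result is truthy only when `nvidia-smi` lists at least one GPU. A plain-Python equivalent (a sketch for illustration; `detect_gpu` is a hypothetical helper, not part of the repository) might look like:

```python
import shutil
import subprocess


def detect_gpu() -> bool:
    """Return True if nvidia-smi is available and lists at least one GPU.

    Mirrors the notebook's `has_gpu = !nvidia-smi -L` check: with no driver
    or no GPUs, the captured output is empty, which is falsy.
    """
    if shutil.which("nvidia-smi") is None:
        return False
    result = subprocess.run(
        ["nvidia-smi", "-L"], capture_output=True, text=True
    )
    # `nvidia-smi -L` prints one line per device, e.g. "GPU 0: ...".
    return result.returncode == 0 and bool(result.stdout.strip())


# Pick the wheel suffix the same way the notebook does.
cudav = "-cu116" if detect_gpu() else ""
print(f"mlc-ai-nightly{cudav}")
```

This avoids the extra `gputil` dependency entirely, at the cost of assuming the NVIDIA driver tools are on `PATH` when a GPU is present.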
