diff --git a/README.md b/README.md
index ec5ac09..63e1236 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,26 @@
+
# VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models
+- [TL;DR](#tl-dr)
+- [News](#news)
+- [Installation](#installation)
+ * [Dependencies](#dependencies)
+ * [Install VPTQ on your machine](#install-vptq-on-your-machine)
+- [Evaluation](#evaluation)
+ * [Models from Open Source Community](#models-from-open-source-community)
+ * [Language Generation Example](#language-generation-example)
+ * [Terminal Chatbot Example](#terminal-chatbot-example)
+ * [Python API Example](#python-api-example)
+ * [Gradio Web App Example](#gradio-web-app-example)
+- [Tech Report](#tech-report)
+ * [Early Results from Tech Report](#early-results-from-tech-report)
+- [Road Map](#road-map)
+- [Project main members](#project-main-members-)
+- [Acknowledgement](#acknowledgement)
+- [Publication](#publication)
+- [Star History](#star-history)
+- [Limitation of VPTQ](#limitation-of-vptq)
+- [Contributing](#contributing)
+- [Trademarks](#trademarks)
## TL;DR
@@ -10,45 +32,14 @@ VPTQ can compress 70B, even the 405B model, to 1-2 bits without retraining and m
* Agile Quantization Inference: low decode overhead, best throughput, and TTFT
-**🚀Free Huggingface Demo🚀** [Huggingface Demo](https://huggingface.co/spaces/VPTQ-community/VPTQ-Demo) – Try it now and witness the power of extreme low-bit quantization!
-
-**Example: Run Llama 3.1 70b on RTX4090 (24G @ ~2bits) in real time**
-![Llama3 1-70b-prompt](https://github.com/user-attachments/assets/d8729aca-4e1d-4fe1-ac71-c14da4bdd97f)
-
----
-
-**VPTQ is an ongoing project. If the open-source community is interested in optimizing and expanding VPTQ, please feel free to submit an issue or DM.**
-
----
-
## News
+- [2024-10-6] 🚀 **Try it on Google Colab**: see the example notebook at [notebooks/vptq_example.ipynb](notebooks/vptq_example.ipynb)
- [2024-10-5] 🚀 **Add free Huggingface Demo**: [Huggingface Demo](https://huggingface.co/spaces/VPTQ-community/VPTQ-Demo)
- [2024-10-4] ✏️ Updated the VPTQ tech report and fixed typos.
- [2024-9-20] 🌐 Inference code is now open-sourced on GitHub—join us and contribute!
- [2024-9-20] 🎉 VPTQ paper has been accepted for the main track at EMNLP 2024.
-## [**Tech Report**](https://github.com/microsoft/VPTQ/blob/main/VPTQ_tech_report.pdf)
-
-Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs). Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low-bit (even down to 2 bits). It reduces memory requirements, optimizes storage costs, and decreases memory bandwidth needs during inference. However, due to numerical representation limitations, traditional scalar-based weight quantization struggles to achieve such extreme low-bit. Recent research on Vector Quantization (VQ) for LLMs has demonstrated the potential for extremely low-bit model quantization by compressing vectors into indices using lookup tables.
-
-Read tech report at [**Tech Report**](https://github.com/microsoft/VPTQ/blob/main/VPTQ_tech_report.pdf) and [**arXiv Paper**](https://arxiv.org/pdf/2409.17066)
-
-### Early Results from Tech Report
-VPTQ achieves better accuracy and higher throughput with lower quantization overhead across models of different sizes. The following experimental results are for reference only; VPTQ can achieve better outcomes under reasonable parameters, especially in terms of model accuracy and inference speed.
-
-
-
-| Model | bitwidth | W2↓ | C4↓ | AvgQA↑ | tok/s↑ | mem(GB) | cost/h↓ |
-| ----------- | -------- | ---- | ---- | ------ | ------ | ------- | ------- |
-| LLaMA-2 7B | 2.02 | 6.13 | 8.07 | 58.2 | 39.9 | 2.28 | 2 |
-| | 2.26 | 5.95 | 7.87 | 59.4 | 35.7 | 2.48 | 3.1 |
-| LLaMA-2 13B | 2.02 | 5.32 | 7.15 | 62.4 | 26.9 | 4.03 | 3.2 |
-| | 2.18 | 5.28 | 7.04 | 63.1 | 18.5 | 4.31 | 3.6 |
-| LLaMA-2 70B | 2.07 | 3.93 | 5.72 | 68.6 | 9.7 | 19.54 | 19 |
-| | 2.11 | 3.92 | 5.71 | 68.7 | 9.7 | 20.01 | 19 |
-
----
## Installation
@@ -60,7 +51,7 @@ VPTQ achieves better accuracy and higher throughput with lower quantization over
- Accelerate >= 0.33.0
- latest datasets
-### Installation
+### Install VPTQ on your machine
> Preparation steps that might be needed: Set up CUDA PATH.
```bash
@@ -73,12 +64,17 @@ export PATH=/usr/local/cuda-12/bin/:$PATH # set dependent on your environment
pip install git+https://github.com/microsoft/VPTQ.git --no-build-isolation
```
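+
+As a quick post-install sanity check (a minimal sketch of our own, not an official example), confirm that the package imports and a CUDA device is visible:
+
+```python
+# Post-install check: import vptq and confirm a CUDA device is available.
+import torch
+
+import vptq  # raises ImportError if the extension failed to build
+
+assert torch.cuda.is_available(), "VPTQ inference expects a CUDA-capable GPU"
+print("VPTQ imported; CUDA device:", torch.cuda.get_device_name(0))
+```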
-## Evaluation
-### Colab Demo
-Have a fun on Colab.
+**Example: Run Llama 3.1 70B on an RTX 4090 (24 GB @ ~2 bits) in real time**
+![Llama3 1-70b-prompt](https://github.com/user-attachments/assets/d8729aca-4e1d-4fe1-ac71-c14da4bdd97f)
+
+---
+
+**VPTQ is an ongoing project. If the open-source community is interested in optimizing and expanding VPTQ, please feel free to submit an issue or DM.**
+
+---
-
+## Evaluation
### Models from Open Source Community
@@ -161,6 +157,29 @@ python -m vptq.app
---
+## Tech Report
+Read the tech report at [**Tech Report (PDF)**](https://github.com/microsoft/VPTQ/blob/main/VPTQ_tech_report.pdf) and the [**arXiv paper**](https://arxiv.org/pdf/2409.17066).
+
+Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs). Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low bit-widths (even down to 2 bits). Such quantization reduces memory requirements, storage costs, and memory-bandwidth needs during inference. However, due to the limits of numerical representation, traditional scalar-based weight quantization struggles to reach such extreme low bit-widths. Recent research on Vector Quantization (VQ) for LLMs has demonstrated the potential for extremely low-bit model quantization by compressing vectors into indices using lookup tables.
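+
+To make the lookup-table idea concrete, here is a minimal, illustrative sketch of vector-quantized weight storage and reconstruction (names, shapes, and values are hypothetical, not VPTQ's actual implementation):
+
+```python
+import torch
+
+# A codebook of K centroid vectors, each covering v consecutive weights.
+K, v = 256, 8
+codebook = torch.randn(K, v)  # stand-in for learned centroids
+
+# The "quantized" weights are just integer indices into the codebook.
+indices = torch.randint(0, K, (1024,), dtype=torch.uint8)
+
+# Dequantization is a table lookup: gather centroid vectors and flatten.
+weights = codebook[indices.long()].reshape(-1)  # 1024 * 8 = 8192 weights
+
+# Storage cost: 8 bits per index spread over v weights = 1 bit/weight,
+# excluding the small, shared codebook overhead.
+print(8 * indices.element_size() / v, "bits per weight")
+```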
+
+### Early Results from Tech Report
+VPTQ achieves better accuracy and higher throughput with lower quantization overhead across models of different sizes. The experimental results below are for reference only; with well-tuned parameters, VPTQ can achieve even better results, especially in model accuracy and inference speed. W2 and C4 report perplexity on WikiText-2 and C4 (lower is better), and AvgQA is the average accuracy across QA benchmarks (higher is better).
+
+
+
+| Model | bitwidth | W2↓ | C4↓ | AvgQA↑ | tok/s↑ | mem (GB) | cost/h↓ |
+| ----------- | -------- | ---- | ---- | ------ | ------ | ------- | ------- |
+| LLaMA-2 7B | 2.02 | 6.13 | 8.07 | 58.2 | 39.9 | 2.28 | 2 |
+| | 2.26 | 5.95 | 7.87 | 59.4 | 35.7 | 2.48 | 3.1 |
+| LLaMA-2 13B | 2.02 | 5.32 | 7.15 | 62.4 | 26.9 | 4.03 | 3.2 |
+| | 2.18 | 5.28 | 7.04 | 63.1 | 18.5 | 4.31 | 3.6 |
+| LLaMA-2 70B | 2.07 | 3.93 | 5.72 | 68.6 | 9.7 | 19.54 | 19 |
+| | 2.11 | 3.92 | 5.71 | 68.7 | 9.7 | 20.01 | 19 |
+
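+As a rough back-of-the-envelope check on the memory column (our own arithmetic; the parameter count is approximate and the makeup of the remainder is an assumption), ~2 bits per weight accounts for most, but not all, of the reported footprint:
+
+```python
+# Rough memory estimate for LLaMA-2 7B at 2.02 bits per weight.
+params = 6.74e9            # approximate LLaMA-2 7B parameter count
+bits_per_weight = 2.02
+weight_gb = params * bits_per_weight / 8 / 1e9
+print(f"quantized weights: ~{weight_gb:.2f} GB")  # ~1.70 GB
+# The reported 2.28 GB additionally covers codebooks, embeddings,
+# and other tensors kept in higher precision.
+```
+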
+---
+
## Road Map
- [ ] Merge the quantization algorithm into the public repository.
- [ ] Submit the VPTQ method to various inference frameworks (e.g., vLLM, llama.cpp, exllama).
diff --git a/notebooks/vptq_example.ipynb b/notebooks/vptq_example.ipynb
index c55542e..39c9630 100644
--- a/notebooks/vptq_example.ipynb
+++ b/notebooks/vptq_example.ipynb
@@ -15,13 +15,13 @@
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "Do2VPIj93_EE"
+ },
"source": [
"## Install VPTQ package and requirements\n",
"The latest transformers and accelerate is essential."
- ],
- "metadata": {
- "id": "Do2VPIj93_EE"
- }
+ ]
},
{
"cell_type": "code",
@@ -77,29 +77,29 @@
},
"outputs": [
{
- "output_type": "stream",
"name": "stderr",
+ "output_type": "stream",
"text": [
"Replacing linear layers...: 100%|██████████| 399/399 [00:00<00:00, 1325.70it/s]\n"
]
},
{
- "output_type": "display_data",
"data": {
- "text/plain": [
- "Fetching 11 files: 0%| | 0/11 [00:00, ?it/s]"
- ],
"application/vnd.jupyter.widget-view+json": {
+ "model_id": "b19fd7475a9a4ff78387e55e102d70d7",
"version_major": 2,
- "version_minor": 0,
- "model_id": "b19fd7475a9a4ff78387e55e102d70d7"
- }
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Fetching 11 files: 0%| | 0/11 [00:00, ?it/s]"
+ ]
},
- "metadata": {}
+ "metadata": {},
+ "output_type": "display_data"
},
{
- "output_type": "stream",
"name": "stderr",
+ "output_type": "stream",
"text": [
"/usr/local/lib/python3.10/dist-packages/accelerate/utils/modeling.py:1589: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
" return torch.load(checkpoint_file, map_location=torch.device(\"cpu\"))\n"
@@ -118,12 +118,12 @@
},
{
"cell_type": "markdown",
- "source": [
- "## Inference example with text generation"
- ],
"metadata": {
"id": "8tGG2cdp5Rzt"
- }
+ },
+ "source": [
+ "## Inference example with text generation"
+ ]
},
{
"cell_type": "code",
@@ -137,8 +137,8 @@
},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"Explain: Do Not Go Gentle into That Good Night\n",
"\n",
@@ -154,56 +154,16 @@
},
{
"cell_type": "markdown",
- "source": [
- "## Measure token generation Speed"
- ],
- "metadata": {
- "id": "wAxK5x8v5cyw"
- }
- },
- {
- "cell_type": "code",
- "execution_count": 11,
"metadata": {
- "id": "dJyiSorVsEId",
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "outputId": "9f37c2a5-1fa8-4752-fb0d-4ef9ef9fa4f7"
+ "id": "Zh4vgbkL8fBx"
},
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "CPU times: user 1.26 s, sys: 15.6 ms, total: 1.28 s\n",
- "Wall time: 1.27 s\n"
- ]
- }
- ],
- "source": [
- "%%time\n",
- "inputs = tokenizer(\"hello \", return_tensors=\"pt\").to(\"cuda\")\n",
- "out = m.generate(**inputs, max_new_tokens=10, pad_token_id=2)"
- ]
- },
- {
- "cell_type": "markdown",
"source": [
"## Generate token in streaming mode"
- ],
- "metadata": {
- "id": "Zh4vgbkL8fBx"
- }
+ ]
},
{
"cell_type": "code",
- "source": [
- "inputs = tokenizer(\"share me a story,\", return_tensors=\"pt\").to(\"cuda\")\n",
- "\n",
- "streamer = transformers.TextStreamer(tokenizer)\n",
- "_ = m.generate(**inputs, streamer=streamer, max_new_tokens=60,pad_token_id=tokenizer.eos_token_id)"
- ],
+ "execution_count": 21,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -211,11 +171,10 @@
"id": "ea9zSTeq5sB2",
"outputId": "67eb4e3e-1502-4ae4-981c-718092c6bfe9"
},
- "execution_count": 21,
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"share me a story, please\n",
"Certainly! Here's a short story for you:\n",
@@ -225,16 +184,22 @@
"In the heart of a bustling city, there was a small, forgotten garden hidden behind a row of tall buildings. It was a place where the city's residents often forgot to visit, but it was a sanctuary for\n"
]
}
+ ],
+ "source": [
+ "inputs = tokenizer(\"share me a story,\", return_tensors=\"pt\").to(\"cuda\")\n",
+ "\n",
+ "streamer = transformers.TextStreamer(tokenizer)\n",
+ "_ = m.generate(**inputs, streamer=streamer, max_new_tokens=60,pad_token_id=tokenizer.eos_token_id)"
]
},
{
"cell_type": "code",
- "source": [],
+ "execution_count": null,
"metadata": {
"id": "54KJpMVB8pL2"
},
- "execution_count": null,
- "outputs": []
+ "outputs": [],
+ "source": []
}
],
"metadata": {
@@ -252,32 +217,25 @@
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
- "b19fd7475a9a4ff78387e55e102d70d7": {
+ "014f974a068a41e2a8ee4efa6574ce01": {
"model_module": "@jupyter-widgets/controls",
- "model_name": "HBoxModel",
"model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
"state": {
- "_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
- "_model_name": "HBoxModel",
+ "_model_name": "DescriptionStyleModel",
"_view_count": null,
- "_view_module": "@jupyter-widgets/controls",
- "_view_module_version": "1.5.0",
- "_view_name": "HBoxView",
- "box_style": "",
- "children": [
- "IPY_MODEL_9983dd28235948808ea7f893a8dd8e38",
- "IPY_MODEL_81458c21349d4ec4b78cd4bd77d81010",
- "IPY_MODEL_1f9811fac71a43ff97fd6fa5a751ff55"
- ],
- "layout": "IPY_MODEL_f1549ea8ee0649c899568a4a364762ba"
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
}
},
- "9983dd28235948808ea7f893a8dd8e38": {
+ "1f9811fac71a43ff97fd6fa5a751ff55": {
"model_module": "@jupyter-widgets/controls",
- "model_name": "HTMLModel",
"model_module_version": "1.5.0",
+ "model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
@@ -289,61 +247,31 @@
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
- "layout": "IPY_MODEL_2acf41455ccc4ab6ac1ef310976b2eff",
+ "layout": "IPY_MODEL_ba602363b88f45ffb302a6cc5da0b57d",
"placeholder": "",
- "style": "IPY_MODEL_014f974a068a41e2a8ee4efa6574ce01",
- "value": "Fetching 11 files: 100%"
- }
- },
- "81458c21349d4ec4b78cd4bd77d81010": {
- "model_module": "@jupyter-widgets/controls",
- "model_name": "FloatProgressModel",
- "model_module_version": "1.5.0",
- "state": {
- "_dom_classes": [],
- "_model_module": "@jupyter-widgets/controls",
- "_model_module_version": "1.5.0",
- "_model_name": "FloatProgressModel",
- "_view_count": null,
- "_view_module": "@jupyter-widgets/controls",
- "_view_module_version": "1.5.0",
- "_view_name": "ProgressView",
- "bar_style": "success",
- "description": "",
- "description_tooltip": null,
- "layout": "IPY_MODEL_f7d3edf25e844d51b993ef3eb3f95b1d",
- "max": 11,
- "min": 0,
- "orientation": "horizontal",
- "style": "IPY_MODEL_cdd40ba6109a401592f1c3df004908b6",
- "value": 11
+ "style": "IPY_MODEL_27e99e0950e949c186656def4d1a6f6f",
+ "value": " 11/11 [00:00<00:00, 292.30it/s]"
}
},
- "1f9811fac71a43ff97fd6fa5a751ff55": {
+ "27e99e0950e949c186656def4d1a6f6f": {
"model_module": "@jupyter-widgets/controls",
- "model_name": "HTMLModel",
"model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
"state": {
- "_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
- "_model_name": "HTMLModel",
+ "_model_name": "DescriptionStyleModel",
"_view_count": null,
- "_view_module": "@jupyter-widgets/controls",
- "_view_module_version": "1.5.0",
- "_view_name": "HTMLView",
- "description": "",
- "description_tooltip": null,
- "layout": "IPY_MODEL_ba602363b88f45ffb302a6cc5da0b57d",
- "placeholder": "",
- "style": "IPY_MODEL_27e99e0950e949c186656def4d1a6f6f",
- "value": " 11/11 [00:00<00:00, 292.30it/s]"
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
}
},
- "f1549ea8ee0649c899568a4a364762ba": {
+ "2acf41455ccc4ab6ac1ef310976b2eff": {
"model_module": "@jupyter-widgets/base",
- "model_name": "LayoutModel",
"model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
@@ -392,10 +320,77 @@
"width": null
}
},
- "2acf41455ccc4ab6ac1ef310976b2eff": {
+ "81458c21349d4ec4b78cd4bd77d81010": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "FloatProgressModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "FloatProgressModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "ProgressView",
+ "bar_style": "success",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_f7d3edf25e844d51b993ef3eb3f95b1d",
+ "max": 11,
+ "min": 0,
+ "orientation": "horizontal",
+ "style": "IPY_MODEL_cdd40ba6109a401592f1c3df004908b6",
+ "value": 11
+ }
+ },
+ "9983dd28235948808ea7f893a8dd8e38": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "HTMLModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "HTMLModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "HTMLView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_2acf41455ccc4ab6ac1ef310976b2eff",
+ "placeholder": "",
+ "style": "IPY_MODEL_014f974a068a41e2a8ee4efa6574ce01",
+ "value": "Fetching 11 files: 100%"
+ }
+ },
+ "b19fd7475a9a4ff78387e55e102d70d7": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "HBoxModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "HBoxModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "HBoxView",
+ "box_style": "",
+ "children": [
+ "IPY_MODEL_9983dd28235948808ea7f893a8dd8e38",
+ "IPY_MODEL_81458c21349d4ec4b78cd4bd77d81010",
+ "IPY_MODEL_1f9811fac71a43ff97fd6fa5a751ff55"
+ ],
+ "layout": "IPY_MODEL_f1549ea8ee0649c899568a4a364762ba"
+ }
+ },
+ "ba602363b88f45ffb302a6cc5da0b57d": {
"model_module": "@jupyter-widgets/base",
- "model_name": "LayoutModel",
"model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
@@ -444,25 +439,26 @@
"width": null
}
},
- "014f974a068a41e2a8ee4efa6574ce01": {
+ "cdd40ba6109a401592f1c3df004908b6": {
"model_module": "@jupyter-widgets/controls",
- "model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
+ "model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
- "_model_name": "DescriptionStyleModel",
+ "_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
+ "bar_color": null,
"description_width": ""
}
},
- "f7d3edf25e844d51b993ef3eb3f95b1d": {
+ "f1549ea8ee0649c899568a4a364762ba": {
"model_module": "@jupyter-widgets/base",
- "model_name": "LayoutModel",
"model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
@@ -511,26 +507,10 @@
"width": null
}
},
- "cdd40ba6109a401592f1c3df004908b6": {
- "model_module": "@jupyter-widgets/controls",
- "model_name": "ProgressStyleModel",
- "model_module_version": "1.5.0",
- "state": {
- "_model_module": "@jupyter-widgets/controls",
- "_model_module_version": "1.5.0",
- "_model_name": "ProgressStyleModel",
- "_view_count": null,
- "_view_module": "@jupyter-widgets/base",
- "_view_module_version": "1.2.0",
- "_view_name": "StyleView",
- "bar_color": null,
- "description_width": ""
- }
- },
- "ba602363b88f45ffb302a6cc5da0b57d": {
+ "f7d3edf25e844d51b993ef3eb3f95b1d": {
"model_module": "@jupyter-widgets/base",
- "model_name": "LayoutModel",
"model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
@@ -578,25 +558,10 @@
"visibility": null,
"width": null
}
- },
- "27e99e0950e949c186656def4d1a6f6f": {
- "model_module": "@jupyter-widgets/controls",
- "model_name": "DescriptionStyleModel",
- "model_module_version": "1.5.0",
- "state": {
- "_model_module": "@jupyter-widgets/controls",
- "_model_module_version": "1.5.0",
- "_model_name": "DescriptionStyleModel",
- "_view_count": null,
- "_view_module": "@jupyter-widgets/base",
- "_view_module_version": "1.2.0",
- "_view_name": "StyleView",
- "description_width": ""
- }
}
}
}
},
"nbformat": 4,
"nbformat_minor": 0
-}
\ No newline at end of file
+}