diff --git a/README.md b/README.md
index 3d766d95fab..435246e616d 100644
--- a/README.md
+++ b/README.md
@@ -33,11 +33,11 @@ learning frameworks.
### Post-Training Compression Algorithms
-| Compression algorithm | OpenVINO | PyTorch | TensorFlow | ONNX |
-| :------------------------------------------------------------------------------------------------------- | :-------: | :-------: | :-----------: | :-----------: |
-| [Post-Training Quantization](./docs/usage/post_training_compression/post_training_quantization/Usage.md) | Supported | Supported | Supported | Supported |
-| [Weights Compression](./docs/usage/post_training_compression/weights_compression/Usage.md) | Supported | Supported | Not supported | Not supported |
-| [Activation Sparsity](./nncf/experimental/torch/sparsify_activations/ActivationSparsity.md) | Not supported | Experimental |Not supported| Not supported |
+| Compression algorithm | OpenVINO | PyTorch | TorchFX | TensorFlow | ONNX |
+| :------------------------------------------------------------------------------------------------------- | :-------: | :-------: | :-----------: | :-----------: | :-----------: |
+| [Post-Training Quantization](./docs/usage/post_training_compression/post_training_quantization/Usage.md) | Supported | Supported | Experimental | Supported | Supported |
+| [Weights Compression](./docs/usage/post_training_compression/weights_compression/Usage.md) | Supported | Supported | Not supported | Not supported | Not supported |
+| [Activation Sparsity](./nncf/experimental/torch/sparsify_activations/ActivationSparsity.md) | Not supported | Experimental | Not supported | Not supported | Not supported |
### Training-Time Compression Algorithms
@@ -138,6 +138,43 @@ quantized_model = nncf.quantize(model, calibration_dataset)
+TorchFX
+
+```python
+import nncf
+import torch.fx
+from torchvision import datasets, models, transforms
+from torch._export import capture_pre_autograd_graph
+from nncf.torch.dynamic_graph.patch_pytorch import unpatch_torch_operators
+
+# Remove NNCF's PyTorch operator patches first so the model can be traced with torch.fx
+unpatch_torch_operators()
+
+# Instantiate your uncompressed model
+model = models.mobilenet_v2()
+
+# Provide validation part of the dataset to collect statistics needed for the compression algorithm
+val_dataset = datasets.ImageFolder("/path", transform=transforms.Compose([transforms.ToTensor()]))
+dataset_loader = torch.utils.data.DataLoader(val_dataset)
+
+# Step 1: Initialize the transformation function
+def transform_fn(data_item):
+ images, _ = data_item
+ return images
+
+# Step 2: Initialize NNCF Dataset
+calibration_dataset = nncf.Dataset(dataset_loader, transform_fn)
+
+# Step 3: Export model to TorchFX
+input_shape = (1, 3, 224, 224)
+fx_model = capture_pre_autograd_graph(model.eval(), args=(torch.ones(input_shape),))
+
+# Step 4: Run the quantization pipeline
+quantized_fx_model = nncf.quantize(fx_model, calibration_dataset)
+```
+
TensorFlow
```python
diff --git a/docs/Algorithms.md b/docs/Algorithms.md
index d6e23cebfdb..16883c69ad9 100644
--- a/docs/Algorithms.md
+++ b/docs/Algorithms.md
@@ -2,7 +2,7 @@
## Post-training Compression
-- [Post Training Quantization (PTQ)](./usage/post_training_compression/post_training_quantization/Usage.md) (OpenVINO, PyTorch, ONNX, TensorFlow)
+- [Post Training Quantization (PTQ)](./usage/post_training_compression/post_training_quantization/Usage.md) (OpenVINO, PyTorch, TorchFX, ONNX, TensorFlow)
- Symmetric and asymmetric quantization modes
- Signed and unsigned
- Per tensor/per channel
diff --git a/docs/usage/post_training_compression/post_training_quantization/Usage.md b/docs/usage/post_training_compression/post_training_quantization/Usage.md
index 0a078a4399f..2e61cc6c220 100644
--- a/docs/usage/post_training_compression/post_training_quantization/Usage.md
+++ b/docs/usage/post_training_compression/post_training_quantization/Usage.md
@@ -51,7 +51,7 @@ Every backend has its own return value format for the data transformation functi
backend inference framework.
Below are the formats of data transformation function for each supported backend.
-PyTorch, TensorFlow, OpenVINO
+PyTorch, TorchFX, TensorFlow, OpenVINO
The return format of the data transformation function is directly the input tensors consumed by the model. \
_If you are not sure that your implementation of data transformation function is correct you can validate it by using the
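A minimal sketch of this contract, using plain Python stand-ins instead of real tensors (the batch contents and shapes here are hypothetical): for these backends the transformation function takes one item from the data source and returns exactly the input the model consumes, discarding anything else (such as labels).

```python
# For PyTorch, TorchFX, TensorFlow, and OpenVINO backends, the data
# transformation function returns the model inputs directly.
def transform_fn(data_item):
    images, _ = data_item  # drop the labels; the model only consumes images
    return images

# Hypothetical data item: (image_batch, label) as produced by a data loader
batch = ([0.1, 0.2, 0.3], 7)
print(transform_fn(batch))  # only the image batch is passed to the model
```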