2 changes: 1 addition & 1 deletion .github/scripts/validate_binaries.sh
@@ -30,7 +30,7 @@ conda run -n "${CONDA_ENV}" python --version

# Install pytorch, torchrec and fbgemm as per
# installation instructions on following page
-# https://github.com/pytorch/torchrec#installations
+# https://github.com/meta-pytorch/torchrec#installations


# figure out CUDA VERSION
6 changes: 3 additions & 3 deletions .github/workflows/build-wheels-linux.yml
@@ -35,7 +35,7 @@ jobs:
- name: Checkout torchrec repository
uses: actions/checkout@v4
with:
-repository: pytorch/torchrec
+repository: meta-pytorch/torchrec
- name: Filter Generated Built Matrix
id: filter
env:
@@ -49,10 +49,10 @@ jobs:
echo "matrix=${MATRIX_BLOB}" >> "${GITHUB_OUTPUT}"
build:
needs: filter-matrix
-name: pytorch/torchrec
+name: meta-pytorch/torchrec
uses: pytorch/test-infra/.github/workflows/build_wheels_linux.yml@main
with:
-repository: pytorch/torchrec
+repository: meta-pytorch/torchrec
ref: ""
test-infra-repository: pytorch/test-infra
test-infra-ref: main
2 changes: 1 addition & 1 deletion .github/workflows/docs.yml
@@ -121,4 +121,4 @@ jobs:
s3-bucket: doc-previews
if-no-files-found: error
path: docs
-s3-prefix: pytorch/torchrec/${{ github.event.pull_request.number }}
+s3-prefix: meta-pytorch/torchrec/${{ github.event.pull_request.number }}
2 changes: 1 addition & 1 deletion .github/workflows/validate-binaries.yml
@@ -36,7 +36,7 @@ jobs:
package_type: "wheel"
os: "linux"
channel: ${{ inputs.channel }}
repository: "pytorch/torchrec"
repository: "meta-pytorch/torchrec"
smoke_test: "source ./.github/scripts/validate_binaries.sh"
with_cuda: enable
with_rocm: false
4 changes: 2 additions & 2 deletions README.MD
@@ -59,7 +59,7 @@ Check out the [Getting Started](https://pytorch.org/torchrec/setup-torchrec.html

2. Clone TorchRec.
```
-git clone --recursive https://github.com/pytorch/torchrec
+git clone --recursive https://github.com/meta-pytorch/torchrec
cd torchrec
```

@@ -108,7 +108,7 @@ Check out the [Getting Started](https://pytorch.org/torchrec/setup-torchrec.html

## Contributing

-See [CONTRIBUTING.md](https://github.com/pytorch/torchrec/blob/main/CONTRIBUTING.md) for details about contributing to TorchRec!
+See [CONTRIBUTING.md](https://github.com/meta-pytorch/torchrec/blob/main/CONTRIBUTING.md) for details about contributing to TorchRec!

## Citation

4 changes: 2 additions & 2 deletions benchmarks/README.md
@@ -4,7 +4,7 @@ We evaluate the performance of two EmbeddingBagCollection modules:

1. `EmbeddingBagCollection` (EBC) ([code](https://pytorch.org/torchrec/torchrec.modules.html#torchrec.modules.embedding_modules.EmbeddingBagCollection)): a simple module backed by [torch.nn.EmbeddingBag](https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html).

-2. `FusedEmbeddingBagCollection` (Fused EBC) ([code](https://github.com/pytorch/torchrec/blob/main/torchrec/modules/fused_embedding_bag_collection.py#L299)): a module backed by [FBGEMM](https://github.com/pytorch/FBGEMM) kernels which enables more efficient, high-performance operations on embedding tables. It is equipped with a fused optimizer, and UVM caching/management that makes much larger memory available for GPUs.
+2. `FusedEmbeddingBagCollection` (Fused EBC) ([code](https://github.com/meta-pytorch/torchrec/blob/main/torchrec/modules/fused_embedding_bag_collection.py#L299)): a module backed by [FBGEMM](https://github.com/pytorch/FBGEMM) kernels which enables more efficient, high-performance operations on embedding tables. It is equipped with a fused optimizer, and UVM caching/management that makes much larger memory available for GPUs.
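
As background for the comparison above, here is a minimal sketch of constructing and calling a plain `EmbeddingBagCollection`; the table size, feature name, and input ids are illustrative and not taken from the benchmark itself. The Fused EBC wires equivalent tables to FBGEMM kernels and a fused optimizer, which is what the benchmark measures.

```python
import torch
from torchrec.modules.embedding_configs import EmbeddingBagConfig
from torchrec.modules.embedding_modules import EmbeddingBagCollection
from torchrec.sparse.jagged_tensor import KeyedJaggedTensor

# One small table with a single pooled feature (sizes are illustrative).
ebc = EmbeddingBagCollection(
    tables=[
        EmbeddingBagConfig(
            name="t1",
            embedding_dim=16,
            num_embeddings=1000,
            feature_names=["f1"],
        )
    ],
    device=torch.device("cpu"),
)

# Two samples for feature "f1": ids [0, 1] and [2].
features = KeyedJaggedTensor.from_lengths_sync(
    keys=["f1"],
    values=torch.tensor([0, 1, 2]),
    lengths=torch.tensor([2, 1]),
)

pooled = ebc(features)               # KeyedTensor of pooled embeddings
print(pooled.to_dict()["f1"].shape)  # torch.Size([2, 16])
```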


## Module architecture and running setup
@@ -24,7 +24,7 @@ Other setup includes:

## How to run

-After the installation of Torchrec (see "Binary" in the "Installation" section, [link](https://github.com/pytorch/torchrec)), run the following command under the benchmark directory (/torchrec/torchrec/benchmarks):
+After the installation of Torchrec (see "Binary" in the "Installation" section, [link](https://github.com/meta-pytorch/torchrec)), run the following command under the benchmark directory (/torchrec/torchrec/benchmarks):

```
python ebc_benchmarks.py [--mode MODE] [--cpu_only]
2 changes: 1 addition & 1 deletion setup.py
@@ -92,7 +92,7 @@ def main(argv: List[str]) -> None:
description="TorchRec: Pytorch library for recommendation systems",
long_description=readme,
long_description_content_type="text/markdown",
url="https://github.com/pytorch/torchrec",
url="https://github.com/meta-pytorch/torchrec",
license="BSD-3",
keywords=[
"pytorch",
2 changes: 1 addition & 1 deletion torchrec/distributed/train_pipeline/pipeline_stage.py
@@ -449,7 +449,7 @@ def forward_hook(
) -> None:
# Note: tricky part - a bit delicate choreography between
# StagedPipeline and this class
-# (see https://github.com/pytorch/torchrec/pull/2239 for details)
+# (see https://github.com/meta-pytorch/torchrec/pull/2239 for details)
# wait_dist need to be called as post_forward hook
# at the end of the batch N, so that the data is awaited
# before start of the next batch.
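The choreography described in the comment above hinges on PyTorch's standard post-forward hook mechanism. A generic, minimal sketch of that mechanism follows; the module and hook names are illustrative and this is not TorchRec code.

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

def post_forward_hook(module: nn.Module, args, output) -> None:
    # Runs after forward() returns -- the point where a pipeline could
    # wait on in-flight work so data is ready before the next batch starts.
    print("forward done, output shape:", tuple(output.shape))

model = Toy()
handle = model.register_forward_hook(post_forward_hook)
model(torch.ones(2, 3))  # prints: forward done, output shape: (2, 3)
handle.remove()
```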
4 changes: 2 additions & 2 deletions torchrec/distributed/train_pipeline/runtime_forwards.py
@@ -76,7 +76,7 @@ def __call__(self, *input, **kwargs) -> Awaitable:
self._name in self._context.input_dist_tensors_requests
), f"Invalid PipelinedForward usage, input_dist of {self._name} is not available, probably consumed by others"
# we made a basic assumption that an embedding module (EBC, EC, etc.) should only be evoked only
-# once in the model's forward pass. For more details: https://github.com/pytorch/torchrec/pull/3294
+# once in the model's forward pass. For more details: https://github.com/meta-pytorch/torchrec/pull/3294
request = self._context.input_dist_tensors_requests.pop(self._name)
assert isinstance(request, Awaitable)
with record_function("## wait_sparse_data_dist ##"):
@@ -125,7 +125,7 @@ def __call__(
self._name in self._context.embedding_a2a_requests
), f"Invalid PipelinedForward usage, input_dist of {self._name} is not available, probably consumed by others"
# we made a basic assumption that an embedding module (EBC, EC, etc.) should only be evoked only
-# once in the model's forward pass. For more details: https://github.com/pytorch/torchrec/pull/3294
+# once in the model's forward pass. For more details: https://github.com/meta-pytorch/torchrec/pull/3294

ctx = self._context.module_contexts.pop(self._name)
cur_stream = torch.get_device_module(self._device).current_stream()
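The assertion-plus-`pop` pattern above is why each embedding module may be invoked at most once per forward pass: its pending request is consumed on first use. A generic sketch of that contract, with illustrative names only:

```python
from typing import Any, Dict

class OnceOnlyRequests:
    """Hands out each named request exactly once per batch."""

    def __init__(self) -> None:
        self._requests: Dict[str, Any] = {}

    def put(self, name: str, request: Any) -> None:
        self._requests[name] = request

    def take(self, name: str) -> Any:
        assert (
            name in self._requests
        ), f"request for {name} is not available, probably consumed by others"
        return self._requests.pop(name)  # consumed: a second take() asserts

reqs = OnceOnlyRequests()
reqs.put("ebc", object())
first = reqs.take("ebc")   # ok
# reqs.take("ebc")         # would fail the assert -- already consumed
```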
4 changes: 2 additions & 2 deletions torchrec/distributed/train_pipeline/train_pipelines.py
@@ -521,7 +521,7 @@ def detach(self) -> torch.nn.Module:
Detaches the model from sparse data dist (SDD) pipeline. A user might want to get
the original model back after training. The original model.forward was previously
modified by the train pipeline. for more please see:
-https://github.com/pytorch/torchrec/pull/2076
+https://github.com/meta-pytorch/torchrec/pull/2076

To use the pipeline after detaching the model, pipeline.attach(model)
needs to be called.
@@ -547,7 +547,7 @@ def attach(
"""
should be used with detach function. these functions should only be used from user code,
when user want to switch the train pipeline. for more please see:
-https://github.com/pytorch/torchrec/pull/2076
+https://github.com/meta-pytorch/torchrec/pull/2076
"""
if model:
self._model = model
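The detach/attach docstrings above describe a user-facing workflow; below is a minimal usage sketch under the assumption that `pipeline` is an already-constructed TorchRec train pipeline wrapping `model` (construction details are not shown in this diff).

```python
# Recover the original, un-pipelined module, e.g. for eval or export.
original_model = pipeline.detach()

# ... use original_model.forward() directly here ...

# Re-attach before resuming pipelined training.
pipeline.attach(original_model)
```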
2 changes: 1 addition & 1 deletion torchrec/inference/README.md
@@ -40,7 +40,7 @@ export FBGEMM_LIB=""
Here, we generate the DLRM model in Torchscript and save it for model loading later on.

```
-git clone https://github.com/pytorch/torchrec.git
+git clone https://github.com/meta-pytorch/torchrec.git

cd ~/torchrec/torchrec/inference/
python3 dlrm_packager.py --output_path /tmp/model.pt