feat: Optimize hub.py download #1022
Conversation
There are some changes that do not conform to C++ style guidelines:
diff --git a/workspace/core/lowering/register_trt_placeholder_ops.cpp b/tmp/changes.txt
index 5ba8171..17d7d3f 100644
--- a/workspace/core/lowering/register_trt_placeholder_ops.cpp
+++ b/tmp/changes.txt
@@ -10,7 +10,10 @@ c10::AliasAnalysisKind aliasAnalysisFromSchema() {
RegisterOperators trt_placeholder_ops_reg({
/// Op marks a Tensor to be conveted from an Torch Tensor
/// to a TRT constant Tensor
- Operator("trt::const(Tensor val) -> Tensor", [](Stack& stack) { /*noop*/ }, aliasAnalysisFromSchema()),
+ Operator(
+ "trt::const(Tensor val) -> Tensor",
+ [](Stack& stack) { /*noop*/ },
+ aliasAnalysisFromSchema()),
});
} // namespace jit
ERROR: Some files do not conform to style guidelines
Code conforms to Python style guidelines
tests/modules/hub.py (Outdated)
snapshot_file = 'model_snapshot.txt'
skip_download = False

# If model repository already setup
I feel like what this should be is a list of models that have been downloaded, and we check whether the model we are downloading is in that list, because we can always add more. This would also require someone to delete the file to re-download.
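The suggestion above can be sketched roughly like this (a minimal illustration only, not the PR's actual code; the names `RECORD_FILE`, `load_downloaded`, `needs_download`, and `record_download` are made up for the example):

```python
import json
import os

RECORD_FILE = "downloaded_models.json"  # hypothetical tracking file


def load_downloaded():
    # Returns the list of model names recorded as already downloaded.
    if not os.path.exists(RECORD_FILE):
        return []
    with open(RECORD_FILE, "r") as f:
        return json.load(f)


def needs_download(name, downloaded):
    # A model is fetched only if it is not in the recorded list;
    # deleting RECORD_FILE forces everything to be re-downloaded.
    return name not in downloaded


def record_download(name, downloaded):
    # Appends the model to the list and persists it to disk.
    if name not in downloaded:
        downloaded.append(name)
    with open(RECORD_FILE, "w") as f:
        json.dump(downloaded, f)
```

New models can then be added to the test matrix without invalidating earlier downloads, since membership is checked per model rather than via a single boolean flag.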
Updated the code.
@narendasan: I refactored the model download script and added tracking of downloaded files.
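The tracking described here (a JSON manifest keyed by torch version, re-downloading everything on a version mismatch) can be sketched as follows. This is a simplified illustration under assumed names (`MANIFEST_FILE`, `load_manifest`, `save_manifest`, `record`), not the PR's exact code:

```python
import json
import os

MANIFEST_FILE = "model_manifest.json"  # hypothetical path
torch_version = "1.11.0"  # the real script would use torch.__version__


def load_manifest():
    # Returns (manifest, version_matches). A missing or empty manifest,
    # or a torch version mismatch, signals that all models need re-download.
    if not os.path.exists(MANIFEST_FILE) or os.stat(MANIFEST_FILE).st_size == 0:
        return {"version": torch_version}, False
    with open(MANIFEST_FILE, "r") as f:
        manifest = json.load(f)
    matches = manifest.get("version") == torch_version
    manifest["version"] = torch_version  # stamp the current version
    return manifest, matches


def save_manifest(manifest):
    # Persists the manifest so the next run can skip completed downloads.
    with open(MANIFEST_FILE, "w") as f:
        f.write(json.dumps(manifest))


def record(manifest, name, files):
    # Associates a model name with the serialized files produced for it.
    manifest[name] = files
```

On a fresh checkout `load_manifest` reports a mismatch, so every model is fetched once; subsequent runs with the same torch version skip models already present in the manifest.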
There are some changes that do not conform to Python style guidelines:
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/hub.py (original)
+++ /workspace/tests/modules/hub.py (reformatted)
@@ -88,6 +88,7 @@
def forward(self, x):
return F.adaptive_avg_pool2d(x, (5, 5))
+
# Sample Nested Module (for module-level fallback testing)
class ModuleFallbackSub(nn.Module):
@@ -98,6 +99,7 @@
def forward(self, x):
return self.relu(self.conv(x))
+
class ModuleFallbackMain(nn.Module):
@@ -110,6 +112,7 @@
def forward(self, x):
return self.relu(self.conv(self.layer1(x)))
+
# Sample Looping Modules (for loop fallback testing)
class LoopFallbackEval(nn.Module):
@@ -122,6 +125,7 @@
add_list = torch.cat((add_list, torch.tensor([x.shape[1]]).to(x.device)), 0)
return x + add_list
+
class LoopFallbackNoEval(nn.Module):
def __init__(self):
@@ -131,6 +135,7 @@
for _ in range(x.shape[1]):
x = x + torch.ones_like(x)
return x
+
# Sample Conditional Model (for testing partitioning and fallback in conditionals)
class FallbackIf(torch.nn.Module):
@@ -156,21 +161,23 @@
x = self.conv1(x)
return x
+
class ModelManifest:
+
def __init__(self):
self.version_matches = False
if not os.path.exists(MANIFEST_FILE) or os.stat(MANIFEST_FILE).st_size == 0:
self.manifest = {}
- self.manifest.update({'version' : torch_version})
+ self.manifest.update({'version': torch_version})
else:
with open(MANIFEST_FILE, 'r') as f:
self.manifest = json.load(f)
if self.manifest['version'] == torch_version:
self.version_matches = True
else:
- print("Torch version: {} mismatches with manifest's version: {}. Re-downloading all models".format(torch_version, self.manifest['version']))
+ print("Torch version: {} mismatches with manifest's version: {}. Re-downloading all models".format(
+ torch_version, self.manifest['version']))
self.manifest["version"] = torch_version
-
def download(self, models):
if self.version_matches:
@@ -194,13 +201,13 @@
record = json.dumps(manifest_record)
f.write(record)
f.truncate()
-
+
def get_manifest(self):
return self.manifest
-
+
def if_version_matches(self):
return self.version_matches
-
+
def get(self, n, m):
print("Downloading {}".format(n))
m["model"] = m["model"].eval().cuda()
@@ -214,8 +221,9 @@
if m["path"] == "both" or m["path"] == "script":
script_model = torch.jit.script(m["model"])
torch.jit.save(script_model, script_filename)
-
- self.manifest.update({n : [traced_filename, script_filename]})
+
+ self.manifest.update({n: [traced_filename, script_filename]})
+
def export_model(model, model_name, version_matches):
if version_matches and os.path.exists(model_name):
@@ -225,7 +233,7 @@
torch.jit.save(model, model_name)
-def generate_custom_models(manifest, matches = False):
+def generate_custom_models(manifest, matches=False):
# Pool
model = Pool().eval().cuda()
x = torch.ones([1, 3, 10, 10]).cuda()
@@ -252,7 +260,8 @@
loop_fallback_no_eval_script_model = torch.jit.script(loop_fallback_no_eval_model)
scripted_loop_fallback_no_eval_name = "loop_fallback_no_eval_scripted.jit.pt"
export_model(loop_fallback_no_eval_script_model, scripted_loop_fallback_no_eval_name, matches)
- manifest.update({"torchtrt_loop_fallback_no_eval": [scripted_loop_fallback_name, scripted_loop_fallback_no_eval_name]})
+ manifest.update(
+ {"torchtrt_loop_fallback_no_eval": [scripted_loop_fallback_name, scripted_loop_fallback_no_eval_name]})
# Conditional
conditional_model = FallbackIf().eval().cuda()
@@ -289,7 +298,7 @@
traced_bert_uncased_name = "bert_case_uncased_traced.jit.pt"
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
export_model(traced_model, traced_bert_uncased_name, matches)
- manifest.update({"torchtrt_bert_case_uncased" : [traced_bert_uncased_name]})
+ manifest.update({"torchtrt_bert_case_uncased": [traced_bert_uncased_name]})
manifest = ModelManifest()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
ERROR: Some files do not conform to style guidelines
There are some changes that do not conform to Python style guidelines:
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/hub.py (original)
+++ /workspace/tests/modules/hub.py (reformatted)
@@ -88,6 +88,7 @@
def forward(self, x):
return F.adaptive_avg_pool2d(x, (5, 5))
+
# Sample Nested Module (for module-level fallback testing)
class ModuleFallbackSub(nn.Module):
@@ -98,6 +99,7 @@
def forward(self, x):
return self.relu(self.conv(x))
+
class ModuleFallbackMain(nn.Module):
@@ -110,6 +112,7 @@
def forward(self, x):
return self.relu(self.conv(self.layer1(x)))
+
# Sample Looping Modules (for loop fallback testing)
class LoopFallbackEval(nn.Module):
@@ -122,6 +125,7 @@
add_list = torch.cat((add_list, torch.tensor([x.shape[1]]).to(x.device)), 0)
return x + add_list
+
class LoopFallbackNoEval(nn.Module):
def __init__(self):
@@ -131,6 +135,7 @@
for _ in range(x.shape[1]):
x = x + torch.ones_like(x)
return x
+
# Sample Conditional Model (for testing partitioning and fallback in conditionals)
class FallbackIf(torch.nn.Module):
@@ -156,21 +161,23 @@
x = self.conv1(x)
return x
+
class ModelManifest:
+
def __init__(self):
self.version_matches = False
if not os.path.exists(MANIFEST_FILE) or os.stat(MANIFEST_FILE).st_size == 0:
self.manifest = {}
- self.manifest.update({'version' : torch_version})
+ self.manifest.update({'version': torch_version})
else:
with open(MANIFEST_FILE, 'r') as f:
self.manifest = json.load(f)
if self.manifest['version'] == torch_version:
self.version_matches = True
else:
- print("Torch version: {} mismatches with manifest's version: {}. Re-downloading all models".format(torch_version, self.manifest['version']))
+ print("Torch version: {} mismatches with manifest's version: {}. Re-downloading all models".format(
+ torch_version, self.manifest['version']))
self.manifest["version"] = torch_version
-
def download(self, models):
if self.version_matches:
@@ -194,13 +201,13 @@
record = json.dumps(manifest_record)
f.write(record)
f.truncate()
-
+
def get_manifest(self):
return self.manifest
-
+
def if_version_matches(self):
return self.version_matches
-
+
def get(self, n, m):
print("Downloading {}".format(n))
m["model"] = m["model"].eval().cuda()
@@ -214,10 +221,11 @@
if m["path"] == "both" or m["path"] == "script":
script_model = torch.jit.script(m["model"])
torch.jit.save(script_model, script_filename)
-
- self.manifest.update({n : [traced_filename, script_filename]})
-
-def generate_custom_models(manifest, version_matches = False):
+
+ self.manifest.update({n: [traced_filename, script_filename]})
+
+
+def generate_custom_models(manifest, version_matches=False):
# Pool
traced_pool_name = "pooling_traced.jit.pt"
if not (version_matches and os.path.exists(traced_pool_name)):
@@ -248,7 +256,8 @@
loop_fallback_no_eval_model = LoopFallbackNoEval().eval().cuda()
loop_fallback_no_eval_script_model = torch.jit.script(loop_fallback_no_eval_model)
torch.jit.save(loop_fallback_no_eval_script_model, scripted_loop_fallback_no_eval_name)
- manifest.update({"torchtrt_loop_fallback_no_eval": [scripted_loop_fallback_name, scripted_loop_fallback_no_eval_name]})
+ manifest.update(
+ {"torchtrt_loop_fallback_no_eval": [scripted_loop_fallback_name, scripted_loop_fallback_no_eval_name]})
# Conditional
scripted_conditional_name = "conditional_scripted.jit.pt"
@@ -287,7 +296,7 @@
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, traced_bert_uncased_name)
- manifest.update({"torchtrt_bert_case_uncased" : [traced_bert_uncased_name]})
+ manifest.update({"torchtrt_bert_case_uncased": [traced_bert_uncased_name]})
manifest = ModelManifest()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
ERROR: Some files do not conform to style guidelines
Force-pushed 08b9853 to 6d149bc
Code conforms to Python style guidelines
There are some changes that do not conform to C++ style guidelines:
diff --git a/workspace/core/partitioning/partitioning.cpp b/tmp/changes.txt
old mode 100755
new mode 100644
diff --git a/workspace/core/lowering/register_trt_placeholder_ops.cpp b/tmp/changes.txt
index 5ba8171..17d7d3f 100644
--- a/workspace/core/lowering/register_trt_placeholder_ops.cpp
+++ b/tmp/changes.txt
@@ -10,7 +10,10 @@ c10::AliasAnalysisKind aliasAnalysisFromSchema() {
RegisterOperators trt_placeholder_ops_reg({
/// Op marks a Tensor to be conveted from an Torch Tensor
/// to a TRT constant Tensor
- Operator("trt::const(Tensor val) -> Tensor", [](Stack& stack) { /*noop*/ }, aliasAnalysisFromSchema()),
+ Operator(
+ "trt::const(Tensor val) -> Tensor",
+ [](Stack& stack) { /*noop*/ },
+ aliasAnalysisFromSchema()),
});
} // namespace jit
ERROR: Some files do not conform to style guidelines
There are some changes that do not conform to C++ style guidelines:
diff --git a/workspace/core/partitioning/partitioning.cpp b/tmp/changes.txt
old mode 100755
new mode 100644
ERROR: Some files do not conform to style guidelines
Code conforms to Python style guidelines
/blossom-ci
1 similar comment
/blossom-ci
👎 Promotion blocked, new vulnerability found. Vulnerability report
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
…file doesn't exists
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
Force-pushed eb6cf1a to 2e1764a
There are some changes that do not conform to Python style guidelines:
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/custom_models.py (original)
+++ /workspace/tests/modules/custom_models.py (reformatted)
@@ -2,6 +2,7 @@
import torch.nn as nn
from transformers import BertModel, BertTokenizer, BertConfig
import torch.nn.functional as F
+
# Sample Pool Model (for testing plugin serialization)
class Pool(nn.Module):
@@ -84,5 +85,3 @@
x = self.log_sig(x)
x = self.conv1(x)
return x
-
-
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/modules/custom_models.py
--- /workspace/tests/modules/hub.py (original)
+++ /workspace/tests/modules/hub.py (reformatted)
@@ -126,35 +126,36 @@
name = m["model"]
config = BertConfig(
- vocab_size_or_config_json_file=32000,
- hidden_size=768,
- num_hidden_layers=12,
- num_attention_heads=12,
- intermediate_size=3072,
- torchscript=True,
- )
+ vocab_size_or_config_json_file=32000,
+ hidden_size=768,
+ num_hidden_layers=12,
+ num_attention_heads=12,
+ intermediate_size=3072,
+ torchscript=True,
+ )
m["model"] = BertModel(config)
m["model"].eval()
m["model"] = BertModel.from_pretrained(name, torchscript=True)
traced_model = torch.jit.trace(m["model"], x)
torch.jit.save(traced_model, traced_filename)
- manifest.update({n : [traced_filename]})
+ manifest.update({n: [traced_filename]})
else:
m["model"] = m["model"].eval().cuda()
if m["path"] == "both" or m["path"] == "trace":
trace_model = torch.jit.trace(m["model"], [x])
torch.jit.save(trace_model, traced_filename)
- manifest.update({n : [traced_filename]})
+ manifest.update({n: [traced_filename]})
if m["path"] == "both" or m["path"] == "script":
script_model = torch.jit.script(m["model"])
torch.jit.save(script_model, script_filename)
if n in manifest.keys():
files = list(manifest[n]) if type(manifest[n]) != list else manifest[n]
files.append(script_filename)
- manifest.update({n : files})
+ manifest.update({n: files})
else:
manifest.update({n: [script_filename]})
return manifest
+
def download_models(version_matches, manifest):
# Download all models if torch version is different than model version
@@ -169,8 +170,8 @@
if (m["path"] == "both" and os.path.exists(scripted_filename) and os.path.exists(traced_filename)) or \
(m["path"] == "script" and os.path.exists(scripted_filename)) or \
(m["path"] == "trace" and os.path.exists(traced_filename)):
- print("Skipping {} ".format(n))
- continue
+ print("Skipping {} ".format(n))
+ continue
manifest = get(n, m, manifest)
@@ -208,4 +209,5 @@
f.write(record)
f.truncate()
+
main()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
ERROR: Some files do not conform to style guidelines
Code conforms to C++ style guidelines
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
There are some changes that do not conform to Python style guidelines:
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/hub.py (original)
+++ /workspace/tests/modules/hub.py (reformatted)
@@ -111,23 +111,24 @@
if n == "bert-base-uncased":
traced_model = m["model"]
torch.jit.save(traced_model, traced_filename)
- manifest.update({n : [traced_filename]})
+ manifest.update({n: [traced_filename]})
else:
m["model"] = m["model"].eval().cuda()
if m["path"] == "both" or m["path"] == "trace":
trace_model = torch.jit.trace(m["model"], [x])
torch.jit.save(trace_model, traced_filename)
- manifest.update({n : [traced_filename]})
+ manifest.update({n: [traced_filename]})
if m["path"] == "both" or m["path"] == "script":
script_model = torch.jit.script(m["model"])
torch.jit.save(script_model, script_filename)
if n in manifest.keys():
files = list(manifest[n]) if type(manifest[n]) != list else manifest[n]
files.append(script_filename)
- manifest.update({n : files})
+ manifest.update({n: files})
else:
manifest.update({n: [script_filename]})
return manifest
+
def download_models(version_matches, manifest):
# Download all models if torch version is different than model version
@@ -142,8 +143,8 @@
if (m["path"] == "both" and os.path.exists(scripted_filename) and os.path.exists(traced_filename)) or \
(m["path"] == "script" and os.path.exists(scripted_filename)) or \
(m["path"] == "trace" and os.path.exists(traced_filename)):
- print("Skipping {} ".format(n))
- continue
+ print("Skipping {} ".format(n))
+ continue
manifest = get(n, m, manifest)
@@ -184,4 +185,5 @@
f.write(record)
f.truncate()
+
main()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
--- /workspace/tests/modules/custom_models.py (original)
+++ /workspace/tests/modules/custom_models.py (reformatted)
@@ -2,6 +2,7 @@
import torch.nn as nn
from transformers import BertModel, BertTokenizer, BertConfig
import torch.nn.functional as F
+
# Sample Pool Model (for testing plugin serialization)
class Pool(nn.Module):
@@ -98,16 +99,15 @@
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
config = BertConfig(
- vocab_size_or_config_json_file=32000,
- hidden_size=768,
- num_hidden_layers=12,
- num_attention_heads=12,
- intermediate_size=3072,
- torchscript=True,
- )
+ vocab_size_or_config_json_file=32000,
+ hidden_size=768,
+ num_hidden_layers=12,
+ num_attention_heads=12,
+ intermediate_size=3072,
+ torchscript=True,
+ )
model = BertModel(config)
model.eval()
model = BertModel.from_pretrained(model_name, torchscript=True)
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
return traced_model
-
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/modules/custom_models.py
ERROR: Some files do not conform to style guidelines
Code conforms to C++ style guidelines
There are some changes that do not conform to Python style guidelines:
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/custom_models.py (original)
+++ /workspace/tests/modules/custom_models.py (reformatted)
@@ -2,6 +2,7 @@
import torch.nn as nn
from transformers import BertModel, BertTokenizer, BertConfig
import torch.nn.functional as F
+
# Sample Pool Model (for testing plugin serialization)
class Pool(nn.Module):
@@ -98,16 +99,15 @@
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
config = BertConfig(
- vocab_size_or_config_json_file=32000,
- hidden_size=768,
- num_hidden_layers=12,
- num_attention_heads=12,
- intermediate_size=3072,
- torchscript=True,
- )
+ vocab_size_or_config_json_file=32000,
+ hidden_size=768,
+ num_hidden_layers=12,
+ num_attention_heads=12,
+ intermediate_size=3072,
+ torchscript=True,
+ )
model = BertModel(config)
model.eval()
model = BertModel.from_pretrained(model_name, torchscript=True)
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
return traced_model
-
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/modules/custom_models.py
--- /workspace/tests/modules/hub.py (original)
+++ /workspace/tests/modules/hub.py (reformatted)
@@ -111,23 +111,24 @@
if n == "bert-base-uncased":
traced_model = m["model"]
torch.jit.save(traced_model, traced_filename)
- manifest.update({n : [traced_filename]})
+ manifest.update({n: [traced_filename]})
else:
m["model"] = m["model"].eval().cuda()
if m["path"] == "both" or m["path"] == "trace":
trace_model = torch.jit.trace(m["model"], [x])
torch.jit.save(trace_model, traced_filename)
- manifest.update({n : [traced_filename]})
+ manifest.update({n: [traced_filename]})
if m["path"] == "both" or m["path"] == "script":
script_model = torch.jit.script(m["model"])
torch.jit.save(script_model, script_filename)
if n in manifest.keys():
files = list(manifest[n]) if type(manifest[n]) != list else manifest[n]
files.append(script_filename)
- manifest.update({n : files})
+ manifest.update({n: files})
else:
manifest.update({n: [script_filename]})
return manifest
+
def download_models(version_matches, manifest):
# Download all models if torch version is different than model version
@@ -142,8 +143,8 @@
if (m["path"] == "both" and os.path.exists(scripted_filename) and os.path.exists(traced_filename)) or \
(m["path"] == "script" and os.path.exists(scripted_filename)) or \
(m["path"] == "trace" and os.path.exists(traced_filename)):
- print("Skipping {} ".format(n))
- continue
+ print("Skipping {} ".format(n))
+ continue
manifest = get(n, m, manifest)
@@ -184,4 +185,5 @@
f.write(record)
f.truncate()
+
main()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
ERROR: Some files do not conform to style guidelines
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
Code conforms to C++ style guidelines
Code conforms to Python style guidelines
Signed-off-by: Naren Dasan <naren@narendasan.com> Signed-off-by: Naren Dasan <narens@nvidia.com>
Signed-off-by: Naren Dasan <naren@narendasan.com> Signed-off-by: Naren Dasan <narens@nvidia.com>
There are some changes that do not conform to Python style guidelines:
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/modules/custom_models.py
Reformatting /workspace/tests/modules/hub.py
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
--- /workspace/py/setup.py (original)
+++ /workspace/py/setup.py (reformatted)
@@ -242,8 +242,7 @@
dir_path + "/../bazel-TRTorch/external/tensorrt/include",
dir_path + "/../bazel-Torch-TensorRT/external/tensorrt/include",
dir_path + "/../bazel-TensorRT/external/tensorrt/include",
- dir_path + "/../bazel-tensorrt/external/tensorrt/include",
- dir_path + "/../"
+ dir_path + "/../bazel-tensorrt/external/tensorrt/include", dir_path + "/../"
],
extra_compile_args=[
"-Wno-deprecated",
Reformatting /workspace/py/setup.py
ERROR: Some files do not conform to style guidelines
Signed-off-by: Naren Dasan <naren@narendasan.com> Signed-off-by: Naren Dasan <narens@nvidia.com>
There are some changes that do not conform to C++ style guidelines:
diff --git a/workspace/tests/core/lowering/test_module_fallback_passes.cpp b/tmp/changes.txt
index d57b8c9..d2ea9dc 100644
--- a/workspace/tests/core/lowering/test_module_fallback_passes.cpp
+++ b/tmp/changes.txt
@@ -20,7 +20,6 @@ TEST(Lowering, NotateModuleForFallbackWorksCorrectly) {
std::unordered_set<std::string> mods_to_mark;
mods_to_mark.insert("custom_models.ModuleFallbackSub");
-
torch_tensorrt::core::lowering::passes::NotateModuleForFallback(mod, "", "forward", mods_to_mark);
auto g = mod.get_method("forward").graph();
ERROR: Some files do not conform to style guidelines
Force-pushed 5ab0781 to 176b907
Code conforms to Python style guidelines
Code conforms to Python style guidelines
Code conforms to C++ style guidelines
Signed-off-by: Anurag Dixit a.dixit91@gmail.com
Description
Optimize hub.py to download the model repository only when required. Models are downloaded and deserialized only when the model_snapshot file is missing (first run) or the recorded version differs from the current one.
This PR reduces turnaround time for CI pipeline jobs.
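A hedged sketch of the gating logic described above; the snapshot file name, its JSON layout, and the function names are assumptions for illustration, not the PR's actual implementation:

```python
import json
import os
import tempfile


def should_download(snapshot_path, current_version):
    """Download/deserialize models only on first run or on a version change."""
    if not os.path.exists(snapshot_path):
        return True  # first run: no snapshot has been recorded yet
    with open(snapshot_path) as f:
        recorded = json.load(f).get("version")
    return recorded != current_version


def record_snapshot(snapshot_path, current_version):
    """Write the snapshot marker after a successful download."""
    with open(snapshot_path, "w") as f:
        json.dump({"version": current_version}, f)
```

In CI this means repeated jobs against the same framework version skip the expensive download-and-deserialize step entirely, which is where the turnaround-time savings come from.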
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: