triton.err
WARNING: Could not find any nv files on this host!
INFO: underlay of /etc/condor required more than 50 (105) bind mounts
13:4: not a valid test operator: (
13:4: not a valid test operator: 450.216.04
I0202 15:32:58.967638 3508532 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x151f6e000000' with size 268435456
I0202 15:32:58.968543 3508532 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0202 15:32:59.026994 3508532 model_lifecycle.cc:459] loading: model1:1
I0202 15:32:59.027189 3508532 model_lifecycle.cc:459] loading: output-model:1
I0202 15:32:59.027409 3508532 model_lifecycle.cc:459] loading: model2:1
I0202 15:32:59.027579 3508532 model_lifecycle.cc:459] loading: input-model:1
I0202 15:33:04.439917 3508532 tensorflow.cc:2536] TRITONBACKEND_Initialize: tensorflow
I0202 15:33:04.439991 3508532 tensorflow.cc:2546] Triton TRITONBACKEND API version: 1.10
I0202 15:33:04.440002 3508532 tensorflow.cc:2552] 'tensorflow' TRITONBACKEND API version: 1.10
I0202 15:33:04.440012 3508532 tensorflow.cc:2576] backend configuration:
{"cmdline":{"auto-complete-config":"true","min-compute-capability":"6.000000","backend-directory":"/opt/tritonserver/backends","default-max-batch-size":"4"}}
I0202 15:33:04.442578 3508532 tensorflow.cc:2642] TRITONBACKEND_ModelInitialize: output-model (version 1)
2023-02-02 07:33:04.444367: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/output-model/1/model.savedmodel
2023-02-02 07:33:04.503356: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:04.503586: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/output-model/1/model.savedmodel
2023-02-02 07:33:04.505318: I tensorflow/core/platform/cpu_feature_guard.cc:194] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-02 07:33:14.549423: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:14.609972: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled
2023-02-02 07:33:14.612495: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:14.746361: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/output-model/1/model.savedmodel
2023-02-02 07:33:14.754409: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 10310059 microseconds.
I0202 15:33:14.756888 3508532 tensorflow.cc:2642] TRITONBACKEND_ModelInitialize: model1 (version 1)
2023-02-02 07:33:14.757595: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/model1/1/model.savedmodel
2023-02-02 07:33:14.786104: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:14.786159: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/model1/1/model.savedmodel
2023-02-02 07:33:14.789112: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:14.859868: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:15.518928: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/model1/1/model.savedmodel
2023-02-02 07:33:15.616876: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 859284 microseconds.
I0202 15:33:15.667657 3508532 tensorflow.cc:2642] TRITONBACKEND_ModelInitialize: input-model (version 1)
2023-02-02 07:33:15.668600: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/input-model/1/model.savedmodel
2023-02-02 07:33:15.670764: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:15.670816: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/input-model/1/model.savedmodel
2023-02-02 07:33:15.673781: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:15.676182: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:15.698430: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/input-model/1/model.savedmodel
2023-02-02 07:33:15.705335: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 36744 microseconds.
I0202 15:33:15.707648 3508532 tensorflow.cc:2642] TRITONBACKEND_ModelInitialize: model2 (version 1)
2023-02-02 07:33:15.708577: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/model2/1/model.savedmodel
2023-02-02 07:33:15.732339: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:15.732422: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/model2/1/model.savedmodel
2023-02-02 07:33:15.735342: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:15.789472: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:16.172849: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/model2/1/model.savedmodel
2023-02-02 07:33:16.249708: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 541138 microseconds.
I0202 15:33:16.280556 3508532 tensorflow.cc:2691] TRITONBACKEND_ModelInstanceInitialize: output-model (GPU device 0)
2023-02-02 07:33:16.281115: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/output-model/1/model.savedmodel
2023-02-02 07:33:16.282434: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:16.282476: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/output-model/1/model.savedmodel
2023-02-02 07:33:16.285038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:16.286869: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:16.309803: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/output-model/1/model.savedmodel
2023-02-02 07:33:16.316806: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 35697 microseconds.
I0202 15:33:16.316957 3508532 tensorflow.cc:2691] TRITONBACKEND_ModelInstanceInitialize: model1_0_0 (GPU device 0)
2023-02-02 07:33:16.317305: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/model1/1/model.savedmodel
I0202 15:33:16.317356 3508532 model_lifecycle.cc:694] successfully loaded 'output-model' version 1
2023-02-02 07:33:16.333447: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:16.333502: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/model1/1/model.savedmodel
2023-02-02 07:33:16.336171: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:16.386959: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:16.947536: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/model1/1/model.savedmodel
2023-02-02 07:33:17.049028: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 731725 microseconds.
I0202 15:33:17.049356 3508532 tensorflow.cc:2691] TRITONBACKEND_ModelInstanceInitialize: input-model (GPU device 0)
2023-02-02 07:33:17.050051: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/input-model/1/model.savedmodel
2023-02-02 07:33:17.051426: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:17.051472: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/input-model/1/model.savedmodel
2023-02-02 07:33:17.054169: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:17.056333: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:17.077851: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/input-model/1/model.savedmodel
2023-02-02 07:33:17.084786: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 34749 microseconds.
I0202 15:33:17.085008 3508532 tensorflow.cc:2691] TRITONBACKEND_ModelInstanceInitialize: model2_0_0 (GPU device 0)
I0202 15:33:17.085384 3508532 model_lifecycle.cc:694] successfully loaded 'input-model' version 1
2023-02-02 07:33:17.085461: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/model2/1/model.savedmodel
2023-02-02 07:33:17.099526: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:17.099602: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/model2/1/model.savedmodel
2023-02-02 07:33:17.102226: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:17.144683: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:17.496541: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/model2/1/model.savedmodel
2023-02-02 07:33:17.573699: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 488243 microseconds.
I0202 15:33:17.573896 3508532 tensorflow.cc:2691] TRITONBACKEND_ModelInstanceInitialize: model1_0_1 (GPU device 0)
2023-02-02 07:33:17.574495: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/model1/1/model.savedmodel
2023-02-02 07:33:17.591108: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:17.591184: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/model1/1/model.savedmodel
2023-02-02 07:33:17.593872: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:17.660093: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:18.153042: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/model1/1/model.savedmodel
2023-02-02 07:33:18.251816: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 677325 microseconds.
I0202 15:33:18.252027 3508532 tensorflow.cc:2691] TRITONBACKEND_ModelInstanceInitialize: model2_0_1 (GPU device 0)
I0202 15:33:18.252496 3508532 model_lifecycle.cc:694] successfully loaded 'model1' version 1
2023-02-02 07:33:18.253079: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /home/mly/O3-replay/.testhermes/model2/1/model.savedmodel
2023-02-02 07:33:18.265839: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-02-02 07:33:18.265889: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /home/mly/O3-replay/.testhermes/model2/1/model.savedmodel
2023-02-02 07:33:18.268510: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1637] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3109 MB memory: -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:07:00.0, compute capability: 6.1
2023-02-02 07:33:18.319333: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2023-02-02 07:33:18.672331: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /home/mly/O3-replay/.testhermes/model2/1/model.savedmodel
2023-02-02 07:33:18.750301: I tensorflow/cc/saved_model/loader.cc:325] SavedModel load for tags { serve }; Status: success: OK. Took 497226 microseconds.
I0202 15:33:18.751071 3508532 model_lifecycle.cc:694] successfully loaded 'model2' version 1
I0202 15:33:18.751880 3508532 model_lifecycle.cc:459] loading: ensemble:1
I0202 15:33:18.752439 3508532 model_lifecycle.cc:694] successfully loaded 'ensemble' version 1
I0202 15:33:18.752817 3508532 server.cc:563]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0202 15:33:18.752903 3508532 server.cc:590]
+------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Backend | Path | Config |
+------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tensorflow | /opt/tritonserver/backends/tensorflow2/libtriton_tensorflow2.so | {"cmdline":{"auto-complete-config":"true","min-compute-capability":"6.000000","backend-directory":"/opt/tritonserver/backends","default-max-batch-size":"4"}} |
+------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0202 15:33:18.752993 3508532 server.cc:633]
+--------------+---------+--------+
| Model | Version | Status |
+--------------+---------+--------+
| ensemble | 1 | READY |
| input-model | 1 | READY |
| model1 | 1 | READY |
| model2 | 1 | READY |
| output-model | 1 | READY |
+--------------+---------+--------+
I0202 15:33:18.783746 3508532 metrics.cc:864] Collecting metrics for GPU 0: GeForce GTX 1050 Ti
I0202 15:33:18.784214 3508532 metrics.cc:757] Collecting CPU metrics
I0202 15:33:18.784495 3508532 tritonserver.cc:2264]
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.29.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace logging |
| model_repository_path[0] | /home/mly/O3-replay/.testhermes |
| model_control_mode | MODE_NONE |
| strict_model_config | 0 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| response_cache_byte_size | 0 |
| min_supported_compute_capability | 6.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0202 15:33:18.793564 3508532 grpc_server.cc:4819] Started GRPCInferenceService at 0.0.0.0:8001
I0202 15:33:18.794436 3508532 http_server.cc:3477] Started HTTPService at 0.0.0.0:8000
I0202 15:33:18.836296 3508532 http_server.cc:184] Started Metrics Service at 0.0.0.0:8002
W0202 15:33:19.785243 3508532 metrics.cc:621] Unable to get power usage for GPU 0. Status:Success, value:0.000000
W0202 15:33:19.785314 3508532 metrics.cc:645] Unable to get energy consumption for GPU 0. Status:Success, value:0
W0202 15:33:20.785686 3508532 metrics.cc:621] Unable to get power usage for GPU 0. Status:Success, value:0.000000
W0202 15:33:20.785726 3508532 metrics.cc:645] Unable to get energy consumption for GPU 0. Status:Success, value:0
W0202 15:33:21.786422 3508532 metrics.cc:621] Unable to get power usage for GPU 0. Status:Success, value:0.000000
W0202 15:33:21.786477 3508532 metrics.cc:645] Unable to get energy consumption for GPU 0. Status:Success, value:0
I0202 15:37:18.768605 3508532 server.cc:264] Waiting for in-flight requests to complete.
I0202 15:37:18.769886 3508532 server.cc:280] Timeout 30: Found 0 model versions that have in-flight inferences
I0202 15:37:18.772655 3508532 server.cc:295] All models are stopped, unloading models
I0202 15:37:18.772681 3508532 server.cc:302] Timeout 30: Found 5 live models and 0 in-flight non-inference requests
I0202 15:37:18.773186 3508532 tensorflow.cc:2729] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0202 15:37:18.773187 3508532 tensorflow.cc:2729] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0202 15:37:18.773188 3508532 tensorflow.cc:2729] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0202 15:37:18.773217 3508532 tensorflow.cc:2729] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0202 15:37:18.774351 3508532 tensorflow.cc:2668] TRITONBACKEND_ModelFinalize: delete model state
I0202 15:37:18.774372 3508532 tensorflow.cc:2668] TRITONBACKEND_ModelFinalize: delete model state
I0202 15:37:18.774402 3508532 model_lifecycle.cc:579] successfully unloaded 'ensemble' version 1
I0202 15:37:18.792955 3508532 model_lifecycle.cc:579] successfully unloaded 'output-model' version 1
I0202 15:37:18.792974 3508532 model_lifecycle.cc:579] successfully unloaded 'input-model' version 1
I0202 15:37:18.836370 3508532 tensorflow.cc:2729] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0202 15:37:18.846349 3508532 tensorflow.cc:2729] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0202 15:37:18.870921 3508532 tensorflow.cc:2668] TRITONBACKEND_ModelFinalize: delete model state
I0202 15:37:18.871022 3508532 tensorflow.cc:2668] TRITONBACKEND_ModelFinalize: delete model state
I0202 15:37:18.922165 3508532 model_lifecycle.cc:579] successfully unloaded 'model2' version 1
I0202 15:37:18.944463 3508532 model_lifecycle.cc:579] successfully unloaded 'model1' version 1
I0202 15:37:19.772781 3508532 server.cc:302] Timeout 29: Found 0 live models and 0 in-flight non-inference requests