Both gave me the same error, "TypeError: Descriptors cannot be created directly", which appears to come from running preload() for the built-in extensions.
I also tried downgrading protobuf with `pip install protobuf==3.20.*`, but the issue persists.
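The error message in the logs below suggests pinning protobuf to 3.20.x or lower. A minimal sketch (assuming the webui venv is active) to confirm which protobuf release the environment actually resolves, since a leftover 4.x install would explain the downgrade not taking effect:

```python
# Sketch: check whether the resolved protobuf release predates the 4.x
# descriptor check that breaks onnx's older generated _pb2 modules.
from importlib.metadata import PackageNotFoundError, version

def protobuf_ok(ver: str) -> bool:
    """True if the version is 3.20.x or lower, where old generated code still loads."""
    major, minor = (int(part) for part in ver.split(".")[:2])
    return (major, minor) <= (3, 20)

try:
    installed = version("protobuf")
    print(installed, "->", "ok" if protobuf_ok(installed) else "too new for old _pb2 files")
except PackageNotFoundError:
    print("protobuf is not installed in this environment")
```

If this reports a 4.x version, `pip install "protobuf<=3.20.3"` inside the venv (or setting `PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python`, as the traceback itself suggests) is the usual workaround.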
Run webui-user with `COMMANDLINE_ARGS=--use-zluda --update-check --skip-ort --opt-sdp-attention --opt-split-attention`.
What should have happened?
WebUI should open normally with a fresh install.
What browsers do you use to access the UI?
Mozilla Firefox
Sysinfo
Unable to get sysinfo due to the following error:
Traceback (most recent call last):
File "D:\stable-diffusion-webui-amdgpu\launch.py", line 48, in <module>
main()
File "D:\stable-diffusion-webui-amdgpu\launch.py", line 29, in main
filename = launch_utils.dump_sysinfo()
File "D:\stable-diffusion-webui-amdgpu\modules\launch_utils.py", line 693, in dump_sysinfo
text = sysinfo.get()
File "D:\stable-diffusion-webui-amdgpu\modules\sysinfo.py", line 46, in get
res = get_dict()
File "D:\stable-diffusion-webui-amdgpu\modules\sysinfo.py", line 119, in get_dict
"Extensions": get_extensions(enabled=True, fallback_disabled_extensions=config.get('disabled_extensions', [])),
AttributeError: 'str' object has no attribute 'get'
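For context on the sysinfo failure above: `config.get(...)` raising AttributeError on a str means `config` was still an unparsed string rather than a parsed dict at that point. A minimal standalone illustration of that failure mode (the variable names here are hypothetical, not webui's actual code):

```python
import json

raw = '{"disabled_extensions": []}'  # config file contents, still a raw string

# Calling .get on the unparsed string reproduces the reported error:
try:
    raw.get("disabled_extensions", [])
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'get'

# Parsing first yields a dict, and .get works as sysinfo.py expects:
config = json.loads(raw)
print(config.get("disabled_extensions", []))  # []
```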
Console logs
*** Error running preload() for D:\stable-diffusion-webui-amdgpu\extensions-builtin\LDSR\preload.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui-amdgpu\modules\script_loading.py", line 30, in preload_extensions
module = load_module(preload_script)
File "D:\stable-diffusion-webui-amdgpu\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\stable-diffusion-webui-amdgpu\extensions-builtin\LDSR\preload.py", line 2, in<module>
from modules import paths
File "D:\stable-diffusion-webui-amdgpu\modules\paths.py", line 60, in<module>
import sgm # noqa: F401
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\__init__.py", line 1, in<module>
from .models import AutoencodingEngine, DiffusionEngine
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\__init__.py", line 1, in<module>
from .autoencoder import AutoencodingEngine
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\autoencoder.py", line 6, in<module>
import pytorch_lightning as pl
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\__init__.py", line 35, in<module>
from pytorch_lightning.callbacks import Callback # noqa: E402
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\callbacks\__init__.py", line 28, in<module>
from pytorch_lightning.callbacks.pruning import ModelPruning
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\callbacks\pruning.py", line 31, in<module>
from pytorch_lightning.core.module import LightningModule
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\core\__init__.py", line 16, in<module>
from pytorch_lightning.core.module import LightningModule
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\core\module.py", line 48, in<module>
from pytorch_lightning.trainer.connectors.logger_connector.fx_validator import _FxValidator
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\trainer\__init__.py", line 17, in<module>
from pytorch_lightning.trainer.trainer import Trainer
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 58, in<module>
from pytorch_lightning.loops import PredictionLoop, TrainingEpochLoop
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\loops\__init__.py", line 15, in<module>
from pytorch_lightning.loops.batch import TrainingBatchLoop # noqa: F401
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\loops\batch\__init__.py", line 15, in<module>
from pytorch_lightning.loops.batch.training_batch_loop import TrainingBatchLoop # noqa: F401
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\loops\batch\training_batch_loop.py", line 20, in<module>
from pytorch_lightning.loops.optimization.manual_loop import _OUTPUTS_TYPE as _MANUAL_LOOP_OUTPUTS_TYPE
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\loops\optimization\__init__.py", line 15, in<module>
from pytorch_lightning.loops.optimization.manual_loop import ManualOptimization # noqa: F401
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\loops\optimization\manual_loop.py", line 23, in<module>
from pytorch_lightning.loops.utilities import _build_training_step_kwargs, _extract_hiddens
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\loops\utilities.py", line 29, in<module>
from pytorch_lightning.strategies.parallel import ParallelStrategy
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\strategies\__init__.py", line 15, in<module>
from pytorch_lightning.strategies.bagua import BaguaStrategy # noqa: F401
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\strategies\bagua.py", line 29, in<module>
from pytorch_lightning.plugins.precision import PrecisionPlugin
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\plugins\__init__.py", line 7, in<module>
from pytorch_lightning.plugins.precision.apex_amp import ApexMixedPrecisionPlugin
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\plugins\precision\__init__.py", line 18, in<module>
from pytorch_lightning.plugins.precision.fsdp_native_native_amp import FullyShardedNativeNativeMixedPrecisionPlugin
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\plugins\precision\fsdp_native_native_amp.py", line 24, in<module>
from torch.distributed.fsdp.fully_sharded_data_parallel import MixedPrecision
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\__init__.py", line 1, in<module>
from ._flat_param import FlatParameter as FlatParameter
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\_flat_param.py", line 30, in<module>
from torch.distributed.fsdp._common_utils import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\_common_utils.py", line 35, in<module>
from torch.distributed.fsdp._fsdp_extensions import FSDPExtensions
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\_fsdp_extensions.py", line 8, in<module>
from torch.distributed._tensor import DeviceMesh, DTensor
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_tensor\__init__.py", line 6, in<module>
import torch.distributed._tensor.ops
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_tensor\ops\__init__.py", line 2, in<module>
from .embedding_ops import *# noqa: F403
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_tensor\ops\embedding_ops.py", line 8, in<module>
import torch.distributed._functional_collectives as funcol
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_functional_collectives.py", line 12, in<module>
from . import _functional_collectives_impl as fun_col_impl
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_functional_collectives_impl.py", line 36, in<module>
from torch._dynamo import assume_constant_result
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\__init__.py", line 2, in<module>
from . import convert_frame, eval_frame, resume_execution
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 40, in<module>
from . import config, exc, trace_rules
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\trace_rules.py", line 50, in<module>
from .variables import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\variables\__init__.py", line 34, in<module>
from .higher_order_ops import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\variables\higher_order_ops.py", line 13, in<module>
import torch.onnx.operators
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\onnx\__init__.py", line 59, in<module>
from ._internal.onnxruntime import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\onnx\_internal\onnxruntime.py", line 36, in<module>
import onnx
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\__init__.py", line 6, in<module>
from onnx.external_data_helper import load_external_data_for_model, write_external_data_tensors, convert_model_to_external_data
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\external_data_helper.py", line 9, in<module>
from .onnx_pb import TensorProto, ModelProto, AttributeProto, GraphProto
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\onnx_pb.py", line 4, in<module>
from .onnx_ml_pb2 import *# noqa
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\onnx_ml_pb2.py", line 33, in<module>
_descriptor.EnumValueDescriptor(
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\google\protobuf\descriptor.py", line 789, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
---
*** Error running preload() for D:\stable-diffusion-webui-amdgpu\extensions-builtin\Lora\preload.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui-amdgpu\modules\script_loading.py", line 30, in preload_extensions
module = load_module(preload_script)
File "D:\stable-diffusion-webui-amdgpu\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\stable-diffusion-webui-amdgpu\extensions-builtin\Lora\preload.py", line 2, in<module>
from modules import paths
File "D:\stable-diffusion-webui-amdgpu\modules\paths.py", line 60, in<module>
import sgm # noqa: F401
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\__init__.py", line 1, in<module>
from .models import AutoencodingEngine, DiffusionEngine
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\__init__.py", line 1, in<module>
from .autoencoder import AutoencodingEngine
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\autoencoder.py", line 6, in<module>
import pytorch_lightning as pl
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\__init__.py", line 35, in<module>
from pytorch_lightning.callbacks import Callback # noqa: E402
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\callbacks\__init__.py", line 31, in<module>
from pytorch_lightning.callbacks.stochastic_weight_avg import StochasticWeightAveraging
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\callbacks\stochastic_weight_avg.py", line 28, in<module>
from pytorch_lightning.strategies import DDPFullyShardedStrategy, DeepSpeedStrategy
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\strategies\__init__.py", line 15, in<module>
from pytorch_lightning.strategies.bagua import BaguaStrategy # noqa: F401
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\strategies\bagua.py", line 29, in<module>
from pytorch_lightning.plugins.precision import PrecisionPlugin
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\plugins\__init__.py", line 11, in<module>
from pytorch_lightning.plugins.precision.fsdp_native_native_amp import FullyShardedNativeNativeMixedPrecisionPlugin
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\plugins\precision\__init__.py", line 18, in<module>
from pytorch_lightning.plugins.precision.fsdp_native_native_amp import FullyShardedNativeNativeMixedPrecisionPlugin
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\plugins\precision\fsdp_native_native_amp.py", line 24, in<module>
from torch.distributed.fsdp.fully_sharded_data_parallel import MixedPrecision
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\__init__.py", line 1, in<module>
from ._flat_param import FlatParameter as FlatParameter
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\_flat_param.py", line 30, in<module>
from torch.distributed.fsdp._common_utils import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\_common_utils.py", line 35, in<module>
from torch.distributed.fsdp._fsdp_extensions import FSDPExtensions
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\_fsdp_extensions.py", line 8, in<module>
from torch.distributed._tensor import DeviceMesh, DTensor
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_tensor\__init__.py", line 6, in<module>
import torch.distributed._tensor.ops
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_tensor\ops\__init__.py", line 2, in<module>
from .embedding_ops import *# noqa: F403
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_tensor\ops\embedding_ops.py", line 8, in<module>
import torch.distributed._functional_collectives as funcol
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_functional_collectives.py", line 12, in<module>
from . import _functional_collectives_impl as fun_col_impl
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_functional_collectives_impl.py", line 36, in<module>
from torch._dynamo import assume_constant_result
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\__init__.py", line 2, in<module>
from . import convert_frame, eval_frame, resume_execution
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 40, in<module>
from . import config, exc, trace_rules
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\trace_rules.py", line 50, in<module>
from .variables import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\variables\__init__.py", line 34, in<module>
from .higher_order_ops import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\variables\higher_order_ops.py", line 13, in<module>
import torch.onnx.operators
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\onnx\__init__.py", line 59, in<module>
from ._internal.onnxruntime import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\onnx\_internal\onnxruntime.py", line 36, in<module>
import onnx
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\__init__.py", line 6, in<module>
from onnx.external_data_helper import load_external_data_for_model, write_external_data_tensors, convert_model_to_external_data
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\external_data_helper.py", line 9, in<module>
from .onnx_pb import TensorProto, ModelProto, AttributeProto, GraphProto
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\onnx_pb.py", line 4, in<module>
from .onnx_ml_pb2 import *# noqa
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\onnx_ml_pb2.py", line 33, in<module>
_descriptor.EnumValueDescriptor(
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\google\protobuf\descriptor.py", line 789, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
---
*** Error running preload() for D:\stable-diffusion-webui-amdgpu\extensions-builtin\ScuNET\preload.py
[Traceback omitted: identical to the Lora preload error above, ending in the same "TypeError: Descriptors cannot be created directly." message and workaround notes.]
---
*** Error running preload() for D:\stable-diffusion-webui-amdgpu\extensions-builtin\SwinIR\preload.py
[Traceback omitted: identical to the Lora preload error above, ending in the same "TypeError: Descriptors cannot be created directly." message and workaround notes.]
---
Traceback (most recent call last):
File "D:\stable-diffusion-webui-amdgpu\launch.py", line 48, in <module>
main()
File "D:\stable-diffusion-webui-amdgpu\launch.py", line 39, in main
prepare_environment()
File "D:\stable-diffusion-webui-amdgpu\modules\launch_utils.py", line 664, in prepare_environment
from modules import devices
File "D:\stable-diffusion-webui-amdgpu\modules\devices.py", line 6, in<module>
from modules import errors, shared, npu_specific
File "D:\stable-diffusion-webui-amdgpu\modules\shared.py", line 6, in<module>
from modules import shared_cmd_options, shared_gradio_themes, options, shared_items, sd_models_types
File "D:\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 4, in<module>
from modules import script_callbacks, scripts, ui_components
File "D:\stable-diffusion-webui-amdgpu\modules\script_callbacks.py", line 11, in<module>
from modules import errors, timer, extensions, shared, util
File "D:\stable-diffusion-webui-amdgpu\modules\extensions.py", line 9, in<module>
from modules import shared, errors, cache, scripts
File "D:\stable-diffusion-webui-amdgpu\modules\cache.py", line 9, in<module>
from modules.paths import data_path, script_path
File "D:\stable-diffusion-webui-amdgpu\modules\paths.py", line 60, in<module>
import sgm # noqa: F401
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\__init__.py", line 1, in<module>
from .models import AutoencodingEngine, DiffusionEngine
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\__init__.py", line 1, in<module>
from .autoencoder import AutoencodingEngine
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\autoencoder.py", line 6, in<module>
import pytorch_lightning as pl
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\__init__.py", line 35, in<module>
from pytorch_lightning.callbacks import Callback # noqa: E402
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\callbacks\__init__.py", line 31, in<module>
from pytorch_lightning.callbacks.stochastic_weight_avg import StochasticWeightAveraging
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\callbacks\stochastic_weight_avg.py", line 28, in<module>
from pytorch_lightning.strategies import DDPFullyShardedStrategy, DeepSpeedStrategy
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\strategies\__init__.py", line 15, in<module>
from pytorch_lightning.strategies.bagua import BaguaStrategy # noqa: F401
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\strategies\bagua.py", line 29, in<module>
from pytorch_lightning.plugins.precision import PrecisionPlugin
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\plugins\__init__.py", line 11, in<module>
from pytorch_lightning.plugins.precision.fsdp_native_native_amp import FullyShardedNativeNativeMixedPrecisionPlugin
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\plugins\precision\__init__.py", line 18, in<module>
from pytorch_lightning.plugins.precision.fsdp_native_native_amp import FullyShardedNativeNativeMixedPrecisionPlugin
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\plugins\precision\fsdp_native_native_amp.py", line 24, in<module>
from torch.distributed.fsdp.fully_sharded_data_parallel import MixedPrecision
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\__init__.py", line 1, in<module>
from ._flat_param import FlatParameter as FlatParameter
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\_flat_param.py", line 30, in<module>
from torch.distributed.fsdp._common_utils import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\_common_utils.py", line 35, in<module>
from torch.distributed.fsdp._fsdp_extensions import FSDPExtensions
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\fsdp\_fsdp_extensions.py", line 8, in<module>
from torch.distributed._tensor import DeviceMesh, DTensor
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_tensor\__init__.py", line 6, in<module>
import torch.distributed._tensor.ops
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_tensor\ops\__init__.py", line 2, in<module>
from .embedding_ops import *# noqa: F403
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_tensor\ops\embedding_ops.py", line 8, in<module>
import torch.distributed._functional_collectives as funcol
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_functional_collectives.py", line 12, in<module>
from . import _functional_collectives_impl as fun_col_impl
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\distributed\_functional_collectives_impl.py", line 36, in<module>
from torch._dynamo import assume_constant_result
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\__init__.py", line 2, in<module>
from . import convert_frame, eval_frame, resume_execution
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 40, in<module>
from . import config, exc, trace_rules
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\trace_rules.py", line 50, in<module>
from .variables import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\variables\__init__.py", line 34, in<module>
from .higher_order_ops import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\_dynamo\variables\higher_order_ops.py", line 13, in<module>
import torch.onnx.operators
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\onnx\__init__.py", line 59, in<module>
from ._internal.onnxruntime import (
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\onnx\_internal\onnxruntime.py", line 36, in<module>
import onnx
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\__init__.py", line 6, in<module>
from onnx.external_data_helper import load_external_data_for_model, write_external_data_tensors, convert_model_to_external_data
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\external_data_helper.py", line 9, in<module>
from .onnx_pb import TensorProto, ModelProto, AttributeProto, GraphProto
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\onnx_pb.py", line 4, in<module>
from .onnx_ml_pb2 import *# noqa
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnx\onnx_ml_pb2.py", line 33, in<module>
_descriptor.EnumValueDescriptor(
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\google\protobuf\descriptor.py", line 789, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Press any key to continue...
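The two numbered workarounds in the log encode a simple version rule: old generated _pb2 modules (such as onnx_ml_pb2.py) load only if the installed protobuf runtime is 3.20.x or lower, or if the code was regenerated with protoc >= 3.19. A minimal sketch of that rule, where generated_code_compatible is a hypothetical helper for illustration, not a real protobuf API:

```python
# Sketch of the compatibility rule stated in the TypeError message above.
# generated_code_compatible is a hypothetical helper, not a protobuf API;
# the thresholds (3.20.x runtime, protoc 3.19) come from the log text.
def generated_code_compatible(runtime_version: str, protoc_version: str) -> bool:
    """True if old generated _pb2 code should load under this runtime."""
    runtime = tuple(int(x) for x in runtime_version.split(".")[:2])
    protoc = tuple(int(x) for x in protoc_version.split(".")[:2])
    # Either the runtime is still 3.20.x or lower, or the _pb2 files were
    # regenerated with a new enough protoc.
    return runtime <= (3, 20) or protoc >= (3, 19)

# A protobuf 4.x runtime with stale generated code reproduces the report:
print(generated_code_compatible("4.25.3", "3.12"))  # prints: False
```

This also explains why `pip install protobuf==3.20.*` alone may not help here: onnx in the venv can pin a newer protobuf and pull it back in on the next launch, so the runtime ends up above 3.20 again.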
Additional information
No response
Hey, please try the following:
Open up a cmd and run pip cache purge
Then delete the venv folder.
Then remove --opt-sdp-attention and --opt-split-attention from the webui-user.bat
Then relaunch the webui-user.bat
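The last two steps amount to trimming COMMANDLINE_ARGS in webui-user.bat and relaunching. A sketch of the edited file, keeping the flags from the original report minus the two attention flags (the rest of the file and exact contents may differ per install):

```bat
rem webui-user.bat (sketch) -- flags from the report minus the two removed ones
set COMMANDLINE_ARGS=--use-zluda --update-check --skip-ort
call webui.bat
```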
Checklist
What happened?
I am unable to start webui-user after pulling the latest update. I then deleted the folder and did a fresh install using the ZLUDA installation guide at https://github.com/CS1o/Stable-Diffusion-Info/wiki/Installation-Guides.
Steps to reproduce the problem
Run webui-user with COMMANDLINE_ARGS = --use-zluda --update-check --skip-ort --opt-sdp-attention --opt-split-attention