
[Bug]: node:internal/process/esm_loader:40 #3686

Open
4 of 5 tasks
susnick opened this issue Oct 16, 2024 · 0 comments
Labels
bug (Something isn't working) · triage (This needs an (initial) review)

susnick commented Oct 16, 2024

Checklist

  • The issue has not been resolved by following the troubleshooting guide
  • The issue exists on a clean installation of Fooocus
  • The issue exists in the current version of Fooocus
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

When using the Gradio API, I receive the following error:
node:internal/process/esm_loader:40
internalBinding('errors').triggerUncaughtException(
^
{
type: 'status',
stage: 'error',
endpoint: '/predict',
fn_index: 13,
message: null,
queue: false,
time: 2024-10-16T18:26:15.489Z
}
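
For reference, the trace above shows the rejection escaping as an uncaught exception in Node's ESM loader, and the rejected value appears to be the status object itself rather than an Error. A minimal sketch (assuming the same endpoint index and credentials as the reproduction code below) that catches it so the payload can be inspected without crashing the process:

import { client } from "@gradio/client";

const response_0 = await fetch("https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png");
const exampleImage = await response_0.blob();

try {
  const app = await client("https://xxxxxxx.com/", { auth: ['user', 'password'] });
  const result = await app.predict(10, [exampleImage]);
  console.log(result.data);
} catch (status) {
  // Logs the { type: 'status', stage: 'error', ... } payload shown above instead of crashing
  console.error("predict failed:", status);
}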

Steps to reproduce the problem

Run the code shown below:

import { client } from "@gradio/client";

const response_0 = await fetch("https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png");
const exampleImage = await response_0.blob();

const app = await client("https://xxxxxxx.com/", { auth: ['user', 'password'] });
const result = await app.predict(10, [
    exampleImage, // Blob in 'Image' Image component
]);

console.log(result.data);
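
Possibly relevant: the AssertionError in the console logs below comes from assert isinstance(x, str) in modules/gradio_hijack.py, which suggests this endpoint expects the image as a string rather than a Blob. A minimal, untested sketch that sends a base64 data URL instead — whether fn_index 10 is the intended endpoint and whether its Image preprocess accepts a data URL are assumptions:

import { client } from "@gradio/client";

const response_0 = await fetch("https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png");
// Encode the image as a base64 data URL string instead of passing a Blob (assumption: the
// hijacked Image preprocess accepts a data URL, since it asserts the input is a str)
const buffer = Buffer.from(await response_0.arrayBuffer());
const exampleImage = `data:image/png;base64,${buffer.toString("base64")}`;

const app = await client("https://xxxxxxx.com/", { auth: ['user', 'password'] });
const result = await app.predict(10, [
    exampleImage, // data URL in 'Image' Image component
]);

console.log(result.data);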

What should have happened?

The call should have completed without an error.

What browsers do you use to access Fooocus?

Google Chrome

Where are you running Fooocus?

Locally

What operating system are you using?

Windows 11

Console logs

C:\Users\Scott\Downloads\Fooocus_win64_2-5-0>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --port 7866 --listen 0.0.0.0
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--port', '7866', '--listen', '0.0.0.0']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.5.5
[Cleanup] Attempting to delete content of temp dir C:\Users\Scott\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Total VRAM 12282 MB, total RAM 65277 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://0.0.0.0:7866
model_type EPS
UNet ADM Dimension 2816
IMPORTANT: You are using gradio version 3.41.2, however version 5.0.1 is available, please upgrade.
--------
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1)] for model [C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.30 seconds
Started worker with PID 55052
App started successful. Use the app with http://localhost:7866/ or 0.0.0.0:7866

To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\gradio\blocks.py", line 1429, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\gradio\blocks.py", line 1239, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\modules\gradio_hijack.py", line 277, in preprocess
    assert isinstance(x, str)
AssertionError

Additional information

No response

susnick added the bug (Something isn't working) and triage (This needs an (initial) review) labels on Oct 16, 2024