
QEMU error when trying to restore an ARM64 Linux VM #5769

Closed
rxhfcy opened this issue Oct 4, 2023 · 10 comments


rxhfcy commented Oct 4, 2023

Describe the issue
I got a QEMU error when trying to restore an ARM64 Linux VM. Doesn't happen every time but I decided to report this anyway, in case this is useful.

Here's what I did when I saw the error:

  1. Open a Fedora 38 ARM64 VM
  2. Close the VM by clicking the red "close" button of the VM window
  3. Try to reopen/restore the VM

What happened: I got this error message (see screenshot): "QEMU error: QEMU exited from an error: Is another process using the image (clip clip) ... efi_vars.fd"
Expected: the VM restores normally every time.

Edit: this is probably irrelevant, but efi_vars.fd is huge (4.05 GB) and I don't remember doing anything unusual (see issue #5702).

Screenshot of the error:
Screenshot
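For context on this error: QEMU guards disk images with advisory file locks, so it refuses to start when the image appears to be in use by another open file description. A minimal Python sketch of the same pattern (simplified to a whole-file `flock`, whereas QEMU actually uses byte-range locks; the scratch temp file merely stands in for efi_vars.fd):

```python
import fcntl
import os
import tempfile

def try_open_image(path):
    """Open an image file and take an exclusive advisory lock, QEMU-style.

    Returns the open file on success, or None if another open file
    description already holds the lock (the situation QEMU reports as
    "Is another process using the image?").
    """
    f = open(path, "r+b")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()
        return None

# Demo with a scratch file standing in for the image:
path = tempfile.mkstemp()[1]
first = try_open_image(path)     # first opener wins the lock
second = try_open_image(path)    # second opener is refused
print(first is not None, second is None)  # → True True
first.close()                    # closing releases the advisory lock
os.unlink(path)
```

Because the lock dies with the file description, a genuinely stale lock normally cannot survive the owning process; a persistent refusal suggests something still has the file open.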

Configuration

  • UTM Version: (beta) 4.4.2 (90)
  • macOS Version: 14.0 (23A344)
  • Mac Chip (Intel, M1, ...): M1

Debug log
debug.log

Upload VM
config.plist.zip


osy commented Oct 7, 2023

I believe this issue should be fixed in 4.4.2


rxhfcy commented Oct 7, 2023

> I believe this issue should be fixed in 4.4.2

I'm pretty sure I was using 4.4.2 when I saw this.

  • I had renamed v4.4.2 "UTM.app" -> "UTM 4.4.2 beta.app",
  • ...and quoting from the beginning of debug.log:
    Launching: qemu-system-aarch64 -L "/Applications/UTM 4.4.2 beta.app/Contents/Resources/qemu" ...


osy commented Oct 7, 2023

Oh, in that case, is it possible you have another instance of UTM running?


rxhfcy commented Oct 7, 2023

> I believe this issue should be fixed in 4.4.2

> I'm pretty sure I was using 4.4.2 when I saw this.

...or maybe the erroneous efi_vars.fd (or whatever caused the bug) was itself created by a bug in a previous version of UTM, and that bug has already been fixed in 4.4.2.

I think I only saw the error the first time I opened the VM with 4.4.2, so that's entirely possible.

PS. While I was typing this, you asked:

> Oh in that case, could it be possible you have another instance of UTM running?

My answer: I usually try not to do that, but it's possible, of course.


osy commented Oct 9, 2023

I think I stumbled upon this bug:

    2469 Thread_9560018   DispatchQueue_1: com.apple.main-thread  (serial)
    + 2469 ???  (in <unknown binary>)  [0x20a000000000]
    +   2469 ???  (in <unknown binary>)  [0x13d62eff0]
    +     2469 qemu_coroutine_new  (in qemu-aarch64-softmmu) + 296  [0x104059d10]
    +       2469 _sigtramp  (in libsystem_platform.dylib) + 56  [0x19c91ea24]
    +         2469 coroutine_trampoline  (in qemu-aarch64-softmmu) + 124  [0x104059e68]
    +           2469 coroutine_bootstrap  (in qemu-aarch64-softmmu) + 44  [0x104059fd0]
    +             2469 monitor_qmp_dispatcher_co  (in qemu-aarch64-softmmu) + 532  [0x103fa9acc]
    +               2469 monitor_qmp_dispatch  (in qemu-aarch64-softmmu) + 76  [0x103fa9e38]
    +                 2469 qmp_send_response  (in qemu-aarch64-softmmu) + 152  [0x103fa9804]
    +                   2469 monitor_puts  (in qemu-aarch64-softmmu) + 100  [0x103fa809c]
    +                     2469 monitor_flush_locked  (in qemu-aarch64-softmmu) + 80  [0x103fa7fc0]
    +                       2469 qemu_chr_write  (in qemu-aarch64-softmmu) + 156  [0x103fa420c]
    +                         2469 qemu_chr_write_buffer  (in qemu-aarch64-softmmu) + 176  [0x103fa4340]
    +                           2469 spice_chr_write  (in qemu-aarch64-softmmu) + 116  [0x103b64de8]
    +                             2469 spice_server_char_device_wakeup  (in spice-server.1) + 108  [0x102d315dc]
    +                               2469 red_char_device_wakeup  (in spice-server.1) + 32  [0x102cdee80]
    +                                 2469 red_char_device_read_from_device  (in spice-server.1) + 308  [0x102cdf3c0]
    +                                   2469 red_char_device_read_one_msg_from_device  (in spice-server.1) + 56  [0x102ce10d4]
    +                                     2469 spicevmc_chardev_read_msg_from_dev  (in spice-server.1) + 480  [0x102d46318]
    +                                       2469 spicevmc_red_channel_queue_data  (in spice-server.1) + 68  [0x102d46624]
    +                                         2469 red_channel_client_pipe_add_push  (in spice-server.1) + 40  [0x102d1a868]
    +                                           2469 red_channel_client_push  (in spice-server.1) + 456  [0x102d19698]
    +                                             2469 g_object_unref  (in gobject-2.0.0) + 1132  [0x10274a888]
    +                                               2469 red_channel_client_finalize  (in spice-server.1) + 100  [0x102d1bbe8]
    +                                                 2469 red_stream_free  (in spice-server.1) + 52  [0x102d3b66c]
    +                                                   2469 red_stream_push_channel_event  (in spice-server.1) + 72  [0x102d3b720]
    +                                                     2469 main_dispatcher_channel_event  (in spice-server.1) + 116  [0x102d0f614]
    +                                                       2469 reds_handle_channel_event  (in spice-server.1) + 52  [0x102d2d38c]
    +                                                         2469 adapter_channel_event  (in spice-server.1) + 104  [0x102cfab48]
    +                                                           2469 channel_event  (in qemu-aarch64-softmmu) + 608  [0x103b4f838]
    +                                                             2469 qapi_event_send_spice_disconnected  (in qemu-aarch64-softmmu) + 228  [0x104030b90]
    +                                                               2469 qapi_event_emit  (in qemu-aarch64-softmmu) + 512  [0x103fa8778]
    +                                                                 2469 monitor_qapi_event_emit  (in qemu-aarch64-softmmu) + 120  [0x103fa9388]
    +                                                                   2469 qmp_send_response  (in qemu-aarch64-softmmu) + 152  [0x103fa9804]
    +                                                                     2469 monitor_puts  (in qemu-aarch64-softmmu) + 64  [0x103fa8078]
    +                                                                       2469 qemu_mutex_lock_impl  (in qemu-aarch64-softmmu) + 80  [0x104048050]
    +                                                                         2469 _pthread_mutex_firstfit_lock_slow  (in libsystem_pthread.dylib) + 248  [0x19c8eaa5c]
    +                                                                           2469 _pthread_mutex_firstfit_lock_wait  (in libsystem_pthread.dylib) + 84  [0x19c8ed0c4]
    +                                                                             2469 __psynch_mutexwait  (in libsystem_kernel.dylib) + 8  [0x19c8b2be8]

There is a deadlock when trying to write a log message in the middle of sending another log message.

Do you have debug logging enabled? If so, disable it.
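The stack above is a classic non-recursive-mutex re-entry: `monitor_puts` holds the monitor lock while writing, the write tears down a SPICE channel, and the resulting disconnect event tries to send another QMP response on the same thread, blocking on the lock it already holds. A hedged Python analogue (names are illustrative; real QEMU blocks forever on `pthread_mutex_lock`, while the timeout here just lets the sketch terminate):

```python
import threading

# Non-recursive lock standing in for QEMU's monitor mutex.
monitor_lock = threading.Lock()

def send_response(msg, depth=0):
    """Sketch of the re-entrancy bug: sending a QMP response can trigger a
    SPICE-disconnect event, which tries to send *another* response on the
    same thread while the monitor lock is still held."""
    if not monitor_lock.acquire(timeout=0.2):
        return "deadlock: monitor lock already held by this call chain"
    try:
        if depth == 0:
            # qemu_chr_write -> spice teardown -> qapi_event_emit -> re-entry
            return send_response("spice-disconnected", depth=1)
        return "sent " + msg
    finally:
        monitor_lock.release()

print(send_response("qmp-reply"))
# → deadlock: monitor lock already held by this call chain
```

This is why disabling debug logging sidesteps the hang: without the nested event emission, the lock is never re-entered.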

@osy osy added this to the v4.4 milestone Oct 9, 2023
@osy osy closed this as completed in d915953 Oct 9, 2023
@DanMelbourne

I have this problem too, on UTM Version 4.4.4 (92).

QEMU error: QEMU exited from an error: Is another process using the image [/Users/dan/Library/Containers/com.utmapp.UTM/Data/Documents/Home Assistant OS.utm/Data/haos_generic-aarch64-11.0.qcow2]?

I've rebooted the Mac so it can't be a stray process. Must be something in the VM file itself or in UTM's settings. Is there some kind of lock file for VMs I can delete?


osy commented Oct 25, 2023

Hi, you're commenting on a closed issue so it's unlikely to get any attention. If you are experiencing this problem on the latest version, open a new issue with all the requested information.

@DanMelbourne

Thanks Osy, will do.

@shooding

If you start UTM from the command line, you can see logs explaining the reason:

/Applications/UTM.app/Contents/MacOS/UTM

Mine shows:

2024-04-29 17:14:04.854 UTM[42943:1184477] [QEMULogging(0x600001a29940)] 2024-04-29 17:14:04,854 DEBUG GSpice-../src/spice-session.c:2104 main-1:0: connect ready
2024-04-29 17:14:04.854 UTM[42943:1184477] [QEMULogging(0x600001a29940)] 2024-04-29 17:14:04,854 DEBUG GSpice-../src/spice-session.c:2279 main-1:0: open host: Could not connect: Connection refused

The SPICE session is not connected.
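When the log shows "Could not connect: Connection refused", a quick way to confirm is to probe whether anything is listening on the SPICE port at all. A small Python sketch (the port number is purely illustrative, not taken from the log above):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if something accepts TCP connections at host:port,
    e.g. the SPICE server that QEMU is supposed to start."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, unreachable, etc.
        return False

# 5930 is only an example SPICE port; check your VM's actual configuration.
print(port_open("127.0.0.1", 5930))
```

If this returns False, QEMU's SPICE server never came up (or exited), which usually points back at the underlying QEMU startup error rather than at the client.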

@raven1779

@osy Hello, can you help me with this issue?

QEMU error: QEMU exited from an error: qemu-aarch64-softmmu: -drive if=none,media=cdrom,id=drive8DE59B3B-0696-425E-BF49-8AFA6361A371,file=/Users/hartanto/Downloads/22631.2861.231204-0538.23H2_NI_RELEASE_SVC_REFRESH_CLIENTCONSUMER_RET_A64FRE_en-us.iso,readonly=on: Failed to lock byte 100
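On the "Failed to lock byte 100" wording: QEMU encodes image permissions as POSIX advisory locks on specific byte offsets of the file, so this message means another process already holds a conflicting lock at that offset. A hedged Python sketch of the mechanism on a scratch file (POSIX locks are per-process, so the contention needs a second process):

```python
import fcntl
import os
import subprocess
import sys
import tempfile

# Code run in a child process: try to take the same single-byte lock.
child_code = """\
import fcntl, sys
f = open(sys.argv[1], 'r+b')
try:
    fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB, 1, 100)
    print('locked byte 100')
except OSError:
    print('Failed to lock byte 100')
"""

path = tempfile.mkstemp()[1]
with open(path, "r+b") as f:
    f.truncate(200)
    # Parent takes an exclusive lock on the single byte at offset 100.
    fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB, 1, 100)
    out = subprocess.run([sys.executable, "-c", child_code, path],
                         capture_output=True, text=True).stdout.strip()
    print(out)  # → Failed to lock byte 100
os.unlink(path)
```

In the report above the contested file is the attached ISO, so something else (another UTM/QEMU instance, or a tool inspecting the ISO) likely had it open at the time.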
