
Conversation


@lstein lstein commented Aug 2, 2023

What type of PR is this? (check all applicable)

  • [X] Feature

Have you discussed this change with the InvokeAI team?

  • Yes
  • No, because:

Have you updated all relevant documentation?

  • Yes
  • No - will be in release notes

Description

On CUDA systems, this PR adds a new slider to the install-time configure script for adjusting the VRAM cache size, and suggests a good starting value based on the user's maximum VRAM (this is subject to verification).

On non-CUDA systems this slider is suppressed.
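As a rough illustration of the behavior described above, the suggested default could be derived from the detected VRAM with a heuristic like the following. This is a hypothetical sketch, not the configure script's actual code; the function name, tier thresholds, and fractions are all illustrative assumptions.

```python
def suggest_vram_cache_gb(total_vram_gb: float, cuda_available: bool = True) -> float:
    """Suggest a VRAM cache size in GB from the card's total VRAM.

    Returns 0.0 on non-CUDA systems, where the slider is suppressed.
    The tiers below are illustrative, not the PR's real values.
    """
    if not cuda_available:
        return 0.0
    if total_vram_gb <= 6:
        return 0.5   # small cards: leave most VRAM for the active model
    if total_vram_gb <= 12:
        return 2.0   # mid-range cards
    return round(total_vram_gb * 0.25, 1)  # large cards: cache a quarter
```

The key design point is that the suggestion only sets the slider's initial position; the user can still adjust it before the value is written to invokeai.yaml.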

Please test on both CUDA and non-CUDA systems using:

invokeai-configure --root ~/invokeai-main/ --skip-sd --skip-support

To see and test the default values, move invokeai.yaml out of the way before running.

Note added 8 August 2023

This PR also fixes the configure and model install scripts so that, if the window is too small to fit the user interface, the user is prompted to interactively resize the window and/or change the font size (with the option to give up). This prevents npyscreen from generating its horrible tracebacks.
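The resize-or-give-up behavior can be sketched roughly as below. This is a minimal illustration, not the PR's actual implementation; the minimum dimensions and function names are assumptions.

```python
import shutil

# Illustrative minimums; the real scripts may require different dimensions.
MIN_COLS, MIN_LINES = 120, 40

def window_fits(cols: int, lines: int) -> bool:
    """True if the terminal is large enough for the npyscreen form."""
    return cols >= MIN_COLS and lines >= MIN_LINES

def wait_for_resize() -> bool:
    """Prompt until the window fits; return False if the user gives up."""
    while True:
        size = shutil.get_terminal_size()
        if window_fits(size.columns, size.lines):
            return True
        answer = input(
            f"Terminal must be at least {MIN_COLS}x{MIN_LINES} "
            f"(currently {size.columns}x{size.lines}). "
            "Resize the window or reduce the font size and press Enter, "
            "or type 'q' to give up: "
        )
        if answer.strip().lower() == "q":
            return False
```

Checking the size up front, before handing control to npyscreen, is what avoids the library crashing mid-draw with an unhandled traceback.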

Related Tickets & Documents

  • Related Issue #
  • Closes #

QA Instructions, Screenshots, Recordings

Added/updated tests?

  • Yes
  • No : please replace this line with details on why tests
    have not been included

[optional] Are there any post deployment tasks we need to perform?


@ebr ebr left a comment

Max VRAM slider works great. (I did hit an edge case where npyscreen crashed after resizing the window, but can't consistently reproduce it)

@Millu Millu enabled auto-merge August 9, 2023 02:21
@Millu Millu merged commit 37c9b85 into main Aug 9, 2023
@Millu Millu deleted the feat/select-vram-in-config branch August 9, 2023 02:27