
New automatic layers (#1012)
* Henk's version of the fsize algo

This is the current version of the fsize algo based on Pyro's algorithm with added padding.

* Update koboldcpp.py

Add debugs and bump padding

* Pyro version

Pyro didn't agree with my version, so here is a test with his version

* Polish new auto layers

This one cleans up some debug prints, restores the max behavior in case the old algorithm suits someone better, and changes the 200-layer cap to the actual maximum for all backends, so users get a better feel for the models.

* Remove 10% margin

The new version has been much more accurate; on low-VRAM systems I only notice a 1-layer difference. Getting rid of the margin so users can test whether it's still within safe limits, as I expect. On a 6GB system it results in 18 layers instead of 17 being chosen for Tiefighter.

* Restore 500MB buffer to play it safe

I'm not confident most people keep their VRAM usage under 1GB with background tasks. For now, since we are aiming to have it work on as many systems as possible, I restore the 500MB of extra space, since the fsize inflation is gone.
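As a hedged sketch of the headroom budget described above (the helper name and structure are illustrative, not koboldcpp's actual code): three 500MB allowances — driver swap avoidance, the OS, and background apps / browser — come out of the card's VRAM before any layers are planned.

```python
MB = 1024 * 1024

def usable_vram(total_vram_bytes):
    """Illustrative: the VRAM a layer estimator may actually plan with,
    after the 500MB driver + 500MB OS + 500MB background-app allowances."""
    reserved = 3 * 500 * MB  # 1.5GB total headroom
    return max(0, total_vram_bytes - reserved)
```

On a 6GB card this leaves roughly 4.5GB of VRAM to budget for model layers.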

* Cap layers at maximum

When using the auto-predict we don't want to go over the maximum number of layers; users should get a realistic feel for how large the model is.

For example, when I was using the new auto guesser to communicate whether a larger model would fit on someone's system at a higher context, it originally made me think that the model had 60 layers. In reality it had fewer.

This commit takes the model's layer count and adds 3 extra, since that is the highest number of additional layers a backend adds for context handling (for most it's 1).
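The cap described above can be sketched as a one-liner (hypothetical helper name; in the commit the change lives inside autoset_gpu_layers):

```python
def cap_layer_guess(guess, model_layers, extra=3):
    """Clamp an estimated offload count to the model's true layer count
    plus the few extra layers a backend may add for context handling."""
    return min(guess, model_layers + extra)

# Invented numbers: if the raw guess says 60 but the model really has
# 40 layers, the reported maximum becomes 43.
```

So `cap_layer_guess(60, 40)` yields 43, while a guess already below the model's layer count passes through unchanged.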

* Remove old max layer code

Turns out that at extreme contexts on new models such as Nemo, the old code incorrectly assumes we can offload everything. It's also redundant to check for max layers the old way, since I capped our new guesses.

The old code is now removed to simplify things, and this changed the Nemo guess from 43 layers to 15 layers. Still looking into the 15 part; it still seems too high, but that could be the old algo taking over.

* Restructure algorithm into multiple parts

As requested, the different calculations in the algorithm now have their own sections and names, so it's easier to understand which parts are being used. This also fixes a typo that slipped in while the code was harder to read; the typo made no difference during execution, and the algorithm is confirmed to still work the same.
henk717 authored Jul 22, 2024
1 parent e2b36aa commit e493f14
Showing 1 changed file with 14 additions and 14 deletions: koboldcpp.py
```diff
@@ -607,21 +607,21 @@ def autoset_gpu_layers(filepath,ctxsize,gpumem): #shitty algo to determine how m
             csmul = 1.2
         elif cs and cs > 2048:
             csmul = 1.1
-        if mem < fsize*1.6*csmul:
-            ggufmeta = read_gguf_metadata(filepath)
-            if not ggufmeta or ggufmeta[0]==0: #fail to read or no layers
-                sizeperlayer = fsize*csmul*0.052
-                layerlimit = int(min(200,mem/sizeperlayer))
-            else:
-                layers = ggufmeta[0]
-                headcount = ggufmeta[1]
-                headkvlen = (ggufmeta[2] if ggufmeta[2] > 0 else 128)
-                ratio = mem/(fsize*csmul*1.5)
-                if headcount > 0:
-                    ratio = max(ratio,mem/(fsize*1.34 + (layers*headcount*headkvlen*cs*4.25)))
-                layerlimit = int(ratio*layers)
+        ggufmeta = read_gguf_metadata(filepath)
+        if not ggufmeta or ggufmeta[0]==0: #fail to read or no layers
+            sizeperlayer = fsize*csmul*0.052
+            layerlimit = int(min(200,mem/sizeperlayer))
         else:
-            layerlimit = 200 # assume full offload
+            layers = ggufmeta[0]
+            headcount = ggufmeta[1]
+            headkvlen = (ggufmeta[2] if ggufmeta[2] > 0 else 128)
+            ratio = mem/(fsize*csmul*1.5)
+            computemem = layers*4*headkvlen*cs*4*1.25 # For now the first 4 is the hardcoded result for a blasbatchsize of 512. Ideally we automatically calculate blasbatchsize / 4 but I couldn't easily grab the value yet - Henk
+            contextmem = layers*headcount*headkvlen*cs*4
+            reservedmem = 1.5*1024*1024*1024 # Users often don't have their GPU's VRAM worth of memory, we assume 500MB to avoid driver swapping + 500MB for the OS + 500MB for background apps / browser - Henk
+            if headcount > 0:
+                ratio = max(ratio, (mem - reservedmem - computemem) / (fsize + contextmem))
+            layerlimit = min(int(ratio*layers), (layers + 3))
         return layerlimit
     except Exception as ex:
         return 0
```
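To see how the new estimator behaves end to end, here is a standalone re-implementation of the arithmetic from the diff with invented example numbers (a sketch only: the real function derives mem, fsize, and the layer/head counts from the GPU and the GGUF metadata rather than taking them as parameters):

```python
def estimate_layers(mem, fsize, layers, headcount, headkvlen, cs, csmul):
    # Mirrors the arithmetic of the new algorithm; inputs are supplied
    # directly here instead of being read from the model file.
    ratio = mem / (fsize * csmul * 1.5)
    computemem = layers * 4 * headkvlen * cs * 4 * 1.25   # blasbatchsize-512 scratch
    contextmem = layers * headcount * headkvlen * cs * 4  # KV cache
    reservedmem = 1.5 * 1024 * 1024 * 1024                # driver + OS + background apps
    if headcount > 0:
        ratio = max(ratio, (mem - reservedmem - computemem) / (fsize + contextmem))
    return min(int(ratio * layers), layers + 3)

# Invented example: a 6GiB card, an ~8GB model file with 40 layers,
# 32 KV heads, 128-wide heads, at 4096 context (csmul 1.1).
print(estimate_layers(6 * 1024**3, 8e9, 40, 32, 128, 4096, 1.1))  # prints 19
```

With ample VRAM (say a 48GiB card and the same model) the result is clamped to layers + 3, i.e. 43, matching the cap described in the commit message.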
