
epic: Cortex.cpp Installer MVP (Local & Network Installer) #1030

Closed
2 of 3 tasks
dan-homebrew opened this issue Aug 27, 2024 · 17 comments · Fixed by #1122
Assignees
Labels
category: app shell Installer, updaters, distributions P0: critical Mission critical

Comments

@dan-homebrew
Contributor

dan-homebrew commented Aug 27, 2024

Spec

Tasks

  • Tests

Bugs

@dan-homebrew dan-homebrew converted this from a draft issue Aug 27, 2024
@dan-homebrew dan-homebrew changed the title Finalize Cortex installation methods Cortex Installer and Updater Aug 28, 2024
@hiento09 hiento09 changed the title Cortex Installer and Updater Cortex Installer and Uninstaller Sep 5, 2024
@hiento09 hiento09 changed the title Cortex Installer and Uninstaller Cortex.cpp Installation and Uninstallation Sep 5, 2024
@0xSage 0xSage added the P0: critical Mission critical label Sep 6, 2024
@dan-homebrew dan-homebrew changed the title Cortex.cpp Installation and Uninstallation Cortex.cpp Installer MVP Sep 6, 2024
@dan-homebrew
Contributor Author

dan-homebrew commented Sep 6, 2024

@hiento09 Can I check: will you be working on the Uninstaller as part of this story?

  • I have scoped it down to Installer MVP for now
  • I'm not 100% sure how an uninstaller will work on Mac or Linux - ?

@dan-homebrew dan-homebrew added the category: app shell Installer, updaters, distributions label Sep 6, 2024
@hiento09
Contributor

hiento09 commented Sep 6, 2024

> @hiento09 Can I check: will you be working on the Uninstaller as part of this story?
>
>   • I have scoped it down to Installer MVP for now
>   • I'm not 100% sure how an uninstaller will work on Mac or Linux - ?

I drafted the uninstaller script and integrated it into the installer. I can demo it to the team at today's TGIF, but it's not fully finished yet.

@dan-homebrew
Contributor Author

dan-homebrew commented Sep 16, 2024

@hiento09 I am adding #1217, to track a follow-up discussion on whether we should pre-package llama.cpp.

We need to optimize for user ease-of-use, and trade off some installer size and complexity to get it.

@dan-homebrew
Contributor Author

dan-homebrew commented Sep 18, 2024

@hiento09 I'm re-opening this issue, to leave comments for knowledge capture in the future:

Overview

Goal

As much as possible, I would like to reuse the infrastructure from cortex engines install

  • i.e. just pulling from a local source in the installer
  • vs. writing (and maintaining) more code

Ideal Outcome

  • Installer has a hardware detection lib
  • Installer installs the cortex binary
  • Installer then runs engines install, which first searches the installer's local files, then pulls from remote if unavailable

We would then have fewer installers to maintain, with only two types:

  • Universal Installers = come with everything packaged, large file size but low frustration
  • Alpine Installers = minimal, pulls from remote

I prefer Universal (vs. "Full") and Alpine (vs. "Network"), but am open to other options (e.g. Nvidia uses Full and Network). I think we need to communicate that we are packaging multiple binary options, which may be useful especially if people have multiple GPU types in their computers in the future.

Local Installers

  • Windows (universal)
  • Mac (universal)
  • Linux

Network Installers

  • Windows
  • Mac
  • Linux

We can pack both ARM and x64 into a universal installer, as I don't think they are very big

@dan-homebrew dan-homebrew reopened this Sep 18, 2024
@github-project-automation github-project-automation bot moved this from Completed to Scheduled in Jan & Cortex Sep 18, 2024
@hiento09
Contributor

hiento09 commented Sep 18, 2024

Alpine Installers

Minimal, pulls from remote resources => already implemented in the current nightly build.

Universal Installers: Comes with everything packaged.

Pros: Lower frustration, allows offline installation.
Cons:

  • Large file size.
  • Requires changes to the current code => Need the Cortex team to provide an ETA.
  • The installer needs to include engine files.
  • GitHub releases don't support file sizes > 2GB, so we might not be able to publish this file on GitHub. We could consider pushing it to S3 instead (see: GitHub Docs).

I envision two approaches for this:

  • Approach 1:
    • Following Daniel’s ideal outcome, I will package the entire llamacpp with the installer and extract it into a specific Temp folder.
    • cortex engines install will first check this folder. If the required tar.gz engine file exists, it will install it. If not, it will search for the file on the internet.
  • Approach 2:
    • Compile cpuinfo into a binary file.
    • The installer will use this binary to detect the correct engines, then extract and copy the engine files to the appropriate path (PowerShell for Windows, Bash for Linux and macOS).

cc @dan-homebrew

@dan-homebrew
Contributor Author

dan-homebrew commented Sep 18, 2024

@hiento09 I think 2GB is a good "upper limit" for us to try to stay within; I think win-cu11 and win-cu12 added together are only 1GB.

Quick questions:

  • I thought Approach 1 and 2 are actually both needed
  • first detect hardware info (I think we are using hwinfo, or are we using cpuinfo?)
  • then pick it from either the local installer or remote.

Ref: #1229

@hiento09
Contributor

> @hiento09 I think 2GB is a good "upper limit" for us to try to stay within; I think win-cu11 and win-cu12 added together are only 1GB.
>
> Quick questions:
>
>   • I thought Approach 1 and 2 are actually both needed
>   • first detect hardware info (I think we are using hwinfo, or are we using cpuinfo?)
>   • then pick it from either the local installer or remote.
>
> Ref: #1229

Yeah, I mean hwinfo.

@hiento09
Contributor

hiento09 commented Sep 18, 2024

@dan-homebrew
It is different: the current source code for the cortex engines install command does not support looking up or installing from a local source. In Approach 1, the developer will have to make significant changes to the code on the development side.

Approach 2 would mean the developer doesn’t need to modify the current code; I would make the changes in the installer instead, which would involve heavy modifications to the installer scripts.

@dan-homebrew
Contributor Author

dan-homebrew commented Sep 18, 2024

@hiento09 I see what you mean now. I would like to adopt Approach 1.

  1. Installer will "install" the correct cortex.cpp version to /bin
  2. Installer, possibly at post-install step, will call cortex engines install llamacpp
  3. cortex engines install llamacpp should search for local engine files in installer, and if not available then pull from remote
  4. We may have to, in the longer-run, do a cortex engines update llamacpp step as well

This is why I tagged @namchuai in #1217 (or should it be @vansangpfiev?)

  • You need to let me know a time estimate of how long it takes for this
  • We will need to discuss how cortex engines install llamacpp might work for local engine files, especially since installer packages may differ by operating system.

From my POV, it is easier for us to maintain most of the logic in cortex engines install, while keeping the installer straightforward and maintainable.

@dan-homebrew
Contributor Author

dan-homebrew commented Sep 18, 2024

@hiento09 @namchuai @vansangpfiev It may make sense for cortex engines install llamacpp to be able to take in a --source parameter, to make things easier for the C++ team:

cortex engines install llamacpp --source <path_or_url>

  • hwinfo can detect the AVX, GPU etc
  • Installer can pass the source path to cortex engines

From my naive POV, this would make the changes on cortex engines a lot simpler.

@vansangpfiev
Contributor

> You need to let me know a time estimate of how long it takes for this

@dan-homebrew - I will start work on this ticket after I'm done testing the llamacpp log issue. Hopefully I can have something to integrate into the installer on Friday.

> cortex engines install llamacpp --source <path_or_url>

Yep, I totally agree with the above approach.

@dan-homebrew
Contributor Author

> I will start work on this ticket after I'm done testing the llamacpp log issue. Hopefully I can have something to integrate into the installer on Friday.
>
> Yep, I totally agree with the above approach.

Thanks for picking this up, @vansangpfiev

@dan-homebrew dan-homebrew changed the title epic: Cortex.cpp Installer MVP epic: Cortex.cpp Installer MVP (Universal and Alpine) Sep 19, 2024
@dan-homebrew dan-homebrew changed the title epic: Cortex.cpp Installer MVP (Universal and Alpine) epic: Cortex.cpp Installer MVP (Local & Network Installer) Sep 19, 2024
@dan-homebrew dan-homebrew moved this from In Review to Review + QA in Jan & Cortex Sep 29, 2024
@gabrielle-ong
Contributor

QA (WIP)
Windows: windows-dev-tensorrt-llm
Ubuntu: test-ubuntu-app-cpu-2
Mac: test-hackintosh-sonoma-app

  • Check with no internet connection
  • Check that the installer succeeds without internet
  • Check that engine dependencies (e.g. llamacpp) are installed - cortex engines list

For reference how to turn off network on VMs in Proxmox:

  1. On instance settings > Hardware > Network Device (net0) > Edit
  2. VLAN Tag > set to empty (no VLAN) [usual value is 2]
  3. Now test on the instance, it will not have internet

To check:
[ ] Bug on Linux and Mac - 10s delay before the next command prompt when offline

@gabrielle-ong
Contributor

gabrielle-ong commented Oct 1, 2024

@hiento09 for Mac Local Installer v127 - it is very small at 10MB, is that expected?
vs the Linux and Windows local installers, which are 1.7GB and 1.8GB

@hiento09
Contributor

hiento09 commented Oct 1, 2024

> @hiento09 for Mac Local Installer v127 - it is very small at 10MB, is that expected? vs the Linux and Windows local installers, which are 1.7GB and 1.8GB

Yes, that's expected @gabrielle-ong, because there are only two cortex.llamacpp variants for macOS.

@gabrielle-ong
Contributor

Closing this HUGE EPIC, thank you @hiento09!

@gabrielle-ong gabrielle-ong moved this from Review + QA to Completed in Jan & Cortex Oct 5, 2024