bug: Cortex exited with code null immediately after loading a model #1105

Closed
mshpp opened this issue Jun 22, 2024 · 8 comments

Labels: category: hardware management (Related to hardware & compute) · type: bug (Something isn't working)

mshpp commented Jun 22, 2024

  • I have searched the existing issues

Current behavior

Loading even the TinyLlama Chat 1.1B model doesn't work: Cortex seems to crash immediately after loading the model. This occurs on a fresh AppImage install under Fedora 40.

Minimal reproduction steps

  1. Open Jan
  2. Download TinyLlama when the app prompts you to do so
  3. Write something as the input
  4. Press enter

Expected behavior

The model should load and run without a problem.

Screenshots / Logs

2024-06-22T20:03:36.375Z [CORTEX]::CPU information - 2
2024-06-22T20:03:36.377Z [CORTEX]::Debug: Request to kill cortex
2024-06-22T20:03:36.429Z [CORTEX]::Debug: cortex process is terminated
2024-06-22T20:03:36.430Z [CORTEX]::Debug: Spawning cortex subprocess...
2024-06-22T20:03:36.431Z [CORTEX]::Debug: Spawn cortex at path: /home/user/jan/extensions/@janhq/inference-cortex-extension/dist/bin/linux-cpu/cortex-cpp, and args: 1,127.0.0.1,3928
2024-06-22T20:03:36.432Z [APP]::/home/user/jan/extensions/@janhq/inference-cortex-extension/dist/bin/linux-cpu
2024-06-22T20:03:36.550Z [CORTEX]::Debug: cortex is ready
2024-06-22T20:03:36.551Z [CORTEX]::Debug: Loading model with params {"cpu_threads":2,"ctx_len":2048,"prompt_template":"<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>","llama_model_path":"/home/user/jan/models/tinyllama-1.1b/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf","ngl":23,"system_prompt":"<|system|>\n","user_prompt":"<|user|>\n","ai_prompt":"<|assistant|>","model":"tinyllama-1.1b"}
2024-06-22T20:03:36.746Z [CORTEX]::Debug: cortex exited with code: null
2024-06-22T20:03:37.661Z [CORTEX]::Error: Load model failed with error TypeError: fetch failed
2024-06-22T20:03:37.661Z [CORTEX]::Error: TypeError: fetch failed

Jan version

0.5.1

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Environment details

Operating System: Fedora 40
Processor: Intel Core i5-3320M, 2C/4T
RAM: 16 GB

mshpp added the type: bug (Something isn't working) label on Jun 22, 2024
vansangpfiev (Contributor) commented:

Thank you for reporting the issue.
Could you please provide more information? It would help us debug:

  • the output of cat /proc/cpuinfo
  • the app.log file, found in the ~/jan/logs/ directory (a quick way to collect both is sketched below)
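
A minimal sketch for collecting both, assuming a default Jan install (the output filenames are arbitrary):

    # Dump CPU information; the "flags" line is the part that matters here
    cat /proc/cpuinfo > cpuinfo.txt

    # Copy the application log out of Jan's default data folder
    cp ~/jan/logs/app.log ./app.log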

mshpp commented Jun 24, 2024

Sure, both files are attached:
cpuinfo.txt
app.log

vansangpfiev (Contributor) commented:

From the log and cpuinfo, I think Cortex crashed because we don't ship an AVX build with the FMA flag turned off.
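
For reference, one quick way to check which vector extensions the CPU advertises (this exact grep pipeline is just an illustration, not the only way):

    # Print which of the relevant SIMD flags this CPU supports.
    # An i5-3320M (Ivy Bridge) reports only "avx" -- no fma, no avx2.
    grep -m1 '^flags' /proc/cpuinfo | grep -o -w -E 'avx|avx2|fma'
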
Could you please download a nightly build of cortex.llamacpp and replace the .so lib in this path?
/home/user/jan/extensions/@janhq/inference-cortex-extension/dist/bin/linux-cpu/cortex-cpp/engines/cortex.llamacpp/
Download link:

https://github.com/janhq/cortex.llamacpp/releases/download/v0.1.18-25.06.24/cortex.llamacpp-0.1.18-25.06.24-linux-amd64-noavx.tar.gz
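
A rough sketch of the swap, assuming the tarball unpacks the .so into the current directory and Jan uses its default data folder (back up the original lib first; if the archive unpacks into a subdirectory instead, adjust the final cp accordingly):

    # Engine directory from the path above (default Jan install)
    ENGINE_DIR=/home/user/jan/extensions/@janhq/inference-cortex-extension/dist/bin/linux-cpu/cortex-cpp/engines/cortex.llamacpp

    # Fetch and unpack the nightly noavx build
    cd "$(mktemp -d)"
    curl -LO https://github.com/janhq/cortex.llamacpp/releases/download/v0.1.18-25.06.24/cortex.llamacpp-0.1.18-25.06.24-linux-amd64-noavx.tar.gz
    tar -xzf cortex.llamacpp-0.1.18-25.06.24-linux-amd64-noavx.tar.gz

    # Keep a backup of the original engine lib, then drop in the noavx build
    mkdir -p "$ENGINE_DIR/backup"
    mv "$ENGINE_DIR"/*.so "$ENGINE_DIR/backup"/
    cp ./*.so "$ENGINE_DIR"/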

mshpp commented Jul 1, 2024

This works; now I can get a response from the model. However, it seems that only the first round of inference works -- that is, I can only reliably get a single answer. The next one gets stuck on loading for a long time, and then it either completes or it keeps loading indefinitely. This is with the same TinyLlama model, which runs quite fast even on my hardware.

vansangpfiev (Contributor) commented:

Thanks for trying the nightly build. Could you please share the app.log again?

Van-QA removed their assignment on Jul 3, 2024

dan-homebrew commented Aug 28, 2024

Potentially linked to: #1144?

(Unlikely: the user was able to get one round of inference to work)

dan-homebrew (Contributor) commented:

@mshpp I am closing this issue, as we are moving to cortex.cpp, which is written entirely in C++. The JS version had too many stability issues (like this one).

I will be adding this to our Test Checklist for the C++ v0.1, and we'll reach out again for you to test v0.1 when we release it (ETA: 2 weeks).

dan-homebrew (Contributor) commented:

Tracked in #1147
