bug: Cortex exited with code null immediately after loading a model #1105
Description
- I have searched the existing issues
Current behavior
Loading even the TinyLlama Chat 1.1B model does not work: Cortex appears to crash immediately after the model is loaded. This happens on a fresh AppImage install under Fedora 40.
Minimum reproduction step
- Open Jan
- Download TinyLlama when the app prompts you to do so
- Type a message into the input field
- Press Enter
Expected behavior
The model should load and run without a problem.
Screenshots / Logs
2024-06-22T20:03:36.375Z [CORTEX]::CPU information - 2
2024-06-22T20:03:36.377Z [CORTEX]::Debug: Request to kill cortex
2024-06-22T20:03:36.429Z [CORTEX]::Debug: cortex process is terminated
2024-06-22T20:03:36.430Z [CORTEX]::Debug: Spawning cortex subprocess...
2024-06-22T20:03:36.431Z [CORTEX]::Debug: Spawn cortex at path: /home/user/jan/extensions/@janhq/inference-cortex-extension/dist/bin/linux-cpu/cortex-cpp, and args: 1,127.0.0.1,3928
2024-06-22T20:03:36.432Z [APP]::/home/user/jan/extensions/@janhq/inference-cortex-extension/dist/bin/linux-cpu
2024-06-22T20:03:36.550Z [CORTEX]::Debug: cortex is ready
2024-06-22T20:03:36.551Z [CORTEX]::Debug: Loading model with params {"cpu_threads":2,"ctx_len":2048,"prompt_template":"<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>","llama_model_path":"/home/user/jan/models/tinyllama-1.1b/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf","ngl":23,"system_prompt":"<|system|>\n","user_prompt":"<|user|>\n","ai_prompt":"<|assistant|>","model":"tinyllama-1.1b"}
2024-06-22T20:03:36.746Z [CORTEX]::Debug: cortex exited with code: null
2024-06-22T20:03:37.661Z [CORTEX]::Error: Load model failed with error TypeError: fetch failed
2024-06-22T20:03:37.661Z [CORTEX]::Error: TypeError: fetch failed
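For reference, Node's child_process reports "exited with code: null" when the child was terminated by a signal rather than exiting on its own. Below is a minimal diagnostic sketch of my own (not part of Jan): it respawns the same cortex-cpp binary with the same arguments shown in the log above and, once the process dies (trigger the model load from Jan or the cortex HTTP API while it is running), prints which signal killed it. Path and arguments are taken from the log; everything else is illustrative.

import signal
import subprocess

# Spawn the same binary with the same args that Jan uses (see the log above).
CORTEX_BIN = (
    "/home/user/jan/extensions/@janhq/inference-cortex-extension"
    "/dist/bin/linux-cpu/cortex-cpp"
)
proc = subprocess.Popen([CORTEX_BIN, "1", "127.0.0.1", "3928"])
print(f"cortex-cpp running with PID {proc.pid}; trigger the model load now")

returncode = proc.wait()
if returncode < 0:
    # On POSIX a negative return code means the process was killed by a signal.
    print(f"cortex-cpp killed by signal {signal.Signals(-returncode).name}")
else:
    print(f"cortex-cpp exited with code {returncode}")

If this reports something like SIGILL, that would suggest the CPU build of cortex-cpp uses instructions this processor does not support; SIGSEGV or SIGKILL would point elsewhere.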
Jan version
0.5.1
In which operating systems have you tested?
- macOS
- Windows
- Linux
Environment details
Operating System: Fedora 40
Processor: Intel Core i5-3320M, 2C/4T
RAM: 16 GB