
Releases: shubham0204/SmolChat-Android

v0.0.4

31 Jan 17:01
  • The 'Download Models' screen now includes an interface to browse HuggingFace models and download them directly from the app (#17)
  • Improved handling of errors raised in the native code (#31)
  • The 'Chat Settings' screen now includes a field to configure the model's context size; see the sketch after this list (#34)
  • Sync with upstream llama.cpp (particularly for DeepSeek support)
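
The context-size field maps to llama.cpp's n_ctx parameter. A minimal native-side sketch, assuming llama.cpp's public C API (function names vary across llama.cpp versions, and the app's actual JNI plumbing may differ):

```cpp
#include "llama.h"

// Sketch only: the app's real code path goes through its own native bindings.
llama_context * create_context(llama_model * model, int user_context_size) {
    llama_context_params params = llama_context_default_params();
    // n_ctx is the number of tokens the context window can hold; the value
    // here would come from the 'Chat Settings' screen.
    params.n_ctx = user_context_size;
    return llama_new_context_with_model(model, params);
}
```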

v0.0.3

31 Dec 09:10
  • This version brings performance improvements on arm64 devices by compiling llama.cpp with Arm64-specific CPU flags; see the sketch after this list (#18)
  • Links/URLs in chat messages are now highlighted and clickable.
  • The list in the select model dialog now has a fixed height so it no longer overlaps the Add Model button (#12)
  • The dialog to select models includes a button to sort models by name or date added.
  • The time taken (in seconds) to generate the response is now displayed below the last response from the LLM. (#7)
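
For context on the arm64 item above: compiling with flags such as -march=armv8.2-a+dotprod (the exact flags used by this project may differ) lets ggml/llama.cpp take its optimized integer dot-product code paths. The intrinsic below is only an illustration of the kind of instruction those flags enable:

```cpp
#if defined(__aarch64__) && defined(__ARM_FEATURE_DOTPROD)
#include <arm_neon.h>

// vdotq_s32 multiplies groups of four int8 values and accumulates the results
// into int32 lanes in a single instruction, which is what speeds up the
// quantized matrix multiplications used during inference.
int32x4_t dot_i8(int32x4_t acc, int8x16_t a, int8x16_t b) {
    return vdotq_s32(acc, a, b);
}
#endif
```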

v0.0.2

08 Dec 08:41
  • Clear previous chat messages when LLMInference::load_model is called
  • Allow rendering non-ASCII characters generated by the LLMs/SLMs in the chat interface
  • Show token generation speed (in tokens/second) for the latest message in the chat interface; see the sketch below (#1)
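
The tokens/second figure is simply the number of generated tokens divided by the wall-clock decode time. A minimal sketch (names are illustrative, not the app's actual code):

```cpp
#include <chrono>

// speed (tokens/s) = generated tokens / elapsed decode time in seconds
double tokens_per_second(int generated_tokens,
                         std::chrono::steady_clock::time_point start,
                         std::chrono::steady_clock::time_point end) {
    const double seconds = std::chrono::duration<double>(end - start).count();
    return seconds > 0.0 ? static_cast<double>(generated_tokens) / seconds : 0.0;
}
```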

v0.0.1

02 Dec 16:36

This release:

  • adds a button to stop response generation
  • disables chat options when no chat is selected