CHANGELOG.md

File metadata and controls

63 lines (63 loc) · 3.72 KB

Change Log

  • v0.18 (11 Oct 2024):
    • Added support for Ollama servers. (No HTTPS proxy is required as Ollama uses unencrypted HTTP.)
  • v0.17 (5 Oct 2024):
    • Corrected a JSON request and reply bug in Hugging Face support caused by API changes
    • Max new tokens for Hugging Face is now hardcoded at 400
  • v0.16 (3 Sept 2024):
    • Corrected a parsing bug in finding the end of the value for the content key
    • Changed default model in config file to gpt-4o
  • v0.15 (17 Sept 2023):
  • v0.14 (25 June 2023):
    • Increased delay after receiving no bytes to 2 seconds to confirm no more incoming packets
  • v0.13 (11 June 2023):
    • Corrected a parsing bug: the ChatGPT JSON reply now arrives as human-readable output with newlines
  • v0.12 (10 May 2023):
    • (New feature) Ability to read replies aloud using SmoothTalker by Firstbyte and Creative Text-to-Speech Reader
    • Word wrap function to avoid breaking up words at the end of a line
    • Consolidated some of the parsed arguments
  • v0.11 (30 Apr 2023):
    • (New feature) Support for Hugging Face API
  • v0.10 (30 Apr 2023):
    • Can customise path to configuration file
  • v0.9 (27 Apr 2023):
    • (New feature) Ability to append conversation history and debug messages to text file
    • (New feature) Display timestamp as a debug option
    • Removed FAR pointers
    • Reduced user entry buffer to 1600 bytes
    • Reduced API body buffer to 12000 bytes
    • Reduced SEND_RECEIVE buffer to 14000 bytes
  • v0.8 (9 Apr 2023):
    • Corrected a small bug in Code Page 437 parsing of UTF-8 characters starting with 0xE2: the designated unknown character was not returned when an unknown character was encountered.
  • v0.7 (8 Apr 2023):
    • Corrected a bug in the previous release where previous message/reply memory was not freed after the program ends.
    • Now uses one-time malloc allocations for the previous message (5000), temp message (5000) and GPT reply (8000) buffers to avoid memory fragmentation.
    • Corrected a memory allocation issue where __far was not used when required
  • v0.6 (8 Apr 2023):
    • Added new feature to send the previous request and ChatGPT reply to give the model more context when answering the latest request.
      • The previous request and ChatGPT reply have to be cached
      • API_BODY_SIZE_BUFFER increased to 15000 bytes
    • Corrected incorrect printing of the uint16_t outgoing port value in debug mode
  • v0.5 (5 Apr 2023):
    • Corrected a bug where the number of bytes to read from MTCP was always the same even though the buffer already contained bytes from a previous read.
  • v0.4 (1 Apr 2023):
    • Updated to use MTCP 2023-03-31
  • v0.3 (1 Apr 2023):
    • Escape " and \ characters in user input
    • Print \ from JSON without the escape character
    • Added a 4096-byte buffer for the post-escape message string; API Body buffer increased to 6144 bytes
    • Wait a further 200 ms after the last non-zero byte is received to be sure there are no more bytes incoming from the socket
    • Compiled with Open Watcom 2.0 Beta (2023-04-01 build)
  • v0.2 (30 Mar 2023):
    • Compiled with Open Watcom 2.0 Beta (2023-03-04 build), which solves the issue of the app not starting on some PCs.
    • Show date and time of compilation
    • Will parse and print quotes that were escaped in the JSON reply
    • Reduced size of the user text entry buffer from 10240 to 2048 characters to reduce memory usage.
    • Use the same buffer for send and receive on the socket to further cut down memory usage.
    • API Body buffer dropped to 4096 bytes
    • Print the temperature to only one decimal place at start
  • v0.1 (26 Mar 2023):
    • Initial release
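The v0.3 entries mention escaping `"` and `\` characters in user input before embedding them in the API body. A minimal sketch of that rule, assuming a hypothetical helper named `escape_json` that is not part of the original source (the release used a 4096-byte buffer for the post-escape string):

```c
#include <stddef.h>

/* Hypothetical helper (not from the original source) illustrating the
 * v0.3 rule: copy src into dst, prefixing every '"' and '\' with a
 * backslash so the text can be embedded in a JSON string.
 * Returns 0 on success, -1 if dst (dst_size bytes) would overflow. */
int escape_json(const char *src, char *dst, size_t dst_size)
{
    size_t i, j = 0;
    if (dst_size == 0)
        return -1;                      /* no room even for the NUL */
    for (i = 0; src[i] != '\0'; i++) {
        if (src[i] == '"' || src[i] == '\\') {
            if (j + 2 >= dst_size)
                return -1;              /* no room for escape + char */
            dst[j++] = '\\';            /* emit the escape character */
        } else if (j + 1 >= dst_size) {
            return -1;                  /* no room for char + NUL */
        }
        dst[j++] = src[i];
    }
    dst[j] = '\0';
    return 0;
}
```

For example, escaping the input `say "hi"` produces `say \"hi\"`, which is safe inside a JSON string body.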
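The v0.8 entry concerns the Code Page 437 handling of UTF-8 characters starting with 0xE2. A minimal sketch of the fixed behaviour, assuming illustrative mappings and a hypothetical `?` fallback (the actual mapping table and designated unknown character are not shown in this log):

```c
/* Illustrative sketch (not the original mapping table) of the v0.8 fix:
 * UTF-8 sequences beginning with 0xE2 are three bytes long. Map the few
 * recognised punctuation characters to a Code Page 437 equivalent and,
 * crucially, fall back to a designated unknown character for everything
 * else instead of returning nothing (the pre-0.8 bug). */
#define CP437_UNKNOWN '?'   /* assumed fallback, for illustration */

char cp437_from_e2(unsigned char b2, unsigned char b3)
{
    if (b2 == 0x80) {
        switch (b3) {
        case 0x98: case 0x99: return '\'';  /* curly single quotes */
        case 0x9C: case 0x9D: return '"';   /* curly double quotes */
        case 0x93: case 0x94: return '-';   /* en and em dashes    */
        }
    }
    return CP437_UNKNOWN;  /* the v0.8 fix: always return a character */
}
```

Returning a character in every code path is the point of the fix: the pre-0.8 parser could encounter an unmapped sequence and emit nothing at all.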