This repository accompanies the blogpost.
It ingests a set of meeting notes called MeetingBank and creates summaries for each entry via a local Ollama instance. The Rails application then displays the meeting notes, along with the pre-generated summaries, and a chat window where you can query an LLM about the currently selected meeting. The LLM uses the selected meeting as well as the chat history as context for its replies.
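For illustration, here is a minimal sketch of what that chat call could look like using langchainrb's Ollama wrapper (linked below). This is a guess at the flow, not the repo's exact code: the model name, prompt wording, and variable names are assumptions.

```ruby
require "langchain" # the langchainrb gem

# A rough sketch, not the repo's code: the transcript, history, and
# prompt wording are made up for illustration.
llm = Langchain::LLM::Ollama.new(url: "http://localhost:11434")

transcript   = "Motion to approve the budget carried 5-0. ..." # selected meeting
chat_history = [] # prior { role:, content: } turns, empty on the first question

messages = [
  { role: "system", content: "Answer questions about this meeting:\n#{transcript}" },
  *chat_history,
  { role: "user", content: "Was there any disagreement about the budget?" }
]

# The selected meeting plus the running chat history go in as context on every turn.
reply = llm.chat(messages: messages, model: "gemma2:2b").chat_completion
```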
This application also highlights the importance of cleaning your sources, as shown below, where the LLM confidently asserts that there is tension in the meeting when in fact there was a mistranscription in the notes.
- Install Ollama: https://ollama.com/download/linux
- `ollama run gemma2:2b` to download the model
- `docker compose up` to run Postgres with pgvector
- `ruby seeds.rb` to seed the database (optional if not using the pre-generated seeds; a rough sketch of this step follows the list)
- `bin/setup` to set up dependencies
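As a rough idea of what the seeding step does, here is a sketch assuming a local JSON dump of MeetingBank and a `Meeting` model; the file name and column names are assumptions.

```ruby
require "json"
require "langchain"

llm = Langchain::LLM::Ollama.new(url: "http://localhost:11434")

# Generate a summary per MeetingBank entry up front, so the Rails app
# can display it without calling the LLM on every page load.
JSON.parse(File.read("meetingbank.json")).each do |entry|
  summary = llm.complete(
    prompt: "Summarize these meeting notes:\n\n#{entry["transcript"]}",
    model: "gemma2:2b"
  ).completion
  Meeting.create!(transcript: entry["transcript"], summary: summary)
end
```

Pre-generating the summaries also means the checked-in seeds can be reused without a running LLM.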
- https://github.com/patterns-ai-core/langchainrb/blob/main/lib/langchain/llm/ollama.rb
- https://github.com/rails/rails/tree/main/guides/bug_report_templates
- https://greg.molnar.io/blog/a-single-file-rails-application/
- https://github.com/hopsoft/sr_mini
- https://thoughtbot.com/blog/talking-to-actioncable-without-rails
- https://www.mintbit.com/blog/subscribing-sending-and-receiving-actioncable-messages-with-js
- https://github.com/sonyarianto/react-without-buildsteps
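The ActionCable links above hint at how the chat window talks to the backend. A minimal sketch of what such a channel could look like; the channel, model, and payload names are all assumptions, not the repo's code.

```ruby
# Hypothetical chat channel: the browser subscribes with a meeting id,
# performs "ask" with a question, and receives the LLM reply as a broadcast.
class ChatChannel < ApplicationCable::Channel
  def subscribed
    stream_from "chat_#{params[:meeting_id]}"
  end

  def ask(data)
    meeting = Meeting.find(params[:meeting_id]) # assumed model
    llm = Langchain::LLM::Ollama.new(url: "http://localhost:11434")
    reply = llm.chat(
      messages: [
        { role: "system", content: "Context:\n#{meeting.transcript}" },
        { role: "user", content: data["question"] }
      ],
      model: "gemma2:2b"
    ).chat_completion
    ActionCable.server.broadcast("chat_#{params[:meeting_id]}", { reply: reply })
  end
end
```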