The "tip of the tongue" phenomenon, where individuals struggle to recall a movie title despite vividly remembering its plot, presents a distinctive cognitive challenge. Our project harnesses the capabilities of Large Language Models (LLMs) to bridge this gap between human memory and artificial intelligence: given an abstract or partial plot description, the model should surface the title the user is trying to recall.
These models excel at recognizing patterns in large natural language corpora, which allows them to generate responses that mirror the style and structure of their training data. Our research leverages this capability by training a language model on a comprehensive dataset of movie plots spanning a range of genres and eras. This addresses a common memory-retrieval problem and improves how people search for and recall the films they have seen.
We conduct a series of experiments that fine-tune a pre-trained language model on detailed movie plot descriptions and related metadata. We assess the model's effectiveness at several stages of training and fine-tuning, aiming to demonstrate significant improvements in retrieving titles from plot descriptions. Because this problem is relatively underexplored in the AI space, and our ideation and implementation are not based on existing literature, the project was conducted purely as an experimental endeavor to assess feasibility and establish a proof of concept.
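As a rough illustration of this fine-tuning workflow, the sketch below trains a Hugging Face causal language model on plot/title pairs formatted as text prompts and then prompts it with a half-remembered plot. The `movie_plots.jsonl` file, the `gpt2` base model, the prompt template, and all hyperparameters are assumptions made for illustration; the project's actual model, data format, and training setup may differ (see the report for details).

```python
# Minimal fine-tuning sketch (illustrative only). Assumes a hypothetical
# JSON Lines file "movie_plots.jsonl" with "plot" and "title" fields;
# the base model, prompt format, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "gpt2"  # placeholder base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Each (plot, title) pair becomes one "Plot: ... Title: ..." sequence
# for causal language modeling.
raw = load_dataset("json", data_files="movie_plots.jsonl", split="train")

def format_and_tokenize(example):
    text = f"Plot: {example['plot']}\nTitle: {example['title']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = raw.map(format_and_tokenize, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="cinesleuth-ft",      # illustrative output path
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=5e-5,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# At inference time, a user's half-remembered plot is wrapped in the same
# prompt format and the model completes it with a candidate title.
prompt = "Plot: A hacker learns reality is a simulation and joins a rebellion.\nTitle:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=12, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Evaluation at different checkpoints can follow the same pattern: generate a title for each held-out plot and score exact or fuzzy matches against the ground-truth title.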
Report: https://github.com/vrinda41198/CineSleuth/blob/main/Report.pdf