
Google - Gemini Long Context | Kaggle #928

Open
1 task
ShellLM opened this issue Nov 4, 2024 · 1 comment
Labels
ai-platform (model hosts and APIs), llm (Large Language Models), use-cases (user use case descriptions)

Comments

ShellLM (Collaborator) commented Nov 4, 2024

Google - Gemini Long Context | Kaggle

Snippet

Google - Gemini Long Context
Demonstrate interesting use cases for Gemini's long context window

Overview
A differentiating factor for the Gemini 1.5 model is its large context window that supports context caching. 

This competition is an open-ended call-to-action to share public Kaggle Notebooks and YouTube Videos demonstrating interesting use cases for Gemini 1.5's long context window.

Full Content

TITLE: Google - Gemini Long Context | Kaggle

GOOGLE · ANALYTICS COMPETITION · A MONTH TO GO

Google - Gemini Long Context
Demonstrate interesting use cases for Gemini's long context window

Overview
A differentiating factor for the Gemini 1.5 model is its large context window that supports context caching.

This competition is an open-ended call-to-action to share public Kaggle Notebooks and YouTube Videos demonstrating interesting use cases for Gemini 1.5's long context window.
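
The context caching mentioned above lets a notebook upload a very large input once and reuse it across many queries. The following is a minimal sketch, assuming the google-generativeai Python SDK's caching interface; the model version, file name, TTL, and prompts are illustrative placeholders rather than anything specified by the competition.

```python
# Minimal sketch of Gemini context caching (assumes the google-generativeai SDK).
# Model version, file name, TTL, and prompts are illustrative placeholders.
import datetime

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")  # assumed key setup

# Read a large corpus once (e.g. a whole codebase dump or book-length text).
with open("large_corpus.txt") as f:
    corpus = f.read()

# Cache the long context so repeated queries do not re-send the full input.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    display_name="long-context-corpus",
    system_instruction="Answer questions using only the cached document.",
    contents=[corpus],
    ttl=datetime.timedelta(minutes=60),
)

# Bind a model to the cached content and query it repeatedly.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("Summarize the main themes of the document.")
print(response.text)
```

Caching is also what the "Efficient" item in the evaluation rubric below rewards.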

Start: 18 days ago
Close: a month to go
Description
Gemini 1.5 introduced a major breakthrough in AI with its notably large context window. It can process up to 2 million tokens at once vs. the typical 32,000 - 128,000 tokens. This is equivalent to being able to remember roughly 100,000 lines of code, 10 years of text messages, or 16 average English novels.

With large context windows, methods like vector databases and RAG, which were built to overcome short context windows, become less important, and more direct methods such as in-context retrieval become viable instead. Likewise, methods like many-shot prompting, where a model is given hundreds or thousands of examples of a task as either a replacement for or a supplement to fine-tuning, also become practical.
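
To make the many-shot idea concrete, a notebook could pack a large set of labeled examples directly into a single prompt instead of fine-tuning. A minimal sketch, assuming the google-generativeai Python SDK; the model name, task, and example data are placeholders:

```python
# Minimal sketch of many-shot prompting (assumes the google-generativeai SDK).
# Model name, task, and example data are illustrative placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# Hundreds or thousands of labeled examples, supplied in-context rather than via fine-tuning.
examples = [
    ("I loved this movie", "positive"),
    ("Terrible service, never again", "negative"),
] * 500  # placeholder data; a real notebook would load a dataset here

shots = "\n\n".join(f"Review: {text}\nLabel: {label}" for text, label in examples)
prompt = (
    "Classify the sentiment of the final review, following the labeled examples.\n\n"
    f"{shots}\n\n"
    "Review: The plot dragged but the acting was superb\nLabel:"
)

response = model.generate_content(prompt)
print(response.text)
```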

In initial tests, the Google DeepMind team saw very promising results, with state-of-the-art performance in long-document QA, long-video QA, and long-context ASR. They gave Gemini 1.5 an entire code base and had it successfully create documentation. They also had the model "watch" the 1924 film Sherlock Jr. and answer questions about it correctly.

This competition challenges you to stress test Gemini 1.5's long context window by building public Kaggle Notebooks and YouTube Videos that demonstrate creative use cases. We're eager to see what you build!

Submission Instructions
To make a submission to the competition, use this Google Form:

Include a link to a public Kaggle notebook that is attached to this competition.
Include a link to a public Kaggle dataset that contains the data your model used as context.
Include a link to a YouTube or YouTube Short video that outlines your completed project.
Eligibility Requirements:

The notebook made use of the Gemini-1.5-Pro, Gemini-1.5-Flash, or Gemini-1.5-Flash-8B API.
The notebook demonstrated how to process inputs greater than 100,000 tokens and contained a discussion of why long context was helpful for the selected use case (see the token-counting sketch after this list).
The video summarized a notebook from the Gemini long-context window competition where the author was a contributing member.
The video is public, less than 5 minutes long, and was posted to either YouTube or YouTube Shorts.
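
For the 100,000-token eligibility requirement above, a notebook can verify its input size with the SDK's token counter before generating. A minimal sketch, assuming the google-generativeai Python SDK; the file name and model choice are placeholders:

```python
# Minimal sketch of checking input size against the 100,000-token requirement
# (assumes the google-generativeai SDK; file and model names are placeholders).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

with open("context_dataset.txt") as f:
    long_context = f.read()

# count_tokens reports the tokenized size without running a generation call.
total = model.count_tokens(long_context).total_tokens
print(f"Input size: {total:,} tokens")
assert total > 100_000, "Eligibility requires inputs greater than 100,000 tokens"

response = model.generate_content([long_context, "List the key entities mentioned above."])
print(response.text)
```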
Evaluation
Notebook: Evaluation Rubric (50pts)
Useful: The model produced outputs that were helpful or high quality. [0-10pts]
Informative: The discussion about how or why long context windows enabled the chosen use case was both detailed and accurate. [0-10pts]
Interesting: The notebook demonstrated a use case that was somehow interesting or engaging. [0-10pts]
Documented: The notebook was well-documented and demonstrated best practices. [0-10pts]
Efficient: The notebook made efficient use of context-caching where applicable. [0-5pts]
Novel: The notebook demonstrated a use case that was somehow surprising, new, or novel. [0-5pts]
Video: Evaluation Rubric (40pts)
Accurate: Presented information that was accurate and that made use of current best practices. [0-10pts]
Informative: Discussed topics such as long-context windows, context-caching, and how/why they were central to the project. [0-10pts]
Instructional: Serves as a valuable learning resource for Gemini 1.5 API users. [0-10pts]
Entertaining: The video was enjoyable to watch and the production quality was professional. [0-10pts]
Timeline
October 17, 2024 - Start Date.
December 1, 2024 - Entry Deadline. You must accept the competition rules before this date in order to compete.
December 1, 2024 - Team Merger Deadline. This is the last day participants may join or merge teams.
December 1, 2024 - Final Submission Deadline.
All deadlines are at 11:59 PM UTC on the corresponding day unless otherwise noted. The competition organizers reserve the right to update the contest timeline if they deem it necessary.

Prizes
1st place: $25,000 USD
2nd place: $25,000 USD
3rd place: $25,000 USD
4th place: $25,000 USD
Citation
Paige Bailey, Paul Mooney, Ashley Chow, and Addison Howard. Google - Gemini Long Context. https://kaggle.com/competitions/gemini-long-context, 2024. Kaggle.

Competition Host: Google
Prizes & Awards: $100,000 (does not award Points or Medals)
Participation: 4,102 Entrants
Tags: Text Generation, NLP, Video Text, Video Generation

URL

https://www.kaggle.com/competitions/gemini-long-context

Suggested labels

None

ShellLM added the ai-platform (model hosts and APIs), llm (Large Language Models), and use-cases (user use case descriptions) labels on Nov 4, 2024
ShellLM (Collaborator, Author) commented Nov 4, 2024

Related content

#848 similarity score: 0.85
#625 similarity score: 0.85
#363 similarity score: 0.85
#706 similarity score: 0.84
#831 similarity score: 0.83
#750 similarity score: 0.83

Projects
None yet
Development

No branches or pull requests

1 participant