n-gram Keywords need delimiting in OpenAI() #1546

Open
zilch42 opened this issue Sep 26, 2023 · 6 comments

@zilch42
Contributor

zilch42 commented Sep 26, 2023

Hi Maarten,

I think there is a bug in the OpenAI representation model in the way the prompt is generated. The keywords are only separated by a space, not a comma, which is problematic for n-grams > 1.

def _create_prompt(self, docs, topic, topics):
    keywords = list(zip(*topics[topic]))[0]

    # Use the Default Chat Prompt
    if self.prompt == DEFAULT_CHAT_PROMPT or self.prompt == DEFAULT_PROMPT:
        prompt = self.prompt.replace("[KEYWORDS]", " ".join(keywords))
        prompt = self._replace_documents(prompt, docs)
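
Just to illustrate with some made-up bigram keywords (not the actual model output), the n-gram boundaries disappear once they are joined with spaces:

keywords = ("climate change", "land use", "carbon cycle")  # made-up bigrams, purely for illustration

print(" ".join(keywords))   # climate change land use carbon cycle  <- keyword boundaries lost
print(", ".join(keywords))  # climate change, land use, carbon cycle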

Without proper delimiting I end up with a prompt like this:

I have a topic that contains the following documents: 
- Legumes for mitigation of climate change and the provision of feedstock for biofuels and biorefineries. A review.
- A global spectral library to characterize the world's soil.
- Classification of natural flow regimes in Australia to support environmental flow management.
- Laboratory characterisation of shale properties.
- Effects of climate extremes on the terrestrial carbon cycle: concepts, processes and potential future impacts.
- Threat of plastic pollution to seabirds is global, pervasive, and increasing.
- Pushing the limits in marine species distribution modelling: lessons from the land present challenges and opportunities.
- Land-use futures in the shared socio-economic pathways.
- The WULCA consensus characterization model for water scarcity footprints: assessing impacts of water consumption based on available water remaining (AWARE).
- BIOCHAR APPLICATION TO SOIL: AGRONOMIC AND ENVIRONMENTAL BENEFITS AND UNINTENDED CONSEQUENCES.

The topic is described by the following keywords: food land use global properties climate using review potential change different production environmental data changes high study based years model models time used area future terrestrial plant field analysis management

Based on the information above, extract a short topic label in the following format:
topic: <topic label>

TextGeneration and Cohere look to be okay.

def _create_prompt(self, docs, topic, topics):
    keywords = ", ".join(list(zip(*topics[topic]))[0])

    # Use the default prompt and replace keywords
    if self.prompt == DEFAULT_PROMPT:
        prompt = self.prompt.replace("[KEYWORDS]", keywords)
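
So I assume the fix on the OpenAI side is just to mirror that behaviour, something like (a sketch, not the actual patch):

# join with ", " so n-gram keywords stay delimited
prompt = self.prompt.replace("[KEYWORDS]", ", ".join(keywords))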

It would also be helpful to have some way to generate an example prompt with [DOCUMENTS] and [KEYWORDS] applied, to help with testing, so the user can actually see what's being sent. I've got a custom class because I'm using ChatGPT on AWS, so I've added extra loggers in there, but it's difficult to actually see the prompt in context with standard BERTopic.
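
For what it's worth, the workaround in my custom class boils down to something like this (a rough sketch; the class name and logging setup are just illustrative):

import logging
from bertopic.representation import OpenAI

logger = logging.getLogger("BERTopic")

class LoggingOpenAI(OpenAI):
    # Illustrative subclass: identical behaviour, but logs each prompt as it is built
    def _create_prompt(self, docs, topic, topics):
        prompt = super()._create_prompt(docs, topic, topics)
        logger.info("Prompt for topic %s:\n%s", topic, prompt)
        return prompt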

@MaartenGr
Owner

Thanks for the extensive description! I'll make sure to change it in #1539

It would also be helpful to have some way to generate an example prompt with [DOCUMENTS] and [KEYWORDS] applied, to help with testing, so the user can actually see what's being sent. I've got a custom class because I'm using ChatGPT on AWS, so I've added extra loggers in there, but it's difficult to actually see the prompt in context with standard BERTopic.

That indeed would be helpful. I can enable verbosity to print out the prompts that are given for each call but that might prove to be too much logging if you have a very large dataset.

MaartenGr added a commit that referenced this issue Sep 27, 2023
@zilch42
Contributor Author

zilch42 commented Sep 27, 2023

What I've set up in my custom class just prints the prompt for topic 0 (or the outlier topic if there are no topics), so that might be a good way to go if you want to do it via verbosity rather than make a function that generates just a single prompt.

@MaartenGr
Owner

The thing is, when just one topic is logged, users might want to log every one of them, and vice versa. I might add it to the LLMs themselves as additional verbosity levels, but that feels a bit less intuitive with respect to user experience, since verbosity is handled differently throughout BERTopic.

@zilch42
Contributor Author

zilch42 commented Sep 28, 2023

Yeah, it might be nice to have access to all the prompts in an easier way than extracting them from logs. Is the selection and diversification of representative documents deterministic? If so, rather than looping through the topics, generating a prompt, and getting the description back one by one, you could generate all of the prompts at once and then loop through the prompts to get each representation. The prompt generation could then be abstracted into a function or method exposed to the user, so they could call it to get all the prompts using the same arguments they originally sent to the LLM, and maybe bind them to .get_topic_info() if they wanted...

Pseudocode might be something like:

representation_model = OpenAI(delay_in_seconds=5, nr_docs=10, diversity=0.2)

topic_model = BERTopic(representation_model=representation_model)
topics, probs = topic_model.fit_transform(docs)

topic_info = topic_model.get_topic_info()
topic_info['prompts'] = representation_model.generate_prompts()

Then rather than:

# Generate using OpenAI's Language Model
updated_topics = {}
for topic, docs in tqdm(repr_docs_mappings.items(), disable=not topic_model.verbose):
    truncated_docs = [truncate_document(topic_model, self.doc_length, self.tokenizer, doc) for doc in docs]
    prompt = self._create_prompt(truncated_docs, topic, topics)

    # Delay
    if self.delay_in_seconds:
        time.sleep(self.delay_in_seconds)

    if self.chat:
        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ]
        kwargs = {"model": self.model, "messages": messages, **self.generator_kwargs}
        if self.exponential_backoff:
            response = chat_completions_with_backoff(**kwargs)
        else:
            response = openai.ChatCompletion.create(**kwargs)
        label = response["choices"][0]["message"]["content"].strip().replace("topic: ", "")
    else:
        if self.exponential_backoff:
            response = completions_with_backoff(model=self.model, prompt=prompt, **self.generator_kwargs)
        else:
            response = openai.Completion.create(model=self.model, prompt=prompt, **self.generator_kwargs)
        label = response["choices"][0]["text"].strip()

    updated_topics[topic] = [(label, 1)]

You might have something like:

        # generate prompts 
        prompts = self.generate_prompts(topic_model, repr_docs_mappings, topics)
        
        # log an example prompt 
        logger.info("Example prompt: \n{}".format(prompts[min(1, len(prompts) - 1)]))

        # Generate using OpenAI's Language Model
        updated_topics = {}
        for topic, p in tqdm(zip(repr_docs_mappings, prompts), total=len(prompts), disable=not topic_model.verbose):
            
            # Delay
            if self.delay_in_seconds:
                time.sleep(self.delay_in_seconds)

            if self.chat:
                messages = [
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": p}
                ]
                kwargs = {"model": self.model, "messages": messages, **self.generator_kwargs}
                if self.exponential_backoff:
                    response = chat_completions_with_backoff(**kwargs)
                else:
                    response = openai.ChatCompletion.create(**kwargs)
                label = response["choices"][0]["message"]["content"].strip().replace("topic: ", "")
            else:
                if self.exponential_backoff:
                    response = completions_with_backoff(model=self.model, prompt=p, **self.generator_kwargs)
                else:
                    response = openai.Completion.create(model=self.model, prompt=p, **self.generator_kwargs)
                label = response["choices"][0]["text"].strip()

            updated_topics[topic] = [(label, 1)]

        return updated_topics

    def generate_prompts(self, topic_model, repr_docs_mappings, topics):
        prompts = []
        for topic, docs in repr_docs_mappings.items():
            truncated_docs = [truncate_document(topic_model, self.doc_length, self.tokenizer, doc) for doc in docs]
            prompts.append(self._create_prompt(truncated_docs, topic, topics))
        
        return prompts 

That code is based on #1539 and still needs some work... It works for generating the representations, but representation_model.generate_prompts() still doesn't work, since generate_prompts sits inside extract_topics and relies on some things that aren't easily available from the outside... but there's no use putting more time into it without your feedback first.

@MaartenGr
Owner

Good idea! It is possible to generate the prompts before passing them to the LLM; they are currently not dependent on previous prompts. This might change in the future, however, so I think I would prefer to simply save the prompts after generating them iteratively. Then, you could save the prompts to the representation model and access them there.

Since the prompts are also dependent on the order of representation models (KeyBERT -> OpenAI), I think .generate_prompts would only work if OpenAI were used as a standalone. So that method would not work without running all other representation methods if they exist, which might prove to be computationally too inefficient.

Also, in your example, you would essentially create the prompts twice: once when running .fit_transform and another time when running .generate_prompts. Instead, you could save the prompts to the OpenAI representation model whilst creating the representations during .fit_transform. You could then access the prompts with something like representation_model.generated_prompts_.

Based on that, I would suggest the following: for any LLM-based representation model, save the prompts on the representation model whilst they are being created, with the option of logging each of them or just the first. This would mean that the prompts are created once during .fit_transform and can easily be accessed afterward.
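
In other words, usage would end up looking roughly like this (the attribute name is just illustrative, nothing final):

from bertopic import BERTopic
from bertopic.representation import OpenAI

representation_model = OpenAI(delay_in_seconds=5, nr_docs=10, diversity=0.2)
topic_model = BERTopic(representation_model=representation_model)
topics, probs = topic_model.fit_transform(docs)  # docs: your documents

# The prompts were stored while the representations were created,
# so no second pass over the representation models is needed.
for prompt in representation_model.generated_prompts_:
    print(prompt)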

@zilch42
Contributor Author

zilch42 commented Sep 28, 2023

Yes, very good points. I forget that data can be saved in objects in Python (I think I still approach Python with a bit of an R mindset). That sounds like a great solution.
