I have a question about the roles of N (the number of particles) and K (the factor) in the prompt-intersection task. I am trying to replicate Fig. 4 in the workshop paper, with the same two prompts.
I am observing that, for N > 1 and K = 1, the N returned continuations all differ from one another. As soon as K > 1, however, they are all identical. I attach a couple of examples and my code for the model.
Does the fact that I chose batch_size = 1 matter? I stop generation after 20 tokens.
Example: N = 2, K = 1
20 and has never spoken to me. (Though we may be in the same lecture
compression of time and the expansion of space. Are you aware of the work of John Arch
Example: N = 2, K = 2
19th century English physicist James Clerk Maxwell. His work on elect
19th century English physicist James Clerk Maxwell. His work on elect
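By N and K I mean the second and third positional arguments of smc_steer. The two examples above come from calls like the following (a sketch; the attached script below happens to use 2 and 3):

# N = 2, K = 1
particles = await smc_steer(constraint_model, 2, 1)
# N = 2, K = 2
particles = await smc_steer(constraint_model, 2, 2)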
import asyncio
import os

from hfppl import CachedCausalLM, LMContext, Model, smc_steer

if "HF_AUTH_TOKEN" in os.environ:
    HF_AUTH_TOKEN = os.environ["HF_AUTH_TOKEN"]

# Load the language model.
# Mistral and Vicuna are open models; to use a model with restricted access, like LLaMA 2,
# pass your HuggingFace API key as the optional `auth_token` argument:
# LLM = CachedCausalLM.from_pretrained(
#     "meta-llama/Meta-Llama-3-8B", auth_token=HF_AUTH_TOKEN
# )
LLM = CachedCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")
# LLM = CachedCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
LLM.batch_size = 1

class PromptIntersection(Model):
    # Initialize: one LMContext per prompt. The first context proposes
    # tokens; the others are used to observe (score) the same tokens.
    def __init__(self, prompts, max_tokens):
        super().__init__()
        self.s = ""
        self.prompts = prompts
        self.x = [LMContext(LLM, p) for p in prompts]
        self.max_tokens = max_tokens

    # Generate one token per step.
    async def step(self):
        # Sample the next token from the first prompt's context.
        w = await self.sample(self.x[0].next_token())
        # Reduce the number of max tokens remaining.
        self.max_tokens -= 1
        # Observe the sampled token under every other prompt's context.
        for x in self.x[1:]:
            await self.observe(x.next_token(), w)
        # Finish at EOS or once the token budget is exhausted.
        if w.token_id == LLM.tokenizer.eos_token_id or self.max_tokens == 0:
            self.finish()
        else:
            self.s += w

prompts = ["My favorite physicist is probably ", "My favorite writer is probably "]

async def main():
    constraint_model = PromptIntersection(prompts, 20)
    # Here N = 2 and K = 3; I change these two arguments between runs.
    particles = await smc_steer(constraint_model, 2, 3)
    for p in particles:
        print(f"{p.s}")

asyncio.run(main())
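In case it helps, this is how I compare settings between runs, instead of main() above (a sketch; sweep is just an illustrative name, and I re-instantiate the model for each configuration since a Model instance carries per-run state):

async def sweep():
    # Hypothetical sweep over (N, K) settings; fresh model per run.
    for n, k in [(2, 1), (2, 2), (3, 1)]:
        model = PromptIntersection(prompts, 20)
        particles = await smc_steer(model, n, k)
        print(f"N={n}, K={k}")
        for p in particles:
            print(f"  {p.s}")

asyncio.run(sweep())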