I intended to do some prep work on general tooling pre-November so I could focus more on output rather than on tools...

Based on past years' experience:

The only prep work I did pre-November was to play briefly with ChatGPT 3.5, which I got generating some interesting dialogues, and to work a little with @cpressey's Fountain context-sensitive language tool, which I thought might be a nice alternative to Tracery. When I used Tracery in previous years, I created a lot of rough context-handling code and called out to Tracery in a way that I never quite standardised. Last month I implemented a pseudo-random number generator in Fountain, which I'm pleased with and which could be used for deterministic but varied text generation.
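For illustration, here's a minimal sketch (in Python rather than Fountain) of the idea behind deterministic but varied generation: a seeded PRNG makes every run with the same seed produce the same text, while different seeds vary it. The LCG constants are the common Numerical Recipes ones; the phrase tables are invented purely for this example.

```python
# Sketch: deterministic-but-varied text via a seeded PRNG.
# Same seed -> same output every run; different seed -> a new variant.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: a deterministic pseudo-random stream."""
    while True:
        seed = (a * seed + c) % m
        yield seed

def pick(options, rng):
    """Choose one option, driven entirely by the PRNG stream."""
    return options[next(rng) % len(options)]

def sentence(seed):
    rng = lcg(seed)
    heroes = ["the locksmith", "a safe-blower", "the heiress"]
    deeds = ["inherits", "steals", "misplaces"]
    objects = ["a skull", "the will", "a cipher"]
    return f"{pick(heroes, rng)} {pick(deeds, rng)} {pick(objects, rng)}."

print(sentence(42))  # identical output on every run
print(sentence(43))  # a different, but equally repeatable, variant
```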
I've previously avoided LLMs as being "too easy" and less interesting than other creative options, but I may succumb to the trend, at least partially.
I originally planned to revisit and complete an unfinished project from 2020: NaNoGenMo/2020#34, but the bulk of the work in that is going to be normalizing texts from different sources and consistently tokenising them... which is a bit boring, and was supposed to be what I tooled up in October as a general-purpose utility (totally not cheating!). Anyway, I didn't even do that.
Now I am fixating on an old idea I've had for years that I thought I wouldn't be able to do justice... but maybe ChatGPT will fill in the harder creative bits for me? It's even okay at making arbitrary creative decisions, a bit like Oblique Strategies, to help me figure out how I'd go about it... maybe.

The idea:

A generated H. S. Keeler-inspired web-work story.

(has Keeler been an inspiration for previous NaNoGenMo projects?)

Unfortunately I only really want to play around with the amazing plot diagrams as shown here: https://site.xavier.edu/polt/keeler/onwebwork.html and generate them and model them as networks in something like Gephi, and not worry too much about the narrative. It should be possible to do a good job with this, but the diagrams are just so distracting!
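A minimal sketch of how that network step could look, assuming networkx on the Python side (the characters, plot threads, and file name are all invented for the example); Gephi opens GEXF files directly:

```python
# Sketch: build a small Keeler-style web-work graph and export it
# for Gephi. Nodes are characters, edges are plot-thread crossings.
import networkx as nx

G = nx.DiGraph()
threads = [
    ("Locksmith", "Heiress", "the disputed will"),
    ("Heiress", "Safe-blower", "the stolen skull"),
    ("Locksmith", "Reporter", "the midnight cipher"),
    ("Reporter", "Safe-blower", "the overheard confession"),
]
for src, dst, thread in threads:
    G.add_edge(src, dst, label=thread)

# GEXF preserves the edge labels and opens directly in Gephi.
nx.write_gexf(G, "webwork.gexf")
```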
Rough plan:

- Spend the majority of my time generating complex web networks for their own sake, hopefully making them look pretty (and fighting with Gephi to lay the graphs out linearly).
- Use Fountain or Tracery to generate the high-level plot descriptions for each point, maybe generating a set of AI prompts consistent with the graph, perhaps with some AI input. (I tried to teach ChatGPT Fountain... that was a waste of time.)
- Feed the grammar-generated prompts into ChatGPT (3.5 because I'm cheap, and maybe it's more challenging to get good results?) to generate 50k words (see the sketch after this list).
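A minimal sketch of that prompt pipeline, assuming the tracery Python package and the openai client; the grammar rules, model choice, and wiring are illustrative assumptions, not a settled design:

```python
# Sketch: grammar-generated prompts fed to ChatGPT.
# Requires: pip install tracery openai, and OPENAI_API_KEY set.
# The grammar rules here are invented for illustration.
import tracery
from tracery.modifiers import base_english
from openai import OpenAI

rules = {
    "origin": "Write a scene where #character# #event#, "
              "in the style of a 1930s mystery serial.",
    "character": ["the locksmith", "the heiress", "the safe-blower"],
    "event": ["finds the skull", "decodes the cipher", "misreads the will"],
}
grammar = tracery.Grammar(rules)
grammar.add_modifiers(base_english)

client = OpenAI()
prompt = grammar.flatten("#origin#")  # one deterministic-grammar prompt
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```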
I'm not sure I'm setting myself up for success here. The LLM-generated text may be entertaining (if it's too Keeler-esque it could be tragic). Getting a complex plot right, and feeding it in correctly, might make a big difference, but I'm not quite sure how I can transfer the graph structure through to the LLM reliably. I'm not really sure whether it'll hang together, but it seems to be what I'm thinking about.
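One possible (untested) approach to that transfer problem: if the plot graph is a DAG, topologically sort it and describe each node's incoming threads in its prompt, so the LLM only ever sees the local structure, one chapter at a time. A sketch with networkx, reusing the invented graph from above; real Keeler webs may contain cycles that would need breaking first:

```python
# Sketch: linearise the web-work graph into per-chapter prompt context.
# Assumes an acyclic plot graph (topological_sort fails on cycles).
import networkx as nx

G = nx.DiGraph()
G.add_edge("Locksmith", "Heiress", label="the disputed will")
G.add_edge("Heiress", "Safe-blower", label="the stolen skull")
G.add_edge("Locksmith", "Reporter", label="the midnight cipher")
G.add_edge("Reporter", "Safe-blower", label="the overheard confession")

for chapter, node in enumerate(nx.topological_sort(G), start=1):
    threads_in = [f"{u} (via {d['label']})"
                  for u, _, d in G.in_edges(node, data=True)]
    context = "; ".join(threads_in) or "no prior threads"
    print(f"Chapter {chapter}: focus on {node}. Incoming threads: {context}.")
```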
I have an arbitrary stretch goal I would like to include in any project, but even if successful, I doubt it'll be noticeable, so I don't want to spoil it unless I can do it and have something interesting to say about it (super secret stretch goal).
I don't even know how this will result in a single piece of code -- it's going to be scattered data and code in different languages and formats, with prompts appearing here and there. Are prompts code? They're not even deterministic. This could turn into just an elaborate, barely documentable process...
I also have some smaller back-up plans I could fall back on, or choose to do in addition (breaking my first intent for this year) -- there are so many ways to fail!
Keeler is inspiring me to work by throwing everything together in a way that barely makes sense, doing whatever, padding things out where necessary, and insisting my ideas are great and successful regardless of how things turn out.
Ah, it's so weird when someone decides to use a tool I made, because it reminds me how poorly I support them. Because they're experimental tools, see, they're mostly just there to show what a tool could do, yeah, that's the ticket...
(I did see your comment on the other issue about Fountain, I've been meaning to get around to responding to it, hopefully I will eventually.)
> (has Keeler been an inspiration for previous NaNoGenMo projects?)
I think I remember @MichaelPaulukonis mentioning Keeler as a potential inspiration a while back, but I'm not sure if Keeler ever achieved the status of actual inspiration.
> Are prompts code? They're not even deterministic.
I would say they are code, especially in this context. And I'll note that even "real code" often isn't deterministic (for better or worse).