I’m interested in automatically generating lengthy, coherent stories of 100,000+ words from a single prompt using an open source local large language model (LLM). I came across the “Awesome-Story-Generation” repository, which lists relevant papers describing promising methods like “Re3: Generating Longer Stories With Recursive Reprompting and Revision”, announced in this Twitter thread from October 2022, and “DOC: Improving Long Story Coherence With Detailed Outline Control”, announced in this Twitter thread from December 2022. However, these papers used GPT-3, and I was hoping to find similar techniques implemented with open source tools that I could run locally. If anyone has experience or knows of resources that could help me achieve long, coherent story generation with an open source LLM, I would greatly appreciate any advice or guidance.
Creating a 100,000-word coherent story using an LLM with a limited context window requires strategic planning in how you manage the narrative flow, continuity, and character development over multiple sessions. Here’s a strategy tailored for this scenario:
Detailed Plot Outline:
Expand the Outline: Break the story down into smaller, manageable arcs or segments (e.g., each act could be split into several chapters). Each segment should have its own mini-outline covering:
- Major plot points
- Character development for that segment
- Setting changes
- Key interactions or conflicts

Micro-Outline for Each Chapter: For each chapter within these arcs, sketch:
- Opening scenario
- Middle conflict
- Resolution or cliffhanger
- Character arcs within the chapter
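If you plan to script any of this later, it helps to keep the outline in a machine-readable form from the start. A minimal sketch in Python; the field names are purely illustrative and don’t come from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class ChapterOutline:
    title: str
    opening: str                  # opening scenario
    conflict: str                 # middle conflict
    resolution: str               # resolution or cliffhanger
    character_arcs: list[str] = field(default_factory=list)

@dataclass
class ArcOutline:
    name: str
    plot_points: list[str]
    setting_changes: list[str]
    chapters: list[ChapterOutline] = field(default_factory=list)

# Example: one arc containing a single chapter mini-outline.
act_one = ArcOutline(
    name="Act I: The Summons",
    plot_points=["Protagonist receives the letter", "Leaves the village"],
    setting_changes=["Village -> capital city"],
    chapters=[
        ChapterOutline(
            title="The Letter",
            opening="A quiet morning interrupted by a courier",
            conflict="The family argues about whether to go",
            resolution="Cliffhanger: the courier is found dead",
            character_arcs=["Protagonist moves from reluctance to resolve"],
        )
    ],
)
```

You can then serialize this to JSON and paste only the current chapter’s fields into each prompt.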
Session Management:
Context Management: Because of the limited context window, you’ll need to manage how much information is carried over from session to session.
- Summarize Previous Content: Before each new prompt, provide a concise summary of the narrative so far. This summary should include:
  - Key events
  - Current state of characters
  - Unresolved conflicts or mysteries
  - Setting and time
- Prompt Structure: Begin each prompt with that summary, then give the instruction for the next segment.
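A minimal sketch of that prompt structure in Python, assuming the running summary, the chapter mini-outline, and any style notes are kept as plain strings; the actual model call is left out because it depends on which local runtime (llama.cpp, Ollama, text-generation-webui, etc.) you use:

```python
def build_prompt(summary: str, chapter_outline: str, style_notes: str) -> str:
    """Assemble one generation prompt: summary of the story so far,
    then the outline for the next chapter, then style constraints."""
    return (
        "You are writing one chapter of a long novel.\n\n"
        "STORY SO FAR (summary):\n"
        f"{summary}\n\n"
        "OUTLINE FOR THIS CHAPTER:\n"
        f"{chapter_outline}\n\n"
        "STYLE NOTES:\n"
        f"{style_notes}\n\n"
        "Write the full chapter now. Do not summarize; write continuous prose."
    )
```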
Length of Each Segment: Estimate how many words you can comfortably fit into one session. If your LLM can handle around 2,000 tokens (roughly 1,500 words, depending on the tokenizer), you might aim for each session to produce a chapter of about 1,500 words; at that rate, a 100,000-word story works out to roughly 65–70 sessions.
Continuity and Cohesion:
Character Consistency: Keep a running document of character details, relationships, and developments outside the LLM context, and use it to enforce consistency (one way to structure this is sketched below):
- Character sheets
- Timeline of events

Plot Devices: Use recurring elements or plot devices to maintain cohesion:
- Recurring themes
- Foreshadowing elements from earlier segments

Feedback Loop: After each session, review the output for:
- Continuity errors
- Character voice consistency
- Plot holes
Use this feedback to adjust your next prompts or summaries to address any discrepancies.
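One way to keep such a “story bible” outside the model and feed only the relevant slice back into each prompt. The structure and field names below are just an illustration, not a standard format:

```python
# A "story bible" kept outside the model's context; names and fields are illustrative.
story_bible = {
    "characters": {
        "Mara": {
            "traits": ["stubborn", "superstitious"],
            "status": "travelling to the capital",
            "relationships": {"Jonas": "estranged brother"},
        },
    },
    "timeline": [
        "Ch. 1: Mara receives the letter",
        "Ch. 2: the courier is found dead",
    ],
    "open_threads": ["Who killed the courier?"],
}

def bible_digest(bible: dict, characters_in_scene: list[str]) -> str:
    """Render only the entries relevant to the next chapter, to save tokens."""
    lines = []
    for name in characters_in_scene:
        c = bible["characters"].get(name, {})
        traits = ", ".join(c.get("traits", []))
        lines.append(f"{name}: {c.get('status', 'unknown')}; traits: {traits}")
    lines.append("Open threads: " + "; ".join(bible["open_threads"]))
    return "\n".join(lines)

print(bible_digest(story_bible, ["Mara"]))
```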
Incremental Development:
Iterative Refinement: As you generate content, refine your prompts based on what works; the model may respond better to certain styles of prompt or require more detailed instructions.
Draft and Revise: Treat each segment as a draft. After generating a section, you might need to (see the revision sketch below):
- Revise for coherence with the previous content.
- Adjust the pacing to keep the narrative engaging without overwhelming the reader.
- Enhance character development if it feels lacking.

Dynamic Outlining: Be prepared to adapt your outline as the story progresses. Sometimes the LLM will produce content that suggests new directions or deepens aspects of the plot or characters in ways you hadn’t initially planned.
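The revision step can also be delegated back to the model by prompting it against the notes you keep outside its context. A rough sketch; the instruction wording is only an example:

```python
def revision_prompt(chapter_text: str, known_facts: str) -> str:
    """Ask the model to revise its own draft against facts kept outside its context."""
    return (
        "Revise the chapter below. Keep the plot events the same, but fix any "
        "continuity errors against the known facts, tighten the pacing, and keep "
        "each character's voice consistent.\n\n"
        f"KNOWN FACTS:\n{known_facts}\n\n"
        f"CHAPTER DRAFT:\n{chapter_text}"
    )
```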
Technical Considerations:
Token Management: Since LLMs count tokens rather than words, be aware of how much each prompt and response consumes. Words that split into multiple tokens (proper nouns, rare words) can fill up your context window quickly; a sketch below shows one way to check prompt length.

Prompt Efficiency: Keep prompts concise but informative, and avoid redundant information so you maximize the space left for story development:
- Use bullet points or lists for summaries when possible.
- Focus on key points rather than detailed narratives in your instructions.
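To check how close a prompt is to the window, count tokens with the tokenizer of whichever model you actually run. A sketch using Hugging Face transformers; the model ID and the budget numbers are placeholders, not recommendations:

```python
from transformers import AutoTokenizer

# Tokenizer for whichever local model you run; this model ID is just an example.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

def count_tokens(text: str) -> int:
    return len(tok.encode(text))

context_window = 4096        # depends on your model
reserve_for_output = 2500    # tokens left free for the generated chapter

prompt = "STORY SO FAR: ...\nOUTLINE FOR THIS CHAPTER: ...\nWrite the chapter."
if count_tokens(prompt) > context_window - reserve_for_output:
    print("Prompt too long: shorten the summary before generating.")
```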
Final Assembly and Editing:
Compilation: After generating all segments, compile them into a single document (a small sketch follows below). Here you’ll have the larger context needed to:
- Check for continuity
- Ensure narrative flow
- Address any plot holes or character inconsistencies

Editing: The final step involves editing for:
- Grammar and style
- Pacing and tension
- Theme consistency

External Tools: Consider using external writing tools or collaborators for a final polish, especially if the LLM has limitations in areas like nuanced character development or complex plot twists.
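Compilation itself can be as simple as concatenating the saved chapters, assuming you wrote each session’s output to its own file; the directory layout here is just an example:

```python
from pathlib import Path

# Assumes each generated chapter was saved as chapters/chapter_001.txt, etc.
chapter_files = sorted(Path("chapters").glob("chapter_*.txt"))

manuscript = "\n\n".join(p.read_text(encoding="utf-8") for p in chapter_files)
Path("manuscript.txt").write_text(manuscript, encoding="utf-8")

total_words = len(manuscript.split())
print(f"{len(chapter_files)} chapters, ~{total_words} words")
```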
Iterative Feedback:
Review and Adjust: If possible, after a few segments, review the overall narrative to see if adjustments are needed in how you’re prompting the LLM. This could mean changing how you summarize past events or specifying more about character motivations and interactions.
By employing this strategy, you can leverage the capabilities of an LLM to create an expansive, coherent narrative even with the limitations of context window size. Remember, this process is iterative and might require several attempts to get the balance right between creative generation and maintaining narrative integrity.
Don’t be afraid to revise what you get by hand either
I want to know what’s the best I can get automatically. I don’t want to do or revise anything by hand.
Gross.
urgh.
Then you’re not gonna get anything of value.
Looking through your past comments on Lemmy the only other thing I can see is this:
You’re just not interested in doing anything at all for yourself, huh? You just want to sit there and mindlessly consume whatever shows up in front of you?
“I don’t want to do or revise anything by hand” AI dorks are wonderful. You gonna get an LLM to read the thing for you, too? 😂
Then why bother?
I think it’s impossible then. My experience aligns with these recommendations. First tell it to come up with interesting story ideas. Then pick one. Have it write an outline. Have it come up with story arcs, subplots and a general structure. Chapter names… Then tell it to write the chapters individually, factoring in the results from before. Once it trails off or writes short chapters, edit the text and guide it back to where you want it to be.
It’ll just write bad and maybe short stories unless you do that. I mean you could theoretically automate this. Write a program with some AI agent framework that instructs it to do the individual tasks, have it reflect on itself, always feed back what it came up with and include it in the next task.
I’ve tried doing something like that and I don’t think there is a way around this. Or you do it like the other people and just tell it “Generate a novel” and be fine with whatever result it will come up with. But that just won’t be a good result.
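For what it’s worth, the kind of automation described above doesn’t strictly need an agent framework; a plain loop that calls a local model and feeds a compressed summary forward covers the basic idea. A rough sketch assuming an Ollama server running locally; the model name, prompts, and chapter count are all placeholders, and the quality caveats above still apply:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumes a local Ollama server
MODEL = "llama3"                                     # whichever local model you actually run

def generate(prompt: str) -> str:
    """One non-streaming call to the local model; returns the generated text."""
    r = requests.post(OLLAMA_URL, json={"model": MODEL, "prompt": prompt, "stream": False})
    r.raise_for_status()
    return r.json()["response"]

# Idea -> outline -> chapters, feeding a compressed summary forward each time.
premise = generate("Propose one interesting premise for a long novel, in three sentences.")
outline = generate(
    f"Premise:\n{premise}\n\n"
    "Write a chapter-by-chapter outline (20 chapters), one line per chapter."
)

summary = "Nothing has happened yet."
chapters = []
for i, line in enumerate([l for l in outline.splitlines() if l.strip()], start=1):
    chapter = generate(
        f"STORY SO FAR:\n{summary}\n\n"
        f"OUTLINE FOR CHAPTER {i}:\n{line}\n\n"
        "Write the full chapter as prose."
    )
    chapters.append(chapter)
    # Compress what just happened so the next prompt stays inside the context window.
    summary = generate(
        "Combine the previous summary and the new chapter into a summary of under 300 words.\n\n"
        f"PREVIOUS SUMMARY:\n{summary}\n\nNEW CHAPTER:\n{chapter}"
    )
```

The weak point is the summary step: once it drops a detail, every later chapter inherits the mistake, which is why the manual review people keep mentioning is hard to avoid.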