Personas: Add custom, dynamic, and persisted #35
Comments
Hi there and thanks for this wonderful app.
Thanks for talking through this possible workflow, @romainbessugesmeusy. I believe I shall implement the idea above; it's an easy step in the right direction. Maybe as a first phase, a single JSON file at build time; then loaded dynamically at runtime; then configurable in the environment; and then, finally, editing in the UI itself. After that, I want to think about how to share prompt templates with other people and apps. Community templating should be a thing.
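As a sketch of that first phase, JSON-bundled personas with runtime overrides might look like this in TypeScript (all field names and the merge strategy are illustrative assumptions, not big-AGI's actual schema):

```typescript
// Hypothetical shape of a personas.json entry; field names are
// illustrative, not big-AGI's real schema.
interface PersonaDef {
  id: string;
  title: string;
  systemPrompt: string;
}

// Phase 1: a static JSON bundle, imported at build time.
const bundledPersonas: PersonaDef[] = [
  { id: "dev", title: "Developer", systemPrompt: "You are a concise coding assistant." },
  { id: "sec", title: "Security Reviewer", systemPrompt: "Review code for vulnerabilities." },
];

// Later phases: merge runtime overrides (fetched from the server, or from
// environment config) over the bundled defaults, keyed by id.
function mergePersonas(defaults: PersonaDef[], overrides: PersonaDef[]): PersonaDef[] {
  const byId = new Map(defaults.map(p => [p.id, p]));
  for (const p of overrides) byId.set(p.id, p);
  return [...byId.values()];
}
```

The id-keyed merge lets a runtime source replace a bundled persona without losing the others.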
Not sure if this is the right issue to mention this in, but let me know.
@younes-io hi, thanks for the feedback. This use case is interesting; can you explain more about using 3-4 personas for a task? I'm trying to optimize further for your needs.
```mermaid
sequenceDiagram
    participant User
    participant Persona1 as Persona A
    participant Persona2 as Persona B
    participant Persona3 as Persona C
    participant Persona4 as Persona D
    User->>Persona1: Input 1
    Note over Persona1: Process Input 1
    Persona1->>Persona2: Output 1 as Input 2
    Note over Persona2: Process Input 2
    Persona2->>Persona3: Output 2 as Input 3
    Note over Persona3: Process Input 3
    Persona3->>Persona4: Output 3 as Input 4
    Note over Persona4: Process Input 4
    Persona4->>User: Final Output
```
The sequence diagram above illustrates the workflow between multiple personas, each taking on a specific role in a project management scenario.
Currently, the limitation is that we can only engage with one AI persona at a time, which restricts the application's potential. Drawing a parallel to OpenAI's GPTs, imagine if we could only use one GPT for all tasks...
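The chain in the diagram can be sketched as a simple fold, where each persona's output becomes the next persona's input (the string-transforming steps below are toy stand-ins for real LLM calls):

```typescript
// A persona step: takes the previous output, returns its own output.
// In a real pipeline this would wrap an LLM call with that persona's
// system prompt; here it's a pure function for illustration.
type PersonaStep = (input: string) => string;

// Run the personas in sequence, threading the output forward.
function runPipeline(steps: PersonaStep[], input: string): string {
  return steps.reduce((acc, step) => step(acc), input);
}

// Toy stand-ins for two LLM-backed personas.
const analyst: PersonaStep = s => `analysis(${s})`;
const planner: PersonaStep = s => `plan(${s})`;
```

For example, `runPipeline([analyst, planner], "task")` yields `"plan(analysis(task))"`, mirroring the Persona A → Persona B hand-off in the diagram.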
Thanks for explaining clearly. Do you take the time to break it down into the 4 personas, or would you be okay if the system did it for you? Depending on the complexity, the task can be broken into 1, 2, ... 20 personas. And the rule for the breakdown should be based on context size and compression of information.
I don't mind if the system breaks it down for me, but I'd prefer to have access to each persona, to adjust or amend if necessary. Maybe I would not be OK with how the system "configured" a specific persona, so I would like the ability to adjust accordingly.
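A minimal sketch of the breakdown rule suggested above, assuming the persona count is derived from estimated task tokens versus the model's context window (the compression-of-information factor is left out, and all names and thresholds are illustrative):

```typescript
// Illustrative heuristic only: how many personas to split a task into,
// given its estimated size and the model's context window. The real rule
// would also weigh how well each stage compresses information.
function personaCount(taskTokens: number, contextWindow: number, maxPersonas = 20): number {
  const needed = Math.ceil(taskTokens / contextWindow);
  // Clamp to the 1..20 range mentioned in the discussion.
  return Math.min(Math.max(needed, 1), maxPersonas);
}
```

So a 30k-token task against an 8k-token window would suggest 4 personas, while a small task stays at 1.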
I guess having a "pipeline" of personas to work on a prompt is a neat idea! One thing that came to mind would also be to attach a persona to a model. Now, from what I have tested, the concept of a persona is not really useful to me; or probably I am using it incorrectly. A persona, in my mind, is a system prompt that specifies a specific task to achieve, not a response style (which is nice for entertainment purposes but not much more). Opinions?
You can use "individual" personas for different use cases. For example, you could have a persona that "judges" your code solely from the perspective of cybersecurity and spots potential flaws and vulnerabilities in your code snippets. You could have one that helps you understand complex concepts that are new to you, breaking them down into pieces and explaining them in simple words (without the domain-specific jargon). You could have a persona that helps you build abstractions for the different problems you tackle in programming, etc. I think of personas as tools in a toolbox: you craft the tool for the specific thing you want to tackle, and when you need it, you just bring it out and use it :D (ofc, this is an ongoing improvement and refinement process)
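The "personas as tools in a toolbox" idea, combined with the earlier suggestion to attach a persona to a model, could be modeled roughly like this (the field names, persona names, and model id are illustrative, not big-AGI's actual types):

```typescript
// Sketch: a task-specific persona pairs a system prompt with an optional
// preferred model, per the suggestions in this thread. All values illustrative.
interface TaskPersona {
  name: string;
  systemPrompt: string;
  preferredModel?: string; // e.g. a stronger model for code review
}

const toolbox: TaskPersona[] = [
  {
    name: "SecurityReviewer",
    systemPrompt: "Judge code strictly from a cybersecurity perspective; list flaws and vulnerabilities.",
    preferredModel: "gpt-4",
  },
  {
    name: "Explainer",
    systemPrompt: "Explain complex concepts in simple words, avoiding domain-specific jargon.",
  },
];

// Pick a tool out of the toolbox by name.
function pickPersona(name: string): TaskPersona | undefined {
  return toolbox.find(p => p.name === name);
}
```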
(I mentioned this on Discord already, and here again in case you don't see it.) Wow, I am surprised that this has not been implemented yet; it is such a no-brainer, and many other clients support this feature. Having to manually copy and paste prompts is way too slow for serious users. I am sure many users don't use big-AGI just because of this one missing feature. I would love to use big-AGI myself, I like it a lot, but this one feature is really essential. Please add it :)!
What's a good reference implementation in another client? To see which shape of the feature best conforms to your needs.
Sure, no problem. This one is quite good: app.nextchat.dev; its ability to adjust so many things by creating and changing masks is quite remarkable (https://app.nextchat.dev/#/masks). But to be honest, just to be able to enter prompts via So as long as there is the ability to quickly select a prompt via (preferably) keyboard (for example using
@DasOhmoff got it. For you, is it more important to get a system prompt (the current "persona", and the most important part of GPTs) or a user prompt? They're very different in implementation, and I want to make sure before spending a week on it.
The system prompt is probably more important :)
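For context, the practical difference between the two: a saved persona would occupy the single system slot of a chat request, while user prompts are per-turn messages. A sketch using the OpenAI-style message shape (an assumption; big-AGI's internal types may differ):

```typescript
// OpenAI-style chat message shape, assumed here for illustration.
type Role = "system" | "user" | "assistant";
interface ChatMessage {
  role: Role;
  content: string;
}

// A saved persona contributes exactly one "system" message at the front;
// user prompts become the subsequent per-turn "user" messages.
function buildConversation(personaSystemPrompt: string, userTurns: string[]): ChatMessage[] {
  return [
    { role: "system", content: personaSystemPrompt },
    ...userTurns.map(content => ({ role: "user" as Role, content })),
  ];
}
```

This is why the two features differ in implementation: a persona library only needs to swap the first message, while saved user prompts need injection at arbitrary points in a conversation.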
Hi there. I've been using big-AGI for a while with great satisfaction! I do feel there's a missing feature, though: creating and saving custom system prompts. And, I must say, I really don't understand the whole idea behind a "persona".

What I'd like to have is something quite simple; here's a mockup from the existing UI. I am honestly quite confused by the current way personas are created; for example, I have no idea why my text goes through an AI to generate a persona instead of simply being used as is. I am pretty sure I am mixing up the concept of an AI persona and a system prompt here, and I think that's part of the problem with this solution.

I am mostly interested in having a system prompt with preferences to tell the AI, e.g.: "only provide code without explanations" or "be assertive in your answers", etc. I currently go through the "reset chat" option quite often, simply so I can re-use the custom prompt from that chat, because big-AGI doesn't let me save it or create new ones.

I appreciate big time the work you are doing with this tool, and I hope this helps you get an idea of how a regular user uses it. Thanks!
@academo I totally agree. It's confusing in its current state, which is why I switched to NextChat in January. They have Masks (you can think of them as personas: an assistant with a specialty that you define with a system prompt; depending on your task, you use the assistant you require). There are two patterns I see in my usage, and in others' as well:
big-AGI today has a confusing system (with the folders and so many other things that I don't know how to use; probably my fault):
At least with NextChat, this limitation doesn't exist: no limits on the number of conversations and no limits on the number of personas ("masks" in their jargon), which is exactly what one would expect (as long as I'm paying for the API calls).

Then, NextChat goes to another level and allows you to associate your AI persona with the LLM of your choice (because for task X, LLM1 is the most suitable; for task Y, LLM2 is more suitable; etc.), and this is great! The other next level they're working on for v3 (which I requested along with others) is being able to define many LLM providers with all the credentials, etc., and to configure the AI personas as one sees fit. So you end up with different levels of flexibility: models, LLM providers, AI personas, different combinations for your conversations/AI personas, etc. I think this is the core (at least, IMHO) of what I would expect from such apps; the rest is more about improvements and "luxury", maybe.

Finally, the privacy aspect, which I already mentioned. Unfortunately, with big-AGI, the user has to go through their servers. This raises privacy/security concerns (because in addition to the LLM provider, we have to rely on another party for security) and latency issues. NextChat allows you to call the LLM provider directly, or to define a proxy of your choice; it's really much more flexible.

I'm saying all of this, but big-AGI's dev team and @enricoros really do a great job. They're very interactive, and it's one of the greatest teams I've interacted with on GitHub. So, even though I may have some (not so positive) comments, I appreciate all the efforts and the engagement, and I still follow big-AGI closely!
I second the need for a variety of custom "personas". For example, I sometimes need to do some manipulations with text: say, extract the authors and the title of a book from text I copy-pasted from a webpage, and format it in a very specific way. And there are several different ways in which I want to format the text, depending on the situation. It would be incredibly convenient to be able to predefine several different "personas" with custom prompts, so that I can quickly create a new chat with the proper prompt whenever needed and just paste the text I want parsed. That said, I also agree with the commenters above: big-AGI is a great AI chat client; thank you so much for developing it!
I don't think this should be in the UI. There are projects like LangChain which are explicitly about orchestration of LLMs. If anything, maybe integrate with LangChain; but at that point, big-AGI would probably need to be re-designed from the ground up on top of LangChain abstractions...
Thanks for the inputs, @trajo. Confirmed that we're working on editable Personas, but not swarms of them for now.
Bumping... I ran into this issue on day two of serious big-AGI use. My use case is scientific argumentation development, where points are engaged by various theories, each representable by an agent. Features like Beam and ReAct are neat, but editable personas have been a request for far longer. Note: the ReAct command should instead use the Reasoners LLM library; those prompts/logic systems are far more powerful.
🚀 Dynamic Personas support 🌟
Requirements for the personas:
Phase 1: Still client-side, but some support for server-side
Phase 2: Mixed Client and Server side?
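A Phase 1 client-side persistence sketch, with storage behind a minimal interface so it can back onto `localStorage` in the browser (the storage key, store shape, and class name are all illustrative assumptions, not the planned implementation):

```typescript
// Minimal key-value interface: localStorage satisfies it in the browser;
// any stub (e.g. a Map wrapper) works server-side or in tests.
interface KV {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
}

interface StoredPersona {
  id: string;
  systemPrompt: string;
}

// Phase 1: personas live entirely client-side, serialized as JSON under
// a single key. A Phase 2 server-backed store could implement the same KV
// interface over an API.
class PersonaStore {
  constructor(private storage: KV, private key = "app-personas") {}

  save(personas: StoredPersona[]): void {
    this.storage.setItem(this.key, JSON.stringify(personas));
  }

  load(): StoredPersona[] {
    const raw = this.storage.getItem(this.key);
    return raw ? JSON.parse(raw) : [];
  }
}
```

In the browser this would be instantiated as `new PersonaStore(window.localStorage)`; swapping the `KV` implementation is what would let Phase 2 mix client and server storage.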