This commit is contained in:
Nathan Sobo 2023-05-23 15:25:34 -06:00
parent 7be41e19f7
commit d934da1905
3 changed files with 41 additions and 29 deletions


@@ -1,8 +1,8 @@
You are an AI language model embedded in a code editor named Zed, authored by Zed Industries.
The input you are currently processing was produced by a special "model mention" in a document that is open in the editor.
A model mention is indicated via a leading / on a line.
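On the editor side, the leading-`/` convention above implies a trivial line check. As a sketch (this helper is hypothetical and not part of this commit):

```rust
/// Per the convention above, a line is a model mention iff it
/// begins with `/`. (Hypothetical helper; not part of this commit.)
fn is_model_mention(line: &str) -> bool {
    line.starts_with('/')
}
```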
The user's currently selected text is indicated via [SELECTION_START] and [SELECTION_END] surrounding selected text.
In this sentence, the word [SELECTION_START]example[SELECTION_END] is selected.
Respond to any selected model mention.
Wrap your responses in > < as follows.
@@ -14,22 +14,6 @@ For lines that are likely to wrap, or multiline responses, start and end the > a
I think that's a great idea
<
If the selected mention is not at the end of the document, briefly summarize the context.
> Key ideas of generative programming:
* Managing context
* Managing length
* Context distillation
- Shrink a context's size without loss of meaning.
* Fine-grained version control
* Portals to other contexts
* Distillation policies
* Budgets
<
*Only* respond to a mention if either
a) The mention is at the end of the document.
b) The user's selection intersects the mention.
If no response is appropriate based on these conditions, respond with ><.
If the user's cursor is on the same line as a mention, as in: "/ This is a [SELECTION_START][SELECTION_END] question somewhere in the document and the cursor is inside it", then focus strongly on that question. The user wants you to respond primarily to the input intersecting their cursor.
Focus attention primarily on text within [SELECTION_START] and [SELECTION_END] tokens.
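The `>`/`<` wrapping convention above implies a small amount of parsing when the editor consumes a response. A minimal sketch, assuming a hypothetical helper (the function name and its use are not part of this commit):

```rust
/// Extract the body of a model response wrapped in `>` and `<`,
/// per the convention in the system prompt. Returns `None` for the
/// empty response `><` or for text that isn't wrapped at all.
/// (Hypothetical helper; not part of this commit.)
fn unwrap_response(raw: &str) -> Option<String> {
    let trimmed = raw.trim();
    let body = trimmed.strip_prefix('>')?.strip_suffix('<')?.trim();
    if body.is_empty() {
        None
    } else {
        Some(body.to_string())
    }
}
```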


@@ -2,8 +2,22 @@
One big concept I want to explore for Zed's AI integration is representing conversations between multiple users and the model as more of a shared document called a *context*, to which we apply fine-grained version control.
The assistant pane will contain a set of contexts, each in its own tab. Each context represents input to a language model, so its maximum length will be limited accordingly. Contexts can be freely edited, and to submit a context you hit `cmd-enter` or click the button in the toolbar. The toolbar also indicates the number of tokens remaining for the model.
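Since each context is bounded by the model's input length, the toolbar count is presumably just the budget minus the tokens consumed so far. A minimal sketch under that assumption (the struct and the stand-in tokenizer are hypothetical; a real count would come from the model's tokenizer):

```rust
/// Hypothetical context with a fixed token budget.
struct Context {
    text: String,
    max_tokens: usize,
}

impl Context {
    /// Rough stand-in tokenizer: whitespace-separated words.
    /// A real implementation would use the model's tokenizer.
    fn used_tokens(&self) -> usize {
        self.text.split_whitespace().count()
    }

    /// Tokens still available before the model's input limit,
    /// as displayed in the toolbar.
    fn remaining_tokens(&self) -> usize {
        self.max_tokens.saturating_sub(self.used_tokens())
    }
}
```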
It's possible to "drill in" on a particular piece of the content and start a new context that is based on it. To do this, select any text in Zed and hit `cmd-shift-?`. A portal to the selected code will be created in the most recently active context. Question to self: should we always start a new context? Should there be a binding to start a new one vs appending to the most recently active one? What if we only append to the most recently active one if it's open.
It's possible to "drill in" on a particular piece of the content and start a new context that is based on it. To do this, select any text in Zed and hit `cmd-shift-?`. A portal to the selected code will be created in the most recently active context.
When you embed content
/ How does this section relate to the overall idea presented in this document?
You can also hit `cmd-shift-?` when editing a context. This pushes a new context to the stack, which is designed for editing the previous context. You can use this to manage the current context. For example, select text, hit `cmd-shift-?`, and then ask the child context to summarize the parent.
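The push semantics described above suggest the assistant pane keeps a stack of contexts, each child seeded from a selection in its parent. A sketch under that assumption (all names here are hypothetical, not this commit's API):

```rust
/// Hypothetical stack of contexts: `cmd-shift-?` pushes a child
/// seeded with the parent's selected text, so the child can operate
/// on (e.g. summarize) the parent.
struct ContextStack {
    stack: Vec<String>,
}

impl ContextStack {
    fn new(root: &str) -> Self {
        Self { stack: vec![root.to_string()] }
    }

    /// Push a child context seeded with text selected in the parent.
    fn push_child(&mut self, selected: &str) {
        self.stack
            .push(format!("[Selected excerpt from parent]\n{}", selected));
    }

    /// Popping a child returns to its parent; the root is never popped.
    fn pop(&mut self) -> Option<String> {
        if self.stack.len() > 1 {
            self.stack.pop()
        } else {
            None
        }
    }
}
```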
We want it to be possible to use `/` anywhere in the document to communicate with the model as if we were talking at that location. While we will provide the full document to the model as context, we want the model to focus on the section marked with [EDIT_START] and [EDIT_END] and provide a relevant response at the specified location.
Next key problems to solve:
- Indicating to the model what is selected
- Indicating to the model what we want to be edited
- Can the model insert somewhere other than the end?
I want to select a subset of text and hit `cmd-shift-?` and have that text marked in a special mode, indicating that I want it to be edited. The text will be appended to the context, along with the selected text (if they're different). The model will assume that its output is destined to replace the text in question.
> In this document, the main idea revolves around enhancing Zed's AI integration by using a shared document-like structure called a *context* for conversations between multiple users and the AI model. The selected section describes a specific feature within this framework where users can "drill in" on a particular piece of content and create a new context based on it. This feature would allow users to easily reference and discuss specific portions of code, making collaboration more efficient and targeted. It contributes to the overall concept by providing a concrete example of how users can interact with the AI and one another within the context-based approach. <
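The edit-marking flow above can be sketched as wrapping the selected byte range in the [EDIT_START]/[EDIT_END] tokens, then splicing the model's output back over that range once it arrives. Both helpers below are assumptions for illustration, not code from this commit:

```rust
/// Wrap the byte range `start..end` of `doc` in the edit markers,
/// producing the document text sent to the model.
/// (Hypothetical helper; assumes `start`/`end` fall on char boundaries.)
fn mark_for_edit(doc: &str, start: usize, end: usize) -> String {
    format!(
        "{}[EDIT_START]{}[EDIT_END]{}",
        &doc[..start],
        &doc[start..end],
        &doc[end..]
    )
}

/// Replace the marked range with the model's output, per the
/// assumption that the response is destined to replace that text.
fn apply_edit(doc: &str, start: usize, end: usize, replacement: &str) -> String {
    format!("{}{}{}", &doc[..start], replacement, &doc[end..])
}
```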


@@ -121,32 +121,46 @@ impl Assistant {
        let selections = editor.selections.all(cx);
        let (user_message, insertion_site) = editor.buffer().update(cx, |buffer, cx| {
            // Insert markers around selected text as described in the system prompt above.
            let snapshot = buffer.snapshot(cx);
            let mut user_message = String::new();
            let mut user_message_suffix = String::new();
            let mut buffer_offset = 0;
            for selection in selections {
                if !selection.is_empty() {
                    if user_message_suffix.is_empty() {
                        user_message_suffix.push_str("\n\n");
                    }
                    user_message_suffix.push_str("[Selected excerpt from above]\n");
                    user_message_suffix
                        .extend(snapshot.text_for_range(selection.start..selection.end));
                    user_message_suffix.push_str("\n\n");
                }
                user_message.extend(snapshot.text_for_range(buffer_offset..selection.start));
                user_message.push_str("[SELECTION_START]");
                user_message.extend(snapshot.text_for_range(selection.start..selection.end));
                buffer_offset = selection.end;
                user_message.push_str("[SELECTION_END]");
            }
            if buffer_offset < snapshot.len() {
                user_message.extend(snapshot.text_for_range(buffer_offset..snapshot.len()));
            }
            user_message.push_str(&user_message_suffix);

            // Ensure the document ends with 4 trailing newlines.
            let trailing_newline_count = snapshot
                .reversed_chars_at(snapshot.len())
                .take_while(|c| *c == '\n')
                .take(4);
            let buffer_suffix = "\n".repeat(4 - trailing_newline_count.count());
            buffer.edit([(snapshot.len()..snapshot.len(), buffer_suffix)], None, cx);
            let snapshot = buffer.snapshot(cx); // Take a new snapshot after editing.
            let insertion_site = snapshot.anchor_after(snapshot.len() - 2);
            println!("{}", user_message);
            (user_message, insertion_site)
        });
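The trailing-newline adjustment in the hunk above is self-contained enough to sketch as a plain function over a string, independent of Zed's buffer types (the function name is an assumption; the logic mirrors the buffer edit):

```rust
/// Compute the suffix needed so `text` ends with exactly four
/// newlines, mirroring the buffer edit above: count up to four
/// existing trailing '\n' characters, then pad with the difference.
fn trailing_newline_suffix(text: &str) -> String {
    let existing = text
        .chars()
        .rev()
        .take_while(|c| *c == '\n')
        .take(4)
        .count();
    "\n".repeat(4 - existing)
}
```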