"Temperature" is a parameter in OpenAI GPT models, to control for
randomess in the generated content. To decrease the probability of
either escaping the markdown blocks and creating invalid code, we
decreased temperature for all Non-Prose files. For Markdown or Plain
Text, in which more creativity may be a good thing, we increase the
temperature to allow for more randomness. Along with this, we ask the
generate inline prompt to include only the code and not markdown blocks,
as it appears that lower temperature may decrease the probability of
introducing random markdown blocks.
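As a rough sketch of the idea (the function name and the exact values
below are illustrative assumptions, not the shipped implementation),
the temperature can be picked based on the content type being edited:

```rust
// Illustrative only: lower temperature for code, higher for prose.
fn temperature_for_content(is_prose: bool) -> f32 {
    if is_prose {
        1.0 // Markdown / plain text: allow more randomness and creativity.
    } else {
        0.5 // Code: keep completions deterministic and well-formed.
    }
}
```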
Release Notes (Internal Only):
- Decrease temperature for inline assist on code content.
(This PR was written 100% by the Inline Assistant)
This PR brings new components into our ai and assistant crates, namely
PromptTemplate and PromptChains. They offer a more flexible and dynamic
way to generate prompts than before; a rough sketch of their shape
follows the release notes.
Release Notes:
- Introduced PromptTemplate: an abstract base for individual parts of
the prompt.
- Added PromptChains: manage multiple PromptTemplates, sort them based
on priority and regulate the output size based on tokens.
- Provided new PromptArguments structure to encapsulate arguments needed
for PromptTemplate.
- Extended repository_context to include PromptCodeSnippet.
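Rough sketch of the shape these components take (the names, fields, and
signatures below are simplified illustrations, not the exact crate API):

```rust
use std::ops::Range;

// Arguments shared by all templates (fields are illustrative).
struct PromptArguments {
    language_name: Option<String>,
    selected_range: Option<Range<usize>>,
    max_token_count: Option<usize>,
}

// One part of the prompt; each implementor renders its own section.
trait PromptTemplate {
    fn generate(&self, args: &PromptArguments, max_tokens: Option<usize>) -> String;
}

// Orders templates by priority and concatenates their output, passing
// along a token budget so the result stays within bounds.
struct PromptChain {
    // (priority, template) pairs; lower numbers render first.
    templates: Vec<(usize, Box<dyn PromptTemplate>)>,
}

impl PromptChain {
    fn generate(&mut self, args: &PromptArguments) -> String {
        self.templates.sort_by_key(|(priority, _)| *priority);
        let mut prompt = String::new();
        for (_, template) in &self.templates {
            prompt.push_str(&template.generate(args, args.max_token_count));
            prompt.push('\n');
        }
        prompt
    }
}
```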
Previously, that method could loop forever if the editor's autoclose
regions had unexpected selection ids.
Something must have changed recently to allow this invariant to be
violated, but regardless, this code should not have relied on that
invariant for termination.
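For illustration only (the type and function below are made up, not the
actual method), the safer pattern is to iterate over a bounded
collection rather than looping until a matching selection id shows up:

```rust
struct AutocloseRegion {
    selection_id: usize,
}

// Visits each region at most once, so it terminates even when no region
// carries the expected selection id.
fn region_for_selection(
    regions: &[AutocloseRegion],
    selection_id: usize,
) -> Option<&AutocloseRegion> {
    regions.iter().find(|region| region.selection_id == selection_id)
}
```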
Now, when the buffer_search::Deploy action is triggered (with cmd-f),
we keep the previous query in the query_editor (if there was one)
instead of replacing it with an empty query.
This addresses this bit of feedback from Jose:
> If no text is selected, `cmd + f` should not delete the text in the
search bar when refocusing
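A minimal sketch of the intended behavior (the function and parameter
names are illustrative, not the actual buffer_search code):

```rust
// Only replace the query when the deploy action has a selection to seed
// it with; otherwise the previous query is kept instead of being cleared.
fn on_deploy(query_editor_text: &mut String, selected_text: Option<String>) {
    if let Some(selection) = selected_text {
        *query_editor_text = selection;
    }
}
```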
Release Notes:
- Improved buffer search by not clearing out the query editor when no
text is selected and "buffer search: deploy" (default keybinding:
cmd-f) is triggered.
Scaffolding for guest members in channels
Release Notes:
- You can now set channels to "public", which will allow anyone to join
and become a member. In a future release, guests joining public
channels will have reduced permissions.
This PR introduces a new Inline Assistant feature, "Retrieve Context",
which dynamically fills in your generation prompt based on relevant
results returned by a semantic search for the prompt.
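As a rough sketch of the flow (the snippet fields and the helper below
are assumptions for illustration, not the actual crate API), retrieved
snippets get spliced into the prompt ahead of the user's instruction:

```rust
// Illustrative stand-in for the snippets returned by the semantic search.
struct PromptCodeSnippet {
    path: String,
    content: String,
}

fn prompt_with_context(user_prompt: &str, snippets: &[PromptCodeSnippet]) -> String {
    let mut prompt = String::new();
    for snippet in snippets {
        // Label each snippet with its file path so the model can ground
        // the generation in existing project code.
        prompt.push_str(&format!("// From {}:\n{}\n\n", snippet.path, snippet.content));
    }
    prompt.push_str(user_prompt);
    prompt
}
```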
Release Notes:
- Introduce "Retrieve Context" button in Inline Assistant
* on opening a language server's logs, a new editor with the server
logs is now created from `\n`-joined `VecDeque` elements instead of
from a buffer, as before
* every `VecDeque` entry is a log line we receive from the server's
stderr or its LSP messages, and their total number is capped with `let
MAX_STORED_LOG_ENTRIES: usize = 2000;` (see the sketch after this list)
* the currently open log editor (`Editor::multi_line`) keeps getting
log lines appended and may grow beyond this cap, but only the last 2000
stored entries will be restored on reopen
* similarly, RPC message logs are capped
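A minimal sketch of the capped store, under the assumption that it is a
plain `VecDeque` with the oldest entries dropped first (the struct and
method names are illustrative):

```rust
use std::collections::VecDeque;

const MAX_STORED_LOG_ENTRIES: usize = 2000;

struct LogStore {
    lines: VecDeque<String>,
}

impl LogStore {
    // Each pushed line is one log entry; pushing past the cap drops the
    // oldest line so memory stays bounded.
    fn push(&mut self, line: String) {
        while self.lines.len() >= MAX_STORED_LOG_ENTRIES {
            self.lines.pop_front();
        }
        self.lines.push_back(line);
    }

    // Joined with `\n` when a log editor is (re)opened, so at most the
    // last MAX_STORED_LOG_ENTRIES lines are restored.
    fn text(&self) -> String {
        self.lines.iter().cloned().collect::<Vec<_>>().join("\n")
    }
}
```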
Release Notes:
- Improved memory usage by storing fewer language server and RPC log
entries
Partially fixes zed-industries/community#575
This PR will see one more fix for a case I've spotted while working on
this: namely, if a project has several nested repositories, e.g. for a
structure:
/a
/a/.git/
/a/.gitignore
/a/b/
/a/b/.git/
/a/b/.gitignore
then /a/b/ should not account for /a/.gitignore at all, which is sort
of similar to the fix in commit #c416fbb, but for the paths in the
project.
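For illustration, a sketch of the intended lookup (the function is
hypothetical, not the worktree's actual code): stop collecting
`.gitignore` files at the nearest ancestor that contains a `.git`
directory, so `/a/b/` never picks up `/a/.gitignore`:

```rust
use std::path::{Path, PathBuf};

// Walk up from the path, collecting .gitignore files, and stop once the
// containing repository root (the nearest ancestor with a .git dir) is
// reached.
fn applicable_gitignores(path: &Path) -> Vec<PathBuf> {
    let mut ignores = Vec::new();
    for dir in path.ancestors().skip(1) {
        let candidate = dir.join(".gitignore");
        if candidate.is_file() {
            ignores.push(candidate);
        }
        if dir.join(".git").is_dir() {
            break;
        }
    }
    ignores
}
```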
The release note is kinda bad, I'll try to reword it too.
- [ ] Improve release note.
- [x] Address the same bug for project files.
Release Notes:
- Fixed .gitignore files beyond the first .git directory being respected
by the worktree (zed-industries/community#575).