# zed/crates/assistant/Cargo.toml

[package]
name = "assistant"
version = "0.1.0"
edition = "2021"
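# publish = false keeps this internal crate off crates.io.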
publish = false
license = "GPL-3.0-or-later"

[lints]
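# Inherit lint configuration from [workspace.lints] in the root Cargo.toml.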
workspace = true

[lib]
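# The library entry point is src/assistant.rs rather than the default src/lib.rs.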
path = "src/assistant.rs"
doctest = false

[features]
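# Optional test helpers; enabling this feature also enables the test-support
# features of the crates listed below.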
test-support = [
"editor/test-support",
"language/test-support",
"project/test-support",
"text/test-support",
]
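# A minimal sketch of how another workspace crate might opt in to these test
# helpers (a hypothetical consumer, mirroring the [dev-dependencies] entries
# later in this file):
#
#     [dev-dependencies]
#     assistant = { workspace = true, features = ["test-support"] }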

[dependencies]
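# All dependency versions are managed in [workspace.dependencies] at the
# workspace root. The "schemars" feature on anthropic, ollama, and open_ai
# derives JSON schemas for those crates' types (presumably for settings
# validation).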
anthropic = { workspace = true, features = ["schemars"] }
anyhow.workspace = true
assets.workspace = true
assistant_slash_command.workspace = true
assistant_tool.workspace = true
async-watch.workspace = true
cargo_toml.workspace = true
chrono.workspace = true
client.workspace = true
clock.workspace = true
collections.workspace = true
command_palette_hooks.workspace = true
# Context servers let slash commands be implemented out of process, speaking
# the Model Context Protocol (JSON-RPC over stdio) to the assistant (#16103).
context_servers.workspace = true
db.workspace = true
editor.workspace = true
feature_flags.workspace = true
fs.workspace = true
futures.workspace = true
fuzzy.workspace = true
globset.workspace = true
gpui.workspace = true
handlebars.workspace = true
heed.workspace = true
html_to_markdown.workspace = true
http_client.workspace = true
indexed_docs.workspace = true
indoc.workspace = true
language.workspace = true
# language_model was extracted from this crate so that semantic_index can use
# completion providers without creating a cyclic crate dependency (#14823).
language_model.workspace = true
language_model_selector.workspace = true
language_models.workspace = true
log.workspace = true
lsp.workspace = true
markdown.workspace = true
menu.workspace = true
multi_buffer.workspace = true
# Ollama provider for the assistant; talks to a locally running Ollama server
# and fetches its model list on authentication (#12902).
ollama = { workspace = true, features = ["schemars"] }
open_ai = { workspace = true, features = ["schemars"] }
ordered-float.workspace = true
parking_lot.workspace = true
paths.workspace = true
picker.workspace = true
project.workspace = true
proto.workspace = true
regex.workspace = true
release_channel.workspace = true
rope.workspace = true
rpc.workspace = true
schemars.workspace = true
search.workspace = true
semantic_index.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true
similar.workspace = true
smallvec.workspace = true
smol.workspace = true
strum.workspace = true
telemetry_events.workspace = true
terminal.workspace = true
terminal_view.workspace = true
text.workspace = true
theme.workspace = true
toml.workspace = true
ui.workspace = true
util.workspace = true
uuid.workspace = true
workspace.workspace = true
zed_actions.workspace = true

[dev-dependencies]
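# Compiled only for this crate's tests; the "test-support" features turn on
# each crate's test helpers.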
ctor.workspace = true
editor = { workspace = true, features = ["test-support"] }
env_logger.workspace = true
language = { workspace = true, features = ["test-support"] }
language_model = { workspace = true, features = ["test-support"] }
languages = { workspace = true, features = ["test-support"] }
log.workspace = true
pretty_assertions.workspace = true
project = { workspace = true, features = ["test-support"] }
rand.workspace = true
serde_json_lenient.workspace = true
text = { workspace = true, features = ["test-support"] }
tree-sitter-md.workspace = true
unindent.workspace = true