Commit graph

28 commits

Author SHA1 Message Date
Conrad Irwin
e28496d4e2
Stop leaking isahc assumption (#18408)
Users of our http_client crate could tell they were interacting with isahc,
because they set isahc-specific extensions on the request. This change adds
our own equivalents for those APIs in preparation for changing the default
HTTP client.
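
A minimal sketch of a client-agnostic replacement, using the `http` crate's
request extensions (the type and function names here are illustrative, not
the crate's actual API):

```rust
use http::Request;

/// Our own redirect policy, standing in for isahc's `RedirectPolicy`
/// extension (hypothetical name).
#[derive(Clone, Copy)]
pub enum RedirectPolicy {
    NoFollow,
    FollowLimit(u32),
}

pub fn with_redirect_policy(mut request: Request<()>, policy: RedirectPolicy) -> Request<()> {
    // Stash our own type in the request's extensions; whichever HTTP client
    // backend is active translates it into its native configuration.
    request.extensions_mut().insert(policy);
    request
}
```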

Release Notes:

- N/A
2024-09-26 14:01:05 -06:00
Peter Tripp
d245f5e75c
OpenAI o1-preview and o1-mini support (#17796)
Release Notes:

- Added support for OpenAI o1-mini and o1-preview models.

---------

Co-authored-by: Jason Mancuso <7891333+jvmncs@users.noreply.github.com>
Co-authored-by: Bennet <bennet@zed.dev>
2024-09-13 16:23:55 -04:00
jvmncs
c71f052276
Add ability to use o1-preview and o1-mini as custom models (#17804)
This is a barebones modification of the OpenAI provider code to
accommodate non-streaming completions. This is specifically for the o1
models, which do not support streaming. I verified that this works by
running a `/workflow` with the following (arbitrarily chosen) settings:

```json
{
  "language_models": {
    "openai": {
      "version": "1",
      "available_models": [
        {
          "name": "o1-preview",
          "display_name": "o1-preview",
          "max_tokens": 128000,
          "max_completion_tokens": 30000
        },
        {
          "name": "o1-mini",
          "display_name": "o1-mini",
          "max_tokens": 128000,
          "max_completion_tokens": 20000
        }
      ]
    }
  },
}
```
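
Since the rest of the pipeline expects streaming, a natural way to adapt a
fully-buffered o1 response is to wrap it in a one-item stream. This is a
hedged sketch of that idea, not Zed's actual code:

```rust
use futures::stream::{self, BoxStream, StreamExt};

/// Wrap a non-streaming completion in a single-chunk stream so callers
/// that expect streaming chunks can consume it unchanged.
fn into_single_chunk_stream(
    full_response: String,
) -> BoxStream<'static, anyhow::Result<String>> {
    stream::once(async move { Ok(full_response) }).boxed()
}
```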

Release Notes:

- Changed the `low_speed_timeout_in_seconds` option to `600` for the OpenAI
provider to accommodate the recent o1 model release.

---------

Co-authored-by: Peter <peter@zed.dev>
Co-authored-by: Bennet <bennet@zed.dev>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2024-09-13 15:42:15 -04:00
Peter Tripp
fb9d01b0d5
assistant: Add display_name for OpenAI and Gemini (#17508) 2024-09-10 13:41:06 -04:00
Peter Tripp
58c0f39714
OpenAI: Fix GPT-4. Only include max_tokens when max_output_tokens provided (#17168)
- Fixed GPT-4 breakage (incorrect `max_output_tokens` handling).
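
In serde terms, a plausible shape of the fix is to omit the field rather
than serialize a null (field names here are illustrative, not the exact
diff):

```rust
use serde::Serialize;

#[derive(Serialize)]
struct OpenAiRequest {
    model: String,
    // Left out of the JSON body entirely when unset, instead of being sent
    // as an explicit `"max_tokens": null`.
    #[serde(skip_serializing_if = "Option::is_none")]
    max_tokens: Option<u64>,
}
```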
2024-08-30 14:57:50 -04:00
Peter Tripp
4d6bb52d1f
Anthropic/OpenAI: Add country codes for territories (#17089)
- Cloudflare provides the ISO-3166-1 country code for protectorates. Expand our allowlist to include the territories of countries already on the allowlist (US, UK, France, Australia, New Zealand).
- Also include the country code in the error message when we block.
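
An illustrative version of that mapping (the territory and country codes
here are examples, not the full production allowlist):

```rust
/// Map a Cloudflare-provided ISO-3166-1 code for a territory to its parent
/// country, then check the parent against the allowlist.
fn is_country_supported(country_code: &str) -> bool {
    let normalized = match country_code {
        "PR" | "GU" | "VI" => "US", // US territories
        "GG" | "JE" | "IM" => "GB", // UK crown dependencies
        other => other,
    };
    matches!(normalized, "US" | "GB" | "FR" | "AU" | "NZ")
}
```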

Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2024-08-29 11:32:29 -04:00
邻二氮杂菲
f1778dd9de
Add max_output_tokens to OpenAI models and integrate into requests (#16381)
### Pull Request Title
Introduce `max_output_tokens` Field for OpenAI Models


https://platform.deepseek.com/api-docs/news/news0725/#4-8k-max_tokens-betarelease-longer-possibilities

### Description
This commit introduces a new field `max_output_tokens` to the OpenAI
models, which allows specifying the maximum number of tokens that can be
generated in the output. This field is now integrated into the request
handling across multiple crates, ensuring that the output token limit is
respected during language model completions.

Changes include:
- Adding `max_output_tokens` to the `Custom` variant of the
`open_ai::Model` enum.
- Updating the `into_open_ai` method in `LanguageModelRequest` to accept
and use `max_output_tokens`.
- Modifying the `OpenAiLanguageModel` and `CloudLanguageModel`
implementations to pass `max_output_tokens` when converting requests.
- Ensuring that the `max_output_tokens` field is correctly serialized
and deserialized in relevant structures.

This enhancement provides more control over the output length of OpenAI
model responses, improving the flexibility and accuracy of language
model interactions.
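
At the type level, the core of the change is roughly the following (a
simplified sketch; the real `open_ai::Model` enum has more variants and
attributes):

```rust
/// Custom models can now carry an optional cap on generated tokens
/// alongside the context-window size.
pub enum Model {
    FourTurbo,
    Custom {
        name: String,
        max_tokens: usize,
        max_output_tokens: Option<u32>,
    },
}
```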

### Related Issue
https://github.com/zed-industries/zed/pull/16358

### Screenshots / Media
N/A

### Checklist
- [x] Code compiles correctly.
- [x] All tests pass.
- [ ] Documentation has been updated accordingly.
- [ ] Additional tests have been added to cover new functionality.
- [ ] Relevant documentation has been updated or added.

### Release Notes

- Added `max_output_tokens` field to OpenAI models for controlling
output token length.
2024-08-21 00:39:10 -04:00
Max Brunsfeld
4c390b82fb
Make LanguageModel::use_any_tool return a stream of chunks (#16262)
This PR is a refactor to pave the way for allowing the user to view and
edit workflow step resolutions. I've made tool calls work more like
normal streaming completions for all providers. The `use_any_tool`
method returns a stream of strings (which contain chunks of JSON). I've
also done some minor cleanup of language model providers in general,
removing the duplication around handling streaming responses.
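
A sketch of the reshaped trait method under these assumptions (signature
simplified; `LanguageModelRequest` is a stand-in for the real request type):

```rust
use futures::stream::BoxStream;

struct LanguageModelRequest; // stand-in for the real request type

trait LanguageModel {
    /// Yields chunks of JSON describing the tool call as they arrive,
    /// rather than resolving to a single final value.
    fn use_any_tool(
        &self,
        request: LanguageModelRequest,
    ) -> BoxStream<'static, anyhow::Result<String>>;
}
```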

Release Notes:

- N/A
2024-08-14 18:02:46 -07:00
Marshall Bowers
cf5f4dddf5
Authorize access to language model providers based on country (#15859)
This PR updates the LLM service to authorize access to language model
providers based on the requester's country.

We detect the country using Cloudflare's
[`CF-IPCountry`](https://developers.cloudflare.com/fundamentals/reference/http-request-headers/#cf-ipcountry)
header.

The country code is then checked against the list of supported countries
for the given LLM provider. Countries that are not supported will
receive an `HTTP 451: Unavailable For Legal Reasons` response.
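
A minimal sketch of such a check, assuming an axum-style handler (the real
service layout may differ):

```rust
use axum::http::{HeaderMap, StatusCode};

/// Reject requests from countries the given provider does not support.
/// Cloudflare sets `CF-IPCountry` on every proxied request.
fn check_country(headers: &HeaderMap, supported: &[&str]) -> Result<(), StatusCode> {
    let country = headers
        .get("CF-IPCountry")
        .and_then(|value| value.to_str().ok())
        .unwrap_or("XX"); // "XX" is Cloudflare's code for unknown
    if supported.contains(&country) {
        Ok(())
    } else {
        Err(StatusCode::UNAVAILABLE_FOR_LEGAL_REASONS)
    }
}
```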

Release Notes:

- N/A
2024-08-06 11:49:04 -04:00
Piotr Osiewicz
874f0c0712
assistant: Use tools in other providers (#15803)
- [x] OpenAI
- [ ] ~Google~ Moved into a separate branch at
https://github.com/zed-industries/zed/tree/tool-calls-in-google-ai. I ran
into issues with having the API digest our schema without tripping over
itself: the function call parameters come out malformed. We can resume from
that branch if needed.
- [x] Ollama
- [x] Cloud
- [ ] ~Copilot Chat (?)~

Release Notes:

- Added tool calling capabilities to OpenAI and Ollama models.
2024-08-06 15:45:47 +02:00
Antonio Scandurra
21816d1ff5
Add Qwen2-7B to the list of zed.dev models (#15649)
Release Notes:

- N/A

---------

Co-authored-by: Nathan <nathan@zed.dev>
2024-08-01 22:26:07 +02:00
Antonio Scandurra
d6bdaa8a91
Simplify LLM protocol (#15366)
In this pull request, we change the zed.dev protocol so that we pass the
raw JSON for the specified provider directly to our server. This avoids
the need to define a protobuf message that's a superset of all these
formats.
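
The simplified wire format might look roughly like this (a sketch; field
names are illustrative, not the actual protocol):

```rust
use serde::{Deserialize, Serialize};

/// The client forwards the provider-specific request body verbatim, so the
/// server never needs a protobuf superset of every provider's schema.
#[derive(Serialize, Deserialize)]
struct PerformCompletionParams {
    provider: String,                    // e.g. "anthropic"
    model: String,
    provider_request: serde_json::Value, // raw JSON, passed through untouched
}
```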

@bennetbo: We also changed the settings for available_models under
zed.dev to be a flat format, because the nesting seemed too confusing.
Can you help us upgrade the local provider configuration to be
consistent with this? We'll do whatever we need to when parsing the
settings to keep this simple for users, even if it's a bit more complex
on our end. We want to use versioning to avoid breaking existing users,
but we need to keep making progress.

```json
"zed.dev": {
  "available_models": [
    {
      "provider": "anthropic",
        "name": "some-newly-released-model-we-havent-added",
        "max_tokens": 200000
      }
  ]
}
```

Release Notes:

- N/A

---------

Co-authored-by: Nathan <nathan@zed.dev>
2024-07-28 11:07:10 +02:00
Marshall Bowers
02c43a5bf2
Add missing workspace lints (#15237)
This PR adds the missing workspace lint configuration for the following
crates that were missing it:

- `google_ai`
- `open_ai`
- `tab_switcher`

Release Notes:

- N/A
2024-07-25 19:52:24 -04:00
Mikayla Maki
855048041d
Update http crate name (#15041)
Release Notes:

- N/A
2024-07-23 15:01:05 -07:00
Bennet Bo Fenner
d0f52e90e6
assistant: Overhaul provider infrastructure (#14929)
<img width="624" alt="image"
src="https://github.com/user-attachments/assets/f492b0bd-14c3-49e2-b2ff-dc78e52b0815">

- [x] Correctly set custom model token count
- [x] How to count tokens for Gemini models?
- [x] Feature flag zed.dev provider
- [x] Figure out how to configure custom models
- [ ] Update docs

Release Notes:

- Added support for quickly switching between multiple language model
providers in the assistant panel

---------

Co-authored-by: Antonio <antonio@zed.dev>
2024-07-23 19:48:41 +02:00
versecafe
18b5a87298
Add gpt-4o-mini as an available model (#14770)
Release Notes:

- Fixes #14769
2024-07-18 22:32:56 -06:00
Allison Durham
995b082c64
Change tool_calls to be an Option in response (#13778)
Here is an image of me now getting assistant responses!

![2024-07-03_08-45-37_swappy](https://github.com/zed-industries/zed/assets/20910163/904adc51-cb40-4622-878e-f679e0212426)

I ended up adding a function that skips serializing the `tool_calls` field
when it is either null or empty, preserving the behavior of the existing
implementation (which does not deserialize an empty vec). I'm still fairly
new to this, so I'm happy to make changes if anything isn't done correctly,
though it works and the tests pass!

Thanks a bunch to [amtoaer](https://github.com/amtoaer) for pointing me
in the direction on how to fix it.
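
A sketch of that approach (names illustrative, not the exact diff):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct ToolCall {
    id: String,
}

/// Helper for `skip_serializing_if`: treat a missing list and an empty list
/// the same way, mirroring the deserialization side.
fn is_none_or_empty(calls: &Option<Vec<ToolCall>>) -> bool {
    calls.as_ref().map_or(true, |calls| calls.is_empty())
}

#[derive(Serialize, Deserialize)]
struct ResponseMessage {
    #[serde(default, skip_serializing_if = "is_none_or_empty")]
    tool_calls: Option<Vec<ToolCall>>,
}
```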

Release Notes:

- Fixed some responses being dropped from OpenAI-compatible providers
([#13741](https://github.com/zed-industries/zed/issues/13741)).
2024-07-03 11:07:11 -04:00
ᴀᴍᴛᴏᴀᴇʀ
922fcaf5a6
Add the ability to customize available models for OpenAI-compatible services (#13276)
Closes #11984, closes #11075.

Release Notes:

- Added the ability to customize available models for OpenAI-compatible
services ([#11984](https://github.com/zed-industries/zed/issues/11984))
([#11075](https://github.com/zed-industries/zed/issues/11075)).


![image](https://github.com/zed-industries/zed/assets/32017007/01057e7b-1f21-49ad-a3ad-abc5282ffaf0)
2024-06-25 16:37:02 -04:00
Antonio Scandurra
6ff01b17ca
Improve model selection in the assistant (#12472)
https://github.com/zed-industries/zed/assets/482957/3b017850-b7b6-457a-9b2f-324d5533442e


Release Notes:

- Improved the UX for selecting a model in the assistant panel. You can
now switch models using just the keyboard by pressing `alt-m`. Also, when
switching models via the UI, settings will now be updated automatically.
2024-05-30 12:36:07 +02:00
Toon Willems
9b74acc4f5
Add GPT-4o as possible model (#11764)
Resolves: #11766

Release Notes:

- Added GPT-4o support (see: https://openai.com/index/hello-gpt-4o/).
GPT-4o is better and faster than GPT-4 Turbo, at half the price.
2024-05-14 10:43:24 +02:00
Conrad Irwin
5515ba6043
Extract http from util (#11680)
This avoids the CLI linking against libssl, etc.

Release Notes:

- N/A
2024-05-10 15:50:20 -06:00
Marshall Bowers
0d26beb91b
Add configurable low-speed timeout for OpenAI provider (#11668)
This PR adds a setting to allow configuring the low-speed timeout for
the Assistant when using the OpenAI provider.

The `low_speed_timeout_in_seconds` setting accepts the number of seconds
the HTTP client may remain below a minimum speed limit (currently set to
100 bytes/second) before it times out.

```json
{
  "assistant": {
    "version": "1",
    "provider": { "name": "openai", "low_speed_timeout_in_seconds": 60 }
  },
}
```

This should help the case where the `openai` provider is being used with
a local model that requires higher timeouts.

Issue: https://github.com/zed-industries/zed/issues/9913

Release Notes:

- Added a `low_speed_timeout_in_seconds` setting to the Assistant's
OpenAI provider
([#9913](https://github.com/zed-industries/zed/issues/9913)).
2024-05-10 13:19:21 -04:00
Kyle Kelley
68a1ad89bb
New revision of the Assistant Panel (#10870)
This is a crate-only addition of a new version of the AssistantPanel.
We'll be putting this behind a feature flag while we iron out the new
experience.

Release Notes:

- N/A

---------

Co-authored-by: Nathan Sobo <nathan@zed.dev>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Conrad Irwin <conrad@zed.dev>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Co-authored-by: Antonio Scandurra <antonio@zed.dev>
Co-authored-by: Nate Butler <nate@zed.dev>
Co-authored-by: Nate Butler <iamnbutler@gmail.com>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Max <max@zed.dev>
2024-04-23 16:23:26 -07:00
Kyle Kelley
49371b44cb
Semantic Index (#10329)
This introduces semantic indexing in Zed based on chunking text from
files in the developer's workspace and creating vector embeddings using
an embedding model. As part of this, we've created an embeddings
provider trait that allows us to work with OpenAI, a local Ollama model,
or a Zed hosted embedding.

The semantic index is built by breaking down text for known
(programming) languages into manageable chunks that are smaller than the
max token size. Each chunk is then fed to a language model to create a
high-dimensional vector, which is normalized to a unit vector so it can be
compared with other vectors quickly via a simple dot product.
Alongside the vector, we store the path of the file and the range within
the document where the vector was sourced from.
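
Concretely, the normalization and comparison amount to a few lines; this
sketch shows why the dot product of two unit vectors is equivalent to
cosine similarity:

```rust
/// Scale a vector to unit length so that later comparisons reduce to a
/// plain dot product.
fn normalize(vector: &mut [f32]) {
    let magnitude = vector.iter().map(|x| x * x).sum::<f32>().sqrt();
    if magnitude > 0.0 {
        for component in vector.iter_mut() {
            *component /= magnitude;
        }
    }
}

/// Dot product of two unit vectors, i.e. their cosine similarity.
fn similarity(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}
```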

Zed will soon grok contextual similarity across different text snippets,
allowing for natural language search beyond keyword matching. This is
being put together both for human-based search as well as providing
results to Large Language Models to allow them to refine how they help
developers.

Remaining todo:

* [x] Change `provider` to `model` within the zed-hosted embeddings
database (as it's currently a combination of the provider and the model in
one name)


Release Notes:

- N/A

---------

Co-authored-by: Nathan Sobo <nathan@zed.dev>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Conrad Irwin <conrad@zed.dev>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Co-authored-by: Antonio <antonio@zed.dev>
2024-04-12 11:40:59 -06:00
Nathan Sobo
6d5787cfdc
Hard code max token counts for supported models (#9675) 2024-03-21 20:30:33 -06:00
Antonio Scandurra
9ab7a22fa8 Fix licensing errors 2024-03-20 15:52:02 +01:00
Antonio Scandurra
f2394c76f5 Fix licensing 2024-03-20 13:03:13 +01:00
Nathan Sobo
8ae5a3b61a
Allow AI interactions to be proxied through Zed's server so you don't need an API key (#7367)
Co-authored-by: Antonio <antonio@zed.dev>

Resurrected this from some assistant work I did in Spring of 2023.
- [x] Resurrect streaming responses
- [x] Use streaming responses to enable AI via Zed's servers by default
(but preserve API key option for now)
- [x] Simplify protobuf
- [x] Proxy to OpenAI on zed.dev
- [x] Proxy to Gemini on zed.dev
- [x] Improve UX for switching between OpenAI and Google models
- We currently disallow cycling when setting a custom model, but we need a
better solution to keep OpenAI models available while testing the Google
ones
- [x] Show remaining tokens correctly for Google models
- [x] Remove semantic index
- [x] Delete `ai` crate
- [x] Cloud front so we can ban abuse
- [x] Rate-limiting
- [x] Fix panic when using inline assistant
- [x] Double check the upgraded `AssistantSettings` are
backwards-compatible
- [x] Add hosted LLM interaction behind a `language-models` feature
flag.

Release Notes:

- We are temporarily removing the semantic index in order to redesign it
from scratch.

---------

Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Thorsten <thorsten@zed.dev>
Co-authored-by: Max <max@zed.dev>
2024-03-19 19:22:26 +01:00