Commit graph

9 commits

Author SHA1 Message Date
Conrad Irwin
e28496d4e2
Stop leaking isahc assumption (#18408)
Users of our http_client crate knew they were interacting with isahc because
they set isahc-specific extensions on the request. This change adds our own
equivalents of those APIs in preparation for changing the default HTTP
client.
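
A minimal sketch of the idea, assuming hypothetical names (`RedirectPolicy`
and `RequestExt` are illustrative, not the crate's actual API): the
client-agnostic option is stored in the request's extensions via the `http`
crate, where whichever backend is in use can read it back.

```rust
use http::Request;

// Hypothetical client-agnostic option, replacing an isahc-specific setting.
#[derive(Clone, Copy, Debug)]
pub enum RedirectPolicy {
    None,
    Follow,
}

// Hypothetical extension trait, so callers never name the underlying client.
pub trait RequestExt {
    fn follow_redirects(self, policy: RedirectPolicy) -> Self;
}

impl<T> RequestExt for Request<T> {
    fn follow_redirects(mut self, policy: RedirectPolicy) -> Self {
        self.extensions_mut().insert(policy);
        self
    }
}

fn main() {
    let request = Request::builder()
        .uri("https://example.com")
        .body(())
        .unwrap()
        .follow_redirects(RedirectPolicy::Follow);

    // Whatever backend is plugged in (isahc today, something else later)
    // reads the option back out of the extensions.
    let policy = request.extensions().get::<RedirectPolicy>();
    println!("{policy:?}");
}
```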

Release Notes:

- N/A
2024-09-26 14:01:05 -06:00
Marshall Bowers
02c43a5bf2
Add missing workspace lints (#15237)
This PR adds the workspace lint configuration to the following crates that
were missing it:

- `google_ai`
- `open_ai`
- `tab_switcher`

Release Notes:

- N/A
2024-07-25 19:52:24 -04:00
Mikayla Maki
855048041d
Update http crate name (#15041)
Release Notes:

- N/A
2024-07-23 15:01:05 -07:00
Antonio Scandurra
6ff01b17ca
Improve model selection in the assistant (#12472)
https://github.com/zed-industries/zed/assets/482957/3b017850-b7b6-457a-9b2f-324d5533442e


Release Notes:

- Improved the UX for selecting a model in the assistant panel. You can
now switch models using just the keyboard by pressing `alt-m`. Also, when
switching models via the UI, your settings will now be updated automatically.
2024-05-30 12:36:07 +02:00
Conrad Irwin
5515ba6043
Extract http from util (#11680)
This avoids the CLI linking against libssl, etc.

Release Notes:

- N/A
2024-05-10 15:50:20 -06:00
Marshall Bowers
0d26beb91b
Add configurable low-speed timeout for OpenAI provider (#11668)
This PR adds a setting to allow configuring the low-speed timeout for
the Assistant when using the OpenAI provider.

The `low_speed_timeout_in_seconds` setting accepts the number of seconds the
HTTP client may stay below a minimum speed limit (currently set to 100
bytes/second) before it times out.

```json
{
  "assistant": {
    "version": "1",
    "provider": { "name": "openai", "low_speed_timeout_in_seconds": 60 }
  }
}
```

This should help in cases where the `openai` provider is used with a local
model that requires higher timeouts.
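
As a rough illustration of that behavior (a minimal sketch, not the actual
HTTP client's implementation; the names are made up, and only the 100
bytes/second floor comes from the description above), a watchdog like this
would abort a transfer that stays below the speed floor for longer than the
configured timeout:

```rust
use std::time::{Duration, Instant};

// The minimum speed limit mentioned above.
const MIN_SPEED_BYTES_PER_SEC: f64 = 100.0;

// Hypothetical watchdog tracking how long the transfer has been too slow.
struct LowSpeedTimeout {
    timeout: Duration,
    below_since: Option<Instant>,
}

impl LowSpeedTimeout {
    fn new(timeout: Duration) -> Self {
        Self { timeout, below_since: None }
    }

    /// Called periodically with the bytes received over the last `elapsed`.
    /// Returns true once the transfer has stayed below the speed floor for
    /// longer than the configured timeout.
    fn should_abort(&mut self, bytes: u64, elapsed: Duration, now: Instant) -> bool {
        let speed = bytes as f64 / elapsed.as_secs_f64();
        if speed < MIN_SPEED_BYTES_PER_SEC {
            let since = *self.below_since.get_or_insert(now);
            now.duration_since(since) >= self.timeout
        } else {
            self.below_since = None;
            false
        }
    }
}

fn main() {
    // `low_speed_timeout_in_seconds: 60` corresponds to a 60-second grace period.
    let mut watchdog = LowSpeedTimeout::new(Duration::from_secs(60));
    let now = Instant::now();
    // 50 bytes over one second is below the floor, but still within the
    // grace period, so the transfer is allowed to continue for now.
    assert!(!watchdog.should_abort(50, Duration::from_secs(1), now));
}
```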

Issue: https://github.com/zed-industries/zed/issues/9913

Release Notes:

- Added a `low_speed_timeout_in_seconds` setting to the Assistant's
OpenAI provider
([#9913](https://github.com/zed-industries/zed/issues/9913)).
2024-05-10 13:19:21 -04:00
Antonio Scandurra
9ab7a22fa8 Fix licensing errors 2024-03-20 15:52:02 +01:00
Antonio Scandurra
f2394c76f5 Fix licensing 2024-03-20 13:03:13 +01:00
Nathan Sobo
8ae5a3b61a
Allow AI interactions to be proxied through Zed's server so you don't need an API key (#7367)
Co-authored-by: Antonio <antonio@zed.dev>

Resurrected this from some assistant work I did in Spring of 2023.
- [x] Resurrect streaming responses
- [x] Use streaming responses to enable AI via Zed's servers by default
(but preserve the API key option for now; see the sketch after this list)
- [x] Simplify protobuf
- [x] Proxy to OpenAI on zed.dev
- [x] Proxy to Gemini on zed.dev
- [x] Improve UX for switching between OpenAI and Google models
- We currently disallow cycling when setting a custom model, but we need a
better solution to keep OpenAI models available while testing the Google
ones
- [x] Show remaining tokens correctly for Google models
- [x] Remove semantic index
- [x] Delete `ai` crate
- [x] CloudFront so we can ban abuse
- [x] Rate-limiting
- [x] Fix panic when using inline assistant
- [x] Double-check that the upgraded `AssistantSettings` are
backwards-compatible
- [x] Add hosted LLM interaction behind a `language-models` feature
flag.
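
A minimal sketch of the routing described above, with hypothetical names and
endpoints (`CompletionProvider` and `https://zed.dev/api/completion` are
illustrative, not Zed's actual code): requests either go straight to OpenAI
with a user-supplied key, or to Zed's server, which holds the key,
rate-limits, and proxies the call.

```rust
use http::Request;

// Hypothetical provider choice; Zed's real types and endpoints differ.
enum CompletionProvider {
    /// The user supplies their own key and talks to OpenAI directly.
    OpenAi { api_key: String },
    /// No user key needed: Zed's server authenticates the user, attaches its
    /// own API key, applies rate limiting, and streams the response back.
    ZedDotDev { access_token: String },
}

fn completion_request(provider: &CompletionProvider, body: String) -> Request<String> {
    match provider {
        CompletionProvider::OpenAi { api_key } => Request::builder()
            .method("POST")
            .uri("https://api.openai.com/v1/chat/completions")
            .header("Authorization", format!("Bearer {api_key}"))
            .body(body)
            .unwrap(),
        CompletionProvider::ZedDotDev { access_token } => Request::builder()
            .method("POST")
            // Hypothetical proxy endpoint on zed.dev.
            .uri("https://zed.dev/api/completion")
            .header("Authorization", format!("Bearer {access_token}"))
            .body(body)
            .unwrap(),
    }
}

fn main() {
    let provider = CompletionProvider::ZedDotDev { access_token: "token".into() };
    let request = completion_request(&provider, r#"{"messages": []}"#.into());
    println!("{} {}", request.method(), request.uri());
}
```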

Release Notes:

- We are temporarily removing the semantic index in order to redesign it
from scratch.

---------

Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Thorsten <thorsten@zed.dev>
Co-authored-by: Max <max@zed.dev>
2024-03-19 19:22:26 +01:00