This PR polishes the search bar UI, making the layout denser and the
spacing more consistent with the rest of the app. I've also
re-ordered the toolbar items to reflect some of @iamnbutler's original
search designs. The items related to the search query are on the left,
and the actions that navigate the buffer (next, prev, select all, result
count) are on the right.
This PR adds a disclosable component and related wiring, and uses it to
implement the collaboration panel's disclosure of subchannels. It also
adds a component test page to make style development easier, and
refactors components into v0.2 safe styles (as described in
[TWAZ #16](https://zed.dev/blog/this-week-at-zed-16)).
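For a sense of what the disclosure state boils down to, here is a
minimal, purely illustrative sketch; the names and types below are
assumptions for illustration, not gpui's or the collab panel's actual
API:

```rust
use std::collections::HashSet;

/// Illustrative only: the state behind a disclosable item, used here to
/// show or hide a channel's subchannels in the collaboration panel.
#[derive(Default)]
struct DisclosureState {
    disclosed_channels: HashSet<u64>,
}

impl DisclosureState {
    /// Toggles whether a channel's subchannels are shown.
    fn toggle(&mut self, channel_id: u64) {
        if !self.disclosed_channels.remove(&channel_id) {
            self.disclosed_channels.insert(channel_id);
        }
    }

    /// Returns true if the channel is currently expanded.
    fn is_disclosed(&self, channel_id: u64) -> bool {
        self.disclosed_channels.contains(&channel_id)
    }
}
```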
Release Notes:
- N/A
Lately, I've been finding rust-analyzer unusably slow when editing large
files (like `editor_tests.rs` or `integration_tests.rs`). When I
profile the rust-analyzer process, I see that it sometimes saturates up
to 10 cores processing a queue of code action requests.
Additionally, sometimes when collaborating on large files like these, we
see long delays in propagating buffer operations. I'm still not sure why
this is happening, but whenever I look at the server logs in Datadog, I
see that there are remote `CodeActions` and `DocumentHighlights`
messages being processed that take upwards of 30 seconds. I think what
may be happening is that many such requests are resolving at once, and
the responses are taking up too much of the host's bandwidth.
I think that both of these problems are caused by us sending way too
many code action and document highlight requests to rust-analyzer. This
PR adds a simple debounce between changing selections and making these
requests.
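As a rough sketch of the pattern (not the actual editor code: the tokio
dependency, the names, and the 250 ms interval are all assumptions made
for illustration), the debounce amounts to waiting for selections to
settle before issuing the next request:

```rust
use std::time::Duration;
use tokio::{sync::watch, time::sleep};

/// Hypothetical debounce interval; the real value is a tuning decision.
const DEBOUNCE: Duration = Duration::from_millis(250);

/// Waits until selections have been stable for `DEBOUNCE` before asking
/// the language server for code actions / document highlights.
async fn debounced_requests(mut selections: watch::Receiver<u64>) {
    loop {
        // Wait for the next selection change.
        if selections.changed().await.is_err() {
            return; // sender dropped; editor closed
        }
        // Keep restarting the timer while changes continue to arrive.
        loop {
            tokio::select! {
                _ = sleep(DEBOUNCE) => break,
                changed = selections.changed() => {
                    if changed.is_err() {
                        return;
                    }
                }
            }
        }
        request_code_actions().await;
    }
}

/// Placeholder for the actual `textDocument/codeAction` request.
async fn request_code_actions() {}

#[tokio::main]
async fn main() {
    let (tx, rx) = watch::channel(0u64);
    tokio::spawn(debounced_requests(rx));
    // Simulate a burst of selection changes, then a pause.
    for generation in 1..=5u64 {
        tx.send(generation).unwrap();
        sleep(Duration::from_millis(50)).await;
    }
    sleep(Duration::from_millis(400)).await; // long enough for one request
}
```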
From my local testing, this debounce makes rust-analyzer *much* more
responsive when moving the cursor around a large file like
`editor_tests.rs`.
This fixes a bug where text moved up and down by one pixel in the
buffer search query editor while typing.
Release Notes:
* Fixed a bug where editors didn't auto-scroll when typing if all
cursors could not fit within the viewport.
This is a deep cut. There's still more work to do before we start
building UI with this. I've approached this as additively as possible,
but I've made a few changes to the rest of the code that I think would
be good to upstream before proceeding too much further.
Most of the interesting pieces are in gpui/playground, which is a
standalone binary that opens a single window and renders a new kind of
element. The layout of these new elements is provided by the taffy
layout engine crate, which conforms to web conventions. The idea is that
playground is relatively cheap to build and work on. As concepts
coalesce in playground, we can drop them into gpui and start
transitioning.
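To give a flavor of that layout model, here's a minimal standalone
sketch that drives taffy directly (assuming roughly the taffy 0.3 API;
this is not the playground's element code):

```rust
use taffy::prelude::*;

fn main() {
    let mut taffy = Taffy::new();

    // Two leaf nodes standing in for UI elements.
    let sidebar = taffy
        .new_leaf(Style {
            size: Size { width: points(200.0), height: auto() },
            ..Default::default()
        })
        .unwrap();
    let content = taffy
        .new_leaf(Style {
            flex_grow: 1.0, // fill the remaining space, CSS-flexbox style
            ..Default::default()
        })
        .unwrap();

    // A fixed-size row container, laid out with web conventions.
    let root = taffy
        .new_with_children(
            Style {
                size: Size { width: points(800.0), height: points(600.0) },
                flex_direction: FlexDirection::Row,
                ..Default::default()
            },
            &[sidebar, content],
        )
        .unwrap();

    taffy.compute_layout(root, Size::MAX_CONTENT).unwrap();
    println!("content size: {:?}", taffy.layout(content).unwrap().size);
}
```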
- vim: support `P` for paste before
- vim: support `P` in visual mode for paste without overriding the clipboard
- vim: fix position when using `p` on text copied outside Zed
- vim: fix indentation when using `p` on text copied from Zed
This PR adds a new option to language configs called `word_boundaries`
that controls which characters should be recognised as word boundaries
for a given language. This will improve our UX for languages such as
PHP and Tailwind.
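Purely to illustrate the idea (the type and the character set below are
assumptions for illustration, not the actual implementation), a
per-language boundary set could be consulted like this:

```rust
/// Illustrative only: a language's configured set of word-boundary
/// characters. For PHP, `$` would be left out of this set so that
/// `$variable` is treated as a single word.
struct LanguageWordBoundaries {
    boundaries: Vec<char>,
}

impl LanguageWordBoundaries {
    fn is_word_boundary(&self, c: char) -> bool {
        self.boundaries.contains(&c)
    }
}

fn main() {
    let php = LanguageWordBoundaries {
        boundaries: " \t.,;(){}[]".chars().collect(),
    };
    assert!(!php.is_word_boundary('$')); // `$request` stays one word
    assert!(php.is_word_boundary('.'));
    println!("boundary checks passed");
}
```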
Release Notes:
- Improved completions for PHP
[#1820](https://github.com/zed-industries/community/issues/1820)
---------
Co-authored-by: Julia Risley <julia@zed.dev>
[This PR has been sitting around for a
bit](https://github.com/zed-industries/zed/pull/2845). I received mixed
opinions from the team on how this setting should work, whether it
should use the full model names or some simpler form of them, etc. I
went ahead and made the decision to do the following:
- Use the full model names in settings - ex: `gpt-4-0613`
- Default to `gpt-4-0613` when no setting is present
- Save the full model names in the conversation history files (as was
done previously) - ex: `gpt-4-0613`
- Display the shortened model names in the assistant - ex: `gpt-4`
- Not worry about adding an option to add custom models (can add in a
follow-up PR)
- Not query which models are available to the user via their API key
(can add in a follow-up PR)
Release Notes:
- Added a `default_open_ai_model` setting for the assistant (defaults to
`gpt-4-0613`).
---------
Co-authored-by: Mikayla <mikayla@zed.dev>
Small PR aimed at improving a few edge cases in semantic indexing of
large projects.
Release Notes (Preview only):
- Batched large files with a number of documents greater than
`EMBEDDINGS_BATCH_SIZE` (see the sketch below).
- Ensured that the job handle counting mechanism is consistent with
inner file batching.
- Updated tab content names for semantic search to match text/regex
searches.
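As a rough sketch of the batching idea (the names and the batch size
are placeholders, not the actual semantic index code):

```rust
/// Placeholder value; the real constant lives in the semantic index code.
const EMBEDDINGS_BATCH_SIZE: usize = 128;

/// Splits one file's parsed documents into embedding batches no larger
/// than `EMBEDDINGS_BATCH_SIZE`, so a single huge file no longer
/// produces an oversized request.
fn batch_documents(documents: &[String]) -> Vec<Vec<String>> {
    documents
        .chunks(EMBEDDINGS_BATCH_SIZE)
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    let documents: Vec<String> = (0..300).map(|i| format!("doc {i}")).collect();
    let batches = batch_documents(&documents);
    assert_eq!(batches.len(), 3); // 128 + 128 + 44
    println!("{} batches", batches.len());
}
```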
This reduces our dep count by 1% at the expense of not supporting
playback of the .flac, .mp3, and .vorbis formats. We only use .wav
anyway.
Release Notes:
- N/A