* [x] Re-send operations that weren't sent while disconnected
* [x] Apply other clients' operations that were missed while
disconnected
* [x] Update collaborators that joined / left while disconnected
* [x] Inform current collaborators that your peer id has changed
* [x] Refresh channel buffer collaborators on server restart
* [x] Randomized test
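For context, here is a rough sketch of the reconnect flow covered by the checklist above. All type and method names (`ChannelBuffer`, `RejoinResponse`, and so on) are hypothetical stand-ins, not the actual collab crate APIs:

```rust
// Hypothetical sketch of reconnect handling for a channel buffer; the real
// implementation lives in the collab crates and uses different types.

#[derive(Clone, Debug)]
struct Operation(String);

#[derive(Clone, Debug)]
struct Collaborator {
    peer_id: u64,
    user_id: u64,
}

struct RejoinResponse {
    missed_ops: Vec<Operation>,
    collaborators: Vec<Collaborator>,
}

struct ChannelBuffer {
    unsent_ops: Vec<Operation>,
    collaborators: Vec<Collaborator>,
    text: String,
}

impl ChannelBuffer {
    fn handle_reconnect(&mut self, response: RejoinResponse, send: impl Fn(&Operation)) {
        // Re-send operations that weren't acknowledged before the disconnect.
        for op in self.unsent_ops.drain(..) {
            send(&op);
        }
        // Apply operations from other clients that we missed while offline.
        for op in response.missed_ops {
            self.text.push_str(&op.0); // stand-in for real operation application
        }
        // Replace the collaborator list wholesale: this covers joins, leaves,
        // and peer-id changes in one step.
        self.collaborators = response.collaborators;
    }
}
```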
This PR updates our client code to connect to preview whenever we're not
on stable. This will make it more likely that we'll be able to
collaborate on a dev build, but obviously won't work if there's a
protocol change on main that hasn't made its way to preview yet.
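Conceptually the change boils down to picking the collab server by release channel. A minimal sketch, with placeholder URLs and a placeholder `ReleaseChannel` enum rather than the real client code:

```rust
// Hypothetical sketch: choose the collab server by release channel.
// The URLs and the enum here are placeholders, not the real ones.

enum ReleaseChannel {
    Stable,
    Preview,
    Dev,
    Nightly,
}

fn collab_server_url(channel: &ReleaseChannel) -> &'static str {
    match channel {
        // Stable builds keep talking to the stable collab server.
        ReleaseChannel::Stable => "https://collab.example.com",
        // Preview, dev, and nightly builds all connect to preview, so a dev
        // build can collaborate as long as main's protocol hasn't diverged
        // from what preview speaks yet.
        ReleaseChannel::Preview | ReleaseChannel::Dev | ReleaseChannel::Nightly => {
            "https://collab-preview.example.com"
        }
    }
}
```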
This PR ships a series of optimizations for the semantic search engine.
Mostly focused on removing invalid states, optimizing requests to
OpenAI, and reducing token usage.
Release Notes (Preview-Only):
- Added eager incremental indexing in the background on a debounce.
- Added a local embeddings cache to reduce redundant calls to OpenAI.
- Moved to an embeddings queue model that ensures optimal batch sizes at the
  token level and atomic file & document writes (see the sketch after these
  notes).
- Adjusted OpenAI Embedding API requests to use the server-provided backoff
  delays during rate limiting.
- Removed flush races between the file-parsing and embedding-queue steps.
- Moved truncation into the parsing step, reducing the probability that
  OpenAI encounters bad data.
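For readers who want the mechanics rather than the bullet points, here is a rough sketch of the two pieces that are easiest to misread from prose alone: token-level batch sizing in the embeddings queue, and honoring the server-provided backoff delay. Names, limits, and signatures are illustrative, not the actual engine code:

```rust
use std::time::Duration;

// Illustrative token budget per embeddings request; the real limit depends on
// the model and the engine's configuration.
const MAX_TOKENS_PER_BATCH: usize = 8000;

struct Span {
    text: String,
    token_count: usize,
}

/// Group spans into batches whose total token count stays under the budget,
/// so each request to OpenAI is as full as possible without being rejected.
fn batch_by_tokens(spans: Vec<Span>) -> Vec<Vec<Span>> {
    let mut batches = Vec::new();
    let mut current = Vec::new();
    let mut current_tokens = 0;
    for span in spans {
        if current_tokens + span.token_count > MAX_TOKENS_PER_BATCH && !current.is_empty() {
            batches.push(std::mem::take(&mut current));
            current_tokens = 0;
        }
        current_tokens += span.token_count;
        current.push(span);
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

/// On a rate-limit response, sleep for the delay the server asked for (e.g.
/// from a retry-after header) instead of a purely local backoff schedule.
fn backoff_delay(retry_after_secs: Option<u64>, attempt: u32) -> Duration {
    match retry_after_secs {
        Some(secs) => Duration::from_secs(secs),
        // Fall back to exponential backoff if the server gave no hint.
        None => Duration::from_secs(2u64.saturating_pow(attempt)),
    }
}
```

Batching by token count rather than by file count keeps each request near the model's limit, and using the server's own delay avoids hammering the API while it is rate limiting us.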
This should have no user-visible impact.
For vim's `.` (repeat) command to work, actions must be replayable.
Currently `editor::MoveDown` *sometimes* moves the cursor down and
*sometimes* selects the next completion.
For replay we need to be able to separate the two.
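One way to make that separation concrete (with hypothetical action names, not the actual editor API) is to resolve the ambiguity at dispatch time, so the recorded action replays deterministically:

```rust
// Hypothetical sketch of separating the two behaviors so a recorded action
// replays deterministically; names do not match the actual editor crate.

#[derive(Clone, Debug)]
enum EditorAction {
    MoveDown,             // always moves the cursor down
    SelectNextCompletion, // always selects the next completion item
}

struct Editor {
    completions_visible: bool,
    recorded_actions: Vec<EditorAction>,
}

impl Editor {
    /// Resolve the "down" keypress into an unambiguous action *before*
    /// recording it, so a repeat can replay exactly what happened.
    fn handle_down_key(&mut self) {
        let action = if self.completions_visible {
            EditorAction::SelectNextCompletion
        } else {
            EditorAction::MoveDown
        };
        self.recorded_actions.push(action.clone());
        self.dispatch(action);
    }

    fn dispatch(&mut self, action: EditorAction) {
        match action {
            EditorAction::MoveDown => { /* move the cursor down one row */ }
            EditorAction::SelectNextCompletion => { /* advance the completions menu */ }
        }
    }
}
```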
Because of the way we set up tools that add rows inside the toolbar, it is
complicated to tighten up the toolbar's spacing.
This PR just reverts the changes I made previously. To let tools of unequal
height descend from the toolbar, we'll need to properly add rows below it
instead of rendering search inside of it.
Release Notes:
- Preview – Fixed an issue where search filters were partially cut off
in the UI.