Previously, we would use `Project::serialize_buffer_for_peer` on the host and
`Project::deserialize_buffer` on the guest to create a new buffer, or to send
just the buffer's ID if the host thought it had already been sent.
These methods would be called as part of other methods, such as
`Project::open_buffer_by_id` or `Project::open_buffer_for_symbol`.
However, if any of the tasks driving the futures that eventually
called `Project::deserialize_buffer` was dropped after the host
responded with the buffer state but (crucially) before the guest
deserialized and registered it, the host could end up believing the
guest had the buffer (and would thus send just the buffer id) while
the guest waited for it indefinitely.
Given how crucial this interaction is, this commit switches to creating
remote buffers for peers out of band. The host will push buffers to guests,
who will always refer to buffers via IDs and wait for the host to send them,
as opposed to including the buffer's payload as part of some other operation.
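A minimal sketch of the guest side of this scheme, using hypothetical names (`RemoteBuffers`, `OpenBuffer`) and `String` stand-ins for buffer state and operations; the real types in Zed differ:

```rust
use std::collections::HashMap;

/// The guest's view of a remote buffer. Buffers are only ever created when
/// the host pushes their state; everything else refers to them by id.
enum OpenBuffer {
    /// The buffer's state has arrived from the host.
    Loaded(String),
    /// Operations that arrived before the buffer itself; replayed once the
    /// host pushes the buffer's state.
    Loading(Vec<String>),
}

#[derive(Default)]
struct RemoteBuffers {
    buffers: HashMap<u64, OpenBuffer>,
}

impl RemoteBuffers {
    /// Handle the host pushing a buffer out of band.
    fn handle_create_buffer(&mut self, id: u64, state: String) {
        if let Some(OpenBuffer::Loading(pending)) =
            self.buffers.insert(id, OpenBuffer::Loaded(state))
        {
            for _op in pending {
                // Replay operations that were deferred while waiting.
            }
        }
    }

    /// Handle an operation that refers to a buffer by id. If the buffer
    /// hasn't been pushed yet, queue the operation instead of blocking.
    fn handle_buffer_operation(&mut self, id: u64, op: String) {
        match self
            .buffers
            .entry(id)
            .or_insert_with(|| OpenBuffer::Loading(Vec::new()))
        {
            OpenBuffer::Loaded(_state) => { /* apply the operation now */ }
            OpenBuffer::Loading(pending) => pending.push(op),
        }
    }
}
```

Because a buffer's payload is never embedded in another response, a dropped future can no longer leave the host believing the guest has a buffer it never registered.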
As part of #1405, we changed the way we performed undo and redo to
support combining transactions that were not temporally adjacent for
IME purposes.
That release introduced a bug that caused divergence when performing
undo: we were only changing the visibility of fragments whose insertion
id was contained in the undo operation. However, an undo operation also
affects deletions, which we were mistakenly not considering. Randomized
tests caught this, but apparently we didn't run enough of them.
Now, instead of using versioned offset ranges, we locate the
fragments associated with a transaction using the transaction's
edit ids. To make this possible, buffers now store a new map called
`insertion_slices`, which lets you look up the ranges of the insertions
that were affected by a given edit.
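A rough sketch of the idea, with hypothetical id types (the real buffer uses Lamport-timestamp-style ids and a fragment tree):

```rust
use std::collections::HashMap;
use std::ops::Range;

/// Hypothetical ids; stand-ins for the buffer's real timestamp types.
type EditId = (u32 /* replica */, u32 /* local seq */);
type InsertionId = (u32, u32);

/// Sketch of the `insertion_slices` idea: for each edit, remember which
/// slices of which insertions it touched, so that undoing a transaction
/// can locate the affected fragments (including deletions) directly.
#[derive(Default)]
struct Buffer {
    insertion_slices: HashMap<EditId, Vec<(InsertionId, Range<usize>)>>,
}

impl Buffer {
    fn record_edit(&mut self, edit: EditId, slices: Vec<(InsertionId, Range<usize>)>) {
        self.insertion_slices.entry(edit).or_default().extend(slices);
    }

    /// Collect every insertion slice touched by a transaction's edit ids.
    fn slices_for_transaction(&self, edit_ids: &[EditId]) -> Vec<(InsertionId, Range<usize>)> {
        edit_ids
            .iter()
            .flat_map(|id| self.insertion_slices.get(id).into_iter().flatten())
            .cloned()
            .collect()
    }
}
```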
Co-authored-by: Antonio Scandurra <antonio@zed.dev>
* Make `UnregisterProject` a request. This way the client-side project can wait
to clear out its remote id until the request has completed, so that the
contacts panel can avoid showing duplicate private/public projects in the
brief time after unregistering a project, before the next UpdateCollaborators
message is received.
* Remove the `RegisterWorktree` and `UnregisterWorktree` methods and replace
them with a single `UpdateProject` method that idempotently updates the
project's list of worktrees (see the sketch below).
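A sketch of what the idempotent reconciliation might look like, with a hypothetical `WorktreeMetadata` wire type (the real proto message carries more fields):

```rust
use std::collections::HashMap;

/// Hypothetical wire representation of a worktree in an `UpdateProject` message.
#[derive(Clone)]
struct WorktreeMetadata {
    id: u64,
    root_name: String,
}

#[derive(Default)]
struct Project {
    worktrees: HashMap<u64, WorktreeMetadata>,
}

impl Project {
    /// Reconcile the project's worktrees against the full list carried by an
    /// `UpdateProject` message.
    fn handle_update_project(&mut self, worktrees: Vec<WorktreeMetadata>) {
        // Drop worktrees that are no longer part of the project.
        self.worktrees.retain(|id, _| worktrees.iter().any(|w| w.id == *id));
        // Insert or refresh everything the message carries; applying the
        // same message twice is a no-op.
        for worktree in worktrees {
            self.worktrees.insert(worktree.id, worktree);
        }
    }
}
```

Because the message carries the full list, applying it twice (or applying a stale duplicate) converges to the same state, which is what makes replacing the paired register/unregister messages safe.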
We temporarily let the buffer grow when a message's size exceeds the limit,
but restore its capacity shortly after. This ensures that, over its entire
lifetime, each connection only ever uses 1 MB.
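A sketch of the buffer management described above, assuming a hypothetical `read_message` helper and a 1 MB cap:

```rust
/// Hypothetical cap on a connection's steady-state receive buffer.
const MAX_BUFFER_SIZE: usize = 1024 * 1024; // 1 MB

fn read_message(buffer: &mut Vec<u8>, message_len: usize) {
    // May grow past MAX_BUFFER_SIZE for a single oversized message.
    buffer.resize(message_len, 0);
    // ... read `message_len` bytes from the socket into `buffer` and
    // handle the message ...

    // Give back the excess memory so the connection never holds more
    // than MAX_BUFFER_SIZE at rest.
    buffer.clear();
    buffer.shrink_to(MAX_BUFFER_SIZE);
}
```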
This allows these events to be logged in the Zed client (until we set up tracing there).
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Co-Authored-By: Max Brunsfeld <maxbrunsfeld@gmail.com>
- Add a bunch of events to Peer's async connection handling logic
- Use an EnvFilter to allow more control over the verbosity level of tracing on a per-module basis (see the sketch below)
- Wire up logging to emit trace events (we actually probably want to do this the other way around)
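As a sketch, the `EnvFilter` setup could look roughly like this with the `tracing-subscriber` crate (the server's actual layering may differ):

```rust
use tracing_subscriber::{fmt, prelude::*, EnvFilter};

fn init_tracing() {
    // Verbosity is controlled per module via an env var, e.g.
    // RUST_LOG=info,server::rpc=debug,server::peer=trace
    // (module paths are illustrative).
    tracing_subscriber::registry()
        .with(EnvFilter::from_default_env())
        .with(fmt::layer())
        .init();
}
```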
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
* Be sure we send updates to multiple clients for the same user
* Be sure we send a full contacts update on initial connection
As part of this commit, I fixed an issue where we couldn't disconnect and reconnect in tests. The first disconnect would cause the I/O future to terminate asynchronously, which made us sign out even though the active connection didn't belong to that future. I added a guard to ensure that we only sign out if the terminating I/O future is associated with the current connection.
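A minimal sketch of that guard, assuming hypothetical `Client`/`ConnectionId` types:

```rust
use std::sync::Mutex;

type ConnectionId = u64;

/// Hypothetical client state. The I/O future remembers the id of the
/// connection it is driving; when it terminates, we sign out only if that
/// connection is still the active one.
#[derive(Default)]
struct Client {
    active_connection: Mutex<Option<ConnectionId>>,
}

impl Client {
    fn on_io_future_terminated(&self, connection_id: ConnectionId) {
        let mut active = self.active_connection.lock().unwrap();
        // A stale future from a previous connection must not sign us out.
        if *active == Some(connection_id) {
            *active = None;
            // ... proceed with the sign-out logic ...
        }
    }
}
```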
This ensures that entries don't randomly re-appear on remote worktrees
due to an update being observed too late. In fact, it ensures that the remote
worktree has the same starting state as the host's before preemptively applying
the fs operation locally.
When opening a buffer, some language servers might start reporting
diagnostics. When closing a buffer, they might report that no diagnostics
are present for that buffer. Previously, we would keep an empty summary entry,
which would cause us to open a buffer in the project diagnostics view, only to
drop it because it contained no diagnostics. However, the act of opening it
caused the language server to asynchronously report non-empty diagnostics.
We would therefore handle this as an update, but the previous closing of the
buffer would cause the language server to report empty diagnostics again. This
would cause the project diagnostics view to thrash infinitely between these two
states, pegging the CPU and constantly refreshing the UI.
With this commit, we no longer maintain empty summary entries for files that contain
no diagnostics, which fixes the issue described above.
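A sketch of the fix, with hypothetical types standing in for the project's real diagnostic summaries:

```rust
use std::collections::HashMap;
use std::path::PathBuf;

#[derive(Default, Clone, Copy)]
struct DiagnosticSummary {
    error_count: usize,
    warning_count: usize,
}

impl DiagnosticSummary {
    fn is_empty(&self) -> bool {
        self.error_count == 0 && self.warning_count == 0
    }
}

/// On each diagnostics update, drop the entry entirely when the summary is
/// empty instead of keeping an empty placeholder, so the project diagnostics
/// view never opens a buffer that has nothing to show.
fn update_summary(
    summaries: &mut HashMap<PathBuf, DiagnosticSummary>,
    path: PathBuf,
    summary: DiagnosticSummary,
) {
    if summary.is_empty() {
        summaries.remove(&path);
    } else {
        summaries.insert(path, summary);
    }
}
```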
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
We have a hypothesis that the server gets stuck while processing
an incoming message, either because the buffer fills up or because
a handler never completes. This should mitigate that and, once we
add logging, give us some clue as to what exactly is causing it.
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Previously, we weren't fully clearing the state associated with projects and worktrees when losing the connection. As a result, guest avatars didn't disappear and we couldn't re-share upon reconnect.
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Also, increase the receive timeout to 30 seconds. We'll still respond immediately
to explicit disconnection, but when temporary network blips delay pings, we think
we should err on the side of keeping the connection alive. This is in response to
a false-positive 'host disconnected' state that we observed while pairing today,
even though the host (Keith) clearly still had a working internet connection, as
evidenced by the fact that we were screen sharing.
Co-Authored-By: Keith Simmons <keith@zed.dev>
* Make advance_clock more realistic by waking timers in order,
instead of all at once (see the sketch after this list).
* Don't advance the clock when simulating random delays.
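A rough sketch of waking timers in order, using hypothetical `TestClock`/`wake_timer` names rather than the real deterministic executor:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;
use std::time::Duration;

/// Timers are kept in a min-heap keyed by their deadline; `advance_clock`
/// wakes them one at a time, in deadline order, rather than all at once.
#[derive(Default)]
struct TestClock {
    now: Duration,
    timers: BinaryHeap<Reverse<(Duration, u64 /* timer id */)>>,
}

impl TestClock {
    fn advance_clock(&mut self, duration: Duration) {
        let target = self.now + duration;
        while let Some(Reverse((deadline, id))) = self.timers.peek().copied() {
            if deadline > target {
                break;
            }
            self.timers.pop();
            // Step time to each deadline so work scheduled by one timer can
            // itself schedule (and observe) later timers within this window.
            self.now = deadline;
            wake_timer(id);
        }
        self.now = target;
    }
}

fn wake_timer(_id: u64) { /* wake the future waiting on this timer */ }
```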
Co-Authored-By: Keith Simmons <keith@zed.dev>
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
When a network connection is lost without being explicitly closed by the
other end, writes to that connection will error, but reads will just wait
indefinitely.
This allows the tests to exercise our heartbeat logic.
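A sketch of the receive side of the heartbeat, using tokio as a stand-in for the real runtime: writes to a dead connection fail fast, but reads hang, so the reader enforces a timeout and treats silence (not even a ping) as disconnection.

```rust
use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::timeout;

/// Hypothetical receive timeout, matching the 30 seconds mentioned above.
const RECEIVE_TIMEOUT: Duration = Duration::from_secs(30);

async fn read_loop(mut incoming: mpsc::Receiver<Vec<u8>>) {
    loop {
        match timeout(RECEIVE_TIMEOUT, incoming.recv()).await {
            Ok(Some(message)) => { let _ = message; /* dispatch it */ }
            Ok(None) => break,      // connection closed explicitly
            Err(_elapsed) => break, // no traffic for 30s: assume it's dead
        }
    }
}
```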
This allows us to drop the context *after* we have run all futures to
completion, which is crucial: otherwise, we would never drop entities
and/or flush effects.
However, the randomized integration test is still failing:
```
ITERATIONS=100000 SEED=3027 OPERATIONS=200 cargo test --release test_random --package=zed-server -- --nocapture
```
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Instead, create an empty worktree on guests when a worktree is first *registered*, then populate it via an initial UpdateWorktree message.
This prevents the host from referencing, in definition RPC responses, a worktree that the guest hasn't yet observed. We could have waited until the entire worktree was shared, but that could take a long time, so instead we create an empty one on guests and proceed from there.
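A sketch of the guest side, with hypothetical names and `String` entries standing in for real worktree entries:

```rust
use std::collections::HashMap;

#[derive(Default)]
struct Snapshot {
    entries: Vec<String>, // stand-in for real worktree entries
}

#[derive(Default)]
struct RemoteProject {
    worktrees: HashMap<u64, Snapshot>,
}

impl RemoteProject {
    /// When a worktree is registered, create an empty snapshot immediately
    /// under its id, so any RPC response referencing the id can resolve
    /// right away, even before the initial scan has finished arriving.
    fn handle_register_worktree(&mut self, id: u64) {
        self.worktrees.entry(id).or_default();
    }

    /// The host then streams one or more UpdateWorktree messages to fill
    /// the snapshot in.
    fn handle_update_worktree(&mut self, id: u64, added: Vec<String>) {
        if let Some(snapshot) = self.worktrees.get_mut(&id) {
            snapshot.entries.extend(added);
        }
    }
}
```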
We still have randomized test failures as of this commit:
```
SEED=9519 MAX_PEERS=2 ITERATIONS=10000 OPERATIONS=7 ct -p zed-server test_random_collaboration
```
Co-Authored-By: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
We don't need this anymore because worktree updates are foreground
messages.
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Co-Authored-By: Max Brunsfeld <max@zed.dev>