Commit graph

228 commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| KCaverly | 19c2df4822 | outlined when truncation is taking place in the prompt | 2023-10-19 14:33:52 -04:00 |
| KCaverly | 178a84bcf6 | progress on smarter truncation strategy for file context | 2023-10-18 17:56:59 -04:00 |
| KCaverly | 587fd707ba | added smarter error handling for file_context prompts without provided buffers | 2023-10-18 16:40:09 -04:00 |
| KCaverly | f59f2eccd5 | added dumb truncation strategies to file_context and generate | 2023-10-18 16:32:14 -04:00 |
| KCaverly | a0e01e075d | fix for error when truncating a length less than the string length | 2023-10-18 16:31:29 -04:00 |
| KCaverly | 32853c2044 | added initial placeholder for truncation without a valid strategy | 2023-10-18 16:23:53 -04:00 |
| KCaverly | 473067db31 | update PromptPriority to accomodate for both Mandatory and Ordered prompts | 2023-10-18 15:56:39 -04:00 |
| KCaverly | aa1825681c | update the assistant panel to use new prompt templates | 2023-10-18 14:20:12 -04:00 |
| KCaverly | b9bb27512c | fix template ordering during prompt chain generation | 2023-10-18 13:10:31 -04:00 |
| KCaverly | fa61c1b9c1 | add prompt template for generate inline content | 2023-10-18 13:03:11 -04:00 |
| KCaverly | 178a79fc47 | added prompt template for file context without truncation | 2023-10-18 12:29:10 -04:00 |
| KCaverly | 02853bbd60 | added prompt template for repository context | 2023-10-17 17:29:07 -04:00 |
| KCaverly | a874a09b7e | added openai language model tokenizer and LanguageModel trait | 2023-10-17 16:21:03 -04:00 |
| KCaverly | ad92fe49c7 | implement initial concept of prompt chain | 2023-10-17 11:58:45 -04:00 |
| KCaverly | 500af6d775 | progress on prompt chains | 2023-10-16 18:47:10 -04:00 |
| KCaverly | 40755961ea | added initial template outline | 2023-10-16 11:54:32 -04:00 |
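
The commits above (prompt templates, a PromptPriority with Mandatory and Ordered variants, token-budget truncation, a LanguageModel trait with an OpenAI tokenizer) outline a prompt-chain design. The following is a minimal Rust sketch of how those pieces could fit together; apart from the PromptPriority name taken from the commit messages, every type, function, and the whitespace-based token estimate is an illustrative assumption, not Zed's actual implementation.

```rust
// Hypothetical sketch only: names and logic are assumptions inspired by the
// commit messages above, not the code behind them.

#[derive(Clone, Copy)]
enum PromptPriority {
    /// Always included, never truncated.
    Mandatory,
    /// Included while the budget lasts; higher `order` is cut first.
    Ordered { order: usize },
}

struct PromptSection {
    priority: PromptPriority,
    content: String,
}

/// Crude stand-in for a real tokenizer (the commits mention an OpenAI tokenizer).
fn estimate_tokens(text: &str) -> usize {
    text.split_whitespace().count()
}

/// Keep roughly the first `max_tokens` tokens of `text`.
fn truncate_to(text: &str, max_tokens: usize) -> String {
    text.split_whitespace()
        .take(max_tokens)
        .collect::<Vec<_>>()
        .join(" ")
}

/// Spend the token budget on mandatory sections first, then on ordered
/// sections in ascending `order`; emit whatever survives in original order.
fn assemble(sections: &[PromptSection], budget: usize) -> String {
    let mut indices: Vec<usize> = (0..sections.len()).collect();
    indices.sort_by_key(|&i| match sections[i].priority {
        PromptPriority::Mandatory => (0, 0),
        PromptPriority::Ordered { order } => (1, order),
    });

    let mut remaining = budget;
    let mut kept = vec![String::new(); sections.len()];
    for i in indices {
        let cost = estimate_tokens(&sections[i].content);
        if cost <= remaining {
            kept[i] = sections[i].content.clone();
            remaining -= cost;
        } else if remaining > 0 {
            kept[i] = truncate_to(&sections[i].content, remaining);
            remaining = 0;
        }
        // Sections that receive no budget are dropped entirely.
    }

    kept.into_iter()
        .filter(|part| !part.is_empty())
        .collect::<Vec<_>>()
        .join("\n\n")
}

fn main() {
    let sections = vec![
        PromptSection {
            priority: PromptPriority::Mandatory,
            content: "You are a coding assistant. Rewrite the selection.".into(),
        },
        PromptSection {
            priority: PromptPriority::Ordered { order: 1 },
            content: "File context: fn main() { println!(\"hello\"); }".into(),
        },
        PromptSection {
            priority: PromptPriority::Ordered { order: 0 },
            content: "Repository context: a Rust workspace with many crates.".into(),
        },
    ];
    // With a tight budget, the file context is truncated before the
    // repository context because it has the higher `order` value.
    println!("{}", assemble(&sections, 20));
}
```
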
| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| KCaverly | ecfece3ac4 | catchup with main | 2023-10-06 16:30:31 +02:00 |
| KCaverly | 0666fa80ac | moved status to icon with additional information in tooltip | 2023-10-05 16:49:25 +03:00 |
| KCaverly | ec1b4e6f85 | added initial working status in inline assistant prompt | 2023-10-05 13:01:11 +03:00 |
| Mikayla | 6007c8705c | Upgrade SeaORM to latest version, also upgrade sqlite bindings, rustqlite, and remove SeaQuery (co-authored-by: Max <max@zed.dev>) | 2023-10-03 12:16:53 -07:00 |
| KCaverly | 68c37ca2a4 | move embedding provider to ai crate | 2023-09-22 09:33:59 -04:00 |
| KCaverly | 48e151495f | introduce ai crate with completion providers | 2023-09-21 22:44:56 -04:00 |
| KCaverly | 5f6334696a | rename ai crate to assistant crate | 2023-09-21 21:54:59 -04:00 |
| Joseph T. Lyons | 5df9a57a8b | Add assistant events (#2978). Release Notes: N/A | 2023-09-15 15:25:35 -04:00 |
| Nate Butler | 24974ee2fa | Unify icons using multiple variants, remove all unused icons | 2023-09-15 12:50:49 -04:00 |
| Antonio Scandurra | 925da97599 | Don't dismiss inline assistant when an error occurs | 2023-09-15 12:32:37 +02:00 |
| Kirill Bulatov | 9f5314e938 | Unify highlights in *Map | 2023-09-14 22:08:12 +03:00 |
| Antonio Scandurra | 127d03516f | Diff lines one chunk at a time after discovering indentation | 2023-09-13 11:57:10 +02:00 |
| Antonio Scandurra | b8c437529c | Never use the indentation that comes from OpenAI | 2023-09-13 11:40:28 +02:00 |
| Antonio Scandurra | 6d9333dc3b | Add a failing test for codegen autoindent | 2023-09-11 14:35:15 +02:00 |
| Antonio Scandurra | 02078140c0 | Extract code generation logic into its own module | 2023-09-11 11:25:37 +02:00 |
| Antonio Scandurra | d868ec920f | Avoid duplicate entries in inline assistant's prompt history | 2023-09-01 09:15:29 +02:00 |
| Max Brunsfeld | eecd4e39cc | Propagate Cancel action if there is no pending inline assist | 2023-08-31 11:09:36 -07:00 |
| Antonio Scandurra | bf67d3710a | Remove trailing backticks when assistant ends with a trailing newline | 2023-08-30 12:08:14 +02:00 |
| Antonio Scandurra | 5f6562c214 | Detect indentation from GPT output | 2023-08-30 12:07:58 +02:00 |
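
Several commits in this stretch post-process model output before it reaches the buffer: trailing backticks are removed, and the indentation that comes back from OpenAI is measured but never trusted. Below is a small, hypothetical Rust sketch of that kind of clean-up; the function names and the fence/indentation handling are assumptions for illustration, not the implementation behind these commits.

```rust
// Hypothetical sketch only: strips a surrounding ``` fence and measures the
// output's own indentation so the editor can re-indent the text itself.

/// Remove a leading and trailing Markdown code fence, if present.
fn strip_code_fence(output: &str) -> &str {
    let trimmed = output.trim();
    let without_open = trimmed
        .strip_prefix("```")
        .map(|rest| rest.split_once('\n').map(|(_, body)| body).unwrap_or(""))
        .unwrap_or(trimmed);
    without_open.strip_suffix("```").unwrap_or(without_open).trim_end()
}

/// Smallest leading indentation across non-empty lines, so the caller can
/// subtract it and apply the buffer's own indentation instead.
fn detect_indentation(output: &str) -> usize {
    output
        .lines()
        .filter(|line| !line.trim().is_empty())
        .map(|line| line.len() - line.trim_start().len())
        .min()
        .unwrap_or(0)
}

fn main() {
    let raw = "```rust\n    fn add(a: i32, b: i32) -> i32 {\n        a + b\n    }\n```";
    let body = strip_code_fence(raw);
    let indent = detect_indentation(body);
    for line in body.lines() {
        // Re-base each line at column zero; the editor would then apply its
        // own indentation rules rather than the model's.
        println!("{}", line.get(indent..).unwrap_or(""));
    }
}
```
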
| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Antonio Scandurra | c6f4390511 | Retain search history for inline assistants. This only works in-memory for now. | 2023-08-30 11:30:51 +02:00 |
| Antonio Scandurra | 5c498c8610 | Show inline assistant errors | 2023-08-30 11:04:48 +02:00 |
| Antonio Scandurra | 87e25c8c23 | Use model from conversation when available | 2023-08-29 18:25:02 +02:00 |
| Antonio Scandurra | 16422a06ad | Remember whether include conversation was toggled | 2023-08-29 18:25:02 +02:00 |
| Antonio Scandurra | 72413dbaf2 | Remove the ability to reply to specific message in assistant | 2023-08-29 14:51:00 +02:00 |
| Antonio Scandurra | 2332f82442 | More polish | 2023-08-29 14:41:02 +02:00 |
| Antonio Scandurra | 08df24412a | Delete less aggressively | 2023-08-29 14:31:58 +02:00 |
| Antonio Scandurra | c2b60df5af | Allow including conversation when triggering inline assist | 2023-08-29 14:08:16 +02:00 |
| Antonio Scandurra | ccec59337a | 📝 | 2023-08-28 14:46:05 +02:00 |
| Antonio Scandurra | 52e1e014ad | Allow redoing edits performed by inline assistant after cancelling it | 2023-08-28 14:42:52 +02:00 |
| Antonio Scandurra | 8c4d2ccf80 | Close inline assist when the associated transaction is undone | 2023-08-28 14:23:42 +02:00 |
| Antonio Scandurra | 44f554f489 | Merge remote-tracking branch 'origin/main' into ai-refactoring | 2023-08-28 12:16:24 +02:00 |
| Antonio Scandurra | 1fb7ce0f4a | Show icon to toggle inline assist | 2023-08-28 12:13:44 +02:00 |
| Antonio Scandurra | d804afcfa9 | Don't auto-indent when the assistant starts responding with indentation | 2023-08-28 11:57:02 +02:00 |
| Piotr Osiewicz | 07b9c6c302 | language: Make Buffer::new take an explicit ID (#2900). See Linear description for the full explanation of the issue. This PR is mostly a mechanical change, except for the one case where we do pass in an explicit `next_id` instead of `model_id` in project.rs. Release Notes: Fixed a bug where some results were not reported in project search in presence of unnamed buffers. | 2023-08-28 11:51:50 +02:00 |