* put copilot fix and explain in status bar
* fix up actions
* watch for execution error within the viewmodel
* make observable publicly readonly
* remove unused service
* fix tests
* remove unused import
* Try LanguageModelToolResultItem
* Implement it
* lmTools API updates
Resolve TODOs
* Fix build
* Doc
* More content type -> mime type
* More edits
* Fixes
* Add LanguageModelChatToolMode
* Add implementation
* New thing
* note
* API version bump
* Finish it
* Updates
* Properly convert tool result content parts
* properly convert and send type to ext
* switch to new type that includes mimeType
* add new type
* use thenable
* change to proposed
* fix whitespace
* Move to new account preference concept
Previously, session preference was tracked on a per-"set of scopes" basis. This meant that an extension could ask for scopes A,B,C and get account 1... but then ask for scopes E,F and get account 2.
Thinking on this more, it really doesn't make sense. An extension should have a single preference holistically, since we also don't surface scopes to the user.
This PR:
* changes that model (while keeping the old model for migration purposes for now)
* allows the user to change that value via the gear icon in the Trusted Extensions quick pick
* hooks up the extension event for when the user changes that
* lastly introduces a product.json entry that allows an extension to be a "child" of another extension's preference. This will be useful for GitHub Copilot & GitHub Copilot Chat sharing the same preference.
Out of scope:
* Adding entries to the extension editor to get to this quick pick (it'll come later, but I wanted to get these changes in now)
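The "child" preference relationship described above is declared in product.json. A hypothetical sketch of what such an entry might look like (the key name `authenticationPreferenceParents` and its shape are illustrative assumptions, not the actual product.json schema):

```jsonc
{
  // Hypothetical: maps a "child" extension to the extension whose
  // account preference it should share. With this, GitHub Copilot Chat
  // would reuse whatever account the user picked for GitHub Copilot.
  "authenticationPreferenceParents": {
    "github.copilot-chat": "github.copilot"
  }
}
```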
* placeholder
- Have `countTokens` only take a string for now, since that's all that
tool elements in tsx should currently emit (not chat messages with
roles). String counting is a much easier interface to satisfy.
- Joyce pointed out that it's "invocation" not "invokation", for reasons
unfathomable to me. I will open an issue with the English language.
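To illustrate why a string-only contract is easier to satisfy, here is a minimal stand-in for such a `countTokens` function (this is an illustrative sketch, not the real vscode API; the whitespace heuristic is a deliberate placeholder for a real tokenizer):

```typescript
// A string-only token counter: callers never need to know about chat
// message roles, so any rough heuristic can satisfy the interface.
type CountTokens = (text: string) => Promise<number>;

const approxCountTokens: CountTokens = async (text) => {
  // Crude approximation: one token per whitespace-separated chunk.
  return text.split(/\s+/).filter(Boolean).length;
};
```

A real implementation would delegate to the model's tokenizer, but the caller-facing shape stays the same.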
* lm: a second rendition of returning data from LM tools
This is an alternative to #225454. It allows the tool caller to pass
through token budget and counting information to the tool, and the
tool can then 'do its thing.'
Most of the actual implementation is in prompt-tsx with a new method to
render elements into a JSON-serializable form, and then splice them back
into the tree by the consumer. The implementation can be found here:
https://github.com/microsoft/vscode-prompt-tsx/tree/connor4312/tools-api-v2
On the tool side, this looks like:
```ts
vscode.lm.registerTool('myTestTool', {
  async invoke(context, token): Promise<vscode.LanguageModelToolResult> {
    return {
      // context includes the token info:
      'mytype': await renderElementJSON(MyCustomPrompt, {}, context, token),
      toString() {
        return 'hello world!';
      },
    };
  },
});
```
I didn't make any nice wrappers yet, but the MVP consumer side looks like:
```tsx
export class TestPrompt extends PromptElement {
  async render(_state: void, sizing: PromptSizing) {
    const result = await vscode.lm.invokeTool('myTestTool', {
      parameters: {},
      tokenBudget: sizing.tokenBudget,
      countTokens: (v, token) => tokenizer.countTokens(v, token),
    }, new vscode.CancellationTokenSource().token);
    return (
      <>
        <elementJSON data={result.mytype} />
      </>
    );
  }
}
```
I like this approach better. It avoids bleeding knowledge of TSX into
the extension host and is comparatively simple.
* address comments
* address comments
Creating `New` versions of all Search APIs (e.g. `TextSearchProviderNew`) to support the new API shape.
Since this new shape requires a new internal query shape, this is quite a big change.
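The side-by-side migration pattern can be sketched as follows. This is a hypothetical illustration of `New`-suffixed interfaces living alongside the old ones while callers migrate; the field and method names here are assumptions, not the actual vscode.d.ts shapes:

```typescript
// Hypothetical "New" query and provider shapes, suffixed so the old
// interfaces can keep working during the migration.
interface TextSearchQueryNew {
  pattern: string;
  isRegExp?: boolean;
}

interface TextSearchProviderNew {
  provideTextSearchResults(query: TextSearchQueryNew): Promise<string[]>;
}

// A trivial in-memory provider implementing the new shape.
class InMemoryTextSearchProviderNew implements TextSearchProviderNew {
  constructor(private readonly files: Map<string, string>) {}

  provideTextSearchResults(query: TextSearchQueryNew): Promise<string[]> {
    const matches: string[] = [];
    for (const [uri, text] of this.files) {
      if (text.includes(query.pattern)) {
        matches.push(uri);
      }
    }
    return Promise.resolve(matches);
  }
}
```

Because the new interfaces are distinct types, both registration paths can coexist until the internal query shape fully switches over.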