Makes the `userSelectedTools` passed into the chat agent service
observable, so that the tool set of an ongoing request updates as new
tools come in. The benefit over previous approaches
is that it reflects the enablement/disablement state of the picker
(which is likely to get additional controls in the future) and works
without special tagging.
@roblourens / @DonJayamanne I think with this we can get rid of
0d6f5516d2/src/extension/tools/vscode-node/toolsService.ts (L118-L122)
which was added for Jupyter iirc.
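The observable pattern described above can be sketched in plain TypeScript (a minimal stand-in for VS Code's internal observable utilities; the names `Observable` and `userSelectedTools`'s shape here are illustrative, not the actual implementation):

```typescript
type Listener<T> = (value: T) => void;

// Minimal observable: holds a value and notifies subscribers on change.
class Observable<T> {
  private listeners: Listener<T>[] = [];
  constructor(private value: T) {}
  get(): T { return this.value; }
  set(next: T): void {
    this.value = next;
    for (const l of this.listeners) { l(next); }
  }
  onChange(listener: Listener<T>): void { this.listeners.push(listener); }
}

// The chat agent service holds the user-selected tools observably,
// so an in-flight request can react when the picker state changes.
const userSelectedTools = new Observable<Record<string, boolean>>({ search: true });

userSelectedTools.onChange(tools => {
  const enabled = Object.keys(tools).filter(k => tools[k]);
  console.log('ongoing request now sees:', enabled);
});

// Enabling a new tool mid-request updates the ongoing request's tool set
// without any special tagging of the tool itself.
userSelectedTools.set({ search: true, runNotebook: true });
```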
Closes#254684
First sketch for a simple API that lets extensions render content in chat using a webview
Right now this targets results from tool calls, but we could potentially extend it to work with a more generic version of our chat response image part
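As a first sketch, the tool-result part might look something like this (the interface name, mime value, and fields are assumptions for illustration, not the proposed API):

```typescript
// Hypothetical shape: a tool-result part that asks the chat view
// to render its content in a webview instead of as plain markdown.
interface ChatToolResultWebviewPart {
  readonly mime: 'application/vnd.code.webview'; // assumed marker mime type
  readonly html: string;                         // markup rendered inside the webview
  readonly allowScripts: boolean;                // whether script execution is enabled
}

function createWebviewPart(html: string, allowScripts = false): ChatToolResultWebviewPart {
  return { mime: 'application/vnd.code.webview', html, allowScripts };
}

// A tool would return this part alongside (or instead of) text parts.
const part = createWebviewPart('<h1>Tool output</h1>');
```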
* `issuer` -> `authorizationServer` refactor
Also:
* adds `authorizationServerGlobs` to the authentication contribution schema
* removes ugly MCP issuer hack and instead plumbs the authorizationServer down to the new auth providers
* Initial support of dynamic auth providers for MCP
With this change, we support creating an auth provider on the fly from an OAuth authorization server's metadata.
MCP leverages this: by following the MCP spec we can discover the OAuth server metadata and then request a token from that server.
The tokens are persisted and will be refreshed on demand... all based on the OAuth spec.
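The discovery step follows RFC 8414: the authorization server's metadata document lives at a well-known path derived from the issuer URL. A small sketch of that derivation (function name is illustrative):

```typescript
// Per RFC 8414, the metadata URI is formed by inserting
// "/.well-known/oauth-authorization-server" between the host and
// any path component of the issuer URL.
function metadataUrl(issuer: string): string {
  const u = new URL(issuer);
  const path = u.pathname === '/' ? '' : u.pathname;
  return `${u.origin}/.well-known/oauth-authorization-server${path}`;
}

// An MCP client fetches this document, reads token_endpoint and friends,
// and creates an auth provider on the fly from it; tokens are then
// requested and refreshed against that endpoint on demand.
console.log(metadataUrl('https://auth.example.com'));
```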
* Connor's feedback & some type fixes
* LanguageModelDataPart as arbitrary mimeType+data+factory functions
Like NotebookCellOutputItem
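A sketch of that shape (the class name follows the commit; the specific factory names and fields here are assumptions modeled on `NotebookCellOutputItem`, not the final API):

```typescript
// Arbitrary bytes tagged with a mime type, plus convenience factories,
// in the style of NotebookCellOutputItem.
class LanguageModelDataPart {
  constructor(readonly data: Uint8Array, readonly mimeType: string) {}

  static text(value: string, mimeType = 'text/plain'): LanguageModelDataPart {
    return new LanguageModelDataPart(new TextEncoder().encode(value), mimeType);
  }

  static json(value: unknown): LanguageModelDataPart {
    return LanguageModelDataPart.text(JSON.stringify(value), 'application/json');
  }
}

const part = LanguageModelDataPart.json({ thinking: true });
```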
* change languagemodel data part shape
* rm whitespace
---------
Co-authored-by: Your Name <justchen@microsoft.com>
* LanguageModel API updates to support prompt caching and thinking tokens
Add LanguageModelExtraDataPart which contains any arbitrary model-specific data
microsoft/vscode-copilot#15716
* Remove LanguageModelChatMessage3
* Bump API versions
- Replace SSE with HTTP in the API (although this still supports the
same SSE fallback we otherwise have, this is not documented)
- Add a new `resolveMcpServerDefinition` step that is only run before
the MCP server is started -- whether it starts from a cached definition
or from a running extension. This lets extensions do user interaction in a more correct way.
- Add appropriate docs and such.
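The two-step flow can be sketched in plain TypeScript (the provider and method names follow the commit; the definition shape and bodies are illustrative, not the real API types):

```typescript
interface McpServerDefinition {
  label: string;
  uri: string;
  headers?: Record<string, string>;
}

interface McpServerDefinitionProvider {
  // Cheap listing step: may be served from a cache, no prompts here.
  provideMcpServerDefinitions(): Promise<McpServerDefinition[]>;
  // Runs only right before the server starts, so user interaction
  // (auth prompts, trust dialogs) happens at the correct time.
  resolveMcpServerDefinition(def: McpServerDefinition): Promise<McpServerDefinition>;
}

const provider: McpServerDefinitionProvider = {
  async provideMcpServerDefinitions() {
    return [{ label: 'my-server', uri: 'https://mcp.example.com' }];
  },
  async resolveMcpServerDefinition(def) {
    const token = 'prompted-token'; // stand-in for asking the user
    return { ...def, headers: { Authorization: `Bearer ${token}` } };
  },
};
```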
* Pass model to tools
When tools make requests, they should use the right model. So we
- save the model id on the request
- look it up when the tool is invoked
- resolve it to a LanguageModelChat in the EH
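The three steps above can be sketched as follows (names and the lookup map are illustrative; the real code resolves through the extension host's model registry):

```typescript
interface LanguageModelChat {
  id: string;
  sendRequest(prompt: string): Promise<string>;
}

// Stand-in for the EH's registry of known chat models.
const knownModels = new Map<string, LanguageModelChat>();
knownModels.set('gpt-x', { id: 'gpt-x', async sendRequest(p) { return `echo:${p}`; } });

// Step 1: the model id is saved on the request and travels with it.
interface ToolInvocation {
  toolName: string;
  modelId: string;
}

// Steps 2+3: when the tool is invoked, look the id up and resolve it
// to a LanguageModelChat so the tool uses the same model as the request.
function resolveModel(invocation: ToolInvocation): LanguageModelChat | undefined {
  return knownModels.get(invocation.modelId);
}

const model = resolveModel({ toolName: 'search', modelId: 'gpt-x' });
```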
* Fix build
* first pass language model image
* add language model 2 instead
* more languagemodel2
* change api shape
* add back original languagemodelchatmessage
* fix imports
* allow old languageModelChatMessage to also be used
* handle vsbuffer case
* make sure to resize images before sending the chat request
* remove detail
* remove whitespace
* rename to languageModelDataPart
* Add proposed API for trusted domains and for web page extraction
I don't love the shape of this API but I'm going with this for now to play with it.
This will help Chat provide contents of web pages as context but also allow it full control of the tool.
ref https://github.com/microsoft/vscode/issues/243615
* use a different url
* delete the test until we have a good way to mock
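A minimal sketch of the trusted-domains gate in front of web page extraction (the API shape is explicitly unsettled per the commit, so everything here is a hypothetical stand-in):

```typescript
// Hypothetical trust list; the real feature reads from the
// trusted-domains service rather than a hardcoded set.
const trustedDomains = new Set(['github.com', 'learn.microsoft.com']);

function isTrusted(url: string): boolean {
  try {
    return trustedDomains.has(new URL(url).hostname);
  } catch {
    return false; // malformed URLs are never trusted
  }
}

// Chat-side extraction: only fetch pages on trusted domains, then hand
// the contents to the model as context.
async function extractWebPage(url: string): Promise<string> {
  if (!isTrusted(url)) {
    throw new Error(`Untrusted domain: ${url}`);
  }
  const res = await fetch(url);
  return res.text();
}
```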
* add `ChatRequest#tools`
* make Select Tools work for all tools, not just MCP tools
* make the Select Tools picker work with all tools, grouped by MCP server, extension, and the rest
* fill in the new ChatRequest#selectTools API with selected tools
* have the MCP server command be its own command with its own custom rendering
* remove old comment
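From a chat participant's point of view, the new field might be consumed like this (a sketch only: the exact type of `ChatRequest#tools` is assumed here to be a name-to-enabled map, which may not match the final proposal):

```typescript
// Assumed shape: the request carries the picker's selection as
// tool name -> enabled, reflecting the Select Tools state.
interface ChatRequestLike {
  prompt: string;
  tools: Map<string, boolean>;
}

function enabledTools(request: ChatRequestLike): string[] {
  return [...request.tools]
    .filter(([, enabled]) => enabled)
    .map(([name]) => name);
}

const request: ChatRequestLike = {
  prompt: 'fix the build',
  tools: new Map([['search', true], ['runTests', false]]),
};
console.log(enabledTools(request));
```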