LanguageModel API updates to support prompt caching and thinking tokens (#246981)

* LanguageModel API updates to support prompt caching and thinking tokens
Add LanguageModelExtraDataPart, which can carry arbitrary model-specific data
microsoft/vscode-copilot#15716

* Remove LanguageModelChatMessage3

* Bump API versions
Rob Lourens
2025-04-19 18:21:22 -07:00
committed by GitHub
parent b9f3ba67a4
commit dd6b05c349
8 changed files with 78 additions and 19 deletions


@@ -1814,6 +1814,7 @@ export function createApiFactoryAndRegisterActors(accessor: ServicesAccessor): I
 	LanguageModelError: extHostTypes.LanguageModelError,
 	LanguageModelToolResult: extHostTypes.LanguageModelToolResult,
 	LanguageModelDataPart: extHostTypes.LanguageModelDataPart,
+	LanguageModelExtraDataPart: extHostTypes.LanguageModelExtraDataPart,
 	ChatImageMimeType: extHostTypes.ChatImageMimeType,
 	ExtendedLanguageModelToolResult: extHostTypes.ExtendedLanguageModelToolResult,
 	PreparedTerminalToolInvocation: extHostTypes.PreparedTerminalToolInvocation,