Closes https://github.com/microsoft/vscode/issues/168492
This implements @aeschli's 'server server' concept in a new
`code serve-web` command.
Command line args are similar to the standalone web server. The first
time a user hits that page, the latest version of the VS Code web server
will be downloaded and run. Thanks to Martin's previous PRs, all
resources the page requests are prefixed with `/<quality>-<commit>`.
The latest release version is cached, but when the page is loaded again
and there's a new release, the new server version will be downloaded
and started up.
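Roughly, the routing looks like this (a minimal sketch in Rust; the function and example path are illustrative, not the actual implementation):
```rust
/// Extracts the (quality, commit) pair from a path such as
/// "/<quality>-<commit>/...". Returns None if the prefix is missing.
fn parse_release_prefix(path: &str) -> Option<(&str, &str)> {
    let first_segment = path.strip_prefix('/')?.split('/').next()?;
    let (quality, commit) = first_segment.split_once('-')?;
    if quality.is_empty() || commit.is_empty() {
        return None;
    }
    Some((quality, commit))
}

fn main() {
    // e.g. this request would be routed to the cached "insider" build at that commit
    assert_eq!(
        parse_release_prefix("/insider-0f1a2b3c/manifest.json"),
        Some(("insider", "0f1a2b3c"))
    );
}
```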
Behind the scenes the servers all listen on named pipes/sockets and the
CLI acts as a proxy server to those sockets. Servers without connections
for an hour will be shut down automatically.
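A rough sketch of that shape, assuming tokio (the addresses, socket path, and the collapsing of proxy plus idle shutdown into one process are simplifications, not the actual CLI code):
```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Duration;
use tokio::net::{TcpListener, UnixStream};
use tokio::{io, time};

#[tokio::main]
async fn main() -> io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8000").await?;
    let active = Arc::new(AtomicUsize::new(0));

    loop {
        // Accept the next connection, checking every hour whether we are idle.
        let (mut client, _) =
            match time::timeout(Duration::from_secs(3600), listener.accept()).await {
                Ok(accepted) => accepted?,
                // No new connections for an hour and none in flight: shut down.
                Err(_) if active.load(Ordering::SeqCst) == 0 => return Ok(()),
                Err(_) => continue,
            };

        let active = active.clone();
        tokio::spawn(async move {
            active.fetch_add(1, Ordering::SeqCst);
            // Illustrative socket path; named pipes would be used on Windows.
            if let Ok(mut upstream) = UnixStream::connect("/tmp/vscode-server.sock").await {
                let _ = io::copy_bidirectional(&mut client, &mut upstream).await;
            }
            active.fetch_sub(1, Ordering::SeqCst);
        });
    }
}
```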
This reuses a lot of the logic we use for the normal VS Code Server
tunnel to do port forwarding. However, it uses a _different_ tunnel than the
one Remote Tunnels would otherwise use for the control server. The reason is
that ports exist on a tunnel instance, and if we reused the same tunnel, a
client would expect every CLI host on it to serve all tunnels, while the
port-forwarding instance would not provide the VS Code server. It also reuses
the singleton logic, so all ports on a machine are handled by a single CLI
instance, for the same reason: we can't have different instances hosting
subsets of ports on a single tunnel.
Currently all ports use the default privacy; support for public/private
tunnels will land either later today or in the next iteration.
* cli: allow exec server to listen on a port and require token authentication
For remote SSH on Windows, where pipe forwarding doesn't work.
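A minimal sketch of the idea, assuming tokio (the newline-delimited token framing and the RPC handoff are illustrative, not the real wire format):
```rust
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::net::TcpListener;

/// Accepts connections and requires the shared token as the first line
/// before handing the stream to the exec-server RPC handler.
async fn serve(expected_token: String) -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0").await?; // port is illustrative
    loop {
        let (stream, _) = listener.accept().await?;
        let token = expected_token.clone();
        tokio::spawn(async move {
            let mut reader = BufReader::new(stream);
            let mut line = String::new();
            if reader.read_line(&mut line).await.is_ok() && line.trim() == token {
                // ...hand reader.into_inner() off to the RPC handler here...
            } else {
                // Wrong or missing token: drop the connection.
                let _ = reader.into_inner().shutdown().await;
            }
        });
    }
}
```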
* fix linux build
Adds a 5s timeout to keychain access on Linux. We had an issue about this a long time ago, but I never repro'd it until today and can't find the original...
If this timeout is hit, it'll fall back to the file-based keychain.
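The shape of the fallback, as a sketch (helper names are placeholders, not the real keychain code):
```rust
use std::time::Duration;
use tokio::time::timeout;

/// Reads a secret, giving the OS keychain 5 seconds before falling back
/// to the file-based keychain.
async fn read_secret() -> std::io::Result<String> {
    match timeout(Duration::from_secs(5), read_from_os_keychain()).await {
        Ok(result) => result,                      // keychain answered in time
        Err(_) => read_from_file_keychain().await, // hung for 5s: fall back
    }
}

async fn read_from_os_keychain() -> std::io::Result<String> {
    // Placeholder for the platform keychain call (e.g. libsecret on Linux).
    Ok("secret-from-keychain".to_string())
}

async fn read_from_file_keychain() -> std::io::Result<String> {
    // Placeholder for the encrypted file-based store.
    Ok("secret-from-file".to_string())
}
```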
* cli: ensure ordering of rpc server messages
Sending lots of messages to a stream would block them on the async tokio
mutex, which is "fair" and so doesn't preserve send ordering. Instead, use the
write_loop approach I introduced in the server_multiplexer for the same reason
some time ago.
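The write-loop pattern, roughly (a sketch assuming tokio, not the actual `write_loop` code):
```rust
use tokio::io::{AsyncWrite, AsyncWriteExt};
use tokio::sync::mpsc;

/// Spawns a single writer task; callers enqueue messages instead of locking
/// the stream, so the stream sees them in exactly the order they were sent.
fn start_write_loop<W>(mut writer: W) -> mpsc::UnboundedSender<Vec<u8>>
where
    W: AsyncWrite + Unpin + Send + 'static,
{
    let (tx, mut rx) = mpsc::unbounded_channel::<Vec<u8>>();
    tokio::spawn(async move {
        while let Some(message) = rx.recv().await {
            if writer.write_all(&message).await.is_err() {
                break; // the peer went away; drop remaining messages
            }
        }
    });
    tx
}
```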
* fix clippy
* signing: implement signing service on the web
* wip
* cli: implement stdio service
This is used to implement the exec server for WSL. Guarded behind a signed handshake.
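A minimal sketch of a stdio-based service, assuming tokio plus serde_json and newline-delimited JSON (the framing and the omitted handshake are assumptions; the real service's wire format may differ):
```rust
use serde::{Deserialize, Serialize};
use tokio::io::{self, AsyncBufReadExt, AsyncWriteExt, BufReader};

#[derive(Deserialize)]
struct Request {
    id: u32,
    method: String,
}

#[derive(Serialize)]
struct Response {
    id: u32,
    result: String,
}

#[tokio::main]
async fn main() -> io::Result<()> {
    // Newline-delimited JSON over stdin/stdout; the signed handshake that
    // guards the real service is omitted here.
    let mut lines = BufReader::new(io::stdin()).lines();
    let mut stdout = io::stdout();
    while let Some(line) = lines.next_line().await? {
        if let Ok(request) = serde_json::from_str::<Request>(&line) {
            let response = Response {
                id: request.id,
                result: format!("handled {}", request.method),
            };
            stdout.write_all(serde_json::to_string(&response)?.as_bytes()).await?;
            stdout.write_all(b"\n").await?;
        }
    }
    Ok(())
}
```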
* update distro
* rm debug
* address pr comments
* cli: add acquire_cli
As given in my draft document, this pipes a CLI for the given platform to the
specified process, for example:
```js
const cmd = await rpc.call('acquire_cli', {
  command: 'node',
  args: [
    '-e',
    'process.stdin.pipe(fs.createWriteStream("c:/users/conno/downloads/hello-cli"))',
  ],
  platform: Platform.LinuxX64,
  quality: 'insider',
});
```
It genericizes caching so that the CLI is also cached on the host, just
like servers.
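The caching idea, sketched (names are illustrative, not the real cache types):
```rust
use std::fs;
use std::io;
use std::path::PathBuf;

/// A download cache keyed by quality + commit; servers and the CLI both go
/// through the same path, only the download step differs.
struct DownloadCache {
    root: PathBuf,
}

impl DownloadCache {
    fn get_or_download(
        &self,
        quality: &str,
        commit: &str,
        download: impl FnOnce(&PathBuf) -> io::Result<()>,
    ) -> io::Result<PathBuf> {
        let target = self.root.join(format!("{}-{}", quality, commit));
        if !target.exists() {
            fs::create_dir_all(&target)?;
            download(&target)?; // fetch the server or CLI build into the cache
        }
        Ok(target)
    }
}
```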
* fix bug
Other connected clients will now print:
```
Connected to an existing tunnel process running on this machine. You can press:
- Ctrl+C to detach
- "x" to stop the tunnel and exit
- "r" to restart the tunnel
```
These keypresses are then sent to the server to take effect. This is mostly
some refactoring in the singleton_server to make the lifecycle work.
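Conceptually, the key handling looks something like this (a sketch; names are illustrative):
```rust
/// Commands an attached client forwards to the singleton server; Ctrl+C is
/// handled locally, since it only detaches the client.
#[derive(Debug, PartialEq)]
enum ForwardedCommand {
    Stop,
    Restart,
}

fn command_for_key(key: char) -> Option<ForwardedCommand> {
    match key {
        'x' => Some(ForwardedCommand::Stop),
        'r' => Some(ForwardedCommand::Restart),
        _ => None, // including Ctrl+C, which only detaches this client
    }
}

fn main() {
    assert_eq!(command_for_key('r'), Some(ForwardedCommand::Restart));
}
```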
I want to use the RPC interface for communication via stdin/stdout in WSL,
but currently RPC is tightly coupled to the control server. The control
server also speaks msgpack instead of JSON, since it deals with binary
messages. WSL won't, and we'll want to use JSON to interact with VS
Code, so some separation is needed.
This pulls out a base set of RPC types for use in both scenarios.
Currently these are only 'helper' structs that don't actually do any
I/O, but once I figure out the model I would like to have a cleaner, unified
way to do I/O as well.
For the control server, we previously handled all methods in one big `match`
block with nasty macros, whereas now there are nicer
`register_sync`/`register_async` functions.
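The register-style dispatch, sketched with illustrative types (the real control server speaks msgpack rather than JSON, and the async variant would store boxed futures):
```rust
use std::collections::HashMap;

type Params = serde_json::Value;
type Handler = Box<dyn Fn(Params) -> serde_json::Value + Send + Sync>;

/// Handlers are registered by method name rather than enumerated in one big
/// match block.
#[derive(Default)]
struct MethodRegistry {
    methods: HashMap<&'static str, Handler>,
}

impl MethodRegistry {
    fn register_sync<F>(&mut self, name: &'static str, handler: F)
    where
        F: Fn(Params) -> serde_json::Value + Send + Sync + 'static,
    {
        self.methods.insert(name, Box::new(handler));
    }

    fn dispatch(&self, name: &str, params: Params) -> Option<serde_json::Value> {
        self.methods.get(name).map(|handler| handler(params))
    }
}
```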
Some additional small refactors were needed to preserve the strict
ordering of server messages, since they need to be in order or else we get
decompression errors. This is the `start_bridge_write_loop`. As a small
benefit, this means we can avoid the relatively expensive async Tokio
mutex that we were using, and instead use the standard library mutex.
Fixes https://github.com/microsoft/vscode-cli/issues/563
If a new tunnel `--name` was provided but the existing tunnel was deleted,
the CLI would error out with a "lookup failed" message. Instead, it now
recreates the tunnel as it normally would.
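The new behaviour, sketched with placeholder helpers:
```rust
struct Tunnel {
    name: String,
}

async fn lookup_tunnel(_name: &str) -> std::io::Result<Option<Tunnel>> {
    Ok(None) // placeholder: query the tunnel service for an existing tunnel
}

async fn create_tunnel(name: &str) -> std::io::Result<Tunnel> {
    Ok(Tunnel { name: name.to_string() }) // placeholder: register a new tunnel
}

/// A missing named tunnel is now recreated instead of surfacing a lookup error.
async fn get_or_recreate_tunnel(name: &str) -> std::io::Result<Tunnel> {
    match lookup_tunnel(name).await? {
        Some(existing) => Ok(existing),
        None => create_tunnel(name).await,
    }
}
```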
Adds support for running the tunnel as a service on macOS via launchd.
It also hooks up observability (`code tunnel service log`) on macOS and Linux.
On macOS (and later Windows, hence the manual implementation of `tail`) it
saves output to a log file and watches it. On Linux, it simply delegates to
`journalctl`.
The "tailing" is implemented via simple polling of the file size. I didn't want
to pull in a giant set of dependencies for inotify/kqueue/etc just for this use
case; performance when polling a single log file is not a huge concern.
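The polling loop, roughly (a sketch assuming tokio; the interval is illustrative):
```rust
use std::io::SeekFrom;
use tokio::fs::File;
use tokio::io::{AsyncReadExt, AsyncSeekExt};
use tokio::time::{sleep, Duration};

/// Polls the log file's size and prints whatever was appended since the
/// previous read; truncation or rotation resets the offset.
async fn tail(path: &str) -> std::io::Result<()> {
    let mut offset = 0u64;
    loop {
        let mut file = File::open(path).await?;
        let len = file.metadata().await?.len();
        if len < offset {
            offset = 0; // the file shrank: it was truncated or rotated
        }
        if len > offset {
            file.seek(SeekFrom::Start(offset)).await?;
            let mut appended = Vec::new();
            file.read_to_end(&mut appended).await?;
            offset += appended.len() as u64;
            print!("{}", String::from_utf8_lossy(&appended));
        }
        sleep(Duration::from_millis(500)).await;
    }
}
```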
Implements an automatic local download fallback, similar to SSH
(cc @roblourens). If the initial download results in an error, either
in making the request or a 5xx, it'll try to fall back to making the
request locally and streaming it over the tunnel.
This abstracts the request client behind a "SimpleHttp" trait, which either
uses the native reqwest client or the 'delegated' mode over the socket.
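A sketch of that fallback shape (the trait signature and types here are illustrative, not the real `SimpleHttp` definition):
```rust
struct Response {
    status_code: u16,
    body: Vec<u8>,
}

/// Stand-in for the real trait: one impl wraps reqwest, the other delegates
/// the request over the tunnel socket.
trait SimpleHttp {
    async fn get(&self, url: &str) -> std::io::Result<Response>;
}

/// Try the native client; on a transport error or a 5xx, retry through the
/// delegated client so the bytes are fetched locally and streamed over the tunnel.
async fn download(
    native: &impl SimpleHttp,
    delegated: &impl SimpleHttp,
    url: &str,
) -> std::io::Result<Response> {
    match native.get(url).await {
        Ok(response) if response.status_code < 500 => Ok(response),
        _ => delegated.get(url).await,
    }
}
```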