Compare commits

...

23 Commits
v0.0.2 ... main

Author SHA1 Message Date
8c3e6a2845 Split Daemon and Network logic into separate packages
In a world where the daemon can manage more than one network, the Daemon
is really responsible only for knowing which networks are currently
joined, creating/joining/leaving networks, and routing incoming RPC
requests to the correct network handler as needed.

The new network package, with its Network interface, inherits most of
the logic that Daemon used to have, leaving Daemon only the parts needed
for the functionality just described. There's a lot of cleanup done here to
really nail down the separation of concerns between the two, especially around
directory creation (see the sketch just below this commit list).
2024-09-09 16:34:00 +02:00
86b2ba7bfa Factor daemon.Children into its own package 2024-09-07 15:46:59 +02:00
a840d0e701 Move common daemon types and values into daecommon 2024-09-07 15:11:04 +02:00
ef86c1bbd1 Make Daemon into a concrete type which implements RPC directly 2024-09-07 14:05:07 +02:00
fed79c6ec7 Update documentation on jsonrpc2.NewDispatchHandler 2024-09-05 19:36:21 +02:00
8d3b17e1cb Remove extraneous empty struct returns from RPC interface 2024-09-05 17:28:10 +02:00
038a28bb02 Remove remaining extraneous 'Result' RPC-related struct types 2024-09-04 22:46:38 +02:00
06a345ecd1 Embed context directly into subCmdCtx 2024-09-04 22:35:29 +02:00
6c185f6263 Allow variadic number of parameters on RPC calls 2024-09-04 22:25:38 +02:00
53ad8a91b4 Generate RPC client wrapper 2024-09-04 21:24:45 +02:00
5138ed7c6a Attempt to delete socket file before listening on the path 2024-09-04 19:44:58 +02:00
4f6a89ced0 Roadmap has been moved to micropelago.net 2024-09-01 12:20:37 +02:00
39e12f6ebd Disallow -h and --help as flags in sub-commands 2024-07-22 16:37:22 +02:00
d31be8455b Pluralize 'host(s)' subcommands 2024-07-22 15:52:51 +02:00
ca62a37692 Fix rendering of text flag defaults 2024-07-22 10:42:25 +02:00
af69f1cfba Fix panic when starting up daemon with existing bootstrap 2024-07-21 17:20:48 +02:00
1ea16d80e4 Require host in garage for nebula create-cert command 2024-07-21 17:12:35 +02:00
ee30199c4c Automatically choose IP for new hosts 2024-07-21 17:10:28 +02:00
1411370b0e Write new host to garage as part of CreateHost 2024-07-20 12:36:21 +02:00
c94f8e3475 Move creation of children into daemon initialize method 2024-07-20 11:14:59 +02:00
7aa11ebe29 Only restart sub-processes which need restarting on bootstrap changes 2024-07-20 10:42:26 +02:00
bc9a2b62ef Upgrade pmux to latest 2024-07-19 17:06:12 +02:00
e657061482 Set permission bits on unix socket, so it's group read/writable 2024-07-16 17:30:36 +02:00
70 changed files with 3212 additions and 2306 deletions
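The first commit above describes the new Daemon/Network split. A purely illustrative sketch of that shape, with hypothetical method names and a plain string standing in for the real network ID type (these are not the repository's actual definitions):

```
package main

import (
	"context"
	"fmt"
)

// Network is a hypothetical stand-in for the interface described in the
// commit message: each joined network gets its own handler, which inherits
// the per-network logic that Daemon used to hold.
type Network interface {
	GetHosts(ctx context.Context) ([]string, error) // illustrative method
}

// Daemon only tracks which networks are currently joined and routes each
// incoming RPC request to the correct Network handler.
type Daemon struct {
	networks map[string]Network // keyed by network ID (assumption)
}

func (d *Daemon) GetHosts(
	ctx context.Context, networkID string,
) ([]string, error) {
	n, ok := d.networks[networkID]
	if !ok {
		return nil, fmt.Errorf("network %q not joined", networkID)
	}
	return n.GetHosts(ctx)
}
```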

View File

@ -105,7 +105,4 @@ Documentation for devs:
Besides documentation, there are a few other pages which might be useful:
* [Roadmap][roadmap]
* [Glossary](docs/glossary.md)
[roadmap]: docs/roadmap.md

View File

@ -69,7 +69,7 @@ in rec {
'';
};
vendorHash = "sha256-JhcoMLVc6WHRiO3nNDyvGoxZRTtYf7e65nRMaK/EODo=";
vendorHash = "sha256-N80aMMR67DjyEtz9/Yat1bft92nj7JuuF4i2xC9iJ6s=";
subPackages = [
"./cmd/entrypoint"

View File

@ -15,22 +15,13 @@ conform to the following rules:
* It should end with a letter or number.
## Step 2: Choose IP
The admin should choose an IP for the host. The IP chosen for the new host
should be one which is not yet used by any other host and which falls within a
subnet that was configured when the network was created.
## Step 3: Create a `bootstrap.json` File
## Step 2: Create a `bootstrap.json` File
To create a `bootstrap.json` file for the new host, the admin should run the
following command from their own host:
```
isle hosts create \
--hostname <name> \
--ip <ip> \
> bootstrap.json
isle hosts create --hostname <name> >bootstrap.json
```
The resulting `bootstrap.json` file should be treated as a secret file and

View File

@ -1,128 +0,0 @@
# Roadmap
The following are rough outlines of upcoming work, listed roughly in the order
it will be implemented.
## Main quest
These items are listed more or less in the order they need to be completed, as
they generally depend on the items previous to them.
### Windows Support + GUI
Support for Windows is a must. This requirement also includes a simple GUI,
which would essentially act as a thin layer on top of `daemon.yml` to start
with.
Depending on difficulty level, OSX support might be added at this stage as well.
### NATS
Garage is currently used to handle eventually-consistent persistent storage, but
there is no mechanism for inter-host realtime communication as of yet. NATS
would be a good candidate for this, as it uses a gossip protocol which (I
believe) does not require a central coordinator, and is well supported.
### Integration of [Caddy](https://caddyserver.com/docs/)
Integrating Caddy will require some plugins to be developed. We want Caddy
to be able to store cert information in S3 (garage), so that all isle lighthouse
nodes can potentially become gateways as well. Once done, it would be possible
for lighthouses to forward public traffic to inner nodes.
It should also be possible for users within the network to make use of the
lighthouse Caddy instances to host their websites (and eventually gemini
capsules) for them.
Most likely this integration will require NATS as well, to coordinate cache
invalidation and cert refreshing.
### Invitation code bootstrapping
Once an HTTP gateway/load-balancer is set up it should be possible to do host
bootstrapping using invite codes rather than manually giving new users bootstrap
files. The bootstrap file would be stored, encrypted, in garage, with the invite
code being able to both identify and decrypt it. To instantiate a host, the user
only needs to input the network domain name and the invite code.
### FUSE Mount
KBFS style. Every user should be able to mount virtual directories to their host
which correspond to various buckets in garage.
- "public": editable amongst all users on the host, shared publicly via HTTP
gateway.
- "protected": editable amongst all users on the host, but not accessible
outside the network.
- "private": only accessible to a particular user (client-side encrypted).
Whether it's necessary to support directories which are shared only between
specific users remains to be seen. The identification of a single "user" between
different hosts is also an unsolved problem.
## Side quests
These items aren't necessarily required by the main quest, and aren't dependent
on any other items being completed. They are nice-to-haves that we do want to
eventually complete, but aren't the main focus.
### Design System
It would be great to get some help from a designer or otherwise
artistically-minded person to create some kind of design framework which could
be used across publicly-facing frontends. Such a system would provide a simple
but cohesive vision for how things should look, including:
- Color schemes
- Fonts and text decoration in different situations
- Some simple, reusable layout templates (splash page, documentation, form)
- Basic components like tables, lists, media, etc.
### DHCP
Currently all hosts require a static IP to be reserved by the admin. Nebula may
support DHCP already, but if it doesn't we should look into how this could be
accomplished. Depending on how reliable DNS support is, it may be possible to use
DHCP for all non-lighthouse hosts, which would be excellent.
### IPv6 network ranges
It should theoretically be possible for the internal network IP range to be on
IPv6 rather than IPv4. This may be a simple matter of just testing it to confirm
it works.
### Proper Linux Packages
Rather than distributing raw binaries for Linux we should instead be
distributing actual packages.
* deb files for debian/ubuntu
* PKGBUILD for arch (done)
* rpm for fedora?
* flatpak?
This will allow for properly setting capabilities for the binary at install
time, so that it can be run as non-root, and installing any necessary `.desktop`
files so that it can be run as a GUI application.
### Mobile app
To start with, a simple mobile app which provides connectivity to the network
would be great. We are not able to use the existing nebula mobile app because it
is not actually open-source, but we can at least use it as a reference to see
how this can be accomplished.
### DNS/Firewall Configuration
Ideally Isle could detect the DNS/firewall subsystems being used on a per-OS
basis and configure them as needed. This would simplify the necessary
documentation and setup steps for operators.
### Plugins
It would not be difficult to spec out a plugin system using nix commands.
Existing components could be rigged to use this plugin system, and we could then
use the system to add future components which might prove useful. Once the
project is public such a system would, I think, be much appreciated, as it
would let other groups rig their binaries with all sorts of new functionality.

View File

@ -7,9 +7,13 @@ import (
"encoding/json"
"fmt"
"isle/nebula"
"isle/toolkit"
"maps"
"net/netip"
"path/filepath"
"sort"
"dev.mediocregopher.com/mediocre-go-lib.git/mctx"
)
// StateDirPath returns the path within the user's state directory where the
@ -32,6 +36,22 @@ type CreationParams struct {
Domain string
}
// NewCreationParams instantiates and returns a CreationParams.
func NewCreationParams(name, domain string) CreationParams {
return CreationParams{
ID: toolkit.RandStr(32),
Name: name,
Domain: domain,
}
}
// Annotate implements the mctx.Annotator interface.
func (p CreationParams) Annotate(aa mctx.Annotations) {
aa["networkID"] = p.ID
aa["networkName"] = p.Name
aa["networkDomain"] = p.Domain
}
// Bootstrap contains all information which is needed by a host daemon to join a
// network on boot.
type Bootstrap struct {
@ -45,11 +65,15 @@ type Bootstrap struct {
Hosts map[nebula.HostName]Host
}
// New initializes and returns a new Bootstrap file for a new host. This
// function assigns Hosts an empty map.
// New initializes and returns a new Bootstrap file for a new host.
//
// TODO in the resulting bootstrap only include this host and hosts which are
// necessary for connecting to nebula/garage. Remember to immediately re-poll
// garage for the full hosts list during network joining.
func New(
caCreds nebula.CACredentials,
adminCreationParams CreationParams,
existingHosts map[nebula.HostName]Host,
name nebula.HostName,
ip netip.Addr,
) (
@ -72,13 +96,18 @@ func New(
return Bootstrap{}, fmt.Errorf("signing assigned fields: %w", err)
}
existingHosts = maps.Clone(existingHosts)
existingHosts[name] = Host{
HostAssigned: assigned,
}
return Bootstrap{
NetworkCreationParams: adminCreationParams,
CAPublicCredentials: caCreds.Public,
PrivateCredentials: hostPrivCreds,
HostAssigned: assigned,
SignedHostAssigned: signedAssigned,
Hosts: map[nebula.HostName]Host{},
Hosts: existingHosts,
}, nil
}
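A hedged usage sketch of the two additions in this hunk, NewCreationParams and the new existingHosts parameter to New. The helper name, the literal name/domain/hostname values, and the assumption that caCreds and ip are prepared elsewhere are all illustrative:

```
package main

import (
	"net/netip"

	"isle/bootstrap"
	"isle/nebula"
)

// newNetworkBootstrap is a hypothetical helper tying the hunk together.
func newNetworkBootstrap(
	caCreds nebula.CACredentials, ip netip.Addr,
) (bootstrap.Bootstrap, error) {
	// NewCreationParams generates a random network ID alongside the
	// admin-chosen name and domain.
	params := bootstrap.NewCreationParams("my-net", "internal.example.com")

	// New now clones the given hosts map and adds the new host to it, rather
	// than always assigning an empty Hosts map. For the first host of a brand
	// new network, pass an empty (non-nil) map.
	return bootstrap.New(
		caCreds, params, map[nebula.HostName]bootstrap.Host{}, "first-host", ip,
	)
}
```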

View File

@ -2,14 +2,13 @@ package main
import (
"fmt"
"isle/daemon"
"isle/bootstrap"
)
func (ctx subCmdCtx) getHosts() (daemon.GetHostsResult, error) {
var res daemon.GetHostsResult
err := ctx.daemonRCPClient.Call(ctx.ctx, &res, "GetHosts", nil)
func (ctx subCmdCtx) getHosts() ([]bootstrap.Host, error) {
res, err := ctx.daemonRPC.GetHosts(ctx)
if err != nil {
return daemon.GetHostsResult{}, fmt.Errorf("calling GetHosts: %w", err)
return nil, fmt.Errorf("calling GetHosts: %w", err)
}
return res, nil
}

View File

@ -6,16 +6,21 @@ import (
"os"
"isle/daemon"
"isle/daemon/daecommon"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
)
// TODO it would be good to have an `isle daemon config-check` kind of command,
// which could be run prior to a systemd service restart to make sure we don't
// restart the service into a configuration that will definitely fail.
var subCmdDaemon = subCmd{
name: "daemon",
descr: "Runs the isle daemon (Default if no sub-command given)",
do: func(subCmdCtx subCmdCtx) error {
do: func(ctx subCmdCtx) error {
flags := subCmdCtx.flagSet(false)
flags := ctx.flagSet(false)
daemonConfigPath := flags.StringP(
"config-path", "c", "",
@ -32,14 +37,12 @@ var subCmdDaemon = subCmd{
`Maximum log level which should be output. Values can be "debug", "info", "warn", "error", "fatal". Does not apply to sub-processes`,
)
if err := flags.Parse(subCmdCtx.args); err != nil {
if err := flags.Parse(ctx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
ctx := subCmdCtx.ctx
if *dumpConfig {
return daemon.CopyDefaultConfig(os.Stdout, envAppDirPath)
return daecommon.CopyDefaultConfig(os.Stdout, envAppDirPath)
}
logLevel := mlog.LevelFromString(*logLevelStr)
@ -47,15 +50,19 @@ var subCmdDaemon = subCmd{
return fmt.Errorf("couldn't parse log level %q", *logLevelStr)
}
logger := subCmdCtx.logger.WithMaxLevel(logLevel.Int())
logger := ctx.logger.WithMaxLevel(logLevel.Int())
daemonConfig, err := daemon.LoadConfig(envAppDirPath, *daemonConfigPath)
// TODO check that daemon is either running as root, or that the
// required linux capabilities are set.
// TODO check that the tun module is loaded (for nebula).
daemonConfig, err := daecommon.LoadConfig(envAppDirPath, *daemonConfigPath)
if err != nil {
return fmt.Errorf("loading daemon config: %w", err)
}
daemonInst, err := daemon.NewDaemon(
logger, daemonConfig, envBinDirPath, nil,
daemonInst, err := daemon.New(
ctx, logger, daemonConfig, envBinDirPath, nil,
)
if err != nil {
return fmt.Errorf("starting daemon: %w", err)
@ -71,7 +78,7 @@ var subCmdDaemon = subCmd{
{
logger := logger.WithNamespace("http")
httpSrv, err := newHTTPServer(
ctx, logger, daemon.NewRPC(daemonInst),
ctx, logger, daemonInst,
)
if err != nil {
return fmt.Errorf("starting HTTP server: %w", err)

View File

@ -4,10 +4,11 @@ import (
"context"
"errors"
"fmt"
"io/fs"
"isle/daemon"
"isle/daemon/jsonrpc2"
"net"
"net/http"
"os"
"dev.mediocregopher.com/mediocre-go-lib.git/mctx"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
@ -16,30 +17,41 @@ import (
const daemonHTTPRPCPath = "/rpc/v0.json"
func newHTTPServer(
ctx context.Context, logger *mlog.Logger, rpc *daemon.RPC,
ctx context.Context, logger *mlog.Logger, daemonInst *daemon.Daemon,
) (
*http.Server, error,
) {
socketPath := daemon.HTTPSocketPath()
l, err := net.Listen("unix", socketPath)
if err != nil {
ctx = mctx.Annotate(ctx, "socketPath", socketPath)
if err := os.Remove(socketPath); errors.Is(err, fs.ErrNotExist) {
// No problem
} else if err != nil {
return nil, fmt.Errorf(
"failed to listen on socket %q: %w", socketPath, err,
"removing %q prior to listening: %w", socketPath, err,
)
} else {
logger.WarnString(
ctx, "Deleted existing socket file prior to listening, it's possible a previous daemon failed to shutdown gracefully",
)
}
l, err := net.Listen("unix", socketPath)
if err != nil {
return nil, fmt.Errorf(
"listening on socket %q: %w", socketPath, err,
)
}
if err := os.Chmod(socketPath, 0660); err != nil {
return nil, fmt.Errorf(
"setting permissions of %q to 0660: %w", socketPath, err,
)
}
ctx = mctx.Annotate(ctx, "httpAddr", l.Addr().String())
logger.Info(ctx, "HTTP server socket created")
rpcHandler := jsonrpc2.Chain(
jsonrpc2.NewMLogMiddleware(logger.WithNamespace("rpc")),
jsonrpc2.ExposeServerSideErrorsMiddleware,
)(
jsonrpc2.NewDispatchHandler(&rpc),
)
httpMux := http.NewServeMux()
httpMux.Handle(daemonHTTPRPCPath, jsonrpc2.NewHTTPHandler(rpcHandler))
httpMux.Handle(daemonHTTPRPCPath, daemonInst.HTTPRPCHandler())
srv := &http.Server{Handler: httpMux}

View File

@ -3,20 +3,37 @@ package main
import (
"encoding"
"fmt"
"isle/nebula"
"net/netip"
)
type textUnmarshalerFlag struct {
inner interface {
encoding.TextUnmarshaler
type textUnmarshaler[T any] interface {
encoding.TextUnmarshaler
*T
}
type textUnmarshalerFlag[T encoding.TextMarshaler, P textUnmarshaler[T]] struct {
V T
}
func (f *textUnmarshalerFlag[T, P]) Set(v string) error {
return P(&(f.V)).UnmarshalText([]byte(v))
}
func (f *textUnmarshalerFlag[T, P]) String() string {
b, err := f.V.MarshalText()
if err != nil {
panic(fmt.Sprintf("calling MarshalText on %#v: %v", f.V, err))
}
return string(b)
}
func (f textUnmarshalerFlag) Set(v string) error {
return f.inner.UnmarshalText([]byte(v))
}
func (f *textUnmarshalerFlag[T, P]) Type() string { return "string" }
func (f textUnmarshalerFlag) String() string {
return fmt.Sprint(f.inner)
}
////////////////////////////////////////////////////////////////////////////////
func (f textUnmarshalerFlag) Type() string { return "string" }
type (
hostNameFlag = textUnmarshalerFlag[nebula.HostName, *nebula.HostName]
ipNetFlag = textUnmarshalerFlag[nebula.IPNet, *nebula.IPNet]
ipFlag = textUnmarshalerFlag[netip.Addr, *netip.Addr]
)
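The generic flag above is worth unpacking: the P constraint ties T to its own pointer type, so Set can call UnmarshalText through *T while String marshals the value directly. A self-contained sketch restating the type with an assumed netip.Addr instantiation:

```
package main

import (
	"encoding"
	"fmt"
	"net/netip"
)

type textUnmarshaler[T any] interface {
	encoding.TextUnmarshaler
	*T
}

type textUnmarshalerFlag[T encoding.TextMarshaler, P textUnmarshaler[T]] struct {
	V T
}

func (f *textUnmarshalerFlag[T, P]) Set(v string) error {
	// P(&f.V) converts *T to the constrained pointer type, which is
	// guaranteed to implement encoding.TextUnmarshaler.
	return P(&f.V).UnmarshalText([]byte(v))
}

func main() {
	var f textUnmarshalerFlag[netip.Addr, *netip.Addr]
	if err := f.Set("10.100.0.1"); err != nil {
		panic(err)
	}
	fmt.Println(f.V) // prints 10.100.0.1
}
```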

View File

@ -3,20 +3,19 @@ package main
import (
"errors"
"fmt"
"isle/daemon/daecommon"
"os"
"path/filepath"
"syscall"
"isle/daemon"
)
// minio-client keeps a configuration directory which contains various pieces of
// information which may or may not be useful. Unfortunately when it initializes
// this directory it likes to print some annoying logs, so we pre-initialize in
// order to prevent it from doing so.
func initMCConfigDir() (string, error) {
func initMCConfigDir(envVars daecommon.EnvVars) (string, error) {
var (
path = filepath.Join(daemonEnvVars.StateDirPath, "mc")
path = filepath.Join(envVars.StateDir.Path, "mc")
sharePath = filepath.Join(path, "share")
configJSONPath = filepath.Join(path, "config.json")
)
@ -35,9 +34,9 @@ func initMCConfigDir() (string, error) {
var subCmdGarageMC = subCmd{
name: "mc",
descr: "Runs the mc (minio-client) binary. The isle garage can be accessed under the `garage` alias",
do: func(subCmdCtx subCmdCtx) error {
do: func(ctx subCmdCtx) error {
flags := subCmdCtx.flagSet(true)
flags := ctx.flagSet(true)
keyID := flags.StringP(
"key-id", "i", "",
@ -49,14 +48,11 @@ var subCmdGarageMC = subCmd{
"Optional key secret to use, defaults to that of the shared global key",
)
if err := flags.Parse(subCmdCtx.args); err != nil {
if err := flags.Parse(ctx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
var clientParams daemon.GarageClientParams
err := subCmdCtx.daemonRCPClient.Call(
subCmdCtx.ctx, &clientParams, "GetGarageClientParams", nil,
)
clientParams, err := ctx.daemonRPC.GetGarageClientParams(ctx)
if err != nil {
return fmt.Errorf("calling GetGarageClientParams: %w", err)
}
@ -77,7 +73,9 @@ var subCmdGarageMC = subCmd{
args = args[i:]
}
configDir, err := initMCConfigDir()
envVars := daecommon.GetEnvVars()
configDir, err := initMCConfigDir(envVars)
if err != nil {
return fmt.Errorf("initializing minio-client config directory: %w", err)
}
@ -118,12 +116,9 @@ var subCmdGarageMC = subCmd{
var subCmdGarageCLI = subCmd{
name: "cli",
descr: "Runs the garage binary, automatically configured to point to the garage sub-process of a running isle daemon",
do: func(subCmdCtx subCmdCtx) error {
do: func(ctx subCmdCtx) error {
var clientParams daemon.GarageClientParams
err := subCmdCtx.daemonRCPClient.Call(
subCmdCtx.ctx, &clientParams, "GetGarageClientParams", nil,
)
clientParams, err := ctx.daemonRPC.GetGarageClientParams(ctx)
if err != nil {
return fmt.Errorf("calling GetGarageClientParams: %w", err)
}
@ -134,7 +129,7 @@ var subCmdGarageCLI = subCmd{
var (
binPath = binPath("garage")
args = append([]string{"garage"}, subCmdCtx.args...)
args = append([]string{"garage"}, ctx.args...)
cliEnv = append(
os.Environ(),
"GARAGE_RPC_HOST="+clientParams.Peer.RPCPeerAddr(),
@ -156,8 +151,8 @@ var subCmdGarageCLI = subCmd{
var subCmdGarage = subCmd{
name: "garage",
descr: "Runs the garage binary, automatically configured to point to the garage sub-process of a running isle daemon",
do: func(subCmdCtx subCmdCtx) error {
return subCmdCtx.doSubCmd(
do: func(ctx subCmdCtx) error {
return ctx.doSubCmd(
subCmdGarageCLI,
subCmdGarageMC,
)

go/cmd/entrypoint/host.go (new file, 142 additions)
View File

@ -0,0 +1,142 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"isle/bootstrap"
"isle/daemon/network"
"isle/jsonutil"
"os"
"sort"
)
var subCmdHostCreate = subCmd{
name: "create",
descr: "Creates a new host in the network, writing its new bootstrap.json to stdout",
do: func(ctx subCmdCtx) error {
var (
flags = ctx.flagSet(false)
hostName hostNameFlag
ip ipFlag
)
hostNameF := flags.VarPF(
&hostName,
"hostname", "n",
"Name of the host to generate bootstrap.json for",
)
flags.VarP(&ip, "ip", "i", "IP of the new host. An available IP will be chosen if none is given.")
canCreateHosts := flags.Bool(
"can-create-hosts",
false,
"The new host should have the ability to create hosts too",
)
if err := flags.Parse(ctx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
if !hostNameF.Changed {
return errors.New("--hostname is required")
}
var (
res network.JoiningBootstrap
err error
)
res, err = ctx.daemonRPC.CreateHost(
ctx, hostName.V, network.CreateHostOpts{
IP: ip.V,
CanCreateHosts: *canCreateHosts,
},
)
if err != nil {
return fmt.Errorf("calling CreateHost: %w", err)
}
return json.NewEncoder(os.Stdout).Encode(res)
},
}
var subCmdHostList = subCmd{
name: "list",
descr: "Lists all hosts in the network, and their IPs",
do: func(ctx subCmdCtx) error {
hostsRes, err := ctx.getHosts()
if err != nil {
return fmt.Errorf("calling GetHosts: %w", err)
}
type host struct {
Name string
VPN struct {
IP string
}
Storage bootstrap.GarageHost `json:",omitempty"`
}
hosts := make([]host, 0, len(hostsRes))
for _, h := range hostsRes {
host := host{
Name: string(h.Name),
Storage: h.Garage,
}
host.VPN.IP = h.IP().String()
hosts = append(hosts, host)
}
sort.Slice(hosts, func(i, j int) bool { return hosts[i].Name < hosts[j].Name })
return jsonutil.WriteIndented(os.Stdout, hosts)
},
}
var subCmdHostRemove = subCmd{
name: "remove",
descr: "Removes a host from the network",
do: func(ctx subCmdCtx) error {
var (
flags = ctx.flagSet(false)
hostName hostNameFlag
)
hostNameF := flags.VarPF(
&hostName,
"hostname", "n",
"Name of the host to remove",
)
if err := flags.Parse(ctx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
if !hostNameF.Changed {
return errors.New("--hostname is required")
}
if err := ctx.daemonRPC.RemoveHost(ctx, hostName.V); err != nil {
return fmt.Errorf("calling RemoveHost: %w", err)
}
return nil
},
}
var subCmdHost = subCmd{
name: "host",
plural: "s",
descr: "Sub-commands having to do with configuration of hosts in the network",
do: func(ctx subCmdCtx) error {
return ctx.doSubCmd(
subCmdHostCreate,
subCmdHostRemove,
subCmdHostList,
)
},
}

View File

@ -1,153 +0,0 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"isle/bootstrap"
"isle/daemon"
"isle/jsonutil"
"isle/nebula"
"net/netip"
"os"
"sort"
)
var subCmdHostsCreate = subCmd{
name: "create",
descr: "Creates a new host in the network, writing its new bootstrap.json to stdout",
do: func(subCmdCtx subCmdCtx) error {
var (
flags = subCmdCtx.flagSet(false)
hostName nebula.HostName
ip netip.Addr
)
hostNameF := flags.VarPF(
textUnmarshalerFlag{&hostName},
"hostname", "h",
"Name of the host to generate bootstrap.json for",
)
ipF := flags.VarPF(
textUnmarshalerFlag{&ip}, "ip", "i", "IP of the new host",
)
canCreateHosts := flags.Bool(
"can-create-hosts",
false,
"The new host should have the ability to create hosts too",
)
if err := flags.Parse(subCmdCtx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
if !hostNameF.Changed || !ipF.Changed {
return errors.New("--hostname and --ip are required")
}
var res daemon.CreateHostResult
err := subCmdCtx.daemonRCPClient.Call(
subCmdCtx.ctx,
&res,
"CreateHost",
daemon.CreateHostRequest{
HostName: hostName,
IP: ip,
Opts: daemon.CreateHostOpts{
CanCreateHosts: *canCreateHosts,
},
},
)
if err != nil {
return fmt.Errorf("calling CreateHost: %w", err)
}
return json.NewEncoder(os.Stdout).Encode(res.JoiningBootstrap)
},
}
var subCmdHostsList = subCmd{
name: "list",
descr: "Lists all hosts in the network, and their IPs",
do: func(subCmdCtx subCmdCtx) error {
hostsRes, err := subCmdCtx.getHosts()
if err != nil {
return fmt.Errorf("calling GetHosts: %w", err)
}
type host struct {
Name string
VPN struct {
IP string
}
Storage bootstrap.GarageHost `json:",omitempty"`
}
hosts := make([]host, 0, len(hostsRes.Hosts))
for _, h := range hostsRes.Hosts {
host := host{
Name: string(h.Name),
Storage: h.Garage,
}
host.VPN.IP = h.IP().String()
hosts = append(hosts, host)
}
sort.Slice(hosts, func(i, j int) bool { return hosts[i].Name < hosts[j].Name })
return jsonutil.WriteIndented(os.Stdout, hosts)
},
}
var subCmdHostsRemove = subCmd{
name: "remove",
descr: "Removes a host from the network",
do: func(subCmdCtx subCmdCtx) error {
var (
flags = subCmdCtx.flagSet(false)
hostName nebula.HostName
)
hostNameF := flags.VarPF(
textUnmarshalerFlag{&hostName},
"hostname", "h",
"Name of the host to remove",
)
if err := flags.Parse(subCmdCtx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
if !hostNameF.Changed {
return errors.New("--hostname is required")
}
err := subCmdCtx.daemonRCPClient.Call(
subCmdCtx.ctx, nil, "RemoveHost", daemon.RemoveHostRequest{
HostName: hostName,
},
)
if err != nil {
return fmt.Errorf("calling RemoveHost: %w", err)
}
return nil
},
}
var subCmdHosts = subCmd{
name: "hosts",
descr: "Sub-commands having to do with configuration of hosts in the network",
do: func(subCmdCtx subCmdCtx) error {
return subCmdCtx.doSubCmd(
subCmdHostsCreate,
subCmdHostsRemove,
subCmdHostsList,
)
},
}

View File

@ -2,7 +2,6 @@ package main
import (
"context"
"isle/daemon"
"os"
"os/signal"
"path/filepath"
@ -21,7 +20,6 @@ func getAppDirPath() string {
}
var (
daemonEnvVars = daemon.GetEnvVars()
envAppDirPath = getAppDirPath()
envBinDirPath = filepath.Join(envAppDirPath, "bin")
)
@ -57,13 +55,13 @@ func main() {
}()
err := subCmdCtx{
args: os.Args[1:],
ctx: ctx,
logger: logger,
Context: ctx,
args: os.Args[1:],
logger: logger,
}.doSubCmd(
subCmdDaemon,
subCmdGarage,
subCmdHosts,
subCmdHost,
subCmdNebula,
subCmdNetwork,
subCmdVersion,

View File

@ -3,26 +3,23 @@ package main
import (
"errors"
"fmt"
"isle/daemon"
"isle/jsonutil"
"isle/nebula"
"net/netip"
"os"
)
var subCmdNebulaCreateCert = subCmd{
name: "create-cert",
descr: "Creates a signed nebula certificate file for an existing host and writes it to stdout",
do: func(subCmdCtx subCmdCtx) error {
do: func(ctx subCmdCtx) error {
var (
flags = subCmdCtx.flagSet(false)
hostName nebula.HostName
ip netip.Addr
flags = ctx.flagSet(false)
hostName hostNameFlag
)
hostNameF := flags.VarPF(
textUnmarshalerFlag{&hostName},
"hostname", "h",
&hostName,
"hostname", "n",
"Name of the host to generate a certificate for",
)
@ -31,13 +28,7 @@ var subCmdNebulaCreateCert = subCmd{
`Path to PEM file containing public key which will be embedded in the cert.`,
)
flags.Var(
textUnmarshalerFlag{&ip},
"ip",
"IP address to create a cert for. If this is not given then the IP associated with the host via its `hosts create` call will be used",
)
if err := flags.Parse(subCmdCtx.args); err != nil {
if err := flags.Parse(ctx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
@ -55,24 +46,14 @@ var subCmdNebulaCreateCert = subCmd{
return fmt.Errorf("unmarshaling public key as PEM: %w", err)
}
var res daemon.CreateNebulaCertificateResult
err = subCmdCtx.daemonRCPClient.Call(
subCmdCtx.ctx,
&res,
"CreateNebulaCertificate",
daemon.CreateNebulaCertificateRequest{
HostName: hostName,
HostEncryptingPublicKey: hostPub,
Opts: daemon.CreateNebulaCertificateOpts{
IP: ip,
},
},
res, err := ctx.daemonRPC.CreateNebulaCertificate(
ctx, hostName.V, hostPub,
)
if err != nil {
return fmt.Errorf("calling CreateNebulaCertificate: %w", err)
}
nebulaHostCertPEM, err := res.HostNebulaCertifcate.Unwrap().MarshalToPEM()
nebulaHostCertPEM, err := res.Unwrap().MarshalToPEM()
if err != nil {
return fmt.Errorf("marshaling cert to PEM: %w", err)
}
@ -88,22 +69,19 @@ var subCmdNebulaCreateCert = subCmd{
var subCmdNebulaShow = subCmd{
name: "show",
descr: "Writes nebula network information to stdout in JSON format",
do: func(subCmdCtx subCmdCtx) error {
do: func(ctx subCmdCtx) error {
flags := subCmdCtx.flagSet(false)
if err := flags.Parse(subCmdCtx.args); err != nil {
flags := ctx.flagSet(false)
if err := flags.Parse(ctx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
hosts, err := subCmdCtx.getHosts()
hosts, err := ctx.getHosts()
if err != nil {
return fmt.Errorf("getting hosts: %w", err)
}
var caPublicCreds nebula.CAPublicCredentials
err = subCmdCtx.daemonRCPClient.Call(
subCmdCtx.ctx, &caPublicCreds, "GetNebulaCAPublicCredentials", nil,
)
caPublicCreds, err := ctx.daemonRPC.GetNebulaCAPublicCredentials(ctx)
if err != nil {
return fmt.Errorf("calling GetNebulaCAPublicCredentials: %w", err)
}
@ -134,7 +112,7 @@ var subCmdNebulaShow = subCmd{
SubnetCIDR: subnet.String(),
}
for _, h := range hosts.Hosts {
for _, h := range hosts {
if h.Nebula.PublicAddr == "" {
continue
}
@ -156,8 +134,8 @@ var subCmdNebulaShow = subCmd{
var subCmdNebula = subCmd{
name: "nebula",
descr: "Sub-commands related to the nebula VPN",
do: func(subCmdCtx subCmdCtx) error {
return subCmdCtx.doSubCmd(
do: func(ctx subCmdCtx) error {
return ctx.doSubCmd(
subCmdNebulaCreateCert,
subCmdNebulaShow,
)

View File

@ -3,55 +3,57 @@ package main
import (
"errors"
"fmt"
"isle/daemon"
"isle/daemon/network"
"isle/jsonutil"
)
var subCmdNetworkCreate = subCmd{
name: "create",
descr: "Create's a new network, with this host being the first host in that network.",
do: func(subCmdCtx subCmdCtx) error {
do: func(ctx subCmdCtx) error {
var (
ctx = subCmdCtx.ctx
flags = subCmdCtx.flagSet(false)
req daemon.CreateNetworkRequest
flags = ctx.flagSet(false)
ipNet ipNetFlag
hostName hostNameFlag
)
flags.StringVarP(
&req.Name, "name", "n", "",
name := flags.StringP(
"name", "N", "",
"Human-readable name to identify the network as.",
)
flags.StringVarP(
&req.Domain, "domain", "d", "",
domain := flags.StringP(
"domain", "d", "",
"Domain name that should be used as the root domain in the network.",
)
ipNetF := flags.VarPF(
textUnmarshalerFlag{&req.IPNet}, "ip-net", "i",
&ipNet, "ip-net", "i",
`An IP subnet, in CIDR form, which will be the overall range of`+
` possible IPs in the network. The first IP in this network`+
` range will become this first host's IP.`,
)
hostNameF := flags.VarPF(
textUnmarshalerFlag{&req.HostName},
"hostname", "h",
&hostName,
"hostname", "n",
"Name of this host, which will be the first host in the network",
)
if err := flags.Parse(subCmdCtx.args); err != nil {
if err := flags.Parse(ctx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
if req.Name == "" ||
req.Domain == "" ||
if *name == "" ||
*domain == "" ||
!ipNetF.Changed ||
!hostNameF.Changed {
return errors.New("--name, --domain, --ip-net, and --hostname are required")
}
err := subCmdCtx.daemonRCPClient.Call(ctx, nil, "CreateNetwork", req)
err := ctx.daemonRPC.CreateNetwork(
ctx, *name, *domain, ipNet.V, hostName.V,
)
if err != nil {
return fmt.Errorf("creating network: %w", err)
}
@ -63,17 +65,15 @@ var subCmdNetworkCreate = subCmd{
var subCmdNetworkJoin = subCmd{
name: "join",
descr: "Joins this host to an existing network",
do: func(subCmdCtx subCmdCtx) error {
do: func(ctx subCmdCtx) error {
var (
ctx = subCmdCtx.ctx
flags = subCmdCtx.flagSet(false)
flags = ctx.flagSet(false)
bootstrapPath = flags.StringP(
"bootstrap-path", "b", "", "Path to a bootstrap.json file.",
)
)
bootstrapPath := flags.StringP(
"bootstrap-path", "b", "", "Path to a bootstrap.json file.",
)
if err := flags.Parse(subCmdCtx.args); err != nil {
if err := flags.Parse(ctx.args); err != nil {
return fmt.Errorf("parsing flags: %w", err)
}
@ -81,24 +81,22 @@ var subCmdNetworkJoin = subCmd{
return errors.New("--bootstrap-path is required")
}
var newBootstrap daemon.JoiningBootstrap
var newBootstrap network.JoiningBootstrap
if err := jsonutil.LoadFile(&newBootstrap, *bootstrapPath); err != nil {
return fmt.Errorf(
"loading bootstrap from %q: %w", *bootstrapPath, err,
)
}
return subCmdCtx.daemonRCPClient.Call(
ctx, nil, "JoinNetwork", newBootstrap,
)
return ctx.daemonRPC.JoinNetwork(ctx, newBootstrap)
},
}
var subCmdNetwork = subCmd{
name: "network",
descr: "Sub-commands related to network membership",
do: func(subCmdCtx subCmdCtx) error {
return subCmdCtx.doSubCmd(
do: func(ctx subCmdCtx) error {
return ctx.doSubCmd(
subCmdNetworkCreate,
subCmdNetworkJoin,
)

View File

@ -12,21 +12,41 @@ import (
"github.com/spf13/pflag"
)
type flagSet struct {
*pflag.FlagSet
}
func (fs flagSet) Parse(args []string) error {
fs.VisitAll(func(f *pflag.Flag) {
if f.Shorthand == "h" {
panic(fmt.Sprintf("flag %+v has reserved shorthand `-h`", f))
}
if f.Name == "help" {
panic(fmt.Sprintf("flag %+v has reserved name `--help`", f))
}
})
return fs.FlagSet.Parse(args)
}
// subCmdCtx contains all information available to a subCmd's do method.
type subCmdCtx struct {
context.Context
subCmd subCmd // the subCmd itself
args []string // command-line arguments, excluding the subCmd itself.
subCmdNames []string // names of subCmds so far, including this one
ctx context.Context
logger *mlog.Logger
daemonRCPClient jsonrpc2.Client
logger *mlog.Logger
daemonRPC daemon.RPC
}
type subCmd struct {
name string
descr string
do func(subCmdCtx) error
// If set then the name will be allowed to be suffixed with this string.
plural string
}
func (ctx subCmdCtx) usagePrefix() string {
@ -39,7 +59,7 @@ func (ctx subCmdCtx) usagePrefix() string {
return fmt.Sprintf("\nUSAGE: %s %s", os.Args[0], subCmdNamesStr)
}
func (ctx subCmdCtx) flagSet(withPassthrough bool) *pflag.FlagSet {
func (ctx subCmdCtx) flagSet(withPassthrough bool) flagSet {
flags := pflag.NewFlagSet(ctx.subCmd.name, pflag.ExitOnError)
flags.Usage = func() {
@ -58,7 +78,7 @@ func (ctx subCmdCtx) flagSet(withPassthrough bool) *pflag.FlagSet {
os.Stderr.Sync()
os.Exit(2)
}
return flags
return flagSet{flags}
}
func (ctx subCmdCtx) doSubCmd(subCmds ...subCmd) error {
@ -76,7 +96,11 @@ func (ctx subCmdCtx) doSubCmd(subCmds ...subCmd) error {
fmt.Fprintf(os.Stderr, "\nSUB-COMMANDS:\n\n")
for _, subCmd := range subCmds {
fmt.Fprintf(os.Stderr, " %s\t%s\n", subCmd.name, subCmd.descr)
name := subCmd.name
if subCmd.plural != "" {
name += "(" + subCmd.plural + ")"
}
fmt.Fprintf(os.Stderr, " %s\t%s\n", name, subCmd.descr)
}
fmt.Fprintf(os.Stderr, "\n")
@ -93,6 +117,9 @@ func (ctx subCmdCtx) doSubCmd(subCmds ...subCmd) error {
subCmdsMap := map[string]subCmd{}
for _, subCmd := range subCmds {
subCmdsMap[subCmd.name] = subCmd
if subCmd.plural != "" {
subCmdsMap[subCmd.name+subCmd.plural] = subCmd
}
}
subCmdName, args := args[0], args[1:]
@ -102,17 +129,19 @@ func (ctx subCmdCtx) doSubCmd(subCmds ...subCmd) error {
printUsageExit(subCmdName)
}
daemonRCPClient := jsonrpc2.NewUnixHTTPClient(
daemon.HTTPSocketPath(), daemonHTTPRPCPath,
daemonRPC := daemon.RPCFromClient(
jsonrpc2.NewUnixHTTPClient(
daemon.HTTPSocketPath(), daemonHTTPRPCPath,
),
)
err := subCmd.do(subCmdCtx{
subCmd: subCmd,
args: args,
subCmdNames: append(ctx.subCmdNames, subCmdName),
ctx: ctx.ctx,
logger: ctx.logger,
daemonRCPClient: daemonRCPClient,
Context: ctx.Context,
subCmd: subCmd,
args: args,
subCmdNames: append(ctx.subCmdNames, subCmdName),
logger: ctx.logger,
daemonRPC: daemonRPC,
})
if err != nil {
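A minimal sketch of why embedding context.Context into subCmdCtx (as this file now does) pays off; the helper names here are illustrative:

```
package main

import "context"

// subCmdCtx mirrors the struct above: embedding context.Context means a
// subCmdCtx value satisfies the context.Context interface itself.
type subCmdCtx struct {
	context.Context
	args []string
}

func needsContext(ctx context.Context) error { return ctx.Err() }

// Callers can pass the subCmdCtx wherever a context.Context is expected,
// which is why calls elsewhere in this diff read ctx.daemonRPC.GetHosts(ctx)
// with no separate ctx.ctx field.
func example(ctx subCmdCtx) error {
	return needsContext(ctx)
}
```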

View File

@ -9,7 +9,7 @@ import (
var subCmdVersion = subCmd{
name: "version",
descr: "Dumps version and build info to stdout",
do: func(subCmdCtx subCmdCtx) error {
do: func(ctx subCmdCtx) error {
versionPath := filepath.Join(envAppDirPath, "share/version")

View File

@ -1,50 +0,0 @@
package daemon
import (
"fmt"
"isle/bootstrap"
"isle/dnsmasq"
"path/filepath"
"sort"
"code.betamike.com/micropelago/pmux/pmuxlib"
)
func dnsmasqPmuxProcConfig(
runtimeDirPath, binDirPath string,
hostBootstrap bootstrap.Bootstrap,
daemonConfig Config,
) (
pmuxlib.ProcessConfig, error,
) {
confPath := filepath.Join(runtimeDirPath, "dnsmasq.conf")
hostsSlice := make([]dnsmasq.ConfDataHost, 0, len(hostBootstrap.Hosts))
for _, host := range hostBootstrap.Hosts {
hostsSlice = append(hostsSlice, dnsmasq.ConfDataHost{
Name: string(host.Name),
IP: host.IP().String(),
})
}
sort.Slice(hostsSlice, func(i, j int) bool {
return hostsSlice[i].IP < hostsSlice[j].IP
})
confData := dnsmasq.ConfData{
Resolvers: daemonConfig.DNS.Resolvers,
Domain: hostBootstrap.NetworkCreationParams.Domain,
IP: hostBootstrap.ThisHost().IP().String(),
Hosts: hostsSlice,
}
if err := dnsmasq.WriteConfFile(confPath, confData); err != nil {
return pmuxlib.ProcessConfig{}, fmt.Errorf("writing dnsmasq.conf to %q: %w", confPath, err)
}
return pmuxlib.ProcessConfig{
Name: "dnsmasq",
Cmd: filepath.Join(binDirPath, "dnsmasq"),
Args: []string{"-d", "-C", confPath},
}, nil
}

View File

@ -1,105 +0,0 @@
package daemon
import (
"context"
"errors"
"fmt"
"isle/bootstrap"
"isle/secrets"
"code.betamike.com/micropelago/pmux/pmuxlib"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
)
// Children manages all child processes of a network. Child processes are
// comprised of:
// - nebula
// - dnsmasq
// - garage (0 or more, depending on configured storage allocations)
type Children struct {
logger *mlog.Logger
opts Opts
pmuxCancelFn context.CancelFunc
pmuxStoppedCh chan struct{}
}
// NewChildren initializes and returns a Children instance. If initialization
// fails an error is returned.
func NewChildren(
ctx context.Context,
logger *mlog.Logger,
binDirPath string,
secretsStore secrets.Store,
daemonConfig Config,
garageAdminToken string,
hostBootstrap bootstrap.Bootstrap,
opts *Opts,
) (
*Children, error,
) {
opts = opts.withDefaults()
logger.Info(ctx, "Loading secrets")
garageRPCSecret, err := getGarageRPCSecret(ctx, secretsStore)
if err != nil && !errors.Is(err, secrets.ErrNotFound) {
return nil, fmt.Errorf("loading garage RPC secret: %w", err)
}
pmuxCtx, pmuxCancelFn := context.WithCancel(context.Background())
c := &Children{
logger: logger,
opts: *opts,
pmuxCancelFn: pmuxCancelFn,
pmuxStoppedCh: make(chan struct{}),
}
pmuxConfig, err := c.newPmuxConfig(
ctx,
garageRPCSecret,
binDirPath,
daemonConfig,
garageAdminToken,
hostBootstrap,
)
if err != nil {
return nil, fmt.Errorf("generating pmux config: %w", err)
}
go func() {
defer close(c.pmuxStoppedCh)
pmuxlib.Run(pmuxCtx, c.opts.Stdout, c.opts.Stderr, pmuxConfig)
c.logger.Debug(pmuxCtx, "pmux stopped")
}()
initErr := c.postPmuxInit(
ctx, daemonConfig, garageAdminToken, hostBootstrap,
)
if initErr != nil {
logger.Warn(ctx, "failed to initialize Children, shutting down child processes", err)
if err := c.Shutdown(); err != nil {
panic(fmt.Sprintf(
"failed to shut down child processes after initialization"+
" error, there may be zombie children leftover."+
" Original error: %v",
initErr,
))
}
return nil, initErr
}
return c, nil
}
// Shutdown blocks until all child processes have gracefully shut themselves
// down.
//
// If this returns an error then it's possible that child processes are
// still running and are no longer managed.
func (c *Children) Shutdown() error {
c.pmuxCancelFn()
<-c.pmuxStoppedCh
return nil
}

View File

@ -0,0 +1,175 @@
// Package children manages the creation, lifetime, and shutdown of child
// processes created by the daemon.
package children
import (
"context"
"errors"
"fmt"
"io"
"os"
"code.betamike.com/micropelago/pmux/pmuxlib"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
"isle/bootstrap"
"isle/daemon/daecommon"
"isle/secrets"
"isle/toolkit"
)
// Opts are optional parameters which can be passed in when initializing a new
// Children instance. A nil Opts is equivalent to a zero value.
type Opts struct {
// Stdout and Stderr are what the associated outputs from child processes
// will be directed to.
Stdout, Stderr io.Writer
}
func (o *Opts) withDefaults() *Opts {
if o == nil {
o = new(Opts)
}
if o.Stdout == nil {
o.Stdout = os.Stdout
}
if o.Stderr == nil {
o.Stderr = os.Stderr
}
return o
}
// Children manages all child processes of a network. Child processes are
// comprised of:
// - nebula
// - dnsmasq
// - garage (0 or more, depending on configured storage allocations)
type Children struct {
logger *mlog.Logger
daemonConfig daecommon.Config
runtimeDir toolkit.Dir
opts Opts
pmux *pmuxlib.Pmux
}
// New initializes and returns a Children instance. If initialization fails an
// error is returned.
func New(
ctx context.Context,
logger *mlog.Logger,
binDirPath string,
secretsStore secrets.Store,
daemonConfig daecommon.Config,
runtimeDir toolkit.Dir,
garageAdminToken string,
hostBootstrap bootstrap.Bootstrap,
opts *Opts,
) (
*Children, error,
) {
opts = opts.withDefaults()
logger.Info(ctx, "Loading secrets")
garageRPCSecret, err := daecommon.GetGarageRPCSecret(ctx, secretsStore)
if err != nil && !errors.Is(err, secrets.ErrNotFound) {
return nil, fmt.Errorf("loading garage RPC secret: %w", err)
}
c := &Children{
logger: logger,
daemonConfig: daemonConfig,
runtimeDir: runtimeDir,
opts: *opts,
}
pmuxConfig, err := c.newPmuxConfig(
ctx,
garageRPCSecret,
binDirPath,
daemonConfig,
garageAdminToken,
hostBootstrap,
)
if err != nil {
return nil, fmt.Errorf("generating pmux config: %w", err)
}
c.pmux = pmuxlib.NewPmux(pmuxConfig, c.opts.Stdout, c.opts.Stderr)
initErr := c.postPmuxInit(
ctx, daemonConfig, garageAdminToken, hostBootstrap,
)
if initErr != nil {
logger.Warn(ctx, "failed to initialize Children, shutting down child processes", err)
c.Shutdown()
return nil, initErr
}
return c, nil
}
// RestartDNSMasq rewrites the dnsmasq config and restarts the process.
//
// TODO block until process has been confirmed to have come back up
// successfully.
func (c *Children) RestartDNSMasq(hostBootstrap bootstrap.Bootstrap) error {
_, err := dnsmasqWriteConfig(
c.runtimeDir.Path, c.daemonConfig, hostBootstrap,
)
if err != nil {
return fmt.Errorf("writing new dnsmasq config: %w", err)
}
c.pmux.Restart("dnsmasq")
return nil
}
// RestartNebula rewrites the nebula config and restarts the process.
//
// TODO block until process has been confirmed to have come back up
// successfully.
func (c *Children) RestartNebula(hostBootstrap bootstrap.Bootstrap) error {
_, err := nebulaWriteConfig(
c.runtimeDir.Path, c.daemonConfig, hostBootstrap,
)
if err != nil {
return fmt.Errorf("writing a new nebula config: %w", err)
}
c.pmux.Restart("nebula")
return nil
}
// Reload applies a ReloadDiff to the Children, using the given bootstrap which
// must be the same one which was passed to CalculateReloadDiff.
func (c *Children) Reload(
ctx context.Context, newBootstrap bootstrap.Bootstrap, diff ReloadDiff,
) error {
var errs []error
if diff.DNSChanged {
c.logger.Info(ctx, "Restarting dnsmasq to account for bootstrap changes")
if err := c.RestartDNSMasq(newBootstrap); err != nil {
errs = append(errs, fmt.Errorf("restarting dnsmasq: %w", err))
}
}
if diff.NebulaChanged {
c.logger.Info(ctx, "Restarting nebula to account for bootstrap changes")
if err := c.RestartNebula(newBootstrap); err != nil {
errs = append(errs, fmt.Errorf("restarting nebula: %w", err))
}
}
return errors.Join(errs...)
}
// Shutdown blocks until all child processes have gracefully shut themselves
// down.
func (c *Children) Shutdown() {
c.pmux.Stop()
}
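A hedged sketch of driving the new children package from a caller. The bin directory path and admin token literals are placeholders, and the other parameters are assumed to be prepared by the daemon:

```
package main

import (
	"context"
	"fmt"

	"dev.mediocregopher.com/mediocre-go-lib.git/mlog"

	"isle/bootstrap"
	"isle/daemon/children"
	"isle/daemon/daecommon"
	"isle/secrets"
	"isle/toolkit"
)

func runChildren(
	ctx context.Context,
	logger *mlog.Logger,
	secretsStore secrets.Store,
	daemonConfig daecommon.Config,
	runtimeDir toolkit.Dir,
	hostBootstrap bootstrap.Bootstrap,
) error {
	c, err := children.New(
		ctx, logger, "/usr/libexec/isle", // hypothetical binDirPath
		secretsStore, daemonConfig, runtimeDir,
		"admin-token", // placeholder garageAdminToken
		hostBootstrap,
		nil, // nil Opts: child output defaults to os.Stdout/os.Stderr
	)
	if err != nil {
		return fmt.Errorf("starting child processes: %w", err)
	}
	defer c.Shutdown() // blocks until all children shut down gracefully

	<-ctx.Done()
	return nil
}
```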

View File

@ -0,0 +1,47 @@
package children
import (
"errors"
"fmt"
"isle/bootstrap"
"isle/daemon/daecommon"
"reflect"
)
// ReloadDiff describes which children had their configurations changed as part
// of a change in the bootstrap.
type ReloadDiff struct {
NebulaChanged bool
DNSChanged bool
}
// CalculateReloadDiff calculates a ReloadDiff based on an old and new
// bootstrap.
func CalculateReloadDiff(
daemonConfig daecommon.Config,
prevBootstrap, nextBootstrap bootstrap.Bootstrap,
) (
diff ReloadDiff, err error,
) {
{
prevNebulaConfig, prevErr := nebulaConfig(daemonConfig, prevBootstrap)
nextNebulaConfig, nextErr := nebulaConfig(daemonConfig, nextBootstrap)
if err = errors.Join(prevErr, nextErr); err != nil {
err = fmt.Errorf("calculating nebula config: %w", err)
return
}
diff.NebulaChanged = !reflect.DeepEqual(
prevNebulaConfig, nextNebulaConfig,
)
}
{
diff.DNSChanged = !reflect.DeepEqual(
dnsmasqConfig(daemonConfig, prevBootstrap),
dnsmasqConfig(daemonConfig, nextBootstrap),
)
}
return
}
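A short sketch tying CalculateReloadDiff to Children.Reload from the previous file; the helper is hypothetical and assumes the caller holds both the previous and next bootstraps:

```
package main

import (
	"context"
	"fmt"

	"isle/bootstrap"
	"isle/daemon/children"
	"isle/daemon/daecommon"
)

// reloadChildren computes which child configurations changed between two
// bootstraps and restarts only the affected processes.
func reloadChildren(
	ctx context.Context,
	c *children.Children,
	daemonConfig daecommon.Config,
	prevBootstrap, nextBootstrap bootstrap.Bootstrap,
) error {
	diff, err := children.CalculateReloadDiff(
		daemonConfig, prevBootstrap, nextBootstrap,
	)
	if err != nil {
		return fmt.Errorf("calculating reload diff: %w", err)
	}
	// Per the doc comment, Reload must be given the same bootstrap that was
	// used to compute the diff.
	return c.Reload(ctx, nextBootstrap, diff)
}
```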

View File

@ -0,0 +1,90 @@
package children
import (
"context"
"fmt"
"isle/bootstrap"
"isle/daemon/daecommon"
"isle/dnsmasq"
"path/filepath"
"sort"
"code.betamike.com/micropelago/pmux/pmuxlib"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
)
func dnsmasqConfig(
daemonConfig daecommon.Config, hostBootstrap bootstrap.Bootstrap,
) dnsmasq.ConfData {
hostsSlice := make([]dnsmasq.ConfDataHost, 0, len(hostBootstrap.Hosts))
for _, host := range hostBootstrap.Hosts {
hostsSlice = append(hostsSlice, dnsmasq.ConfDataHost{
Name: string(host.Name),
IP: host.IP().String(),
})
}
sort.Slice(hostsSlice, func(i, j int) bool {
return hostsSlice[i].IP < hostsSlice[j].IP
})
return dnsmasq.ConfData{
Resolvers: daemonConfig.DNS.Resolvers,
Domain: hostBootstrap.NetworkCreationParams.Domain,
IP: hostBootstrap.ThisHost().IP().String(),
Hosts: hostsSlice,
}
}
func dnsmasqWriteConfig(
runtimeDirPath string,
daemonConfig daecommon.Config,
hostBootstrap bootstrap.Bootstrap,
) (
string, error,
) {
var (
confPath = filepath.Join(runtimeDirPath, "dnsmasq.conf")
confData = dnsmasqConfig(daemonConfig, hostBootstrap)
)
if err := dnsmasq.WriteConfFile(confPath, confData); err != nil {
return "", fmt.Errorf("writing dnsmasq.conf to %q: %w", confPath, err)
}
return confPath, nil
}
func dnsmasqPmuxProcConfig(
logger *mlog.Logger,
runtimeDirPath, binDirPath string,
daemonConfig daecommon.Config,
hostBootstrap bootstrap.Bootstrap,
) (
pmuxlib.ProcessConfig, error,
) {
confPath, err := dnsmasqWriteConfig(
runtimeDirPath, daemonConfig, hostBootstrap,
)
if err != nil {
return pmuxlib.ProcessConfig{}, fmt.Errorf(
"writing dnsmasq config: %w", err,
)
}
return pmuxlib.ProcessConfig{
Cmd: filepath.Join(binDirPath, "dnsmasq"),
Args: []string{"-d", "-C", confPath},
StartAfterFunc: func(ctx context.Context) error {
// TODO consider a shared dnsmasq across all the daemon's networks.
// This would have a few benefits:
// - Less processes, less problems
// - Less configuration for the user in the case of more than one
// network.
// - Can listen on 127.0.0.x:53, rather than on the nebula address.
// This allows DNS to come up before nebula, which is helpful when
// nebula depends on DNS.
return waitForNebula(ctx, logger, hostBootstrap)
},
}, nil
}

View File

@ -1,9 +1,10 @@
package daemon
package children
import (
"context"
"fmt"
"isle/bootstrap"
"isle/daemon/daecommon"
"isle/garage"
"isle/garage/garagesrv"
"net"
@ -19,31 +20,10 @@ func garageAdminClientLogger(logger *mlog.Logger) *mlog.Logger {
return logger.WithNamespace("garageAdminClient")
}
// newGarageAdminClient will return an AdminClient for a local garage instance,
// or it will _panic_ if there is no local instance configured.
func newGarageAdminClient(
logger *mlog.Logger,
daemonConfig Config,
adminToken string,
hostBootstrap bootstrap.Bootstrap,
) *garage.AdminClient {
thisHost := hostBootstrap.ThisHost()
return garage.NewAdminClient(
garageAdminClientLogger(logger),
net.JoinHostPort(
thisHost.IP().String(),
strconv.Itoa(daemonConfig.Storage.Allocations[0].AdminPort),
),
adminToken,
)
}
func waitForGarage(
ctx context.Context,
logger *mlog.Logger,
daemonConfig Config,
daemonConfig daecommon.Config,
adminToken string,
hostBootstrap bootstrap.Bootstrap,
) error {
@ -81,35 +61,16 @@ func waitForGarage(
}
// bootstrapGarageHostForAlloc returns the bootstrap.GarageHostInstance which
// corresponds with the given alloc from the daemon config. This will panic if
// no associated instance can be found.
//
// This assumes that coalesceDaemonConfigAndBootstrap has already been called.
func bootstrapGarageHostForAlloc(
host bootstrap.Host,
alloc ConfigStorageAllocation,
) bootstrap.GarageHostInstance {
for _, inst := range host.Garage.Instances {
if inst.RPCPort == alloc.RPCPort {
return inst
}
}
panic(fmt.Sprintf("could not find alloc %+v in the bootstrap data", alloc))
}
func garageWriteChildConfig(
rpcSecret, runtimeDirPath, adminToken string,
hostBootstrap bootstrap.Bootstrap,
alloc ConfigStorageAllocation,
alloc daecommon.ConfigStorageAllocation,
) (
string, error,
) {
thisHost := hostBootstrap.ThisHost()
id := bootstrapGarageHostForAlloc(thisHost, alloc).ID
id := daecommon.BootstrapGarageHostForAlloc(thisHost, alloc).ID
peer := garage.LocalPeer{
RemotePeer: garage.RemotePeer{
@ -147,14 +108,14 @@ func garagePmuxProcConfigs(
ctx context.Context,
logger *mlog.Logger,
rpcSecret, runtimeDirPath, binDirPath string,
daemonConfig Config,
daemonConfig daecommon.Config,
adminToken string,
hostBootstrap bootstrap.Bootstrap,
) (
[]pmuxlib.ProcessConfig, error,
map[string]pmuxlib.ProcessConfig, error,
) {
var (
pmuxProcConfigs []pmuxlib.ProcessConfig
pmuxProcConfigs = map[string]pmuxlib.ProcessConfig{}
allocs = daemonConfig.Storage.Allocations
)
@ -172,53 +133,15 @@ func garagePmuxProcConfigs(
return nil, fmt.Errorf("writing child config file for alloc %+v: %w", alloc, err)
}
pmuxProcConfigs = append(pmuxProcConfigs, pmuxlib.ProcessConfig{
Name: fmt.Sprintf("garage-%d", alloc.RPCPort),
procName := fmt.Sprintf("garage-%d", alloc.RPCPort)
pmuxProcConfigs[procName] = pmuxlib.ProcessConfig{
Cmd: filepath.Join(binDirPath, "garage"),
Args: []string{"-c", childConfigPath, "server"},
StartAfterFunc: func(ctx context.Context) error {
return waitForNebula(ctx, logger, hostBootstrap)
},
})
}
}
return pmuxProcConfigs, nil
}
func garageApplyLayout(
ctx context.Context,
logger *mlog.Logger,
daemonConfig Config,
adminToken string,
hostBootstrap bootstrap.Bootstrap,
) error {
var (
adminClient = newGarageAdminClient(
logger, daemonConfig, adminToken, hostBootstrap,
)
thisHost = hostBootstrap.ThisHost()
hostName = thisHost.Name
allocs = daemonConfig.Storage.Allocations
peers = make([]garage.PeerLayout, len(allocs))
)
for i, alloc := range allocs {
id := bootstrapGarageHostForAlloc(thisHost, alloc).ID
zone := string(hostName)
if alloc.Zone != "" {
zone = alloc.Zone
}
peers[i] = garage.PeerLayout{
ID: id,
Capacity: alloc.Capacity * 1_000_000_000,
Zone: zone,
Tags: []string{},
}
}
return adminClient.ApplyLayout(ctx, peers)
}

View File

@ -0,0 +1,30 @@
package children
import (
"context"
"time"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
)
// until keeps trying fn until it returns nil, returning true. If the context is
// canceled then until returns false.
func until(
ctx context.Context,
logger *mlog.Logger,
descr string,
fn func(context.Context) error,
) bool {
for {
logger.Info(ctx, descr)
err := fn(ctx)
if err == nil {
return true
} else if ctxErr := ctx.Err(); ctxErr != nil {
return false
}
logger.Warn(ctx, descr+" failed, retrying in one second", err)
time.Sleep(1 * time.Second)
}
}
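A minimal usage sketch of until; since it is unexported, this hypothetical caller lives in the same package, and the ping operation is a placeholder:

```
package children

import (
	"context"

	"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
)

// waitForPing retries the given ping once per second until it succeeds or
// ctx is canceled.
func waitForPing(
	ctx context.Context, logger *mlog.Logger, ping func(context.Context) error,
) error {
	if !until(ctx, logger, "pinging service", ping) {
		// until only returns false once the context has been canceled.
		return ctx.Err()
	}
	return nil
}
```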

View File

@ -1,9 +1,10 @@
package daemon
package children
import (
"context"
"fmt"
"isle/bootstrap"
"isle/daemon/daecommon"
"isle/yamlutil"
"net"
"path/filepath"
@ -46,14 +47,12 @@ func waitForNebula(
return ctx.Err()
}
func nebulaPmuxProcConfig(
runtimeDirPath, binDirPath string,
daemonConfig Config,
func nebulaConfig(
daemonConfig daecommon.Config,
hostBootstrap bootstrap.Bootstrap,
) (
pmuxlib.ProcessConfig, error,
map[string]any, error,
) {
var (
lighthouseHostIPs []string
staticHostMap = map[string][]string{}
@ -72,23 +71,19 @@ func nebulaPmuxProcConfig(
caCertPEM, err := hostBootstrap.CAPublicCredentials.Cert.Unwrap().MarshalToPEM()
if err != nil {
return pmuxlib.ProcessConfig{}, fmt.Errorf(
"marshaling CA cert to PEM: :%w", err,
)
return nil, fmt.Errorf("marshaling CA cert to PEM: :%w", err)
}
hostCertPEM, err := hostBootstrap.PublicCredentials.Cert.Unwrap().MarshalToPEM()
if err != nil {
return pmuxlib.ProcessConfig{}, fmt.Errorf(
"marshaling host cert to PEM: :%w", err,
)
return nil, fmt.Errorf("marshaling host cert to PEM: :%w", err)
}
hostKeyPEM := cert.MarshalX25519PrivateKey(
hostBootstrap.PrivateCredentials.EncryptingPrivateKey.Bytes(),
)
config := map[string]interface{}{
config := map[string]any{
"pki": map[string]string{
"ca": string(caCertPEM),
"cert": string(hostCertPEM),
@ -99,7 +94,7 @@ func nebulaPmuxProcConfig(
"punch": true,
"respond": true,
},
"tun": map[string]interface{}{
"tun": map[string]any{
"dev": daemonConfig.VPN.Tun.Device,
},
"firewall": daemonConfig.VPN.Firewall,
@ -112,7 +107,7 @@ func nebulaPmuxProcConfig(
"port": "0",
}
config["lighthouse"] = map[string]interface{}{
config["lighthouse"] = map[string]any{
"hosts": lighthouseHostIPs,
}
@ -121,7 +116,9 @@ func nebulaPmuxProcConfig(
_, port, err := net.SplitHostPort(publicAddr)
if err != nil {
return pmuxlib.ProcessConfig{}, fmt.Errorf("parsing public address %q: %w", publicAddr, err)
return nil, fmt.Errorf(
"parsing public address %q: %w", publicAddr, err,
)
}
config["listen"] = map[string]string{
@ -129,21 +126,60 @@ func nebulaPmuxProcConfig(
"port": port,
}
config["lighthouse"] = map[string]interface{}{
config["lighthouse"] = map[string]any{
"hosts": []string{},
"am_lighthouse": true,
}
}
return config, nil
}
func nebulaWriteConfig(
runtimeDirPath string,
daemonConfig daecommon.Config,
hostBootstrap bootstrap.Bootstrap,
) (
string, error,
) {
config, err := nebulaConfig(daemonConfig, hostBootstrap)
if err != nil {
return "", fmt.Errorf("creating nebula config: %w", err)
}
nebulaYmlPath := filepath.Join(runtimeDirPath, "nebula.yml")
if err := yamlutil.WriteYamlFile(config, nebulaYmlPath, 0600); err != nil {
return pmuxlib.ProcessConfig{}, fmt.Errorf("writing nebula.yml to %q: %w", nebulaYmlPath, err)
return "", fmt.Errorf("writing nebula.yml to %q: %w", nebulaYmlPath, err)
}
return nebulaYmlPath, nil
}
func nebulaPmuxProcConfig(
runtimeDirPath, binDirPath string,
daemonConfig daecommon.Config,
hostBootstrap bootstrap.Bootstrap,
) (
pmuxlib.ProcessConfig, error,
) {
config, err := nebulaConfig(daemonConfig, hostBootstrap)
if err != nil {
return pmuxlib.ProcessConfig{}, fmt.Errorf(
"creating nebula config: %w", err,
)
}
nebulaYmlPath := filepath.Join(runtimeDirPath, "nebula.yml")
if err := yamlutil.WriteYamlFile(config, nebulaYmlPath, 0600); err != nil {
return pmuxlib.ProcessConfig{}, fmt.Errorf(
"writing nebula.yml to %q: %w", nebulaYmlPath, err,
)
}
return pmuxlib.ProcessConfig{
Name: "nebula",
Cmd: filepath.Join(binDirPath, "nebula"),
Args: []string{"-config", nebulaYmlPath},
Cmd: filepath.Join(binDirPath, "nebula"),
Args: []string{"-config", nebulaYmlPath},
Group: -1, // Make sure nebula is shut down last.
}, nil
}

View File

@ -1,9 +1,10 @@
package daemon
package children
import (
"context"
"fmt"
"isle/bootstrap"
"isle/daemon/daecommon"
"code.betamike.com/micropelago/pmux/pmuxlib"
)
@ -11,14 +12,14 @@ import (
func (c *Children) newPmuxConfig(
ctx context.Context,
garageRPCSecret, binDirPath string,
daemonConfig Config,
daemonConfig daecommon.Config,
garageAdminToken string,
hostBootstrap bootstrap.Bootstrap,
) (
pmuxlib.Config, error,
) {
nebulaPmuxProcConfig, err := nebulaPmuxProcConfig(
c.opts.EnvVars.RuntimeDirPath,
c.runtimeDir.Path,
binDirPath,
daemonConfig,
hostBootstrap,
@ -28,10 +29,11 @@ func (c *Children) newPmuxConfig(
}
dnsmasqPmuxProcConfig, err := dnsmasqPmuxProcConfig(
c.opts.EnvVars.RuntimeDirPath,
c.logger,
c.runtimeDir.Path,
binDirPath,
hostBootstrap,
daemonConfig,
hostBootstrap,
)
if err != nil {
return pmuxlib.Config{}, fmt.Errorf(
@ -43,7 +45,7 @@ func (c *Children) newPmuxConfig(
ctx,
c.logger,
garageRPCSecret,
c.opts.EnvVars.RuntimeDirPath,
c.runtimeDir.Path,
binDirPath,
daemonConfig,
garageAdminToken,
@ -55,20 +57,18 @@ func (c *Children) newPmuxConfig(
)
}
pmuxProcConfigs := garagePmuxProcConfigs
pmuxProcConfigs["nebula"] = nebulaPmuxProcConfig
pmuxProcConfigs["dnsmasq"] = dnsmasqPmuxProcConfig
return pmuxlib.Config{
Processes: append(
[]pmuxlib.ProcessConfig{
nebulaPmuxProcConfig,
dnsmasqPmuxProcConfig,
},
garagePmuxProcConfigs...,
),
Processes: pmuxProcConfigs,
}, nil
}
func (c *Children) postPmuxInit(
ctx context.Context,
daemonConfig Config,
daemonConfig daecommon.Config,
garageAdminToken string,
hostBootstrap bootstrap.Bootstrap,
) error {

go/daemon/client.go (new file, 107 additions)
View File

@ -0,0 +1,107 @@
// Code generated by gowrap. DO NOT EDIT.
// template: jsonrpc2/client_gen.tpl
// gowrap: http://github.com/hexdigest/gowrap
package daemon
//go:generate gowrap gen -p isle/daemon -i RPC -t jsonrpc2/client_gen.tpl -o client.go -l ""
import (
"context"
"isle/bootstrap"
"isle/daemon/jsonrpc2"
"isle/daemon/network"
"isle/nebula"
)
type rpcClient struct {
client jsonrpc2.Client
}
// RPCFromClient wraps a Client so that it implements the
// RPC interface.
func RPCFromClient(client jsonrpc2.Client) RPC {
return &rpcClient{client}
}
func (c *rpcClient) CreateHost(ctx context.Context, h1 nebula.HostName, c2 network.CreateHostOpts) (j1 network.JoiningBootstrap, err error) {
err = c.client.Call(
ctx,
&j1,
"CreateHost",
h1,
c2,
)
return
}
func (c *rpcClient) CreateNebulaCertificate(ctx context.Context, h1 nebula.HostName, e1 nebula.EncryptingPublicKey) (c2 nebula.Certificate, err error) {
err = c.client.Call(
ctx,
&c2,
"CreateNebulaCertificate",
h1,
e1,
)
return
}
func (c *rpcClient) CreateNetwork(ctx context.Context, name string, domain string, ipNet nebula.IPNet, hostName nebula.HostName) (err error) {
err = c.client.Call(
ctx,
nil,
"CreateNetwork",
name,
domain,
ipNet,
hostName,
)
return
}
func (c *rpcClient) GetGarageClientParams(ctx context.Context) (g1 network.GarageClientParams, err error) {
err = c.client.Call(
ctx,
&g1,
"GetGarageClientParams",
)
return
}
func (c *rpcClient) GetHosts(ctx context.Context) (ha1 []bootstrap.Host, err error) {
err = c.client.Call(
ctx,
&ha1,
"GetHosts",
)
return
}
func (c *rpcClient) GetNebulaCAPublicCredentials(ctx context.Context) (c2 nebula.CAPublicCredentials, err error) {
err = c.client.Call(
ctx,
&c2,
"GetNebulaCAPublicCredentials",
)
return
}
func (c *rpcClient) JoinNetwork(ctx context.Context, j1 network.JoiningBootstrap) (err error) {
err = c.client.Call(
ctx,
nil,
"JoinNetwork",
j1,
)
return
}
func (c *rpcClient) RemoveHost(ctx context.Context, hostName nebula.HostName) (err error) {
err = c.client.Call(
ctx,
nil,
"RemoveHost",
hostName,
)
return
}

View File

@ -1,183 +1,81 @@
package daemon
import (
"errors"
"fmt"
"io"
"isle/yamlutil"
"io/fs"
"os"
"path/filepath"
"strconv"
"github.com/imdario/mergo"
"gopkg.in/yaml.v3"
"slices"
"strings"
"sync"
)
func defaultConfigPath(appDirPath string) string {
return filepath.Join(appDirPath, "etc", "daemon.yml")
}
func getDefaultHTTPSocketDirPath() string {
path, err := firstExistingDir(
"/tmp",
type ConfigTun struct {
Device string `yaml:"device"`
}
type ConfigFirewall struct {
Conntrack ConfigConntrack `yaml:"conntrack"`
Outbound []ConfigFirewallRule `yaml:"outbound"`
Inbound []ConfigFirewallRule `yaml:"inbound"`
}
type ConfigConntrack struct {
TCPTimeout string `yaml:"tcp_timeout"`
UDPTimeout string `yaml:"udp_timeout"`
DefaultTimeout string `yaml:"default_timeout"`
MaxConnections int `yaml:"max_connections"`
}
type ConfigFirewallRule struct {
Port string `yaml:"port,omitempty"`
Code string `yaml:"code,omitempty"`
Proto string `yaml:"proto,omitempty"`
Host string `yaml:"host,omitempty"`
Group string `yaml:"group,omitempty"`
Groups []string `yaml:"groups,omitempty"`
CIDR string `yaml:"cidr,omitempty"`
CASha string `yaml:"ca_sha,omitempty"`
CAName string `yaml:"ca_name,omitempty"`
}
// ConfigStorageAllocation describes the structure of each storage allocation
// within the daemon config file.
type ConfigStorageAllocation struct {
DataPath string `yaml:"data_path"`
MetaPath string `yaml:"meta_path"`
Capacity int `yaml:"capacity"`
S3APIPort int `yaml:"s3_api_port"`
RPCPort int `yaml:"rpc_port"`
AdminPort int `yaml:"admin_port"`
// Zone is a secret option which makes it easier to test garage bugs, but
// which we don't want users to otherwise know about.
Zone string `yaml:"zone"`
}
// Config describes the structure of the daemon config file.
type Config struct {
DNS struct {
Resolvers []string `yaml:"resolvers"`
} `yaml:"dns"`
VPN struct {
PublicAddr string `yaml:"public_addr"`
Firewall ConfigFirewall `yaml:"firewall"`
Tun ConfigTun `yaml:"tun"`
} `yaml:"vpn"`
Storage struct {
Allocations []ConfigStorageAllocation
} `yaml:"storage"`
}
func (c *Config) fillDefaults() {
var firewallGarageInbound []ConfigFirewallRule
for i := range c.Storage.Allocations {
if c.Storage.Allocations[i].RPCPort == 0 {
c.Storage.Allocations[i].RPCPort = 3900 + (i * 10)
}
if c.Storage.Allocations[i].S3APIPort == 0 {
c.Storage.Allocations[i].S3APIPort = 3901 + (i * 10)
}
if c.Storage.Allocations[i].AdminPort == 0 {
c.Storage.Allocations[i].AdminPort = 3902 + (i * 10)
}
alloc := c.Storage.Allocations[i]
firewallGarageInbound = append(
firewallGarageInbound,
ConfigFirewallRule{
Port: strconv.Itoa(alloc.S3APIPort),
Proto: "tcp",
Host: "any",
},
ConfigFirewallRule{
Port: strconv.Itoa(alloc.RPCPort),
Proto: "tcp",
Host: "any",
},
)
}
c.VPN.Firewall.Inbound = append(
c.VPN.Firewall.Inbound,
firewallGarageInbound...,
// TODO it's possible the daemon process can't actually write to these
"/run",
"/var/run",
"/dev/shm",
)
}
// CopyDefaultConfig copies the daemon config file embedded in the AppDir into
// the given io.Writer.
func CopyDefaultConfig(into io.Writer, appDirPath string) error {
defaultConfigPath := defaultConfigPath(appDirPath)
f, err := os.Open(defaultConfigPath)
if err != nil {
return fmt.Errorf("opening daemon config at %q: %w", defaultConfigPath, err)
panic(fmt.Sprintf("Failed to find directory for HTTP socket: %v", err))
}
defer f.Close()
if _, err := io.Copy(into, f); err != nil {
return fmt.Errorf("copying daemon config from %q: %w", defaultConfigPath, err)
}
return nil
return path
}
// LoadConfig loads the daemon config from userConfigPath, merges it with
// the default found in the appDirPath, and returns the result.
//
// If userConfigPath is not given then the default is loaded and returned.
func LoadConfig(
appDirPath, userConfigPath string,
) (
Config, error,
) {
// HTTPSocketPath returns the path to the daemon's HTTP socket which is used for
// RPC and other functionality.
var HTTPSocketPath = sync.OnceValue(func() string {
return envOr(
"ISLE_DAEMON_HTTP_SOCKET_PATH",
func() string {
return filepath.Join(
getDefaultHTTPSocketDirPath(), "isle-daemon.sock",
)
},
)
})
defaultConfigPath := defaultConfigPath(appDirPath)
////////////////////////////////////////////////////////////////////////////////
// Jigs
var fullDaemon map[string]interface{}
if err := yamlutil.LoadYamlFile(&fullDaemon, defaultConfigPath); err != nil {
return Config{}, fmt.Errorf("parsing default daemon config file: %w", err)
func envOr(name string, fallback func() string) string {
if v := os.Getenv(name); v != "" {
return v
}
return fallback()
}
if userConfigPath != "" {
var daemonConfig map[string]interface{}
if err := yamlutil.LoadYamlFile(&daemonConfig, userConfigPath); err != nil {
return Config{}, fmt.Errorf("parsing %q: %w", userConfigPath, err)
}
err := mergo.Merge(&fullDaemon, daemonConfig, mergo.WithOverride)
if err != nil {
return Config{}, fmt.Errorf("merging contents of file %q: %w", userConfigPath, err)
func firstExistingDir(paths ...string) (string, error) {
var errs []error
for _, path := range paths {
stat, err := os.Stat(path)
switch {
case errors.Is(err, fs.ErrNotExist): // skip candidates which don't exist
continue
case err != nil:
errs = append(
errs, fmt.Errorf("checking if path %q exists: %w", path, err),
)
case !stat.IsDir():
errs = append(
errs, fmt.Errorf("path %q exists but is not a directory", path),
)
default:
return path, nil
}
}
fullDaemonB, err := yaml.Marshal(fullDaemon)
if err != nil {
return Config{}, fmt.Errorf("yaml marshaling: %w", err)
err := fmt.Errorf(
"no directory found at any of the following paths: %s",
strings.Join(paths, ", "),
)
if len(errs) > 0 {
err = errors.Join(slices.Insert(errs, 0, err)...)
}
var config Config
if err := yaml.Unmarshal(fullDaemonB, &config); err != nil {
return Config{}, fmt.Errorf("yaml unmarshaling back into Config struct: %w", err)
}
config.fillDefaults()
return config, nil
return "", err
}

View File

@ -0,0 +1,201 @@
package daecommon
import (
"fmt"
"io"
"isle/bootstrap"
"isle/yamlutil"
"os"
"path/filepath"
"strconv"
"github.com/imdario/mergo"
"gopkg.in/yaml.v3"
)
func defaultConfigPath(appDirPath string) string {
return filepath.Join(appDirPath, "etc", "daemon.yml")
}
type ConfigTun struct {
Device string `yaml:"device"`
}
type ConfigFirewall struct {
Conntrack ConfigConntrack `yaml:"conntrack"`
Outbound []ConfigFirewallRule `yaml:"outbound"`
Inbound []ConfigFirewallRule `yaml:"inbound"`
}
type ConfigConntrack struct {
TCPTimeout string `yaml:"tcp_timeout"`
UDPTimeout string `yaml:"udp_timeout"`
DefaultTimeout string `yaml:"default_timeout"`
MaxConnections int `yaml:"max_connections"`
}
type ConfigFirewallRule struct {
Port string `yaml:"port,omitempty"`
Code string `yaml:"code,omitempty"`
Proto string `yaml:"proto,omitempty"`
Host string `yaml:"host,omitempty"`
Group string `yaml:"group,omitempty"`
Groups []string `yaml:"groups,omitempty"`
CIDR string `yaml:"cidr,omitempty"`
CASha string `yaml:"ca_sha,omitempty"`
CAName string `yaml:"ca_name,omitempty"`
}
// ConfigStorageAllocation describes the structure of each storage allocation
// within the daemon config file.
type ConfigStorageAllocation struct {
DataPath string `yaml:"data_path"`
MetaPath string `yaml:"meta_path"`
Capacity int `yaml:"capacity"`
S3APIPort int `yaml:"s3_api_port"`
RPCPort int `yaml:"rpc_port"`
AdminPort int `yaml:"admin_port"`
// Zone is a secret option which makes it easier to test garage bugs, but
// which we don't want users to otherwise know about.
Zone string `yaml:"zone"`
}
// Config describes the structure of the daemon config file.
type Config struct {
DNS struct {
Resolvers []string `yaml:"resolvers"`
} `yaml:"dns"`
VPN struct {
PublicAddr string `yaml:"public_addr"`
Firewall ConfigFirewall `yaml:"firewall"`
Tun ConfigTun `yaml:"tun"`
} `yaml:"vpn"`
Storage struct {
Allocations []ConfigStorageAllocation
} `yaml:"storage"`
}
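For reference, a hypothetical daemon.yml fragment matching these structs (all values invented for illustration; capacity is in gigabytes, being multiplied by 1e9 when the garage layout is applied later in this diff):

storage:
  allocations:
    - data_path: /mnt/isle/data0
      meta_path: /mnt/isle/meta0
      capacity: 100
      # s3_api_port, rpc_port, and admin_port may be omitted; fillDefaults
      # assigns 3901/3900/3902 (+10 per additional allocation) when zero.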
func (c *Config) fillDefaults() {
var firewallGarageInbound []ConfigFirewallRule
for i := range c.Storage.Allocations {
if c.Storage.Allocations[i].RPCPort == 0 {
c.Storage.Allocations[i].RPCPort = 3900 + (i * 10)
}
if c.Storage.Allocations[i].S3APIPort == 0 {
c.Storage.Allocations[i].S3APIPort = 3901 + (i * 10)
}
if c.Storage.Allocations[i].AdminPort == 0 {
c.Storage.Allocations[i].AdminPort = 3902 + (i * 10)
}
alloc := c.Storage.Allocations[i]
firewallGarageInbound = append(
firewallGarageInbound,
ConfigFirewallRule{
Port: strconv.Itoa(alloc.S3APIPort),
Proto: "tcp",
Host: "any",
},
ConfigFirewallRule{
Port: strconv.Itoa(alloc.RPCPort),
Proto: "tcp",
Host: "any",
},
)
}
c.VPN.Firewall.Inbound = append(
c.VPN.Firewall.Inbound,
firewallGarageInbound...,
)
}
// CopyDefaultConfig copies the daemon config file embedded in the AppDir into
// the given io.Writer.
func CopyDefaultConfig(into io.Writer, appDirPath string) error {
defaultConfigPath := defaultConfigPath(appDirPath)
f, err := os.Open(defaultConfigPath)
if err != nil {
return fmt.Errorf("opening daemon config at %q: %w", defaultConfigPath, err)
}
defer f.Close()
if _, err := io.Copy(into, f); err != nil {
return fmt.Errorf("copying daemon config from %q: %w", defaultConfigPath, err)
}
return nil
}
// LoadConfig loads the daemon config from userConfigPath, merges it with
// the default found in the appDirPath, and returns the result.
//
// If userConfigPath is not given then the default is loaded and returned.
func LoadConfig(
appDirPath, userConfigPath string,
) (
Config, error,
) {
defaultConfigPath := defaultConfigPath(appDirPath)
var fullDaemon map[string]interface{}
if err := yamlutil.LoadYamlFile(&fullDaemon, defaultConfigPath); err != nil {
return Config{}, fmt.Errorf("parsing default daemon config file: %w", err)
}
if userConfigPath != "" {
var daemonConfig map[string]interface{}
if err := yamlutil.LoadYamlFile(&daemonConfig, userConfigPath); err != nil {
return Config{}, fmt.Errorf("parsing %q: %w", userConfigPath, err)
}
err := mergo.Merge(&fullDaemon, daemonConfig, mergo.WithOverride)
if err != nil {
return Config{}, fmt.Errorf("merging contents of file %q: %w", userConfigPath, err)
}
}
fullDaemonB, err := yaml.Marshal(fullDaemon)
if err != nil {
return Config{}, fmt.Errorf("yaml marshaling: %w", err)
}
var config Config
if err := yaml.Unmarshal(fullDaemonB, &config); err != nil {
return Config{}, fmt.Errorf("yaml unmarshaling back into Config struct: %w", err)
}
config.fillDefaults()
return config, nil
}
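A hedged usage sketch (both paths are examples only; the second may be empty to load just the defaults):

func loadMergedConfig() (daecommon.Config, error) {
	// "/opt/isle" stands in for the AppDir containing etc/daemon.yml.
	return daecommon.LoadConfig("/opt/isle", "/etc/isle/daemon.yml")
}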
// BootstrapGarageHostForAlloc returns the bootstrap.GarageHostInstance which
// corresponds with the given alloc from the daemon config. This will panic if
// no associated instance can be found.
func BootstrapGarageHostForAlloc(
host bootstrap.Host,
alloc ConfigStorageAllocation,
) bootstrap.GarageHostInstance {
for _, inst := range host.Garage.Instances {
if inst.RPCPort == alloc.RPCPort {
return inst
}
}
panic(fmt.Sprintf("could not find alloc %+v in the bootstrap data", alloc))
}

View File

@ -0,0 +1,3 @@
// Package daecommon holds types and functionality which are required by the
// daemon package and others of its subpackages.
package daecommon

View File

@ -0,0 +1,58 @@
package daecommon
import (
"isle/toolkit"
"os"
"path/filepath"
"sync"
"github.com/adrg/xdg"
)
// EnvVars are variables which are derived based on the environment which the
// process is running in.
type EnvVars struct {
StateDir toolkit.Dir
RuntimeDir toolkit.Dir
}
// GetEnvVars will return the EnvVars of the current process, as determined by
// the process's environment.
var GetEnvVars = sync.OnceValue(func() EnvVars {
// RUNTIME_DIRECTORY/STATE_DIRECTORY are used by the systemd service in
// conjunction with the RuntimeDirectory/StateDirectory directives.
var (
res EnvVars
stateDirPath = envOr(
"STATE_DIRECTORY",
func() string { return filepath.Join(xdg.StateHome, "isle") },
)
runtimeDirPath = envOr(
"RUNTIME_DIRECTORY",
func() string { return filepath.Join(xdg.RuntimeDir, "isle") },
)
)
h := new(toolkit.MkDirHelper)
res.StateDir, _ = h.Maybe(toolkit.MkDir(stateDirPath, true))
res.RuntimeDir, _ = h.Maybe(toolkit.MkDir(runtimeDirPath, true))
if err := h.Err(); err != nil {
panic(err)
}
return res
})
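A hedged sketch of consuming these directories (actual paths depend on the environment and the xdg defaults):

vars := daecommon.GetEnvVars()
fmt.Println(vars.StateDir.Path)   // e.g. ~/.local/state/isle
fmt.Println(vars.RuntimeDir.Path) // e.g. /run/user/1000/isle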
////////////////////////////////////////////////////////////////////////////////
// Jigs
func envOr(name string, fallback func() string) string {
if v := os.Getenv(name); v != "" {
return v
}
return fallback()
}

View File

@ -0,0 +1,8 @@
package daecommon
// Registration of error code ranges for different namespaces (i.e. packages)
// within daemon. Each range covers 1000 available error codes.
const (
ErrorCodeRangeDaemon = 0 // daemon package
ErrorCodeRangeNetwork = 1000 // daemon/network package
)
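Each package then claims codes from its range with iota, as the daemon and daemon/network error files later in this diff do:

const (
	errCodeNoNetwork = daecommon.ErrorCodeRangeDaemon + iota
	errCodeAlreadyJoined
)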

View File

@ -0,0 +1,51 @@
package daecommon
import (
"fmt"
"isle/garage"
"isle/nebula"
"isle/secrets"
)
const (
secretsNSNebula = "nebula"
secretsNSGarage = "garage"
)
////////////////////////////////////////////////////////////////////////////////
// Nebula-related secrets
// IDs and Get/Set functions for nebula-related secrets.
var (
NebulaCASigningPrivateKeySecretID = secrets.NewID(secretsNSNebula, "ca-signing-private-key")
GetNebulaCASigningPrivateKey, SetNebulaCASigningPrivateKey = secrets.GetSetFunctions[nebula.SigningPrivateKey](
NebulaCASigningPrivateKeySecretID,
)
)
////////////////////////////////////////////////////////////////////////////////
// Garage-related secrets
func garageS3APIBucketCredentialsSecretID(credsName string) secrets.ID {
return secrets.NewID(
secretsNSGarage, fmt.Sprintf("s3-api-bucket-credentials-%s", credsName),
)
}
// IDs and Get/Set functions for garage-related secrets.
var (
GarageRPCSecretSecretID = secrets.NewID(secretsNSGarage, "rpc-secret")
GarageS3APIGlobalBucketCredentialsSecretID = garageS3APIBucketCredentialsSecretID(
garage.GlobalBucketS3APICredentialsName,
)
GetGarageRPCSecret, SetGarageRPCSecret = secrets.GetSetFunctions[string](
GarageRPCSecretSecretID,
)
GetGarageS3APIGlobalBucketCredentials,
SetGarageS3APIGlobalBucketCredentials = secrets.GetSetFunctions[garage.S3APICredentials](
GarageS3APIGlobalBucketCredentialsSecretID,
)
)
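The generated Get/Set pairs are plain functions over a secrets.Store; a hedged round-trip sketch (ctx, store, and rpcSecret assumed to be in scope):

if err := daecommon.SetGarageRPCSecret(ctx, store, rpcSecret); err != nil {
	return fmt.Errorf("setting garage RPC secret: %w", err)
}
got, err := daecommon.GetGarageRPCSecret(ctx, store)
if err != nil {
	return fmt.Errorf("getting garage RPC secret: %w", err)
}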

File diff suppressed because it is too large

View File

@ -1,130 +0,0 @@
package daemon
import (
"errors"
"fmt"
"io/fs"
"os"
"path/filepath"
"slices"
"strings"
"sync"
"github.com/adrg/xdg"
)
// EnvVars are variables which are derived based on the environment which the
// process is running in.
type EnvVars struct {
RuntimeDirPath string
StateDirPath string
}
func (e EnvVars) init() error {
var errs []error
if err := mkDir(e.RuntimeDirPath); err != nil {
errs = append(errs, fmt.Errorf(
"creating runtime directory %q: %w",
e.RuntimeDirPath,
err,
))
}
if err := mkDir(e.StateDirPath); err != nil {
errs = append(errs, fmt.Errorf(
"creating state directory %q: %w",
e.StateDirPath,
err,
))
}
return errors.Join(errs...)
}
func getDefaultHTTPSocketDirPath() string {
path, err := firstExistingDir(
"/tmp",
// TODO it's possible the daemon process can't actually write to these
"/run",
"/var/run",
"/dev/shm",
)
if err != nil {
panic(fmt.Sprintf("Failed to find directory for HTTP socket: %v", err))
}
return path
}
// HTTPSocketPath returns the path to the daemon's HTTP socket which is used for
// RPC and other functionality.
var HTTPSocketPath = sync.OnceValue(func() string {
return envOr(
"ISLE_DAEMON_HTTP_SOCKET_PATH",
func() string {
return filepath.Join(
getDefaultHTTPSocketDirPath(), "isle-daemon.sock",
)
},
)
})
// GetEnvVars will return the EnvVars of the current processes, as determined by
// the process's environment.
var GetEnvVars = sync.OnceValue(func() (v EnvVars) {
// RUNTIME_DIRECTORY/STATE_DIRECTORY are used by the systemd service in
// conjunction with the RuntimeDirectory/StateDirectory directives.
v.RuntimeDirPath = envOr(
"RUNTIME_DIRECTORY",
func() string { return filepath.Join(xdg.RuntimeDir, "isle") },
)
v.StateDirPath = envOr(
"STATE_DIRECTORY",
func() string { return filepath.Join(xdg.StateHome, "isle") },
)
return
})
////////////////////////////////////////////////////////////////////////////////
// Jigs
func envOr(name string, fallback func() string) string {
if v := os.Getenv(name); v != "" {
return v
}
return fallback()
}
func firstExistingDir(paths ...string) (string, error) {
var errs []error
for _, path := range paths {
stat, err := os.Stat(path)
switch {
case errors.Is(err, fs.ErrExist):
continue
case err != nil:
errs = append(
errs, fmt.Errorf("checking if path %q exists: %w", path, err),
)
case !stat.IsDir():
errs = append(
errs, fmt.Errorf("path %q exists but is not a directory", path),
)
default:
return path, nil
}
}
err := fmt.Errorf(
"no directory found at any of the following paths: %s",
strings.Join(paths, ", "),
)
if len(errs) > 0 {
err = errors.Join(slices.Insert(errs, 0, err)...)
}
return "", err
}

View File

@ -1,31 +1,21 @@
package daemon
import "isle/daemon/jsonrpc2"
import (
"isle/daemon/daecommon"
"isle/daemon/jsonrpc2"
)
const (
errCodeNoNetwork = daecommon.ErrorCodeRangeDaemon + iota
errCodeAlreadyJoined
)
var (
// ErrNoNetwork is returned when the daemon has never been configured with a
// network.
ErrNoNetwork = jsonrpc2.NewError(1, "No network configured")
// ErrInitializing is returned when a network is unavailable due to still
// being initialized.
ErrInitializing = jsonrpc2.NewError(2, "Network is being initialized")
// ErrRestarting is returned when a network is unavailable due to being
// restarted.
ErrRestarting = jsonrpc2.NewError(3, "Network is being restarted")
ErrNoNetwork = jsonrpc2.NewError(errCodeNoNetwork, "No network configured")
// ErrAlreadyJoined is returned when the daemon is instructed to create or
// join a new network, but it is already joined to a network.
ErrAlreadyJoined = jsonrpc2.NewError(4, "Already joined to a network")
// ErrInvalidConfig is returned when the daemon's configuration is invalid
// for an operation being attempted.
//
// The Data field will be a string containing further details.
ErrInvalidConfig = jsonrpc2.NewError(5, "Invalid daemon config")
// ErrHostNotFound is returned when performing an operation which expected a
// host to exist in the network, but that host wasn't found.
ErrHostNotFound = jsonrpc2.NewError(6, "Host not found")
ErrAlreadyJoined = jsonrpc2.NewError(errCodeAlreadyJoined, "Already joined to a network")
)

View File

@ -1,52 +0,0 @@
package daemon
import (
"context"
"errors"
"fmt"
"isle/bootstrap"
"isle/garage"
"isle/secrets"
)
// GarageClientParams contains all the data needed to instantiate garage
// clients.
type GarageClientParams struct {
Peer garage.RemotePeer
GlobalBucketS3APICredentials garage.S3APICredentials
// RPCSecret may be empty, if the secret is not available on the host.
RPCSecret string
}
func (d *daemon) getGarageClientParams(
ctx context.Context, currBootstrap bootstrap.Bootstrap,
) (
GarageClientParams, error,
) {
creds, err := getGarageS3APIGlobalBucketCredentials(ctx, d.secretsStore)
if err != nil {
return GarageClientParams{}, fmt.Errorf("getting garage global bucket creds: %w", err)
}
rpcSecret, err := getGarageRPCSecret(ctx, d.secretsStore)
if err != nil && !errors.Is(err, secrets.ErrNotFound) {
return GarageClientParams{}, fmt.Errorf("getting garage rpc secret: %w", err)
}
return GarageClientParams{
Peer: currBootstrap.ChooseGaragePeer(),
GlobalBucketS3APICredentials: creds,
RPCSecret: rpcSecret,
}, nil
}
// GlobalBucketS3APIClient returns an S3 client pre-configured with access to
// the global bucket.
func (p GarageClientParams) GlobalBucketS3APIClient() garage.S3APIClient {
var (
addr = p.Peer.S3APIAddr()
creds = p.GlobalBucketS3APICredentials
)
return garage.NewS3APIClient(addr, creds)
}

View File

@ -1,76 +0,0 @@
package daemon
import (
"context"
"crypto/rand"
"encoding/hex"
"errors"
"fmt"
"io/fs"
"os"
"path/filepath"
"time"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
)
// until keeps trying fn until it returns nil, returning true. If the context is
// canceled then until returns false.
func until(
ctx context.Context,
logger *mlog.Logger,
descr string,
fn func(context.Context) error,
) bool {
for {
logger.Info(ctx, descr)
err := fn(ctx)
if err == nil {
return true
} else if ctxErr := ctx.Err(); ctxErr != nil {
return false
}
logger.Warn(ctx, descr+" failed, retrying in one second", err)
time.Sleep(1 * time.Second)
}
}
func randStr(l int) string {
b := make([]byte, l)
if _, err := rand.Read(b); err != nil {
panic(err)
}
return hex.EncodeToString(b)
}
// mkDir is like os.Mkdir but it returns better error messages. If the directory
// already exists then nil is returned.
func mkDir(path string) error {
{
parentPath := filepath.Dir(path)
parentInfo, err := os.Stat(parentPath)
if err != nil {
return fmt.Errorf("checking fs node of parent %q: %w", parentPath, err)
} else if !parentInfo.IsDir() {
return fmt.Errorf("%q is not a directory", parentPath)
}
}
info, err := os.Stat(path)
if errors.Is(err, fs.ErrNotExist) {
// fine
} else if err != nil {
return fmt.Errorf("checking fs node: %w", err)
} else if !info.IsDir() {
return fmt.Errorf("exists but is not a directory")
} else {
return nil
}
if err := os.Mkdir(path, 0700); err != nil {
return fmt.Errorf("creating directory: %w", err)
}
return nil
}

View File

@ -15,5 +15,5 @@ type Client interface {
//
// If an error result is returned from the server that will be returned as
// an Error struct.
Call(ctx context.Context, rcv any, method string, params any) error
Call(ctx context.Context, rcv any, method string, params ...any) error
}
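With the variadic signature each argument becomes one positional JSON-RPC parameter; for example, against the Divide2 test method added later in this diff:

var res int
err := client.Call(ctx, &res, "Divide2", 6, 3) // res == 2 on success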

View File

@ -0,0 +1,42 @@
import (
"isle/daemon/jsonrpc2"
)
{{ $t := printf "%sClient" (down .Interface.Name) }}
type {{$t}} struct {
client jsonrpc2.Client
}
// {{.Interface.Name}}FromClient wraps a Client so that it implements the
// {{.Interface.Name}} interface.
func {{.Interface.Name}}FromClient(client jsonrpc2.Client) {{.Interface.Name}} {
return &{{$t}}{client}
}
{{range $method := .Interface.Methods}}
func (c *{{$t}}) {{$method.Declaration}} {
{{- $ctx := (index $method.Params 0).Name}}
{{- $rcv := ""}}
{{- $err := ""}}
{{- if (eq (len $method.Results) 1)}}
{{- $rcv = "nil" }}
{{- $err = (index $method.Results 0).Name}}
{{- else}}
{{- $rcv = printf "&%s" (index $method.Results 0).Name}}
{{- $err = (index $method.Results 1).Name}}
{{- end}}
{{- $err}} = c.client.Call(
{{$ctx}},
{{$rcv}},
"{{$method.Name}}",
{{- range $param := (slice $method.Params 1)}}
{{$param.Name}},
{{- end}}
)
return
}
{{end}}

View File

@ -49,7 +49,7 @@ func NewUnixHTTPClient(unixSocketPath, reqPath string) Client {
}
func (c *httpClient) Call(
ctx context.Context, rcv any, method string, params any,
ctx context.Context, rcv any, method string, params ...any,
) error {
var (
body = new(bytes.Buffer)

View File

@ -19,7 +19,7 @@ func NewReadWriterClient(rw io.ReadWriter) Client {
}
func (c rwClient) Call(
ctx context.Context, rcv any, method string, params any,
ctx context.Context, rcv any, method string, params ...any,
) error {
id, err := encodeRequest(c.enc, method, params)
if err != nil {

View File

@ -18,38 +18,56 @@ type methodDispatchFunc func(context.Context, Request) (any, error)
func newMethodDispatchFunc(
method reflect.Value,
) methodDispatchFunc {
paramT := method.Type().In(1)
return func(ctx context.Context, req Request) (any, error) {
var (
ctxV = reflect.ValueOf(ctx)
paramPtrV = reflect.New(paramT)
)
paramTs := make([]reflect.Type, method.Type().NumIn()-1)
for i := range paramTs {
paramTs[i] = method.Type().In(i + 1)
}
err := json.Unmarshal(req.Params, paramPtrV.Interface())
if err != nil {
// The JSON has already been validated, so this is not an
// errCodeParse situation. We assume it's an invalid param then,
// unless the error says otherwise via an UnmarshalJSON method
// returning an Error of its own.
if !errors.As(err, new(Error)) {
err = NewInvalidParamsError(
"JSON unmarshaling params into %T: %v", paramT, err,
)
return func(ctx context.Context, req Request) (any, error) {
callVals := make([]reflect.Value, 0, len(paramTs)+1)
callVals = append(callVals, reflect.ValueOf(ctx))
for i, paramT := range paramTs {
paramPtrV := reflect.New(paramT)
err := json.Unmarshal(req.Params[i], paramPtrV.Interface())
if err != nil {
// The JSON has already been validated, so this is not an
// errCodeParse situation. We assume it's an invalid param then,
// unless the error says otherwise via an UnmarshalJSON method
// returning an Error of its own.
if !errors.As(err, new(Error)) {
err = NewInvalidParamsError(
"JSON unmarshaling param %d into %T: %v", i, paramT, err,
)
}
return nil, err
}
return nil, err
callVals = append(callVals, paramPtrV.Elem())
}
var (
callResV = method.Call([]reflect.Value{ctxV, paramPtrV.Elem()})
resV = callResV[0]
errV = callResV[1]
callResV = method.Call(callVals)
resV, errV reflect.Value
)
if errV.IsNil() {
if len(callResV) == 1 {
errV = callResV[0]
} else {
resV = callResV[0]
errV = callResV[1]
}
if !errV.IsNil() {
return nil, errV.Interface().(error)
}
if resV.IsValid() {
return resV.Interface(), nil
}
return nil, errV.Interface().(error)
return nil, nil
}
}
@ -61,7 +79,8 @@ type dispatcher struct {
// value to dispatch RPC calls. The passed in value must be a pointer. All
// exported methods which look like:
//
// MethodName(context.Context, ParamType) (ResponseType, error)
// MethodName(context.Context, ...ParamType) (ResponseType, error)
// MethodName(context.Context, ...ParamType) error
//
// will be available via RPC calls.
func NewDispatchHandler(i any) Handler {
@ -86,10 +105,11 @@ func NewDispatchHandler(i any) Handler {
)
if !method.IsExported() ||
methodT.NumIn() != 2 ||
methodT.NumIn() < 1 ||
methodT.In(0) != ctxT ||
methodT.NumOut() != 2 ||
methodT.Out(1) != errT {
(methodT.NumOut() == 1 && methodT.Out(0) != errT) ||
(methodT.NumOut() == 2 && methodT.Out(1) != errT) ||
methodT.NumOut() > 2 {
continue
}
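A hedged sketch of a receiver that the relaxed rules now admit (hypothetical type, not part of this diff): exported methods may take zero or more params and may return just an error:

type echoRPC struct{}

func (*echoRPC) Echo(ctx context.Context, s string) (string, error) { return s, nil }
func (*echoRPC) Ping(ctx context.Context) error                     { return nil }

func newEchoHandler() jsonrpc2.Handler {
	// NewDispatchHandler requires a pointer value.
	return jsonrpc2.NewDispatchHandler(&echoRPC{})
}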

View File

@ -3,6 +3,7 @@ package jsonrpc2
import (
"context"
"errors"
"fmt"
"dev.mediocregopher.com/mediocre-go-lib.git/mctx"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
@ -59,9 +60,14 @@ func NewMLogMiddleware(logger *mlog.Logger) Middleware {
)
if logger.MaxLevel() >= mlog.LevelDebug.Int() {
ctx := mctx.Annotate(
ctx, "rpcRequestParams", string(req.Params),
)
ctx := ctx
for i := range req.Params {
ctx = mctx.Annotate(
ctx,
fmt.Sprintf("rpcRequestParam%d", i),
string(req.Params[i]),
)
}
logger.Debug(ctx, "Handling RPC request")
}

View File

@ -13,10 +13,10 @@ const version = "2.0"
// Request encodes an RPC request according to the spec.
type Request struct {
Version string `json:"jsonrpc"` // must be "2.0"
Method string `json:"method"`
Params json.RawMessage `json:"params,omitempty"`
ID string `json:"id"`
Version string `json:"jsonrpc"` // must be "2.0"
Method string `json:"method"`
Params []json.RawMessage `json:"params,omitempty"`
ID string `json:"id"`
}
type response[Result any] struct {
@ -37,13 +37,19 @@ func newID() string {
// encodeRequest writes a request to an io.Writer, returning the ID of the
// request.
func encodeRequest(
enc *json.Encoder, method string, params any,
enc *json.Encoder, method string, params []any,
) (
string, error,
) {
paramsB, err := json.Marshal(params)
if err != nil {
return "", fmt.Errorf("encoding params as JSON: %w", err)
var (
paramsBs = make([]json.RawMessage, len(params))
err error
)
for i := range params {
paramsBs[i], err = json.Marshal(params[i])
if err != nil {
return "", fmt.Errorf("encoding param %d as JSON: %w", i, err)
}
}
var (
@ -51,7 +57,7 @@ func encodeRequest(
reqEnvelope = Request{
Version: version,
Method: method,
Params: paramsB,
Params: paramsBs,
ID: id,
}
)
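On the wire, params is now a positional JSON array. A request for the Divide2 test method would be encoded roughly as follows (the id value is a random hex string produced by newID):

{"jsonrpc": "2.0", "method": "Divide2", "params": [6, 3], "id": "<random hex>"}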

View File

@ -26,14 +26,26 @@ var ErrDivideByZero = Error{
type dividerImpl struct{}
func (dividerImpl) Divide(ctx context.Context, p DivideParams) (int, error) {
if p.Bottom == 0 {
func (dividerImpl) Divide2(ctx context.Context, top, bottom int) (int, error) {
if bottom == 0 {
return 0, ErrDivideByZero
}
if p.Top%p.Bottom != 0 {
if top%bottom != 0 {
return 0, errors.New("numbers don't divide evenly, cannot compute!")
}
return p.Top / p.Bottom, nil
return top / bottom, nil
}
func (i dividerImpl) Divide(ctx context.Context, p DivideParams) (int, error) {
return i.Divide2(ctx, p.Top, p.Bottom)
}
func (i dividerImpl) One(ctx context.Context) (int, error) {
return 1, nil
}
func (i dividerImpl) Noop(ctx context.Context) error {
return nil
}
func (dividerImpl) Hidden(ctx context.Context, p struct{}) (int, error) {
@ -41,7 +53,10 @@ func (dividerImpl) Hidden(ctx context.Context, p struct{}) (int, error) {
}
type divider interface {
Divide2(ctx context.Context, top, bottom int) (int, error)
Divide(ctx context.Context, p DivideParams) (int, error)
One(ctx context.Context) (int, error)
Noop(ctx context.Context) error
}
var testHandler = func() Handler {
@ -82,6 +97,33 @@ func testClient(t *testing.T, client Client) {
}
})
t.Run("success/multiple_params", func(t *testing.T) {
var res int
err := client.Call(ctx, &res, "Divide2", 6, 3)
if err != nil {
t.Fatal(err)
} else if res != 2 {
t.Fatalf("expected 2, got %d", res)
}
})
t.Run("success/no_params", func(t *testing.T) {
var res int
err := client.Call(ctx, &res, "One")
if err != nil {
t.Fatal(err)
} else if res != 1 {
t.Fatalf("expected 1, got %d", res)
}
})
t.Run("success/no_results", func(t *testing.T) {
err := client.Call(ctx, nil, "Noop")
if err != nil {
t.Fatal(err)
}
})
t.Run("err/application", func(t *testing.T) {
err := client.Call(ctx, nil, "Divide", DivideParams{})
if !errors.Is(err, ErrDivideByZero) {

go/daemon/network.go Normal file (73 lines)
View File

@ -0,0 +1,73 @@
package daemon
import (
"fmt"
"isle/bootstrap"
"isle/daemon/network"
"isle/toolkit"
)
func networkStateDir(
networksStateDir toolkit.Dir, networkID string, mayExist bool,
) (
toolkit.Dir, error,
) {
return networksStateDir.MkChildDir(networkID, mayExist)
}
func networkRuntimeDir(
networksRuntimeDir toolkit.Dir, networkID string, mayExist bool,
) (
toolkit.Dir, error,
) {
return networksRuntimeDir.MkChildDir(networkID, mayExist)
}
func networkDirs(
networksStateDir, networksRuntimeDir toolkit.Dir,
networkID string,
mayExist bool,
) (
stateDir, runtimeDir toolkit.Dir, err error,
) {
h := new(toolkit.MkDirHelper)
stateDir, _ = h.Maybe(
networkStateDir(networksStateDir, networkID, mayExist),
)
runtimeDir, _ = h.Maybe(
networkRuntimeDir(networksRuntimeDir, networkID, mayExist),
)
err = h.Err()
return
}
// LoadableNetworks returns the CreationParams for each Network which is able to
// be loaded.
func LoadableNetworks(
networksStateDir toolkit.Dir,
) (
[]bootstrap.CreationParams, error,
) {
networkStateDirs, err := networksStateDir.ChildDirs()
if err != nil {
return nil, fmt.Errorf(
"listing children of %q: %w", networksStateDir.Path, err,
)
}
creationParams := make([]bootstrap.CreationParams, 0, len(networkStateDirs))
for _, networkStateDir := range networkStateDirs {
thisCreationParams, err := network.LoadCreationParams(networkStateDir)
if err != nil {
return nil, fmt.Errorf(
"loading creation params from %q: %w",
networkStateDir.Path,
err,
)
}
creationParams = append(creationParams, thisCreationParams)
}
return creationParams, nil
}
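A hedged sketch of wiring this up from a caller (the "networks" child-directory name is an assumption for illustration):

networksStateDir, err := daecommon.GetEnvVars().StateDir.MkChildDir("networks", true)
if err != nil {
	return fmt.Errorf("creating networks state dir: %w", err)
}
creationParams, err := daemon.LoadableNetworks(networksStateDir)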

View File

@ -1,24 +1,15 @@
package daemon
package network
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"isle/bootstrap"
"isle/daemon/daecommon"
"isle/garage/garagesrv"
"isle/jsonutil"
"isle/secrets"
"os"
"path/filepath"
)
// JoiningBootstrap wraps a normal Bootstrap to include extra data which a host
// might need while joining a network.
type JoiningBootstrap struct {
Bootstrap bootstrap.Bootstrap
Secrets map[secrets.ID]json.RawMessage
}
func writeBootstrapToStateDir(
stateDirPath string, hostBootstrap bootstrap.Bootstrap,
) error {
@ -39,7 +30,7 @@ func writeBootstrapToStateDir(
}
func coalesceDaemonConfigAndBootstrap(
daemonConfig Config, hostBootstrap bootstrap.Bootstrap,
daemonConfig daecommon.Config, hostBootstrap bootstrap.Bootstrap,
) (
bootstrap.Bootstrap, error,
) {

View File

@ -0,0 +1,28 @@
package network
import (
"isle/daemon/daecommon"
"isle/daemon/jsonrpc2"
)
const (
errCodeInitializing = daecommon.ErrorCodeRangeNetwork + iota
errCodeInvalidConfig
errCodeHostNotFound
)
var (
// ErrInitializing is returned when a network is unavailable due to still
// being initialized.
ErrInitializing = jsonrpc2.NewError(errCodeInitializing, "Network is being initialized")
// ErrInvalidConfig is returned when the daemon's configuration is invalid
// for an operation being attempted.
//
// The Data field will be a string containing further details.
ErrInvalidConfig = jsonrpc2.NewError(errCodeInvalidConfig, "Invalid daemon config")
// ErrHostNotFound is returned when performing an operation which expected a
// host to exist in the network, but that host wasn't found.
ErrHostNotFound = jsonrpc2.NewError(errCodeHostNotFound, "Host not found")
)

View File

@ -1,14 +1,19 @@
package daemon
package network
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"isle/bootstrap"
"isle/daemon/daecommon"
"isle/garage"
"isle/nebula"
"isle/secrets"
"net"
"path/filepath"
"strconv"
"dev.mediocregopher.com/mediocre-go-lib.git/mctx"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
@ -20,10 +25,97 @@ const (
garageGlobalBucketBootstrapHostsDirPath = "bootstrap/hosts"
)
func (n *network) getGarageClientParams(
ctx context.Context, currBootstrap bootstrap.Bootstrap,
) (
GarageClientParams, error,
) {
creds, err := daecommon.GetGarageS3APIGlobalBucketCredentials(
ctx, n.secretsStore,
)
if err != nil {
return GarageClientParams{}, fmt.Errorf("getting garage global bucket creds: %w", err)
}
rpcSecret, err := daecommon.GetGarageRPCSecret(ctx, n.secretsStore)
if err != nil && !errors.Is(err, secrets.ErrNotFound) {
return GarageClientParams{}, fmt.Errorf("getting garage rpc secret: %w", err)
}
return GarageClientParams{
Peer: currBootstrap.ChooseGaragePeer(),
GlobalBucketS3APICredentials: creds,
RPCSecret: rpcSecret,
}, nil
}
func garageAdminClientLogger(logger *mlog.Logger) *mlog.Logger {
return logger.WithNamespace("garageAdminClient")
}
// newGarageAdminClient will return an AdminClient for a local garage instance,
// or it will _panic_ if there is no local instance configured.
func newGarageAdminClient(
logger *mlog.Logger,
daemonConfig daecommon.Config,
adminToken string,
hostBootstrap bootstrap.Bootstrap,
) *garage.AdminClient {
thisHost := hostBootstrap.ThisHost()
return garage.NewAdminClient(
garageAdminClientLogger(logger),
net.JoinHostPort(
thisHost.IP().String(),
strconv.Itoa(daemonConfig.Storage.Allocations[0].AdminPort),
),
adminToken,
)
}
func garageApplyLayout(
ctx context.Context,
logger *mlog.Logger,
daemonConfig daecommon.Config,
adminToken string,
hostBootstrap bootstrap.Bootstrap,
) error {
var (
adminClient = newGarageAdminClient(
logger, daemonConfig, adminToken, hostBootstrap,
)
thisHost = hostBootstrap.ThisHost()
hostName = thisHost.Name
allocs = daemonConfig.Storage.Allocations
peers = make([]garage.PeerLayout, len(allocs))
)
for i, alloc := range allocs {
id := daecommon.BootstrapGarageHostForAlloc(thisHost, alloc).ID
zone := string(hostName)
if alloc.Zone != "" {
zone = alloc.Zone
}
peers[i] = garage.PeerLayout{
ID: id,
Capacity: alloc.Capacity * 1_000_000_000,
Zone: zone,
Tags: []string{},
}
}
return adminClient.ApplyLayout(ctx, peers)
}
func garageInitializeGlobalBucket(
ctx context.Context,
logger *mlog.Logger,
daemonConfig Config,
daemonConfig daecommon.Config,
adminToken string,
hostBootstrap bootstrap.Bootstrap,
) (
@ -60,13 +152,73 @@ func garageInitializeGlobalBucket(
return creds, nil
}
func (n *network) getGarageBootstrapHosts(
ctx context.Context, currBootstrap bootstrap.Bootstrap,
) (
map[nebula.HostName]bootstrap.Host, error,
) {
garageClientParams, err := n.getGarageClientParams(ctx, currBootstrap)
if err != nil {
return nil, fmt.Errorf("getting garage client params: %w", err)
}
var (
client = garageClientParams.GlobalBucketS3APIClient()
hosts = map[nebula.HostName]bootstrap.Host{}
objInfoCh = client.ListObjects(
ctx, garage.GlobalBucket,
minio.ListObjectsOptions{
Prefix: garageGlobalBucketBootstrapHostsDirPath,
Recursive: true,
},
)
)
for objInfo := range objInfoCh {
ctx := mctx.Annotate(ctx, "objectKey", objInfo.Key)
if objInfo.Err != nil {
return nil, fmt.Errorf("listing objects: %w", objInfo.Err)
}
obj, err := client.GetObject(
ctx, garage.GlobalBucket, objInfo.Key, minio.GetObjectOptions{},
)
if err != nil {
return nil, fmt.Errorf("retrieving object %q: %w", objInfo.Key, err)
}
var authedHost bootstrap.AuthenticatedHost
err = json.NewDecoder(obj).Decode(&authedHost)
obj.Close()
if err != nil {
n.logger.Warn(ctx, "Object contains invalid json", err)
continue
}
host, err := authedHost.Unwrap(currBootstrap.CAPublicCredentials)
if err != nil {
n.logger.Warn(ctx, "Host could not be authenticated", err)
}
hosts[host.Name] = host
}
return hosts, nil
}
// putGarageBoostrapHost places the <hostname>.json.signed file for this host
// into garage so that other hosts are able to see relevant configuration for
// it.
func (d *daemon) putGarageBoostrapHost(
ctx context.Context, logger *mlog.Logger, currBootstrap bootstrap.Bootstrap,
func (n *network) putGarageBoostrapHost(
ctx context.Context, currBootstrap bootstrap.Bootstrap,
) error {
garageClientParams, err := d.getGarageClientParams(ctx, currBootstrap)
garageClientParams, err := n.getGarageClientParams(ctx, currBootstrap)
if err != nil {
return fmt.Errorf("getting garage client params: %w", err)
}
@ -112,66 +264,6 @@ func (d *daemon) putGarageBoostrapHost(
return nil
}
func (d *daemon) getGarageBootstrapHosts(
ctx context.Context, logger *mlog.Logger, currBootstrap bootstrap.Bootstrap,
) (
map[nebula.HostName]bootstrap.Host, error,
) {
garageClientParams, err := d.getGarageClientParams(ctx, currBootstrap)
if err != nil {
return nil, fmt.Errorf("getting garage client params: %w", err)
}
var (
client = garageClientParams.GlobalBucketS3APIClient()
hosts = map[nebula.HostName]bootstrap.Host{}
objInfoCh = client.ListObjects(
ctx, garage.GlobalBucket,
minio.ListObjectsOptions{
Prefix: garageGlobalBucketBootstrapHostsDirPath,
Recursive: true,
},
)
)
for objInfo := range objInfoCh {
ctx := mctx.Annotate(ctx, "objectKey", objInfo.Key)
if objInfo.Err != nil {
return nil, fmt.Errorf("listing objects: %w", objInfo.Err)
}
obj, err := client.GetObject(
ctx, garage.GlobalBucket, objInfo.Key, minio.GetObjectOptions{},
)
if err != nil {
return nil, fmt.Errorf("retrieving object %q: %w", objInfo.Key, err)
}
var authedHost bootstrap.AuthenticatedHost
err = json.NewDecoder(obj).Decode(&authedHost)
obj.Close()
if err != nil {
logger.Warn(ctx, "Object contains invalid json", err)
continue
}
host, err := authedHost.Unwrap(currBootstrap.CAPublicCredentials)
if err != nil {
logger.Warn(ctx, "Host could not be authenticated", err)
}
hosts[host.Name] = host
}
return hosts, nil
}
func removeGarageBootstrapHost(
ctx context.Context, client garage.S3APIClient, hostName nebula.HostName,
) error {

go/daemon/network/jigs.go Normal file (14 lines)
View File

@ -0,0 +1,14 @@
package network
import (
"crypto/rand"
"encoding/hex"
)
func randStr(l int) string {
b := make([]byte, l)
if _, err := rand.Read(b); err != nil {
panic(err)
}
return hex.EncodeToString(b)
}

View File

@ -0,0 +1,900 @@
// Package network implements the Network type, which manages the daemon's
// membership in a single network.
package network
import (
"bytes"
"cmp"
"context"
"crypto/rand"
"encoding/json"
"errors"
"fmt"
"isle/bootstrap"
"isle/daemon/children"
"isle/daemon/daecommon"
"isle/garage"
"isle/jsonutil"
"isle/nebula"
"isle/secrets"
"isle/toolkit"
"log"
"net/netip"
"slices"
"sync"
"time"
"dev.mediocregopher.com/mediocre-go-lib.git/mlog"
"golang.org/x/exp/maps"
)
// GarageClientParams contains all the data needed to instantiate garage
// clients.
type GarageClientParams struct {
Peer garage.RemotePeer
GlobalBucketS3APICredentials garage.S3APICredentials
// RPCSecret may be empty, if the secret is not available on the host.
RPCSecret string
}
// GlobalBucketS3APIClient returns an S3 client pre-configured with access to
// the global bucket.
func (p GarageClientParams) GlobalBucketS3APIClient() garage.S3APIClient {
var (
addr = p.Peer.S3APIAddr()
creds = p.GlobalBucketS3APICredentials
)
return garage.NewS3APIClient(addr, creds)
}
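A hedged sketch of consuming the params, mirroring how getGarageBootstrapHosts uses them further down:

client := params.GlobalBucketS3APIClient()
objInfoCh := client.ListObjects(ctx, garage.GlobalBucket, minio.ListObjectsOptions{
	Prefix:    "bootstrap/hosts",
	Recursive: true,
})
for objInfo := range objInfoCh {
	// objInfo.Err must be checked before objInfo.Key is used.
}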
// CreateHostOpts are optional parameters to the CreateHost method.
type CreateHostOpts struct {
// IP address of the new host. An IP address will be randomly chosen if one
// is not given here.
IP netip.Addr
// CanCreateHosts indicates that the bootstrap produced by CreateHost should
// give the new host the ability to create new hosts as well.
CanCreateHosts bool
// TODO add nebula cert tags
}
// JoiningBootstrap wraps a normal Bootstrap to include extra data which a host
// might need while joining a network.
type JoiningBootstrap struct {
Bootstrap bootstrap.Bootstrap
Secrets map[secrets.ID]json.RawMessage
}
// RPC defines the methods related to a single network which are available over
// the daemon's RPC interface.
type RPC interface {
// GetHosts returns all hosts known to the network, sorted by their name.
GetHosts(context.Context) ([]bootstrap.Host, error)
// GetGarageClientParams returns a GarageClientParams for the current
// network state.
GetGarageClientParams(context.Context) (GarageClientParams, error)
// GetNebulaCAPublicCredentials returns the CAPublicCredentials for the
// network.
GetNebulaCAPublicCredentials(
context.Context,
) (
nebula.CAPublicCredentials, error,
)
// RemoveHost removes the host of the given name from the network.
RemoveHost(ctx context.Context, hostName nebula.HostName) error
// CreateHost creates a bootstrap for a new host with the given name and IP
// address.
CreateHost(
context.Context, nebula.HostName, CreateHostOpts,
) (
JoiningBootstrap, error,
)
// CreateNebulaCertificate creates and signs a new nebula certificate for an
// existing host, given the public key for that host. This is currently
// mostly useful for creating certs for mobile devices.
//
// TODO replace this with CreateHostBootstrap, and the
// CreateNebulaCertificate RPC method can just pull cert out of that.
//
// Errors:
// - ErrHostNotFound
CreateNebulaCertificate(
context.Context, nebula.HostName, nebula.EncryptingPublicKey,
) (
nebula.Certificate, error,
)
}
// Network manages membership in a single micropelago network. Each Network
// is comprised of a unique IP subnet, hosts connected together on that subnet
// via a VPN, an S3 storage layer only accessible to those hosts, plus other
// services built on this foundation.
//
// A single daemon (isle server) can manage multiple networks. Each network is
// expected to be independent of the others, ie they should not share any
// resources.
type Network interface {
RPC
// GetNetworkCreationParams returns the CreationParams that the Network was
// originally created with.
GetNetworkCreationParams(context.Context) (bootstrap.CreationParams, error)
// Shutdown blocks until all resources held or created by the Network,
// including child processes it has started, have been cleaned up.
//
// If this returns an error then it's possible that child processes are
// still running and are no longer managed.
Shutdown() error
}
////////////////////////////////////////////////////////////////////////////////
// Network implementation
// Opts are optional parameters which can be passed in when initializing a new
// Network instance. A nil Opts is equivalent to a zero value.
type Opts struct {
ChildrenOpts *children.Opts
}
func (o *Opts) withDefaults() *Opts {
if o == nil {
o = new(Opts)
}
return o
}
type network struct {
logger *mlog.Logger
daemonConfig daecommon.Config
envBinDirPath string
stateDir toolkit.Dir
runtimeDir toolkit.Dir
opts *Opts
secretsStore secrets.Store
garageAdminToken string
l sync.RWMutex
children *children.Children
currBootstrap bootstrap.Bootstrap
shutdownCh chan struct{}
wg sync.WaitGroup
}
// instatiateNetwork returns an instantiated *network instance which has not yet
// been initialized.
func instatiateNetwork(
logger *mlog.Logger,
networkID string,
daemonConfig daecommon.Config,
envBinDirPath string,
stateDir toolkit.Dir,
runtimeDir toolkit.Dir,
opts *Opts,
) *network {
log.Printf("DEBUG: network stateDir:%+v runtimeDir:%+v", stateDir, runtimeDir)
return &network{
logger: logger,
daemonConfig: daemonConfig,
envBinDirPath: envBinDirPath,
stateDir: stateDir,
runtimeDir: runtimeDir,
opts: opts.withDefaults(),
garageAdminToken: randStr(32),
shutdownCh: make(chan struct{}),
}
}
// LoadCreationParams returns the CreationParams of a Network which was
// Created/Joined with the given state directory.
func LoadCreationParams(
stateDir toolkit.Dir,
) (
bootstrap.CreationParams, error,
) {
var (
// TODO store/load the creation params separately from the rest of
// the bootstrap, since the bootstrap contains potentially the
// entire host list of a network, which could be pretty bulky.
bootstrapFilePath = bootstrap.StateDirPath(stateDir.Path)
bs bootstrap.Bootstrap
)
if err := jsonutil.LoadFile(&bs, bootstrapFilePath); err != nil {
return bootstrap.CreationParams{}, fmt.Errorf(
"loading bootstrap from %q: %w", bootstrapFilePath, err,
)
}
return bs.NetworkCreationParams, nil
}
// Load initializes and returns a Network instance for a network which was
// previously joined or created, and which has the given ID.
func Load(
ctx context.Context,
logger *mlog.Logger,
networkID string,
daemonConfig daecommon.Config,
envBinDirPath string,
stateDir toolkit.Dir,
runtimeDir toolkit.Dir,
opts *Opts,
) (
Network, error,
) {
n := instatiateNetwork(
logger,
networkID,
daemonConfig,
envBinDirPath,
stateDir,
runtimeDir,
opts,
)
if err := n.initializeDirs(true); err != nil {
return nil, fmt.Errorf("initializing directories: %w", err)
}
var (
currBootstrap bootstrap.Bootstrap
bootstrapFilePath = bootstrap.StateDirPath(n.stateDir.Path)
)
if err := jsonutil.LoadFile(&currBootstrap, bootstrapFilePath); err != nil {
return nil, fmt.Errorf(
"loading bootstrap from %q: %w", bootstrapFilePath, err,
)
} else if err := n.initialize(ctx, currBootstrap); err != nil {
return nil, fmt.Errorf("initializing with bootstrap: %w", err)
}
return n, nil
}
// Join initializes and returns a Network instance for an existing network which
// was not previously joined to on this host. Once Join has been called for a
// particular network it will error on subsequent calls for that same network;
// Load should be used instead.
func Join(
ctx context.Context,
logger *mlog.Logger,
daemonConfig daecommon.Config,
joiningBootstrap JoiningBootstrap,
envBinDirPath string,
stateDir toolkit.Dir,
runtimeDir toolkit.Dir,
opts *Opts,
) (
Network, error,
) {
n := instatiateNetwork(
logger,
joiningBootstrap.Bootstrap.NetworkCreationParams.ID,
daemonConfig,
envBinDirPath,
stateDir,
runtimeDir,
opts,
)
if err := n.initializeDirs(false); err != nil {
return nil, fmt.Errorf("initializing directories: %w", err)
}
if err := secrets.Import(
ctx, n.secretsStore, joiningBootstrap.Secrets,
); err != nil {
return nil, fmt.Errorf("importing secrets: %w", err)
}
if err := n.initialize(ctx, joiningBootstrap.Bootstrap); err != nil {
return nil, fmt.Errorf("initializing with bootstrap: %w", err)
}
return n, nil
}
// Create initializes and returns a Network for a brand new network which uses
// the given creation parameters.
//
// - name: Human-readable name of the network.
// - domain: Primary domain name that network services are served under.
// - ipNet: An IP subnet, in CIDR form, which will be the overall range of
// possible IPs in the network. The first IP in this network range will
// become this first host's IP.
// - hostName: The name of this first host in the network.
//
// Errors:
// - ErrInvalidConfig - if daemonConfig doesn't have 3 storage allocations
// configured.
func Create(
ctx context.Context,
logger *mlog.Logger,
daemonConfig daecommon.Config,
envBinDirPath string,
stateDir toolkit.Dir,
runtimeDir toolkit.Dir,
creationParams bootstrap.CreationParams,
ipNet nebula.IPNet, // TODO should this go in CreationParams?
hostName nebula.HostName,
opts *Opts,
) (
Network, error,
) {
if len(daemonConfig.Storage.Allocations) < 3 {
return nil, ErrInvalidConfig.WithData(
"At least three storage allocations are required.",
)
}
nebulaCACreds, err := nebula.NewCACredentials(creationParams.Domain, ipNet)
if err != nil {
return nil, fmt.Errorf("creating nebula CA cert: %w", err)
}
garageRPCSecret := randStr(32)
n := instatiateNetwork(
logger,
creationParams.ID,
daemonConfig,
envBinDirPath,
stateDir,
runtimeDir,
opts,
)
if err := n.initializeDirs(false); err != nil {
return nil, fmt.Errorf("initializing directories: %w", err)
}
err = daecommon.SetGarageRPCSecret(ctx, n.secretsStore, garageRPCSecret)
if err != nil {
return nil, fmt.Errorf("setting garage RPC secret: %w", err)
}
err = daecommon.SetNebulaCASigningPrivateKey(
ctx, n.secretsStore, nebulaCACreds.SigningPrivateKey,
)
if err != nil {
return nil, fmt.Errorf("setting nebula CA signing key secret: %w", err)
}
hostBootstrap, err := bootstrap.New(
nebulaCACreds,
creationParams,
map[nebula.HostName]bootstrap.Host{},
hostName,
ipNet.FirstAddr(),
)
if err != nil {
return nil, fmt.Errorf("initializing bootstrap data: %w", err)
}
if err := n.initialize(ctx, hostBootstrap); err != nil {
return nil, fmt.Errorf("initializing with bootstrap: %w", err)
}
return n, nil
}
func (n *network) initializeDirs(mayExist bool) error {
secretsDir, err := n.stateDir.MkChildDir("secrets", mayExist)
if err != nil {
return fmt.Errorf("creating secrets dir: %w", err)
}
n.secretsStore = secrets.NewFSStore(secretsDir.Path)
return nil
}
func (n *network) initialize(
ctx context.Context, currBootstrap bootstrap.Bootstrap,
) error {
// we update this Host's data using whatever configuration has been provided
// by the daemon config. This way the network has the most up-to-date
// possible bootstrap. This updated bootstrap will later get updated in
// garage as a background task, so other hosts will see it as well.
currBootstrap, err := coalesceDaemonConfigAndBootstrap(
n.daemonConfig, currBootstrap,
)
if err != nil {
return fmt.Errorf("combining configuration into bootstrap: %w", err)
}
err = writeBootstrapToStateDir(n.stateDir.Path, currBootstrap)
if err != nil {
return fmt.Errorf("writing bootstrap to state dir: %w", err)
}
n.currBootstrap = currBootstrap
n.logger.Info(ctx, "Creating child processes")
n.children, err = children.New(
ctx,
n.logger.WithNamespace("children"),
n.envBinDirPath,
n.secretsStore,
n.daemonConfig,
n.runtimeDir,
n.garageAdminToken,
currBootstrap,
n.opts.ChildrenOpts,
)
if err != nil {
return fmt.Errorf("creating child processes: %w", err)
}
n.logger.Info(ctx, "Child processes created")
if err := n.postInit(ctx); err != nil {
n.logger.Error(ctx, "Post-initialization failed, stopping child processes", err)
n.children.Shutdown()
return fmt.Errorf("performing post-initialization: %w", err)
}
// TODO annotate this context with creation params
ctx, cancel := context.WithCancel(context.Background())
n.wg.Add(1)
go func() {
defer n.wg.Done()
<-n.shutdownCh
cancel()
}()
n.wg.Add(1)
go func() {
defer n.wg.Done()
n.reloadLoop(ctx)
n.logger.Debug(ctx, "Daemon restart loop stopped")
}()
return nil
}
func (n *network) postInit(ctx context.Context) error {
if len(n.daemonConfig.Storage.Allocations) > 0 {
n.logger.Info(ctx, "Applying garage layout")
if err := garageApplyLayout(
ctx, n.logger, n.daemonConfig, n.garageAdminToken, n.currBootstrap,
); err != nil {
return fmt.Errorf("applying garage layout: %w", err)
}
}
// This is only necessary during network creation, otherwise the bootstrap
// should already have these credentials built in.
//
// TODO this is pretty hacky, but there doesn't seem to be a better way to
// manage it at the moment.
_, err := daecommon.GetGarageS3APIGlobalBucketCredentials(
ctx, n.secretsStore,
)
if errors.Is(err, secrets.ErrNotFound) {
n.logger.Info(ctx, "Initializing garage shared global bucket")
garageGlobalBucketCreds, err := garageInitializeGlobalBucket(
ctx,
n.logger,
n.daemonConfig,
n.garageAdminToken,
n.currBootstrap,
)
if err != nil {
return fmt.Errorf("initializing global bucket: %w", err)
}
err = daecommon.SetGarageS3APIGlobalBucketCredentials(
ctx, n.secretsStore, garageGlobalBucketCreds,
)
if err != nil {
return fmt.Errorf("storing global bucket creds: %w", err)
}
}
n.logger.Info(ctx, "Updating host info in garage")
err = n.putGarageBoostrapHost(ctx, n.currBootstrap)
if err != nil {
return fmt.Errorf("updating host info in garage: %w", err)
}
return nil
}
func (n *network) reloadLoop(ctx context.Context) {
ticker := time.NewTicker(3 * time.Minute)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
n.l.RLock()
currBootstrap := n.currBootstrap
n.l.RUnlock()
n.logger.Info(ctx, "Checking for bootstrap changes")
newHosts, err := n.getGarageBootstrapHosts(ctx, currBootstrap)
if err != nil {
n.logger.Error(ctx, "Failed to get hosts from garage", err)
continue
}
// TODO there's some potential race conditions here, where
// CreateHost could be called at this point, write the new host to
// garage and the bootstrap, but then this reload call removes the
// host from this bootstrap/children until the next reload.
if err := n.reload(ctx, currBootstrap, newHosts); err != nil {
n.logger.Error(ctx, "Reloading with new host data failed", err)
continue
}
}
}
}
// reload will check the existing hosts data from currBootstrap against a
// potentially updated set of hosts data, and if there are any differences will
// perform whatever changes are necessary.
func (n *network) reload(
ctx context.Context,
currBootstrap bootstrap.Bootstrap,
newHosts map[nebula.HostName]bootstrap.Host,
) error {
var (
newBootstrap = currBootstrap
thisHost = currBootstrap.ThisHost()
)
newBootstrap.Hosts = newHosts
// the daemon's view of this host's bootstrap info takes precedence over
// whatever is in garage
newBootstrap.Hosts[thisHost.Name] = thisHost
diff, err := children.CalculateReloadDiff(
n.daemonConfig, currBootstrap, newBootstrap,
)
if err != nil {
return fmt.Errorf("calculating diff between bootstraps: %w", err)
} else if diff == (children.ReloadDiff{}) {
n.logger.Info(ctx, "No changes to bootstrap detected")
return nil
}
n.logger.Info(ctx, "Bootstrap has changed, storing new bootstrap")
n.l.Lock()
n.currBootstrap = newBootstrap
n.l.Unlock()
if err := n.children.Reload(ctx, newBootstrap, diff); err != nil {
return fmt.Errorf("reloading child processes (diff:%+v): %w", diff, err)
}
return nil
}
func withCurrBootstrap[Res any](
n *network, fn func(bootstrap.Bootstrap) (Res, error),
) (Res, error) {
n.l.RLock()
defer n.l.RUnlock()
currBootstrap := n.currBootstrap
return fn(currBootstrap)
}
func (n *network) getBootstrap(
ctx context.Context,
) (
bootstrap.Bootstrap, error,
) {
return withCurrBootstrap(n, func(
currBootstrap bootstrap.Bootstrap,
) (
bootstrap.Bootstrap, error,
) {
return currBootstrap, nil
})
}
func (n *network) GetHosts(ctx context.Context) ([]bootstrap.Host, error) {
b, err := n.getBootstrap(ctx)
if err != nil {
return nil, fmt.Errorf("retrieving bootstrap: %w", err)
}
hosts := maps.Values(b.Hosts)
slices.SortFunc(hosts, func(a, b bootstrap.Host) int {
return cmp.Compare(a.Name, b.Name)
})
return hosts, nil
}
func (n *network) GetGarageClientParams(
ctx context.Context,
) (
GarageClientParams, error,
) {
return withCurrBootstrap(n, func(
currBootstrap bootstrap.Bootstrap,
) (
GarageClientParams, error,
) {
return n.getGarageClientParams(ctx, currBootstrap)
})
}
func (n *network) GetNebulaCAPublicCredentials(
ctx context.Context,
) (
nebula.CAPublicCredentials, error,
) {
b, err := n.getBootstrap(ctx)
if err != nil {
return nebula.CAPublicCredentials{}, fmt.Errorf(
"retrieving bootstrap: %w", err,
)
}
return b.CAPublicCredentials, nil
}
func (n *network) RemoveHost(ctx context.Context, hostName nebula.HostName) error {
// TODO RemoveHost should publish a certificate revocation for the host
// being removed.
_, err := withCurrBootstrap(n, func(
currBootstrap bootstrap.Bootstrap,
) (
struct{}, error,
) {
garageClientParams, err := n.getGarageClientParams(ctx, currBootstrap)
if err != nil {
return struct{}{}, fmt.Errorf("get garage client params: %w", err)
}
client := garageClientParams.GlobalBucketS3APIClient()
return struct{}{}, removeGarageBootstrapHost(ctx, client, hostName)
})
return err
}
func makeCACreds(
currBootstrap bootstrap.Bootstrap,
caSigningPrivateKey nebula.SigningPrivateKey,
) nebula.CACredentials {
return nebula.CACredentials{
Public: currBootstrap.CAPublicCredentials,
SigningPrivateKey: caSigningPrivateKey,
}
}
func chooseAvailableIP(b bootstrap.Bootstrap) (netip.Addr, error) {
var (
cidrIPNet = b.CAPublicCredentials.Cert.Unwrap().Details.Subnets[0]
cidrMask = cidrIPNet.Mask
cidrIPB = cidrIPNet.IP
cidr = netip.MustParsePrefix(cidrIPNet.String())
cidrIP = cidr.Addr()
cidrSuffixBits = cidrIP.BitLen() - cidr.Bits()
inUseIPs = make(map[netip.Addr]struct{}, len(b.Hosts))
)
for _, host := range b.Hosts {
inUseIPs[host.IP()] = struct{}{}
}
// first check that there are any addresses at all. We can determine the
// number of possible addresses using the network CIDR. The first IP in a
// subnet is the network identifier, and is reserved. The last IP is the
// broadcast IP, and is also reserved. Hence, the -2.
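// For example, a /24 subnet has 8 suffix bits, giving (1<<8)-2 = 254 usable
// IPs.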
usableIPs := (1 << cidrSuffixBits) - 2
if len(inUseIPs) >= usableIPs {
return netip.Addr{}, errors.New("no available IPs")
}
// We need to know the subnet broadcast address, so we don't accidentally
// produce it.
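// (For example, the broadcast address of 10.6.9.0/24 is 10.6.9.255.)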
cidrBCastIPB := bytes.Clone(cidrIPB)
for i := range cidrBCastIPB {
cidrBCastIPB[i] |= ^cidrMask[i]
}
cidrBCastIP, ok := netip.AddrFromSlice(cidrBCastIPB)
if !ok {
panic(fmt.Sprintf("invalid broadcast ip calculated: %x", cidrBCastIP))
}
// Try a handful of times to pick an IP at random. This is preferred, as it
// leaves less room for two different CreateHost calls to choose the same
// IP.
for range 20 {
b := make([]byte, len(cidrIPB))
if _, err := rand.Read(b); err != nil {
return netip.Addr{}, fmt.Errorf("reading random bytes: %w", err)
}
for i := range b {
b[i] = cidrIPB[i] | (b[i] & ^cidrMask[i])
}
ip, ok := netip.AddrFromSlice(b)
if !ok {
panic(fmt.Sprintf("generated invalid IP: %x", b))
} else if !cidr.Contains(ip) {
panic(fmt.Sprintf(
"generated IP %v which is not in cidr %v", ip, cidr,
))
}
if ip == cidrIP || ip == cidrBCastIP {
continue
}
if _, inUse := inUseIPs[ip]; !inUse {
return ip, nil
}
}
// If randomly picking fails then just go through IPs one by one until a
// free one is found.
for ip := cidrIP.Next(); ip != cidrBCastIP; ip = ip.Next() {
if _, inUse := inUseIPs[ip]; !inUse {
return ip, nil
}
}
panic("All ips are in-use, but somehow that wasn't determined earlier")
}
func (n *network) CreateHost(
ctx context.Context,
hostName nebula.HostName,
opts CreateHostOpts,
) (
JoiningBootstrap, error,
) {
n.l.RLock()
currBootstrap := n.currBootstrap
n.l.RUnlock()
ip := opts.IP
if ip == (netip.Addr{}) {
var err error
if ip, err = chooseAvailableIP(currBootstrap); err != nil {
return JoiningBootstrap{}, fmt.Errorf(
"choosing available IP: %w", err,
)
}
}
// TODO if the IP is given, check that it's not already in use.
caSigningPrivateKey, err := daecommon.GetNebulaCASigningPrivateKey(
ctx, n.secretsStore,
)
if err != nil {
return JoiningBootstrap{}, fmt.Errorf("getting CA signing key: %w", err)
}
var joiningBootstrap JoiningBootstrap
joiningBootstrap.Bootstrap, err = bootstrap.New(
makeCACreds(currBootstrap, caSigningPrivateKey),
currBootstrap.NetworkCreationParams,
currBootstrap.Hosts,
hostName,
ip,
)
if err != nil {
return JoiningBootstrap{}, fmt.Errorf(
"initializing bootstrap data: %w", err,
)
}
secretsIDs := []secrets.ID{
daecommon.GarageRPCSecretSecretID,
daecommon.GarageS3APIGlobalBucketCredentialsSecretID,
}
if opts.CanCreateHosts {
secretsIDs = append(
secretsIDs, daecommon.NebulaCASigningPrivateKeySecretID,
)
}
if joiningBootstrap.Secrets, err = secrets.Export(
ctx, n.secretsStore, secretsIDs,
); err != nil {
return JoiningBootstrap{}, fmt.Errorf("exporting secrets: %w", err)
}
n.logger.Info(ctx, "Putting new host in garage")
err = n.putGarageBoostrapHost(ctx, joiningBootstrap.Bootstrap)
if err != nil {
return JoiningBootstrap{}, fmt.Errorf("putting new host in garage: %w", err)
}
// The new bootstrap will have been initialized with all existing hosts (from
// currBootstrap) as well as the host being created.
newHosts := joiningBootstrap.Bootstrap.Hosts
n.logger.Info(ctx, "Reloading local state with new host")
if err := n.reload(ctx, currBootstrap, newHosts); err != nil {
return JoiningBootstrap{}, fmt.Errorf("reloading child processes: %w", err)
}
return joiningBootstrap, nil
}
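// A hedged caller sketch for CreateHost (the host name and opts values are
// hypothetical; IP and CanCreateHosts are the only opts fields shown in this
// diff):
//
//	jb, err := n.CreateHost(ctx, "secondus", CreateHostOpts{
//		CanCreateHosts: false,
//	})
//	if err != nil {
//		// handle error
//	}
//	// jb can then be serialized and handed to the new host to join with.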
func (n *network) CreateNebulaCertificate(
ctx context.Context,
hostName nebula.HostName,
hostPubKey nebula.EncryptingPublicKey,
) (
nebula.Certificate, error,
) {
return withCurrBootstrap(n, func(
currBootstrap bootstrap.Bootstrap,
) (
nebula.Certificate, error,
) {
host, ok := currBootstrap.Hosts[hostName]
if !ok {
return nebula.Certificate{}, ErrHostNotFound
}
ip := host.IP()
caSigningPrivateKey, err := daecommon.GetNebulaCASigningPrivateKey(
ctx, n.secretsStore,
)
if err != nil {
return nebula.Certificate{}, fmt.Errorf("getting CA signing key: %w", err)
}
caCreds := makeCACreds(currBootstrap, caSigningPrivateKey)
return nebula.NewHostCert(caCreds, hostPubKey, hostName, ip)
})
}
func (n *network) GetNetworkCreationParams(
ctx context.Context,
) (
bootstrap.CreationParams, error,
) {
return withCurrBootstrap(n, func(
currBootstrap bootstrap.Bootstrap,
) (
bootstrap.CreationParams, error,
) {
return currBootstrap.NetworkCreationParams, nil
})
}
func (n *network) Shutdown() error {
close(n.shutdownCh)
n.wg.Wait()
if n.children != nil {
n.children.Shutdown()
}
return nil
}


@ -1,192 +1,47 @@
package daemon
import (
"cmp"
"context"
"fmt"
"isle/bootstrap"
"isle/daemon/jsonrpc2"
"isle/daemon/network"
"isle/nebula"
"net/netip"
"slices"
"golang.org/x/exp/maps"
"net/http"
)
// RPC exposes all RPC methods which are available to be called over the RPC
// interface.
type RPC struct {
daemon Daemon
// RPC defines the methods which the Daemon exposes over RPC (via the RPCHandler
// or HTTPRPCHandler methods). Method documentation can be found on the Daemon
// type.
type RPC interface {
CreateNetwork(
ctx context.Context,
name string,
domain string,
ipNet nebula.IPNet,
hostName nebula.HostName,
) error
JoinNetwork(context.Context, network.JoiningBootstrap) error
// All network.RPC methods are automatically implemented by Daemon using the
// currently joined network. If no network is joined then any call to these
// methods will return ErrNoNetwork.
network.RPC
}
// NewRPC initializes and returns an RPC instance.
func NewRPC(daemon Daemon) *RPC {
return &RPC{daemon}
}
// CreateNetworkRequest contains the arguments to the CreateNetwork RPC method.
//
// All fields are required.
type CreateNetworkRequest struct {
// Human-readable name of the network.
Name string
// Primary domain name that network services are served under.
Domain string
// An IP subnet, in CIDR form, which will be the overall range of possible
// IPs in the network. The first IP in this network range will become this
// first host's IP.
IPNet nebula.IPNet
// The name of this first host in the network.
HostName nebula.HostName
}
// CreateNetwork passes through to the Daemon method of the same name.
func (r *RPC) CreateNetwork(
ctx context.Context, req CreateNetworkRequest,
) (
struct{}, error,
) {
return struct{}{}, r.daemon.CreateNetwork(
ctx, req.Name, req.Domain, req.IPNet, req.HostName,
// RPCHandler returns a jsonrpc2.Handler which will use the Daemon to serve all
// methods defined on the RPC interface.
func (d *Daemon) RPCHandler() jsonrpc2.Handler {
rpc := RPC(d)
return jsonrpc2.Chain(
jsonrpc2.NewMLogMiddleware(d.logger.WithNamespace("rpc")),
jsonrpc2.ExposeServerSideErrorsMiddleware,
)(
jsonrpc2.NewDispatchHandler(&rpc),
)
}
// JoinNetwork passes through to the Daemon method of the same name.
func (r *RPC) JoinNetwork(
ctx context.Context, req JoiningBootstrap,
) (
struct{}, error,
) {
return struct{}{}, r.daemon.JoinNetwork(ctx, req)
}
// GetHostsResult wraps the results from the GetHosts RPC method.
type GetHostsResult struct {
Hosts []bootstrap.Host
}
// GetHosts returns all hosts known to the network, sorted by their name.
func (r *RPC) GetHosts(
ctx context.Context, req struct{},
) (
GetHostsResult, error,
) {
b, err := r.daemon.GetBootstrap(ctx)
if err != nil {
return GetHostsResult{}, fmt.Errorf("retrieving bootstrap: %w", err)
}
hosts := maps.Values(b.Hosts)
slices.SortFunc(hosts, func(a, b bootstrap.Host) int {
return cmp.Compare(a.Name, b.Name)
})
return GetHostsResult{hosts}, nil
}
// GetGarageClientParams passes the call through to the Daemon method of the
// same name.
func (r *RPC) GetGarageClientParams(
ctx context.Context, req struct{},
) (
GarageClientParams, error,
) {
return r.daemon.GetGarageClientParams(ctx)
}
// GetNebulaCAPublicCredentials returns the CAPublicCredentials for the network.
func (r *RPC) GetNebulaCAPublicCredentials(
ctx context.Context, req struct{},
) (
nebula.CAPublicCredentials, error,
) {
b, err := r.daemon.GetBootstrap(ctx)
if err != nil {
return nebula.CAPublicCredentials{}, fmt.Errorf(
"retrieving bootstrap: %w", err,
)
}
return b.CAPublicCredentials, nil
}
// RemoveHostRequest contains the arguments to the RemoveHost RPC method.
//
// All fields are required.
type RemoveHostRequest struct {
HostName nebula.HostName
}
// RemoveHost passes the call through to the Daemon method of the same name.
func (r *RPC) RemoveHost(ctx context.Context, req RemoveHostRequest) (struct{}, error) {
return struct{}{}, r.daemon.RemoveHost(ctx, req.HostName)
}
// CreateHostRequest contains the arguments to the
// CreateHost RPC method.
//
// All fields are required.
type CreateHostRequest struct {
HostName nebula.HostName
IP netip.Addr
Opts CreateHostOpts
}
// CreateHostResult wraps the results from the CreateHost RPC method.
type CreateHostResult struct {
JoiningBootstrap JoiningBootstrap
}
// CreateHost passes the call through to the Daemon method of the
// same name.
func (r *RPC) CreateHost(
ctx context.Context, req CreateHostRequest,
) (
CreateHostResult, error,
) {
joiningBootstrap, err := r.daemon.CreateHost(
ctx, req.HostName, req.IP, req.Opts,
)
if err != nil {
return CreateHostResult{}, err
}
return CreateHostResult{JoiningBootstrap: joiningBootstrap}, nil
}
// CreateNebulaCertificateRequest contains the arguments to the
// CreateNebulaCertificate RPC method.
//
// All fields are required.
type CreateNebulaCertificateRequest struct {
HostName nebula.HostName
HostEncryptingPublicKey nebula.EncryptingPublicKey
Opts CreateNebulaCertificateOpts
}
// CreateNebulaCertificateResult wraps the results from the
// CreateNebulaCertificate RPC method.
type CreateNebulaCertificateResult struct {
HostNebulaCertifcate nebula.Certificate
}
// CreateNebulaCertificate passes the call through to the Daemon method of the
// same name.
func (r *RPC) CreateNebulaCertificate(
ctx context.Context, req CreateNebulaCertificateRequest,
) (
CreateNebulaCertificateResult, error,
) {
cert, err := r.daemon.CreateNebulaCertificate(
ctx, req.HostName, req.HostEncryptingPublicKey, req.Opts,
)
if err != nil {
return CreateNebulaCertificateResult{}, err
}
return CreateNebulaCertificateResult{
HostNebulaCertifcate: cert,
}, nil
// HTTPRPCHandler returns an http.Handler which will use the Daemon to serve all
// methods defined on the RPC interface via the JSONRPC2 protocol.
func (d *Daemon) HTTPRPCHandler() http.Handler {
return jsonrpc2.NewHTTPHandler(d.RPCHandler())
}
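// A minimal sketch of serving the daemon's RPC over a unix socket
// (illustrative only; socketPath and the error handling are placeholders, not
// part of this diff):
//
//	l, err := net.Listen("unix", socketPath)
//	if err != nil {
//		// handle error
//	}
//	_ = http.Serve(l, d.HTTPRPCHandler())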


@ -1,52 +0,0 @@
package daemon
import (
"fmt"
"isle/garage"
"isle/nebula"
"isle/secrets"
)
const (
secretsNSNebula = "nebula"
secretsNSGarage = "garage"
)
////////////////////////////////////////////////////////////////////////////////
// Nebula-related secrets
var (
nebulaCASigningPrivateKeySecretID = secrets.NewID(secretsNSNebula, "ca-signing-private-key")
)
var getNebulaCASigningPrivateKey, setNebulaCASigningPrivateKey = secrets.GetSetFunctions[nebula.SigningPrivateKey](
nebulaCASigningPrivateKeySecretID,
)
////////////////////////////////////////////////////////////////////////////////
// Garage-related secrets
func garageS3APIBucketCredentialsSecretID(credsName string) secrets.ID {
return secrets.NewID(
secretsNSGarage, fmt.Sprintf("s3-api-bucket-credentials-%s", credsName),
)
}
var (
garageRPCSecretSecretID = secrets.NewID(secretsNSGarage, "rpc-secret")
garageS3APIGlobalBucketCredentialsSecretID = garageS3APIBucketCredentialsSecretID(
garage.GlobalBucketS3APICredentialsName,
)
)
// Get/Set functions for garage-related secrets.
var (
getGarageRPCSecret, setGarageRPCSecret = secrets.GetSetFunctions[string](
garageRPCSecretSecretID,
)
getGarageS3APIGlobalBucketCredentials,
setGarageS3APIGlobalBucketCredentials = secrets.GetSetFunctions[garage.S3APICredentials](
garageS3APIGlobalBucketCredentialsSecretID,
)
)


@ -3,13 +3,12 @@ module isle
go 1.22
require (
code.betamike.com/micropelago/pmux v0.0.0-20230706154656-fde8c2be7540
code.betamike.com/micropelago/pmux v0.0.0-20240719134913-f5fce902e8c4
dev.mediocregopher.com/mediocre-go-lib.git v0.0.0-20240511135822-4ab1176672d7
github.com/adrg/xdg v0.4.0
github.com/imdario/mergo v0.3.12
github.com/jxskiss/base62 v1.1.0
github.com/minio/minio-go/v7 v7.0.28
github.com/shirou/gopsutil v3.21.11+incompatible
github.com/slackhq/nebula v1.6.1
github.com/spf13/pflag v1.0.5
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c
@ -19,7 +18,6 @@ require (
require (
github.com/dustin/go-humanize v1.0.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/google/uuid v1.1.1 // indirect
github.com/gopherjs/gopherjs v1.17.2 // indirect
github.com/json-iterator/go v1.1.12 // indirect
@ -34,9 +32,6 @@ require (
github.com/rs/xid v1.2.1 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/smartystreets/assertions v1.13.0 // indirect
github.com/tklauser/go-sysconf v0.3.10 // indirect
github.com/tklauser/numcpus v0.4.0 // indirect
github.com/yusufpapurcu/wmi v1.2.2 // indirect
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29 // indirect
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b // indirect
golang.org/x/sys v0.0.0-20220406155245-289d7a0edf71 // indirect


@ -1,5 +1,5 @@
code.betamike.com/micropelago/pmux v0.0.0-20230706154656-fde8c2be7540 h1:ycD1mEkCbrx3Apr71Q2PKgyycRu8wvwX9J4ZSvjyTtQ=
code.betamike.com/micropelago/pmux v0.0.0-20230706154656-fde8c2be7540/go.mod h1:WlEWacLREVfIQl1IlBjKzuDgL56DFRvyl7YiL5gGP4w=
code.betamike.com/micropelago/pmux v0.0.0-20240719134913-f5fce902e8c4 h1:n4pGP12kgdH5kCTF4TZYpOjWuAR/zZLpM9j3neeVMEk=
code.betamike.com/micropelago/pmux v0.0.0-20240719134913-f5fce902e8c4/go.mod h1:WlEWacLREVfIQl1IlBjKzuDgL56DFRvyl7YiL5gGP4w=
dev.mediocregopher.com/mediocre-go-lib.git v0.0.0-20240511135822-4ab1176672d7 h1:wKQ3bXzG+KQDtRAN/xaRZ4aQtJe1pccleG6V43MvFxw=
dev.mediocregopher.com/mediocre-go-lib.git v0.0.0-20240511135822-4ab1176672d7/go.mod h1:nP+AtQWrc3k5qq5y3ABiBLkOfUPlk/FO9fpTFpF+jgs=
github.com/adrg/xdg v0.4.0 h1:RzRqFcjH4nE5C6oTAxhBtoE2IRyjBSa62SCbyPidvls=
@ -9,8 +9,6 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
@ -55,8 +53,6 @@ github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZb
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
github.com/shirou/gopsutil v3.21.11+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/slackhq/nebula v1.6.1 h1:/OCTR3abj0Sbf2nGoLUrdDXImrCv0ZVFpVPP5qa0DsM=
@ -73,24 +69,16 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/tklauser/go-sysconf v0.3.10 h1:IJ1AZGZRWbY8T5Vfk04D9WOA5WSejdflXxP03OUqALw=
github.com/tklauser/go-sysconf v0.3.10/go.mod h1:C8XykCvCb+Gn0oNCWPIlcb0RuglQTYaQ2hGm7jmxEFk=
github.com/tklauser/numcpus v0.4.0 h1:E53Dm1HjH1/R2/aoCtXtPgzmElmn51aOkhCFSuZq//o=
github.com/tklauser/numcpus v0.4.0/go.mod h1:1+UI3pD8NW14VMwdgJNJ1ESk2UnwhAnz5hMwiKKqXCQ=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c h1:u6SKchux2yDvFQnDHS3lPnIRmfVJ5Sxy3ao2SIdysLQ=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c/go.mod h1:hzIxponao9Kjc7aWznkXaL4U4TWaDSs8zcsY4Ka08nM=
github.com/yusufpapurcu/wmi v1.2.2 h1:KBNDSne4vP5mbSWnJbO+51IMOXJB67QiYCSBrubbPRg=
github.com/yusufpapurcu/wmi v1.2.2/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29 h1:tkVvjkPTB7pnW3jnid7kNyAMPVWllTNOf/qKDze4p9o=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20240613232115-7f521ea00fb8 h1:yixxcjnhBmY0nkL253HFVIm0JsFHwrHdT3Yh6szTnfY=
golang.org/x/exp v0.0.0-20240613232115-7f521ea00fb8/go.mod h1:jj3sYF3dwk5D+ghuXyeI3r5MFf+NT2An6/9dOA95KSI=
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b h1:vI32FkLJNAWtGD4BwkThwEy6XS7ZLLMHkSkYfF8M0W0=
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220406155245-289d7a0edf71 h1:PRD0hj6tTuUnCFD08vkvjkYFbQg/9lV8KIxe1y4/cvU=
golang.org/x/sys v0.0.0-20220406155245-289d7a0edf71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/text v0.3.8-0.20211105212822-18b340fc7af2 h1:GLw7MR8AfAG2GmGcmVgObFOHXYypgGjnGno25RDwn3Y=


@ -11,6 +11,11 @@ var hostNameRegexp = regexp.MustCompile(`^[a-z][a-z0-9\-]*$`)
// lowercase letters, numbers, and hyphens, and must start with a letter.
type HostName string
// MarshalText casts the HostName to a byte slice and returns it.
func (h HostName) MarshalText() ([]byte, error) {
return []byte(h), nil
}
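// Since HostName implements encoding.TextMarshaler (and TextUnmarshaler,
// below), it can be used directly as a JSON object key; e.g. (hypothetical
// values) json.Marshal(map[HostName]string{"primus": "10.6.9.1"}) yields
// {"primus":"10.6.9.1"}.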
// UnmarshalText parses and validates a HostName from a text string.
func (h *HostName) UnmarshalText(b []byte) error {
if !hostNameRegexp.Match(b) {


@ -20,12 +20,9 @@ type fsStorePayload[Body any] struct {
}
// NewFSStore returns a Store which will store secrets to the given directory.
func NewFSStore(dirPath string) (Store, error) {
err := os.Mkdir(dirPath, 0700)
if err != nil && !errors.Is(err, fs.ErrExist) {
return nil, fmt.Errorf("making directory: %w", err)
}
return &fsStore{dirPath}, nil
// The directory must already exist.
func NewFSStore(dirPath string) Store {
return &fsStore{dirPath}
}
func (s *fsStore) path(id ID) string {


@ -12,16 +12,11 @@ func Test_fsStore(t *testing.T) {
}
var (
ctx = context.Background()
dir = t.TempDir()
id = NewID("testing", "a")
ctx = context.Background()
store = NewFSStore(t.TempDir())
id = NewID("testing", "a")
)
store, err := NewFSStore(dir)
if err != nil {
t.Fatal(err)
}
var got payload
if err := store.Get(ctx, &got, id); !errors.Is(err, ErrNotFound) {
t.Fatalf("expected %v, got: %v", ErrNotFound, err)

go/toolkit/dir.go Normal file

@ -0,0 +1,121 @@
package toolkit
import (
"errors"
"fmt"
"io/fs"
"os"
"path/filepath"
)
// Dir is a type which makes it possible to statically assert that a directory
// has already been created.
type Dir struct {
Path string
}
// MkDir creates a Dir at the given path.
//
// If the directory already exists then the Dir is returned successfully, but
// only if mayExist is true; otherwise an error is returned.
func MkDir(path string, mayExist bool) (Dir, error) {
if path == "" {
panic("Empty path passed to MkDir")
}
{
parentPath := filepath.Dir(path)
parentInfo, err := os.Stat(parentPath)
if err != nil {
return Dir{}, fmt.Errorf(
"checking fs node of parent %q: %w", parentPath, err,
)
} else if !parentInfo.IsDir() {
return Dir{}, fmt.Errorf(
"parent %q is not a directory", parentPath,
)
}
}
info, err := os.Stat(path)
if errors.Is(err, fs.ErrNotExist) {
// fine
} else if err != nil {
return Dir{}, fmt.Errorf("checking fs node: %w", err)
} else if !info.IsDir() {
return Dir{}, fmt.Errorf("exists but is not a directory")
} else {
if !mayExist {
return Dir{}, errors.New("directory already exists")
}
return Dir{path}, nil
}
if err := os.Mkdir(path, 0700); err != nil {
return Dir{}, fmt.Errorf("creating directory: %w", err)
}
return Dir{path}, nil
}
// MkChildDir is a helper for joining Dir's path to the given name and calling
// MkDir with the result.
func (d Dir) MkChildDir(name string, mayExist bool) (Dir, error) {
childPath := filepath.Join(d.Path, name)
d, err := MkDir(childPath, mayExist)
if err != nil {
return Dir{}, fmt.Errorf(
"creating child directory %q: %w", childPath, err,
)
}
return d, nil
}
// ChildDirs returns a Dir for every child directory found under this Dir.
func (d Dir) ChildDirs() ([]Dir, error) {
entries, err := os.ReadDir(d.Path)
if errors.Is(err, fs.ErrNotExist) {
return nil, nil
} else if err != nil {
return nil, fmt.Errorf("listing contents: %w", err)
}
dirs := make([]Dir, 0, len(entries))
for _, entry := range entries {
if !entry.IsDir() {
continue
}
dirs = append(dirs, Dir{Path: filepath.Join(d.Path, entry.Name())})
}
return dirs, nil
}
////////////////////////////////////////////////////////////////////////////////
// MkDirHelper is a helper type for creating a set of directories. It will
// collect errors as they occur, allowing error handling to be deferred.
type MkDirHelper struct {
errs []error
}
// Maybe returns the given Dir and true if err == nil. If err != nil then false
// is returned, and the error is collected internally such that it will be
// returned as part of the Err() output.
//
// This method is designed to be used in conjunction with a MkDir
// function/method, e.g:
//
// d, ok := h.Maybe(MkDir("some/path", true))
func (m *MkDirHelper) Maybe(d Dir, err error) (Dir, bool) {
if err != nil {
m.errs = append(m.errs, err)
return Dir{}, false
}
return d, true
}
// Err returns a join of all errors which have occurred during its lifetime.
func (m *MkDirHelper) Err() error {
return errors.Join(m.errs...)
}
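// A hedged usage sketch combining MkDir, MkChildDir, and MkDirHelper (the
// paths here are hypothetical):
//
//	var h MkDirHelper
//	stateDir, ok := h.Maybe(MkDir("/tmp/isle-state", true))
//	if ok {
//		h.Maybe(stateDir.MkChildDir("networks", true))
//	}
//	if err := h.Err(); err != nil {
//		// handle error(s)
//	}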

go/toolkit/rand.go Normal file

@ -0,0 +1,15 @@
package toolkit
import (
"crypto/rand"
"encoding/hex"
)
// RandStr returns a random hex string generated from l bytes of entropy. The
// returned string is 2*l characters long.
func RandStr(l int) string {
b := make([]byte, l)
if _, err := rand.Read(b); err != nil {
panic(err)
}
return hex.EncodeToString(b)
}
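// For example, RandStr(8) reads 8 random bytes and returns a 16-character hex
// string such as "3f9ac2d41b7e60a8" (value hypothetical).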

go/toolkit/toolkit.go Normal file

@ -0,0 +1,3 @@
// Package toolkit contains useful utilities which are not tied to any
// specific part of isle.
package toolkit

nix/gowrap.nix Normal file

@ -0,0 +1,17 @@
{
buildGoModule,
fetchFromGitHub,
}: let
version = "1.4.0";
in buildGoModule {
pname = "gowrap";
inherit version;
src = fetchFromGitHub {
owner = "hexdigest";
repo = "gowrap";
rev = "v${version}";
hash = "sha256-eEaUANLnxRGfVbhOTwJV+R9iEWMObg0lHqmwO3AYuIk=";
};
vendorHash = "sha256-xIOyXXt8WSGQYIvIam+0e25VNI7awYEEYZBe7trC6zQ=";
subPackages = [ "cmd/gowrap" ];
}


@ -14,6 +14,7 @@ in pkgs.mkShell {
buildInputs = [
pkgs.go
pkgs.golangci-lint
(pkgs.callPackage ./nix/gowrap.nix {})
];
shellHook = ''
true # placeholder


@ -10,10 +10,7 @@ function assert_a {
as_primus
assert_a "$primus_ip" primus.hosts.shared.test
# TODO This doesn't work at present, there would need to be some mechanism to
# block the test until secondus' bootstrap info can propagate to primus.
#assert_a "$secondus_ip" secondus.hosts.shared.test
assert_a "$secondus_ip" secondus.hosts.shared.test
as_secondus
assert_a "$primus_ip" primus.hosts.shared.test


@ -1,15 +1,20 @@
# shellcheck source=../../utils/with-1-data-1-empty-node-network.sh
source "$UTILS"/with-1-data-1-empty-node-network.sh
# TODO when primus creates secondus' bootstrap it should write the new host to
# its own bootstrap, as well as to garage.
function do_tests {
hosts="$(isle hosts list)"
[ "$(echo "$hosts" | jq -r '.[0].Name')" = "primus" ]
[ "$(echo "$hosts" | jq -r '.[0].VPN.IP')" = "10.6.9.1" ]
[ "$(echo "$hosts" | jq -r '.[0].Storage.Instances|length')" = "3" ]
[ "$(echo "$hosts" | jq -r '.[1].Name')" = "secondus" ]
[ "$(echo "$hosts" | jq -r '.[1].VPN.IP')" = "$secondus_ip" ]
[ "$(echo "$hosts" | jq -r '.[1].Storage.Instances|length')" = "0" ]
}
as_primus
do_tests
as_secondus
hosts="$(isle hosts list)"
[ "$(echo "$hosts" | jq -r '.[0].Name')" = "primus" ]
[ "$(echo "$hosts" | jq -r '.[0].VPN.IP')" = "10.6.9.1" ]
[ "$(echo "$hosts" | jq -r '.[0].Storage.Instances|length')" = "3" ]
[ "$(echo "$hosts" | jq -r '.[1].Name')" = "secondus" ]
[ "$(echo "$hosts" | jq -r '.[1].VPN.IP')" = "10.6.9.2" ]
[ "$(echo "$hosts" | jq -r '.[1].Storage.Instances|length')" = "0" ]
do_tests


@ -1,7 +1,6 @@
# shellcheck source=../../utils/with-1-data-1-empty-node-network.sh
source "$UTILS"/with-1-data-1-empty-node-network.sh
adminBS="$XDG_STATE_HOME"/isle/bootstrap.json
bs="$secondus_bootstrap" # set in with-1-data-1-empty-node-network.sh
[ "$(jq -r <"$bs" '.Bootstrap.NetworkCreationParams.Domain')" = "shared.test" ]
@ -9,7 +8,7 @@ bs="$secondus_bootstrap" # set in with-1-data-1-empty-node-network.sh
[ "$(jq -r <"$bs" '.Bootstrap.SignedHostAssigned.Body.Name')" = "secondus" ]
[ "$(jq -r <"$bs" '.Bootstrap.Hosts.primus.PublicCredentials')" \
= "$(jq -r <"$adminBS" '.SignedHostAssigned.Body.PublicCredentials')" ]
= "$(jq -r <"$BOOTSTRAP_FILE" '.SignedHostAssigned.Body.PublicCredentials')" ]
[ "$(jq <"$bs" '.Bootstrap.Hosts.primus.Garage.Instances|length')" = "3" ]


@ -9,7 +9,7 @@ cat pubkey
--hostname non-esiste \
--public-key-path pubkey \
2>&1 || true \
) | grep '\[6\] Host not found'
) | grep '\[1002\] Host not found'
isle nebula create-cert \
--hostname primus \


@ -13,6 +13,6 @@ export TMPDIR="$TMPDIR"
export XDG_RUNTIME_DIR="$XDG_RUNTIME_DIR"
export XDG_STATE_HOME="$XDG_STATE_HOME"
export ISLE_DAEMON_HTTP_SOCKET_PATH="$ROOT_TMPDIR/$base-daemon.sock"
BOOTSTRAP_FILE="$XDG_STATE_HOME/isle/bootstrap.json"
BOOTSTRAP_FILE="$XDG_STATE_HOME/isle/networks/$NETWORK_ID/bootstrap.json"
cd "$TMPDIR"
EOF


@ -8,7 +8,6 @@ primus_base="$base/primus"
primus_ip="10.6.9.1"
secondus_base="$base/secondus"
secondus_ip="10.6.9.2"
function as_primus {
current_ip="$primus_ip"
@ -68,7 +67,6 @@ EOF
echo "Creating secondus bootstrap"
isle hosts create \
--hostname secondus \
--ip "$secondus_ip" \
> "$secondus_bootstrap"
(
@ -91,3 +89,17 @@ EOF
isle network join -b "$secondus_bootstrap"
)
fi
secondus_ip="$(
nebula-cert print -json \
-path <(jq -r '.Bootstrap.Hosts["secondus"].PublicCredentials.Cert' "$secondus_bootstrap") \
| jq -r '.details.ips[0]' \
| cut -d/ -f1
)"
NETWORK_ID="$(jq '.Bootstrap.NetworkCreationParams.ID' "$secondus_bootstrap")"
export NETWORK_ID
# shared-daemon-env.sh depends on NETWORK_ID, so we re-call as_primus in order
# to fully populate the envvars we need.
as_primus