Got an already working docker compose file for your project? Just send it to your seelf instance and boom, it's live on your own infrastructure, with all services correctly deployed and exposed on nice URLs as needed! See the documentation for more information.
Note
Although Docker is the only backend supported at the moment, I would like to investigate enabling other ones too, such as a remote Docker engine or Podman; see the roadmap.
For now, seelf only supports deploying applications on a local Docker engine.
The goal is to support multiple providers inside a single seelf instance. The simplest one will be a remote Docker engine, and we can investigate some initial support for Podman, Docker Swarm or Kubernetes.
Initial version roadmap
- Add a Target resource representing where seelf applications can be deployed, with:
  - A unique id
  - A name (could be anything since it will only be used for tooltips and such)
  - A unique domain url (for the first version, only allow one domain per target; this will be extended to support multiple domains per target in the future, but keep it simple for now)
  - A provider-specific configuration with everything needed to connect to the local/remote host. This one should be bound to a specific provider, just like SourceData. So for example, a docker provider configuration will include a host and a private ssh key
- Remove the configuration related to the balancer (since it will now be configured on a per-target basis)
- Add a target property to app environments, so an environment will now include a target and service variables
- Add the target to the deployment config
- Providers will receive a Target when deploying or cleaning stuff and should set it up appropriately the first time they have to deal with it. This way, if a user deletes a proxy configuration, just restart seelf and the first deployment on that target will redeploy the proxy needed to make the application available
- Changing an app environment's target should clear that specific environment on the old target (it should happen on the first successful deployment on a different target)
- App name uniqueness will now be based on name, production_target and staging_target, so multiple apps with the same name will be allowed only if they are not deployed on the same target (because for now, one target equals one domain)
- Configuring a target for Docker will use an ssh config file to configure the Host / IdentityFile to use, based on the target-specific private key
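As an illustration of the last point, the generated ssh config entry could look like the following sketch (the host alias, address, user and key path are hypothetical, not what seelf actually writes):

```
Host seelf-target-1
    HostName 203.0.113.10
    User seelf
    IdentityFile /seelf/data/ssh/seelf-target-1.key
    StrictHostKeyChecking accept-new
```

With such an entry, the docker provider can simply connect to the ssh alias and the right identity file is picked up per target.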
Not part of the first version
- Multiple domains per target
- Generate a private key if none is provided
- Multiple targets per host and configurable proxy ports (See #1 (comment))
Some images do not use the EXPOSE instruction in their Dockerfile, and without it, Traefik does not know how to forward traffic to the container.
Since seelf relies on the ports being defined in a compose file, we could easily add the needed label "traefik.http.services.service_name.loadbalancer.server.port=80" to prevent unreachable services.
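As a sketch of what the deployed compose service could carry (the service name and port below are illustrative, not what seelf generates today):

```yaml
services:
  web:
    image: some/image-without-expose
    labels:
      # Tell Traefik which container port to forward traffic to, since the
      # image does not declare it with an EXPOSE instruction.
      - "traefik.http.services.web.loadbalancer.server.port=80"
```

The port in the label would come from the port mapping already present in the user's compose file.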
The duration should appear in "realtime". For this to work, we need to store the offset between the server time and the user's computer. Then, a simple setInterval should do the trick.
If not specifically set, seelf will assume it should use the HTTP router. From the Docker perspective, when no protocol is defined, it falls back to TCP. On our side, we need to distinguish (for traefik purposes) between raw TCP and HTTP-specific ones.
I think the most straightforward way, from a user standpoint, is to force the user to specify the protocol if needed. For example, to expose a postgres container, one may use the "5432:5432/tcp" port definition.
When loading the compose file project, we can rely on a specific interpolation function to catch the raw port value and check whether the protocol has been explicitly defined by the user, hence knowing the distinction between HTTP and TCP before Docker falls back to tcp. Something like this:
```go
import (
	"github.com/compose-spec/compose-go/cli"
	"github.com/compose-spec/compose-go/interpolation"
	"github.com/compose-spec/compose-go/loader"
	"github.com/compose-spec/compose-go/tree"
)

opts, _ := cli.NewProjectOptions([]string{"compose.yml"},
	cli.WithName("testouille"),
	cli.WithLoadOptions(func(o *loader.Options) {
		o.Interpolate = &interpolation.Options{
			TypeCastMapping: map[tree.Path]interpolation.Cast{
				"services.*.ports.[]": func(value string) (interface{}, error) {
					// Parse the port value definition and check if the protocol is user-defined
					// and save it somewhere to be able to distinguish HTTP/TCP/UDP ones
					// and use the appropriate traefik router.
					return value, nil
				},
			},
		}
	}),
	cli.WithNormalization(true),
)
```
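To make the parsing step concrete, the protocol check inside that cast function could look like the following sketch. The `protocolOf` helper and its "http" default are assumptions for illustration, not seelf's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// protocolOf extracts the user-defined protocol from a raw compose port
// definition such as "5432:5432/tcp". When no protocol has been explicitly
// set, it returns "http" as the assumed default, since Docker itself would
// silently normalize the value to "tcp" and the distinction would be lost.
func protocolOf(rawPort string) string {
	if idx := strings.LastIndex(rawPort, "/"); idx != -1 {
		return rawPort[idx+1:]
	}
	return "http"
}

func main() {
	fmt.Println(protocolOf("5432:5432/tcp")) // tcp
	fmt.Println(protocolOf("8080:80"))       // http
}
```

The result would be stored aside during interpolation so the deployment step can pick the matching traefik router.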
Warning
Due to this, the host mapping part will be mandatory. Without it (e.g. just specifying - "8080" and relying on ephemeral ports), we can't distinguish between services and things may break. Services are looped over in a non-deterministic order and the interpolate function does not provide the service being processed.
Apply labels accordingly
The first exposed HTTP port will be handled by the current proxy using the generated application subdomain. Other ports will define a specific entrypoint, router and service with a unique name to reach that port.
The Service struct will store application service entrypoints. Every non-default entrypoint will be saved on the target, because the target needs to configure them.
When cleaning up an application, we must ensure the mapping on the target side is deleted.
Find a random available port (for TCP/UDP)
If non-default entrypoints exist, when configuring a target, we can launch a one-off container with ephemeral ports (for every port not mapped yet) and retrieve the allocated ones. This makes sure they are available on the host and leverages Docker.
With those new ports found, we can configure the proxy with all the added entrypoints and relaunch it. If the configuration has not changed, Docker should skip the restart.
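For comparison, a purely host-side way to find a free port is to let the kernel pick one by listening on port 0. This is an alternative sketch, not the one-off container approach described above, which has the advantage of checking availability from Docker's point of view:

```go
package main

import (
	"fmt"
	"net"
)

// freePort asks the kernel for an ephemeral TCP port by listening on port 0,
// then releases it. Note the small race window: another process could grab
// the port between Close and our own bind, which the container-based
// approach avoids.
func freePort() (int, error) {
	l, err := net.Listen("tcp", ":0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := freePort()
	if err != nil {
		panic(err)
	}
	fmt.Println(port)
}
```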
Note
For now, we will use the same proxy, causing a tiny unavailability period. This keeps resource usage low, but in the future, maybe we can add a configuration option to expose those custom ports on a second proxy to prevent that unavailability.
So when a new deployment exposes new TCP or UDP entrypoints, the target will save them and trigger a re-configuration to handle them appropriately.
Note
With this solution, only the proxy knows the final url / port of everything. It makes it easy to change the URL (as is the case right now) or the port mapping without having to redeploy everything.
Update the Service struct and the UI
An exposed Service will now have an array of entrypoints, each with a protocol and a subdomain or port, and will make them available in the UI so the user knows how to reach those entrypoints (based on the target url).
As stated in #17, add the option (for targets using the docker provider) to handle custom entrypoints on a separate container.
The downside is that it will take a little more memory, but the default http entrypoints will not become unavailable while custom entrypoints are being exposed.
For now the installation is a bit clunky, because I wanted to show how we can leverage the traefik proxy deployed by seelf at startup to also expose seelf under the same domain.
Maybe that was a mistake and I should show the easier way using docker run -d -e "[email protected]" -e "SEELF_ADMIN_PASSWORD=admin" -v "/var/run/docker.sock:/var/run/docker.sock" -v "seelfdata:/seelf/data" --restart=unless-stopped -p "8080:8080" yuukanoo/seelf, which binds the server directly to the host.
But since I really want to ease exposing seelf on a specific subdomain, along with certificate generation, I've been tinkering with configuring traefik with a dynamic file exposing seelf (without requiring specific labels).
With this in mind, we can conditionally output this file if some configuration option is set, such as SEELF_SUBDOMAIN, and the command would then be docker run -d -e "SEELF_SUBDOMAIN=seelf" -e "[email protected]" -e "SEELF_ADMIN_PASSWORD=admin" -v "/var/run/docker.sock:/var/run/docker.sock" -v "seelfdata:/seelf/data" --restart=unless-stopped yuukanoo/seelf.
Also, add notes about how to update seelf to the latest version. Since the configuration is written at startup, the docker update process should be an easy one.
Hi, I found your project on Reddit and like the idea.
I have a few stacks that already manage their own reverse proxy/ingress. Is there a possibility to not start traefik and instead have the deployed stacks manage their own exposure?
🚨 The automated release from the main branch failed. 🚨
I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I'm sure you can fix this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the main branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here are some links that can help you:
seelf has a solid set of unit tests, but they have become complicated to manage.
A lot of tests need specific resources in specific states. Maybe we should add fixture packages to expose resource initialization stuff.
The testutil package could also be rewritten to provide simpler assertions (and to keep calls consistent; in the current implementation, expected and actual are sometimes switched).
Maybe it could also expose common patterns such as arrange/act/assert or test arrays to ease the process of writing those kinds of tests.
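The "test arrays" idea could be sketched as below. The `testCase` and `runCases` names are hypothetical, not the actual testutil API; the point is that each case bundles its arrange/act/assert data so cases read uniformly and the (expected, actual) order can no longer be switched:

```go
package main

import "fmt"

// testCase bundles one arrange/act/assert unit: a name, the input to act on,
// and the expected result to assert against.
type testCase[In, Out comparable] struct {
	name     string
	input    In
	expected Out
}

// runCases applies fn to every case and collects a message per failing case,
// always formatting expected before actual.
func runCases[In, Out comparable](fn func(In) Out, cases []testCase[In, Out]) []string {
	var failures []string
	for _, c := range cases {
		if got := fn(c.input); got != c.expected {
			failures = append(failures,
				fmt.Sprintf("%s: expected %v, got %v", c.name, c.expected, got))
		}
	}
	return failures
}

func main() {
	double := func(n int) int { return n * 2 }
	failures := runCases(double, []testCase[int, int]{
		{"doubles two", 2, 4},
		{"doubles zero", 0, 0},
	})
	fmt.Println(len(failures)) // 0
}
```

In real tests, the loop body would call t.Errorf through a t.Helper()-marked function instead of returning strings.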
I logged in to my private registry on the local Docker daemon and tried creating an app with an image from that registry, but it says it's missing basic auth.
At the moment, it seems migrations are not applied in a transaction and as such could leave the database inconsistent and unusable if a migration fails. This should be improved.
To make sure migrations are applied in a specific order, it would be better to accept an array of migration dirs instead of an unordered map:
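The original snippet is not shown above; as an illustration of the idea (names are hypothetical, not seelf's actual types), a slice preserves declaration order while iterating a Go map does not:

```go
package main

import "fmt"

// migrationDir groups the migration scripts of one module. Holding these in
// a slice (instead of a map keyed by name) guarantees a well-defined order.
type migrationDir struct {
	name    string
	scripts []string
}

// migrationOrder flattens the dirs into the exact sequence in which the
// scripts should be applied.
func migrationOrder(dirs []migrationDir) []string {
	var order []string
	for _, d := range dirs {
		for _, s := range d.scripts {
			order = append(order, d.name+"/"+s)
		}
	}
	return order
}

func main() {
	dirs := []migrationDir{
		{"sqlite/auth", []string{"01_create_users.sql"}},
		{"sqlite/deployment", []string{"01_create_apps.sql"}},
	}
	fmt.Println(migrationOrder(dirs))
}
```

Applying that ordered list inside a single transaction would then address the consistency concern as well.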
Allow PATCH only on first-level objects of the API. If a property is set, it should contain all needed fields, as if it were a replacement. This will make things easier to implement and avoid potentially nested branching when updating.