yuukanoo / seelf
Lightweight self-hosted deployment platform written in Go
Home Page: https://yuukanoo.github.io/seelf/
License: GNU General Public License v3.0
Some images do not use the EXPOSE instruction in their Dockerfile and, without it, Traefik does not know how to forward traffic to the container.
Since seelf relies on the port being defined in a compose file, we could easily add the needed label "traefik.http.services.service_name.loadbalancer.server.port=80"
to prevent unreachable services.
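As a sketch, the label seelf could generate might look like the fragment below (the service name, image and port are hypothetical, for illustration only):

```yaml
# Hypothetical compose file for a service whose image lacks an EXPOSE
# instruction; the generated label tells Traefik which container port
# to forward traffic to.
services:
  app:
    image: some-image-without-expose  # hypothetical image name
    ports:
      - "8080:80"
    labels:
      # Label seelf could add automatically from the compose port definition:
      traefik.http.services.app.loadbalancer.server.port: "80"
```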
Support internationalization in the frontend.
As stated in #17, add the option (for targets with the docker provider) to handle custom entrypoints on a separate container.
The downside is that it will take a little more memory, but the default HTTP entrypoints will not become unavailable while custom entrypoints are being exposed.
At the moment, it seems migrations are not applied in a transaction and, as such, a failed migration could leave the database inconsistent and unusable. This should be improved.
To make sure migrations are applied in a specific order, it would be better to accept an ordered array of migrations instead of an unordered map:
seelf/pkg/storage/sqlite/database.go
Line 67 in 1eba5d9
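A minimal sketch of the ordered-slice idea (the names and types here are hypothetical, not seelf's actual API): migrations are applied one by one in slice order, so the order is explicit instead of depending on Go's randomized map iteration, and a failure stops the run with a wrapped error.

```go
package main

import "fmt"

// Migration is a hypothetical representation of a single schema change.
type Migration struct {
	Name string
	SQL  string
}

// applyAll runs migrations in slice order. With a map, the order would be
// undefined since Go randomizes map iteration. In a real implementation each
// run would also be wrapped in a transaction so a failure rolls back cleanly.
func applyAll(migrations []Migration, exec func(sql string) error) ([]string, error) {
	var applied []string
	for _, m := range migrations {
		if err := exec(m.SQL); err != nil {
			return applied, fmt.Errorf("migration %s failed: %w", m.Name, err)
		}
		applied = append(applied, m.Name)
	}
	return applied, nil
}

func main() {
	migrations := []Migration{
		{Name: "001_create_users", SQL: "CREATE TABLE users (id INTEGER)"},
		{Name: "002_create_apps", SQL: "CREATE TABLE apps (id INTEGER)"},
	}
	applied, err := applyAll(migrations, func(sql string) error { return nil })
	fmt.Println(applied, err)
}
```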
Create a GitHub action to deploy applications on seelf instances easily within a CI workflow.
In the future, we'll probably do the same for other providers such as GitLab.
I skipped v2.2 (was waiting on private registries) and my go-to example app to test multiple ports is Forgejo, the Gitea fork I use. It has an HTTP port and a TCP port for the built-in SSH server used for git cloning.
Here is the compose.yaml I use with my current Docker UI:
services:
  app:
    image: codeberg.org/forgejo/forgejo:7.0.3
    environment:
      - USER_UID=1026
      - USER_GID=100
      - FORGEJO__database__DB_TYPE=postgres
      - FORGEJO__database__HOST=db
      - FORGEJO__database__NAME=forgejo
      - FORGEJO__database__USER=forgejo
      - FORGEJO__database__PASSWD=forgejo
    networks:
      - dockge_default
    volumes:
      - /volume1/docker/compose/data/forgejo/data:/data
    ports:
      - 10022:10022
    depends_on:
      db:
        condition: service_healthy
    labels:
      traefik.enable: "true"
      traefik.http.routers.forgejo.rule: "Host(`forgejo.my.redacted.domain`)"
      traefik.http.routers.forgejo.service: "forgejo"
      traefik.http.routers.forgejo.entrypoints: "https"
      traefik.http.routers.forgejo.tls.certresolver: "gandi"
      traefik.http.routers.forgejo.tls.domains[0].main: "my.redacted.domain"
      traefik.http.routers.forgejo.tls.domains[0].sans: "*.my.redacted.domain"
      traefik.http.services.forgejo.loadbalancer.server.port: "3000"
  db:
    image: postgres:14.5
    environment:
      - POSTGRES_USER=forgejo
      - POSTGRES_PASSWORD=forgejo
      - POSTGRES_DB=forgejo
    networks:
      - dockge_default
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U forgejo -d forgejo']
      interval: 5s
      timeout: 5s
      retries: 10
    volumes:
      - /volume1/docker/compose/data/forgejo/postgres:/var/lib/postgresql/data
networks:
  dockge_default:
    external: true
x-dockge:
  urls:
    - https://forgejo.my.redacted.domain
I only specify the SSH port I use (10022, to avoid taking the default 22); port 3000 for the app is part of the Traefik labels I add.
I tried the following compose with Seelf:
services:
  app:
    image: codeberg.org/forgejo/forgejo:7.0.3
    environment:
      - FORGEJO__database__DB_TYPE=postgres
      - FORGEJO__database__HOST=db
      - FORGEJO__database__NAME=forgejo
      - FORGEJO__database__USER=forgejo
      - FORGEJO__database__PASSWD=forgejo
    volumes:
      - forgejo-data:/data
    ports:
      - 3000:3000
      - 10022:10022/tcp
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:14.5
    environment:
      - POSTGRES_USER=forgejo
      - POSTGRES_PASSWORD=forgejo
      - POSTGRES_DB=forgejo
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U forgejo -d forgejo']
      interval: 5s
      timeout: 5s
      retries: 10
    volumes:
      - forgejo-postgres:/var/lib/postgresql/data
volumes:
  forgejo-data:
  forgejo-postgres:
Then I go to the install page, make sure the HTTP port is 3000 and SSH is 10022, and install. Then I create a new user (who becomes the admin), add my SSH key in the settings and create a repo. Cloning it via HTTP works, but via SSH it just hangs, presumably because of a TCP port mismatch. And sure enough, the app info in seelf's UI shows that port 10022 is... wrong?
Not sure why I'm getting port 32773 instead of the requested 10022. The docs don't explain this, I think?
Thanks for any pointers!
This one is a big one.
It could be cool to integrate an "ops" panel inside applications for realtime monitoring of services, logs, resource usage, stop/restart and so on.
Deactivate the auto-scroll behavior if the user has scrolled up.
I logged in to my private registry on the local Docker daemon and tried creating an app with an image from that registry, but it says it's missing basic auth.
I've made a mistake during a small refactoring and missed a bug causing the interface to break unexpectedly.
I'm working on a fix and it will be available within the next hour.
Sorry for the inconvenience.
Add a function monad.TryGet() (T, bool) to ease the process of checking and getting an optional value, and refactor things accordingly.
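A sketch of what such a helper could look like with Go generics (the Maybe type below is a hypothetical stand-in for seelf's actual monad package, not its real API):

```go
package main

import "fmt"

// Maybe is a hypothetical optional type mirroring the idea of an
// optional-value monad.
type Maybe[T any] struct {
	value T
	isSet bool
}

func Some[T any](v T) Maybe[T] { return Maybe[T]{value: v, isSet: true} }
func None[T any]() Maybe[T]    { return Maybe[T]{} }

// TryGet returns the inner value and whether it was set, enabling the
// familiar comma-ok pattern instead of separate has/get calls.
func (m Maybe[T]) TryGet() (T, bool) {
	return m.value, m.isSet
}

func main() {
	if v, ok := Some(42).TryGet(); ok {
		fmt.Println("got", v)
	}
	if _, ok := None[int]().TryGet(); !ok {
		fmt.Println("empty")
	}
}
```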
Hi, I found your project on Reddit and like the idea.
I have a few stacks that already manage their own reverse proxy/ingress. Is there a possibility to not start traefik and instead let the deployed stacks manage their own exposure?
Use the next branch as a preview build, publishing it to Docker Hub.
Since this branch contains edge features, users will be able to give it a try without expecting it to be one hundred percent stable.
Allow PATCH only on first-level properties of API objects. If a property is set, it should contain all needed fields, as if it was a replacement. This will make things easier to implement and avoid potentially nested branching when updating.
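The rule can be sketched as follows (field names and shapes are hypothetical, for illustration): every top-level property present in the PATCH body replaces the stored value wholesale, with no recursive merging of nested objects.

```go
package main

import "fmt"

// applyPatch replaces every top-level key present in patch. A provided
// property acts as a full replacement: nested objects are NOT deep-merged.
func applyPatch(current, patch map[string]any) map[string]any {
	result := make(map[string]any, len(current))
	for k, v := range current {
		result[k] = v
	}
	for k, v := range patch {
		result[k] = v
	}
	return result
}

func main() {
	current := map[string]any{
		"name": "my-app",
		"env":  map[string]any{"production": map[string]any{"DEBUG": "0"}},
	}
	// The nested "env" object is replaced entirely: the caller must send
	// all the fields it wants to keep.
	patch := map[string]any{
		"env": map[string]any{"staging": map[string]any{"DEBUG": "1"}},
	}
	fmt.Println(applyPatch(current, patch))
}
```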
For now, seelf only supports deploying applications on a local Docker engine.
The goal is to support multiple providers inside a seelf instance. The simplest one will be a remote Docker engine, and we can investigate some initial support for Podman, Docker Swarm or Kubernetes.
Add a Target resource representing where seelf applications could be deployed, with:
- an id
- a name (could be anything since it will only be used for tooltips and such)
- a domain url (for the first version, only allow one domain per target; this will be extended to support multiple domains per target in the future, but keep it simple for now)
- a provider specific configuration with everything needed to connect to the local/remote host. This one should be bound to a specific provider, just like SourceData. So for example, a docker provider configuration will include a host and a private ssh key
Then:
- add a target property to app environments, so an environment will now include a target and service variables
- add the target to the deployment config
- providers should check the Target when deploying or cleaning stuff and appropriately set it up the first time they have to deal with it. This way, if a user deletes a proxy configuration, they can just restart seelf and the first deployment on that target will redeploy the proxy needed to make the application available
- application uniqueness will be based on name, production_target and staging_target, so multiple apps with the same name will be allowed only if they are not deployed on the same target (because for now, one target equals one domain)
Configuring a target for Docker will use an ssh config file to set the Host / Identity file to use, based on the target specific private key.
Configuring traefik to proxy TCP/UDP requests (https://doc.traefik.io/traefik/routing/routers/#configuring-tcp-routers) instead of just HTTP ones will enable, for example, postgres to be correctly exposed.
For this, we can rely on the port definitions in the compose file. For example:
services:
  app:
    image: something
    ports:
      - "8080:80" # Default = HTTP
      - "8081:81/tcp" # TCP
      - "8082:82/udp" # UDP
If not specifically set, seelf will assume it should use the HTTP router. From the Docker perspective, when no protocol is defined, it falls back to TCP. From our side, we need to distinguish (for traefik purposes) between raw TCP and HTTP-specific ports.
I think the most straightforward way, from a user standpoint, is to force the user to specify the protocol if needed. For example, to expose a postgres container, one may use the "5432:5432/tcp" port definition.
When loading the compose file project, we can rely on a specific interpolation function to catch the raw port value and check whether the protocol has been explicitly defined by the user, hence knowing the distinction between HTTP and TCP before Docker has fallen back to tcp. Something like this:
opts, _ := cli.NewProjectOptions([]string{"compose.yml"},
	cli.WithName("testouille"),
	cli.WithLoadOptions(func(o *loader.Options) {
		o.Interpolate = &interpolation.Options{
			TypeCastMapping: map[tree.Path]interpolation.Cast{
				"services.*.ports.[]": func(value string) (interface{}, error) {
					// Parse the port value definition and check if the protocol is user-defined
					// and save it somewhere to be able to distinguish HTTP/TCP/UDP ones
					// and use the appropriate traefik router.
					return value, nil
				},
			},
		}
	}),
	cli.WithNormalization(true),
)
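The port-parsing step inside that interpolation callback could look roughly like this standalone sketch (not seelf's actual code; it only handles the simple "host:container[/protocol]" form, not IP-prefixed or range definitions):

```go
package main

import (
	"fmt"
	"strings"
)

// portRouter returns which traefik router kind a compose port definition
// maps to: "http" when no protocol suffix is given, otherwise the explicit
// one ("tcp" or "udp"). The host mapping part is mandatory here, matching
// the constraint described above.
func portRouter(raw string) (hostPort, containerPort, router string, err error) {
	def := raw
	router = "http" // assumed default when the user specifies no protocol
	if i := strings.IndexByte(def, '/'); i >= 0 {
		router = def[i+1:] // explicitly set by the user, e.g. "tcp" or "udp"
		def = def[:i]
	}
	parts := strings.Split(def, ":")
	if len(parts) != 2 {
		return "", "", "", fmt.Errorf("host mapping is mandatory in %q", raw)
	}
	return parts[0], parts[1], router, nil
}

func main() {
	for _, raw := range []string{"8080:80", "8081:81/tcp", "8082:82/udp"} {
		h, c, r, err := portRouter(raw)
		fmt.Println(h, c, r, err)
	}
}
```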
Warning
Due to this, the host mapping part will be mandatory. Without it (e.g. just specifying - "8080" and relying on ephemeral ports) we can't distinguish between services and things may break. Services are looped over in a non-deterministic order and the interpolate function does not expose which service is being processed.
The first exposed HTTP port will be handled by the actual proxy using the generated application subdomain. Other ports will define a specific entrypoint, router and service with a unique name to reach that port.
The Service struct will store application service entrypoints. Every non-default entrypoint will be saved in the target because the target needs to configure them.
When cleaning up an application, we must ensure the mapping on the target side is deleted too.
If non-default entrypoints exist when configuring a target, we can launch a one-off container with ephemeral ports (for every port not mapped yet) and retrieve the allocated ones. This makes sure they are available on the host, leveraging Docker.
With those new ports found, we can configure the proxy with all the entrypoints added and relaunch it. If the configuration has not changed, Docker should skip the restart.
Note
For now, we will use the same proxy, causing a tiny period of unavailability. This keeps resource usage low, but in the future we could add a configuration option to expose those custom ports on a second proxy to prevent that unavailability.
So when a new deployment exposes new TCP or UDP entrypoints, the target will save them and trigger a re-configuration to handle them appropriately.
Note
With this solution, only the proxy knows the final url / port of everything. It makes it easy to change the URL (as is the case right now) or the port mapping without having to redeploy everything.
Update the Service struct and the UI: a Service exposed will now have an array of entrypoints, each with a protocol and a subdomain or port, and they will be made available in the UI so the user can know how to reach those entrypoints (based on the target url).
Hosted on GitHub Pages using something like Docusaurus.
To make things easier when migrating an application for example, enable the support for backup/restore of application volumes.
Allow refreshing of the user API key.
For now the installation is a bit clunky because I wanted to show how we can leverage the traefik proxy deployed by seelf at startup to also expose seelf under the same domain.
Maybe that was a mistake and I should show the easier way using docker run -d -e "[email protected]" -e "SEELF_ADMIN_PASSWORD=admin" -v "/var/run/docker.sock:/var/run/docker.sock" -v "seelfdata:/seelf/data" --restart=unless-stopped -p "8080:8080" yuukanoo/seelf, which binds the server directly on the host.
But since I really want to ease exposing seelf under a specific subdomain along with certificate generation, I've been tinkering with configuring traefik with a dynamic file exposing seelf (without requiring specific labels).
With this in mind, we can conditionally output this file if some configuration option is set, such as SEELF_SUBDOMAIN, and the command would then be docker run -d -e "SEELF_SUBDOMAIN=seelf" -e "[email protected]" -e "SEELF_ADMIN_PASSWORD=admin" -v "/var/run/docker.sock:/var/run/docker.sock" -v "seelfdata:/seelf/data" --restart=unless-stopped yuukanoo/seelf.
Also, add notes about how to update seelf to the latest version. Since the configuration is written at startup, the update process should be an easy one.
To prevent unneeded roundtrips, if a deployment has ended, stop polling for logs or state.
There's no need to keep it in pkg/log since the pkg directory is here to hold generic and reusable parts.
Better deployment logs with date, maybe by going back to JSON output and formatting it in the UI.
Make / find an icon and replace the default favicon.
Enable an application to have a custom domain to bypass a backend default one.
Also allow the API Key authorization only on some endpoints (such as deployment related stuff) for now.
seelf has a solid set of unit tests, but they have become complicated to manage.
A lot of tests need specific resources in specific states. Maybe we should add fixture packages to expose resource initialization helpers.
The testutil package could also be rewritten to provide simpler assertions (and keep calls consistent: in the current implementation, expected and actual are sometimes switched).
Maybe it can also expose common patterns such as arrange/act/assert or test tables to ease the process of writing those kinds of tests.
The automated release from the main branch failed. 🚨
I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I'm sure you can fix this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the main branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here are some links that can help you:
If those don't help, or if this issue is reporting something you think isn't right, you can always ask the humans behind semantic-release.
semantic-release cannot push the version tag to the branch main on the remote Git repository with URL https://[secure]@github.com/YuukanOO/seelf.
This can be caused by:
Good luck with your project ✨
Your semantic-release bot 📦🚀
For now, seelf is a single user application because that was simpler to start with.
We should add teams support to namespace applications and enable better access management.
Everything is in the title!
Support multiple workers to enable multiple deployments or tasks to run in parallel.
We should add a unique job key to prevent multiple workers from picking the same type of job (e.g. a deployment for the same app and environment).
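The unique-key idea could be sketched like this (all names are hypothetical): a dispatcher tracks the keys currently being processed and refuses to hand out a job whose key is already in flight.

```go
package main

import (
	"fmt"
	"sync"
)

// Job carries a Key identifying the resource it touches, e.g. a
// hypothetical "deployment:my-app:production".
type Job struct {
	ID  int
	Key string
}

// Dispatcher hands out jobs while guaranteeing that two jobs sharing
// the same key are never processed concurrently.
type Dispatcher struct {
	mu       sync.Mutex
	inFlight map[string]bool
}

func NewDispatcher() *Dispatcher {
	return &Dispatcher{inFlight: make(map[string]bool)}
}

// TryAcquire reports whether the job's key was free and marks it taken.
func (d *Dispatcher) TryAcquire(j Job) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.inFlight[j.Key] {
		return false
	}
	d.inFlight[j.Key] = true
	return true
}

// Release frees the key once the job is done.
func (d *Dispatcher) Release(j Job) {
	d.mu.Lock()
	defer d.mu.Unlock()
	delete(d.inFlight, j.Key)
}

func main() {
	d := NewDispatcher()
	a := Job{ID: 1, Key: "deployment:my-app:production"}
	b := Job{ID: 2, Key: "deployment:my-app:production"}
	fmt.Println(d.TryAcquire(a)) // true: key was free
	fmt.Println(d.TryAcquire(b)) // false: same key already in flight
	d.Release(a)
	fmt.Println(d.TryAcquire(b)) // true again after release
}
```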
The duration should appear in "realtime". For this to work, we need to store the offset between the server time and the user's computer. Then, a simple setInterval should do the trick.