convox / convox

Multicloud Platform as a Service

Home Page: https://convox.com

License: Apache License 2.0

Languages: Dockerfile 0.19%, Makefile 0.37%, Go 78.03%, Shell 3.55%, HCL 16.89%, Smarty 0.34%, CSS 0.50%, JavaScript 0.14%
Topics: paas, multicloud, deployment

convox's Introduction

Convox

Convox is an open-source PaaS based on Kubernetes, available for multiple cloud providers.

Supported Clouds

  • Amazon Web Services
  • Digital Ocean
  • Google Cloud
  • Microsoft Azure

Getting Started

Installation

Features

Resources

Development Tips

When testing new changes, a good way to add them to a test rack is to build the image locally, push it to a public repo, and update the k8s deployment for the API:

docker build -t user/convox:tag .
docker push user/convox:tag
kubectl set image deploy api system=user/convox:tag -n rackName-system

If you are testing changes to the Terraform, install the rack with the following command (run from the repository root) so that the local /terraform folder is mapped into the rack's Terraform manifest:

CONVOX_TERRAFORM_SOURCE=$PWD//terraform/system/%s convox rack install aws rack1

After saving your changes, go to the rack directory (Linux: ~/.config/convox/racks/rack1; macOS: /System/Volumes/Data/Users/$PROFILENAME/Library/Preferences/convox/racks) and run terraform apply.
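The apply step can be sketched as a small script (a sketch only: the rack name is the rack1 example from above, and it assumes the macOS Library/Preferences path is reachable under $HOME):

```shell
# Hypothetical rack name from the install example above
RACK=rack1

# Pick the per-OS location of the generated rack manifest
# (assumption: on macOS the Library/Preferences path lives under $HOME)
case "$(uname -s)" in
  Darwin) RACK_DIR="$HOME/Library/Preferences/convox/racks/$RACK" ;;
  *)      RACK_DIR="$HOME/.config/convox/racks/$RACK" ;;
esac

echo "$RACK_DIR"
# cd "$RACK_DIR" && terraform apply
```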

License

convox's People

Contributors

azazeal, beastawakens, bigshika, camerondgray, codelitt, conarro, ddollar, dependabot[bot], erichummel, heronrs, hsahovic, jfmyers, jsierles, kaiomagalhaes, lballore, lucasmacedot, lukaszjankowski, marcuswestin, mattdennewitz, mbirman, moisesrodriguez, niclas-ahden, nightfury1204, ntner, rabidaudio, sanketsaurav, stevenpitts, twsouza, viniciusaa, yuriadams


convox's Issues

CLI should ignore .DS_Store in ~/Library/Preferences/convox/racks

Ran into a super weird issue that took a little while to figure out. If you open ~/Library/Preferences/convox/racks in Finder on a Mac, it will create a .DS_Store file inside this directory. Then the convox racks command will start to fail with this error:

$ convox racks
ERROR: parse "\x00\x00\x00\x01Bud1\x00\x00\x10\x00\x00\x00\b\x00\x00\x00\x10\x00\x00\x00\x00%\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\b\x00\x00\x00\b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00": net/url: invalid control character in URL

Just thought I should let you know!
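Until the CLI ignores such files, a minimal workaround is to delete the stray file before running the command (a sketch; the path assumes the default macOS config location mentioned above):

```shell
# Remove the Finder metadata file the CLI chokes on
# (path assumes the default macOS config location)
rm -f "$HOME/Library/Preferences/convox/racks/.DS_Store"
```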

`drain` doesn't actually apply to anything on the v3 side of things.

Discovered by accident today. The drain configuration in the manifest doesn't apply to any underlying template. k8s doesn't have the same concept of drain that ECS does; instead it uses the termination grace period for the same purpose.
So yeah: whether the v3 docs should just be updated to show that drain is deprecated, or the two configs should be merged behind the scenes somehow, is up to you!

Support for syslog formatting needed to push logs to Appsignal

Appsignal is a great way to monitor applications: https://www.appsignal.com
I've used it for dozens of apps over the past decade because it works great and is competitively priced.

They have recently started to build out log aggregation.

It would be great if you expanded the current syslog (tcp+tls) style logging to include support for their new logging product.

I think there are probably tricks I can do to make it work, but official support would be better.

export terraform files

I use Convox to set up AWS racks and have often gotten clients going with it. They frequently ask for the Terraform files, which I've been writing manually, although I'm aware Convox itself uses Terraform.

Can I request a feature to allow saving/dumping the terraform configuration files out after setting up a rack?

BuildKit socket timeout on 3.12.0

Since upgrading a Rack to 3.12.0, we're experiencing a lot of timeouts on build jobs/workflows, with could not connect to unix:///run/buildkit/buildkitd.sock after 10 trials showing in the logs in Console. The job never completes, so it sits there spinning without actually doing anything, holding up and delaying the subsequent jobs.

Looks a lot like this: moby/buildkit#1423 which appears to have been fixed a few years ago with a configurable retry count env var. Can we set that higher please?

And/or fix why the buildkit socket isn't accessible?

Terraform fails when installing local rack on macos 14

Following the steps to set up a development rack. Unfortunately, when I run convox rack install local andy-dev -v 3.15.2 os=mac, terraform fails.

Command output

convox rack install local andy-dev -v 3.15.2 os=mac 

Initializing the backend...
Upgrading modules...
Downloading git::https://github.com/convox/convox.git?ref=3.15.2 for system...
- system in .terraform/modules/system/terraform/system/local
- system.platform in .terraform/modules/system/terraform/platform
- system.rack in .terraform/modules/system/terraform/rack/local
- system.rack.api in .terraform/modules/system/terraform/api/local
- system.rack.api.k8s in .terraform/modules/system/terraform/api/k8s
- system.rack.k8s in .terraform/modules/system/terraform/rack/k8s
- system.rack.resolver in .terraform/modules/system/terraform/resolver/local
- system.rack.resolver.k8s in .terraform/modules/system/terraform/resolver/k8s
- system.rack.router in .terraform/modules/system/terraform/router/local

Initializing provider plugins...
- Finding hashicorp/random versions matching "~> 3.1"...
- Finding hashicorp/tls versions matching "~> 3.1"...
- Finding hashicorp/http versions matching "~> 2.1"...
- Finding hashicorp/kubernetes versions matching "~> 2.19.0"...
- Finding hashicorp/external versions matching "~> 2.1"...
- Installing hashicorp/http v2.2.0...
- Installed hashicorp/http v2.2.0 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.19.0...
- Installed hashicorp/kubernetes v2.19.0 (signed by HashiCorp)
- Installing hashicorp/external v2.3.2...
- Installed hashicorp/external v2.3.2 (signed by HashiCorp)
- Installing hashicorp/random v3.6.0...
- Installed hashicorp/random v3.6.0 (signed by HashiCorp)
- Installing hashicorp/tls v3.4.0...
- Installed hashicorp/tls v3.4.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Error: Missing required argument

  on .terraform/modules/system/terraform/rack/local/main.tf line 1, in module "k8s":
   1: module "k8s" {

The argument "telemetry_map" is required, but no definition was found.

Error: Missing required argument

  on .terraform/modules/system/terraform/rack/local/main.tf line 1, in module "k8s":
   1: module "k8s" {

The argument "telemetry_default_map" is required, but no definition was found.

Error: Unsupported argument

  on .terraform/modules/system/terraform/rack/local/main.tf line 13, in module "k8s":
  13:   settings            = var.settings

An argument named "settings" is not expected here.
ERROR: exit status 1

Generated files:

Here's what is in the rack directory:

vars.json

{
  "name": "andy-dev",
  "os": "mac",
  "release": "3.15.2"
}

main.tf

		module "system" {
			source = "github.com/convox/convox//terraform/system/local?ref=3.15.2"
			name = "andy-dev"
			os = "mac"
			rack_name = "andy-dev"
			release = "3.15.2"
		}

		output "api" {
			value = module.system.api
		}

		output "provider" {
			value = "local"
		}

		output "release" {
			value = "3.15.2"
		}

v3 Azure rack - Unsupported TF module argument causing Azure Rack deploy to fail

Installing a new Azure rack from CLI or Console results in a terraform fail.

terraform/fluentd/elasticsearch/main.tf does not define a resolver argument, which terraform/api/azure/main.tf refers to:

Error: Unsupported argument

on .terraform/modules/system/terraform/api/azure/main.tf line 40, in module "fluentd":
40: resolver = var.resolver

Is it possible to add a custom `nginx.ingress.kubernetes.io/server-snippet` using a convox.yml configuration

Just looking for help on a feature.

Currently, we're using kubectl to modify the ingress directly to apply custom annotations, e.g.:

  • nginx.ingress.kubernetes.io/proxy-buffer-size
  • nginx.ingress.kubernetes.io/server-snippet

Is this supported through the convox manifest file?

I can see the template file for the ingress has a generic Annotations list

{{ range $k, $v := .Annotations }}
but couldn't figure out whether this is linked to the manifest.
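For reference, the kubectl workaround described above might look like the following (the ingress name and namespace are placeholders, and the command is printed as a dry run rather than executed):

```shell
# Dry-run: print the annotate command instead of running it
# ("web" and "myapp-production" are hypothetical names)
echo kubectl annotate ingress web -n myapp-production \
  "nginx.ingress.kubernetes.io/proxy-buffer-size=16k" --overwrite
```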

Updates to the Rack (version, params etc) which cause node group migrations do not respect app/service level healthiness/scaling - causes downtime

Rack version: tested on 3.13.9 but I believe this would affect other versions as well.
Cloud: AWS

Scenario

Triggering a rack update, or changing rack params that affect the node groups (node_disk, node_type, etc.), causes Terraform to request the creation of new node groups alongside the existing ones. Once the new node groups have been created, the old node groups are torn down and discarded.

Expected outcome

Apps running on the Rack experience no downtime, as the horizontal pod autoscaler will start up new processes on the new node groups and wait for them to be healthy before allowing the old processes on the old node groups to be killed so the old node groups can be deleted.

Actual outcome

Apps can experience downtime because the old node groups are deleted before the new processes on the new node groups are healthy. It may need confirmation, but it looks like the old node groups are marked for deletion as soon as the new node groups are created, regardless of process health.
We have a particular app with a startup time of 15-20 minutes before it is healthy (pulling a very large image from ECR contributes to that), so we need a particularly slow rollout. Increasing the node_disk size on one of our production racks yesterday caused a 15-minute outage.

Steps to reproduce

Create an app with a health check but a long startup time (>10 minutes for instance). Monitor the health endpoint for the app continuously. Update a rack param that causes node group migration. Experience the downtime!

BuildKit builds not able to see/use cache

Trying out the dedicated build node on v3.12.3, I see that builds do not use a local cache, and we get this error at the start of the build:

time="2023-07-12T16:37:55Z" level=warning msg="local cache import at /var/lib/buildkit not found due to err: could not read /var/lib/buildkit/index.json: open /var/lib/buildkit/index.json: no such file or directory" spanID=377bca3a664358f9 traceID=14d8f7a50f357d8b1e6d653e0dc4588c

Which suggests that the /var/lib/buildkit path is not being written correctly. This lack of caching negates the main purpose of a dedicated build node. I can see that in https://github.com/convox/convox/blob/allow-different-build-node-amitype/pkg/build/buildkit.go#L214C1-L215 we're trying to use the path but I wonder if that needs to be created in the Dockerfile.buildkit (https://github.com/convox/convox/blob/allow-different-build-node-amitype/Dockerfile.buildkit) or in the buildkit-daemonless.sh (https://github.com/convox/convox/blob/allow-different-build-node-amitype/scripts/buildctl-daemonless.sh)?

Rack dies if you hit the Docker Hub rate limit and try to update to add authentication

Hit an interesting issue, luckily only on staging, but could be disastrous on production...

Starting seeing many review workflows failing at the build stage with messages like:

error: failed to solve: ruby:3.2.2-buster: failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/ruby/manifests/sha256:38f85fe6580dade01906f3b20381668250731d8a49cff715784a614ba0ffd815: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

So thought "fair enough, it's been a busy afternoon, I'll add some Docker Hub authentication credentials to at least double our limit".

Added docker_hub_username and docker_hub_password to the rack params (side note: these should really be documented), so the Rack goes to update itself, but then the rack update ends up failing with:

Error: Waiting for rollout to finish: 1 old replicas are pending termination...

  on .terraform/modules/system/terraform/api/k8s/main.tf line 45, in resource "kubernetes_deployment" "api":
  45: resource "kubernetes_deployment" "api" {


ERROR: exit status 1
ERROR: we have been notified about a system error: (d1b29934916543ed8ccfd9841d3dbaf1)

And the Rack has now become completely unresponsive. I'm guessing that when the new API services are trying to start up, they aren't able to do so successfully because of the aforementioned rate limit, so everything is just dead.

I'm just waiting to see if k8s will recover automatically, but if it doesn't, I'll have to rebuild this rack from scratch 😭
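For reference, the params change described above would be something like this (placeholder credentials; printed as a dry run because, as noted, setting these params immediately triggers a rack update):

```shell
# Dry-run: print the params command instead of executing it
# (credentials are placeholders)
echo convox rack params set \
  docker_hub_username=myuser \
  docker_hub_password=REDACTED
```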

`convox start` cannot push to registry on macos 14

With a correctly-configured (I believe) development rack on arm64 macos, I am unable to stand up an application on that local rack.

tl;dr error message:

<... snip happy building noises ...>
build  | #19 ERROR: failed to push registry.andy.macdev.convox.cloud/notion-domain:web.BUIHOOSZEET: failed to do request: Head "https://registry.andy.macdev.convox.cloud/v2/notion-domain/blobs/sha256:4638dd7181c7b9ed00f93fc997d92f1ebe9c8dba5d6a7bac56be994c48045da6": dial tcp 127.0.0.1:443: connect: connection refused
build  | ------
build  |  > exporting to image:
build  | ------
build  | error: failed to solve: failed to push registry.andy.macdev.convox.cloud/notion-domain:web.BUIHOOSZEET: failed to do request: Head "https://registry.andy.macdev.convox.cloud/v2/notion-domain/blobs/sha256:4638dd7181c7b9ed00f93fc997d92f1ebe9c8dba5d6a7bac56be994c48045da6": dial tcp 127.0.0.1:443: connect: connection refused

Setup

  • Example app
  • M1 macbook pro running macos Sonoma (14.2.1)
  • Convox CLI version: 3.15.2
  • minikube tunnel running and authenticated

Convox development rack:

โฏ convox rack 
Name      andy
Provider  local
Router    router.andy.macdev.convox.cloud
Status    running
Version   3.13.0-arm64

(see #743 for the reason I'm running a 3.13.0 rack)

Kubectl pod statuses:

NAMESPACE       NAME                                        READY   STATUS      RESTARTS      AGE
andy-system     api-76bddb8bb5-jmp86                        1/1     Running     0             10m
andy-system     atom-7fbd4b9bb-j75nw                        1/1     Running     0             10m
andy-system     registry-8558d7bb6c-5r44l                   1/1     Running     0             10m
andy-system     resolver-5f985fc6cd-kjlhc                   1/1     Running     0             10m
cert-manager    cert-manager-645b4d4bcb-llldz               1/1     Running     0             9m14s
cert-manager    cert-manager-cainjector-57967fcd59-pmnsm    1/1     Running     0             9m14s
cert-manager    cert-manager-webhook-68b8f57ccb-9pfgf       1/1     Running     0             9m14s
ingress-nginx   ingress-nginx-admission-create-grt48        0/1     Completed   0             20m
ingress-nginx   ingress-nginx-admission-patch-z6lh5         0/1     Completed   0             20m
ingress-nginx   ingress-nginx-controller-6c55c66679-b9dk4   1/1     Running     0             20m
kube-system     coredns-64897985d-kv2nj                     1/1     Running     0             20m
kube-system     etcd-minikube                               1/1     Running     0             20m
kube-system     kube-apiserver-minikube                     1/1     Running     0             20m
kube-system     kube-controller-manager-minikube            1/1     Running     0             20m
kube-system     kube-ingress-dns-minikube                   1/1     Running     0             18m
kube-system     kube-proxy-s5s8r                            1/1     Running     0             20m
kube-system     kube-scheduler-minikube                     1/1     Running     0             20m
kube-system     storage-provisioner                         1/1     Running     2 (20m ago)   20m

NLB timeout is exposed as settable though it is not settable

service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"

This line suggests the NLB timeout is configurable. Based on my reading of the AWS NLB docs, I don't believe it is:

Elastic Load Balancing sets the idle timeout value for TCP flows to 350 seconds. You cannot modify this value

https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout

ERROR on promoting app with volumes

Rack Version: 3.13.5

Error:
Promoting RGUFGAVKBJW... ERROR: template: :10:22: executing "" at <.Labels>: can't evaluate field Labels in type string

If I remove the "volumes" entry, it deploys fine. This seems to be a new issue as we recently updated our Rack version and can no longer deploy this app.

convox.yaml

services:
  api:
    build: .
    internal: true
    singleton: true
    health:
      path: /health
      grace: 30
    port: 8108
    scale:
      count: 1
      memory: 4096
      cpu: 1024
    environment:
      - TYPESENSE_API_KEY
    volumes:
      - /var/typesense/data

Deployment stuck in "Waiting for app to be ready"

Posting in case someone else runs into this 👇

Had a convox deploy stuck after the build step with "Waiting for app to be ready"; convox apps reported that the app was updating, but convox apps cancel could not be performed.

I eventually figured out this was due to atom getting in a bad state (I think flakey network on my part or something - not sure).

Fix:

  1. Set up direct kubernetes access
  2. Get the current app namespace (kubectl get namespaces)
  3. Manually alter the atom state from Pending to Running (kubectl edit atoms -n <namespace>)

Hope this helps someone else, and thanks for all the great work on Convox <3 🙇

Rack update stuck

(Apologies in advance if this isn't the right place to report this. Please let me know if there's a better place!)

This evening I tried to update a v3 rack running a fairly old version (3.0.38). I first ran convox rack update 3.0.54 to bring it to the latest version running k8s 1.17. This ran quickly and succeeded without issues.

I then ran convox rack update 3.2.5 (last version before k8s 1.19). Unfortunately, this update has been stuck for several hours with no new updates. Here are the last lines from the terraform run that show up in the https://console.convox.com logs:

module.system.module.rack.module.api.data.aws_iam_policy_document.assume_api: Refreshing state...
module.system.module.rack.module.router.module.nginx.kubernetes_config_map.nginx-configuration: Refreshing state... [id=convoxprod-system/nginx-configuration]
module.system.module.rack.module.router.module.nginx.kubernetes_config_map.tcp-services: Refreshing state... [id=convoxprod-system/tcp-services]
module.system.module.rack.module.router.module.nginx.kubernetes_config_map.udp-services: Refreshing state... [id=convoxprod-system/udp-services]
module.system.module.rack.module.router.module.nginx.kubernetes_horizontal_pod_autoscaler.router: Refreshing state... [id=convoxprod-system/nginx]
module.system.module.rack.module.router.module.nginx.kubernetes_cluster_role_binding.ingress-nginx: Refreshing state... [id=ingress-nginx]
module.system.module.rack.module.router.module.nginx.kubernetes_deployment.ingress-nginx: Refreshing state... [id=convoxprod-system/ingress-nginx]

It's been like this for several hours. It looks like the rack is in a stuck state as well:

$ cx rack params
ERROR: state is locked for rack: <rackname>

After several hours, I tried a variety of approaches to try to unstick the update (including killing the underlying EC2 instances) but none have been successful.

How can I cancel and retry this update? Is there something I can do to ensure the update succeeds next time?

For reference, I am able to run kubectl and run various k8s commands according to the docs here: https://docs.convox.com/management/direct-k8s-access/. I also have access to the EKS cluster info through the AWS web console (I followed the instructions at https://community.convox.com/t/resolved-how-can-i-get-permission-to-access-the-eks-cluster-from-the-aws-console/828/2).

cannot unmarshal !!str `....` into manifest.Certificate with convox v3.10.7

โฏ convox version
client: 3.10.7
server: 3.10.7

After upgrading our racks from 3.6.3 all the way to 3.10.7, our deploys are no longer working due to a YAML parse error.

Promoting RXECYSMJXUT... ERROR: yaml: unmarshal errors:
  line 4: cannot unmarshal !!str `portal....` into manifest.Certificate

Our convox.yml had a certificate: ... line that matched the comma-separated list of the domains: line. Based on the documentation, it seems like this line may not be required? However, after removing the certificate lines from all the convox.yaml files, the error above persists when doing a new build and attempting to promote it.

Based on #557, it sounds like certificate still exists? But it doesn't appear to be documented.

Even commands like convox services give the unmarshal error for some unknown reason. Is it stuck on the YAML for the currently released apps, preventing us from promoting the new builds?

Is Convox limiting access on free accounts?

I am using the Convox service, and recently, when using the CLI, I received a notification saying: "You have exceeded the monthly rate limit assigned to your Free organization for this request." Can you point me to details about the restrictions on free-account activity? I have searched, but I couldn't find any relevant documentation.

SSL Certificates aren't working with v3

I've tried creating a few ssl certificates and CNAME them to my rack.

Every single one of them gives me the default Kubernetes Ingress Controller Fake Certificate

Expanding on how to troubleshoot this, or even what we should be looking at, would go a long way.

I've tried:
example.org
app.example.org
app.dev.example.org

CNAME created and pointed to convox rack -> Router

Tried reaching out to [email protected] and [email protected]. The login page on the community forum is broken.

ANY help would be appreciated here.

Add a "how to develop" section to the README

  1. export VERSION={your own descriptive release name}, e.g. export VERSION=edlocaldev
  2. make release (creates a release for you through GitHub Actions; give it a few minutes and check the progress on GitHub.)
  3. convox rack update {release name} -r {local rack name}, e.g. convox rack update edlocaldev -r dev
  4. export IMAGE=convox/convox:{release name} && export RACK={rack name}, e.g. export IMAGE=convox/convox:edlocaldev && export RACK=dev
  5. Make local code changes
  6. make dev (builds your updated code, pushes the image, and restarts your local Rack components)
  7. Check and test your Rack for your updates. Go to 5 and repeat as necessary.

Docker tags

When tagging a Docker release for amd64 and arm64, add an additional tag to each release, latest-amd64 and latest-arm64, so users can easily access the latest build for a specific platform.

I can submit a PR for this if it will be approved. Let me know.
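A sketch of what this proposal would mean at release time (the version is a placeholder, and the commands are printed as a dry run rather than invoking docker):

```shell
VERSION=3.15.2   # hypothetical release with per-arch images
for ARCH in amd64 arm64; do
  # Dry-run: print the extra tag and push for each platform
  echo "docker tag convox/convox:${VERSION}-${ARCH} convox/convox:latest-${ARCH}"
  echo "docker push convox/convox:latest-${ARCH}"
done
```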

panic: concurrent write to websocket connection

output:

goroutine 54 [running]:
github.com/gorilla/websocket.(*messageWriter).flushFrame(0x14000ac6900, 0x1, {0x0?, 0x0?, 0x0?})
github.com/gorilla/[email protected]/conn.go:617 +0x460
github.com/gorilla/websocket.(*messageWriter).Close(0x14000ac6930?)
github.com/gorilla/[email protected]/conn.go:731 +0x48
github.com/gorilla/websocket.(*Conn).beginMessage(0x140005c09a0, 0x14000ac6930, 0x9)
github.com/gorilla/[email protected]/conn.go:480 +0x3c
github.com/gorilla/websocket.(*Conn).NextWriter(0x140005c09a0, 0x9)
github.com/gorilla/[email protected]/conn.go:520 +0x40
github.com/gorilla/websocket.(*Conn).WriteMessage(0x14000169f70?, 0x14000169f48?, {0x102fee7c0, 0x0, 0x0})
github.com/gorilla/[email protected]/conn.go:773 +0x110
github.com/convox/stdsdk.keepalive({0x1022e00f8, 0x102fee7c0}, 0x0?)
github.com/convox/[email protected]/client.go:291 +0x88
created by github.com/convox/stdsdk.(*Client).Websocket in goroutine 1
github.com/convox/[email protected]/client.go:223 +0x70c

convox cli 3.13.8
installed via homebrew
running on Apple M1 max
os: ventura 13.6

How to set an existing VPC through CLI before the rack is created

Hello, in the CLI, how would I go about setting the parameter before creating the rack?

Because I can only use convox rack params set vpc_id=foo-bar-vpc after the rack is created with, for example, convox rack install aws foo-bar.

Do I have to first create a rack with a new VPC, then set the parameter to an existing VPC, then update the rack?

The parameter created in this PR: #417

To illustrate what I mean:
(screenshot omitted)

However, when I do create the rack, the new rack is created with the Foo: bar input on the main module

Cannot update to 3.8.1 in aws eu-north-1

It's probably #529 which breaks convox rack update in the eu-north-1 region with this message:

{
  RespMetadata: {
    StatusCode: 400,
    RequestID: "0a1bca35-3fe4-4093-a6dc-70f1ae459f5d"
  },
  Message_: "Tagging an elastic gpu on create is not yet supported in this region."
}

  on .terraform/modules/system/terraform/cluster/aws/main.tf line 90, in resource "aws_eks_node_group" "cluster":
  90: resource "aws_eks_node_group" "cluster" {

How to access cluster with kubectl?

I'm trying to set up basic authentication on my AWS EKS cluster that I've created with Convox. To do this, I believe I need to use kubectl to access my cluster.

Using kubectl to gain direct access is described here in the v1 documentation, however there doesn't appear to be any mention of it in the v2 documentation.

When I attempt to configure kubectl to point at my rack, as described in the v1 documentation:
convox rack kubeconfig > $HOME/.kube/myrack-config
I get this error:
ERROR: 0 args required

I assume this is because the kubeconfig arg is deprecated?

If so, was it replaced with a different way to access the cluster?

FYI I'm using Convox server version 3.12.4

Any help on this would be appreciated, thanks.

Releases have disappeared 😱

Running a rack on 3.5.7, I went to update the node_type param, but the update failed because Terraform could not download the files from GitHub: that release has disappeared.
I can't even ascertain the latest release to update to before going to the next minor, as a lot of release information appears to have been deleted or vanished.
