k3ai / k3ai

A lightweight tool to get an AI Infrastructure Stack up in minutes, not days. K3ai will take care of setting up K8s for you, deploy the AI tool of your choice, and even run your code on it.

Home Page: https://k3ai.in

License: BSD 3-Clause "New" or "Revised" License

Makefile 1.60% Go 98.11% Shell 0.29%
machine-learning datascience mlops devops artificial-intelligence infrastructure-as-code notebooks mlflow airflow kubeflow

k3ai's Introduction


Welcome to K3ai Project

K3ai is a lightweight tool to get an AI Infrastructure Stack up in minutes, not days.



NOTE on the K3ai origins

The original K3ai Project was developed in 2 weeks at the end of October 2020 by:

K3ai v1.0 was entirely rewritten by Alessandro Festa during October 2021 to offer a better user experience.

Thanks to the amazing and incredible people and projects that have been instrumental in creating the K3ai project repositories, website, etc.

⚡️ Quick start

Let's discover K3ai in a few simple steps.

🌘 Getting Started

Get started by downloading k3ai from the release page here.

Or try the K3ai companion script with this command:

curl -sfL https://get.k3ai.in | sh -

🌗 Load K3ai configuration

Let's start loading the configuration:

k3ai up

The first time k3ai runs it will ask for a GitHub PAT (Personal Access Token), which it uses to avoid API rate limits. Check the GitHub documentation to learn how to create one. Your personal GitHub PAT only needs read permission on repositories.


🌖 Configure the base infrastructure

Choose your favourite Kubernetes flavor and run it.

To see which K8s flavors are available:

k3ai cluster list --all

It should print something like:

┌─────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ INFRASTRUCTURE                                                                                          │
├───────┬─────────────────────────────────────────────────────┬───────┬────────┬─────────┬────────────────┤
│ TYPE  │ DESCRIPTION                                         │ KIND  │ TAG    │ VERSION │ STATUS         │
├───────┼─────────────────────────────────────────────────────┼───────┼────────┼─────────┼────────────────┤
│ CIVO  │ The First Cloud Native Service Provider Power...    │ infra │ cloud  │ latest  │ Available      │
├───────┼─────────────────────────────────────────────────────┼───────┼────────┼─────────┼────────────────┤
│ EKS-A │ Amazon Eks Anywhere Is A New Deployment Option...   │ infra │ hybrid │ v0.5.0  │ Available      │
│       │ ate And Operate Kubernetes Clusters On Custome...   │       │        │         │                │
├───────┼─────────────────────────────────────────────────────┼───────┼────────┼─────────┼────────────────┤
│ K3S   │ K3s Is A Highly Available, Certified Kubernetes...  │ infra │ local  │ latest  │ Available      │
│       │ oads In Unattended, Resource-Constrained...         │       │        │         │                │
├───────┼─────────────────────────────────────────────────────┼───────┼────────┼─────────┼────────────────┤
│ KIND  │ Kind Is A Tool For Running Local Kubernetes...      │ infra │ local  │ v0.11.2 │ Available      │
│       │ as Primarily Designed For Testing Kubernetes...     │       │        │         │                │
│       │  Or Ci.                                             │       │        │         │                │
├───────┼─────────────────────────────────────────────────────┼───────┼────────┼─────────┼────────────────┤
│ TANZU │ Tanzu Community Edition Is A Fully-Featured...      │ infra │ hybrid │ latest  │ In Development │
│       │ ers And Users. It Is A Freely Available...          │       │        │         │                │
│       │  Of Vmware Tanzu.                                   │       │        │         │                │
└───────┴─────────────────────────────────────────────────────┴───────┴────────┴─────────┴────────────────┘

Now let's start with something super fast and super simple:

k3ai cluster deploy --type k3s --name mycluster

🌝 Install a plugin to do your AI experimentations

Now that the server is up and running let's type:

k3ai plugin deploy -n mlflow -t mycluster

At the end of the installation, K3ai will print the URL where you can access the MLFlow tracking server. That's all; now just start having fun with K3ai!

🌈 Push a piece of code to the AI tools and focus on your goals

Let's push some code to the AI tool (e.g. MLFlow):

k3ai run --source https://github.com/k3ai/quickstart --target mycluster --backend mlflow

Wait for the run to complete, then log in to the backend AI tool (e.g. the MLFlow UI at http://<YOUR IP>:30500).

Current implementation support

Operating Systems

| Operating System | K3ai v1.0.0 |
| --- | --- |
| Linux | Yes |
| Windows | In Progress |
| macOS | In Progress |
| ARM | In Progress |

Clusters

| K8s Clusters | K3ai v1.0.0 |
| --- | --- |
| Rancher K3s | Yes |
| VMware Tanzu Community Ed. | Yes |
| Amazon EKS Anywhere | Yes |
| KinD | Yes |

Plugins

| Plugins | K3ai v1.0.0 |
| --- | --- |
| Kubeflow Components | Yes |
| MLFlow | Yes |
| Apache Airflow | Yes |
| Argo Workflows | Yes |

⭐️ Project assistance

If you want to say thank you and/or support the active development of the K3ai Project:

Together, we can make this project better every day! 😘

⚠️ License

K3ai is free and open-source software licensed under the BSD 3-Clause license. The official logo was created by Alessandro Festa.

k3ai's People

Contributors: alefesta, burntcarrot, github-actions[bot], thliang01

k3ai's Issues

[Feature] - Support One-click pipeline submissions for given backend

We should have a command like k3ai run that allows a user to submit a code artifact to certain backends.
This would allow the user to test immediately with the AI platform and focus on the result. The command should look like:

k3ai run --source [REMOTE GIT URL] --backend [kfp/mlflow/airflow] --target [CLUSTER NAME] --type [py/yaml/zip]

Ideally, the minimal backends to support are:

  • Kubeflow pipelines (argo)
  • MLFlow
  • Apache Airflow

For the type (in order to understand what to do):

  • py (python script)
  • yaml
  • zip (dsl in kfp)
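The type dispatch above can be sketched in Go (the project's language). The strategy names and the fallback to the source file's extension are assumptions for illustration, not k3ai's actual behavior:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// runnerFor maps an explicit --type value, or the source file extension when
// --type is omitted, to an execution strategy. Strategy names are invented
// for illustration only.
func runnerFor(artifact, typeFlag string) (string, error) {
	t := typeFlag
	if t == "" {
		t = strings.TrimPrefix(filepath.Ext(artifact), ".")
	}
	switch t {
	case "py":
		return "submit-python-script", nil
	case "yaml":
		return "apply-pipeline-manifest", nil
	case "zip":
		return "upload-compiled-dsl", nil
	default:
		return "", fmt.Errorf("unsupported artifact type %q", t)
	}
}

func main() {
	r, _ := runnerFor("train.py", "")
	fmt.Println(r)
}
```

An explicit --type would always win over the extension, so a user can submit, say, a .txt file that is really a pipeline manifest.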

[Feature] - Windows support

We should aim to support Windows, more specifically WSL2.

Current challenges:

  • Some K8s flavors require systemd (e.g. K3s), so we should use an alternative like K3d
  • Exposing ports could be challenging
  • How do we discriminate between the various OSes in the plugins?
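For the OS-discrimination question, one common heuristic is to check /proc/version for the WSL marker. This is a sketch of that heuristic, not detection code k3ai ships:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// isWSL reports whether the process appears to be running under WSL/WSL2 by
// looking for the "microsoft" marker in /proc/version.
func isWSL() bool {
	b, err := os.ReadFile("/proc/version")
	if err != nil {
		return false // not Linux, or /proc unavailable
	}
	return strings.Contains(strings.ToLower(string(b)), "microsoft")
}

func main() {
	if isWSL() {
		fmt.Println("WSL detected: prefer k3d over k3s (no systemd)")
	} else {
		fmt.Println("no WSL marker found")
	}
}
```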

[BUG] - certain plugins fail to install

Describe the bug

Think the install dir could be hard coded?

To Reproduce
Steps to reproduce the behavior:

  1. k3ai plugin deploy -n kf-pa -t mycluster

Expected behavior

installs ok like the mlflow one

Screenshots

fermi ॐ  ~:
2112 ☿ k3ai plugin deploy -n kf-pa -t mycluster                                                                                                                         ⏎

mkdir /home/alefesta: permission denied 🚀 Starting installation...
 
 ⏳	Working... 
 🚀 error: evalsymlink failure on '/home/john/.k3ai/git/apps/pipeline/upstream/cluster-scoped-resources' : lstat /home/john/.k3ai/git: no such file or directory
 🚀 Working on the installation...

[Feature] - GPU support through NVIDIA GPU Operator

We should aim to support GPUs, if available, through a new flag in the cluster deploy command. Something like:

k3ai cluster deploy --type k3s --name mycluster --gpus

The --gpus flag should tell K3ai to deploy the NVIDIA GPU Operator on the cluster. We assume the cards are present and everything else is already set up if needed. The operator should be part of the common plugins.

[Feature] - Artifact collection through multiple backends

🚀 Is your feature request related to a problem? Please describe.
For different backends, there are different means to retrieve artifacts. Once retrieved, we would also need to store these artifacts to a specified path or URL.

💡 Describe the solution you'd like
A new feature that could automatically collect artifacts and publish them to a specified path/URL as mentioned in the config file.

🤩 Describe alternatives you've considered

For MLFlow:
MLFlow supports 4 ways to store artifacts including SQLite and local storage.

We can store artifacts on:

  • Different cloud providers through config files
  • On a separate Github repo for metrics (PAT needs to be passed through Github Actions)

Github Actions supports uploading artifacts through their own custom action.
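As a sketch, the official actions/upload-artifact action could publish an exported metrics report from a workflow; the report path below is an assumption, not a path k3ai writes today:

```yaml
# Hypothetical workflow step using the official upload-artifact action.
- name: Upload MLFlow metrics report
  uses: actions/upload-artifact@v3
  with:
    name: mlflow-report
    path: reports/        # assumed location of the exported report
```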

[BUG] - postgres crashes when deploying mlflow on k3s / intel

Describe the bug
in postgres pod:
Bus error (core dumped)

running on:
(base) pax@ithaki:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal
(base) pax@ithaki:~$

To Reproduce

k3ai cluster deploy --type k3s --name arrakis
k3ai plugin deploy -n mlflow -t arrakis
k3s kubectl logs postgres-0

Expected behavior
successful mlflow startup

Screenshots
(base) pax@ithaki:~$ kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/local-path-provisioner-6c79684f77-996dg 1/1 Running 0 7m28s
kube-system pod/coredns-d76bd69b-8zqzz 1/1 Running 0 7m28s
kube-system pod/metrics-server-7cd5fcb6b7-6zwwl 1/1 Running 0 7m28s
default pod/minio-0 1/1 Running 0 6m40s
default pod/mlflow-7c6768c4c-m6j6d 1/1 Running 0 6m23s
default pod/postgres-0 0/1 CrashLoopBackOff 6 (20s ago) 6m31s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 443/TCP 7m43s
kube-system service/kube-dns ClusterIP 10.43.0.10 53/UDP,53/TCP,9153/TCP 7m40s
kube-system service/metrics-server ClusterIP 10.43.39.68 443/TCP 7m39s
default service/minio-service ClusterIP 10.43.144.140 9000/TCP 6m40s
default service/postgres-service ClusterIP 10.43.236.158 5432/TCP 6m23s
default service/mlflow-service NodePort 10.43.192.251 5000:30500/TCP 6m8s

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/local-path-provisioner 1/1 1 1 7m40s
kube-system deployment.apps/coredns 1/1 1 1 7m40s
kube-system deployment.apps/metrics-server 1/1 1 1 7m39s
default deployment.apps/mlflow 1/1 1 1 6m23s

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/local-path-provisioner-6c79684f77 1 1 1 7m29s
kube-system replicaset.apps/coredns-d76bd69b 1 1 1 7m29s
kube-system replicaset.apps/metrics-server-7cd5fcb6b7 1 1 1 7m29s
default replicaset.apps/mlflow-7c6768c4c 1 1 1 6m23s

NAMESPACE NAME READY AGE
default statefulset.apps/minio 1/1 6m40s
default statefulset.apps/postgres 0/1 6m31s

(base) pax@ithaki:~$ kubectl logs postgres-0
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/mlflow/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 20
selecting default shared_buffers ... 400kB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok

[Feature] List results based on User demand

We should have a mature way to list various information related to the actual usage of K3ai.
A minimum set should include:

  • List cluster type, plugins, and bundles available
  • List registered clusters
  • List plugins installed by cluster

[Feature] Support tools download in the .k3ai folder

To help users get started, we should aim to download the basic tools and use them directly through k3ai <tool>.
The tools should be saved in the .k3ai folder to avoid overwriting existing ones. The minimal list should be:

  • kubectl
  • kubectx
  • helm
  • kustomize

[BUG] - MLFlow endpoint doesn't work in WSL2

Describe the bug

While running the MLFlow plugin, the endpoint URI displayed by k3ai is not accessible.


The following endpoints are not accessible:

  • http://172.29.170.187:30500/ (displayed by k3ai)
  • http://172.29.170.187:5000/
  • http://10.96.150.194:30500/
  • http://10.244.0.7:30500/

The IP address for the WSL2 machine is (through wsl hostname -I): 172.29.170.187

WSL2 uses dynamic IP allocation.

To Reproduce
Steps to reproduce the behavior:

k3ai run -s https://github.com/k3ai/quickstart -b mlflow

Expected behavior
The MLFlow endpoint exposed through k3ai should have worked.

[Feature] - companion executable to run port forwards

🚀 Is your feature request related to a problem? Please describe.
Clusters of type DinD do not correctly expose services through NodePorts, so users must solve this themselves (added complexity), which goes against k3ai's logic of simplicity.

💡 Describe the solution you'd like
We should have a companion executable, separate from the k3ai binary (a web UI?), that provides a simple way to port-forward plugins on a given cluster.

This will require completing the plugin-to-cluster map in the DB and creating a minimal UI/CLI that retrieves the installed plugins and exposes their native ports (in the case of multiple clusters we may expose port+n to avoid conflicts, or use the same NodePort we get for other cluster types).

🤩 Describe alternatives you've considered
Use native kubectl command.

[BUG] - Incompatible k3s version for kubeflow

Describe the bug
There's a bug on the Kubeflow side: they currently don't support k8s 1.22. At the moment Kubeflow Pipelines seems to work with k3s, but e.g. kf-dashboard is failing, which might be related to the unsupported k8s API version.

...
 ⏳     Working...
 🚀 unable to recognize "/home/user/.k3ai/git/common/istio-1-9/istio-crds/base": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
...
 🚀 unable to recognize "/home/user/.k3ai/git/common/istio-1-9/istio-install/base": no matches for kind "EnvoyFilter" in version "networking.istio.io/v1alpha3"
...
 🚀 unable to recognize "/home/user/.k3ai/git/common/istio-1-9/istio-install/base": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"
 🚀 unable to recognize "/home/user/.k3ai/git/common/istio-1-9/istio-install/base": no matches for kind "AuthorizationPolicy" in version "security.istio.io/v1beta1"
...
 🚀 unable to recognize "/home/user/.k3ai/git/common/istio-1-9/istio-install/base": no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
 🚀 unable to recognize "/home/user/.k3ai/git/common/istio-1-9/istio-install/base": no matches for kind "ValidatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
...

To Reproduce

k3ai plugin deploy -n kf-dashboard -t myk3scluster

Expected behavior

Successful deployment of all kubeflow components on k3s.
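The "no matches for kind ... in version ..." errors above are consistent with the beta APIs that Kubernetes 1.22 removed (apiextensions.k8s.io/v1beta1 and admissionregistration.k8s.io/v1beta1); the istio.io errors likely follow from those CRDs failing to install. Manifests need to move to the GA versions, e.g.:

```yaml
# Removed in Kubernetes 1.22:
# apiVersion: apiextensions.k8s.io/v1beta1
# Served in 1.22+:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
```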

'invalid argument'

Hello, I am trying out the MLFlow deployment as in the tutorials and I get a stream of logs that say "invalid argument". After a while I get "We tried to publish MLFLow at: http://172.17.0.2:30500", but when I go to this page there is no MLFlow server.

Would appreciate the help. Thanks.

Great work btw! this library is amazing!

[Feature] - Automatic creation of a .env file in the .k3ai folder

In order to better manage the user experience, we should create a .env file at init time. If the user deletes it by any chance, we'll recreate it at any time through k3ai init update.
The .env should at minimum contain:

  • GitHub personal token for API consumption
  • plugins repo URL: to support custom plugin URLs
  • commons/community plugins view: to decide whether the commons/community plugins can also be viewed through k3ai list --type
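A minimal sketch of what such a file might look like; all key names and values below are hypothetical, not a shipped schema:

```ini
# Hypothetical ~/.k3ai/.env layout; key names are illustrative only.
GITHUB_TOKEN=<read-only repo PAT>
PLUGINS_REPO_URL=<custom plugins repo URL>
SHOW_COMMUNITY_PLUGINS=true
```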

[BUG] - Kubeflow Pipelines not starting

Describe the bug
I am trying to run the kubeflow plugin on a single node 8vcpu / 16gb ram.

To Reproduce
curl -sfL https://get.k3ai.in | sh -
k3ai up
k3ai cluster deploy --type k3s -n mycluster
k3ai plugin deploy -n kf-pa -t mycluster

Issue
Installation never ends; it seems the pods are not being started correctly.

ubuntu:~$ k3s kubectl get pods -n kubeflow
NAME                                              READY   STATUS                   RESTARTS        AGE
workflow-controller-b7f95d6c6-q2wkf               1/1     Running                  0               4m22s
ml-pipeline-scheduledworkflow-5c549bc5f5-drkmn    1/1     Running                  0               4m23s
ml-pipeline-viewer-crd-7555c4d55f-fpd2m           1/1     Running                  0               4m23s
metadata-envoy-deployment-7654b98955-rkt2g        1/1     Running                  0               4m24s
ml-pipeline-ui-656466fdc9-qg9xv                   1/1     Running                  0               4m23s
mysql-55778745b6-g4vbd                            1/1     Running                  0               4m22s
minio-6d6d45469f-xgmz2                            1/1     Running                  0               4m24s
cache-deployer-deployment-6f8ff5b986-tvwn4        1/1     Running                  0               4m24s
metadata-grpc-deployment-5c8599b99c-b45jf         1/1     Running                  1 (3m17s ago)   4m24s
ml-pipeline-8995b746f-dhznz                       1/1     Running                  1 (2m31s ago)   4m23s
cache-server-74494cbf5-k956w                      0/1     Pending                  0               2m20s
cache-server-74494cbf5-6v5lj                      0/1     ContainerStatusUnknown   0               4m24s
ml-pipeline-persistenceagent-59689585f6-s8dhd     1/1     Running                  1 (2m5s ago)    4m23s
ml-pipeline-visualizationserver-6b8fb8c44-mmrk8   0/1     ContainerStatusUnknown   0               4m22s
ml-pipeline-visualizationserver-6b8fb8c44-svm25   0/1     Pending                  0               113s
metadata-writer-fd965db48-9lw22                   0/1     Error                    0               4m24s
metadata-writer-fd965db48-rqt7d                   0/1     Pending                  0               82s

Pod metadata-writer-fd965db48-9lw22 error : message: 'The node was low on resource: ephemeral-storage. Container main was using 392Ki, which exceeds its request of 0. '

Any ideas? Thanks!
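The eviction message above suggests the container carries no ephemeral-storage request, so the scheduler never accounts for its disk usage. A hedged sketch of one mitigation, with illustrative values, added to the container spec:

```yaml
# Illustrative values for the metadata-writer container.
resources:
  requests:
    ephemeral-storage: "100Mi"
  limits:
    ephemeral-storage: "1Gi"
```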

[BUG] - runtime error with index out of range when running quickstart

Describe the bug

To Reproduce
Steps to reproduce the behavior:

  1. Follow the quickstart steps:
k3ai up
k3ai cluster deploy -t k3s -n mycluster
k3ai plugin deploy -n mlflow -t mycluster
  2. Try running the quickstart: $ k3ai run -s https://github.com/k3ai/quickstart -b mlflow
  3. Receive error:
🧪	Initializing code...
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
github.com/k3ai/pkg/runner.Loader({0x7fff7fe7bb4e, 0x22}, {0x0, 0x0}, {0x7fff7fe7bb74, 0x6}, {0x0, 0x0}, {0x0, 0x0})
	/home/joshec/git/k3ai/pkg/runner/run.go:78 +0x10f6
github.com/k3ai/cmd.runCommand.func1(0xc000403680, {0xc0003d57c0, 0x0, 0x4})
	/home/joshec/git/k3ai/cmd/run.go:71 +0x58b
github.com/spf13/cobra.(*Command).execute(0xc000403680, {0xc0003d5780, 0x4, 0x4})
	/home/joshec/go/pkg/mod/github.com/spf13/[email protected]/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0x2254960)
	/home/joshec/go/pkg/mod/github.com/spf13/[email protected]/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
	/home/joshec/go/pkg/mod/github.com/spf13/[email protected]/command.go:902
github.com/k3ai/cmd.Execute(...)
	/home/joshec/git/k3ai/cmd/root.go:34
main.main()
	/home/joshec/git/k3ai/main.go:10 +0x25

Expected behavior

A successful run with proper artifact storage and tracking URI settings


[Feature] - Add support for k3d

🚀 Is your feature request related to a problem? Please describe.
Currently, k3ai doesn't have support for k3d.

k3s is known to have issues with WSL2 deployment (systemd requirement, etc.), so it would be better to have k3d support.

💡 Describe the solution you'd like
We can add k3d support to k3ai in a subsequent release. (would require some work on pkg/io/execution).

[BUG] k3ai up yields a version 'GLIBC_2.28' not found error

Describe the bug
After following the installation instructions, the following error is reported:

k3ai: /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.28' not found (required by k3ai)

To Reproduce
Steps to reproduce the behavior:

  1. curl -LO https://get.k3ai.in | sh -
  2. k3ai up

Expected behavior
It should have spun up the cluster!

OS: Ubuntu 18.04

[BUG] - k3ai on ARM downloads the wrong tools

Describe the bug
K3ai uses some tools to work with clusters:

  • kubectl
  • helm
  • cloud provider clients

In the ARM version, tools for the wrong architecture are downloaded.

To Reproduce
When deploying anything that requires one of the above tools, the installation fails with no error.
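A fix sketch: derive the download URL from runtime.GOOS/GOARCH instead of hard-coding the architecture. The URL layout below is kubectl's real upstream release scheme; wiring it into k3ai's downloader this way is an assumption about the fix, not existing code:

```go
package main

import (
	"fmt"
	"runtime"
)

// kubectlURL builds a download URL for the kubectl binary that matches the
// host OS and architecture, so an arm64 host never receives an amd64 binary.
func kubectlURL(version string) string {
	return fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/kubectl",
		version, runtime.GOOS, runtime.GOARCH)
}

func main() {
	// On linux/arm64 this yields .../bin/linux/arm64/kubectl.
	fmt.Println(kubectlURL("v1.22.4"))
}
```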

[Feature] Implementing a config file to mimic an e2e workflow

Ideally, a user in a CI/CD environment would like to:

  • deploy a given cluster
  • deploy a given plugin or set of plugins
  • deploy a specific set of artifacts (i.e. training code)
  • get the outcome and save it

This should be done through a simple config file (YAML?) that represents the above steps and embeds the training either in the form of existing defaults (i.e.: KFP, MLFlow, etc.) or as a single file (.py).
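The four steps above could be sketched in a single workflow file. The schema below is purely illustrative; k3ai does not ship this format:

```yaml
# Hypothetical k3ai e2e workflow file; all keys are illustrative.
cluster:
  type: k3s
  name: ci-cluster
plugins:
  - mlflow
run:
  source: https://github.com/k3ai/quickstart   # training code
  backend: mlflow
artifacts:
  publish: ./results   # where to save the outcome
```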

[Feature] - Use Github Actions to create issues for exported reports

🚀 Is your feature request related to a problem? Please describe.
Related:

Using the metrics report exported through the executor, we can use Github Actions workflows to create automated issues containing the reports.

💡 Describe the solution you'd like
Create issues with the exported report as the content using Github Actions.

[BUG] - Kubeflow Pipelines Quickstart Repository Missing

Describe the bug
I was trying to follow the kubeflow pipelines tutorial as described in the k3ai website.
It seems the final step of running the pipeline fails because the quickstart repository for kubeflow pipelines does not exist.

To Reproduce
Steps to reproduce the behavior:

  1. k3ai up
  2. k3ai cluster deploy -t k3s -n myk3scluster
  3. k3ai plugin deploy -n kf-pa -t myk3scluster
  4. k3ai run -s https://github.com/k3ai/quickstart/kfp -b kfp -e condition.py -t mycluster

Expected behavior
Pipeline to run successfully.

Actual behavior
Pipeline run fails.

[Feature] - Export metrics to Markdown

🚀 Is your feature request related to a problem? Please describe.
MLFlow metrics can be exported to a Markdown file. Once exported, it could then be used for creating issues, PDFs, etc.

💡 Describe the solution you'd like
Fetch metrics and graphs and export them into a single "report"-like Markdown file.

[Feature] - Clean up CLI error messages

🚀 Is your feature request related to a problem? Please describe.

See Issue #53 regarding k3ai run not returning a useful error when a required argument was missing

💡 Describe the solution you'd like

A better CLI handler with more descriptive errors, or at least a fix for this bug in this command's handling.

🤩 Describe alternatives you've considered

None yet

Implement Plugin remove

Hey, great work with K3ai. It works pretty smoothly for most operations.

After experimenting a bit with Kubeflow I wanted to remove a plugin, but it seems that the command is not implemented:

➜ k3ai  plugin remove --name kf-pa
Remove a given plugin based on NAME

Usage:
  k3ai[options] plugin remove [-n NAME] [other flags]

Flags:
  -n, --name string     NAME of plugin to be created/deleted
  -t, --target string   Target from where to remove plugin.
  -q, --quiet           Suppress output messages. Useful when k3ai is used within scripts.
  -c, --config string   Configure K3ai using a custom config file.[-c /path/tofile] [-c https://urlToFile]

See here: https://github.com/k3ai/k3ai/blob/main/cmd/plugin.go#L138

I am not sure whether I am missing something, but I couldn't find anything related in the issues or roadmap.


  • Your operating system name and version: Ubuntu 18.04
  • Detailed steps to reproduce the bug: Follow exact steps from documentation or README to deploy a plugin

[CI/CD] - Add Lint support

On running golangci-lint on my local machine, I was able to find 40+ linting issues.

10 of them were deadcode issues, so they can be ignored as they're part of code added for future releases.

The rest are ineffectual assignments and skipped error checks. We can log the error message for the skipped error checks; it would help us more in debugging.

I know this sounds like a minor issue, but with more code coming in the subsequent releases, addressing this earlier can help us save a lot of time maintaining good quality code.

Suggested Fix:
Add golangci-lint action as workflow to check linting issues. We can add a rule for excluding deadcode issues for now.

[Roadmap] - v1.0.1

This is the planning issue to decide what we want to keep in the v1.0.1 milestone. A similar issue will be created for v1.0.2

Target Release Date November 30th

K3ai:

K3ai Plugins:

Bugs:

If you'd like to work on something, assign yourself by editing this issue or comment below.

[Feature] Support a local DB to store basic information

K3ai should have a local DB (SQLite3) to store a minimal set of information, such as:

  • plugin list
  • clusters registered to be used with k3ai
  • plugin status by cluster

Later this could be used for:

  • syncing configuration between clusters
  • advanced configurations (i.e.: storing kubeconfigs)
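A minimal sketch of such a schema, assuming SQLite3; table and column names are invented for illustration, not a shipped design:

```sql
-- Hypothetical k3ai local store schema (illustrative only).
CREATE TABLE clusters (
    name       TEXT PRIMARY KEY,   -- cluster registered with k3ai
    type       TEXT NOT NULL,      -- k3s, kind, eks-a, tanzu, ...
    kubeconfig TEXT                -- optional stored kubeconfig
);

CREATE TABLE plugins (
    name    TEXT NOT NULL,
    cluster TEXT NOT NULL REFERENCES clusters(name),
    status  TEXT NOT NULL,         -- e.g. installed, failed, removed
    PRIMARY KEY (name, cluster)
);
```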

[Feature] - Adding entrypoint flag and change short for extras in k3ai RUN command

🚀 Is your feature request related to a problem? Please describe.
Currently the KFP (dsl-compile) CLI is not able to package an entire folder; it always expects an entry-point file (--py) to package the pipeline. In order to support the one-click approach we will have to pass an entry point so we know which file to take into account in a given directory. Also, if the file is already in one of the formats supported by KFP:

  • yaml
  • tar.gz
  • zip

we do not have to use dsl-compile; we may execute the pipeline directly.

💡 Describe the solution you'd like
In order to manage the above we propose to introduce a change in the k3ai run command:

  1. change the short flag of --extras from -e to -x
  2. introduce the flag --entrypoint with the short flag -e

Extras will continue to be executed before anything else, as a method to run prerequisites (e.g. pip install -r requirements.txt).
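The proposed -x/-e layout can be sketched with the stdlib flag package (the real CLI is built on cobra, per the project's stack traces); the function and flag wiring here are illustrative only:

```go
package main

import (
	"flag"
	"fmt"
)

// parseRunFlags sketches the proposed layout for `k3ai run`: -x/--extras for
// prerequisites, -e/--entrypoint for the file dsl-compile should target.
func parseRunFlags(args []string) (extras, entrypoint string, err error) {
	fs := flag.NewFlagSet("run", flag.ContinueOnError)
	fs.StringVar(&extras, "x", "", "prerequisites to run first")
	fs.StringVar(&extras, "extras", "", "prerequisites to run first")
	fs.StringVar(&entrypoint, "e", "", "entry-point file for dsl-compile")
	fs.StringVar(&entrypoint, "entrypoint", "", "entry-point file for dsl-compile")
	err = fs.Parse(args)
	return extras, entrypoint, err
}

func main() {
	x, e, _ := parseRunFlags([]string{"-x", "requirements.txt", "-e", "condition.py"})
	fmt.Println("extras:", x, "entrypoint:", e)
}
```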

🤩 Describe alternatives you've considered
We considered keeping extras as it is and using it also as the entry point. The issue we would face is knowing when to run it and when it indicates the entry point.

This issue applies to:

  • #10
  • #14
  • #15 - will be implemented in v1.0.2
