
Virtual Kubelet is an open source Kubernetes kubelet implementation.

Home Page: https://virtual-kubelet.io

License: Apache License 2.0


virtual-kubelet's Introduction

Virtual Kubelet

Virtual Kubelet is an open source Kubernetes kubelet implementation that masquerades as a kubelet for the purpose of connecting Kubernetes to other APIs. This allows nodes to be backed by other services such as ACI, AWS Fargate, IoT Edge, and Tensile Kube. The primary scenario for Virtual Kubelet is extending the Kubernetes API into serverless container platforms like ACI and Fargate, though we are open to others. Note, however, that Virtual Kubelet is explicitly not intended to be an alternative to Kubernetes federation.

Virtual Kubelet features a pluggable architecture and direct use of Kubernetes primitives, making it much easier to build on.

We invite the Kubernetes ecosystem to join us in empowering developers to build upon our base. Join the virtual-kubelet channel in the Kubernetes Slack to get involved.

The best description is "Kubernetes API on top, programmable back."

How It Works

The diagram below illustrates how Virtual-Kubelet works.

[diagram: how Virtual Kubelet works]

Usage

Virtual Kubelet is focused on providing a library that you can consume in your project to build a custom Kubernetes node agent.

See the godoc for up-to-date instructions on consuming this project: https://godoc.org/github.com/virtual-kubelet/virtual-kubelet

Implementations are available for several providers; see those repositories for details on how to deploy them.

Current Features

  • create, delete and update pods
  • container logs, exec, and metrics
  • get pod, pods and pod status
  • capacity
  • node addresses, node capacity, node daemon endpoints
  • operating system
  • bring your own virtual network

Providers

This project features a pluggable provider interface developers can implement that defines the actions of a typical kubelet.

This enables on-demand and nearly instantaneous container compute, orchestrated by Kubernetes, without VM infrastructure to manage, while still leveraging the portable Kubernetes API.

Each provider may have its own configuration file and required environment variables.

Providers must provide the following functionality to be considered a supported integration with Virtual Kubelet.

  1. Provides the back-end plumbing necessary to support the lifecycle management of pods, containers and supporting resources in the context of Kubernetes.
  2. Conforms to the current API provided by Virtual Kubelet.
  3. Does not have access to the Kubernetes API Server and has a well-defined callback mechanism for getting data like secrets or configmaps.

Admiralty Multi-Cluster Scheduler

Admiralty Multi-Cluster Scheduler mutates annotated pods into "proxy pods" scheduled on a virtual-kubelet node and creates corresponding "delegate pods" in remote clusters (actually running the containers). A feedback loop updates the statuses and annotations of the proxy pods to reflect the statuses and annotations of the delegate pods. You can find more details in the Admiralty Multi-Cluster Scheduler documentation.

Alibaba Cloud ECI Provider

Alibaba Cloud ECI (Elastic Container Instance) is a service that allows you to run containers without having to manage servers or clusters.

You can find more details in the Alibaba Cloud ECI provider documentation.

Configuration File

The Alibaba ECI provider reads the configuration file specified by the --provider-config flag.

An example configuration file is in the ECI provider repository.

Azure Container Instances Provider

The Azure Container Instances Provider allows you to utilize both typical pods on VMs and Azure Container Instances simultaneously in the same Kubernetes cluster.

You can find detailed instructions on how to set it up and how to test it in the Azure Container Instances Provider documentation.

Configuration File

The Azure connector can use a configuration file specified by the --provider-config flag. The config file is in TOML format, and an example lives in providers/azure/example.toml.

AWS Fargate Provider

AWS Fargate is a technology that allows you to run containers without having to manage servers or clusters.

The AWS Fargate provider allows you to deploy pods to AWS Fargate. Your pods on AWS Fargate have access to VPC networking with dedicated ENIs in your subnets, public IP addresses to connect to the internet, private IP addresses to connect to your Kubernetes cluster, security groups, IAM roles, CloudWatch Logs and many other AWS services. Pods on Fargate can co-exist with pods on regular worker nodes in the same Kubernetes cluster.

Easy instructions and a sample configuration file are available in the AWS Fargate provider documentation. Please note that this provider is not currently supported.

Elotl Kip

Kip is a provider that runs pods in cloud instances, allowing a Kubernetes cluster to transparently scale workloads into a cloud. When a pod is scheduled onto the virtual node, Kip starts a right-sized cloud instance for the pod's workload and dispatches the pod onto the instance. When the pod is finished running, the cloud instance is terminated.

When workloads run on Kip, your cluster size naturally scales with the cluster workload, pods are strongly isolated from each other and the user is freed from managing worker nodes and strategically packing pods onto nodes.

HashiCorp Nomad Provider

The HashiCorp Nomad provider for Virtual Kubelet connects your Kubernetes cluster with a Nomad cluster by exposing the Nomad cluster as a node in Kubernetes. With the provider, pods scheduled on the virtual Nomad node registered in Kubernetes run as jobs on Nomad clients, just as they would on a Kubernetes node.

For detailed instructions, follow the guide here.

Liqo Provider

Liqo implements a provider for Virtual Kubelet designed to transparently offload pods and services to a "peered" remote Kubernetes cluster. Liqo can discover neighboring clusters (using DNS or mDNS) and "peer" with them, i.e. establish a relationship to share part of the cluster resources. Once a cluster has established a peering, a new instance of the Liqo Virtual Kubelet is spawned to seamlessly extend the capacity of the cluster by providing an abstraction of the remote cluster's resources. Combined with the Liqo network fabric, the provider extends the cluster networking by enabling pod-to-pod traffic and multi-cluster east-west services, supporting endpoints on both clusters.

For detailed instructions, follow the guide here.

OpenStack Zun Provider

The OpenStack Zun provider for Virtual Kubelet connects your Kubernetes cluster with OpenStack in order to run Kubernetes pods on the OpenStack cloud. Pods on OpenStack have access to OpenStack tenant networks because they have Neutron ports in your subnets. Each pod has private IP addresses to connect to other OpenStack resources (e.g. VMs) within your tenant, can optionally have floating IP addresses to connect to the internet, and can bind-mount Cinder volumes into a path inside a pod's container.

./bin/virtual-kubelet --provider="openstack"

For detailed instructions, follow the guide here.

Tensile Kube Provider

Tensile Kube is contributed by Tencent Games. It is a provider for Virtual Kubelet that connects your Kubernetes cluster with other Kubernetes clusters, effectively extending your cluster's capacity without limit. With the provider, pods scheduled on the virtual node registered in Kubernetes run on the other Kubernetes clusters' nodes.

Adding a New Provider via the Provider Interface

Providers consume this project as a library which implements the core logic of a Kubernetes node agent (kubelet), and wire up their implementation for performing the necessary actions.

There are 3 main interfaces:

PodLifecycleHandler

When pods are created, updated, or deleted from Kubernetes, these methods are called to handle those actions.

godoc#PodLifecycleHandler

type PodLifecycleHandler interface {
    // CreatePod takes a Kubernetes Pod and deploys it within the provider.
    CreatePod(ctx context.Context, pod *corev1.Pod) error

    // UpdatePod takes a Kubernetes Pod and updates it within the provider.
    UpdatePod(ctx context.Context, pod *corev1.Pod) error

    // DeletePod takes a Kubernetes Pod and deletes it from the provider.
    DeletePod(ctx context.Context, pod *corev1.Pod) error

    // GetPod retrieves a pod by name from the provider (can be cached).
    GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error)

    // GetPodStatus retrieves the status of a pod by name from the provider.
    GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error)

    // GetPods retrieves a list of all pods running on the provider (can be cached).
    GetPods(context.Context) ([]*corev1.Pod, error)
}
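For illustration, a minimal in-memory implementation of this interface might look like the sketch below. It is not a real provider and the InMemoryProvider name is hypothetical; a real provider would translate each call into its backing API (ACI, Fargate, and so on).

import (
    "context"
    "sync"

    corev1 "k8s.io/api/core/v1"
)

// InMemoryProvider is a sketch of a PodLifecycleHandler that just stores
// pods in a map keyed by "namespace/name".
type InMemoryProvider struct {
    mu   sync.Mutex
    pods map[string]*corev1.Pod
}

func NewInMemoryProvider() *InMemoryProvider {
    return &InMemoryProvider{pods: make(map[string]*corev1.Pod)}
}

func (p *InMemoryProvider) CreatePod(ctx context.Context, pod *corev1.Pod) error {
    p.mu.Lock()
    defer p.mu.Unlock()
    p.pods[pod.Namespace+"/"+pod.Name] = pod
    return nil
}

func (p *InMemoryProvider) UpdatePod(ctx context.Context, pod *corev1.Pod) error {
    // In this sketch an update is just an overwrite of the stored pod.
    return p.CreatePod(ctx, pod)
}

func (p *InMemoryProvider) DeletePod(ctx context.Context, pod *corev1.Pod) error {
    p.mu.Lock()
    defer p.mu.Unlock()
    delete(p.pods, pod.Namespace+"/"+pod.Name)
    return nil
}

func (p *InMemoryProvider) GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error) {
    p.mu.Lock()
    defer p.mu.Unlock()
    return p.pods[namespace+"/"+name], nil
}

func (p *InMemoryProvider) GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error) {
    pod, err := p.GetPod(ctx, namespace, name)
    if err != nil || pod == nil {
        return nil, err
    }
    return &pod.Status, nil
}

func (p *InMemoryProvider) GetPods(ctx context.Context) ([]*corev1.Pod, error) {
    p.mu.Lock()
    defer p.mu.Unlock()
    out := make([]*corev1.Pod, 0, len(p.pods))
    for _, pod := range p.pods {
        out = append(out, pod)
    }
    return out, nil
}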

There is also an optional interface PodNotifier which enables the provider to asynchronously notify the virtual-kubelet about pod status changes. If this interface is not implemented, virtual-kubelet will periodically check the status of all pods.

It is highly recommended to implement PodNotifier, especially if you plan to run a large number of pods.

godoc#PodNotifier

type PodNotifier interface {
    // NotifyPods instructs the notifier to call the passed in function when
    // the pod status changes.
    //
    // NotifyPods should not block callers.
    NotifyPods(context.Context, func(*corev1.Pod))
}
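As a sketch, a provider that already watches its backing service for status changes can forward them through the callback instead of waiting to be polled. The statusUpdates channel below is hypothetical; it stands in for however your backend surfaces changes.

// statusUpdates is a hypothetical channel on the provider that receives a
// pod every time its status changes in the backing service.
func (p *InMemoryProvider) NotifyPods(ctx context.Context, cb func(*corev1.Pod)) {
    go func() {
        for {
            select {
            case <-ctx.Done():
                return
            case pod := <-p.statusUpdates:
                cb(pod) // push the change to virtual-kubelet immediately
            }
        }
    }()
}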

PodLifecycleHandler is consumed by the PodController which is the core logic for managing pods assigned to the node.

	pc, _ := node.NewPodController(podControllerConfig) // <-- instantiates the pod controller
	pc.Run(ctx) // <-- starts watching for pods to be scheduled on the node

NodeProvider

NodeProvider is responsible for notifying the virtual-kubelet about node status updates. Virtual-Kubelet will periodically check the status of the node and update Kubernetes accordingly.

godoc#NodeProvider

type NodeProvider interface {
    // Ping checks if the node is still active.
    // This is intended to be lightweight as it will be called periodically as a
    // heartbeat to keep the node marked as ready in Kubernetes.
    Ping(context.Context) error

    // NotifyNodeStatus is used to asynchronously monitor the node.
    // The passed in callback should be called any time there is a change to the
    // node's status.
    // This will generally trigger a call to the Kubernetes API server to update
    // the status.
    //
    // NotifyNodeStatus should not block callers.
    NotifyNodeStatus(ctx context.Context, cb func(*corev1.Node))
}
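As a rough sketch, a node provider only needs a cheap health check and a place to stash the callback. The backendHealthy helper below is hypothetical and stands in for a real check against your backing service.

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

type sketchNodeProvider struct {
    notify func(*corev1.Node) // saved callback; call it whenever node status changes
}

func (p *sketchNodeProvider) Ping(ctx context.Context) error {
    // Keep this cheap: it runs on every heartbeat interval.
    if !backendHealthy() { // hypothetical health check
        return fmt.Errorf("backing service unreachable")
    }
    return nil
}

func (p *sketchNodeProvider) NotifyNodeStatus(ctx context.Context, cb func(*corev1.Node)) {
    // Store the callback; invoke p.notify(updatedNode) when capacity,
    // conditions, or addresses change in the backing service.
    p.notify = cb
}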

Virtual Kubelet provides a NaiveNodeProvider that you can use if you do not plan to have custom node behavior.

godoc#NaiveNodeProvider

NodeProvider gets consumed by the NodeController, which is the core logic for managing the node object in Kubernetes.

	nc, _ := node.NewNodeController(nodeProvider, nodeSpec) // <-- instantiates a node controller from a node provider and a kubernetes node spec
	nc.Run(ctx) // <-- creates the node in kubernetes and starts up the controller
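The node spec passed to NewNodeController is a regular Kubernetes Node object. A common pattern, sketched below with illustrative names and values, is to taint the virtual node so that only pods that explicitly tolerate it get scheduled there:

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// Sketch of a node spec: the name, taint key, and capacity are examples.
nodeSpec := &corev1.Node{
    ObjectMeta: metav1.ObjectMeta{
        Name:   "my-virtual-node",
        Labels: map[string]string{"type": "virtual-kubelet"},
    },
    Spec: corev1.NodeSpec{
        Taints: []corev1.Taint{{
            Key:    "virtual-kubelet.io/provider",
            Value:  "example",
            Effect: corev1.TaintEffectNoSchedule,
        }},
    },
    Status: corev1.NodeStatus{
        Capacity: corev1.ResourceList{
            corev1.ResourcePods: resource.MustParse("100"),
        },
    },
}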

API endpoints

One of the roles of a kubelet is to accept requests from the API server for things like kubectl logs and kubectl exec. Helpers for setting this up are provided here.

Scrape Pod metrics

If you want to use the HPA (Horizontal Pod Autoscaler) in your cluster, the provider should implement the GetStatsSummary function. metrics-server will then be able to collect metrics for the pods on virtual-kubelet. Otherwise, you may see "No metrics for pod" from metrics-server, which means the metrics of pods on virtual-kubelet are not being collected.
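A minimal sketch of GetStatsSummary, reusing the hypothetical InMemoryProvider from above, is shown below. The stats types are the kubelet Summary API (Summary, PodStats, CPUStats, MemoryStats); the exact import path for them depends on your virtual-kubelet version, so treat this as illustrative rather than exact.

// Assumes imports: time, metav1 (k8s.io/apimachinery/pkg/apis/meta/v1), and
// the kubelet stats v1alpha1 package aliased as stats.
// GetStatsSummary returns CPU/memory usage for pods on the virtual node.
// The placeholder numbers below should come from your backing service.
func (p *InMemoryProvider) GetStatsSummary(ctx context.Context) (*stats.Summary, error) {
    now := metav1.NewTime(time.Now())
    pods, err := p.GetPods(ctx)
    if err != nil {
        return nil, err
    }
    summary := &stats.Summary{}
    for _, pod := range pods {
        cpu := uint64(0)        // nanocores used; placeholder
        mem := uint64(64 << 20) // working set bytes (64 MiB); placeholder
        summary.Pods = append(summary.Pods, stats.PodStats{
            PodRef: stats.PodReference{Name: pod.Name, Namespace: pod.Namespace, UID: string(pod.UID)},
            CPU:    &stats.CPUStats{Time: now, UsageNanoCores: &cpu},
            Memory: &stats.MemoryStats{Time: now, WorkingSetBytes: &mem},
        })
    }
    return summary, nil
}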

Testing

Unit tests

Running the unit tests locally is as simple as make test.

End-to-end tests

Check out test/e2e for more details.

Known quirks and workarounds

Missing Load Balancer IP addresses for services

Providers that do not support service discovery

Kubernetes 1.9 introduced a new feature gate, ServiceNodeExclusion, for the control plane's Controller Manager. Enabling this feature gate in the Controller Manager's manifest allows Kubernetes to exclude Virtual Kubelet nodes from being added to Load Balancer pools, allowing you to create public-facing services with external IPs without issue.

Workaround

Cluster requirements: Kubernetes 1.9 or above

Enable the ServiceNodeExclusion feature gate by modifying the Controller Manager manifest and adding --feature-gates=ServiceNodeExclusion=true to its command-line arguments.
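For example, on a cluster where the controller manager runs as a static pod, the change is a single extra argument in its manifest. The path and surrounding fields below are illustrative and vary by distribution:

# /etc/kubernetes/manifests/kube-controller-manager.yaml (typical static-pod path; adjust for your setup)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --feature-gates=ServiceNodeExclusion=true   # the added line
    # ...existing flags left unchanged...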

Contributing

Virtual Kubelet follows the CNCF Code of Conduct. Sign the CNCF CLA to be able to make Pull Requests to this repo.

Monthly Virtual Kubelet Office Hours are held at 10am PST on the second Thursday of every month in this zoom meeting room. Check out the calendar here.

Our Google Drive with design specifications and meeting notes is here.

We also have a community Slack channel named virtual-kubelet in the Kubernetes Slack, and you can connect with the Virtual Kubelet community via the mailing list.


virtual-kubelet's Issues

[Azure] Windows connector deploys 2 nodes rather than 1

I deployed the Windows connector using the az CLI. I already had the Linux connector deployed, but the Windows install deployed both the Windows and Linux connectors rather than just one.

Rias-MacBook-Pro:examples riabhatia$ az aks install-connector -g virtualKubelet -n virtualK8sCluster --connector-name windows-connector --os windows
Merged "virtualK8sCluster" as current context in /var/folders/p2/6xnh48154w5_g8k_8pkfvkhm0000gn/T/tmpy3ww_xbm
Deploying the aci-connector using Helm
NAME: windows-connector
LAST DEPLOYED: Thu Jan 4 09:29:16 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
windows-connector-aci-connector 1 1 1 0 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
windows-connector-aci-connector-489931669-xc0vf 0/1 ContainerCreating 0 1s

==> v1/Secret
NAME TYPE DATA AGE
windows-connector-aci-connector Opaque 4 1s

NOTES:
The aci-connector is getting deployed on your cluster.

To verify that aci-connector has started, run:

kubectl --namespace=default get pods -l "app=windows-connector-aci-connector"

Rias-MacBook-Pro:examples riabhatia$ kubectl get nodes
NAME STATUS AGE VERSION
aci-connector Ready 17h v1.6.6
aci-connector-0 Ready 10m v1.6.6
aci-connector-1 Ready 10m v1.6.6
aks-nodepool1-21735328-0 Ready 21d v1.7.7

Pod scheduled at Windows kubelet should not mount any secret volume

error.log
I use the attached xml to deploy to Windows kubelet.

The Windows kubelet should accept pods selecting "beta.kubernetes.io/os": "windows".

From "describe node" I get:
Labels: beta.kubernetes.io/os=windows

I get the error:
No nodes are available that match all of the following predicates:: MatchNodeSelector (2), PodToleratesNodeTaints (1).

Full info is attached.

Panics with "runtime error: invalid memory address or nil pointer dereference"

When I run ./virtual-kubelet --provider azure I get this:

/home/avranju/.kube/config
2017/12/18 16:17:10 Node 'virtual-kubelet' with OS type 'Linux' registered
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0xe9909a]

goroutine 1 [running]:
github.com/virtual-kubelet/virtual-kubelet/manager.(*ResourceManager).incrementRefCounters(0xc4202f59a0, 0xc420077c00)
	/home/avranju/code/go-workspace/src/github.com/virtual-kubelet/virtual-kubelet/manager/resource.go:260 +0xda
github.com/virtual-kubelet/virtual-kubelet/manager.(*ResourceManager).SetPods(0xc4202f59a0, 0xc42029a150)
	/home/avranju/code/go-workspace/src/github.com/virtual-kubelet/virtual-kubelet/manager/resource.go:89 +0x34f
github.com/virtual-kubelet/virtual-kubelet/vkubelet.(*Server).Run(0xc4203bdc80, 0xf, 0x11672e7)
	/home/avranju/code/go-workspace/src/github.com/virtual-kubelet/virtual-kubelet/vkubelet/vkubelet.go:169 +0x2fa
github.com/virtual-kubelet/virtual-kubelet/cmd.glob..func1(0x195e400, 0xc4201d2780, 0x0, 0x2)
	/home/avranju/code/go-workspace/src/github.com/virtual-kubelet/virtual-kubelet/cmd/root.go:54 +0x1cd
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).execute(0x195e400, 0xc4200100a0, 0x2, 0x2, 0x195e400, 0xc4200100a0)
	/home/avranju/code/go-workspace/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:702 +0x2c6
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x195e400, 0x195e620, 0xf31b66, 0x195e540)
	/home/avranju/code/go-workspace/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:783 +0x30e
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).Execute(0x195e400, 0x0, 0xc42062bf48)
	/home/avranju/code/go-workspace/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/virtual-kubelet/virtual-kubelet/cmd.Execute()
	/home/avranju/code/go-workspace/src/github.com/virtual-kubelet/virtual-kubelet/cmd/root.go:61 +0x31
main.main()
	/home/avranju/code/go-workspace/src/virtual-kubelet/main.go:20 +0x20

I tried fixing resource.go to add some nil checks in incrementRefCounters but that didn't seem to make a difference. I am a Go language noob, so it's probably just my lack of knowledge. Do we know what's going on here? I set up the environment variables for ACI_REGION, ACI_RESOURCE_GROUP, and AZURE_AUTH_LOCATION as well. AZURE_AUTH_LOCATION points to a file containing this (real values elided below):

{
  "clientId": "...",
  "clientSecret": "...",
  "subscriptionId": "...",
  "tenantId": "..."
}

Create provider for 2nd Kubernetes Cluster

Please add a provider that will allow a 2nd kubernetes cluster to act as a virtual kubelet provider.

In this use case, this seems a lot like one-way cluster federation.

https://kubernetes.io/docs/concepts/cluster-administration/federation/

Seems like if Cluster A added a virtual-kubelet to Cluster B and Cluster B added a virtual-kubelet to Cluster A then this is effectively federation.

Curious about the differences between what's described above and federation: what are the strengths and weaknesses of each approach, and why would you choose one over the other?

Of course, right now I know this suggestion is only a theoretical idea so federation would be the only viable option today.

kubectl create pod with multiple containers


Environment summary

Provider (e.g. Azure, Hyper): azure
Version (e.g. 0.1, 0.2-beta): latest version
K8s Master Info (e.g. AKS, ACS, Bare Metal): AKS
Install Method (e.g. Helm Chart, ): Helm

Issue Details

Create a pod with 2 containers; the second one keeps restarting.

acidemo 1/2 Waiting 6 9m

Repro Steps

Pod YAML:

apiVersion: v1
kind: Pod
metadata:
  name: acidemo
spec:
  containers:
  - name: nginx-container
    image: nginx
    resources:
      requests:
        memory: 1G
        cpu: 1
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      name: https
  - name: debian-container
    image: debian
    resources:
      requests:
        memory: 1G
        cpu: 1
  nodeName: virtual-kubelet-aciconnector-linux
  dnsPolicy: ClusterFirst
  tolerations:
  - key: azure.com/aci
    effect: NoSchedule

Adding a New Provider -- a missing step

The Readme defines adding a new Provider as "Create a new directory for your provider under providers and implement the following interface. Then add your new provider under the others in the vkubelet/provider.go file."
https://github.com/virtual-kubelet/virtual-kubelet#adding-a-new-provider-via-the-provider-interface

It's also necessary to extend the switch in vkubelet.go with an alias to the new provider:

switch provider {

Please create matrix of limitations

I found this on the old project site.

Limitations:
ConfigMaps
Secrets
ServiceAccounts
Volumes
kubectl logs
kubectl exec

Please create a feature matrix that contains native real kubelet features, potential virtual-kubelet features as defined by the virtual-kubelet interface, and then features for specific implementations. This will help with tagging and tainting so deployments can be scoped accordingly.

Thanks.

[Azure] EmptyDir volume cannot be used

If we use an emptyDir volume in a pod, we get the following error:

api call to https://management.azure.com/subscriptions/2ce4ca73-8bf4-45d5-aa55-b37121fd51a6/resourceGroups/allHandsDemo/providers/Microsoft.ContainerInstance/containerGroups/default-demo-fr-ir-aci-847b4c59f8-ck72s?api-version=2017-12-01-preview:
got HTTP response status code 400 error code "InvalidRequestContent": The request
content was invalid and could not be deserialized: ''Could not find member ''medium''
on object of type ''EmptyDirVolume''. Path ''properties.volumes[0].emptyDir.medium'',
line 1, position 1052.''.'

The root cause is that the ACI provider added the "medium" and "sizeLimit" properties to the emptyDir, which are not supported by ACI.

"medium": v.EmptyDir.Medium,
"sizeLimit": v.EmptyDir.SizeLimit,

[Azure] VK Node deleted

The node deleted itself while the pod is still running. I didn't remove the connector or delete the node.

Rias-MBP:community riabhatia$ kubectl get po
NAME READY STATUS RESTARTS AGE
demo-fr-backend-867bc8fc46-xdj7l 1/1 Running 0 2h
demo-fr-frontend-5577d588f9-m647s 1/1 Running 0 2h
demo-fr-ir-5d6954684f-l4wqb 1/1 Running 0 2h
demo-fr-ir-aci-67cd7d8bb8-5r8k5 0/1 Pending 0 14s
demo-fr-ir-aci-67cd7d8bb8-5s9pk 0/1 Pending 0 14s
demo-fr-ir-aci-67cd7d8bb8-66m5n 0/1 Pending 0 14s
demo-fr-ir-aci-67cd7d8bb8-lfvh6 0/1 Pending 0 14s
demo-fr-ir-aci-67cd7d8bb8-lsw85 0/1 Pending 0 14s
demo-fr-ir-aci-67cd7d8bb8-snl72 0/1 Pending 0 14s
demo-fr-ir-aci-67cd7d8bb8-szjkq 0/1 Pending 0 14s
demo-fr-ir-aci-67cd7d8bb8-tldbr 0/1 Pending 0 14s
demo-fr-ir-aci-847b4c59f8-28d9q 0/1 Pending 0 14s
demo-fr-ir-aci-847b4c59f8-8fkqq 0/1 Pending 0 14s
demo-fr-ir-aci-847b4c59f8-8t74k 0/1 Pending 0 14s
demo-fr-ir-aci-847b4c59f8-b6w8b 0/1 Pending 0 14s
demo-fr-ir-aci-847b4c59f8-v54nd 0/1 Pending 0 14s
myaciconnector-linux-virtual-kubelet-76864d69dd-lq4ff 1/1 Running 3 2h
Rias-MBP:community riabhatia$ kubectl logs myaciconnector-linux-virtual-kubelet-76864d69dd-lq4ff
/.kube/config
2018/01/22 21:48:56 Node 'virtual-kubelet-myaciconnector-linux' with OS type 'Linux' registered
2018/01/22 21:48:56 open : no such file or directory
2018/01/22 21:49:46 Failed to retrieve node: nodes "virtual-kubelet-myaciconnector-linux" not found
2018/01/22 21:50:31 Failed to retrieve node: nodes "virtual-kubelet-myaciconnector-linux" not found
2018/01/22 21:51:16 Failed to retrieve node: nodes "virtual-kubelet-myaciconnector-linux" not found

[AWS] implement provider for ECS and Fargate

Summary

Add a provider for AWS ECS and Fargate

Description

Provide the ability to use kubectl to deploy a pod spec to AWS ECS and also to AWS Fargate.

It should be possible to support deploying to both native ECS and to Fargate using the same provider. There are only slight differences in the task definitions between the two.

[Azure] Removing the connector doesn't remove the node

I removed the Windows connector with the az CLI; it removed the pod for the Windows connector but not the 2 nodes it had also created.

Rias-MacBook-Pro:examples riabhatia$ az aks remove-connector --resource-group virtualKubelet --name virtualK8sCluster --connector-name windows-connector
Merged "virtualK8sCluster" as current context in /var/folders/p2/6xnh48154w5_g8k_8pkfvkhm0000gn/T/tmpreiq5g88
Undeploying the aci-connector using Helm
release "windows-connector" deleted

Rias-MacBook-Pro:examples riabhatia$ kubectl get pods
NAME READY STATUS RESTARTS AGE
aci-demo-aci-demo-ir-1243301560-7l8qm 1/1 Running 0 15m
aci-demo-aci-demo-ir-2416843260-89ghx 1/1 Running 0 15m
demo-fr-ir-aci-1922196937-vm23w 1/1 Running 0 15m
demo-fr-ir-aci-1922196937-xwpx1 1/1 Running 0 15m
linux-connector-aci-connector-4084327094-5ct30 1/1 Running 0 17h
nginx 1/1 Running 0 17m
Rias-MacBook-Pro:examples riabhatia$ kubectl get node
NAME STATUS AGE VERSION
aci-connector Ready 17h v1.6.6
aci-connector-0 Ready 15m v1.6.6
aci-connector-1 Ready 15m v1.6.6
aks-nodepool1-21735328-0 Ready 21d v1.7.7

[Azure] The 'MemoryInGB' request is not specified

Hi there,

I'm just trying to follow the docs in the README and ran the following command:

./virtual-kubelet --provider azure

I'm getting the following error:

2017/12/30 19:59:22 Error creating pod 'kube-svc-redirect-5r8tq': api call to https://management.azure.com/subscriptions/<my subscription id>/resourceGroups/pegasus-aci/providers/Microsoft.ContainerInstance/containerGroups/kube-system-kube-svc-redirect-5r8tq?api-version=2017-12-01-preview: got HTTP response status code 400 error code "ResourceSomeRequestsNotSpecified": The 'MemoryInGB' request is not specified in 'Microsoft.Azure.CloudConsole.Providers.Common.Entities.ComputeResources' in container 'redirector' of contain group 'kube-system-kube-svc-redirect-5r8tq'. It is required since API version '2017-07-01-preview'.

Something about the "MemoryInGB" in the container group not being specified and being required in the preview. Any pointers in the right direction would be greatly appreciated. Thanks so much.

Ameer

GCP Provider?

This looks very promising as 'serverless' without the limitations (lock-in, function duration, event-driven model, resource size flexibility, tooling) for batch jobs that are not latency sensitive.

Does anyone know if GCP plans to launch a container instance service? If they offered pre-emptible instances this could result in very low cost containers.

Cluster Networking

Hi, Thanks for starting the great project!

One of my concerns with this kind of solution is whether it can support cluster networking, i.e. whether pods served by virtual-kubelet are able to:

  • Communicate with each other via pod and service IPs regardless of where the pods are - real or virtual nodes
  • Discover each other across real/virtual nodes

How would this be possible?
Are there any ongoing efforts I can track or contribute to?

[Azure] remove-connector functionality

I'm trying to remove the ACI connector through the command line. I've forgotten what I named my connector, and I'm not able to figure out the name through the az commands.

I'm pretty sure I named it linux-connector but helm can't find the release.

Rias-MacBook-Pro:local riabhatia$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
linux-connector-aci-connector-4084327094-gcw4s 1/1 Running 0 7d 10.244.0.18 aks-nodepool1-21735328-0
Merged "virtualK8sCluster" as current context in /var/folders/p2/6xnh48154w5_g8k_8pkfvkhm0000gn/T/tmpy_yooi0a
Undeploying the 'linux-connector-linux' using Helm

[Hyper] Issue tracker of hyper.sh provider

Jimmy and I have been refactoring and testing https://github.com/hyperhq/hyper.sh-connector-k8s with virtual-kubelet recently, so we created this issue to track this part of the work.

We will update the tracker periodically and also figure out which APIs cannot be implemented for now.

  • CreatePod(pod *v1.Pod) error
  • UpdatePod(pod *v1.Pod) error
  • DeletePod(pod *v1.Pod) error
  • GetPod(namespace, name string) (*v1.Pod, error)
  • GetPodStatus(namespace, name string) (*v1.PodStatus, error)
  • GetPods() ([]*v1.Pod, error)
  • Capacity() v1.ResourceList
  • NodeConditions() []v1.NodeCondition
  • OperatingSystem() string

PEM cert error

2018/01/23 01:25:24 tls: failed to find any PEM data in certificate input

Repro Steps

Install the connector

This cert is for kubectl logs -> if you don't add a cert, you will get this error, but it won't affect the connector.

[Azure] case sensitive in az aks install-connector command

When deploying the connector, you can only use lowercase names. This is probably a k8s issue, but it's documented here for now.

ria@Azure:~$ az aks install-connector -g connectorDemo -n cluster --connector-name aciconnectorRia
Merged "cluster" as current context in /tmp/tmpxlp3utzn
Deploying the aci-connector using Helm
Error: release aciconnectorRia failed: Secret "aciconnectorRia-aci-connector" is invalid: metadata.name: Invalid value: "aciconnectorRia-aci-connector": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is 'a-z0-9?(.a-z0-9?)*')

[Azure] Azure SDK for Go, update ACI

Hi, everyone!
First of all, this is a really awesome project, congrats!

Now I have a couple of questions regarding the Azure Container Instance provider:

  • why did you write your own client for ACI and not use the SDK?
  • as far as I last tried, ACI supports update - I am currently working on a sample and would love to contribute here 😄

Thanks,
Radu M

[Azure] Concatenate command and args for ACI


Environment summary

Provider (e.g. Azure, Hyper): Azure
Version (e.g. 0.1, 0.2-beta): latest
K8s Master Info (e.g. AKS, ACS, Bare Metal): AKS
Install Method (e.g. Helm Chart, ): Helm

Issue Details

Creating a pod using a pod YAML file that includes command and args parameters fails; the pod is not created successfully.

Repro Steps

apiVersion: v1
kind: Pod
metadata:
  name: debiandemo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: debian-container
    image: debian
    resources:
      requests:
        memory: 1G
        cpu: 1
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
  dnsPolicy: ClusterFirst
  nodeName: virtual-kubelet-aciconnector-linux
  tolerations:
  - key: azure.com/aci
    effect: NoSchedule

virtual-kubelet crashes when connecting to an existing Kubernetes cluster

As soon as I start virtual-kubelet, it crashes with the traces below.
The cluster does not have any DaemonSet applications. My guess is that K8s is trying to schedule kube-proxy and networking components onto the virtual-kubelet. Is there any plan for how to support kube-proxy scheduling?

Kubernetes version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
./virtual-kubelet --provider azure
/Users/pgogia/.kube/config
2017/12/11 18:46:28 Node 'virtual-kubelet' with OS type 'Linux' registered

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x1acc27e]

goroutine 1 [running]:
github.com/virtual-kubelet/virtual-kubelet/providers/azure.(*ACIProvider).getVolumes(0xc420328080, 0xc420208a80, 0x253a9b0, 0x0, 0x0, 0x0, 0x0)
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/providers/azure/aci.go:475 +0x20e
github.com/virtual-kubelet/virtual-kubelet/providers/azure.(*ACIProvider).CreatePod(0xc420328080, 0xc420208a80, 0x0, 0x0)
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/providers/azure/aci.go:115 +0x1e3
github.com/virtual-kubelet/virtual-kubelet/vkubelet.(*Server).createPod(0xc420386ea0, 0xc420208a80, 0xb, 0xc4202ab260)
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/vkubelet/vkubelet.go:274 +0x7c
github.com/virtual-kubelet/virtual-kubelet/vkubelet.(*Server).reconcile(0xc420386ea0)
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/vkubelet/vkubelet.go:251 +0x347
github.com/virtual-kubelet/virtual-kubelet/vkubelet.(*Server).Run(0xc420386ea0, 0xf, 0x1d6d8cf)
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/vkubelet/vkubelet.go:170 +0x32d
github.com/virtual-kubelet/virtual-kubelet/cmd.glob..func1(0x2513680, 0xc4201c1a80, 0x0, 0x2)
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/cmd/root.go:54 +0x1cd
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).execute(0x2513680, 0xc4200740a0, 0x2, 0x2, 0x2513680, 0xc4200740a0)
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:702 +0x2bd
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x2513680, 0x10067a4, 0xc42006e058, 0x0)
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:783 +0x349
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).Execute(0x2513680, 0x0, 0x0)
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/virtual-kubelet/virtual-kubelet/cmd.Execute()
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/cmd/root.go:61 +0x31
main.main()
	/Users/pgogia/workspace/golang/src/github.com/virtual-kubelet/virtual-kubelet/main.go:20 +0x20

Please provide IoT Edge/AWS Greengrass like virtual-kubelet

I could see a few options for this

  1. Take advantage of IoT Hub/Edge servers and device runtime and just implement a virtual-kubelet that takes advantage of it. Since right now IoT Edge seems not to allow arbitrary containers but needs them to adhere to a specific module interface, this might not be ideal.
  2. Take advantage of AWS Greengrass, this has similar drawbacks to IoT Hub/Edge.
  3. Create a new cloud service and device runtime just for this.

Illegal base64 data when using imagePullSecrets

Receiving the following error when using imagePullSecrets specified in the default service account:

2018/01/25 20:48:15 Error creating pod 'demo-fr-ir-aci-547fdc6c66-wjz7z': illegal base64 data at input byte 0

Removing the imagePullSecret from the service account resolves the error. Note the same pull secret is working for other non-VK workloads.

Docs on recommended way to do development

It's awesome that using ~/.kube/config means this works so smoothly, even with minikube.

It'd be great to have something in the README or in a separate development doc on the recommended way to work on existing providers or new ones.

Maybe it's just because I'm a noob and I'm not sure if I'm doing it right, but I basically did this:

go get github.com/virtual-kubelet/virtual-kubelet
cd $GOPATH/src/github.com/virtual-kubelet/virtual-kubelet
make build
./bin/virtual-kubelet --provider bash

[Azure] VK crashing

Installed the connector last week and now it's crashing:

goroutine 1 [running]:
github.com/virtual-kubelet/virtual-kubelet/manager.(*ResourceManager).incrementRefCounters(0xc42005dea0, 0xc42024d180)
/go/src/github.com/virtual-kubelet/virtual-kubelet/manager/resource.go:260 +0xda
github.com/virtual-kubelet/virtual-kubelet/manager.(*ResourceManager).SetPods(0xc42005dea0, 0xc42031c380)
/go/src/github.com/virtual-kubelet/virtual-kubelet/manager/resource.go:89 +0x34f
github.com/virtual-kubelet/virtual-kubelet/vkubelet.(*Server).Run(0xc4200426c0, 0x22, 0x7ffe15c38b13)
/go/src/github.com/virtual-kubelet/virtual-kubelet/vkubelet/vkubelet.go:169 +0x2fa
github.com/virtual-kubelet/virtual-kubelet/cmd.glob..func1(0x194a040, 0xc42014c600, 0x0, 0x8)
/go/src/github.com/virtual-kubelet/virtual-kubelet/cmd/root.go:54 +0x1cd
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).execute(0x194a040, 0xc42000e0a0, 0x8, 0x8, 0x194a040, 0xc42000e0a0)
/go/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:702 +0x2c6
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x194a040, 0x194a260, 0xc42006e3c0, 0x194a180)
/go/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:783 +0x30e
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).Execute(0x194a040, 0x0, 0xc4205b5f48)
/go/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/virtual-kubelet/virtual-kubelet/cmd.Execute()
/go/src/github.com/virtual-kubelet/virtual-kubelet/cmd/root.go:61 +0x31
main.main()
/go/src/github.com/virtual-kubelet/virtual-kubelet/main.go:20 +0x20

[VK & Azure] add DNS name support

Add DNS name support to VK and connect the ACI API to it. Use an annotation within ACI pod specs to create a DNS name, but we will not resolve conflicts if it is used within a Kubernetes deployment for ACI through the connector.


Environment summary

Provider (e.g. Azure, Hyper): Azure

[Azure] arm throttling limits due to incorrect region

We need to add a check in our provider to see if the user is trying to spin up container instances in an unsupported region; if so, DO NOT call ARM, and throw a helpful error instead.


Environment summary

Provider (e.g. Azure, Hyper): Azure
Version (e.g. 0.1, 0.2-beta): latest
K8s Master Info (e.g. AKS, ACS, Bare Metal): AKS
Install Method (e.g. Helm Chart, ): helm chart

Issue Details

ARM throttling limits

Repro Steps

Create container instances in a resource group in Canada, or any other region ACI doesn't support, and you will reach an ARM throttling limit.

[Marathon] Add a provider for Marathon / DC/OS

Summary

Add a provider for Marathon (and DC/OS respectively)

Description

Provide the possibility to deploy a pod spec to an existing vanilla Mesos or DC/OS cluster running Marathon as scheduler.

Questions

  • Are there any details regarding networking between pods/services running in different environments/platforms? That's something I didn't understand, but I'm quite new to k8s

Issue running Helm chart

When running the helm chart as described in the README.md file, the following is returned.

Error: gzip: invalid header

I do not think this is related, but if helpful, here are the values used with the chart. Are these correct?

  • env.azureClientId - Service principal app id
  • env.azureClientKey - Service principal password
  • env.azureTenantId - Service principal tenant
  • env.azureSubscriptionId - Azure subscription id
  • env.aciResourceGroup - Name of empty resource group
  • env.nodeName - Name for node, "virtual-kublet"
  • env.nodeOsType - "Linux"

[HELM] Can't access chart for helm w/ az aks install-connector

Sometimes files aren't created when using helm.

The error people get when using the command
'az aks install-connector -g -n --connector-name
Error: file "https://github.com/Azure/aci-connector-k8s/raw/master/charts/aci-connector.tgz" not found

helm/helm#2071

Workaround:
‘helm init’
‘helm repo update’

Assuming you are on Windows, navigate to and make sure these files exist:
“C:\Users<user>.helm\repository\local\index.yaml”
"C:\Users<user>.helm\repository\cache\local-index.yaml"

If the second file "C:\Users<user>.helm\repository\cache\local-index.yaml" doesn't exist, copy the contents from the index.yaml file and create a new one with the added content.

[Hyper] If the config file is not set, hyper provider will crash when pulling images

The AuthConfig is only set when there is a config file (and the env variables for key and secret are not set), but it is used by ensureImage nevertheless.

Steps to reproduce:

  • Set HYPER_ACCESS_KEY and HYPER_SECRET_KEY env variables
  • Create a pod using an image that you haven't pulled yet

Result:

The Hyper provider crashes:

/Users/ara/.kube/config
2017/12/28 11:50:21 Use AccessKey and SecretKey from HYPER_ACCESS_KEY and HYPER_SECRET_KEY
2017/12/28 11:50:21
 Host: tcp://eu-central-1.hyper.sh:443
 AccessKey: 4**********
 SecretKey: 4**********
 InstanceType: s2
2017/12/28 11:50:22 Node 'virtual-kubelet' with OS type 'Linux' registered
2017/12/28 11:50:22 receive GetPods
2017/12/28 11:50:22 found 0 pods
2017/12/28 11:50:37 receive GetPods
2017/12/28 11:50:37 found 0 pods
2017/12/28 11:50:37 Error retrieving pod 'nginx' from provider: Error: No such container: pod-nginx-nginx
2017/12/28 11:50:37 receive CreatePod "nginx"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1b48aa5]

goroutine 1 [running]:
github.com/virtual-kubelet/virtual-kubelet/providers/hypersh.(*HyperProvider).ensureImage(0xc42020d5c0, 0xc42045af80, 0xc, 0x0, 0x0)
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/providers/hypersh/util.go:337 +0x105
github.com/virtual-kubelet/virtual-kubelet/providers/hypersh.(*HyperProvider).CreatePod(0xc42020d5c0, 0xc420a0ea80, 0x0, 0x0)
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/providers/hypersh/hypersh.go:219 +0x652
github.com/virtual-kubelet/virtual-kubelet/vkubelet.(*Server).createPod(0xc4202104e0, 0xc420a0ea80, 0xc420979a40, 0x2)
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/vkubelet/vkubelet.go:274 +0x6d
github.com/virtual-kubelet/virtual-kubelet/vkubelet.(*Server).reconcile(0xc4202104e0)
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/vkubelet/vkubelet.go:251 +0x410
github.com/virtual-kubelet/virtual-kubelet/vkubelet.(*Server).Run(0xc4202104e0, 0xf, 0x1dabfb7)
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/vkubelet/vkubelet.go:193 +0x449
github.com/virtual-kubelet/virtual-kubelet/cmd.glob..func1(0x25d4a20, 0xc42028d7b0, 0x0, 0x1)
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/cmd/root.go:54 +0x1cd
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).execute(0x25d4a20, 0xc42000a070, 0x1, 0x1, 0x25d4a20, 0xc42000a070)
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:702 +0x2c6
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x25d4a20, 0x25d4c40, 0x1b66736, 0x25d4b60)
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:783 +0x30e
github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra.(*Command).Execute(0x25d4a20, 0x0, 0xc42058bf48)
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/virtual-kubelet/virtual-kubelet/cmd.Execute()
	/Users/ara/go/src/github.com/virtual-kubelet/virtual-kubelet/cmd/root.go:61 +0x31
main.main()

[Hyper] Build Break after Run "dep ensure"

After I use dep ensure to rebuild the vendor folders, the files in the vendor/github.com/hyperhq folder change a lot and cause a build break:

Building...
# github.com/virtual-kubelet/virtual-kubelet/vendor/github.com/hyperhq/hypercli/registry
vendor/github.com/hyperhq/hypercli/registry/registry.go:43:11: tlsConfig.InsecureSkipVerify undefined (type func() *tls.Config has no field or method InsecureSkipVerify)
vendor/github.com/hyperhq/hypercli/registry/registry.go:48:32: cannot use &tlsConfig (type *func() *tls.Config) as type *tls.Config in argument to ReadCertsDirectory
vendor/github.com/hyperhq/hypercli/registry/registry.go:53:9: cannot use &tlsConfig (type *func() *tls.Config) as type *tls.Config in return argument
vendor/github.com/hyperhq/hypercli/registry/registry.go:222:13: cannot use &cfg (type *func() *tls.Config) as type *tls.Config in assignment
vendor/github.com/hyperhq/hypercli/registry/service_v1.go:21:13: cannot use tlsConfig (type *func() *tls.Config) as type *tls.Config in field value
vendor/github.com/hyperhq/hypercli/registry/service_v1.go:32:17: cannot assign *tls.Config to tlsConfig (type *func() *tls.Config) in multiple assignment
vendor/github.com/hyperhq/hypercli/registry/service_v1.go:42:13: cannot use tlsConfig (type *func() *tls.Config) as type *tls.Config in field value
vendor/github.com/hyperhq/hypercli/registry/service_v1.go:46:14: tlsConfig.InsecureSkipVerify undefined (type *func() *tls.Config has no field or method InsecureSkipVerify)
vendor/github.com/hyperhq/hypercli/registry/service_v1.go:52:13: cannot use tlsConfig (type *func() *tls.Config) as type *tls.Config in field value
vendor/github.com/hyperhq/hypercli/registry/service_v2.go:37:13: cannot use tlsConfig (type *func() *tls.Config) as type *tls.Config in field value
vendor/github.com/hyperhq/hypercli/registry/service_v2.go:37:13: too many errors
Makefile:24: recipe for target 'build' failed
make: *** [build] Error 2

dep ensure re-syncs the code from GitHub. However, the hyperhq code in the vendor folder doesn't match the GitHub one.

circle-ci changes

The CircleCI configuration should use the Makefile targets for build and test.

Can't get Load Balancer IP for any service when installed

I followed the updated documentation that was supposed to fix this by creating a more powerful service principal, but I still get this error.

Any service with an external IP that I try to deploy just stays pending forever.

az ad sp create-for-rbac --name sp-arch-aks-aci --role=Contributor --scopes /subscriptions/{subscriptionid}/ --password {password}

When I look at the logs for

helm install --name vk-linux -f values-linux.yaml "https://github.com/virtual-kubelet/virtual-kubelet/raw/master/charts/virtual-kubelet-0.1.0.tgz"
helm install --name vk-windows -f values-windows.yaml "https://github.com/virtual-kubelet/virtual-kubelet/raw/master/charts/virtual-kubelet-0.1.0.tgz"

--values-linux.yaml--
image:
  repository: microsoft/virtual-kubelet
  tag: latest
  pullPolicy: Always
env:
  azureClientId: {clientid}
  azureClientKey: {clientkey}
  azureTenantId: {tenantid}
  azureSubscriptionId: {subscriptionid}
  aciResourceGroup: rgp-use-arch-aci
  aciRegion: eastus
  nodeName: virtual-kubelet-linux
  nodeTaint: azure.com/aci-linux
  nodeOsType: Linux

--values-windows.yaml--
image:
  repository: microsoft/virtual-kubelet
  tag: latest
  pullPolicy: Always
env:
  azureClientId: {clientid}
  azureClientKey: {clientkey}
  azureTenantId: {tenantid}
  azureSubscriptionId: {subscriptionid}
  aciResourceGroup: rgp-use-arch-aci
  aciRegion: eastus
  nodeName: virtual-kubelet-windows
  nodeTaint: azure.com/aci-windows
  nodeOsType: Windows

/.kube/config
2017/12/23 06:01:57 Node 'virtual-kubelet-linux' with OS type 'Linux' registered

/.kube/config
2017/12/23 06:03:49 Node 'virtual-kubelet-windows' with OS type 'Windows' registered

Writing back information to a pod

Maybe I'm too dumb to figure it out, but it looks like the current implementation of virtual-kubelet does not allow us to use the Pod to write back information that could ease the mapping between a Pod in Kubernetes land and the resource in the provider.
I would envision storing an annotation to make the mapping between the two systems much easier (e.g. the ARN of a task in AWS ECS, to get a 1:1 mapping with a Pod).
Is this possible as of today and, if not, what do you think about this idea?

Definition of a provider

Create a definition of what it means to be a provider for Virtual Kubelet.

Some examples:
A provider is a company, or organization with a "Pods as a Service" service that they provide to customers.

A provider is anything that can replicate the create, delete, and more to what the Virtual Kubelet interface offers.
