
kubevirt / kubevirt


Kubernetes Virtualization API and runtime in order to define and manage virtual machines.

Home Page: https://kubevirt.io

License: Apache License 2.0

Makefile 0.06% Go 95.10% Shell 1.83% Python 0.02% Dockerfile 0.04% Starlark 2.91% C 0.04%
kubernetes libvirt virtualization vms hacktoberfest

kubevirt's Introduction

KubeVirt


KubeVirt is a virtual machine management add-on for Kubernetes. The aim is to provide a common ground for virtualization solutions on top of Kubernetes.

Introduction

Virtualization extension for Kubernetes

At its core, KubeVirt extends Kubernetes by adding additional virtualization resource types (especially the VM type) through Kubernetes's Custom Resource Definitions API. By using this mechanism, the Kubernetes API can be used to manage these VM resources alongside all other resources Kubernetes provides.

The resources themselves are not enough to launch virtual machines. For this to happen, the functionality and business logic need to be added to the cluster. This functionality is not added to Kubernetes itself; instead, it is added to an existing Kubernetes cluster by running additional controllers and agents on top of it.

The necessary controllers and agents are provided by KubeVirt.

As of today, KubeVirt can be used to declaratively:

  • Create a predefined VM
  • Schedule a VM on a Kubernetes cluster
  • Launch a VM
  • Stop a VM
  • Delete a VM

To start using KubeVirt

Try our quickstart at kubevirt.io.

See our user documentation at kubevirt.io/docs.

Once you have the basics, you can learn more about how to run KubeVirt and its newest features by taking a look at:

To start developing KubeVirt

To set up a development environment please read our Getting Started Guide. To learn how to contribute, please read our contribution guide.

You can learn more about how KubeVirt is designed (and why it is that way), and learn more about the major components by taking a look at our developer documentation:

Useful links

The KubeVirt SIG-release repo is responsible for information regarding upcoming and previous releases.

Community

If you have had enough of code and want to speak to people, you have a couple of options:

Related resources

Submitting patches

When sending patches to the project, the submitter is required to certify that they have the legal right to submit the code. This is achieved by adding a line

Signed-off-by: Real Name <[email protected]>

to the bottom of every commit message. The existence of such a line certifies that the submitter has complied with the Developer's Certificate of Origin 1.1 (as defined in the file docs/developer-certificate-of-origin).

This line can be automatically added to a commit in the correct format by using the '-s' option to 'git commit'.

License

KubeVirt is distributed under the Apache License, Version 2.0.

Copyright 2016

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


kubevirt's People

Contributors

acardace, ahadas, alicefr, alonakaplan, awels, booxter, dankenigsberg, davidvossel, dhiller, eddev, enp0s3, fabiand, fossedihelm, iholder101, jean-edouard, kubevirt-bot, lyarwood, maiqueb, mhenriks, orelmisan, ormergi, oshoval, petrkotas, rmohr, shellyka13, slintes, stu-gott, vasiliy-ul, vladikr, xpivarc


kubevirt's Issues

Implement `state.domain`

state.domain within a VM object should provide an up-to-date VM status.
spec.domain should reflect what we are required to have, and state.domain should reflect what we currently have.
The state actually also contains calculated values like MAC and PCI addresses.

The delta of the two can be valuable.

I consider this to be a bug, because it's a core feature to get the state of an object.
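
For illustration only, the spec/state split described here could look roughly like this; the type and field names are assumptions, not the actual KubeVirt API:

package v1

// Sketch only: a VM object carrying both the desired domain (spec) and the
// observed domain (state).
type VM struct {
        Spec  VMSpec  // what we are required to have
        State VMState // what we currently have
}

type VMSpec struct {
        Domain *DomainSpec // desired domain configuration
}

type VMState struct {
        // Observed domain, including calculated values such as MAC and PCI addresses.
        Domain *DomainSpec
}

// DomainSpec stands in for the full libvirt domain representation.
type DomainSpec struct{}

Comparing Spec.Domain and State.Domain would then give exactly the delta mentioned above.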

virt-handler should register on the apiserver

At the moment, when proxying VM connections, we have no way to know the virt-handler port for sure. virt-handler should register itself somewhere, so that we can look up the connection details.

Maybe adding something to the taints section of the node it is running on is good enough.

Unit tests are very slow

$ time go test
Running Suite: V1 Suite
=======================
Random Seed: 1482141367
Will run 2 of 3 specs

•
------------------------------
P [PENDING]
Mapper
/home/rmohr/go/src/kubevirt.io/kubevirt/pkg/api/v1/mapper_test.go:52
  With a v1.VM supplied
  /home/rmohr/go/src/kubevirt.io/kubevirt/pkg/api/v1/mapper_test.go:40
    should map to VM
    /home/rmohr/go/src/kubevirt.io/kubevirt/pkg/api/v1/mapper_test.go:39
------------------------------
•
Ran 2 of 3 Specs in 0.001 seconds
SUCCESS! -- 2 Passed | 0 Failed | 1 Pending | 0 Skipped PASS
ok  	kubevirt.io/kubevirt/pkg/api/v1	0.056s

real	0m4.215s
user	0m5.575s
sys	0m0.290s

Running the api/v1 unit tests takes 56 ms, but collecting and compiling them takes more than 5 seconds.

This probably has the same root cause as the reports at onsi/ginkgo#258.

Idea: No libvirt auth by default

We assume there is a secure environment.

For now I'd thus suggest dropping the default password that virt-handler currently assumes.

Check if VM to sync has the right UUID in virt-handler

Virt-handler reacts to changes. Currently, if you delete a VM with name "testvm" and then immediately recreate it, virt-handler might not process the delete event and instead immediately see the new "testvm" object.

Therefore, in DomainManager.SyncVM() in virt-handler, check if the UUIDs match. If they don't, first delete the already defined domain via DomainManager.KillVM().
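
A minimal sketch of that check, using simplified stand-in types rather than the real DomainManager interface:

package virthandler

// Sketch only: simplified stand-ins for the real virt-handler types.
type VM struct {
        Name string
        UID  string
}

type DomainManager struct {
        // Maps a defined libvirt domain name to the UUID it was created with.
        domains map[string]string
}

// SyncVM verifies that an already defined domain really belongs to this VM
// before syncing it.
func (d *DomainManager) SyncVM(vm *VM) error {
        if existingUUID, ok := d.domains[vm.Name]; ok && existingUUID != vm.UID {
                // The domain belongs to a deleted VM with the same name: remove it first.
                if err := d.KillVM(vm); err != nil {
                        return err
                }
        }
        // ... define and start the domain for the current VM ...
        d.domains[vm.Name] = vm.UID
        return nil
}

// KillVM removes the stale domain (destroy/undefine in the real implementation).
func (d *DomainManager) KillVM(vm *VM) error {
        delete(d.domains, vm.Name)
        return nil
}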

Don't run our containers with root user

The following containers should not be run as root:

  • virt-api (#257)
  • haproxy (#270)
  • virt-controller (#271)
  • squid-proxy (#282)
  • virt-handler (this container is privileged, let's see if we can then still talk to libvirt if we are not root)

Launching VM Pod succeeds but reports error

level=error timestamp=2017-02-27T16:17:03.105612Z pos=vm.go:102 component=virt-controller service=http name=testvm kind= uid=2c94abd5-fd08-11e6-95e4-5254009a1995 reason="Pod \"virt-launcher-testvm-----ruqf2\" is invalid: spec.containers[0].ports[0].containerPort: Required value" msg="Defining a target pod for the VM."

Because the task is enqueued and tests for the existence of the pod (which, for some reason, is there), the process succeeds, even though it has reported that containerPort: Required value is missing.

move migration controlling logic to virt-handler

The current migration implementation is moving towards having its own MigrationController running as a separate service within its own pod. This issue is grounds for discussion about actually moving the logic into virt-handler itself and implementing it as another watch loop.

The main advantage of having migration as a watcher within virt-handler is that we can easily access migration status and metrics from the domain/VM watch loop. Our experience has shown that migration plays an important role in correctly reflecting the domain/VM lifecycle. There is, at the moment, no code that requires access to migration status/data, but it will help us in the long run when we're making sure that the VM phase really handles migrations well.

Another benefit is that we don't need multiple libvirt connections and event handlers. Having each bigger piece of functionality connect to libvirt could pose issues for node scaling. That being said, having the watch in virt-handler shouldn't pose any issue for cluster scalability: the main migration bottleneck is the migration itself, not its management. Running 50 migrations barely makes any sense from a network bandwidth perspective.

Disadvantages:

  • virt-handler becomes slightly bigger. Probably not a short-term issue, but code bloat could become a problem in the long term.
  • Migration status overlaps with the VM status logic. That is unfortunately not solvable on this layer.
  • It is not a very Kubernetes-native approach.

Technical proposal:

  • keep migration TPR
  • create new virt-handler watch loop (pkg/virt-handler/migration.py)
  • (?) initiate the loop in cmd/virt-handler.go while making sure that the migration informer can be passed to vm/domain informers (e.g. type MigrationController passable as arg to NewDomainController)
  • for the migration informer, use libvirt API to initiate migrations (virDomainMigrateToURI3)

To decide:

  • who creates destination pod?

Changes to suggested user flows:
Hopefully none. Migrations are still created/stopped/monitored via the TPR (kubectl create -f migration.json etc.).

Consider this a quick write-up, and feel free to add other (dis)advantages or discuss the topic.
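
For discussion, a rough skeleton of such an additional watch loop inside virt-handler; all types, fields, and the migrate hook are illustrative stand-ins, not a proposed implementation:

package virthandler

// Skeleton only: a queue-driven loop that processes Migration objects.
type Migration struct {
        Name   string
        VMName string
}

type MigrationController struct {
        queue   chan *Migration
        migrate func(migration *Migration) error // e.g. a wrapper around virDomainMigrateToURI3
}

// Run consumes migration work items until the stop channel is closed.
func (c *MigrationController) Run(stop <-chan struct{}) {
        for {
                select {
                case migration := <-c.queue:
                        if err := c.migrate(migration); err != nil {
                                // A real loop would update the Migration status and re-queue here.
                                continue
                        }
                case <-stop:
                        return
                }
        }
}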

haproxy fails to forward to virt-api in some cases

There are currently two scenarios where haproxy stops forwarding to virt-api:

  • virt-api pod not yet up (and service not ready) and you want to access it
  • kubernetes dns server is not reachable when we first try to access virt-api

If one of these situations happens, you will get a "503" from haproxy and it will not retry.
We should probably take this to the haproxy mailing list too. I would expect haproxy to retry later on instead of refusing to work forever.

virt-api is too strict on object names

Kubernetes allows lowercase alphanumeric characters, '-', and '.' to be part of an object name.

Our virt-api REST paths only allow alphanumeric values:

$ cluster/kubectl.sh delete migrations testvm-migration -v 9
I0228 17:41:15.274071   12765 helpers.go:203] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server rejected our request for an unknown reason (delete migrations.kubevirt.io testvm-migration)",
  "reason": "BadRequest",
  "details": {
    "name": "testvm-migration",
    "group": "kubevirt.io",
    "kind": "migrations",
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "Decode: Variable 'name' does not validate as alphanumeric."
      }
    ]
  },
  "code": 400
}]
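
For comparison, a hedged sketch of a less strict name check that mirrors the Kubernetes DNS-1123 subdomain rule (lowercase alphanumerics, '-' and '.') instead of plain "alphanumeric"; the function name is made up for the example:

package virtapi

import "regexp"

// objectNameRegexp mirrors the DNS-1123 subdomain pattern Kubernetes uses for
// object names.
var objectNameRegexp = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

// isValidObjectName accepts the same names Kubernetes does, up to 253 characters.
func isValidObjectName(name string) bool {
        return len(name) <= 253 && objectNameRegexp.MatchString(name)
}

With such a rule, a name like "testvm-migration" passes validation.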

weave 1.9.1 does not come up on CentOS7

Weave now seems to have added an additional check for whether the module br_netfilter is present. If it is not, weave does not start. The problem is that on CentOS7 it really is not present, because it is built into the kernel.

Created weaveworks/weave#2820 and am trying to find a version that works for us in the meantime.

On libvirt reconnects, re-register event callbacks

Currently, when we lose the connection to libvirt, we reconnect on the next attempt to make a libvirt call, but we don't register the event callbacks again.

That leads to a situation where everything seems to be normal, but we don't get any libvirt events anymore.
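
One possible shape for this, sketched with generic function values instead of the real libvirt-go calls:

package virthandler

import "sync"

// Sketch: remember every callback registration so it can be replayed after a
// reconnect, instead of silently losing events. The register functions stand in
// for the real libvirt event registration calls.
type MonitoredConnection struct {
        lock      sync.Mutex
        connect   func() error   // establishes the underlying libvirt connection
        callbacks []func() error // replayable event callback registrations
}

// RegisterCallback registers the callback now and remembers it for later reconnects.
func (c *MonitoredConnection) RegisterCallback(register func() error) error {
        c.lock.Lock()
        defer c.lock.Unlock()
        c.callbacks = append(c.callbacks, register)
        return register()
}

// Reconnect re-establishes the connection and re-registers all known callbacks.
func (c *MonitoredConnection) Reconnect() error {
        c.lock.Lock()
        defer c.lock.Unlock()
        if err := c.connect(); err != nil {
                return err
        }
        for _, register := range c.callbacks {
                if err := register(); err != nil {
                        return err
                }
        }
        return nil
}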

segfault if libvirtd is not running

The virt-handler segfaults with the following trace if libvirtd is not running:

 Jan 04 09:12:50 demo.kubevirt.io docker-current[10639]: Hostname: demo.kubevirt.io
Jan 04 09:12:50 demo.kubevirt.io docker-current[10639]: panic: runtime error: invalid memory address or nil pointer dereference
Jan 04 09:12:50 demo.kubevirt.io docker-current[10639]: [signal 0xb code=0x1 addr=0x0 pc=0x40b131]
Jan 04 09:12:50 demo.kubevirt.io docker-current[10639]: 
Jan 04 09:12:50 demo.kubevirt.io docker-current[10639]: goroutine 1 [running]:
Jan 04 09:12:50 demo.kubevirt.io docker-current[10639]: panic(0x18bdea0, 0xc820014070)
Jan 04 09:12:50 demo.kubevirt.io docker-current[10639]:         /home/travis/.gimme/versions/go1.6.3.linux.amd64/src/runtime/panic.go:481 +0x3e6
Jan 04 09:12:50 demo.kubevirt.io docker-current[10639]: main.main()
Jan 04 09:12:50 demo.kubevirt.io docker-current[10639]:         /home/travis/gopath/src/kubevirt.io/kubevirt/cmd/virt-handler/virt-handler.go:50 +0x1341

Can this error be made a bit nicer?

Change core manifests from Pod to Deployment

In manifests/*.yaml.in we have a few cluster-wide services which are currently deployed as a Pod. Changing them to Deployments has the benefit that they survive restarts in our vagrant setup.

Definitions to change are:

  • squid proxy (squid.yaml.in)
  • haproxy (haproxy.yaml.in)
  • virt-api (virt-api.yaml.in)
  • virt-controller (virt-controller.yaml.in)

Set fixed values like the emulator in virt-handler instead of virt-controller

Currently we set some Domain attributes like UUID, Name and Emulator in virt-controller.

They should actually be set in the domain manager in virt-handler, and these fields should not even be part of v1.DomainSpec. The Emulator is not important for the cluster-wide view, and UUID and Name are duplicates of the v1.VM metadata.

  • Remove these fields from v1.DomainSpec
  • Set fields in manager.SyncVM before starting or updating a VM

Support for KubeVirt VMs and other hosts in the same subnets

Hi,
I have a question. From my point of view, KubeVirt wants to launch the VM as if the VM were running in a pod.
I want to know how the network between the VM and the container works. And if another host wants to ssh into the VM, is that supported, and are they kept in the same subnets?

Thanks!

Move all verbs to `virtctl`

PR #178 adds a virtctl command.

This will be used in the future as a kubectl plugin; to enable this, we should already move all verbs from cluster/kubectl.sh to virtctl.

Make virt-api REST interface and swagger compatible with apiserver

This is a tracking bug to show what needs to be done to integrate the KubeVirt swagger cleanly with the Kubernetes swagger, make virt-api compatible with Kubernetes REST expectations, and work around apiserver limitations.

Swagger integration:

  • Merge Kubernetes and KubeVirt Swagger documentation
  • Document all TPR paths and methods
  • Document all supported query parameters
  • Document path parameters
  • Document the existence of the swagger docs

REST compatibility:

  • Support proxying the registered group (apis/kubevirt.io), to allow kubectl to autodiscover our TPRs
  • Proxy watches
  • Proxy Get calls for collections of VMs
  • Proxy Deletes for VM collections
  • Support export query param
  • Support yaml for TPRs
  • Support OPTIONS

TPR Limitations to work around:

  • Support fieldSelector for status.phase and status.NodeName
  • Name the TPR "VirtualMachine" and register a shortcut "VM" in the autodiscovery
  • Support https://github.com/evanphx/json-patch
  • Support strategic merge patch

When we are done, we can do the following:

  • Browse a combined swagger-api definition of kubevirt.io and kubernetes.io
  • Proxy everything starting with /apis/kubevirt.io through virt-api
  • Give the VM TPR the look and the feel of a First Class Resource like Pod

Qemu instances are killed if the Libvirt pod goes down.

Our current approach leverages libvirt inside a container. This libvirt process starts qemu instances as virt-handler requests them. Since this libvirt container originally started those processes, if that pod goes down or is restarted, each qemu instance it created is also killed.

In the normal workflow, this is not an issue. If Libvirt stays running, so will each qemu instance. If the libvirt pod needs to be updated, an admin can simply evacuate the host before updating. The concern here is error/edge cases. If libvirt were to crash for some reason, each qemu instance spawned by it would be terminated.

Possible mitigation strategies include:

  1. Adding systemd to the libvirt container (so that the pod will continue to run even if the libvirt process terminates, and systemd will restart it). This doesn't address the case where the pod itself goes down for some reason.
  2. Make the virt-launcher pod responsible for starting the qemu process. This will require some changes to our qemu-kvm wrapper script and some glue on the virt-launcher side. This obviously increases the complexity of our approach a bit, so we'll need to ensure this mechanism is robust/secure if we choose this approach.

Iscsi target pod's status goes from "Running" to "Error", then "CrashLoopBackOff"

The iscsi target pod's status goes from "Running" to "Error", then to "CrashLoopBackOff".
Has anyone else hit this issue?

iscsi-demo-target-tgtd-156604030-0fcc8   0/1       CrashLoopBackOff   7          13m
[root@master manifests]# kubectl describe pod iscsi-demo-target-tgtd-156604030-0fcc8
Name:		iscsi-demo-target-tgtd-156604030-0fcc8
Namespace:	default
Node:		master/192.168.121.60
Start Time:	Fri, 07 Apr 2017 06:05:50 +0000
Labels:		app=iscsi-demo-target
		name=iscsi-demo-target-tgtd
		pod-template-hash=156604030
Status:		Running
IP:		10.32.0.141
Controllers:	ReplicaSet/iscsi-demo-target-tgtd-156604030
Containers:
  target:
    Container ID:	docker://19b30a6f81502d35acbdc6bf5e9c372f0245bf8eac3a664b8aed6ce6ae87897c
    Image:		kubevirt/iscsi-demo-target-tgtd:devel
    Image ID:		docker://sha256:758bf175aaed63670f8be3acefd26669f06dd96df88ed0308374e0587ab3194c
    Port:		3260/TCP
    State:		Waiting
      Reason:		CrashLoopBackOff
    Last State:		Terminated
      Reason:		Error
      Exit Code:	107
      Started:		Fri, 07 Apr 2017 06:12:06 +0000
      Finished:		Fri, 07 Apr 2017 06:12:09 +0000
    Ready:		False
    Restart Count:	6
    Volume Mounts:
      /host from host (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-efw9d (ro)
    Environment Variables:
      EXPORT_HOST_PATHS:	
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  host:
    Type:	HostPath (bare host directory volume)
    Path:	/
  default-token-efw9d:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-efw9d
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath		Type		Reason		Message
  ---------	--------	-----	----			-------------		--------	------		-------
  9m		9m		1	{default-scheduler }				Normal		Scheduled	Successfully assigned iscsi-demo-target-tgtd-156604030-0fcc8 to master
  9m		9m		1	{kubelet master}	spec.containers{target}	Normal		Created		Created container with docker id 26b108409850; Security:[seccomp=unconfined]
  9m		9m		1	{kubelet master}	spec.containers{target}	Normal		Started		Started container with docker id 26b108409850
  9m		9m		1	{kubelet master}	spec.containers{target}	Normal		Created		Created container with docker id e87e379ac06d; Security:[seccomp=unconfined]
  9m		9m		1	{kubelet master}	spec.containers{target}	Normal		Started		Started container with docker id e87e379ac06d
  9m		9m		2	{kubelet master}				Warning		FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "target" with CrashLoopBackOff: "Back-off 10s restarting failed container=target pod=iscsi-demo-target-tgtd-156604030-0fcc8_default(4032060c-1b58-11e7-9036-525400b58bb7)"

  9m	9m	1	{kubelet master}	spec.containers{target}	Normal	Created		Created container with docker id 1b0a16f15c1b; Security:[seccomp=unconfined]
  9m	9m	1	{kubelet master}	spec.containers{target}	Normal	Started		Started container with docker id 1b0a16f15c1b
  9m	8m	3	{kubelet master}				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "target" with CrashLoopBackOff: "Back-off 20s restarting failed container=target pod=iscsi-demo-target-tgtd-156604030-0fcc8_default(4032060c-1b58-11e7-9036-525400b58bb7)"

  8m	8m	1	{kubelet master}	spec.containers{target}	Normal	Created		Created container with docker id f78e35632bd4; Security:[seccomp=unconfined]
  8m	8m	1	{kubelet master}	spec.containers{target}	Normal	Started		Started container with docker id f78e35632bd4
  8m	8m	4	{kubelet master}				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "target" with CrashLoopBackOff: "Back-off 40s restarting failed container=target pod=iscsi-demo-target-tgtd-156604030-0fcc8_default(4032060c-1b58-11e7-9036-525400b58bb7)"

  7m	7m	1	{kubelet master}	spec.containers{target}	Normal	Created		Created container with docker id 27b0c5315e13; Security:[seccomp=unconfined]
  7m	7m	1	{kubelet master}	spec.containers{target}	Normal	Started		Started container with docker id 27b0c5315e13
  7m	6m	7	{kubelet master}				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "target" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=target pod=iscsi-demo-target-tgtd-156604030-0fcc8_default(4032060c-1b58-11e7-9036-525400b58bb7)"

  6m	6m	1	{kubelet master}	spec.containers{target}	Normal	Created		Created container with docker id e958969a71e3; Security:[seccomp=unconfined]
  6m	6m	1	{kubelet master}	spec.containers{target}	Normal	Started		Started container with docker id e958969a71e3
  6m	3m	13	{kubelet master}				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "target" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=target pod=iscsi-demo-target-tgtd-156604030-0fcc8_default(4032060c-1b58-11e7-9036-525400b58bb7)"

  9m	3m	7	{kubelet master}	spec.containers{target}	Normal	Pulled		Container image "kubevirt/iscsi-demo-target-tgtd:devel" already present on machine
  3m	3m	1	{kubelet master}	spec.containers{target}	Normal	Created		Created container with docker id 19b30a6f8150; Security:[seccomp=unconfined]
  3m	3m	1	{kubelet master}	spec.containers{target}	Normal	Started		Started container with docker id 19b30a6f8150
  9m	9s	46	{kubelet master}	spec.containers{target}	Warning	BackOff		Back-off restarting failed docker container
  3m	9s	17	{kubelet master}				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "target" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=target pod=iscsi-demo-target-tgtd-156604030-0fcc8_default(4032060c-1b58-11e7-9036-525400b58bb7)"


VMs should allow being created without running

Currently the assumption is that VMs are running as long as they are defined in the cluster. To stop them, the object needs to be removed.
This was chosen to match a pod's behavior.

But eventually it makes sense to allow stopped VMs in KubeVirt. The reason is that VMs are stateful, and thus their state outlives their life-cycle.

My suggestion: allow VMs to be stopped, and allow them to be created in a stopped state. With such a change, KubeVirt could also act as a VM store.

Windows 10 does not run out-of-the-box

Installing Windows 10 (evaluation) on KubeVirt does not work. It hangs during boot or shows a kernel oops.

Some items:

  • VM API needs to expose all features which windows needs to have configured
    • SMBIOS UUID
    • Machine type features #606
    • CPU features #606
    • Clock things #606
  • Optional: Presets for grouping config options #652

/var/run issue when running on Ubuntu host

I was trying to deploy kubevirt to my existing k8s cluster running with Ubuntu 16.04 workers and discovered that virt-handler could not talk to libvirtd. The problem was that the socket was not available on the host.

This is because /var/run is a symlink to /run (whereas on Fedora it is a symlink to ../run); this means that the socket does not get into the host OS.

Stabilize functional tests on Jenkins

  • Fix #27, to avoid unreachable virt-api scenarios
  • Work around kubernetes/kubernetes#31123
  • Improve kubernetes health checking before starting the tests
  • Collect events for later investigation, #222
  • Collect logs for later investigation, #222
  • Stabilize test with definition "Should update the Status.MigrationNodeName after the migration target pod was started"
  • If the libvirt connection breaks, re-register event callbacks, so as not to lose events

Failed to set up a development environment

During vagrant up, the following errors occurred.

==> master: Package ed1cdb328f2ee2cd2c9a200f3a3ae549801cd51216ef82e37d2b92acdbd72cf5-kubectl-1.5.4-0.x86_64.rpm is not signed
==> master: Failed to execute operation: No such file or directory
==> master: setup_kubernetes_master.sh: line 11: kubeadm: command not found
==> master: setup_kubernetes_master.sh: line 15: kubectl: command not found


==> master: Waiting for Kubernetes cluster to become functional...
==> master: setup_kubernetes_master.sh: line 19: kubectl: command not found
==> master: Waiting for Kubernetes cluster to become functional...
==> master: setup_kubernetes_master.sh: line 19: kubectl: command not found
==> master: Waiting for Kubernetes cluster to become functional...
==> master: setup_kubernetes_master.sh: line 19: kubectl: command not found
==> master: Waiting for Kubernetes cluster to become functional...
==> master: setup_kubernetes_master.sh: line 19: kubectl: command not found
==> master: Waiting for Kubernetes cluster to become functional...
==> master: setup_kubernetes_master.sh: line 19: kubectl: command not found

libvirt.NewVirConnectionWithAuth segfaults if user/pass are ""

@rmohr libvirt.NewVirConnectionWithAuth does not handle empty user/password gracefully.

I was thinking about

 func NewConnection(uri string, user string, pass string) (Connection, error) {
-       virConn, err := libvirt.NewVirConnectionWithAuth(uri, user, pass)
+       // Declare both up front so the branches assign instead of shadowing with ':='.
+       var virConn libvirt.VirConnection
+       var err error
+       if user == "" {
+               virConn, err = libvirt.NewVirConnection(uri)
+       } else {
+               virConn, err = libvirt.NewVirConnectionWithAuth(uri, user, pass)
+       }
        if err != nil {
                return nil, err
        }

thoughts?

Kubeadm init blocks waiting for "control plane to become ready".

Hi, all
From the quick start guide, I gave it a try, but after executing "vagrant up" it got stuck in the script "cluster/vagrant/setup_kubernetes_master.sh". It was executing "kubeadm init --api-advertise-addresses=$ADVERTISED_MASTER_IP --pod-network-cidr=10.244.0.0/16 --token abcdef.1234567890123456 --use-kubernetes-version v1.4.5" and blocked waiting for the "control plane to become ready".

If I get rid of the parameter "--use-kubernetes-version v1.4.5", I can finish the guide successfully.
Maybe kubeadm uses the default Kubernetes version v1.5.1; does the Kubernetes version affect the installation?

Thanks

Add serial and console schemas

The VM spec needs "serials" and "consoles" sections, as below, to set up a serial console in a guest virtual machine, but schema.go does not have "serials" and "consoles" schemas.

Without a serial console, a "cannot find character device" error occurs when attempting to connect to a guest virtual machine.

apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
spec:
  domain:
    devices:
      graphics:
      - autoPort: "yes"
        type: vnc
        listen:
          type: address
          address: 127.0.0.1
      interfaces:
      - source:
          network: default
        type: network
      video:
      - Model:
          heads: 1
          type: cirrus
          vram: 16384
      disks:
      - type: file
        shapshot: internal
        device: disk
        driver:
          name: qemu
          type: qcow2
        diskSource:
          file: /var/lib/libvirt/images/cirros-0.3.4-x86_64-disk.img
        diskTarget:
          dev: hda
          bus: ide
      serials:
      - type: pty
        target:
          port: 0
      consoles:
      - type: pty
        target:
          type: serial
          port: 0
    memory:
      unit: KiB
      value: 524288
    name: testvm
    os:
      type:
        os: hvm
    type: qemu
  nodeSelector:
    kubernetes.io/hostname: master
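
A possible shape for the missing schema.go additions, with field names guessed from the spec above rather than taken from the real schema:

package v1

// Sketch: possible schema additions for serial and console devices.
type Serial struct {
        Type   string        `json:"type"`
        Target *SerialTarget `json:"target,omitempty"`
}

type SerialTarget struct {
        Port *uint `json:"port,omitempty"`
}

type Console struct {
        Type   string         `json:"type"`
        Target *ConsoleTarget `json:"target,omitempty"`
}

type ConsoleTarget struct {
        Type *string `json:"type,omitempty"`
        Port *uint   `json:"port,omitempty"`
}

The devices section would then need Serials []Serial and Consoles []Console fields so the spec above can be deserialized.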

Update VM status when a VM dies or is shutdown

virt-handler gets an event from libvirt whenever a Domain is shutdown or killed in domain.go.

If a Domain is shutdown, the VM should enter the phase Succeeded. If a VM crashed or was killed, it should enter the phase Failed. The field reflecting this is VMSpec.Status.Phase. In the watch loop we need to check for the reason of the shutdown and update the Phase.

  • Update the Phase on the VM via the restclient according to the shutdown reason.
  • Write functional tests in tests/vm_lifecycle.go
  • If a Domain is stopped or crashes, virt-handler should not try to restart it
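
A small sketch of the reason-to-phase mapping described above; the reason type is a simplified stand-in for the libvirt lifecycle event details:

package virthandler

// ShutdownReason classifies why a domain went away.
type ShutdownReason int

const (
        ReasonShutdown ShutdownReason = iota // clean guest shutdown
        ReasonCrashed                        // qemu crashed
        ReasonKilled                         // domain was destroyed
)

// phaseForShutdown returns the phase the VM status should move to.
func phaseForShutdown(reason ShutdownReason) string {
        switch reason {
        case ReasonShutdown:
                return "Succeeded"
        case ReasonCrashed, ReasonKilled:
                return "Failed"
        default:
                return "Failed"
        }
}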

Launching VM, but VM keeps Pending status

When launching a VM with "kubectl create -f vm.json", the VM keeps the Pending status.

[root@master ~]# kubectl get vms
NAME      KIND
testvm    VM.v1alpha1.kubevirt.io

[root@master ~]# virsh list
 Id    Name                           State
--------------------------------------------------

[root@master ~]# kubectl get events
LASTSEEN   FIRSTSEEN   COUNT     NAME      KIND      SUBOBJECT   TYPE      REASON       SOURCE                  MESSAGE
3m         1h          7         testvm    VM                    Warning   SyncFailed   {virt-handler master}   virError(Code=27, Domain=20, Message='XML error: missing network source protocol type')

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY     STATUS    RESTARTS   AGE
default       frontend-j1aaj                      1/1       Running   2          5d
default       frontend-ll60x                      1/1       Running   2          5d
default       frontend-xgjh9                      1/1       Running   2          5d
default       haproxy                             1/1       Running   7          5d
default       kubevirt-cockpit-3069757025-y1mga   1/1       Running   0          5h
default       redis-master-zljqa                  1/1       Running   2          5d
default       redis-slave-d8w0i                   1/1       Running   2          5d
default       redis-slave-y355j                   1/1       Running   2          5d
default       spice-proxy                         1/1       Running   2          5d
default       virt-api                            1/1       Running   2          5d
default       virt-controller                     1/1       Running   2          5d
default       virt-handler-ehv1v                  1/1       Running   2          5d
default       wordpress-1618093523-de8zj          1/1       Running   4          34d
default       wordpress-mysql-2379610080-5p0e6    1/1       Running   4          34d
kube-system   dummy-2088944543-lxi8s              1/1       Running   4          34d
kube-system   etcd-master                         1/1       Running   5          34d
kube-system   kube-apiserver-master               1/1       Running   6          34d
kube-system   kube-controller-manager-master      1/1       Running   6          34d
kube-system   kube-discovery-3670776889-793t6     1/1       Running   5          34d
kube-system   kube-dns-2924299975-8ic0c           4/4       Running   16         34d
kube-system   kube-proxy-ufd6z                    1/1       Running   4          34d
kube-system   kube-scheduler-master               1/1       Running   6          34d
kube-system   weave-net-tvs1l                     2/2       Running   9          34d




[root@master ~]# kubectl get vms -o json
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "kubevirt.io/v1alpha1",
            "kind": "VM",
            "metadata": {
                "creationTimestamp": "2017-03-22T07:54:11Z",
                "labels": {
                    "kubevirt.io/nodeName": "master"
                },
                "name": "testvm",
                "namespace": "default",
                "resourceVersion": "2216964",
                "selfLink": "/apis/kubevirt.io/v1alpha1/namespaces/default/vms/testvm",
                "uid": "bc57dcd3-0ed4-11e7-909e-525400b58bb7"
            },
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [
                            {
                                "device": "disk",
                                "diskSource": {
                                    "file": ""
                                },
                                "diskTarget": {
                                    "bus": "",
                                    "dev": ""
                                },
                                "driver": {
                                    "cache": "none",
                                    "name": "qemu",
                                    "type": "raw"
                                },
                                "shapshot": "",
                                "type": "network"
                            }
                        ],
                        "emulator": "/usr/local/bin/qemu-x86_64",
                        "graphics": [
                            {
                                "autoPort": "yes",
                                "defaultMode": "any",
                                "listen": {
                                    "address": "0.0.0.0",
                                    "type": "address"
                                },
                                "port": 4000,
                                "type": "spice"
                            }
                        ],
                        "interfaces": [
                            {
                                "source": {
                                    "network": "default"
                                },
                                "type": "network"
                            }
                        ],
                        "video": [
                            {
                                "Model": {
                                    "VGAMem": 16384,
                                    "heads": 1,
                                    "ram": 65536,
                                    "type": "qxl",
                                    "vram": 8192
                                }
                            }
                        ]
                    },
                    "memory": {
                        "unit": "MB",
                        "value": 64
                    },
                    "name": "testvm",
                    "os": {
                        "bootOrder": null,
                        "type": {
                            "os": "hvm"
                        }
                    },
                    "type": "qemu",
                    "uuid": "bc57dcd3-0ed4-11e7-909e-525400b58bb7"
                }
            },
            "status": {
                "nodeName": "master",
                "phase": "Pending"
            }
        }
    ],
    "kind": "List",
    "metadata": {},
    "resourceVersion": "",
    "selfLink": ""
}

Decide on import order

Currently we just have alphabetical import order in our go code.

Let's decide if this is OK, or if we want more grouping there. A very common order is:

  1. go system packages
  2. other vendor dependencies
  3. Project packages

If you have an empty line between these blocks, go fmt will only sort within each block.

Since we are in the kubernetes world, I could think of four sections:

  1. go system packages
  2. other vendor dependencies
  3. Kubernetes packages
  4. Project packages

Just keeping one sorted list is fine for me too.
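
For illustration, the four-section layout would look like this; the specific package paths are just examples, and the blank imports only keep the snippet compiling:

package example

import (
        // 1. Go standard library
        _ "fmt"
        _ "time"

        // 2. other vendor dependencies
        _ "github.com/onsi/ginkgo"

        // 3. Kubernetes packages
        _ "k8s.io/client-go/kubernetes"

        // 4. project packages
        _ "kubevirt.io/kubevirt/pkg/api/v1"
)

Because each group is separated by a blank line, go fmt sorts within the groups but does not merge them.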

Allow setting attributes like `snapshot` and `cache` when using a PersistentVolumeClaim

At the moment virt-handler is completely overriding the PersistentVolumeClaim section.

So a transformation of

      disks:
      - type: network
        snapshot: external
        device: disk
        driver:
          name: qemu
          type: raw
          cache: none
        source:
          host:
            name: iscsi-demo-target
            port: "3260"
          protocol: iscsi
          name: iqn.2017-01.io.kubevirt:sn.42/2
        target:
          dev: vda

to a volume claim is currently not possible.

runV support

Hi,

I'm one of the maintainers of Hyper container and runV: github.com/hyperhq/runv.

Does KubeVirt plan to support full-blown VMs only, or also virtualized containers like runV in the future?

Thanks.
