Can you provide more detail? I'm having trouble parsing "static dhcp" ;). Maybe you mean our static IPs? We don't really care what the IPs are, but it seems like users won't care either. Is your issue just "we should drop those at some point" and not "this is breaking $WORKFLOW"?
Also, for our cloud systems we push up masters and workers in separate subnets, which we don't bother with for libvirt. I don't see that changing, but we could obviously revisit that if someone raises a use-case where it would matter.
> I'm having trouble parsing "static dhcp"
I mean exactly that. DHCP on libvirt is configured to provide static-only IP assignments, not from a pool.
Relevant config:
```
[root@mguginop50 ~]# cat /var/lib/libvirt/dnsmasq/tectonic.conf
##WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST. Changes to this configuration should be made using:
##    virsh net-edit tectonic
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
local=/tt.testing/
domain=tt.testing
expand-hosts
pid-file=/var/run/libvirt/network/tectonic.pid
except-interface=lo
bind-dynamic
interface=tt0
dhcp-range=192.168.122.1,static
dhcp-no-override
dhcp-authoritative
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/tectonic.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/tectonic.addnhosts
[root@mguginop50 ~]# virsh net-dumpxml tectonic
<network connections='4'>
  <name>tectonic</name>
  <uuid>34300328-11b3-4dbb-ab76-dcd1e52886d4</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='tt0' stp='on' delay='0'/>
  <mac address='52:54:00:bb:79:58'/>
  <domain name='tt.testing' localOnly='yes'/>
  <dns>
    <host ip='192.168.122.10'>
      <hostname>mgugino-test-api</hostname>
    </host>
    <host ip='192.168.122.11'>
      <hostname>mgugino-test-api</hostname>
      <hostname>mgugino-test-etcd-0</hostname>
    </host>
    <host ip='192.168.122.50'>
      <hostname>mgugino-test</hostname>
    </host>
  </dns>
  <ip family='ipv4' address='192.168.122.1' prefix='24'>
    <dhcp>
      <host mac='5e:f3:a7:15:ce:5d' name='mgugino-test-master-0' ip='192.168.122.11'/>
      <host mac='5a:c0:c4:e9:62:ba' name='mgugino-test-bootstrap' ip='192.168.122.10'/>
      <host mac='da:68:1d:0d:86:d1' name='worker-sdb9x' ip='192.168.122.51'/>
    </dhcp>
  </ip>
</network>
```
Compare those to libvirt's default network configuration and you'll see what I mean. There's no need to statically code DHCP entries; just use an IP range.
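For comparison, libvirt's stock `default` network uses a dynamic pool rather than static host entries. A typical definition looks roughly like this (the exact addresses are illustrative):

```xml
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <!-- dynamic pool: dnsmasq hands out leases from this range,
           so no per-host <host mac='...'/> entries are needed -->
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```

With a `<range>` present, libvirt generates `dhcp-range=192.168.122.2,192.168.122.254` in the dnsmasq config instead of the static-only `dhcp-range=192.168.122.1,static` shown above.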
My use case is getting the old installer to work with the new one via scale-up (part of my 4.0 deliverable). While I can work around this issue, it does add extra steps, and it's just weird that it would be this way by default.
Maybe related: #708
I believe this is an artifact of the way the old (Tectonic) installer worked. Now that we use DNS to resolve the hosts, there shouldn't be any need for static assignments. We could try removing these, but it doesn't feel like a high priority at the moment.
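If someone does pick this up, the static entries could in principle be dropped at runtime with `virsh net-update`. A sketch, assuming the network is named `tectonic` and the MAC matches an existing entry (untested against a live cluster):

```shell
# Remove one static DHCP host entry from both the live and persistent config.
virsh net-update tectonic delete ip-dhcp-host \
  "<host mac='5e:f3:a7:15:ce:5d'/>" --live --config

# Then add a dynamic pool so new leases come from a range instead.
virsh net-update tectonic add ip-dhcp-range \
  "<range start='192.168.122.2' end='192.168.122.254'/>" --live --config
```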
/priority backlog
/assign
@zeenix is this still something you are looking at?
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
> Rotten issues close after 30d of inactivity.
> Reopen the issue by commenting /reopen.
> Mark the issue as fresh by commenting /remove-lifecycle rotten.
> Exclude this issue from closing again by commenting /lifecycle frozen.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.