openshift / oc

The OpenShift Command Line, part of OKD

Home Page: https://www.openshift.org

License: Apache License 2.0

Go 99.46% Makefile 0.13% Shell 0.38% Dockerfile 0.03%

oc's Introduction

OpenShift Client - oc

The OpenShift Client CLI (oc) lets you create applications and manage OpenShift resources. It is built on top of kubectl, so it provides kubectl's full capabilities for connecting to any Kubernetes-compliant cluster, and on top of that adds commands that simplify interaction with an OpenShift cluster.

Contributing

All contributions are welcome - oc uses the Apache 2 license and does not require any contributor agreement to submit patches. Please open issues for any bugs or problems you encounter, ask questions on the OpenShift IRC channel (#openshift-dev on Freenode), or get involved in the kubectl and Kubernetes projects at the container runtime layer.

Building

To build oc, run make oc. At any time you can run make help to get a list of all supported make targets.

In order to build oc, you will need the GSSAPI sources. On a Fedora/CentOS/RHEL workstation, install them with:

dnf install krb5-devel

Also:

dnf install gpgme-devel
dnf install libassuan-devel

For macOS you'll need to install a few Homebrew packages before building locally. Install them with:

brew install heimdal
brew install gpgme

Testing

All PRs have to pass a series of automated tests, starting with Go tools such as go fmt and go vet, through unit tests, up to e2e tests against a real cluster.

Locally you can invoke the initial verification and the unit tests through make verify and make test, respectively.

Dependencies

Dependencies are managed through Go Modules. When updating any dependency, the suggested workflow is:

  1. go mod tidy
  2. go mod vendor
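For example, bumping a single dependency and re-vendoring might look roughly like this (the module path and version below are illustrative only, not a required update):

  go get github.com/spf13/cobra@v1.1.1
  go mod tidy
  go mod vendor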

Security Response

If you've found a security issue that you'd like to disclose confidentially, please contact Red Hat's Product Security team. Details at https://access.redhat.com/security/team/contact

License

oc is licensed under the Apache License, Version 2.0.

oc's People

Contributors

0xmichalis, ardaguclu, atiratree, bparees, coreydaley, csrwng, damemi, deads2k, ecordell, enj, fabianofranz, gabemontero, ingvagabund, ironcladlou, juanvallejo, kargakis, liggitt, mfojtik, nak3, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, petr-muller, rhcarvalho, sallyom, smarterclayton, soltysh, stevekuznetsov, stlaz, wking

oc's Issues

oc client registry.redhat.io/openshift4/ose-cli:latest is slow due to $HOME not set

When running oc commands in the ose-cli image, it makes a difference whether the container is run as a privileged user or not. For a privileged user HOME is set to /root, whereas for an unprivileged user HOME is / and is not writable.
This induces many errors that can be seen by bumping the log level, for instance:
I0623 08:08:54.218291 7 cached_discovery.go:87] failed to write cache to /.kube/cache/discovery/172.30.0.1_443/machine.openshift.io/v1beta1/serverresources.json due to mkdir /.kube: permission denied
This is happening all the time when the container is started with the default SA.

As cluster admin:
$ oc run -it --image=registry.redhat.io/openshift4/ose-cli:latest bash
If you don't see a command prompt, try pressing enter.
[root@bash /]# id
uid=0(root) gid=0(root) groups=0(root)
[root@bash /]# echo $HOME
/root

As normal user
$ oc run -it --image=registry.redhat.io/openshift4/ose-cli:latest bash --as=frederic
If you don't see a command prompt, try pressing enter.
bash-4.2$ id
uid=1000600000(1000600000) gid=0(root) groups=0(root),1000600000
bash-4.2$ echo $HOME
/

I am proposing to add the following to oc/images/cli/Dockerfile.rhel:
ENV HOME /tmp/home
RUN mkdir $HOME && chmod 777 $HOME

This would solve the issue. If you find that reasonable I can open a PR.
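Until the image is changed, a possible workaround sketch (assuming your oc version supports the --env flag on oc run, as kubectl run does) is to point HOME at a writable directory when starting the pod:

oc run -it --image=registry.redhat.io/openshift4/ose-cli:latest --env=HOME=/tmp bash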

oc login error: "net/http: TLS handshake timeout"

Hello,

I've got an oc login error, "net/http: TLS handshake timeout", on my Mac running Mojave, even though connecting with the same user via the UI, or using oc on a Linux system, works fine.

command used and output:

 oc login -u kubeadmin -p <mypwd> api.myocp.mydomain.local:6443 --insecure-skip-tls-verify=true
error: net/http: TLS handshake timeout

With logging enabled:

oc login -u kubeadmin -p <mypwd> api.myocp.mydomain.local:6443 --insecure-skip-tls-verify=true --loglevel=9
I0717 19:04:01.721157    8593 loader.go:375] Config loaded from file:  /Users/fb/.kube/config
I0717 19:04:01.722654    8593 round_trippers.go:423] curl -k -v -XHEAD  'https://myocp.mydomain.local:6443/'
I0717 19:04:11.763330    8593 round_trippers.go:443] HEAD https://api.myocp.mydomain.local:6443/  in 10040 milliseconds
I0717 19:04:11.763367    8593 round_trippers.go:449] Response Headers:
F0717 19:04:11.763447    8593 helpers.go:115] error: net/http: TLS handshake timeout

Same problem without specifying the option --insecure-skip-tls-verify=true

oc version:
Client Version: 4.5.2 on my Mac, macOS Mojave (10.14.6)
OpenShift Server Version: 4.2.21 (same problem with another server on 4.3)

The funny thing is that if I run it with sudo it works. Here is the beginning of the log at the same loglevel:

sudo oc login -u kubeadmin -p <mypwd> api.myocp.mydomain.local:6443 --insecure-skip-tls-verify=true --loglevel=9
Password:
I0717 19:09:31.987104    8867 loader.go:375] Config loaded from file:  /Users/fb/.kube/config
I0717 19:09:31.988190    8867 round_trippers.go:423] curl -k -v -XHEAD  'https://api.myocp.mydomain.local:6443/'
I0717 19:09:32.226527    8867 round_trippers.go:443] HEAD https://api.myocp.mydomain.local:6443/  in 238 milliseconds
I0717 19:09:32.226548    8867 round_trippers.go:449] Response Headers:
I0717 19:09:32.226598    8867 round_trippers.go:423] curl -k -v -XHEAD  'https://api.myocp.mydomain.local:6443/'
I0717 19:09:32.354212    8867 round_trippers.go:443] HEAD https://api.myocp.mydomain:6443/ 403 Forbidden in 127 milliseconds
I0717 19:09:32.354232    8867 round_trippers.go:449] Response Headers:
I0717 19:09:32.354239    8867 round_trippers.go:452]     Date: Fri, 17 Jul 2020 17:09:31 GMT
I0717 19:09:32.354242    8867 round_trippers.go:452]     Audit-Id: d887cfda-c53c-4279-89d4-a49f0c072523
I0717 19:09:32.354247    8867 round_trippers.go:452]     Cache-Control: no-cache, private
I0717 19:09:32.354253    8867 round_trippers.go:452]     Content-Type: application/json
I0717 19:09:32.354256    8867 round_trippers.go:452]     X-Content-Type-Options: nosniff
I0717 19:09:32.354262    8867 round_trippers.go:452]     Content-Length: 186
I0717 19:09:32.354301    8867 round_trippers.go:423] curl -k -v -XGET  -H "X-Csrf-Token: 1" 'https://api.myocp.mydomain.local:6443/.well-known/oauth-authorization-server'
I0717 19:09:32.385115    8867 round_trippers.go:443] GET https://api.myocp.mydomain.local:6443/.well-known/oauth-authorization-server 200 OK in 30 milliseconds
I0717 19:09:32.385171    8867 round_trippers.go:449] Response Headers:
...

I've tried removing the .kube directory but I'm getting the same results: it does not work with my user and works with sudo.

 oc login -u kubeadmin -p <mypwd> api.myocp.mydomain.local:6443 --insecure-skip-tls-verify=true --loglevel=9
I0717 19:12:47.864785    8889 round_trippers.go:423] curl -k -v -XHEAD  'https://api.myocp.mydomain.local:6443/'
I0717 19:12:57.938432    8889 round_trippers.go:443] HEAD https://api.myocp.mydomain.local:6443/  in 10073 milliseconds
I0717 19:12:57.938458    8889 round_trippers.go:449] Response Headers:
F0717 19:12:57.938851    8889 helpers.go:115] error: net/http: TLS handshake timeout

After googling, I've seen a bunch of folks having this problem, but it looks like it was never really solved.
Any ideas or hints? I'm actually ready to spend some time on this to crack it.

"oc config use-context" should complete contexts

kubectl config use-context can complete contexts. oc config use-context cannot. Being able to complete contexts would be quite helpful in its own right, but especially so since oc login clobbers kubeconfig contexts while simultaneously oc breaks kubectl's ability to complete contexts.

One way to accomplish this would be to piggy-back on kubectl's context completion by adding:

        oc_config_use-context | oc_config_rename-context)
            __kubectl_config_get_contexts
            return
            ;;

to __custom_func, and optionally

    flags_with_completion+=("--context")
    flags_completion+=("__kubectl_config_get_contexts")

after every

    flags+=("--context=")

The downside to that approach is the dependency on kubectl completion, without which you get something like

$ oc config use-context <tab>__kubectl_config_get_contexts: command not found

oc create clusterrolebinding already delegates to __kubectl_get_resource_clusterrule, so there is precedent; however, I'd personally consider contexts to be a far more prominent concept than cluster role bindings.

In a similar vein, clusters and users could be completed with __kubectl_config_get_clusters and __kubectl_config_get_users.

An even simpler way to resolve this is to resolve #371 and declare these completions outside of oc's scope.

Validation prevents mirroring release to ipv6 registry

When attempting to use oc adm release mirror in an environment where the local registry is only listening on ipv6, it seems there is some validation which prevents using that address in the --to location:

$ oc adm release mirror --insecure=true -a combined-pullsecret--O2zxxDYRKZ --from registry.svc.ci.openshift.org/ipv6/release:4.3.0-0.nightly-2019-12-20-152137-ipv6.1 --to-release-image fd2e:6f44:5dd8:c956:0:0:0:1:5000/localimages/local-release-image:4.3.0-0.nightly-2019-12-20-152137-ipv6.1 --to [fd2e:6f44:5dd8:c956:0:0:0:1]:5000/localimages/local-release-image
error: --to must be a valid image repository: "[fd2e:6f44:5dd8:c956:0:0:0:1]:5000/localimages/local-release-image" is not a valid image reference: invalid reference format

A similar issue exists if you use the IP in the --to-release-image pullspec - replacing the IP with a hostname hard-coded in /etc/hosts appears to work around the issue.

Can this validation be relaxed to allow IPv6 addresses?
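For reference, a minimal sketch of the /etc/hosts workaround mentioned above (the hostname is made up for illustration; the flags are the ones from the failing command):

# /etc/hosts on the mirroring host
fd2e:6f44:5dd8:c956::1   local-registry

# then mirror using the hostname instead of the literal IPv6 address
oc adm release mirror --insecure=true -a combined-pullsecret--O2zxxDYRKZ \
  --from registry.svc.ci.openshift.org/ipv6/release:4.3.0-0.nightly-2019-12-20-152137-ipv6.1 \
  --to-release-image local-registry:5000/localimages/local-release-image:4.3.0-0.nightly-2019-12-20-152137-ipv6.1 \
  --to local-registry:5000/localimages/local-release-image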

Catalog build broken because it uses an unstable image as the parent image

Context: I'm using OCP 4.3 and trying to get disconnected operator installation working as described in the documentation, but it was failing; the command oc get packagemanifest would return nothing.
I found out that the generated catalog container is broken: its FROM references an unstable image.

type AppregistryBuildOption func(*AppregistryBuildOptions)

func DefaultAppregistryBuildOptions() *AppregistryBuildOptions {
    return &AppregistryBuildOptions{
        AppRegistryEndpoint: "https://quay.io/cnr",
        From:                "quay.io/operator-framework/operator-registry-server:latest",
    }
}

$ oc version
Client Version: 4.4.0-rc.6
Server Version: 4.3.1
Kubernetes Version: v1.16.2

Using the default image operator-registry-server (latest)

$ # building with the default parameter for the FROM
$ oc adm catalog build --appregistry-org=redhat-operators --to=registry.corp.net/bmillemathias/olm/redhat-operators:v202004081934  -a pull.secret.json 
...

# Testing the container, note there is a warning
$ podman run  -p 50051:50051 -ti registry.corp.net/bmillemathias/olm/redhat-operators:v202004081934
Trying to pull registry.corp.net/bmillemathias/olm/redhat-operators:v202004081939...
Getting image source signatures
Copying blob 09dbbf8834d2 skipped: already exists
Copying blob fcd63ccfdd0c skipped: already exists
Copying blob bcdcb44ae450 skipped: already exists
Copying blob 9a46ec39f532 skipped: already exists
Copying blob 4d3702f2ede8 skipped: already exists
Copying blob 82b5c88c5e24 done
Copying config b903f1c8d0 done
Writing manifest to image destination
Storing signatures
WARN[0000] unable to set termination log path            error="open /dev/termination-log: permission denied"
WARN[0000] couldn't migrate db                           database=/bundles.db error="attempt to write a readonly database" port=50051
INFO[0000] serving registry                              database=/bundles.db port=50051

# testing the API to list packages
$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages | head
{
  "name": "3scale-operator"
}

#  testing the API to get the bundle list
$ grpcurl -plaintext -d '{"pkgName":"kiali-ossm","channelName":"stable"}' localhost:50051 api.Registry/GetBundleForChannel
ERROR:
  Code: Unknown
  Message: no such column: api_provider.operatorbundle_name

Using the image operator-registry-server tagged v1.6.1

$ # building the catalog but targeting release v1.6.1 of the image quay.io/operator-framework/operator-registry-server
$ oc adm catalog build --appregistry-org=redhat-operators --to=registry.corp.net/bmillemathias/olm/redhat-operators:v202004081934  -a pull.secret.json --from=quay.io/operator-framework/operator-registry-server:v1.6.1

$ podman run registry.corp.net/bmillemathias/olm/redhat-operators:v202004081934
Trying to pull registry.corp.net/bmillemathias/olm/redhat-operators:v202004081934...
Getting image source signatures
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob cc199fa9fde9 done
Copying blob bfb26f0443bb done
Writing manifest to image destination
Storing signatures
1b9e232c7e74f9384670c99153cdee42fa74462ad269eaeead373d3287a0ea6e

# testing the container, this time no error about migration.
$ podman run -p 50051:50051 -ti registry.corp.net/bmillemathias/olm/redhat-operators:v202004081934
time="2020-04-08T17:36:07Z" level=info msg="serving registry" database=/bundles.db port=50051

# 
# testing the API to list packages
$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages | head
{
  "name": "3scale-operator"
}

#  testing the API to get the bundle list
$ [bmillemathias@fedoravm ~]$ grpcurl -plaintext -d '{"pkgName":"cluster-logging","channelName":"4.3"}' localhost:50051 api.Registry/GetBundleForChannel
{
  "csvName": "clusterlogging.4.3.10-202004010435",
  "packageName": "cluster-logging",
  "channelName": "4.3",
  "csvJson": "{\"apiVersion\":\"operators.coreos.com/v1alpha1\
 ...

Digging into the problem, I've compared the SQLite file in each generated image.
Built with the default image:

sqlite> .schema api_provider
CREATE TABLE api_provider (
                        group_name TEXT,
                        version TEXT,
                        kind TEXT,
                        channel_entry_id INTEGER,
                        PRIMARY KEY(group_name, version, kind, channel_entry_id),
                        FOREIGN KEY(channel_entry_id) REFERENCES channel_entry(entry_id) ON DELETE CASCADE,
                        FOREIGN KEY(group_name, version, kind) REFERENCES api(group_name, version, kind)
                );

Built with the image tagged v1.6.1 (migration happens at start):

sqlite> .schema api_provider
CREATE TABLE "api_provider" (
                        group_name TEXT,
                        version TEXT,
                        kind TEXT,
                        operatorbundle_name TEXT,
                        operatorbundle_version TEXT,
                        operatorbundle_path TEXT,
                        FOREIGN KEY(operatorbundle_name, operatorbundle_version, operatorbundle_path) REFERENCES operatorbundle(name, version, bundlepath) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
                        FOREIGN KEY(group_name, version, kind) REFERENCES api(group_name, version, kind) ON DELETE CASCADE
                );
CREATE TRIGGER api_provider_cleanup
                AFTER DELETE ON api_provider
                WHEN NOT EXISTS (SELECT 1 FROM api_provider JOIN api_requirer WHERE
                    (api_provider.group_name = OLD.group_name AND api_provider.version = OLD.version AND api_provider.kind = OLD.kind) OR
                        (api_requirer.group_name = OLD.group_name AND api_requirer.version = OLD.version AND api_requirer.kind = OLD.kind))
                BEGIN
                        DELETE FROM api WHERE group_name = OLD.group_name AND version = OLD.version AND kind = OLD.kind;
                END;

I don't know which component is the root cause of the problem, but targeting a latest image is problematic (at least for a released version) because with the same version you can't reproduce the same build over time, as latest will differ.

oc login should avoid clobbering kubeconfig contexts

Default behavior of oc login https://openshift.example.com --username=thelonelyghost --password='hunter2' is to do the following:

  • Set up a new (or, if it exists, update the) cluster in the kubeconfig by the name of openshift-example-com:443
  • Set up a new (or, if it exists, update the) user in the kubeconfig by the name of thelonelyghost/openshift-example-com:443
  • Set up a new (or, if it exists, update the) context in the kubeconfig by the name of default/openshift-example-com:443/thelonelyghost
  • Set the current context to default/openshift-example-com:443/thelonelyghost

This is fine for initial setup, but periodically I'm forced to log in again to refresh the bearer token stored in the user section of the kubeconfig. When I do that, I must run the same oc login command again. Here's the problem:

If I re-run that command, it clobbers any changes I've made to the human-facing names of the cluster, the context, or the user, even though those names have zero functional effect on the file. If I renamed the default/openshift-example-com:443/thelonelyghost context to something more user-friendly, such as company-prod, logging in again adds an additional default/openshift-example-com:443/thelonelyghost context back again. It does not update credentials in place; it adds them if they're not named exactly what is expected.

I've written a tool in Python named oc-replacement to re-authenticate in place as I desire, but it would be much more desirable for this to be default behavior, or even an option, in the oc command line tool.
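For context, a minimal sketch of the renaming described above, using the context names from this example and assuming oc config exposes the same subcommands as kubectl config:

# give the generated context a friendlier name
oc config rename-context default/openshift-example-com:443/thelonelyghost company-prod
oc config use-context company-prod

# a later 'oc login' then re-creates the original long-named context
# instead of refreshing the credentials referenced by company-prod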

`oc adm release new` fails to get right auth from `docker.json`

Hey, so I have been recently experimenting with oc adm release new to create a custom OKD release. I am using a private docker registry tied to a Gitlab project protected with an access token.
The commands I use to create a custom release:

$ oc registry login --registry <my registry> --auth-basic=user:pass
$ oc adm release new \
--from-release=quay.io/openshift/okd:4.4.0-0.okd-2020-04-21-163702-beta4 \
--to-image=<my registry>/project/custom-okd-release:latest \
--name="4.4.0-okd-custom-release-$(date -u +'%Y-%m-%d-%H-%M-%S')"

This results in an error like the following:

denied: requested access to the resource is denied
unauthorized: authentication required

I debugged it a bit and it turned out that docker.json was missing a specific key with auth-related data (even though there was an entry for .

The problem is the following (as far as I can tell based on reading the oc code):

  1. oc pings the registry and checks whether a WWW-Authenticate header is present in the response.
  2. If there is one, it uses it as the name of the key to retrieve the auth-related data from docker.json.
  3. Of course, this might not be the same URL as the one used with oc registry login.

So provided that:

  1. I do oc registry login test.registry
  2. And WWW-Authenticate is, for example, referring to test2.registry/auth/jwt

I will end up with an error and docker.json similar to the following:

{
  "test.registry": {"auth": {}}
}

This seems to be incorrect. Shouldn't docker.json be updated with the value from WWW-Authenticate?

Creating a yaml file and adding content to it on the CLI

I'm having issues performing the following:

[screenshot]

I've tried oc create -f and many other commands with no success.
I've used the OpenShift web console in the past to create and upload content to a YAML file, but never from the command line. Any help would be greatly appreciated.
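A minimal sketch of the general pattern (writing the definition to a file and passing it to oc create -f); the ConfigMap here is only an illustrative stand-in for whatever the screenshot showed:

cat > my-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  greeting: hello
EOF

oc create -f my-config.yaml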

"panic: runtime error: slice bounds out of range" while "oc adm release extract"

Describe the bug
Just running make within https://github.com/openshift-metal3/dev-scripts/ with my usual hardware/environment

To Reproduce
From dev-scripts b37f69396 commit, RHEL8 helper node

Expected/observed behavior

+ oc adm release extract --registry-config installer--q65IkgS0h2/pullsecret --command=openshift-install --to installer--q65IkgS0h2 registry.svc.ci.openshift.org/kni/release:4.2.0-0.ci-2019-07-31-123929-kni.0
panic: runtime error: slice bounds out of range

goroutine 163 [running]:
bufio.(*Reader).fill(0xc000f35c80)
        /opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/bufio/bufio.go:89 +0x211
bufio.(*Reader).WriteTo(0xc000f35c80, 0x2dfa380, 0xc000010838, 0x7f60a71fde08, 0xc000f35c80, 0x1)
        /opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/bufio/bufio.go:511 +0x106
io.copyBuffer(0x2dfa380, 0xc000010838, 0x2df4f60, 0xc000f35c80, 0x0, 0x0, 0x0, 0xc0013e5260, 0xc0000edb00, 0x0)
        /opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/io/io.go:384 +0x34e
io.Copy(...)
        /opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/io/io.go:364
os/exec.(*Cmd).stdin.func1(0xc000ef4900, 0x0)
        /opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/os/exec/exec.go:243 +0x67
os/exec.(*Cmd).Start.func1(0xc0008f34a0, 0xc0014f7680)
        /opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/os/exec/exec.go:409 +0x27
created by os/exec.(*Cmd).Start
        /opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/os/exec/exec.go:408 +0x58f
make: *** [Makefile:21: ocp_run] Error 2

Additional context
Trying again seems to work and I've not been able to reproduce it.

Reported first here openshift-metal3/dev-scripts#734

oc should not use /etc/resolv.conf on macOS

On OCP 4.3 the oc login command generated from the dashboard "Copy Login Command"

oc login --token=asdfghjk... --server=https://api.xxx.com:6443

fails with:

error: dial tcp: lookup api.xxx.com on 192.168.0.1:53: no such host - verify you have provided the correct host and port and that the server is currently running.

When I substitute the public ip of my cluster for the host name it works.

oc login --token=asdfghjk... --server=https://1.2.3.4:6443

I can successfully ping api.xxx.com, the curl command generated by "Copy Login Command" resolves the hostname, and the curl URL also works in Chrome. I've tried adding the host and public IP to my /etc/hosts file but it still fails.

The problem appears to be that oc is using /etc/resolv.conf for hostname resolution. When I edit /etc/resolv.conf and change:

nameserver 192.168.0.1 to nameserver 8.8.8.8 I get:

error: dial tcp: lookup api.xxx.com on 8.8.8.8:53: no such host...

/etc/resolv.conf contains the following notice:

$ cat /etc/resolv.conf
#
# macOS Notice
#
# This file is not consulted for DNS hostname resolution, address
# resolution, or the DNS query routing mechanism used by most
# processes on this system.
#
# To view the DNS configuration used by this system, use:
#   scutil --dns

Relevant versions:

macOS 10.15.3

$ oc version
Client Version: openshift-clients-4.3.0-201910250623-88-g6a937dfe
Server Version: 4.3.0
Kubernetes Version: v1.16.2
$

RFC: oc debug: detect node has been accessed

This is just an RFC for now as I don't fully know the scope of oc debug - what we would need from an MCO perspective is a way to detect that someone has jumped on a node with oc debug.
Today we can do that with SSH but oc debug effectively defeats our detection (based only on ssh).

I was thinking maybe we can mount a file from the host when the oc debug pod runs and the MCO can check that to detect that the node has been accessed.

Again, the above is based upon my limited knowledge of this code base, but anything in this direction would make the MCO more robust when claiming that a user manually patched the system.

Inconsistency with `oc` command(s)

Version
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.0+b4261e0", GitCommit:"b4261e07ed", GitTreeState:"clean", BuildDate:"2019-10-06T23:21:44Z", GoVersion:"go1.13.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.2", GitCommit:"94e669a", GitTreeState:"clean", BuildDate:"2020-02-03T23:11:39Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Steps To Reproduce
  1. oc new-project foo
  2. oc delete project foo
Expected Result

It would be nice if these either both used the hyphen or both didn't.

@jdob @btannous 🤘

oc expose generates invalid hostname if namespace or service name is too long

I have:

  • service: nodejs-rest-http-crud
  • namespace: test-namespace-2a55e626-8028-45e7-a353-25c8317d1b16 which is valid
$ oc version

Client Version: 4.5.0-rc.6
Server Version: 4.5.0-rc.6
Kubernetes Version: v1.18.3+6025c28

When I try to expose the service as a route, I get:

$ oc expose service nodejs-rest-http-crud -n test-namespace-2a55e626-8028-45e7-a353-25c8317d1b16 

route.route.openshift.io/nodejs-rest-http-crud exposed
$ oc get routes

NAME                    HOST/PORT     PATH   SERVICES                PORT       TERMINATION   WILDCARD
nodejs-rest-http-crud   InvalidHost          nodejs-rest-http-crud   8080-tcp                 None
$ oc get route nodejs-rest-http-crud -o yaml 
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
...
  name: nodejs-rest-http-crud
  namespace: test-namespace-2a55e626-8028-45e7-a353-25c8317d1b16
...
status:
  ingress:
  - conditions:
    - lastTransitionTime: "2020-07-07T14:29:55Z"
      message: 'host name validation errors: spec.host: Invalid value: "nodejs-rest-http-crud-test-namespace-2a55e626-8028-45e7-a353-25c8317d1b16.apps.helm-acc-tests-pm-1.devcluster.openshift.com":
        must be no more than 63 characters'
      reason: InvalidHost
      status: "False"
      type: Admitted
    host: nodejs-rest-http-crud-test-namespace-2a55e626-8028-45e7-a353-25c8317d1b16.apps.helm-acc-tests-pm-1.devcluster.openshift.com
    routerCanonicalHostname: apps.helm-acc-tests-pm-1.devcluster.openshift.com
    routerName: default
    wildcardPolicy: None
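A possible workaround sketch until the generator handles long names: supply a shorter host explicitly (oc expose accepts --hostname when creating routes); the hostname below is illustrative:

oc expose service nodejs-rest-http-crud \
  -n test-namespace-2a55e626-8028-45e7-a353-25c8317d1b16 \
  --hostname=nodejs-crud.apps.helm-acc-tests-pm-1.devcluster.openshift.com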

must-gather receives unexpected EOF

When running must-gather on a plugin image that doesn't output anything for a certain period of time (reproduce with sleep 70), it seems that must-gather receives a response with an empty body and this causes DefaultConsumeRequest to return an ErrUnexpectedEOF

cannot add-cluster-role-to-group in namespace

"local role bindings can reference both cluster and local roles."

but when I do
oc adm policy add-role-to-group admin myadmins -n myproject
then the role admin is bound to the group, not the cluster-role admin
and
oc adm policy add-cluster-role-to-group admin myadmins -n myproject
fails because -n namespace is not allowed

I can do it using the Kubernetes provider in Terraform and the OpenShift Console, but not with oc

https://docs.openshift.com/container-platform/4.4/authentication/using-rbac.html
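A hedged sketch of one way to express the intent with a plain RoleBinding instead of the policy helper (kubectl/oc create rolebinding can reference a cluster role from a namespace-scoped binding); the names are the ones from the example above:

oc create rolebinding myadmins-admin \
  --clusterrole=admin --group=myadmins -n myproject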

[tracking] Replace fsouza/go-dockerclient with docker/docker/client

Placeholder for tracking the issue

The latest changes in openshift/library-go pulled in changes to golang.org/x/sys which are incompatible with the current revision of github.com/docker/docker. In order to have the docker dependency build properly again, moby/moby@c3a0a37 needs to be present. However, updating docker to that revision makes fsouza/go-dockerclient incompatible. Also, k8s.io/kubernetes is already free of fsouza/go-dockerclient code; we should do the same for oc (and other openshift packages).

Info:

  • docker.Client required to have Ping() error method (new method has Ping(ctx context.Context) (types.Ping, error) signature)
  • ListImages(opts docker.ListImagesOptions) ([]docker.APIImages, error) replaced with ImageList(ctx context.Context, options types.ImageListOptions) ([]types.ImageSummary, error)
  • InspectImage(name string) (*docker.Image, error) replaced with ImageInspectWithRaw(ctx context.Context, imageID string) (types.ImageInspect, []byte, error)
  • github.com/fsouza/go-dockerclient.Port replaced with github.com/docker/go-connections/nat.Port

DockerConfig field without an equivalent field in container.Config:

Memory            int64               `json:"Memory,omitempty" yaml:"Memory,omitempty"`
MemorySwap        int64               `json:"MemorySwap,omitempty" yaml:"MemorySwap,omitempty"`
MemoryReservation int64               `json:"MemoryReservation,omitempty" yaml:"MemoryReservation,omitempty"`
KernelMemory      int64               `json:"KernelMemory,omitempty" yaml:"KernelMemory,omitempty"`
PidsLimit         int64               `json:"PidsLimit,omitempty" yaml:"PidsLimit,omitempty"`
CPUShares         int64               `json:"CpuShares,omitempty" yaml:"CpuShares,omitempty"`
CPUSet            string              `json:"Cpuset,omitempty" yaml:"Cpuset,omitempty"`
PortSpecs         []string            `json:"PortSpecs,omitempty" yaml:"PortSpecs,omitempty"`
DNS               []string            `json:"Dns,omitempty" yaml:"Dns,omitempty"` // For Docker API v1.9 and below only
VolumeDriver      string              `json:"VolumeDriver,omitempty" yaml:"VolumeDriver,omitempty"`
VolumesFrom       string              `json:"VolumesFrom,omitempty" yaml:"VolumesFrom,omitempty"`
SecurityOpts    []string            `json:"SecurityOpts,omitempty"`
Mounts            []Mount             `json:"Mounts,omitempty" yaml:"Mounts,omitempty"`

compiling `oc` with gcc 10.x throws warnings

When you run make in the repo with gcc 10.x being used as your compiler, you're going to see warnings about pointer to local variable being printed, like:

go build -mod=vendor -tags 'include_gcs include_oss containers_image_openpgp gssapi' -ldflags "-s -w -X github.com/openshift/oc/pkg/version.versionFromGit="v4.2.0-alpha.0-555-g8426ead" -X github.com/openshift/oc/pkg/version.commitFromGit="8426ead01" -X github.com/openshift/oc/pkg/version.gitTreeState="dirty" -X github.com/openshift/oc/pkg/version.buildDate="2020-05-28T09:09:45Z" -X k8s.io/component-base/version.gitMajor="1" -X k8s.io/component-base/version.gitMinor="18" -X k8s.io/component-base/version.gitVersion="v1.18.0-0-g9e99141" -X k8s.io/component-base/version.gitCommit="8426ead01" -X k8s.io/component-base/version.buildDate="2020-05-28T09:09:44Z" -X k8s.io/component-base/version.gitTreeState="clean" -X k8s.io/client-go/pkg/version.gitVersion="v4.2.0-alpha.0-555-g8426ead" -X k8s.io/client-go/pkg/version.gitCommit="8426ead01" -X k8s.io/client-go/pkg/version.buildDate="2020-05-28T09:09:44Z" -X k8s.io/client-go/pkg/version.gitTreeState="dirty"" github.com/openshift/oc/cmd/oc
# github.com/mattn/go-sqlite3
sqlite3-binding.c: In function ‘sqlite3SelectNew’:
sqlite3-binding.c:123303:10: warning: function may return address of local variable [-Wreturn-local-addr]
123303 |   return pNew;
       |          ^~~~
sqlite3-binding.c:123263:10: note: declared here
123263 |   Select standin;
       |          ^~~~~~~

This is a known upstream bug; a little investigation leads you from mattn/go-sqlite3#803 to https://sqlite.org/forum/forumpost/845dd0be91

Build failure: invalid pseudo-version: revision is shorter than canonical

Commit: 9d412f4

What I tried

Build OC by running make on a freshly cloned master branch.

What I expected

I expected OC to build.

What I really got

Unsuccessful build.

~/projects/oc-make-test [master]$ make
go: github.com/openshift/[email protected]: invalid pseudo-version: revision is shorter than canonical (86def77f6f90)
make: Nothing to be done for 'all'.
~/projects/oc-make-test [master]$ echo $?
0

Environment information:

go version: go version go1.13.4 linux/amd64

go env:

GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/imeixner/.cache/go-build"
GOENV="/home/imeixner/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/imeixner/go"
GOPRIVATE=""
GOPROXY="direct"
GOROOT="/usr/lib/golang"
GOSUMDB="off"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/golang/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/imeixner/projects/oc-make-test/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build804934264=/tmp/go-build -gno-record-gcc-switches"

uname -a: Linux imeixner-fedora 5.3.11-300.fc31.x86_64 #1 SMP Tue Nov 12 19:08:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Last commit that builds for me is b53e708, with the following make output:

build .: cannot find module for path .
build .: cannot find module for path .
build .: cannot find module for path .
build .: cannot find module for path .
go build -tags 'include_gcs include_oss containers_image_openpgp gssapi' -ldflags "-s -w -X no_package_detected/pkg/version.versionFromGit="v4.2.0-alpha.0-265-gb53e708" -X no_package_detected/pkg/version.commitFromGit="b53e708cc" -X no_package_detected/pkg/version.gitTreeState="clean" -X no_package_detected/pkg/version.buildDate="2019-11-22T11:01:36Z" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitMajor="1" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitMinor="16" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitVersion="v1.16.0-7-gab72ed5" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitCommit="b53e708cc" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.buildDate="2019-11-22T11:01:36Z" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitTreeState="clean"" github.com/openshift/oc/cmd/oc
go build -tags 'include_gcs include_oss containers_image_openpgp gssapi' -ldflags "-s -w -X no_package_detected/pkg/version.versionFromGit="v4.2.0-alpha.0-265-gb53e708" -X no_package_detected/pkg/version.commitFromGit="b53e708cc" -X no_package_detected/pkg/version.gitTreeState="clean" -X no_package_detected/pkg/version.buildDate="2019-11-22T11:01:37Z" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitMajor="1" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitMinor="16" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitVersion="v1.16.0-7-gab72ed5" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitCommit="b53e708cc" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.buildDate="2019-11-22T11:01:36Z" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitTreeState="clean"" github.com/openshift/oc/tools/clicheck
go build -tags 'include_gcs include_oss containers_image_openpgp gssapi' -ldflags "-s -w -X no_package_detected/pkg/version.versionFromGit="v4.2.0-alpha.0-265-gb53e708" -X no_package_detected/pkg/version.commitFromGit="b53e708cc" -X no_package_detected/pkg/version.gitTreeState="clean" -X no_package_detected/pkg/version.buildDate="2019-11-22T11:01:37Z" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitMajor="1" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitMinor="16" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitVersion="v1.16.0-7-gab72ed5" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitCommit="b53e708cc" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.buildDate="2019-11-22T11:01:36Z" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitTreeState="clean"" github.com/openshift/oc/tools/gendocs
go build -tags 'include_gcs include_oss containers_image_openpgp gssapi' -ldflags "-s -w -X no_package_detected/pkg/version.versionFromGit="v4.2.0-alpha.0-265-gb53e708" -X no_package_detected/pkg/version.commitFromGit="b53e708cc" -X no_package_detected/pkg/version.gitTreeState="clean" -X no_package_detected/pkg/version.buildDate="2019-11-22T11:01:37Z" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitMajor="1" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitMinor="16" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitVersion="v1.16.0-7-gab72ed5" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitCommit="b53e708cc" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.buildDate="2019-11-22T11:01:36Z" -X github.com/openshift/oc/vendor/k8s.io/kubernetes/pkg/version.gitTreeState="clean"" github.com/openshift/oc/tools/genman

Cleaning Go cache didn't help.

OC seems to build fine on a fresh RHEL 8 after installing Git and Go (environment info below). Could the problem possibly be caused by Go 1.13?

[root@rhel oc]# uname -a
Linux rhel 4.18.0-147.el8.x86_64 #1 SMP Thu Sep 26 15:52:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@rhel oc]# go version
go version go1.12.8 linux/amd64
[root@rhel oc]# go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/root/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/golang"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/golang/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/root/oc/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build841880008=/tmp/go-build -gno-record-gcc-switches"

CC @tisnik

"oc adm catalog mirror" set an invalid value for field meta.name for imagecontentsourcepolicy manifest

When you create the mirror of the artifacts of the catalog, the content of the metadata.name field of the manifest file is based on the path of the source registry, including / characters, which are invalid.

so for the command

oc adm catalog mirror registry:5000/bmillemathias/olm/redhat-operators:v202004071506 \
 registry:5000/bmillemathias/olm/redhat-operators-build

you end up with:

kind: ImageContentSourcePolicy
metadata:
  name: olm/redhat-operators
...
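A possible manual workaround sketch, assuming the generated manifest file is named imageContentSourcePolicy.yaml (the exact file name may differ): rewrite the name to a value without slashes before applying it.

sed -i 's|name: olm/redhat-operators|name: olm-redhat-operators|' imageContentSourcePolicy.yaml
oc apply -f imageContentSourcePolicy.yaml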

oc __custom_func clobbers kubectl __custom_func

kubectl's Bash completion defines a function creatively named __custom_func, which serves as a sort of entry point for certain kubectl commands (config, create, get, etc.). Not all of them, and it's not entirely clear whether there is a pattern to the covered commands.

I've verified this in kubectl v1.11.0+d4cacc0 and v1.14.0.

oc's Bash completion also defines a function named __custom_func, and it serves exactly the same purpose for oc as kubectl's __custom_func serves for kubectl. Because "o" comes after "k", oc's __custom_func clobbers kubectl's. While we can largely use oc as a proxy for kubectl, kubectl's completion is considerably more refined than oc's completion of kubectl's operations; for instance, context completion.

I've verified this only in oc v3.11.0+0cbc58b but it is evidently still true. It looks to the naked eye as if it can be trivially renamed.

Of course, it doesn't really matter which function clobbers which, because both kubectl and oc are being bad citizens. It surprises me that either tool has gotten away with an un-namespaced function for so long, especially considering that the more than 400 other functions are all properly namespaced.

ADD README.md

Currently the README is just a placeholder with a TODO message. Is there a plan to add a README and some docs on how the CLI is structured?

oc build from `master` sources gives outdated version number

Built oc from master source commit f415627, which I expected to be a development branch. Seems to be sync'ed with release-4.7 branch:

commit f415627b3a8df305c4dd0ada0b4bc1271846a777 (HEAD -> master, origin/release-4.7, origin/release-4.6, origin/release-4.5, origin/master, origin/HEAD)
Merge: 361c30813 b27b4901a
Author: OpenShift Merge Robot <[email protected]>
Date:   Thu May 7 21:37:23 2020 +0200

    Merge pull request #409 from soltysh/bug1826230
    
    Bug 1826230: bring missing fixes to oc

However, running oc version gives something a bit nonsensical:

% ./oc version
Client Version: v4.2.0-alpha.0-571-gf415627
Kubernetes Version: v1.18.2

Saw the same problem on Fedora 32 and macOS.

Feature request: Show banners defined by ConsoleNotification CR also in CLI

Hi,

I love the ConsoleNotification feature, where custom banners are presented to the users (you can do 'We'll do maintenance on this cluster in two weeks' style announcements).

Would it be possible to also show a banner after logging in on the terminal with the CLI, oc or kubectl? It would be useful to notify our console hackers about maintenance tasks :-)

oc login ... 

Message: The cluster okd4.c1.dev will get planned maintenance in two weeks.

Greetings,

Josef
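For reference, a minimal sketch of the ConsoleNotification resource that the web console banners come from (field values illustrative); the request is for oc to surface the same text on login:

oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleNotification
metadata:
  name: maintenance-banner
spec:
  text: The cluster okd4.c1.dev will get planned maintenance in two weeks.
  location: BannerTop
  color: '#fff'
  backgroundColor: purple
EOF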

Add oc commands to install operators

Since OKD ships OperatorHub, why not add oc commands to interact with operators? List, get, install, uninstall.

In some contexts, e.g. automated testing on Tekton CI, going to the OKD UI and installing an operator manually is not an option.

We're currently providing bash scripts that check operator groups and subscriptions and create them if missing, but I definitely think that the oc CLI is the best place for this functionality.
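For illustration, a hedged sketch of roughly what such a script applies today (operator name, namespace, and channel are made up):

oc apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-operators
  namespace: my-namespace
spec:
  targetNamespaces:
  - my-namespace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: my-namespace
spec:
  channel: stable
  name: my-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF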

"oc adm inspect namespace" files are deeply nested

When try "oc adm inspect" to collect namespace data, find the files are deeply nested, not easy to find a target file. Even get familiar with the folder structure, still need open several folders then can check the log.
So would like a feature request to make some changes to "deeply nested files", to have a clearer view of the collect data.

For resource YAML, move the nested folder name into the file name.
For example :

./inspect.local.6573161925693834685/
└── namespaces
    └── openshift-monitoring
        ├── apps
        │   ├── daemonsets.yaml
        ...
        │   └── statefulsets.yaml
        ├── apps.openshift.io
        │   └── deploymentconfigs.yaml
        ...

adjust to

./inspect.local.6573161925693834685/
└── namespaces
    └── openshift-monitoring
        ├── apps.daemonsets.yaml
        ├── apps.statefulsets.yaml
        ├── apps.openshift.io.deploymentconfigs.yaml

For pod logs, remove the duplicate container-name folder and the logs folder.
For example :

./inspect.local.6573161925693834685/
└── namespaces
    └── openshift-monitoring
        ├── pods
        │   ├── thanos-querier-6647bc5cd7-kfngd
...
        │   │   ├── oauth-proxy
        │   │   │   └── oauth-proxy
        │   │   │       ├── healthz
        │   │   │       ├── heap
        │   │   │       ├── logs
        │   │   │       │   ├── current.log
        │   │   │       │   ├── previous.insecure.log
        │   │   │       │   └── previous.log
        │   │   │       ├── metrics.json
        │   │   │       ├── profile
        │   │   │       └── trace
...
        │   │   ├── thanos-querier
        │   │   │   └── thanos-querier
        │   │   │       └── logs
        │   │   │           ├── current.log
        │   │   │           ├── previous.insecure.log
        │   │   │           └── previous.log
        │   │   └── thanos-querier-6647bc5cd7-kfngd.yaml
...

adjust to

./inspect.local.6573161925693834685/
└── namespaces
    └── openshift-monitoring
        ├── pods
        │   ├── thanos-querier-6647bc5cd7-kfngd
...
        │   │   ├── oauth-proxy
        │   │   │   ├── healthz
        │   │   │   ├── heap
        │   │   │   ├── current.log
        │   │   │   ├── previous.insecure.log
        │   │   │   ├── previous.log
        │   │   │   ├── metrics.json
        │   │   │   ├── profile
        │   │   │   └── trace
...
        │   │   ├── thanos-querier
        │   │   │   ├── current.log
        │   │   │   ├── previous.insecure.log
        │   │   │   └── previous.log
        │   │   └── thanos-querier-6647bc5cd7-kfngd.yaml
...

oc logs -l does not work

oc logs -l does not work as expected:

oc logs -l serving.knative.dev/service=event-display -c user-container
error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'

This is with oc:

 oc version
Client Version: 4.3.8
Kubernetes Version: v1.17.4+k3s1

but I have the same behaviour with older oc versions as well. With kubectl the very same command works without issue.

Support oc adm catalog on Mac

Using the GA version of OCP 4.3, I downloaded the oc.zip for Mac:

> oc version
Client Version: openshift-clients-4.3.0-201910250623-88-g6a937dfe
Server Version: 4.3.0
Kubernetes Version: v1.16.2
oc adm catalog
error: unknown command "catalog"

Shouldn't the adm catalog command be available in 4.3?

Latest oc builds from CI missing release information

The latest nightlies of oc are missing data in the oc version output, like buildDate, gitVersion, etc.

$ oc adm release extract --command='oc' registry.svc.ci.openshift.org/ocp/release:4.4.0-0.ci-2019-12-13-145806
$ ./oc version -o json
{
  "clientVersion": {
    "major": "",
    "minor": "",
    "gitVersion": "unknown",
    "gitCommit": "",
    "gitTreeState": "",
    "buildDate": "",
    "goVersion": "go1.13.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "releaseClientVersion": "4.4.0-0.ci-2019-12-13-145806"
}

From https://mirror.openshift.com/pub/openshift-v4/clients/oc/4.4/

{
  "clientVersion": {
    "major": "",
    "minor": "",
    "gitVersion": "v4.4.0",
    "gitCommit": "60ccef2e295e103fa6bd01b9ae5de9061386b948",
    "gitTreeState": "clean",
    "buildDate": "2019-12-13T16:45:38Z",
    "goVersion": "go1.12.12",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

RFE: enhanced journalctl selectors (kernel, syslog...)

There are some easy enhancements that would really make oc adm node-logs more useful.

  1. kernel logs. The kernel buffer is special - you have to request it via journalctl -k. You cannot do -u kernel. You cannot do --grep kernel, because --grep only searches the message, not the printed log line.
  2. Syslog identifiers. If we want to look into SELinux denials, we need to either --grep AVC or --syslog-identifier=audit.

We should probably add -k as an option in oc adm node-logs. But it might just make sense to add a --journalctl-options="--foo --bar" passthrough.
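For reference, a sketch of the raw journalctl invocations being described (note that journalctl's own flag for selecting a syslog identifier is spelled -t / --identifier):

journalctl -k            # kernel ring buffer
journalctl -t audit      # entries by syslog identifier (e.g. SELinux AVC denials)
journalctl --grep AVC    # full-text search of the message field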

oc fails to build on macOS with "Unknown type name `gss_const_name_t`"

Not sure exactly if I'm doing anything wrong. The same source version builds on Fedora 32 without a problem:

%make
go build -mod=vendor -tags 'include_gcs include_oss containers_image_openpgp gssapi' -ldflags "-s -w -X github.com/openshift/oc/pkg/version.versionFromGit="v4.2.0-alpha.0-571-gf415627" -X github.com/openshift/oc/pkg/version.commitFromGit="f415627b3" -X github.com/openshift/oc/pkg/version.gitTreeState="clean" -X github.com/openshift/oc/pkg/version.buildDate="2020-05-12T14:11:00Z" -X k8s.io/component-base/version.gitMajor="1" -X k8s.io/component-base/version.gitMinor="18" -X k8s.io/component-base/version.gitVersion="v1.18.2-0-g52c56ce" -X k8s.io/component-base/version.gitCommit="f415627b3" -X k8s.io/component-base/version.buildDate="2020-05-12T14:10:57Z" -X k8s.io/component-base/version.gitTreeState="clean" -X k8s.io/client-go/pkg/version.gitVersion="v4.2.0-alpha.0-571-gf415627" -X k8s.io/client-go/pkg/version.gitCommit="f415627b3" -X k8s.io/client-go/pkg/version.buildDate="2020-05-12T14:10:57Z" -X k8s.io/client-go/pkg/version.gitTreeState="clean"" github.com/openshift/oc/cmd/oc
# github.com/apcera/gssapi
vendor/github.com/apcera/gssapi/name.go:213:9: could not determine kind of name for C.wrap_gss_canonicalize_name
cgo: 
clang errors for preamble:
vendor/github.com/apcera/gssapi/name.go:90:2: error: unknown type name 'gss_const_name_t'
        gss_const_name_t input_name,
        ^
1 error generated.

make: *** [build] Error 2

oc get clusterversion failed

Can anyone offer some pointers on how to debug this issue and what is wrong with my cluster?

[root@bunnies-inf ~]# oc get  clusterversion version -v=8
I1114 00:57:19.598577    4051 loader.go:359] Config loaded from file /root/auth/kubeconfig
I1114 00:57:19.602575    4051 loader.go:359] Config loaded from file /root/auth/kubeconfig
I1114 00:57:19.625521    4051 loader.go:359] Config loaded from file /root/auth/kubeconfig
I1114 00:57:19.641196    4051 loader.go:359] Config loaded from file /root/auth/kubeconfig
I1114 00:57:19.641965    4051 round_trippers.go:416] GET https://api.bunnies.os.fyre.ibm.com:6443/apis/config.openshift.io/v1/clusterversions/version
I1114 00:57:19.641991    4051 round_trippers.go:423] Request Headers:
I1114 00:57:19.642006    4051 round_trippers.go:426]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json
I1114 00:57:19.642022    4051 round_trippers.go:426]     User-Agent: oc/v1.13.4+72d1bea (linux/amd64) kubernetes/72d1bea
I1114 00:57:19.828196    4051 round_trippers.go:441] Response Status: 200 OK in 186 milliseconds
I1114 00:57:19.828228    4051 round_trippers.go:444] Response Headers:
I1114 00:57:19.828239    4051 round_trippers.go:447]     Audit-Id: d66f5140-e3b1-46e8-aba5-87d1c1eea50a
I1114 00:57:19.828249    4051 round_trippers.go:447]     Cache-Control: no-store
I1114 00:57:19.828259    4051 round_trippers.go:447]     Content-Type: application/json
I1114 00:57:19.828270    4051 round_trippers.go:447]     Content-Length: 1967
I1114 00:57:19.828284    4051 round_trippers.go:447]     Date: Thu, 14 Nov 2019 08:57:19 GMT
I1114 00:57:19.828363    4051 request.go:942] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1beta1","metadata":{"selfLink":"/apis/config.openshift.io/v1/clusterversions/version","resourceVersion":"12497363"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Version","type":"string","format":"","description":"Custom resource definition column (in JSONPath format): .status.history[?(@.state==\"Completed\")].version","priority":0},{"name":"Available","type":"string","format":"","description":"Custom resource definition column (in JSONPath format): .status.conditions[?(@.type==\"Available\")].status","priority":0},{"name":"Progressing","type":"string","format": [truncated 943 chars]
I1114 00:57:19.829523    4051 get.go:563] no kind is registered for the type v1beta1.Table in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:29"
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          34d     Unable to apply 4.1.18: an unknown error has occurred
[root@bunnies-inf ~]# oc version
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.18-201909201915+72d1bea-dirty", GitCommit:"72d1bea", GitTreeState:"dirty", BuildDate:"2019-09-21T02:11:40Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+c2a5caf", GitCommit:"c2a5caf", GitTreeState:"clean", BuildDate:"2019-09-21T02:12:52Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

oc bash-completion on mac

Hi,

I'm trying to get the bash completion behavior working on macOS 10.15.

oc completion bash > /usr/local/etc/bash_completion.d/oc_bash_completion.sh   
source /usr/local/etc/bash_completion.d/oc_bash_completion.sh   

Getting this error:

usr/local/etc/bash_completion.d/oc_bash_completion.sh:type:15252: bad option: -t

Tried removing "-t" in these lines:

if [[ $(type -t compopt) = "builtin" ]]; then
    complete -o default -F __start_oc oc
else
    complete -o default -o nospace -F __start_oc oc
fi

But getting a different error upon doing this when I use TABs with 'oc':

__start_oc:5: command not found: _init_completion
__start_oc:5: command not found: _init_completion

Let me know if anyone has it working on mac. Thanks!

build of oc fails, while a kubectl build works

Hi,

after I successfully built kubectl from the kubernetes project:

(589) x230:/export/home/olbohlen$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.0-alpha.2.24+0086db9d59b0da", GitCommit:"0086db9d59b0da351c126b5904c21b98a20b2b7b", GitTreeState:"clean", BuildDate:"2020-05-19T11:28:36Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"illumos/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6+cd3a7d0", GitCommit:"cd3a7d0", GitTreeState:"clean", BuildDate:"2020-02-10T13:51:26Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

I tried building oc, but that unfortunately fails. If I understood a golang/k8s contributor correctly, it should work as long as oc doesn't require dependencies that are too old. Is there a way I could easily fix that (or better... can you? ;)):


(586) x230:/export/home/olbohlen/go/src/github.com/openshift/oc$ gmake oc   
go build -mod=vendor -tags 'include_gcs include_oss containers_image_openpgp' -ldflags "-s -w -X github.com/openshift/oc/pkg/version.versionFromGit="v4.2.0-alpha.0-667-g577fcb0" -X github.com/openshift/oc/pkg/version.commitFromGit="577fcb022" -X github.com/openshift/oc/pkg/version.gitTreeState="dirty" -X github.com/openshift/oc/pkg/version.buildDate="2020-07-08T15:28:55Z" -X k8s.io/component-base/version.gitMajor="1" -X k8s.io/component-base/version.gitMinor="18" -X k8s.io/component-base/version.gitVersion="v1.18.2-0-g52c56ce" -X k8s.io/component-base/version.gitCommit="577fcb022" -X k8s.io/component-base/version.buildDate="2020-07-08T15:28:52Z" -X k8s.io/component-base/version.gitTreeState="clean" -X k8s.io/client-go/pkg/version.gitVersion="v4.2.0-alpha.0-667-g577fcb0" -X k8s.io/client-go/pkg/version.gitCommit="577fcb022" -X k8s.io/client-go/pkg/version.buildDate="2020-07-08T15:28:52Z" -X k8s.io/client-go/pkg/version.gitTreeState="dirty"" github.com/openshift/oc/cmd/oc
# github.com/docker/docker/pkg/term
vendor/github.com/docker/docker/pkg/term/tc.go:12:27: undefined: Termios
vendor/github.com/docker/docker/pkg/term/tc.go:17:27: undefined: Termios
vendor/github.com/docker/docker/pkg/term/term.go:24:10: undefined: Termios
# github.com/docker/docker/pkg/system
vendor/github.com/docker/docker/pkg/system/lstat_unix.go:19:9: undefined: fromStatT
vendor/github.com/docker/docker/pkg/system/stat_unix.go:65:9: undefined: fromStatT
# github.com/docker/docker/client
vendor/github.com/docker/docker/client/client.go:120:35: undefined: DefaultDockerHost
vendor/github.com/docker/docker/client/client.go:125:12: undefined: DefaultDockerHost
vendor/github.com/docker/docker/client/client.go:128:12: undefined: defaultProto
vendor/github.com/docker/docker/client/client.go:129:12: undefined: defaultAddr gmake: *** [vendor/github.com/openshift/build-machinery-go/make/targets/golang/build.mk:14: build] Error 2

Thanks,

Olaf

passing incorrect command-line flags doesn't return an error

I suspect the issue is not tied to a single command, but I ran into it when I tried to create a new release from an existing image.

  1. running the following command (notice incorrect —from-release flag)

    oc  adm release new —from-release=registry.svc.ci.openshift.org/origin/release:4.2 cluster-kube-apiserver-operator=quay.io/polynomial/origin-cluster-kube-apiserver-operator:latest --to-image=quay.io/polynomial/origin-release:latest
    

    gives me the following output

    info: Loading override quay.io/polynomial/origin-cluster-kube-apiserver-operator:latest 
    cluster-kube-apiserver-operator
    info: Included 1 images from 1 input operators into the release
    error: unable to create a release: operator "cluster-kube-apiserver-operator" contained an 
    invalid image-references file: no input image tag named "hypershift"
    
  2. running (notice incorrect -from-release flag)

    oc adm release new -from-release=registry.svc.ci.openshift.org/origin/release:4.2 cluster-kube-apiserver-operator=quay.io/polynomial/origin-cluster-kube-apiserver-operator:latest --to-image=quay.io/polynomial/origin-release:latest
    

    is better but not perfect

    error: open rom-release=registry.svc.ci.openshift.org/origin/release:4.2: no such file or 
    directory
    
  3. however the following command

     oc adm release new --key=val --from-release=registry.svc.ci.openshift.org/origin/release:4.2 cluster-kube-apiserver-operator=quay.io/polynomial/origin-cluster-kube-apiserver-operator:latest --to-image=quay.io/polynomial/origin-release:latest
    

    reports an unknown flag

    Error: unknown flag: --key
    
    
    Usage:
      oc adm release new [SRC=DST ...] [flags]
    
    Examples:
     # Create a release from the latest origin images and push to a DockerHub repo
      oc adm release new --from-image-stream=4.1 -n origin --to-image 
    docker.io/mycompany/myrepo:latest
    ....
    

I would expect that cases 1 and 2 would behave the same way as 3.

oc explain should show which parameters are MUST-HAVE vs optional

To help users quickly create OpenShift resources, annotate fields with "must" and "optional".

Running "oc explain --recursive dc" and "oc explain --recursive po" produced files with 963 and 731 lines, respectively. For an experienced users, it may not be problem but for new users it becomes a daunting task to figure out which fields must a provide which can be left out.

spuriously claims "error: the release could not be reproduced from its inputs"

If you do something like:

> oc adm release new -n origin --server https://api.ci.openshift.org \
    --from-release registry.svc.ci.openshift.org/ocp/release:4.3.0-0.ci-2019-11-01-122324 \
    --to-image quay.io/danwinship/ocp-release:ipv6-testing \
    hyperkube=quay.io/danwinship/hyperkube:ipv6

it will build and push the release image, and then say

error: the release could not be reproduced from its inputs

which is wrong because I wasn't trying to reproduce the release, I was trying to modify it.

(AFAICT everything works fine, it just gives the incorrect error message at the end.)

oc new-build fails with custom dockerfilePath

The following command fails

oc new-build --code=https://github.com/akram/jenkins.git  --context-dir=2 \
             --strategy=docker --dockerfile=2/Dockerfile.localdev

or even:

oc new-build --code=https://github.com/akram/jenkins.git  --context-dir=2 \
             --strategy=docker --dockerfile=Dockerfile.localdev

with the error message

error: the Dockerfile in the repository "" has no FROM instruction

oc adm catalog mirror issues with identitytoken?

If I attempt to run oc adm catalog mirror on a catalog image that is on a docker image registry server that uses identitytoken, I consistently get auth errors:

n=true -v=5
I0616 06:25:54.863600    6084 config.go:137] looking for config.json at /root/.docker/config.json
I0616 06:25:54.863861    6084 config.go:145] found valid config.json at /root/.docker/config.json
I0616 06:25:55.122809    6084 credentials.go:108] Found secret to match https://cp.stg.icr.io/oauth/token (cp.stg.icr.io/oauth/token):
I0616 06:25:55.315006    6084 workqueue.go:143] about to send work queue error: unable to read image cp.stg.icr.io/cp/cp4mcm/hktest/testcatalog:airgap-2.0: Head https://cp.stg.icr.io/v2/cp/cp4mcm/hktest/testcatalog/manifests/airgap-2.0: unauthorized: authentication required

Yet, I am able to do a docker pull on the same image without issues

oc patch fails in Windows 10 PowerShell

Note: This patch DOES work when I use Windows Subsystem for Linux (using Ubuntu).

It does not work with Windows 10 Enterprise and PowerShell.

Here's the command and results:

❯ oc patch pipelineresource -n pipelines-tutorial qotd-git --type=json -p '[{"op":"replace","path":"/spec/params/0/value","value":"https://github.com/donschenck/qotd-python.git"}]'

The "" is invalid

Here is the JSON being patched:

{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "tekton.dev/v1alpha1",
      "kind": "PipelineResource",
      "metadata": {
        "creationTimestamp": "2020-06-15T18:08:51Z",
        "generation": 4,
        "name": "qotd-git",
        "namespace": "pipelines-tutorial",
        "resourceVersion": "513280",
        "selfLink": "/apis/tekton.dev/v1alpha1/namespaces/pipelines-tutorial/pipelineresources/qotd-git",
        "uid": "c92d1ae0-0de2-4cfb-bae0-bb3b8b2d9869"
      },
      "spec": {
        "params": [
          {
            "name": "url",
            "value": "https://github.com/redhat-developer-demos/qotd.git"
          }
        ],
        "type": "git"
      }
    },
    {
      "apiVersion": "tekton.dev/v1alpha1",
      "metadata": {
        "creationTimestamp": "2020-06-15T18:14:04Z",
        "generation": 1,
        "name": "qotd-image",
        "namespace": "pipelines-tutorial",
        "resourceVersion": "84049",
        "selfLink": "/apis/tekton.dev/v1alpha1/namespaces/pipelines-tutorial/pipelineresources/qotd-image",
        "uid": "4f1da0a8-bca6-4726-bd78-9bf015c1cc57"
      },
      "spec": {
        "params": [
          {
            "name": "url",
            "value": "image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/qotd:latest"
          }
        ],
        "type": "image"
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": "",
    "selfLink": ""
  }
}
