open-policy-agent / gatekeeper-library

📚 The OPA Gatekeeper policy library

Home Page: https://open-policy-agent.github.io/gatekeeper-library

License: Apache License 2.0

Topics: gatekeeper, opa, policy, cncf, kubernetes, policy-library, hacktoberfest


gatekeeper-library's Issues

A policy definition to audit/deny if a pod's imagePullPolicy is not set to Always

Describe the solution you'd like
A policy definition to audit/deny if a pod's imagePullPolicy is not set to Always.

Anything else you would like to add:
Why? This is often required in a multi-tenant Kubernetes environment. It ensures that a tenant from another namespace is not able to start my image that already exists on the target host: if imagePullPolicy is not set to Always, no authentication occurs against the registry.
https://kubernetes.io/docs/concepts/configuration/overview/#container-images
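
For illustration, a minimal sketch of what the violation rule might look like, assuming the usual Gatekeeper input.review shape (initContainers omitted for brevity):

violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  container.imagePullPolicy != "Always"
  msg := sprintf("Container <%v> must set imagePullPolicy to Always", [container.name])
}

violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  # Also flag the field being absent entirely (the API server usually
  # defaults it before the webhook runs, but the guard is cheap).
  not container.imagePullPolicy
  msg := sprintf("Container <%v> must explicitly set imagePullPolicy", [container.name])
}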

Environment:

  • Gatekeeper version:
  • Kubernetes version: (use kubectl version):

Unit testing script

It would be good to have a script that runs all of the src.rego and src_test.rego files from each policy directory through `opa test` to ensure everything passes.
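
A minimal sketch of such a command, assuming opa is on the PATH and the src/ layout used by this repo:

# opa test recurses into src/, picking up each src.rego and its src_test.rego.
opa test src/ -v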

allowedrepos test case fails locally

I was trying this on the latest commit, which is ea67a19:

kind version 0.11.1
Bats 1.3.0
gatekeeper-3.4.0
kubectl 1.19 (client) / 1.21 (server)

But it seems this commit has passed the tests successfully here.

$ bats -t test/bats/test.bats

Full output:

1..8
ok 1 gatekeeper-controller-manager is running
ok 2 gatekeeper-audit is running
ok 3 namespace label webhook is serving
ok 4 constrainttemplates crd is established
ok 5 waiting for validating webhook
ok 6 applying sync config
ok 7 waiting for namespaces to be synced using metrics endpoint
not ok 8 testing constraint templates
# (from function `constraint_enforced' in file test/bats/helpers.bash, line 102,
#  from function `wait_for_process' in file test/bats/helpers.bash, line 58,
#  in test file test/bats/test.bats, line 80)
#   `wait_for_process ${WAIT_TIME} ${SLEEP_TIME} "constraint_enforced $kind $name"' failed
# Context "kind-kind" modified.
# running integration test against policy group: general, constraint template: allowedrepos
# constrainttemplate.templates.gatekeeper.sh/k8sallowedrepos created
# testing sample constraint: repo-must-be-openpolicyagent
# k8sallowedrepos.constraints.gatekeeper.sh/repo-is-openpolicyagent created
# checking constraint {
#     "apiVersion": "constraints.gatekeeper.sh/v1beta1",
#     "kind": "K8sAllowedRepos",
#     "metadata": {
#         "annotations": {
#             "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"constraints.gatekeeper.sh/v1beta1\",\"kind\":\"K8sAllowedRepos\",\"metadata\":{\"annotations\":{},\"name\":\"repo-is-openpolicyagent\"},\"spec\":{\"match\":{\"kinds\":[{\"apiGroups\":[\"\"],\"kinds\":[\"Pod\"]}],\"namespaces\":[\"default\"]},\"parameters\":{\"repos\":[\"openpolicyagent/\"]}}}\n"
#         },
#         "creationTimestamp": "2021-06-02T08:27:35Z",
#         "generation": 1,
#         "managedFields": [
#             {
#                 "apiVersion": "constraints.gatekeeper.sh/v1beta1",
#                 "fieldsType": "FieldsV1",
#                 "fieldsV1": {
#                     "f:metadata": {
#                         "f:annotations": {
#                             ".": {},
#                             "f:kubectl.kubernetes.io/last-applied-configuration": {}
#                         }
#                     },
#                     "f:spec": {
#                         ".": {},
#                         "f:match": {
#                             ".": {},
#                             "f:kinds": {},
#                             "f:namespaces": {}
#                         },
#                         "f:parameters": {
#                             ".": {},
#                             "f:repos": {}
#                         }
#                     }
#                 },
#                 "manager": "kubectl-client-side-apply",
#                 "operation": "Update",
#                 "time": "2021-06-02T08:27:35Z"
#             }
#         ],
#         "name": "repo-is-openpolicyagent",
#         "resourceVersion": "8879",
#         "uid": "d628af69-41bd-4b6d-b3ff-0d1e6dcc422e"
#     },
#     "spec": {
#         "match": {
#             "kinds": [
#                 {
#                     "apiGroups": [
#                         ""
#                     ],
#                     "kinds": [
#                         "Pod"
#                     ]
#                 }
#             ],
#             "namespaces": [
#                 "default"
#             ]
#         },
#         "parameters": {
#             "repos": [
#                 "openpolicyagent/"
#             ]
#         }
#     }
# }
# jq: error (at <stdin>:65): Cannot iterate over null (null)
# ready: , expected: 3
# [... the same "checking constraint" block and jq error repeat several more times while the test polls for constraint readiness ...]
# checking constraint {
#     "apiVersion": "constraints.gatekeeper.sh/v1beta1",
#     "kind": "K8sAllowedRepos",
#     "metadata": {
#         "annotations": {
#             "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"constraints.gatekeeper.sh/v1beta1\",\"kind\":\"K8sAllowedRepos\",\"metadata\":{\"annotations\":{},\"name\":\"repo-is-openpolicyagent\"},\"spec\":{\"match\":{\"kinds\":[{\"apiGroups\":[\"\"],\"kinds\":[\"Pod\"]}],\"namespaces\":[\"default\"]},\"parameters\":{\"repos\":[\"openpolicyagent/\"]}}}\n"
#         },
#         "creationTimestamp": "2021-06-02T08:27:35Z",
#         "generation": 1,
#         "managedFields": [
#             {
#                 "apiVersion": "constraints.gatekeeper.sh/v1beta1",
#                 "fieldsType": "FieldsV1",
#                 "fieldsV1": {
#                     "f:metadata": {
#                         "f:annotations": {
#                             ".": {},
#                             "f:kubectl.kubernetes.io/last-applied-configuration": {}
#                         }
#                     },
#                     "f:spec": {
#                         ".": {},
#                         "f:match": {
#                             ".": {},
#                             "f:kinds": {},
#                             "f:namespaces": {}
#                         },
#                         "f:parameters": {
#                             ".": {},
#                             "f:repos": {}
#                         }
#                     }
#                 },
#                 "manager": "kubectl-client-side-apply",
#                 "operation": "Update",
#                 "time": "2021-06-02T08:27:35Z"
#             },
#             {
#                 "apiVersion": "constraints.gatekeeper.sh/v1beta1",
#                 "fieldsType": "FieldsV1",
#                 "fieldsV1": {
#                     "f:status": {
#                         ".": {},
#                         "f:byPod": {}
#                     }
#                 },
#                 "manager": "gatekeeper",
#                 "operation": "Update",
#                 "time": "2021-06-02T08:28:12Z"
#             }
#         ],
#         "name": "repo-is-openpolicyagent",
#         "resourceVersion": "8950",
#         "uid": "d628af69-41bd-4b6d-b3ff-0d1e6dcc422e"
#     },
#     "spec": {
#         "match": {
#             "kinds": [
#                 {
#                     "apiGroups": [
#                         ""
#                     ],
#                     "kinds": [
#                         "Pod"
#                     ]
#                 }
#             ],
#             "namespaces": [
#                 "default"
#             ]
#         },
#         "parameters": {
#             "repos": [
#                 "openpolicyagent/"
#             ]
#         }
#     },
#     "status": {
#         "byPod": [
#             {
#                 "constraintUID": "d628af69-41bd-4b6d-b3ff-0d1e6dcc422e",
#                 "enforced": true,
#                 "id": "gatekeeper-audit-5cc9fb45b9-5f9l7",
#                 "observedGeneration": 1,
#                 "operations": [
#                     "audit",
#                     "status"
#                 ]
#             },
#             {
#                 "constraintUID": "d628af69-41bd-4b6d-b3ff-0d1e6dcc422e",
#                 "enforced": true,
#                 "id": "gatekeeper-controller-manager-8d7b596c4-7gjxq",
#                 "observedGeneration": 1,
#                 "operations": [
#                     "webhook"
#                 ]
#             },
#             {
#                 "constraintUID": "d628af69-41bd-4b6d-b3ff-0d1e6dcc422e",
#                 "enforced": true,
#                 "id": "gatekeeper-controller-manager-8d7b596c4-f9jjz",
#                 "observedGeneration": 1,
#                 "operations": [
#                     "webhook"
#                 ]
#             },
#             {
#                 "constraintUID": "d628af69-41bd-4b6d-b3ff-0d1e6dcc422e",
#                 "enforced": true,
#                 "id": "gatekeeper-controller-manager-8d7b596c4-ksjfv",
#                 "observedGeneration": 1,
#                 "operations": [
#                     "webhook"
#                 ]
#             }
#         ]
#     }
# }
# ready: 3, expected: 3
# pod "opa-allowed" deleted
# expected: denied the request
# actual: pod/nginx-disallowed unchanged
# cleaning...
# constrainttemplate.templates.gatekeeper.sh "k8sallowedrepos" deleted

Any ideas? 🤔

Check timeoutSeconds for readinessProbe and livenessProbe

This Kubernetes PR enabled the ExecProbeTimeout feature flag, which ensures the kubelet will respect exec probe timeouts. If timeoutSeconds is not specified, the timeout defaults to 1 second. A policy that checks whether timeoutSeconds is set for readinessProbe and livenessProbe would ensure a proper timeout is configured and avoid breakage.
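
A minimal sketch of such a check, assuming the usual Gatekeeper input.review shape (note the API server may already default timeoutSeconds to 1 before the webhook sees the object):

violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  # Iterate over both probe types on each container.
  probe_type := {"livenessProbe", "readinessProbe"}[_]
  probe := container[probe_type]
  not probe.timeoutSeconds
  msg := sprintf("Container <%v> must set timeoutSeconds on its %v", [container.name, probe_type])
}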

RunAsUserName Policy Constraint for windows pods

Describe the solution you'd like
A policy that blocks ContainerAdministrator from being set in the windowsOptions of the pod spec. It is generally a good idea to run your containers as ContainerUser for Windows pods. Users are not shared between the container and the host, but ContainerAdministrator does have additional privileges within the container. In the PR for kubernetes/kubernetes#92355, agreement was reached to block ContainerAdministrator if RunAsNonRoot is specified.

The pod spec looks like this (it can also be set per container):

spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"

There are also username limitations to be aware of: https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/#windows-username-limitations
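
A minimal sketch of such a policy, with hypothetical message text, assuming the usual Gatekeeper input.review shape:

violation[{"msg": msg}] {
  input.review.object.spec.securityContext.windowsOptions.runAsUserName == "ContainerAdministrator"
  msg := "Pod-level runAsUserName must not be ContainerAdministrator"
}

violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  container.securityContext.windowsOptions.runAsUserName == "ContainerAdministrator"
  msg := sprintf("Container <%v> must not run as ContainerAdministrator", [container.name])
}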

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

The following has more information on Windows Security Policies which are currently being defined:
kubernetes/kubernetes#64801 (comment)

Environment:

  • Gatekeeper version:
  • Kubernetes version: (use kubectl version):

Organize PSP policies into standardized buckets

Describe the solution you'd like
There are quite a few PSP policies in the library. It may be confusing to users which ones to deploy and what constraint parameters to set.

sig-auth provides a set of standardized PSP definitions (https://docs.google.com/document/d/1d9c4BaDzRw1B5fAcf7gLOMZSVEvrpSutivjfNOwIqT0/edit#heading=h.ihgl7qlgtyzu) that we can align on, ranging from unrestricted to common best practices to restricted.

We should create guidance and buckets for users to adopt PSP policies easily, and mention that these can be customized by users.

debuggability

Describe the solution you'd like
How do I debug with rego when basics fail?

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

Gatekeeper version: 3.2
Kubernetes version: (use kubectl version): v1.18.9-eks-d1db3c
I had a constrainttemplate with this rego

package kubernetes.admission

violation[{"msg": msg}]  {
  input.request.kind.kind == "Pod"
  image := input.request.object.spec.containers[_].image
  value_match(image)
  msg := sprintf("image '%v' comes from untrusted registry", [image])
}

value_match(image) {
  arr := split(image, "/")
  not arr[count(arr)-2] == input.request.object.metadata.namespace
}

I figured out the hard way that `request` should be `review` for it to work. One of the examples misled me, I think.

But there was nothing in the logs helping. How can we do this better?

Also, this trick to debug would not work, because it looks like I have a whitespace issue in there somehow:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyall
spec:
  crd:
    spec:
      names:
        kind: K8sDenyAll
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyall
        violation[{"msg": msg}] {
          msg := sprintf("REVIEW OBJECT: %v", [input.review])
        }

And it would complain: error: error parsing template.yaml: error converting YAML to JSON: yaml: line 15: could not find expected ':'. Again, figured out the hard way!

using `startswith` in allowedrepos policy may allow for bypass

The Allowed Repos policy checks that containers are from approved locations.

It uses startswith as a function to conduct this check.

Taking this approach could allow a bypass, as an attacker could simply start their image name with one of the approved strings.

As shown in this example, where the allowed string is "openpolicyagent", "openpolicyagentnotreally" will be allowed.

Using an approach that matches the whole account name (or another complete path segment) would help mitigate this kind of risk.
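
One possible mitigation is to normalize the allowed prefix so the comparison always covers a full path segment; a sketch (the helper name is hypothetical):

violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  not image_matches_any(container.image)
  msg := sprintf("Container <%v> uses image <%v> from an unapproved repository", [container.name, container.image])
}

image_matches_any(image) {
  repo := input.parameters.repos[_]
  # "openpolicyagent" and "openpolicyagent/" both normalize to "openpolicyagent/",
  # which "openpolicyagentnotreally/nginx" does not match.
  startswith(image, concat("", [trim_suffix(repo, "/"), "/"]))
}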

Duplicated code in template.yaml and src.rego

Right now the code in src.rego is duplicated in template.yaml. Of course we have to keep src.rego around for the tests to work, but it would be great if we could inject the code of src.rego into .spec.targets[0].rego of the template.yaml file.

I assume that's not possible with kustomize? Maybe yq?
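
yq (v4) can do this; a minimal sketch, assuming the src/<group>/<template>/src.rego and library/<group>/<template>/template.yaml layout:

#!/usr/bin/env bash
set -euo pipefail
for src in src/*/*/src.rego; do
  rel="${src#src/}"                              # <group>/<template>/src.rego
  tmpl="library/${rel%/src.rego}/template.yaml"
  # strenv() injects the file contents as a string value.
  REGO="$(cat "$src")" yq eval -i '.spec.targets[0].rego = strenv(REGO)' "$tmpl"
done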

tolerations & nodeselectors

It would be great if there were library policies supporting functionality similar to these alpha admission controllers:

  • PodTolerationRestriction
  • PodNodeSelector

Helm chart

It would be nice if the library could be compiled into an easy-to-use Helm chart.

Integration testing via example yaml files in each policy directory

We can add integration testing based on each of the policy directories' templates, constraints, and example resource yaml files. Using the standard from #25, we can require that all resources containing "_allowed*.yaml" pass and "_disallowed*.yaml" are rejected. (We'll need to ensure those files exist first -- #26)

We have a couple of options here. The kpt tool has a Gatekeeper resource validator to evaluate a resource, template, and constraint against the OPA client. We can also spin up a kind cluster (similar to the Gatekeeper repo) and apply the resources there. I imagine the kpt approach is faster, but kind is more representative of real-world usage.

excludedNamespaces with wildcard for gatekeeper constraint object

I want to apply the constraint to every pod that is not in an 'openshift-*' namespace.
The constraint that I tried:

kind: K8sHostNamespaces
metadata:
  name: k8s-host-namespace
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces:
      - openshift-*

Also tried:

kind: K8sHostNamespaces
metadata:
  name: k8s-host-namespace
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces: ["openshift-*"]

The result is that OpenShift namespaces are still affected by this constraint. If I exclude the namespaces specifically, like 'openshift-etcd' or 'openshift-image-registry', then those namespaces do not fall under the constraint, as expected.

Expected result: be able to use a wildcard in the spec.match.excludedNamespaces field of Gatekeeper constraints.

seccomp policy doesn't take the new format into account

The seccomp policy checks for a valid seccomp profile being set in annotations on the workload. Since Kubernetes 1.19, seccomp settings have moved into the securityContext section of the manifest (docs here).

If the current version of the policy is used, it may block workloads which should be allowed.
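
For reference, the securityContext form introduced in 1.19 looks like this (the field the policy would additionally need to check):

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-example
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # replaces the seccomp.security.alpha.kubernetes.io annotations
  containers:
    - name: app
      image: nginx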

add an example to the library that uses the Deployment kind

Describe the solution you'd like
We've seen a few questions in Slack where users are trying to use the Deployment kind (for allowed replica count, for example) but are not using the apps API group; we should add an example to the library for this use case.

gMSA policy constraint for Windows Pods

Describe the solution you'd like
A policy in the library to apply a constraint on the gMSA fields in a Windows pod spec.

A Windows pod has a windowsOptions field on the securityContext of the pod spec (and on the individual containers):

spec:
  securityContext:
    windowsOptions:
      gmsaCredentialSpecName: gmsa-webapp1-crd
      gmsaCredentialSpec: <optional parameters if not using gmsaCrd>

When using the gMSA webhook and CRD, gmsaCredentialSpec should not be set, as the info is loaded from the CRD.

There could also be an option to have an allowlist of the gMSA CRDs that are permitted.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Gatekeeper version:
  • Kubernetes version: (use kubectl version):

Is it possible to allow an exemption for a given sidecar container only

I am new to Gatekeeper and trying to use the PSP constraints. We have succeeded in implementing the constraints and exempting some namespaces by using the excludedNamespaces option. We have a use case to exempt a constraint only for a sidecar container in a pod. Is this possible? If so, can you point me to an example? Thanks.

Is the gatekeeper-policy namespace mandatory?

I am using Helm for gatekeeper-policy, which has in-house developed templates and constraints.
This is not a bug, more of a generic question: does the gatekeeper-policy namespace need to be created first?
I tried to install it and it errored out with "gatekeeper-policy namespace not found", so I created it and it worked.
Is there any documentation for this?

mustRunAsNonRoot evaluations will error out in some cases

This can be reproduced by adding this test to https://github.com/open-policy-agent/gatekeeper-library/blob/master/src/pod-security-policy/users/src_test.rego

test_wcs {
  input := {"review": review({"runAsNonRoot": true}, [ctr("cont1", runAsNonRoot(false))], null), "parameters": user_mustrunasnonroot }
  results := violation with input as input
  count(results) == 0
}

You should see:

data.k8spspallowedusers.test_wcs: ERROR (1.205236ms)
  /opa-policies/src.rego:90: eval_conflict_error: functions must not produce multiple outputs for same inputs

The review object isn't anything too special: just a Pod whose PodSecurityContext sets runAsNonRoot=true, with a container overriding that.

The error is triggered by this line in the policy code: https://github.com/open-policy-agent/gatekeeper-library/blob/master/src/pod-security-policy/users/src.rego#L39

It will run the functions on both L84 and L90 and obtain different results from them (false and true, respectively).

The root of the problem is probably that L91 is not a good enough guard. The function on L90 triggers not only when no container-level field exists, but also when the container-level field exists and is set to false.

runAsNonRoot SecurityContext

The PSP users rule checks runAsUser, but ignores the runAsNonRoot SecurityContext.
Is there a specific reason for this?

I'd expect that if a user provides runAsNonRoot and leaves the runAsUser out, the review object is not denied.
In that case the kubelet will ensure that the image doesn't run as root.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-users-disallowed
  labels:
    app: nginx-users
spec:
  securityContext:
    runAsNonRoot: true
  containers:
    - name: nginx
      image: nginx

kubectl describe pod -l app=nginx-users

Normal   Pulling                 2s (x2 over 10s)    kubelet            Pulling image "nginx"
Warning  Failed                  2s                  kubelet            Error: container has runAsNonRoot and image will run as root
Normal   Pulled                  0s (x2 over 2s)     kubelet            Successfully pulled image "nginx"

I changed the rule a bit so when you set MustRunAsNonRoot it also allows setting the securityContext/runAsNonRoot:

# RunAsUser (separate due to "MustRunAsNonRoot")
get_user_violation(params, container) = msg {
  rule := params.rule
  provided_user := get_field_value("runAsUser", container, input.review)
  not accept_users(rule, provided_user)
  msg := sprintf("Container %v is attempting to run as disallowed user %v. Allowed runAsUser: %v", [container.name, provided_user, params])
}

get_user_violation(params, container) = msg {
  not get_field_value("runAsUser", container, input.review)
  not get_field_value("runAsNonRoot", container, input.review)
  params.rule != "RunAsAny"
  params.rule != "runAsNonRoot"
  msg := sprintf("Container %v is attempting to run without a required securityContext/%v", [container.name, params.rule])
}

get_user_violation(params, container) = msg {
  params.rule = "runAsNonRoot"
  not get_field_value("runAsUser", container, input.review)
  not get_field_value("runAsNonRoot", container, input.review)
  msg := sprintf("Container %v is attempting to run without a required securityContext/runAsNonRoot or securityContext/runAsUser != 0", [container.name])
}

Am I missing something, or should I do a PR with this change (and added tests)?

Create allowed_example.yaml for every constraint

In the demo directory of the gatekeeper repo we have examples of good (allowed by constraint) resources as well as bad (not allowed by constraint) ones. We should have these for the constraints in this repo as well. Besides serving as additional illustration, they can serve as the basis for integration testing.

defaultAllowPrivilegeEscalation Support

Describe the solution you'd like
I believe this would require a mutating webhook, but I'd like to be able to toggle a flag (on a Constraint?) similar to the defaultAllowPrivilegeEscalation field in the PodSecurityPolicy resource.

After reading the No New Privileges design doc, I think I'm beginning to understand why the existing Gatekeeper PSP library policy allow-privilege-escalation needs to check every initContainer and container: by default, if allowPrivilegeEscalation isn't explicitly set to false, non-root containers can escalate privileges.

Provided uniqueingresshost constrainttemplate/constraint doesn't work

I have tried using the uniqueingresshost library (ref: https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/general/uniqueingresshost) on a GCP GKE cluster where I have Anthos Config Management and Policy Controller installed at the latest version (with Gatekeeper installed).

I see that the provided uniqueingresshost library (constrainttemplate/constraint) doesn't work as expected: Ingress hosts are created without being disallowed or restricted, instead of only a unique ingress host being allowed.

I have tried using the same approach to create a VirtualService constrainttemplate & constraint that checks whether a resource already uses the host, and restricts creating a VirtualService when one already exists with the same host name.

The constrainttemplate I used:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8svirtualserviceuniquehostmatchnew
spec:
  crd:
    spec:
      names:
        kind: K8sVirtualServiceUniqueHostMatchNew
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8svirtualserviceuniquehostmatchnew

        identical(obj, review) {
          obj.metadata.namespace == review.object.metadata.namespace
          obj.metadata.name == review.object.metadata.name
        }

        violation[{"msg": msg}] {
          input.review.kind.kind == "VirtualService"
          re_match("^(extensions|networking.k8s.io)$", input.review.kind.group)
          host := input.review.object.spec.hosts[_]
          other := data.inventory.namespace[ns][otherapiversion]["VirtualService"][name].spec.hosts[_]
          re_match("^(extensions|networking.k8s.io)/.+$", otherapiversion)
          other.spec.hosts[_] == host
          not other == host
          not identical(other, input.review)
          msg := sprintf("ingress host conflicts with an existing ingress <%v>", [host])
        }

I am trying to create a VirtualService constrainttemplate/constraint that allows creating a VirtualService with a unique host in a namespace and rejects those that use an already-existing host.

Gatekeeper version: Image: gcr.io/config-management-release/gatekeeper:anthos1.6.2-6dd505e.g0
Kubernetes version: v1.20.4

Create basic policy rules

  • Minimum replica count enforcement
  • Whitelisted/blacklisted registries
  • Whitelisted/blacklisted containers
  • Whitelisted/blacklisted nodes
  • Maximum total resource quota per deployment
  • Maximum total number of apps per user / per namespace
  • Required annotations or labels
  • Validate container image (Twistlock style?) on create?
  • Reject deployments that do not have a specific AppArmor profile: https://kubernetes.io/docs/tutorials/clusters/apparmor/

Default behavior for invalid constraint parameters - Allow or deny?

Currently, when invalid inputs are given in constraints, no violations are reported, and the user doesn't have a clue that the input is invalid. What should the default behavior be when constraint parameters are invalid? Should resource creation in the cluster be allowed or denied?

Let's consider the example below.

Policy description: Ensure container resource limits do not exceed specified limits
Constraint template: https://github.com/open-policy-agent/gatekeeper/blob/master/library/general/containerlimits/template.yaml
Constraint: https://github.com/open-policy-agent/gatekeeper/blob/master/library/general/containerlimits/constraint.yaml

The above example constraint takes a CPU and memory limit as parameters. Say the user inputs invalid values while creating the constraint: the constraint gets created fine, and no violations are reported by the policy.
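
One partial mitigation (assuming a Gatekeeper version that supports it) is to declare an OpenAPI schema for the parameters in the ConstraintTemplate, so the API server rejects obviously malformed constraints. The relevant fragment would look roughly like:

spec:
  crd:
    spec:
      names:
        kind: K8sContainerLimits
      validation:
        # Schema for the constraint's spec.parameters.
        openAPIV3Schema:
          type: object
          properties:
            cpu:
              type: string
            memory:
              type: string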

Build script to insert src.rego into template.yaml files

Per the tooling proposal doc, we should set up a build script to automatically insert/update the Rego source in template.yaml files from the original src.rego. (We will need to update the README.md with instructions to run this script after any Rego code updates.)

The script should ideally also insert a version annotation (e.g. the SHA256 hash of the Rego code) which would let users know if the template they are using is current with the repo.
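
A hypothetical sketch of the version stamping, with an annotation key invented for illustration:

# Hash the Rego source so users can tell whether their installed
# template matches the repo, then stamp it onto the template.
hash="$(sha256sum src.rego | cut -d' ' -f1)"
HASH="$hash" yq eval -i '.metadata.annotations."gatekeeper-library/rego-sha256" = strenv(HASH)' template.yaml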

Library structure

Per discussion in open-policy-agent/gatekeeper#205 and the most recent weekly meeting, I'm opening an issue to decide how the library should be structured and track work for it.

Scope, goals, requirements

One of the main goals of Gatekeeper is to provide a reusable library that includes common policies for Kubernetes. We expect the library to grow over time and be community-owned. We've identified a few relevant personas:

  • Admins who want to install the Gatekeeper library (or a subset of it) on their cluster and begin using it. It should be easy for Admins to install the template library on the cluster and get started.

  • Developers who want to contribute to the Gatekeeper library by implementing new policies, improving or fixing existing policies, etc. It should be easy for developers to author, test, and debug their policies and contribute them to the upstream library.

  • Tinkerers who want to try out Gatekeeper for the first time and kick the tires. It should be easy for these people to deploy Gatekeeper, instantiate a few templates, exercise them, and clean up.

We have also identified a few goals & requirements related to these personas.

  1. Policies should be organized into categories to facilitate browsing. For example, instead of having a single directory containing hundreds of policies, the library could be broken down into sub-directories for different categories like containers, images, networking, etc.

  2. Installable templates should be located under a separate root directory. This allows admins to easily install templates into their cluster (e.g., kubectl apply -f <url>)

  3. Policies contributed to the library should be accompanied with tests and at least one example constraint and resource for kicking the tires.

  4. Since the installable templates will be located separately, there should be some basic automation that generates the installable templates from the source templates.

Proposal

The existing PSP library in this repo meets (1) and (3) above but lacks (2) and (4). One option would be to:

  • Replace the spec.targets[].rego field with placeholder text and put a build step in place to template the templates (ha!).
  • Take the output of the build step (which would be a set of installable templates) and dump it into a separate root directory in this repository (e.g., templates/)

Questions

  • @maxsmythe do you think we should have unique names on the source files? One nice thing about the current naming convention is that it creates a simple abstraction for people contributing to the library.

is it possible to check fields directly in Pod spec?

Hello,
I'm a bit confused. Let's say I have fields like the ones below in a Pod definition:

...
spec:
  automountServiceAccountToken: true
  containers:
...

I'm wondering if it's possible to create a policy that can detect fields like that.
I started with something simple:

---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sSATokenContainer
metadata:
  name: disallow-sa-token
spec:
  match:
    kinds:
      - apiGroups: ["v1"]
        kinds: ["Pod"]
---
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8ssatokencontainer
spec:
  crd:
    spec:
      names:
        kind: K8sSATokenContainer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8ssatokencontainer

        violation[{"msg": msg}] {
          input.review.kind.kind == "Pod"
          input.review.object.spec.automountServiceAccountToken == true
          msg := "Automount Service Account Token is not allowed"
        }

The Pod can still be created; am I missing something? Thanks in advance for any suggestions.

Modifying containerlimits constraint to limit request/limit ratio

We are working on limiting the allowed ratio between requests and limits on our cluster, to prevent rapid expansion of pods, which we've seen can wreak havoc on clusters.

I was wondering if you would like me to open a PR and add that to the library, once the constraint is completed.

WDYT?

Add library policies to e2e

Our e2e tests should test the policies in /library, especially for new constraints and constraint templates.

Reorganize file structure to prepare for tooling implementation

Per the tooling proposal doc, I'd like to restructure the repo in this format to set up a consistent format for testing, separate the source files from the library files, and make the library easily browsable. (Note we'll need to update the README.)

src/
- <policy group>/
  - <template name>/
    - src.rego
    - src_test.rego

library/
- <policy group>/
  - <template name>/
    - template.yaml
    - README.md
    - samples/
      - <constraint 1>
        - constraint.yaml
        - allowed_example.yaml
        - disallowed_example.yaml
      - <constraint 2>
        - constraint.yaml
        - allowed_example.yaml
        - disallowed_example.yaml

docs/
- index.md

Remove annotation "kubernetes.io/ingress.allow-http" check in k8shttpsonly policy

AFAIK this annotation is only used by ingress-gce, which makes the policy unusable with other ingress controllers.
For example, the NGINX ingress controller does not reference this annotation, which means this policy will always be violated despite a correctly configured TLS Ingress object.

https://github.com/open-policy-agent/gatekeeper-library/blob/master/library/general/httpsonly/template.yaml
kubernetes/ingress-nginx#6590 (comment)

Improvement for SELinux Policy

Hello,

I am integrating some of these policies into the project I am working on, and the Rego policy for K8sPSPSELinuxV2 is not actually adhering to the specification correctly.
Here is the troublemaker:
https://github.com/open-policy-agent/gatekeeper-library/blob/master/library/pod-security-policy/selinux/template.yaml

My findings so far are about the following sections.

The verification of the seLinux rule field is not implemented:

https://kubernetes.io/docs/concepts/policy/pod-security-policy/#selinux

SELinux
MustRunAs - Requires seLinuxOptions to be configured. Uses seLinuxOptions as the default. Validates against seLinuxOptions.
RunAsAny - No default provided. Allows any seLinuxOptions to be specified.

The verification is currently performed only if a seLinuxOptions field is set. It should validate the options only when the rule is MustRunAs:

  seLinux:
    rule: MustRunAs

It should take the rule into account and skip any profile validation if the rule is:

  seLinux:
    rule: RunAsAny

I can try adjusting the resources and rego file for this policy and submit a PR.

Create documentation aimed at "constraint admins"

We need to produce docs aimed at Kubernetes admins that primarily work with existing templates. For example, we need docs that cover:

  • How admins can discover available templates
  • How admins can find new templates
  • How admins can customize constraints via parameters
  • How admins can customize resource matching
  • How admins can debug constraints when they're not working
  • How admins can check audit status on constraints

Policy to enforce HPA

We need a policy that enforces an HPA for every Deployment, running in dryrun mode so that it can alert the team rather than block deployments.

Image Content Signature Check Policy

This topic is really hot these days, so if such support could be added to the library, it would be great!

There is a great blog post available in this area, from Maximilian Siegert; you can refer to the blog post at this link.

The main Rego file for the image content signature check is just a few lines of code:
https://gist.github.com/Igelchen1/e60a00f0da8f3b040b0bcf0fb1ca16da#file-main-rego

That sample uses Notary, but a new project called cosign is now available, so we could enhance the policy to support cosign too.

Wdyt?

How to manipulate GK policies on the basis of cluster roles?

Team,

In Kubernetes, we can take advantage of cluster roles and role bindings to bifurcate pod security policies according to the privileges of admin and non-admin users.

But in Azure Kubernetes, if we apply any Azure policy, e.g. disallowing privileged pods, it restricts all users, including admins.

How do we control this restriction in Azure policies? PSPs in AKS are deprecated, and it is mandatory to use Azure policies from now on.

Earlier I asked this query in this repository but was redirected here.

Can anyone help, please?

Thank you

Question: Gatekeeper PSP vs Native PSP

Is Gatekeeper a better way to deploy pod security policies compared to the Kubernetes-native method of deploying PSPs? Has anyone used this in production successfully? We'd like to get feedback from anyone who has been using Gatekeeper PSPs about their experience, issues, concerns, etc.

change to privileged: true, otherwise the disallowed pod still deploys

I propose changing

to privileged: true; otherwise, the constraint still allows privileged pods to be created.

# install gatekeeper agent
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.2/deploy/gatekeeper.yaml
# install constraint template
kubectl apply -f https://github.com/open-policy-agent/gatekeeper-library/raw/master/library/pod-security-policy/privileged-containers/template.yaml
# installing the constraint
cat <<EOF | kubectl apply -f -
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: ns-hil-noroot
spec:
  match:
    kinds:
      - apiGroups: [ "" ]
        kinds: [ "Pod" ]
    namespaces: [ "hil" ]
EOF

# creating the disallowed pod succeeded
kubectl -n hil apply -f https://github.com/open-policy-agent/gatekeeper-library/raw/master/library/pod-security-policy/allow-privilege-escalation/samples/psp-allow-privilege-escalation-container/example_disallowed.yaml
pod/nginx-privilege-escalation-disallowed created

gcloud container clusters describe asm-1 --zone=us-central1-f
addonsConfig:
  horizontalPodAutoscaling: {}
  httpLoadBalancing: {}
  kubernetesDashboard:
    disabled: true
  networkPolicyConfig:
    disabled: true
autoscaling: {}
clusterIpv4Cidr: 10.108.0.0/14
createTime: '2021-01-28T01:02:18+00:00'
currentMasterVersion: 1.16.15-gke.6000
currentNodeCount: 4
currentNodeVersion: 1.16.15-gke.6000
databaseEncryption:
  state: DECRYPTED
defaultMaxPodsConstraint:
  maxPodsPerNode: '110'

PodDisruptionBudget check

An issue I've hit a few times that would be really nice to have a library policy for:
PodDisruptionBudgets with minAvailable: 1 combined with a Deployment with replicas: 1.

This really messes with node drains. It's a valid config for some clusters, but in a multi-tenant environment, not so much.

fix psp templates

I found some errors in the existing templates and decided to help contribute; however, I am unable to make a PR.

read-only-root-filesystem
template.yaml

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8spspreadonlyrootfilesystem
spec:
  crd:
    spec:
      names:
        kind: K8sPSPReadOnlyRootFilesystem
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8spspreadonlyrootfilesystem

        violation[{"msg": msg, "details": {}}] {
            c := input_containers[_]
            c.securityContext.readOnlyRootFilesystem == false
            msg := sprintf("Only read-only root filesystem container is allowed: %v, securityContext: %v", [c.name, c.securityContext])
        }
        input_containers[c] {
            c := input.review.object.spec.containers[_]
        }
        input_containers[c] {
            c := input.review.object.spec.initContainers[_]
        }

allow-privilege-escalation
template.yaml

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8spspallowprivilegeescalationcontainer
spec:
  crd:
    spec:
      names:
        kind: K8sPSPAllowPrivilegeEscalationContainer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8spspallowprivilegeescalationcontainer

        violation[{"msg": msg, "details": {}}] {
            c := input_containers[_]
            c.securityContext.allowPrivilegeEscalation
            msg := sprintf("Privileged escalation container is not allowed: %v, securityContext: %v", [c.name, c.securityContext])
        }
        input_containers[c] {
            c := input.review.object.spec.containers[_]
        }
        input_containers[c] {
            c := input.review.object.spec.initContainers[_]
        }

Releasing of Gatekeeper-Library

It would be nice if the gatekeeper-library had releases, e.g. semantic releases.

I'm about to implement Gatekeeper and a set of libraries on a Kubernetes cluster, using a GitOps approach with ArgoCD.
I would like to be able to install the gatekeeper-library directly from the GitHub repo, but if I point my ArgoCD at the GitHub repo to install the desired libraries, I will automatically get every update of the library, which could damage the integrity of the services running inside the cluster.
The alternative is to check out the current gatekeeper-library, copy the files manually into the ArgoCD app, and check back from time to time to stay up to date and do the manual checkout again.

I'm not sure how the releasing could be achieved; maybe release branches could be a way.
A single Kubernetes YAML, like the installation YAML for Gatekeeper, would not make it possible to choose which libraries should be installed and which not.

Maybe someone has a better idea.

Update Config to include the latest Kubernetes 1.19 API group

Update the data replication Config to include the latest Kubernetes 1.19 API group version for Ingress.

apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: "extensions"
        version: "v1beta1"
        kind: "Ingress"
      - group: "networking.k8s.io"
        version: "v1beta1"
        kind: "Ingress"
     - group: "networking.k8s.io"
        version: "v1"
        kind: "Ingress"

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#ingress-v1-networking-k8s-io
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/minimal-ingress.yaml

Allow reuse of rego snippets

We have this and some other chunks copied into 5+ policies. Any idea how to clean this up / make reuse work (other than adding a new layer of templating)?
Some shared libraries, or defining and then calling out to Go libraries, would be great:

        # pods
        pod_template() = pod {
          input.review.object.kind == "Pod"
          pod := input.review.object
        }
        # statefulsets, deployment, daemonsets, jobs
        pod_template() = pod {
          input.review.object.spec.template.spec.containers[0]
          pod := input.review.object.spec.template
        }
        # cronjobs
        pod_template() = pod {
          input.review.object.spec.jobTemplate.spec.template.spec.containers[0]
          pod := input.review.object.spec.jobTemplate.spec.template
        }
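
For what it's worth, newer Gatekeeper versions support shared Rego through spec.targets[].libs (library packages must live under the lib prefix). A sketch of how the snippet above might be factored out, with hypothetical template and rule names:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sexamplesharedlib
spec:
  crd:
    spec:
      names:
        kind: K8sExampleSharedLib
  targets:
    - target: admission.k8s.gatekeeper.sh
      libs:
        - |
          package lib.pods

          # Pods carry the pod fields directly.
          pod_template(obj) = pod {
            obj.kind == "Pod"
            pod := obj
          }
          # StatefulSets, Deployments, DaemonSets, Jobs.
          pod_template(obj) = pod {
            pod := obj.spec.template
          }
          # CronJobs.
          pod_template(obj) = pod {
            pod := obj.spec.jobTemplate.spec.template
          }
      rego: |
        package k8sexamplesharedlib

        import data.lib.pods

        violation[{"msg": msg}] {
          pod := pods.pod_template(input.review.object)
          not pod.spec.securityContext
          msg := "pod template does not set a securityContext"
        }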
