## Requirements

Below is a list of possible requirements for namespaces. Those that are
checked off have been fully accepted. Unchecked requirements are still under
consideration.
## Considerations

We'll need to evaluate this proposal with input from other teams. The
following quorum is proposed before proceeding:
## Resources

While we have opened up the discussion of namespaces, we need to discuss
naming in general. To give a feel for the result, the following description
pretends that namespaces already exist. To open this discussion up, we must
understand the kinds of resources:
- Cluster: A cluster is controlled by a set of managers. Most resources
  will be scoped to a cluster. Under the current model, we define this as a
  quorum set.
- Namespace: A cluster is divided into several namespaces.
- Node: A node resides in a cluster. From a user perspective, there isn't
much access other than reporting their existence. We may want to route a
user to a node for certain requests. We may want to hook the node into the
DNS system.
- Job: A job belongs to a namespace within the cluster. A job may have
multiple tasks. The job itself may have a service endpoint associated with
it, accessible over DNS, such as with a service job.
- Task: A task belongs to a job and a node, when assigned.
- Network: A network belongs to a namespace.
- Volume: A volume belongs to a namespace.
## Rules about Naming

All resources in the cluster system use the same naming conventions.
All names should be valid DNS subdomains, compliant with RFC 1035. This
allows any resource to be expressed over DNS. It also ensures that we have a
well-known, restricted, and reliable character space, compatible with
existing tools.
For reference, names must comply with the following grammar:

```
<domain> ::= <subdomain> | " "
<subdomain> ::= <label> | <subdomain> "." <label>
<label> ::= <letter> [ [ <ldh-str> ] <let-dig> ]
<ldh-str> ::= <let-dig-hyp> | <let-dig-hyp> <ldh-str>
<let-dig-hyp> ::= <let-dig> | "-"
<let-dig> ::= <letter> | <digit>
<letter> ::= any one of the 52 alphabetic characters A through Z in
             upper case and a through z in lower case
<digit> ::= any one of the ten digits 0 through 9
```

Each label must be less than 64 characters and the total length must be less
than 256.
Names are case-insensitive, but by convention are stored and reported in
lowercase.
Tools interacting with names should support conversion to and from punycode.
This can be supported via `golang.org/x/net/idna`.
## Structure

For each kind of resource, the name must be unique within the namespace. This
has the excellent property that all fully qualified names are unique within
the cluster. This means that, by default, we have a way to reference every
other resource.
| Resource  | Component     | Structure                               | Examples                                                             |
|-----------|---------------|-----------------------------------------|----------------------------------------------------------------------|
| Cluster   | `<cluster>`   | `<cluster>`                             | `local`, `cluster0`                                                  |
| Namespace | `<namespace>` | `<namespace>.<cluster>`                 | `production.cluster0`, `development.local`, `xn--7o8h` (🐳), `system` |
| Node      | `<node>`      | `<node>.<cluster>`                      | `node0.local`                                                        |
| Job       | `<job>`       | `<job>.<namespace>.<cluster>`           | `job0.production.cluster0`                                           |
| Task      | `<task>`      | `<task>.<job>.<namespace>.<cluster>`    | `task0.job0.production.cluster0`                                     |
| Volume    | `<volume>`    | `<volume>.<namespace>.<cluster>`        | `postgres.production.cluster0`                                       |
| Network   | `<network>`   | `<network>.<namespace>.<cluster>`       | `frontend.production.cluster0`                                       |
At the base, we have the `<cluster>`. The cluster should refer to a specific
cluster and can be named by configuration. Users should all share a common
configuration, but doing so is not necessary to interoperate.
While names are generated from structure, a resource name may have one or
more labels, so names cannot be parsed to infer the source structure. For
example, a node may be named `a.b`. When qualified, it may be
`a.b.default.local`. If we don't know this is a node name, we may try to
infer its structure, but it is impossible to tell whether this is a resource
named `a` on node `b` or a node named `a.b`.
## Namespaces
A namespace is an area where resources can reference each other without
qualification.
Every operation has a default namespace from which it is conducted. Any
objects created in that context become a member of that namespace.
By default, we will have the following namespaces:
| Namespace | Description                          |
|-----------|--------------------------------------|
| `default` | Default namespace for all resources  |
| `system`  | System namespace for cluster jobs    |
By default, all resources are created under `default`, unless the user
modifies their configuration. The `system` namespace is used to run cluster
tasks, such as plugins and data distribution planes. Resources in the
`system` namespace are only shown in a special mode.
## References
For most service declarations, we reference resources by name. Typically,
this name is evaluated within a namespace, as described above. To allow
access to objects in disparate namespaces, we define a searchspace as part of
an operation context. When referencing another object, the reference only
needs to be long enough to resolve in the common parent. Two objects in the
same cluster but different namespaces need only include the namespace in the
reference, not the cluster name.
A searchspace consists of one or more namespaces, in precedence order. If a
resource is not resolved with an unqualified name, each available namespace is
tried until a match is found.
This can extend to involve resource sharing between two users. Let's say two
developers are developing an application in their own namespaces, `lucy` and
`steve`.

Let's say we have an identical service definition `myapp` which can be run
independently:
```yaml
service:
  myapp:
    instances: 4
    requires:
      redis # leave this syntax for another discussion!
    container:
      # ...
```
For Lucy, the fully qualified service name is `myapp.lucy.local`, and Steve
has `myapp.steve.local`. However, when running the service, the requirement
of `redis` is not fulfilled: it is absent from the definition, so running the
service fails. Fortunately, the operations team has made a development
instance available at `redis.development.cluster0`. By default, neither Lucy
nor Steve can see this resource.
A few things can happen here to resolve the issue. They can both edit the
configuration file to add `.development` to the `redis` reference. While this
does work, it makes the definition non-portable.
A better resolution is to have both developers add `development` to the
searchspace for the operation context. For `steve`, the unqualified name
would be expanded to the following fully qualified names:

- `redis.steve.local`
- `redis.development.cluster0`

Lucy does the same and gets the following qualified names:

- `redis.lucy.local`
- `redis.development.cluster0`

Note that both developers did the same thing and got the same result, yet
have different application environments.
With this, we get a very clear order in which resources are resolved. Each
user can set their default namespace and searchspace and control the order in
which resources are resolved. Once this is set up correctly, only unqualified
names need be used in practice for most API operations.

The main complexity here is that all names from user input need to be
resolved at API request time, associating the resolution with an operation
context. Subsequently, names from user input are written out fully qualified
during the API call, to capture the current searchspace.
## Clusters

We slightly glossed over a point above. Where did `cluster0` come from? This
is simply the domain name of the cluster. In the example above, both
developers have a cluster on their machines, known as `local`. This just has
to be one or more endpoints that are available for cluster submission.
Just as with the searchspace, we can define a set of clusters that one might
want to use from an environment. These clusters combine with the searchspace
to create names. Let's say we have the following list of cluster domains:

- `local`
- `cluster0`
- `cluster1`
We can combine this using a cross product with our searchspace
(`[steve, development]`) to get all of the possible references for a resource
`redis` from the point of view of the user:

- `redis.steve.local`
- `redis.development.local`
- `redis.development.cluster0`
- `redis.development.cluster1`
Let's say Steve needs help with his application. Lucy tries to reference it
with `myapp.steve.local`, but that won't work, since `.local` is different
between the two machines. To deal with this, we can define clusters with
names. A possible configuration on Lucy's machine might be the following:
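A hypothetical sketch of such a configuration, mapping a cluster name to an
endpoint; the YAML shape and the endpoint address are both invented for
illustration:

```yaml
clusters:
  # "steves-mbp" is the name Lucy assigns to Steve's cluster;
  # the endpoint address is a placeholder.
  steves-mbp:
    endpoint: steves-mbp.example.com:4242
```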
Now, she can reference his app with `myapp.steve.steves-mbp`, or just
`myapp.steve` if she adds `steve` to the searchspace.
## Access Control

Namespaces provide a tool for access control. To build this framework, we
say that every operation has a context with a namespace. Under normal
operation, all creations, updates, and deletions happen within the context's
namespace.
Access control operations simply operate within this same framework. We can
define which namespaces can access other namespaces.
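One hypothetical shape such rules could take, granting one namespace access
to another; every name and field below is invented for illustration:

```yaml
access:
  # Hypothetical rule: operations in lucy's namespace may resolve and
  # read resources in the development namespace.
  - from: lucy
    to: development
    allow: [read, resolve]
```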
TODO: Work out some examples here. This actually works well, but we need
examples.
## Alternative Models

Some other possible models under consideration:

- Similar to the above, but resources cannot reference across namespaces. Slightly inflexible for large teams that want to partition a cluster arbitrarily.
- Slash-based model. Not DNS compatible, but somewhat consistent with current Docker projects.
## Vanity
Naming is typically done out of vanity. While this specification is fairly
restrictive in naming, since we intend to use naming as an organizational
tool, we may find it necessary to introduce the concept of a vanity name.
Put whatever you like in this name.
## Road Map
@mikegoezler @aluzzardi @amitshukla @icecrime