
federation-dev's People

Contributors

ch-stark, codificat, cooktheryan, coverprice, cwilkers, halfsquatch, krain-arnold, mvazquezc, sabre1041, scollier


federation-dev's Issues

RFE: please split long lines to multiline for readability

Hi,

There are some really long lines in these labs. They would be more readable if they were split across multiple lines, e.g.:

argocd app create --project default --name cluster1-kustomize --repo http://$(oc --context cluster1 -n gogs get route gogs -o jsonpath='{.spec.host}')/student/federation-dev.git --path labs/lab-5-assets/base --dest-server $(argocd cluster list | grep cluster1 | awk '{print $1}')  --dest-namespace dev-web-site  --revision master --nameprefix dev- --sync-policy automated

->

argocd app create --project default --name cluster1-kustomize --repo \
  http://$(oc --context cluster1 -n gogs get route gogs  \
  -o jsonpath='{.spec.host}')/student/federation-dev.git \
  --path labs/lab-5-assets/base \
  --dest-server $(argocd cluster list | grep cluster1 | awk '{print $1}') \
  --dest-namespace dev-web-site  --revision master --nameprefix dev- --sync-policy automated

lab 6 - a bit more explanation in step 7.2

Can we explain this a bit more? Why does it say no server is available?

curl -k https://$(oc --context cluster1 -n haproxy-lb get route haproxy-lb -o jsonpath='{.status.ingress[*].host}')

<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
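For context: that 503 is HAProxy itself answering because none of its backends were reachable at that moment. A quick check worth adding to the lab (a sketch; the application namespace below is a placeholder, not taken from the lab text) is to confirm the load balancer pod is up and that the backing deployments are available on each cluster before curling the route:

oc --context cluster1 -n haproxy-lb get pods
for cluster in cluster1 cluster2 cluster3; do
  echo "*** $cluster ***"
  oc get deployment --context $cluster -n <application-namespace>   # replace with the lab's application namespace
done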

remove or change section in lab 1

From what the lab admins told us, students should not be downloading anything to their workstation. So, the "Download the example code" section needs to be changed to use an ssh host, or deleted.

lab 2 change

The "Create the two OpenShift clusters" section needs to be removed, because in the pre-reqs lab we tell students that we have already deployed 3 clusters for them.

Lab 7 - Placement Policies

In lab 7, I cycled through each cluster to ensure we could get to pacman. I placed it on each cluster and verified functionality. Looked good.

When I got to the "Placement Policies" section, it wasn't starting the pods:

$ oc --context=cluster1 -n pacman patch federateddeployment pacman --type=merge -p '{"spec":{"placement":{"clusters": [{"name":"cluster1"}]}}}'
federateddeployment.types.kubefed.k8s.io/pacman patched

Output:

$ for cluster in cluster1 cluster2 cluster3;do echo "*** $cluster ***"; oc get deployment --context $cluster -n pacman;done
*** cluster1 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   0/0     0            0           4s
*** cluster2 ***
No resources found.
*** cluster3 ***
No resources found.

More output:

$ oc project pacman
Now using project "pacman" on server "https://api.cluster-cb5e.cb5e.sandbox106.opentlc.com:6443".

$ oc get pods
No resources found.

Then, when I tried to add pacman back to all clusters, I saw similar behavior:

$ oc --context=cluster1 -n pacman patch federateddeployment pacman --type=merge -p '{"spec":{"placement":{"clusters": [{"name":"cluster1"}, {"name":"cluster2"}, {"name":"cluster3"}]}}}'
federateddeployment.types.kubefed.k8s.io/pacman patched

Output:

$ for cluster in cluster1 cluster2 cluster3;do echo "*** $cluster ***"; oc get deployment --context $cluster -n pacman;done
*** cluster1 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   0/0     0            0           4m44s
*** cluster2 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   0/0     0            0           109s
*** cluster3 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           111s
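For whoever picks this up, a diagnostic sketch (not from the lab text): 0/0 READY means the Deployment was propagated but scaled to zero, so it is worth checking whether a lingering per-cluster replica override is still set on the FederatedDeployment alongside the new placement:

oc --context=cluster1 -n pacman get federateddeployment pacman -o jsonpath='{.spec.placement}'; echo
oc --context=cluster1 -n pacman get federateddeployment pacman -o jsonpath='{.spec.overrides}'; echo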

error on lab 5

I was running through lab 5, and when I got to the point of checking the replica set, I hit this error:

$ oc --context=cluster1 -n mongo exec $MONGO_POD -- bash -c 'mongo --norc --quiet --username=admin --password=$MONGODB_ADMIN_PASSWORD --host localhost admin --tls --tlsCAFile /opt/mongo-ssl/ca.pem --eval "rs.status()"'
2019-07-26T19:08:04.697+0000 E NETWORK  [js] The server certificate does not match the host name. Hostname: localhost does not match CN: kubernetes
2019-07-26T19:08:04.698+0000 E QUERY    [js] Error: couldn't connect to server localhost:27017, connection attempt failed: SSLHandshakeFailed: The server certificate does not match the host name. Hostname: localhost does not match CN: kubernetes :
connect@src/mongo/shell/mongo.js:322:13
@(connect):1:21
2019-07-26T19:08:04.700+0000 F -        [main] exception: connect failed
2019-07-26T19:08:04.700+0000 E -        [main] exiting with code 1
command terminated with exit code 1

I had hit that before, so I deleted the mongo namespace on all clusters and restarted the lab from scratch, copying and pasting line by line.

On investigating, I see that on cluster1 the pod is crashing with the following events:

  Normal   Scheduled               5m46s                  default-scheduler                    Successfully assigned mongo/mongo-7ccb9bccdc-lc7fg to ip-10-0-133-6.ec2.internal
  Warning  FailedAttachVolume      5m42s (x4 over 5m46s)  attachdetach-controller              AttachVolume.Attach failed for volume "pvc-7ad85b5c-afd8-11e9-aca5-0e3aca8efee4" : "Error attaching EBS volume \"vol-0a2d3ff1834484b1f\"" to instance "i-0ad7d72c504f1e409" since volume is in "creating" state
  Normal   SuccessfulAttachVolume  5m34s                  attachdetach-controller              AttachVolume.Attach succeeded for volume "pvc-7ad85b5c-afd8-11e9-aca5-0e3aca8efee4"
  Normal   Started                 3m44s (x4 over 5m21s)  kubelet, ip-10-0-133-6.ec2.internal  Started container mongo
  Normal   Pulling                 2m52s (x5 over 5m22s)  kubelet, ip-10-0-133-6.ec2.internal  Pulling image "quay.io/mavazque/mongodb:autors"
  Normal   Pulled                  2m51s (x5 over 5m21s)  kubelet, ip-10-0-133-6.ec2.internal  Successfully pulled image "quay.io/mavazque/mongodb:autors"
  Normal   Created                 2m51s (x5 over 5m21s)  kubelet, ip-10-0-133-6.ec2.internal  Created container mongo
  Warning  BackOff                 19s (x17 over 4m37s)   kubelet, ip-10-0-133-6.ec2.internal  Back-off restarting failed container

If I delete the pod, it comes up fine, but the server certificate / SSLHandshakeFailed error noted above is the same.
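A diagnostic sketch that would make this easier to pin down (the server certificate path below is an assumption; only ca.pem appears in the lab): dump the certificate the pod is actually serving and compare its CN and SANs against the SANS list generated in the lab, since the error reports CN "kubernetes" rather than a mongo hostname:

oc --context=cluster1 -n mongo exec $MONGO_POD -- bash -c 'openssl x509 -in /opt/mongo-ssl/mongodb.pem -noout -text | grep -E "Subject:|DNS:"'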

Changes to 5.md after agnosticd

  • The cfssl tools will already be downloaded; remove the download instructions.
  • ROUTE_NAME instructions should be updated based on the information provided by the provisioning email.

Fix required in lab 5.md

SANS="localhost,localhost.localdomain,127.0.0.1,${ROUTE_CLUSTER1},${ROUTE_CLUSTER2},${ROUTE_CLUSTER3},${SERVICE_NAME},${SERVICE_NAME}.${NAMESPACE},${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local"

${NAMESPACE} should be replaced with mongo.
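That is, the corrected line would read:

SANS="localhost,localhost.localdomain,127.0.0.1,${ROUTE_CLUSTER1},${ROUTE_CLUSTER2},${ROUTE_CLUSTER3},${SERVICE_NAME},${SERVICE_NAME}.mongo,${SERVICE_NAME}.mongo.svc.cluster.local"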

lab 4 - need to ensure the scc is created before the pods are launched

If not, the pods will end up in CrashLoopBackOff (a sketch of the ordering fix follows the output below):

------------ cluster1 pods ------------
NAME                               READY   STATUS             RESTARTS   AGE
test-deployment-7c55bb85c7-bqnfm   0/1     CrashLoopBackOff   6          8m46s
test-deployment-7c55bb85c7-rztxg   0/1     CrashLoopBackOff   6          8m46s
test-deployment-7c55bb85c7-zjn76   0/1     CrashLoopBackOff   6          8m46s
------------ cluster2 pods ------------
NAME                               READY   STATUS             RESTARTS   AGE
test-deployment-5856764cdd-4nk9b   0/1     CrashLoopBackOff   6          8m46s
test-deployment-5856764cdd-clm2q   0/1     CrashLoopBackOff   6          8m46s
test-deployment-5856764cdd-tbh77   0/1     CrashLoopBackOff   6          8m46s
test-deployment-5856764cdd-wwz79   0/1     CrashLoopBackOff   6          8m46s
test-deployment-5856764cdd-zzmk8   0/1     CrashLoopBackOff   6          8m46s
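A hedged sketch of the ordering fix (the SCC name, service account, and namespace below are assumptions, not copied from the lab): grant the SCC on every cluster first, then apply the lab's federated resources:

# assumption: the lab's pods run as the test-serviceaccount service account in test-namespace
for cluster in cluster1 cluster2 cluster3; do
  oc --context ${cluster} adm policy add-scc-to-user anyuid -z test-serviceaccount -n test-namespace
done
# only after this, apply the lab's federated deployment so the pods are admitted with the SCC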

Changes to 2.md after agnosticd

  • An email with all three clusters' information will be sent to students; instead of using the kubeconfigs, we could instruct them to log in to the API using kubeadmin and rename the context for each cluster (a sketch is below).
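A sketch of what those instructions could look like (the URL, password, and context names below are placeholders, not real values):

oc login -u kubeadmin -p <password-from-email> https://api.cluster-<guid>.<domain>:6443
oc config rename-context $(oc config current-context) cluster1
# repeat the login and rename for the second and third cluster, using cluster2 and cluster3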

can't federate pvc

Going through:

https://github.com/openshift/federation-dev/blob/master/federated-mongodb/README.md

I saw this message:

oc --context=scollier-1 -n test-namespace create -f 03-mongo-federated-pvc.yaml
error: unable to recognize "03-mongo-federated-pvc.yaml": no matches for kind "FederatedPersistentVolumeClaim" in version "types.federation.k8s.io/v1alpha1"

More info:

$ oc get clusters --all-namespaces
NAMESPACE        NAME         AGE
test-namespace   scollier-1   5h21m
test-namespace   scollier-2   5h20m
test-namespace   scollier-3   12m
$ oc get federatedclusters -n test-namespace 
NAME         READY
scollier-1   True
scollier-2   True
scollier-3   True

I'm at this commit:

commit 41874c7e6191c889c95ed240c6706ff269ba023e (HEAD -> master, origin/master, origin/HEAD)
Merge: 0febd8b 00e5c2e
Author: Scott Collier <[email protected]>
Date:   Fri May 3 07:26:47 2019 -0500

    Merge pull request #12 from openshift/video
    
    Demos for people to follow along with the videos
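The "no matches for kind" error usually means the FederatedPersistentVolumeClaim type was never enabled in the control plane. A quick way to check (a sketch; the enable command assumes a kubefedctl binary matching the deployed control-plane version):

# is the federated PVC type registered at all?
oc get crd | grep -i federatedpersistentvolumeclaim
# if not, enable federation of the PVC API type (exact command name is version-dependent)
kubefedctl enable persistentvolumeclaims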

lab 7 - I never saw the app move

I changed the placement as follows.

First, I put it on all clusters:

$ oc --context=cluster1 -n pacman patch federateddeployment pacman --type=merge -p '{"spec":{"overrides":[]}}'

It's now running on all clusters:

$ for cluster in cluster1 cluster2 cluster3;do echo "*** $cluster ***"; oc get deployment --context $cluster -n pacman;done
*** cluster1 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           28m
*** cluster2 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           28m
*** cluster3 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           28m

Then I moved it to cluster 1 only:

$ oc --context=cluster1 -n pacman patch federateddeployment pacman --type=merge -p '{"spec":{"overrides":[{"clusterName":"cluster2","clusterOverrides":[{"path":"/spec/replicas","value":0}]},{"clusterName":"cluster3","clusterOverrides":[{"path":"/spec/replicas","value":0}]}]}}'
federateddeployment.types.kubefed.k8s.io/pacman patched

And now it is only on cluster 1:

$ for cluster in cluster1 cluster2 cluster3;do echo "*** $cluster ***"; oc get deployment --context $cluster -n pacman;done
*** cluster1 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           28m
*** cluster2 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   0/0     0            0           28m
*** cluster3 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   0/0     0            0           28m

The pacman URL now gives me:

503 Service Unavailable
No server is available to handle this request.

Then I moved it to cluster 3:

oc --context=cluster1 -n pacman patch federateddeployment pacman --type=merge -p '{"spec":{"overrides":[]}}'

oc --context=cluster1 -n pacman patch federateddeployment pacman --type=merge -p '{"spec":{"overrides":[{"clusterName":"cluster1","clusterOverrides":[{"path":"/spec/replicas","value":0}]},{"clusterName":"cluster2","clusterOverrides":[{"path":"/spec/replicas","value":0}]}]}}'

for cluster in cluster1 cluster2 cluster3;do echo "*** $cluster ***"; oc get deployment --context $cluster -n pacman;done
*** cluster1 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   0/0     0            0           30m
*** cluster2 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   0/0     0            0           30m
*** cluster3 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           30m

And every time, the game showed:

Cloud: aws Zone: ap-southeast-1a Host: pacman-855bb58d86-852mg Time: 0 Level: 1 Score: 0 Lives:    

Also, in general, the application is just inconsistent in what it displays. Sometimes it won't display anything, sometimes it will display "unknown".
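One thing that would help narrow this down (a diagnostic sketch, not from the lab): compare the Host value the game reports against the pods that actually exist after each placement change, so we can tell whether traffic is really moving or the page is just cached:

for cluster in cluster1 cluster2 cluster3; do
  echo "*** $cluster ***"
  oc --context $cluster -n pacman get pods -o wide   # the pod names here should match the Host shown in the game
done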

Deployment not quite correct

When running through:

https://github.com/openshift/federation-dev/blob/master/README-ocp4.md

It looks like the deployment finished (I ran it again to confirm):

$ oc apply -R -f sample-app
federatedconfigmap.types.federation.k8s.io/test-configmap unchanged
federateddeployment.types.federation.k8s.io/test-deployment unchanged
federatedsecret.types.federation.k8s.io/test-secret unchanged
federatedservice.types.federation.k8s.io/test-service unchanged
federatedserviceaccount.types.federation.k8s.io/test-serviceaccount unchanged

However, not all the resources are getting deployed:

$ for resource in configmaps secrets deployments services; do     for cluster in scollier-1 scollier-2; do         echo ------------ ${cluster} ${resource} ------------;         oc --context=${cluster} -n test-namespace get ${resource};     done; done
------------ scollier-1 configmaps ------------
NAME                            DATA   AGE
federation-controller-manager   0      13m
------------ scollier-2 configmaps ------------
No resources found.
------------ scollier-1 secrets ------------
NAME                                            TYPE                                  DATA   AGE
builder-dockercfg-gqjf2                         kubernetes.io/dockercfg               1      14m
builder-token-b7lgm                             kubernetes.io/service-account-token   4      14m
builder-token-kpz79                             kubernetes.io/service-account-token   4      14m
default-dockercfg-7djbv                         kubernetes.io/dockercfg               1      14m
default-token-4xj8n                             kubernetes.io/service-account-token   4      14m
default-token-6fj68                             kubernetes.io/service-account-token   4      14m
deployer-dockercfg-j6lb7                        kubernetes.io/dockercfg               1      14m
deployer-token-7lbzd                            kubernetes.io/service-account-token   4      14m
deployer-token-gcc9b                            kubernetes.io/service-account-token   4      14m
federation-controller-manager-dockercfg-dqrgd   kubernetes.io/dockercfg               1      13m
federation-controller-manager-token-mm6hf       kubernetes.io/service-account-token   4      13m
federation-controller-manager-token-n5648       kubernetes.io/service-account-token   4      13m
scollier-1-lr5w7                                Opaque                                4      10m
scollier-1-scollier-1-dockercfg-8hjxj           kubernetes.io/dockercfg               1      10m
scollier-1-scollier-1-token-mxmlg               kubernetes.io/service-account-token   4      10m
scollier-1-scollier-1-token-xzpb2               kubernetes.io/service-account-token   4      10m
scollier-2-dk27l                                Opaque                                4      9m51s
------------ scollier-2 secrets ------------
NAME                                    TYPE                                  DATA   AGE
builder-dockercfg-mzx7r                 kubernetes.io/dockercfg               1      9m52s
builder-token-4ptj5                     kubernetes.io/service-account-token   4      9m52s
builder-token-wshmb                     kubernetes.io/service-account-token   4      9m52s
default-dockercfg-ktwvs                 kubernetes.io/dockercfg               1      9m52s
default-token-fvthm                     kubernetes.io/service-account-token   4      9m52s
default-token-p9crv                     kubernetes.io/service-account-token   4      9m52s
deployer-dockercfg-zc4v5                kubernetes.io/dockercfg               1      9m52s
deployer-token-jrb2z                    kubernetes.io/service-account-token   4      9m52s
deployer-token-r5jfz                    kubernetes.io/service-account-token   4      9m52s
scollier-2-scollier-1-dockercfg-6cfjt   kubernetes.io/dockercfg               1      9m52s
scollier-2-scollier-1-token-2jmwg       kubernetes.io/service-account-token   4      9m52s
scollier-2-scollier-1-token-qvhnm       kubernetes.io/service-account-token   4      9m52s
------------ scollier-1 deployments ------------
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
federation-controller-manager   1/1     1            1           13m
------------ scollier-2 deployments ------------
No resources found.
------------ scollier-1 services ------------
No resources found.
------------ scollier-2 services ------------
No resources found.

This causes the next steps to fail:

$ for cluster in scollier-1 scollier-2; do   echo ------------ ${cluster} test ------------;   oc --context=${cluster} -n test-namespace expose service test-service;   url="http://$(oc --context=${cluster} -n test-namespace get route test-service -o jsonpath='{.spec.host}')";   curl -I $url; done
------------ scollier-1 test ------------
Error from server (NotFound): services "test-service" not found
Error from server (NotFound): routes.route.openshift.io "test-service" not found
curl: (3) Bad URL
------------ scollier-2 test ------------
Error from server (NotFound): services "test-service" not found
Error from server (NotFound): routes.route.openshift.io "test-service" not found
curl: (3) Bad URL
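A sketch of where to look next (not part of the README): the control plane did accept the resources (apply reports "unchanged"), so the propagation failure should show up in the controller manager logs and in the joined-cluster status:

# anything about failing to propagate to scollier-2?
oc --context=scollier-1 -n test-namespace logs deployment/federation-controller-manager
# are both clusters joined and marked ready?
oc --context=scollier-1 -n test-namespace get federatedclusters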

how to add a cluster with a self-signed cert? -- certificate error

oc logs -f federation-controller-manager-0

gave me these errors:

W1107 06:03:45.842340       1 controller.go:229] Failed to get zones and region for cluster with client {0xc420250d20}: Get https://cloud.company.io:8443/api/v1/nodes: x509: certificate signed by unknown authority
W1107 06:04:19.704442       1 reflector.go:341] github.com/kubernetes-sigs/federation-v2/pkg/schedulingtypes/plugin.go:118: watch of *v1alpha1.FederatedDeployment ended with: The resourceVersion for the provided watch is too old.
E1107 06:04:25.907928       1 clusterclient.go:122] Failed to list nodes while getting zone names: Get https://cloud.ams1.company.io:8443/api/v1/nodes: x509: certificate is valid for cloud.company.io, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, scw-f2a996, 10.6.40.249, 172.30.0.1, not cloud.ams1.company.io
W1107 06:04:25.907975       1 controller.go:229] Failed to get zones and region for cluster with client {0xc420250c30}: Get https://cloud.ams1.company.io:8443/api/v1/nodes: x509: certificate is valid for cloud.company.io, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, scw-f2a996, 10.6.40.249, 172.30.0.1, not cloud.ams1.company.io
E1107 06:04:26.091189       1 clusterclient.go:122] Failed to list nodes while getting zone names: Get https://cloud.company.io:8443/api/v1/nodes: x509: certificate signed by unknown authority
W1107 06:04:26.091227       1 controller.go:229] Failed to get zones and region for cluster with client {0xc420250d20}: Get https://cloud.company.io:8443/api/v1/nodes: x509: certificate signed by unknown authority
W1107 06:04:59.485936       1 reflector.go:341] github.com/kubernetes-sigs/federation-v2/pkg/controller/sync/placement/resource.go:47: watch of <nil> ended with: The resourceVersion for the provided watch is too old.
E1107 06:05:06.252370       1 clusterclient.go:122] Failed to list nodes while getting zone names: Get https://cloud.ams1.company.io:8443/api/v1/nodes: x509: certificate is valid for cloud.company.io, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, scw-f2a996, 10.6.40.249, 172.30.0.1, not cloud.ams1.company.io
W1107 06:05:06.252412       1 controller.go:229] Failed to get zones and region for cluster with client {0xc420250c30}: Get https://cloud.ams1.company.io:8443/api/v1/nodes: x509: certificate is valid for cloud.company.io, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, scw-f2a996, 10.6.40.249, 172.30.0.1, not cloud.ams1.company.io
E1107 06:05:06.399046       1 clusterclient.go:122] Failed to list nodes while getting zone names: Get https://cloud.company.io:8443/api/v1/nodes: x509: certificate signed by unknown authority
W1107 06:05:06.399083       1 controller.go:229] Failed to get zones and region for cluster with client {0xc420250d20}: Get https://cloud.company.io:8443/api/v1/nodes: x509: certificate signed by unknown authority
W1107 06:05:10.712777       1 reflector.go:341] github.com/kubernetes-sigs/federation-v2/pkg/controller/sync/controller.go:284: watch of <nil> ended with: The resourceVersion for the provided watch is too old.
W1107 06:05:16.767932       1 reflector.go:341] github.com/kubernetes-sigs/federation-v2/pkg/schedulingtypes/plugin.go:119: watch of *v1alpha1.FederatedReplicaSetOverride ended with: The resourceVersion for the provided watch is too old.
W1107 06:05:37.076449       1 reflector.go:341] github.com/kubernetes-sigs/federation-v2/pkg/controller/sync/controller.go:284: watch of <nil> ended with: The resourceVersion for the provided watch is too old.
E1107 06:05:46.511929       1 clusterclient.go:122] Failed to list nodes while getting zone names: Get https://cloud.ams1.company.io:8443/api/v1/nodes: x509: certificate is valid for cloud.company.io, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, scw-f2a996, 10.6.40.249, 172.30.0.1, not cloud.ams1.company.io
W1107 06:05:46.511978       1 controller.go:229] Failed to get zones and region for cluster with client {0xc420250c30}: Get https://cloud.ams1.company.io:8443/api/v1/nodes: x509: certificate is valid for cloud.company.io, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, scw-f2a996, 10.6.40.249, 172.30.0.1, not cloud.ams1.company.io
E1107 06:05:46.692169       1 clusterclient.go:122] Failed to list nodes while getting zone names: Get https://cloud.company.io:8443/api/v1/nodes: x509: certificate signed by unknown authority
W1107 06:05:46.692211       1 controller.go:229] Failed to get zones and region for cluster with client {0xc420250d20}: Get https://cloud.company.io:8443/api/v1/nodes: x509: certificate signed by unknown authority
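For the hostname-mismatch half of this, a quick way to confirm which names the API server certificate actually covers (a generic openssl sketch, not from the repo docs):

openssl s_client -connect cloud.ams1.company.io:8443 -servername cloud.ams1.company.io </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'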

Lab 10 Run Through

I did have an issue with the argocd app not deleting.

$ argocd app list
NAME                CLUSTER                                                               NAMESPACE     PROJECT  STATUS     HEALTH   SYNCPOLICY  CONDITIONS     REPO                                                                                             PATH                                                TARGET
cluster1-kustomize  https://api.cluster-brno-32bd.brno-32bd.sandbox1682.opentlc.com:6443  dev-web-site  default  Synced     Healthy  Auto        <none>         http://gogs.apps.cluster-brno-32bd.brno-32bd.sandbox1682.opentlc.com/student/federation-dev.git  labs/lab-5-assets/base                              master
cluster1-mongo      https://api.cluster-brno-32bd.brno-32bd.sandbox1682.opentlc.com:6443  mongo         default  OutOfSync  Missing  Auto        DeletionError  http://gogs.apps.cluster-brno-32bd.brno-32bd.sandbox1682.opentlc.com/student/federation-dev.git  labs/lab-6-assets/overlays/cluster1                 master
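A hedged retry sketch (standard Argo CD CLI and CRD, not lab-specific; the argocd namespace is an assumption): re-issue the delete with cascading and, if it still hangs, inspect the Application resource for a stuck finalizer:

argocd app delete cluster1-mongo --cascade
# if it remains in DeletionError, look for a finalizer holding the Application object
oc -n argocd get applications.argoproj.io cluster1-mongo -o yaml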

lab 5, add cleanup notes

So, if someone has a problem with lab 5, how do they clean it up? Lab 4 has instructions at the end of the page. Should we create a single page with headers like "Lab 4 Cleanup", "Lab 5 Cleanup", and "Lab 6 Cleanup", and link to those headers at the bottom of each lab? That would give a consistent experience and allow people to start over at that lab if they need to.
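For lab 5, something as simple as the following would probably do (a sketch, assuming lab 5 only creates the mongo namespace on each cluster, as in the replica-set issue above):

for cluster in cluster1 cluster2 cluster3; do
  oc --context ${cluster} delete namespace mongo --ignore-not-found
done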
