
taskcluster-cli's Introduction

TaskCluster CLI Client

Overview

TaskCluster CLI is a command-line client offering control of and access to TaskCluster from the comfort of your command line. It provides utilities ranging from direct calls to specific API endpoints to more complex and practical tasks such as listing and cancelling scheduled runs.

This repository has been merged into the Taskcluster repo; see clients/client-shell.

taskcluster-cli's People

Contributors

ayubmohamed, ccooper, djmitche, imbstack, jonasfj, lteigrob, mutterroland, nanjekyejoannah, nikhita, owlishdeveloper, palash25, petemoore, simonsapin, srfraser, t0xiccode, walac, yannlandry, yasch007, ydidwania


taskcluster-cli's Issues

Add version command

In summary, this will add a taskcluster version command. It can be just a template with a placeholder version until we decide how we store and update the version.

Command to wait for a task to complete

It's quite possible that users will write scripts that start a task, then wait for it to complete. Something like:

taskcluster api queue createTask $TASKID <task.json
taskcluster task await-completion $TASKID
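
A minimal sketch of the polling loop such an await-completion command could use. The status URL and response shape below are assumptions based on the queue API docs of the time, not code from this repository; a real implementation might listen on pulse rather than poll.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    type statusResponse struct {
        Status struct {
            State string `json:"state"` // unscheduled, pending, running, completed, failed, exception
        } `json:"status"`
    }

    // awaitCompletion polls the queue's public status endpoint until the task
    // reaches a resolved state, then returns that state.
    func awaitCompletion(taskID string) (string, error) {
        url := "https://queue.taskcluster.net/v1/task/" + taskID + "/status"
        for {
            resp, err := http.Get(url)
            if err != nil {
                return "", err
            }
            var s statusResponse
            err = json.NewDecoder(resp.Body).Decode(&s)
            resp.Body.Close()
            if err != nil {
                return "", err
            }
            switch s.Status.State {
            case "completed", "failed", "exception":
                return s.Status.State, nil
            }
            time.Sleep(15 * time.Second) // crude poll interval
        }
    }

    func main() {
        state, err := awaitCompletion(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(state)
    }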

New feature: Cancel a subset of a group

This need came out last Friday, when releng had several releases in flight and all of them were claiming the same workers. One way to solve this issue is to handle priorities in a different manner. Nonetheless, having a manual way to cancel/rerun a certain type of task would be a great way to manually help a graph claim workers.

For example:

taskcluster group cancel --worker-type 'signing-worker-v1' $TASK_GROUP_ID

will:

  1. look for unscheduled/pending/running tasks that match this worker type
  2. display the name of the tasks found
  3. ask for a confirmation
  4. if confirmation given, cancel them

rerun may also follow the same workflow.

What do you guys think?
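
A sketch of steps 1-4 above. The list endpoint and response fields are assumptions based on the queue's listTaskGroup API (continuation tokens are ignored here), and cancelling requires signed credentials, so cancelTask below is a hypothetical stub rather than a real call.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
        "strings"
    )

    type groupListing struct {
        Tasks []struct {
            Status struct {
                TaskID string `json:"taskId"`
                State  string `json:"state"`
            } `json:"status"`
            Task struct {
                WorkerType string `json:"workerType"`
                Metadata   struct {
                    Name string `json:"name"`
                } `json:"metadata"`
            } `json:"task"`
        } `json:"tasks"`
    }

    // cancelTask is a hypothetical stand-in for a signed call to queue.cancelTask.
    func cancelTask(taskID string) error {
        fmt.Println("would cancel", taskID)
        return nil
    }

    func main() {
        groupID, workerType := os.Args[1], os.Args[2]
        resp, err := http.Get("https://queue.taskcluster.net/v1/task-group/" + groupID + "/list")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var listing groupListing
        if err := json.NewDecoder(resp.Body).Decode(&listing); err != nil {
            panic(err)
        }

        // 1-2. find and display unscheduled/pending/running tasks on the given worker type
        var matched []string
        for _, t := range listing.Tasks {
            switch t.Status.State {
            case "unscheduled", "pending", "running":
                if t.Task.WorkerType == workerType {
                    matched = append(matched, t.Status.TaskID)
                    fmt.Println(t.Status.TaskID, t.Task.Metadata.Name)
                }
            }
        }

        // 3. ask for confirmation
        fmt.Printf("Cancel %d tasks? [y/N] ", len(matched))
        answer, _ := bufio.NewReader(os.Stdin).ReadString('\n')
        if strings.TrimSpace(answer) != "y" {
            return
        }

        // 4. cancel them
        for _, id := range matched {
            if err := cancelTask(id); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }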

Enable the implementation of subcommands

As of now, when we need to add a command, we create an implementation of the CommandProvider interface, and register it. We have already added a few commands using this:

  • taskcluster help
  • taskcluster from-now [...]
  • and others

However, there are commands that have their own subcommands. taskcluster api [...] is an example of this. Another example:

  • taskcluster slugid v4
  • taskcluster slugid nice
  • taskcluster slugid decode <slug>
  • taskcluster slugid encode <uuid>

In this case, the best option has been to implement all four subcommands manually as part of the slugid command. It would be preferable to be able to have one implementation of CommandProvider for each subcommand and ideally be able to register it under the slugid command.

It seems docopt is able to parse subcommands, so this is good.

This will be pretty much essential to all the taskcluster api subcommands.
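
One illustrative direction is sketched below. These names are hypothetical, not the repository's actual CommandProvider interface: the idea is to give a parent command its own registry so each subcommand (v4, nice, decode, encode) can be implemented and registered independently.

    package commands

    // SubcommandProvider is a hypothetical per-subcommand analogue of CommandProvider.
    type SubcommandProvider interface {
        Summary() string
        Usage() string
        Execute(args map[string]interface{}) bool
    }

    // ParentCommand dispatches to registered subcommands by name.
    type ParentCommand struct {
        Name        string
        subcommands map[string]SubcommandProvider
    }

    // RegisterSubcommand attaches a provider under e.g. "v4" or "decode".
    func (p *ParentCommand) RegisterSubcommand(name string, provider SubcommandProvider) {
        if p.subcommands == nil {
            p.subcommands = map[string]SubcommandProvider{}
        }
        p.subcommands[name] = provider
    }

    // Execute looks up the subcommand named in the parsed arguments and runs it.
    func (p *ParentCommand) Execute(subcommand string, args map[string]interface{}) bool {
        provider, ok := p.subcommands[subcommand]
        if !ok {
            return false
        }
        return provider.Execute(args)
    }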

Add expand-scopes command

In taskcluster we use scopes for authorization, see:
https://docs.taskcluster.net/presentations/scopes/
(formal definition and examples)

The taskcluster-auth service maintains roles... a role is effectively just a mapping from a scope of the form assume:... to a set of scopes that the role grants.

Example:
assume:project:taskcluster:tutorial grants scopes listed here:
https://tools.taskcluster.net/auth/roles/#project:taskcluster:tutorial

A command like this:

$ taskcluster expand-scopes assume:project:taskcluster:tutorial
assume:project:taskcluster:tutorial
queue:create-task:aws-provisioner-v1/tutorial
secrets:get:garbage/*
secrets:set:garbage/*

Would be pretty nice to have. The API has a method for expanding scopes here:
https://docs.taskcluster.net/reference/platform/auth/api-docs#expandScopes

So this is really just parsing arguments, calling the API, and printing the result line by line.
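
A sketch under those assumptions. The endpoint path and whether credentials need to be attached are taken from the linked API docs of the time and should be double-checked; the real command would presumably go through the generated taskcluster-client-go auth client and sign the request.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    type scopeSet struct {
        Scopes []string `json:"scopes"`
    }

    func main() {
        body, _ := json.Marshal(scopeSet{Scopes: os.Args[1:]})
        resp, err := http.Post(
            "https://auth.taskcluster.net/v1/scopes/expand", // assumed expandScopes route
            "application/json",
            bytes.NewReader(body),
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()

        var expanded scopeSet
        if err := json.NewDecoder(resp.Body).Decode(&expanded); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // print the result line by line, as described above
        for _, scope := range expanded.Scopes {
            fmt.Println(scope)
        }
    }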

Round out taskcluster-github testing, continued

Pulling this out of #92, there are a few things that can be added to .taskcluster.yml at some point.

gometalinter things:
--enable=varcheck --enable=aligncheck --enable=errcheck --enable=ineffassign --enable=unconvert --enable=goimports --enable=unused

go fmt

Round out taskcluster-github testing

Now that we have .taskcluster.yml going (see #49), we want to be able to run some of the extra things we had for travis:

  • go test -v -race
  • gofmt -s -w
  • gometalinter --disable-all --enable=gotype --enable=golint --enable=deadcode --enable=staticcheck --enable=misspell --enable=vet --enable=vetshadow --enable=gosimple --enable=varcheck --enable=aligncheck --enable=errcheck --enable=ineffassign --enable=unconvert --enable=goimports --enable=unused
    (from #25, see for more details)

Error in `task log` when the public logs don't exist

When I try to watch or fetch the logs for a task that was just created (pending, unscheduled, or otherwise), I'm getting an ugly error message:

{
  "code": "ResourceNotFound",
  "message": "Artifact not found\n----\nmethod:     getLatestArtifact\nerrorCode:  ResourceNotFound\nstatusCode: 404\ntime:       2017-03-10T15:46:56.672Z",
  "requestInfo": {
    "method": "getLatestArtifact",
    "params": {
      "0": "public/logs/live.log",
      "taskId": "8aATqJZQSfihCp0cI0axdA",
      "name": "public/logs/live.log"
    },
    "payload": {},
    "time": "2017-03-10T15:46:56.672Z"
  }
}
Error: Received unexpected response code 404

The task log command should catch that error (the logs don't exist yet), and I think it should first check the status of the task so that it can display a better error message if the task doesn't exist or the logs haven't been created yet.

Build releases using taskcluster-github

When we create a new Github release of taskcluster-cli, tc-github should automatically build and upload binaries for the various platforms we support, and make them available for users.

Add test case for the credentials module

Taskcluster uses Hawk for authentication under the hood. The credentials module provides an interface for TaskCluster authentication. Unfortunately, it doesn't have any unit tests. Implementing a test case for it would follow these steps:

  • Create a Credentials object with ClientID = tester and AccessToken = no-secret
  • Create a request to testAuthenticate
  • Sign it using SignRequest
  • Perform the request and check it was successful

Connect user's shell to a one-click-loaner

We have a fancy one-click-loaner option that is accessible via https://tools.taskcluster.net, but the terminal emulation is TERRIBLE and you can't copy/paste or do any of the normal terminal things.

Much better would be to connect my existing terminal to a one-click loaner:

dustin@lamport ~ $ taskcluster terminal $TASKID
# <--- shell prompt from loaner

This should support both websocket protocols.

Tail the log of a running task

taskcluster task-log <taskId>

should output the logs for the given taskId as they are generated, and exit when the task completes.
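
A minimal sketch of the streaming part, assuming the queue's latest-artifact URL pattern for public/logs/live.log; retries, and reconnecting to the backing log once the task finishes, are left out.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        taskID := os.Args[1]
        url := "https://queue.taskcluster.net/v1/task/" + taskID +
            "/artifacts/public/logs/live.log"
        resp, err := http.Get(url)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            fmt.Fprintln(os.Stderr, "unexpected response:", resp.Status)
            os.Exit(1)
        }

        // live.log is served as a chunked stream while the task runs, so copying
        // the body writes lines as they are produced and returns when it closes.
        if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }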

Replace javascript taskcluster-cli (Allow task creation, scheduling, tailing)

A previous attempt at a command line client in javascript exists (see gregarndt/taskcluster-cli).

This client should be able to replace that javascript implementation.

  • taskcluster task create to replace taskcluster run: creates and schedules a task
  • taskcluster task run to replace taskcluster run-task: creates, schedules, and watches a task (streaming logs from public/logs/live.log by default)
  • taskcluster task await to wait for the completion of a given task (by taskId)

This depends on #23 and is related to #60.

Make sure `taskcluster signin` is easy to use with other libraries

The Python, Node, and Go TaskCluster clients have certain conventions for getting credentials. In theory they are all the same conventions, but that should be verified.

Then, those conventions should be written down in the docs.

Then, ensure that taskcluster uses the same conventions, and that simply running taskcluster signin will set credentials that all of these things can use.

Support communicating with taskcluster-proxy if running inside a task

We have this neat thing where, if running in a task, you can connect to, e.g., http://taskcluster/queue/v1/task/<taskId>. This request is proxied to the queue (in this case) with credentials matching the task's scopes attached.

taskcluster-cli should support using this proxy when available.

An open question is how to tell when the proxy is available, and how to calculate the appropriate URL.

Add CLI command to upload/download to/from S3

The auth.awsS3Credentials endpoint takes TaskCluster credentials and returns temporary STS credentials for S3.
It would be cool if we had a command like taskcluster s3 copy ./myfile.txt s3://<bucket>/<prefix>
that could get temporary credentials from auth and use those to upload a file to S3.
Obviously, it would have to do retries with exponential back-off, and ideally also multi-part upload for large files.
Detect the mimetype based on the file extension... Provide content-md5 for S3 to verify.

See: https://docs.taskcluster.net/reference/platform/auth/api-docs#awsS3Credentials

Further steps could include:

  • Download file
  • Upload a folder of files
  • Download a folder of files
  • Adding extra headers like x-amz-meta-content-sha256 for integrity checks on downloads
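
A sketch of the upload core using aws-sdk-go, assuming the temporary credentials (access key, secret, session token) have already been fetched via auth.awsS3Credentials; the region parameter is a placeholder. s3manager handles multi-part uploads and retries, and the content type is guessed from the file extension as suggested above.

    package s3copy

    import (
        "mime"
        "os"
        "path/filepath"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    // Upload copies a local file to s3://bucket/key using temporary STS credentials.
    func Upload(accessKeyID, secretAccessKey, sessionToken, region, bucket, key, path string) error {
        sess, err := session.NewSession(&aws.Config{
            Region:      aws.String(region), // placeholder; would come from the credentials response
            Credentials: credentials.NewStaticCredentials(accessKeyID, secretAccessKey, sessionToken),
        })
        if err != nil {
            return err
        }

        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        // s3manager splits large files into multi-part uploads and retries parts for us.
        uploader := s3manager.NewUploader(sess)
        _, err = uploader.Upload(&s3manager.UploadInput{
            Bucket:      aws.String(bucket),
            Key:         aws.String(key),
            Body:        f,
            ContentType: aws.String(mime.TypeByExtension(filepath.Ext(path))),
        })
        return err
    }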

Better test coverage

It's not glamorous, but most of taskcluster-cli is currently untested, and we need to fix that. So we need to write tests for all (or at least most) of the existing code!

This will probably involve refactoring some of that code to make it more testable, in a go-friendly way.

Implement command to download an artifact

An artifact is a result of a task. It can be a binary file, a test report, a log, etc. We need to implement a command to download an artifact:

taskcluster download <taskId> <runId> <artifact>
taskcluster download <taskId> <artifact>
taskcluster download --index <indexNamespace> <artifact>

When <runId> is omitted, it should download artifacts from the latest runId.

  • Automatic retries
  • Exponential backoff
  • Check content-length
  • Follow redirects
  • Enforce HTTPS
  • Stream result to a file
  • gzip decode if content-encoding: gzip

Please check https://godoc.org/github.com/taskcluster/taskcluster-client-go/queue#Queue.GetArtifact_SignedURL

If <artifact> is omitted, it should list all artifacts.
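
A sketch of the download mechanics only (streaming to a file, enforcing HTTPS, gunzipping when the artifact is served with content-encoding: gzip). The signed URL itself would come from queue.GetArtifact_SignedURL linked above, and retries with exponential backoff and the content-length check would wrap this function.

    package download

    import (
        "compress/gzip"
        "errors"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func download(signedURL, dest string) error {
        if !strings.HasPrefix(signedURL, "https://") {
            return errors.New("refusing to download over plain HTTP")
        }
        // net/http follows redirects by default; retries/backoff would wrap this call.
        resp, err := http.Get(signedURL)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        // gunzip explicitly if the response still carries content-encoding: gzip
        var body io.Reader = resp.Body
        if resp.Header.Get("Content-Encoding") == "gzip" {
            gz, err := gzip.NewReader(resp.Body)
            if err != nil {
                return err
            }
            defer gz.Close()
            body = gz
        }

        // stream the result to a file rather than buffering it in memory
        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, body)
        return err
    }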

Automatically export documentation for taskcluster-docs on release

We have taskcluster-lib-docs for JS services, which can upload a tarball containing documentation. Let's build a similar thing for Go (but omitting the API references and exchanges, since we don't implement TaskCluster HTTP APIs in Go), so that we can upload documentation about all of the tc-cli options to taskcluster-docs, too.

Sign requests using NewRequestAuth and sign URLs using NewURLAuth instead of using NewURLAuth for both

@jonasfj

This can't work because here we are generating URL auth, and URL Auth doesn't set a nonce (since bewits are intentionally not protected from replay attack, hence why they should only be used for GET requests).

Instead, we need to use Request auth for signing a request, not URL auth... :-)

Otherwise, it would probably go unnoticed (client may operate without problems) but we'd have a gaping security hole...

Better support for users with `taskcluster signin`

dustin@lamport ~/go/src/github.com/taskcluster/taskcluster-cli [master] $ ./taskcluster signin
Starting
Listening for a callback on: http://localhost:37706

..and it hangs there. What do I do? Some better documentation for the user would be helpful :)

Tab completion

Users will want to be able to hit tab to complete a command, or to see a list of command options if there is more than one available.

Fix misspells

We want to enable gometalinter's misspell check in .taskcluster.yml. The warnings we're getting just need to be fixed, and then this can be enabled again.

Add slugid command

A slugid is a URL-safe encoding of a UUID, used to uniquely identify tasks. The goal here is to expose the slugid-go interface as a taskcluster-cli command:

taskcluster slugid v4
taskcluster slugid nice
taskcluster slugid from-uuid <uuid>
taskcluster slugid to-uuid <slugid>
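
A small sketch of how the command could wrap slugid-go. The function names follow the slugid-go package, though the exact signatures should be treated as assumptions to verify.

    package main

    import (
        "fmt"
        "os"

        "github.com/pborman/uuid"
        "github.com/taskcluster/slugid-go/slugid"
    )

    func main() {
        switch os.Args[1] {
        case "v4":
            fmt.Println(slugid.V4()) // random slug, may start with '-'
        case "nice":
            fmt.Println(slugid.Nice()) // "nice" slug, never starts with '-'
        case "to-uuid":
            fmt.Println(slugid.Decode(os.Args[2]).String())
        case "from-uuid":
            fmt.Println(slugid.Encode(uuid.Parse(os.Args[2])))
        default:
            fmt.Fprintln(os.Stderr, "usage: slugid v4|nice|from-uuid <uuid>|to-uuid <slugid>")
            os.Exit(2)
        }
    }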

Integrate with Travis-CI

Travis is a Continuous Integration system aimed at small and medium-sized projects. taskcluster-cli doesn't run any tests or linters on pushes and pull requests, which is sad. We would like to use Travis to run:

  1. go test -v -race
  2. gofmt -s -w and then check if we have uncommitted changes
  3. gometalinter --disable-all --enable=gotype --enable=golint --enable=deadcode --enable=staticcheck --enable=misspell --enable=vet --enable=vetshadow --enable=gosimple --enable=varcheck --enable=aligncheck --enable=errcheck --enable=ineffassign --enable=unconvert --enable=goimports --enable=unused

You can see how we run go test and go fmt in taskcluster-worker.

This should run on Linux, macOS, and Windows. Ideally we should run under Go 1.7, but Travis doesn't support 1.7 yet, so it is OK to use 1.6 for now. But if you are keen to make 1.7 work, you can see how we do it in taskcluster-worker and do something like that.

You can start by fixing the errors reported by gometalinter and then add Travis.

Automatic updates

It would be nice for users to always have the latest and greatest taskcluster client. We could do this a few ways:

  1. Build apt, yum, and homebrew repositories and automatically upload releases to them, so users can update the "normal" way
  2. Only update the API definitions, caching them in ~/.taskcluster. Then taskcluster api would always be up-to-date, but the other subcommands would only get updates when the user manually downloads a new version.
  3. Use some service like https://equinox.io/

There are some serious security concerns with options 1 and 3, of course!

Mount task container file-system w. FUSE

In #57 we have a plan to implement file-transfer by running file-browser-go and talking to it over stdin/stdout through the websocket protocols implemented in #54.

A cool addition would be the ability to mount the entire root file-system of the task container locally using FUSE. There is a golang library here https://godoc.org/bazil.org/fuse.
I'm not sure how easy or hard it would be; also, I doubt we need to support all FUSE features, like file locking and whatnot. But just supporting the platform-independent parts would be cool.

This would obviously be a command that only works on Linux... but, as always, we could look for other virtual file-system implementations to support Windows or OS X. It looks like there is already a Windows option for Go, https://github.com/dokan-dev/dokany, though it might not be as easy to build.

@ckousik, this is another use case for file-browser-go; perhaps this is a more efficient way of doing things compared to re-implementing rsync, as this way we would only transfer things as we read/write them.

Issue with credentials

Doing the signin subcommand works fine and generates the configuration file in .config/taskcluster.yml.

However, running any api command after that fails with the following error message:

Failed to sign request, error:  Failed to parse certificate, error: json: Unmarshal(nil *client.certificate)

Run tests using taskcluster-github

We have a basic .travis.yml set up in #46, but we should be using TaskCluster via taskcluster-github instead of Travis, and doing releases (binary builds) when we push a new tag.

Add scope-check command

In taskcluster we use scopes for authorization, see:
https://docs.taskcluster.net/presentations/scopes/
(formal definition and examples)

It would be nice if we had a command like:

$ taskcluster scope-check scopeA scopeB... satisfies scopeC scopeD...

which would then determine whether scopes scopeA, scopeB, ... satisfy scopeC, scopeD, ...; the argument satisfies would be a keyword in this command.

Examples:

$ taskcluster scope-check queue:ping satisfies queue:ping # exit code 0
YES

$ taskcluster scope-check queue:ping satisfies queue:test # exit code 1
NO, missing:
 - queue:test

$ taskcluster scope-check 'queue:*' test:scope other-scope satisfies queue:test # exit code 0
YES

$ taskcluster scope-check 'queue:*' satisfies other-scope # exit code 1
NO, missing:
 - queue:*

$ taskcluster scope-check 'queue:*' satisfies 'queue:create-ta*' # exit code 0
YES 

Maybe the output messages could be formatted better...

Note: step one is to just do a simple string comparison. The next step would then be to take the left-hand-side scopes and expand them using:
https://docs.taskcluster.net/reference/platform/auth/api-docs#expandScopes

Any scope of the form assume:... may be expanded. On the right-hand side, expansion is unnecessary, since if assume:... is satisfied, then any expansion of the left-hand side would also include the scopes assume:... expands to.
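
The "step one" comparison is simple enough to sketch: a scope satisfies a required scope if it is equal to it, or if it ends in * and the required scope starts with everything before the *. This matches the examples above (queue:* satisfies queue:create-ta*, but not other-scope).

    package scopes

    import "strings"

    // satisfies reports whether any scope in have satisfies the single scope want.
    func satisfies(have []string, want string) bool {
        for _, s := range have {
            if s == want {
                return true
            }
            if strings.HasSuffix(s, "*") && strings.HasPrefix(want, strings.TrimSuffix(s, "*")) {
                return true
            }
        }
        return false
    }

    // Missing returns the right-hand-side scopes not satisfied by the left-hand side,
    // so the command can print "NO, missing: ..." and pick its exit code.
    func Missing(have, want []string) []string {
        var missing []string
        for _, w := range want {
            if !satisfies(have, w) {
                missing = append(missing, w)
            }
        }
        return missing
    }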

What happens if a config option is tied to an environment variable AND is defined in the config file?

While going over config/serialization I realized that there might be issues when some options are defined both as environment variables and as config file values.

I assume that when an option is tied to an environment variable, we will need to edit that environment variable to change the value of that option. But what happens if I do taskcluster config set OPTION VALUE? Which value then takes precedence?

Shell tests

Perform tests (or write a script) to make sure ./taskcluster is behaving properly (breakout of #47).

Create a `taskcluster status` command

This command would call (maybe in parallel?) the ping endpoint for every service that has one, printing an "OK!" line for each service for which it returns successfully.

Users would find this helpful when something goes wrong and they wonder, as they always do, "is taskcluster down?" There's also https://status.taskcluster.net that shows about the same data.
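
A sketch of the parallel ping. The services listed are illustrative placeholders, not an authoritative list; a real implementation would derive the service names and ping URLs from the API definitions.

    package main

    import (
        "fmt"
        "net/http"
        "sync"
        "time"
    )

    func main() {
        // hypothetical subset of services; the real list would come from the API definitions
        services := map[string]string{
            "queue": "https://queue.taskcluster.net/v1/ping",
            "auth":  "https://auth.taskcluster.net/v1/ping",
            "index": "https://index.taskcluster.net/v1/ping",
        }

        client := &http.Client{Timeout: 10 * time.Second}
        var wg sync.WaitGroup
        for name, url := range services {
            wg.Add(1)
            go func(name, url string) {
                defer wg.Done()
                resp, err := client.Get(url)
                if err == nil && resp.StatusCode == http.StatusOK {
                    resp.Body.Close()
                    fmt.Printf("%-10s OK!\n", name)
                    return
                }
                if resp != nil {
                    resp.Body.Close()
                }
                fmt.Printf("%-10s FAILED\n", name)
            }(name, url)
        }
        wg.Wait()
    }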

Find a way to get metadata about command-line options

For command-line completion (#64), we need information on the available command-line options.

Docopt, as-written, doesn't support that: its Parse method just interprets the usage strings, then returns the result of parsing the command line. It doesn't give any information from its interpretation of the usage strings.

Options:

  1. Fork docopt to return that information (if this isn't too hard) (and if it makes sense, make a PR for docopt in case they want to adopt it)
  2. Find another command-line option library that will give us the data we need
  3. Write our own command-line option library

Automatically update the API definitions

The service definitions are fetched from the schema server and turned into Go code through go generate.

Currently this is a manual process, with the results checked in when we remember to do so. We have a few ideas regarding how to make this better.

The first idea is to have a scheduled task in taskcluster to run go generate and generate a PR if there are changes.

The other option is to run go generate only as part of the release process; the generated binaries would have the latest definitions, but the repository would have old versions checked in.

Move API interaction commands under an `api` subcommand

The help currently looks like this:

    auth                 Operate on the auth service
    authEvents           Operate on the authEvents service
    awsProvisioner       Operate on the awsProvisioner service
    awsProvisionerEvents Operate on the awsProvisionerEvents service
    config               Get/set taskcluster CLI configuration options
    github               Operate on the github service
    githubEvents         Operate on the githubEvents service
    help                 Prints help for a command.
    hooks                Operate on the hooks service
    index                Operate on the index service
    login                Operate on the login service
    notify               Operate on the notify service
    pulse                Operate on the pulse service
    purgeCache           Operate on the purgeCache service
    purgeCacheEvents     Operate on the purgeCacheEvents service
    queue                Operate on the queue service
    queueEvents          Operate on the queueEvents service
    scheduler            Operate on the scheduler service
    schedulerEvents      Operate on the schedulerEvents service
    secrets              Operate on the secrets service
    signin               Sign-in to get temporary credentials
    treeherderEvents     Operate on the treeherderEvents service
    version              Prints the TaskCluster version.

and that makes it hard to spot the few options that aren't about a service. I'd prefer

    api                  Perform TaskCluster API operations
    config               Get/set taskcluster CLI configuration options
    help                 Prints help for a command.
    secrets              Operate on the secrets service
    signin               Sign-in to get temporary credentials
    version              Prints the TaskCluster version.

Document common use cases

Users will want to know the best way to do some common things with this tool:

  • install it
  • create a task, wait for it to finish while tailing its logs, and then download an artifact from it
  • connect interactively to a one-click loaner
  • .. other stuff implemented this semester

Some really clear, well-thought-out documentation guiding them through this process would be great, especially if added to the tutorial at https://docs.taskcluster.net.

Add `taskcluster completion` command dumping auto-complete script for bash

I'm not 100% sure how to do this... There are probably some docs to read :)

But it is possible to do auto-completion for a custom command. I suspect that we can auto-generate the auto-completion logic from the docopt strings, and a command that can install the auto-complete script in /etc/bash_completion.d/taskcluster would be pretty cool.

`taskcluster pulse-publish` command

A command taskcluster pulse-publish [--format yaml|json] [--message <message>] <exchange> <routingkey>

If no --message is given, it opens a prompt that you can enter the message into...
Also it should declare the exchange if it doesn't exist.

Read more about pulse here: https://wiki.mozilla.org/Auto-tools/Projects/Pulse

Similar to #87, we can store the username/password in the config system for taskcluster-cli... These should also default to the PULSE_USERNAME and PULSE_PASSWORD env vars if no config values are entered...
Notice: this should primarily be based on the stuff in config/...

Ideally, we can reuse the same config options so you don't have to set the password twice...
Maybe for this to work it has to be taskcluster pulse with two subcommands, publish and listen; either way, let's start by implementing one of these. A sketch of the publish half follows.
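
This sketch uses the streadway/amqp package; the Pulse host/port and the "topic" exchange kind are assumptions, and the credentials come from PULSE_USERNAME / PULSE_PASSWORD as described above.

    package main

    import (
        "fmt"
        "os"

        "github.com/streadway/amqp"
    )

    func main() {
        user, pass := os.Getenv("PULSE_USERNAME"), os.Getenv("PULSE_PASSWORD")
        exchange, routingKey, message := os.Args[1], os.Args[2], os.Args[3]

        conn, err := amqp.Dial(fmt.Sprintf("amqps://%s:%s@pulse.mozilla.org:5671/", user, pass))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ch, err := conn.Channel()
        if err != nil {
            panic(err)
        }
        defer ch.Close()

        // declare the exchange if it doesn't exist, as described above
        if err := ch.ExchangeDeclare(exchange, "topic", true, false, false, false, nil); err != nil {
            panic(err)
        }

        err = ch.Publish(exchange, routingKey, false, false, amqp.Publishing{
            ContentType: "application/json",
            Body:        []byte(message),
        })
        if err != nil {
            panic(err)
        }
    }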
