pyinvoke / invoke
Pythonic task management & command execution.
Home Page: http://pyinvoke.org
License: BSD 2-Clause "Simplified" License
Kinda forgot that part of #23. CONFIDENCE INSPIRING.
When printing text inside the Context test suite and running the tests with invoke itself, I get what looks like classic thread print problems. Running tests via spec works fine. Figure out wtf. Could swear similar test debug prints in eg Parser worked fine yesterday.
The Argument-level API allows an arg to have one or more names, but right now @task only does auto-shortflags or no-auto-shortflags, without exposing the aliasing behavior to users.
The obvious choice is a map, @task(argnames={'realkwarg': ['r', 'rk']}) (turning into the parser-level flags for realkwarg being --realkwarg, --rk and -r). Not sure what this option's name should be though; argnames is kinda fugly.
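For illustration, here is how that hypothetical argnames map could expand into parser-level flag strings. The helper name flags_for is invented for this sketch; it is not part of Invoke's API.

```python
def flags_for(argname, aliases):
    # Long names become --flags, single letters become short -flags,
    # mirroring the hypothetical @task(argnames={'realkwarg': ['r', 'rk']}).
    names = [argname] + list(aliases)
    return ["--" + n if len(n) > 1 else "-" + n for n in names]

flags_for("realkwarg", ["r", "rk"])  # ['--realkwarg', '-r', '--rk']
```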
One pattern that is very useful but never appeared in Fabric 1.x is solid environment variable support for configuring the tool, e.g. setting config options via the invoking environment.
This would be smart to do in Invoke, especially if done in an extensible way (e.g. client libs can say "this option can be set by env var XXX")
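One way to sketch that extensibility: a registry mapping option names to environment variable names, consulted at config-resolution time. Everything here (the registry shape, INVOKE_COLLECTION, resolve_option) is an assumption for illustration, not Invoke's actual API.

```python
import os

# Hypothetical registry: client libs could add entries declaring
# "this option can be set by env var XXX".
ENV_MAP = {"collection": "INVOKE_COLLECTION"}

def resolve_option(name, env_map, default=None):
    # A registered env var, when present, overrides the default value.
    var = env_map.get(name)
    if var is not None and var in os.environ:
        return os.environ[var]
    return default

os.environ["INVOKE_COLLECTION"] = "mytasks"
print(resolve_option("collection", ENV_MAP, default="tasks"))  # prints "mytasks"
```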
See subject.
% virtualenv venv
% source venv/bin/activate
% git clone git://github.com/pyinvoke/invoke.git
% cd invoke
% python setup.py install
% inv
Traceback (most recent call last):
...
File "build/bdist.macosx-10.8-intel/egg/invoke/vendor/lexicon/alias_dict.py", line 1, in <module>
ImportError: No module named six
I can see six.py in vendor/, but it looks like this is a simple import error, since six isn't installed system-wide and there's no six.py in vendor/lexicon.
This is actually my first interaction with invoke so maybe I'm doing it wrong.
Here's some platform info:
% python --version
Python 2.7.2
% uname -a
Darwin nibbler 12.2.1 Darwin Kernel Version 12.2.1: Thu Oct 18 12:13:47 PDT 2012; root:xnu-2050.20.9~1/RELEASE_X86_64 x86_64
I.e. in the case where you had to name the task something silly to work around namespace collisions within the host Python module. See e.g. https://github.com/pyinvoke/invocations/blob/9c342190c5af1ea2ec74bbcc3b6fc1e935d3cfcd/invocations/docs.py where we had to rename what would've been def build to def _build, then build an explicit namespace mapping build=_build.
Would be marginally cleaner to be able to say @task(name='build'), and then we can simply skip the extra Collection object.
This touches on the fact that in this case, where name-in-collection differs from name-in-Python-module, there's a mismatch between calling a task directly (_build()) and claiming it as a pre-task (@task(pre=['build'])). Not sure there's much to be done there though...
EDIT: Well, the obvious answer is to say that calling via an explicit Collection API, e.g. Collection.execute("name"), is preferred over using __call__. Feels like we could offer both?
At the very least, we need a couple of toggles on our Popen subclass capable of telling it not to print stdout/stderr bytes.
Ideally this would be handled via logging, but I am 99% sure logging simply won't fit at the byte-by-byte level. See #15.
Like a moron, I wrote the new parsing stuff without remembering that the initial context needs to be able to influence the rest of the parse run. I.e. invoke --collection=foo task1 task2.
So we need things to look like this:
Since the "initial" context will no longer ever be followed by non-initial contexts, the API should probably change around a bit.
Also consider a 'rolling' approach where, instead of explicitly/externally running this 2-step parse, we allow contexts to alter the parser itself when they run (as per Fabric #391).
E.g. the initial context actually "runs" instead of just being a container for core options, and as part of that execution, loads + adds the task collection(s)' contexts to the parser.
Then regular tasks/contexts are able to similarly manipulate the list of tasks being executed (though note that this should be distinct from altering the argv being parsed -- objects > lists of strings.)
This approach suffers from a lot of problems (ties together things that really ought to be separate, confuses the act of parsing with the act of execution of tasks, etc) but is at least worth recording.
Invoke could also be usable for projects that are trying to write their own CLI interfaces. Therefore, it should be possible to call a method that parses the arguments and dispatches them to the right task.
More customization should be allowed in that script.
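A very rough sketch of what such a programmatic entry point might look like: parse an argv-style list and dispatch to the matching task. The dispatch function and its signature are hypothetical, invented here to illustrate the idea.

```python
def dispatch(argv, tasks):
    # Naive: treat the first token as the task name and the rest as
    # its positional arguments; a real version would parse flags too.
    name, args = argv[0], argv[1:]
    return tasks[name](*args)

tasks = {"greet": lambda who: "hello, %s" % who}
dispatch(["greet", "world"], tasks)  # 'hello, world'
```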
Find a decent way to indicate, on a task or automatically, that e.g. invoke taskname --val=5 should result in a Python value 5 instead of "5", and of course similar for bools, lists, etc.
Possibilities:
Explicit on the CLI level:
invoke mytask --wants-an-int int:5 --wants-a-list=list:"a;b;c"
Explicit on the task declaration level (off the cuff API, but you get the gist):
@task(argspec={'wants_an_int': int, 'wants_a_list': lambda x: x.split(';')})
def mytask(wants_an_int, wants_a_list):
# ...
Implicit via conventions, e.g. try/except casting to int and/or float; anything containing semicolons gets split()'d; y/yes/etc. map to booleans -- though now that we're using --real --flags, booleans happen for free.
Explicit via helper functions -- obviously int/float already exist, so maybe a helper like def listarg(x): return x.split(';').
Right now, I'm leaning towards the explicit-at-task-declaration level, possibly with implicit-via-conventions as well. Very torn on whether the implicit should be on by default -- it fits the "get stuff done fast" use case but clashes with "don't be surprising".
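The implicit-via-conventions option could look something like the sketch below. This is purely illustrative (the name coerce and the precedence rules are assumptions), but it shows both the convenience and the surprise potential.

```python
def coerce(value):
    # Conventions from above: semicolons imply a list; otherwise try
    # int, then float; fall back to the raw string.
    if ";" in value:
        return value.split(";")
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

coerce("5")      # 5
coerce("a;b;c")  # ['a', 'b', 'c']
```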
Was Fabric #69
Scenario A: somebody, somehow, accidentally ends up trying to Collection.add_task('foo', foofunc) twice in a row (with foofunc being identical, or not; it doesn't really matter). Should raise an error by default, but probably doesn't now. Add a test.
Scenario B: somebody explicitly wants to override an existing task, e.g. they're dealing with an upstream tasks module and want to customize behavior. They may want to do a not-quite-inheritance method of overriding by defining a function referencing the original task, and have their own function replace the original inside the collection.
(Yes, having true inheritance based class stuff is more ideal, but it feels like users should be capable of replacing individual tasks within a collection if they flip some option/arg enabling this kind of power-user feature.)
General needs (should tie into Collection):
- Dotted names (e.g. foo.bar), both on the CLI and in e.g. pre/post
- --list
- invoke --list --root foo.bar to display only the subtree beginning with foo.bar
- A list task that does e.g. show_tasks('user') or whatever?

Right now the following task declaration "skips ahead" to avoid conflicts, which means it's tied to argument order, which means it can change sort of unexpectedly if you have second thoughts about your task function's signature:
@task
def mytask(foo, bar, biz): pass
As-is, that aliases -f to --foo, -b to --bar and -i to --biz (since -b is already taken).
If one switched the task to become e.g. mytask(foo, biz, bar), the shortflags would change, with -b pointing to --biz instead of --bar, and with -i disappearing, replaced by -a. While conscientious users should be aware of this possibility, it feels like easy foot-shooting.
Ways to avoid this: e.g. -f.
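The first-free-letter assignment described above can be sketched like this (my reconstruction of the behavior, not Invoke's actual implementation), which makes the reordering hazard concrete:

```python
def auto_shortflags(argnames):
    # Assign the first unclaimed letter of each name, in declaration
    # order -- which is exactly why reordering arguments shifts the map.
    taken, mapping = set(), {}
    for name in argnames:
        for char in name:
            if char not in taken:
                taken.add(char)
                mapping[name] = char
                break
    return mapping

auto_shortflags(["foo", "bar", "biz"])  # {'foo': 'f', 'bar': 'b', 'biz': 'i'}
auto_shortflags(["foo", "biz", "bar"])  # {'foo': 'f', 'biz': 'b', 'bar': 'a'}
```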
I.e. instead of just inv --list spitting out the current default flat list, allow things like inv --list nested for a nested/indented display, inv --list=foo to just list the tasks within namespace foo, etc. ad infinitum.
LATER EDITS: so a lot of strongly related tickets have popped up and there seem to be two major currents:
Seems unwise/impossible to shoehorn all of that under a single optional value to --list, so I'm sure we'll grow some additional flags either way, with the main question being which (if any) such option deserves to live as the optional --list value.
Brainstorm:
- --list-format (flat|nested|machine) (what term do other tools use for "machine-readable" anyway? Can't remember offhand... LATER EDIT: actually these days we often just use JSON for that and call it e.g. --list-format=json), or it could be a handful of booleans like --list-nested, --list-machine.
- Namespace/root selection (there's already a --root flag which changes where tasks files are sought from, but perhaps we should be, well, namespacing those as well, so they'd be renamed to e.g. --discovery-root) and depth, and I think that's it, since those together can enable just about anything:
- inv --namespace foo --list spitting out task names like bar and baz instead of foo.bar and foo.baz should be relying on "each Python module can be its own standalone namespace" and doing things like inv --collection=namespace --list instead (see Collection loading.)
- Using --collection automatically ensures that everything (display, invocation, etc.) instantly works as you'd expect.
- foo.bar as an argument to --collection, even though, between you and me, foo.bar isn't visible on the Python import paths.
- "(root), foo and bar"
"E.g. obtaining a task from a Collection by its name; given the existing emphasis on dict-like syntaxes elsewhere, it feels like it'd make sense to take this step.
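A toy sketch of what dict-style lookup on a Collection could feel like; this is an invented minimal class, not the real invoke.collection.Collection.

```python
class Collection:
    # Sketch of dict-like access, i.e. collection['taskname'].
    def __init__(self):
        self.tasks = {}

    def add_task(self, name, func):
        self.tasks[name] = func

    def __getitem__(self, name):
        return self.tasks[name]

c = Collection()
c.add_task("build", lambda: "built")
c["build"]()  # 'built'
```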
🚨 TODO: copy in stuff from the Fabric ticket 🚨
Might be nice to have the API for "I depend on these other tasks running first" be *args in @task, e.g. @task(other1, other2, kwargs=here). Make-like, simple, and *args aren't sensible otherwise (e.g. to call the existing things intended to be kwargs). Props to @mattrobenolt for the idea.
Like set +x, or how Fabric does its [hostname] run: command here lines. Probably off by default?
Experiment with this -- we have enough tests for the current setup in place and passing that it should be obvious whether it will work or not.
Re #41
I didn't tackle this right off so as not to overcomplicate v1.0, but I could see some additional tweaks/options/flavors to how we handle deduplication:
- $ invoke foo foo being deduped might be confusing.
- make foo foo will run foo once and then say foo is up to date.

Left a mostly-working help system in a branch. First obvious thing that still needs work: a useful column formatter. Probably vendorize clint or similar?
Using default values on anything other than 'bool' type arguments causes a ParseMachine error.
Context argument:
Argument(
names=('setup', 'S'),
default='~/.stitches',
help="Setup 'DIR' as stitches root, defaults to '~/.stitches'."
)
Output:
$ stch -S
DEBUG:stitches:Base argv from sys: ['-S']
DEBUG:stitches:Parsing initial context (core args)
DEBUG:invoke:Initialized with context: <Context: {'root': <Argument: root (r)>, 'setup': <Argument: setup (S)>, 'version': <Argument: version (V)>, 'list': <Argument: list (l)>, 'help': <Argument: help (h)>}>
DEBUG:invoke:Available contexts: {}
DEBUG:invoke:Wrapping up context None
DEBUG:invoke:Starting argv: ['-S']
DEBUG:invoke:Handling token: '-S'
DEBUG:invoke:Saw flag '-S'
DEBUG:invoke:Moving to flag <Argument: setup (S)>
DEBUG:invoke:ParseMachine: 'context' => 'end'
Flag <Argument: setup (S)> needed value and was not given one!
This led me to:
invoke/parser/__init__.py:200
def complete_flag(self):
if self.flag and self.flag.takes_value and self.flag.raw_value is None:
self.error("Flag %r needed value and was not given one!" % self.flag)
My quick fix:
def complete_flag(self):
if self.flag and self.flag.takes_value and self.flag.raw_value is None and self.flag.default is None:
self.error("Flag %r needed value and was not given one!" % self.flag)
After that it passes, but there is no easy way to tell (that I can see) whether a flag value was set from the command line or the default was used. Where that becomes a problem is if I am trying to use, for example:
if args.setup.value:
    # do stuff
As the argument always has a value (it falls back to the 'default' attribute, and the 'args' Lexicon lists all the context flags regardless of whether they were passed or not). Maybe a 'was_passed' attribute on the argument itself?
For now I can check back against the originally passed args.
Thanks!
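The 'was_passed' idea above could be sketched roughly as follows. This is a minimal invented class (the attribute is named got_value here), not Invoke's real Argument.

```python
class Argument:
    # Sketch: 'got_value' plays the role of the proposed 'was_passed'
    # marker, distinguishing parsed-in values from the default.
    def __init__(self, name, default=None):
        self.name = name
        self.default = default
        self._value = None
        self.got_value = False

    @property
    def value(self):
        # Fall back to the default only when nothing was parsed in.
        return self._value if self.got_value else self.default

    @value.setter
    def value(self, val):
        self._value = val
        self.got_value = True

arg = Argument("setup", default="~/.stitches")
arg.value      # '~/.stitches' (the default; got_value stays False)
arg.value = "/tmp/st"
arg.got_value  # True
```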
Right now disabling positionals requires @task(positional=[]), which is pretty fugly.
Slightly better would be None, but that's the default, implying auto-positionals. (Could maybe make a legit sentinel for the default, like we've done elsewhere, then use None to mean disabling...)
Or we could add another kwarg like no_positionals, but then there's the usual "what happens if both are given" problem.
Or could rely on global setting added in #27 but I'm kind of sick of that being the only way to configure things in my software.
So far that first option actually sounds cleanest.
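The sentinel approach mentioned above is a standard Python pattern; a minimal sketch (names and return values invented for demonstration):

```python
# A private marker lets us distinguish "caller said nothing" (use
# auto-positionals) from an explicit None (disable positionals).
_AUTO = object()

def task(positional=_AUTO):
    if positional is _AUTO:
        return "auto-positionals"
    if positional is None:
        return "positionals disabled"
    return "explicit: %r" % (positional,)

task()      # 'auto-positionals'
task(None)  # 'positionals disabled'
```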
Existing examples: Fabric's _AttributeDict (object attribute access on top of regular dict behavior) and _AliasDict (allowing N-1 relationships from keys to values, i.e. aliasing).
Need in Invoke: probably for config objects, similar to how Fabric uses _AttributeDict now, and to handle task/argument/etc. aliasing (as _AliasDict is used now). Would be best to merge both features and make the result public.
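A toy merge of the two behaviors might look like this. It is illustrative only (not Fabric's classes or the vendored Lexicon), just enough to show attribute access and key aliasing coexisting.

```python
class AliasedAttrDict(dict):
    # Combines attribute access (_AttributeDict-style) with N-1 key
    # aliasing (_AliasDict-style).
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self.aliases = {}

    def alias(self, from_, to):
        self.aliases[from_] = to

    def __getitem__(self, key):
        # Follow one level of aliasing before the real lookup.
        return dict.__getitem__(self, self.aliases.get(key, key))

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

d = AliasedAttrDict(host="example.com")
d.alias("h", "host")
d["h"]  # 'example.com'
d.host  # 'example.com'
```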
Fabric needs on top of what's there now:
Ideally, find prior art and use that instead. Possible candidates:
Also maybe related:
It apparently uses 2to3 in setup.py; we need a sixified version that I can re-vendor
There are just too many levels of tests now; I tend to end up writing one for each level of the stack, and that's just too much repetition, both in terms of actual tests and setup for tests.
E.g. I have at least 3-4 spots where I set up a collection w/ top-level tasks + subcollection + aliases + default tasks, all in the same way (or different ways! like in-code vs loading a _support module), and then make the same assertions in e.g. CLI parsing, Collection.to_contexts, Collection.task_names, etc.
In that example we could at least consolidate all the setup bits into the same module, and then remove e.g. the to_contexts or task_names tests (task_names is really just an impl detail at this point).
TDD is great and all but once the tests are done I have to remember to go remove some, otherwise any changes to behavior require altering far too many tests.
There appears to be an issue surrounding the variable names I use to define base and namespaced collections. The following code in 'tasks.py':
from invoke import Collection, task, run
@task
def release():
run("ls")
@task
def example():
run("ls")
base = Collection()
base.add_task(release)
ns = Collection('name')
ns.add_task(example)
base.add_collection(ns)
Only outputs the 'name' collection, and uses it as the base:
$ invoke --list
DEBUG:invoke:Base argv from sys: ['--list']
DEBUG:invoke:Parsing initial context (core args)
DEBUG:invoke:Initialized with context: <Context: {'help': <Argument: help (h)>, 'list': <Argument: list (l)>, 'collection': <Argument: collection (c)>, 'version': <Argument: version (V)>, 'root': <Argument: root (r)>, 'no-dedupe': <Argument: no-dedupe>}>
DEBUG:invoke:Available contexts: {}
DEBUG:invoke:Wrapping up context None
DEBUG:invoke:Starting argv: ['--list']
DEBUG:invoke:Handling token: '--list'
DEBUG:invoke:Saw flag '--list'
DEBUG:invoke:Moving to flag <Argument: list (l)>
DEBUG:invoke:Marking seen flag <Argument: list (l)> as True
DEBUG:invoke:ParseMachine: 'context' => 'end'
DEBUG:invoke:Wrapping up context None
DEBUG:invoke:After core-args pass, leftover argv: []
DEBUG:invoke:No collection given, loading from None
DEBUG:invoke:Adding <Context 'example'>
DEBUG:invoke:Parsing actual tasks against collection <invoke.collection.Collection object at 0x10dd592d0>
DEBUG:invoke:Initialized with context: None
DEBUG:invoke:Available contexts: {'example': <Context 'example'>}
DEBUG:invoke:Wrapping up context None
DEBUG:invoke:Starting argv: []
DEBUG:invoke:ParseMachine: 'context' => 'end'
DEBUG:invoke:Wrapping up context None
Available tasks:
example
But when I swap the variable names around (ns <--> base) like so:
from invoke import Collection, task, run
@task
def release():
run("ls")
@task
def example():
run("ls")
ns = Collection()
ns.add_task(release)
base = Collection('name')
base.add_task(example)
ns.add_collection(base)
I get what I expect on output:
$ invoke --list
DEBUG:invoke:Base argv from sys: ['--list']
DEBUG:invoke:Parsing initial context (core args)
DEBUG:invoke:Initialized with context: <Context: {'help': <Argument: help (h)>, 'list': <Argument: list (l)>, 'collection': <Argument: collection (c)>, 'version': <Argument: version (V)>, 'root': <Argument: root (r)>, 'no-dedupe': <Argument: no-dedupe>}>
DEBUG:invoke:Available contexts: {}
DEBUG:invoke:Wrapping up context None
DEBUG:invoke:Starting argv: ['--list']
DEBUG:invoke:Handling token: '--list'
DEBUG:invoke:Saw flag '--list'
DEBUG:invoke:Moving to flag <Argument: list (l)>
DEBUG:invoke:Marking seen flag <Argument: list (l)> as True
DEBUG:invoke:ParseMachine: 'context' => 'end'
DEBUG:invoke:Wrapping up context None
DEBUG:invoke:After core-args pass, leftover argv: []
DEBUG:invoke:No collection given, loading from None
DEBUG:invoke:Adding <Context 'release'>
DEBUG:invoke:Adding <Context 'name.example'>
DEBUG:invoke:Parsing actual tasks against collection <invoke.collection.Collection object at 0x10ddab210>
DEBUG:invoke:Initialized with context: None
DEBUG:invoke:Available contexts: {'release': <Context 'release'>, 'name.example': <Context 'name.example'>}
DEBUG:invoke:Wrapping up context None
DEBUG:invoke:Starting argv: []
DEBUG:invoke:ParseMachine: 'context' => 'end'
DEBUG:invoke:Wrapping up context None
Available tasks:
release
name.example
EDIT: Added output with debug on
Most of the API is showing up in the autodoc now, but there are probably missing bits. Make sure anything that should be public API is displayed; make sure any impl details are hidden; etc.
Dislike using .value all over the parse code; decide if it's worth the magic to implement __str__, __nonzero__, etc. so one can simply say e.g. args.name vs args.name.value (for e.g. def mytask(name=xxx)).
This allows us to place stuff in __init__.py for a compact API import, while both letting setup.py access version stuff and letting the installed package access the version with a normal import.
Could also just move to a pure data format like JSON (given it's stdlib in 2.6) but that cuts out any possibility of doing algorithmic things in the module. Which I'd like to avoid anyway, but still.
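The split-out-version-module pattern described above is commonly done like this; file contents and the version number here are hypothetical, shown inline as a string so the sketch is self-contained.

```python
# Contents of a hypothetical invoke/_version.py: pure data, so setup.py
# can exec() the file's text without importing (and executing) the package.
version_module = (
    "__version_info__ = (0, 1, 0)\n"
    "__version__ = '.'.join(map(str, __version_info__))\n"
)

# setup.py side: obtain the version without a package import.
namespace = {}
exec(version_module, namespace)
namespace["__version__"]  # '0.1.0'
```

The installed package meanwhile just does a normal import, e.g. from invoke._version import __version__ inside __init__.py.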
(Was: Deal with Fluidity dependency)
This desc is out of date, see this comment below for new game plan.
Original desc follows.
We're using Fluidity for state machine fun, but their packaging is FUBARed right now: version 0.2.1 has a useful-for-debugging hook not in 0.2.0, but 0.2.1 is not actually tagged or on PyPI, so it's not possible (AFAIK) to obtain it via a regular pip install.
I'll ping the devs there but if they are unable to comply, options are:
Scenario:
- tasks.py in my project
- from invocations.docs import * to obtain those tasks as top-level tasks in my own project's collection.

Expected:
Found:
At some point invocations.docs grew a custom top-level namespace, mostly so it could control what it exported to importers who use it in a custom Collection (e.g. Collection(stuff, here, also, docs)).
However, this means that the star import added that ns object to the local tasks module, which then became "the" local custom namespace, thus ignoring my local tasks.
Solutions:
- Have the Collection.load_module(x) call only load namespace collections defined in x and not imported into x. Obvious/naive option.

- OSError and regular test Failures
- expect-dev's unbuffer, socat, and coreutils' (>=7.5) stdbuf, but none of those help on Travis (though some make things "pass" by masking the return value...)
- pexpect.spawn().interact() dying when trying to handle an EOF.
- Info from the spawn() object (exit code, signal code, isalive, etc.) seems sporadic and not super helpful
- OSErrors which contain the Input/output error text and when exitcode is non-None? But see above -- sometimes we get the errors even when exitcode is non-None too :(
- (inv test vs spec) also exhibits the same errors -- it's not just the test suite.
- run("echo whatever", pty=True)
When running run("python"):
- >>> print 'hm' prints the next line's prompt + "hm" on the same line.
- print >>sys.stderr, 'hm' correctly deals w/ newlines
- Arrow keys yield ^[[A (control chars) instead of interacting with readline.

Neither of these problems exists when pty=True. I feel like this is the same as in Fabric and any other solution where a pty is not the default behavior. At the very least we should make this explicit in the docs, probably via an FAQ.
...and this makes me mad.
What used to be here is now over in #57...
Would be nice to globally/temporarily set certain default-but-configurable behaviors to specific values.
E.g. in our own test suite, when #23 landed I had to add a bunch of positional=[] args to @task. Would be nice to be able to set that at the test-suite level or something.
Main hurdle is simply that we want to do away with module level bullshit so this needs to be effected some other way; possibly nothing we support ourselves besides "well, just do this:"
from functools import partial
from invoke.task import task
task = partial(task, positional=[])
In which case this would just become a "apply that technique to our own tests."
Turning this into a general "how to do logging and useful output controls for invoke and fabric 2" ticket.
Thoughts for now, jumping off of the description of fabric/fabric#57 & current state of things in Invoke master:
- invoke.utils.debug (w/ setup to add other levels easily) that logs to a package-level logger. Could probably be expanded some; it's mostly used in the parsing and config modules.
- When run is called, on which host (in Fabric), with which command (tho this may want truncating...), run-time, etc -- but not the stdout/stderr contents (though sizes of those, similar to HTTP log formats, is probably good).
- Result objects.
- run's stream arguments, and the ability to configure logging to taste.
Right now stdout/stderr is printed in our Popen subclass byte-by-byte to allow interaction etc. However, eventually we'll want real logging, meaning an ability to throw things line-by-line into a log library, meaning an extra buffer at the Popen level that does additional Things with lines as they are tallied up.
I suppose there should be an option to turn off the bytewise in favor of just linewise, but that could be a 2nd tier thing or saved for when we start layering Fab 2 on top of this.
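That extra line-accumulating buffer could be sketched as below; the class name and callback shape are assumptions, not the real Popen subclass internals.

```python
class LineBuffer:
    # Bytes stream through one at a time (preserving interactivity);
    # complete lines are handed to a callback, e.g. a logger.
    def __init__(self, on_line):
        self.on_line = on_line
        self.buf = ""

    def feed(self, char):
        if char == "\n":
            self.on_line(self.buf)
            self.buf = ""
        else:
            self.buf += char

lines = []
lb = LineBuffer(lines.append)
for ch in "one\ntwo\n":
    lb.feed(ch)
lines  # ['one', 'two']
```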
Many tests that are testing specific behavior of e.g. run() take the easy route and call e.g. run('true', xxx), then introspect the result. This of course runs an actual subprocess and makes for a slower test suite ("only" ~4s right now, but it will keep growing). Some even spawn a subprocess that spawns another subprocess! (E.g. the end of #21's new tests do this to get around pty stuff.)
Only the highest level sanity/integration tests really need to do this; the rest should be able to work with a dependency-injected mock object that behaves as needed w/o spawning a real subprocess.
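A minimal sketch of the dependency-injection idea, with invented names (Result, the runner callable); the real refactor would factor the pexpect/subprocess chunk behind a similar seam.

```python
class Result:
    # Stand-in result object; attribute names here are assumptions.
    def __init__(self, stdout="", exited=0):
        self.stdout = stdout
        self.exited = exited

def run(command, runner=None):
    # The injected runner does the actual work; tests hand in a fake
    # one, so no real subprocess is ever spawned.
    if runner is None:
        raise NotImplementedError("real pexpect/subprocess runner goes here")
    return runner(command)

fake = lambda cmd: Result(stdout="hello\n", exited=0)
run("echo hello", runner=fake).stdout  # 'hello\n'
```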
Required:
- run to accept a runner argument & factor out the pexpect/subprocess-using chunk of the current function body.
- Tests calling run(runner=<mock>).

Ken's Envoy project has a similar run()
to what Invoke will have, and somebody raised the point about making it easy for users to be secure against injection/quoting attacks, by adding a DB-API like parameterization format.
The link: not-kennethreitz/envoy#16
I agree with Ken's points therein that the point of these tools is ease of use, not correctness, but also with the "let's offer users the option of correctness if they want it" corollary.
So, check out that pypi'd package linked at the end of the discussion and see if it makes sense to integrate.
EDIT: cut to chase, link is http://shell-command.readthedocs.org/en/latest/ . Untouched since initial release, so now ~3 years old.
Strongly related here is the idea of exposing Popen's alternate list-of-parameters format for its initial argument, i.e. not this:
run("foo --the-bars plz")
but instead, this:
run(['foo', '--the-bars', 'plz'])
Some users find this easier than trying to handle/escape complex all-in-one command strings.
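For context, the stdlib already covers the two halves of this: shlex.split turns a command string into the list form, and shlex.quote makes an interpolated value shell-safe. (Using shlex here is my suggestion for illustration, not something the issue itself proposes.)

```python
import shlex

# The list form sidesteps shell quoting entirely:
shlex.split("foo --the-bars plz")  # ['foo', '--the-bars', 'plz']

# And shlex.quote() guards values interpolated into a command string:
dangerous = "some file; rm -rf /"
"ls %s" % shlex.quote(dangerous)  # the value arrives as one safe token
```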
See also: Fabric #69
Need to allow parameterization of tasks as invoked via the CLI (tasks invoked via Python are simply passing regular args/kwargs or their list/dict counterparts.) Sub-problem: translating strings/flag existence into Python types such as lists, booleans etc.
Current implementation in Fabric is fab taskname:positional_arg_value,keyword_arg=kwarg_value,... and some other tools use "regular" CLI flags, e.g. paver taskname -F value.
Pluses:
Minuses:
Pluses:
Minuses:
- invoke task1 --arg1 --next task2 --arg2 (again, can argparse do this?)
- -c/--collection
- A core --foo flag: if a user declares mytask(foo='bar') and then we unknowingly add --foo to core in 1.2, that user's code will break.
- make uses shell environment variables, which is similar to how Thor treats flags -- every task is going to get the same values for any overlapping names.
- taskname[args, here] instead of taskname:args,here. I find this actually worse than the colon we use, because some shells like zsh treat square brackets specially and they need extra escaping/quoting.

AKA def foo(blah_whatever): should be invocable as inv foo --blah-whatever. Probably controllable.
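The translation itself is trivial; a sketch (helper name invented):

```python
def to_flag(name):
    # Python kwarg name -> CLI flag form, e.g. blah_whatever -> --blah-whatever.
    return "--" + name.replace("_", "-")

to_flag("blah_whatever")  # '--blah-whatever'
```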
In the same way that one can rename tasks to have a different name (eg to work around builtin shadowing for function names), it should probably be possible to do the same for arguments.
This requires something like @task(arg_names={'realname': 'desiredname'}) or similar.
Ties in strongly with #34 re: adding extra names/aliases to kwargs. Maybe require some holistic "arg spec" option (though that then requires some duplication if you don't want to rename everything.)
Doesn't hurt the "be really concise" use case, but more intuitive for folks who want to be more explicit or who simply think "I wanna hide 'stdout'".
Keep getting tripped up by the "non-defaulted function args do not become valid flags" implication of #23. Even if/when #27 is implemented, I can see it tripping users up (if it trips up the author, it's gonna trip up the users!)
I am not 100% sure we can't have + eat our cake; we still cannot have arbitrary numbers of positional args, but within the "we know whether or not the next non-flag thing is a posarg or a task name" framework, it should be possible to "chip away" at positional args by giving them as flags.
For example, imagine this task:
@task
def mytask(foo, bar, biz='baz'):
    pass
Currently, as of #23, the Context created for this task looks as follows:
- foo is a positional argument, and is in .args but not .flags
- bar: same
- biz is a nonpositional argument and is in both .args and .flags
When parsing occurs, the parser knows it cannot skip to the next task boundary until both foo and bar have been filled in, by virtue of Context.needs_positional_arg. That's the only actual test.
Right now we explicitly test to make sure positionals are not available as flags (i.e. they're not added to the flags data structure, and thus --a-positional-arg value is actually invalid). I do not remember why this is, but hopefully tweaking it and running the test suite will remind me.
Assuming it was just tripping up something dumb in lazy tests, I'll probably just remove that blockade and ensure that positional args given as flags are removed from the positional arg "queue". Thus, in the above example, if we parsed invoke mytask --foo=foovalue barvalue, it would remove foo from the positionals list; then when it saw barvalue, it would correctly associate it with bar.
Finally: this does add conceptual complexity to the parser, but I think it's worth it to avoid the more irritating pitfall of "I didn't give this arg a default value, why can't I give it as a flag now?"
E.g. the return value of @task should be the original function with a new attribute added to it, not a Task object. This preserves the ability to call tasks from other tasks without jumping through hoops.
Alternately, implement __call__?
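The mark-and-return approach is a one-liner decorator; a sketch (the attribute name is invented):

```python
def task(func):
    # Return the original function, just marked, instead of wrapping it
    # in a Task object -- so tasks stay directly callable from tasks.
    func.is_task = True
    return func

@task
def build():
    return "built"

build()        # still a plain function call
build.is_task  # True
```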
[Ed: was 'invoke doesn't run commands on Windows']
runner.py imports pty which (through tty -> termios) isn't available on Windows.
@aliases('foo', 'bar') atop @task for simpler use; possibly just singular? (@task(alias='foo'))