Comments (18)
(Sidenote: good first issue is a "special" label that tools such as GitHub use to surface issues to first-time contributors. It seems not to work when the label name contains an emoji, which may or may not be okay.)
from docs.
Yeah, I could see sets for specific things like wip. That wouldn't be bad :)
from docs.
I don't have time at the moment to list them all out, but Python is using a slightly different list that we'd like to keep. I don't think we deviate too far from JS or Julia (I cribbed most of the tags from JS)...but I have added/altered some. I am also rather attached to my emoji and color choices. Rather than use a tag for V3, we're using an issue title/project/milestone combo. The jury is still out on its effectiveness.
We added:
- claimed 🐾
- in progress 🐿️
- new ___ ✨ (we added documentation and reference doc as categories)
- improve ___ (one each for exercises, test cases, documentation, and reference documents)
- draft
We also have/had these, which I added emoji to:
- maintainer action required ❗
- question ❓
- pinned 📌
- awaiting review & please review
- good first issue 🐣 & beginner friendly 🐣
- first-timers only 🐥
- good first patch
- spam 🚫
- on hold ✋🏽
- enhancement 📦 ✨
- abandoned 🗑️

We make a distinction between chore 🔧 (a maintenance task that crosses multiple exercises/docs/tests/etc.) vs. maintainer chore 🚧 (a track-wide maintenance task that needs a maintainer to complete).
from docs.
I'm pretty averse to changing what JavaScript (and TypeScript, and all the tooling) has been using for the past few years, so the proposal needs to be incredibly solid for me to give it my support.
from docs.
Basically this: https://github.com/exercism/julia/blob/main/.github/labels.yml
I left invalid, help wanted, and good first issue without emoji, because they are special in tooling. The reason I'm a bit change-resistant is that various documents contain links to issue searches that include the emoji in the label name 💯.
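That concern is concrete: a search link for a label with an emoji in its name has to percent-encode the emoji's UTF-8 bytes, so renaming the label breaks every saved link. A minimal sketch of what those links look like (the helper name is mine, not from any Exercism tooling):

```python
from urllib.parse import quote

def label_search_url(repo: str, label: str) -> str:
    """Build a GitHub issue-search URL for one label.

    Emoji in the label name become percent-encoded UTF-8 bytes,
    which is why renaming the label invalidates saved links.
    """
    query = f'is:issue is:open label:"{label}"'
    return f"https://github.com/{repo}/issues?q=" + quote(query)

url = label_search_url("exercism/javascript", "bug 🐛")
# The 🐛 ends up as %F0%9F%90%9B in the URL.
```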
Here are the important ones:
- bug 🐛 : actual bug
- chore 🔧 : meta-related tasks such as build, test, linting, maintainers.json, etc.
- dependencies : changes / updates dependencies
- discussion 💬 : meta issues that don't change repository content by themselves
- do not merge 🚧 : like WIP, but can be applied by maintainers
- documentation : changes docs
- enhancement 📦 : changes non-docs
- experimental 🔬 : experimental / beta-like changes
- new exercise ✨
- new test case ✨
- security 🚨 : security vulnerability
- sync : basically tests.toml
- upstream ⬆️ : like dependencies, but marks an issue as "this is an issue caused by a dependency"
Then a few to mark something as invalid:
- duplicate
- invalid
- wontfix 🙅🏽‍♀️
I did for a hot minute think of doing type/x and status/y, but most of the time I needed these and just these. I purposefully don't have stale or need more information, or stuff like that.
Sascha added:
- config change ⚙️
- v3
To re-iterate, the following have been unchanged because GitHub (and other tooling) expects them to be named exactly this:
- invalid
- help wanted
- good first issue
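For reference, the linked labels.yml format expresses each of those labels as a name/color/description entry. A hypothetical excerpt (the hex colors here are invented for illustration, not the track's actual choices):

```yaml
# Hypothetical excerpt in the .github/labels.yml style; colors are invented.
- name: "bug 🐛"
  color: "d73a4a"
  description: "actual bug"
- name: "enhancement 📦"
  color: "a2eeef"
  description: "changes non-docs"
- name: "good first issue"   # no emoji: special in GitHub's tooling
  color: "7057ff"
  description: "good for newcomers"
```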
from docs.
We could use the action and the yml file to create a base set of labels that are synced across repos. Tracks could then add to the label file in their CI script. Technically they could also remove standard labels in the same way, but that could be prevented by setting the right codeowners for the workflow.
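As a sketch of that setup (micnncim/action-label-syncer is one action that consumes such a manifest; treat the branch and path details as illustrative):

```yaml
# Illustrative workflow: re-sync repo labels whenever the manifest changes.
name: Sync labels
on:
  push:
    branches: [main]
    paths: [.github/labels.yml]
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: micnncim/action-label-syncer@v1
        with:
          manifest: .github/labels.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Protecting .github/workflows/ and .github/labels.yml via a CODEOWNERS entry would then stop tracks from silently dropping the standard set.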
from docs.
@SleeplessByte - yeah. 😸 I was going to note that I might need to cull (who needs both "needs review" and "please review"?) or edit out the emoji in certain places -- especially if it breaks things like good first issue, which would defeat the whole purpose!
from docs.
Went through and did some housekeeping. Here's the Python set: https://github.com/exercism/python/blob/main/.github/labels.yml
from docs.
Julia uses these labels, which are based on the JS track but extended: https://github.com/exercism/julia/blob/main/.github/labels.yml
It uses an action to keep them in sync with the file.
from docs.
I'd really like to do this right. Primarily because I'd like to be able to do things with the labels across Exercism. For example, say "Want to check some grammar - check these issues". "Want to improve some stories - check these issues". And then let people filter by track etc. I think this could really open doors to early contributions.
[Insert general rant that people like doing different things and if we can make it really easy to contribute in the way that you want, rather than triaging first by the language to contribute to, I think we can get lots more contributions]
[Insert secondary rant about how we can have great templates/docs across exercises for how to do things, and labelling an issue can trigger an action that adds links to all those docs]
from docs.
@SleeplessByte Maybe the rest of us could benefit from adopting what JS has done. Do you have a document etc. describing it?
from docs.
Great. That's a great list. Thanks.
So the ones I'd be keen to add would be things that are for specific types of checks, e.g.:
- wip/proof-reading : Issues for exercises in a WIP state that need proof-reading
- wip/content-checking : Issues for exercises in a WIP state that require their content checking
- wip/story-writing : Issues for exercises in a WIP state that require their stories writing
- wip/adherence-to-style-guideline : Issues for exercises that require checking/bringing in line with the style guide (I'd probably create one of these for every exercise in Problem Specs, and then suggest one per Concept or bespoke exercise in tracks).
My thinking being that we can then create this "set" of issues for each new exercise that gets merged, and in some cases for existing exercises, and then crowd-source people who are specifically interested in those jobs. (This could happen via CI.)
Another one/set might be to do with documentation for example.
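One way to sketch that CI step (label names from the list above; the function and the payload shape are my assumptions, roughly matching what the GitHub issues API accepts):

```python
# Hypothetical sketch: build one issue payload per WIP check for a newly
# merged exercise; a CI job could then POST these to the GitHub issues API.
# Requires Python 3.9+ for str.removeprefix.
WIP_CHECKS = [
    "wip/proof-reading",
    "wip/content-checking",
    "wip/story-writing",
    "wip/adherence-to-style-guideline",
]

def wip_issues_for(exercise: str) -> list[dict]:
    """One issue per WIP check, labelled so contributors can filter by task."""
    return [
        {
            "title": f"[{exercise}] {check.removeprefix('wip/').replace('-', ' ')}",
            "labels": [check],
            "body": f"Tracking `{check}` for the `{exercise}` exercise.",
        }
        for check in WIP_CHECKS
    ]
```

Opening the whole set at merge time keeps each job small and lets people subscribe to just the kind of work they enjoy.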
from docs.
I like the WIP tags. Right now I am folding tasks like those into "improvement issues" as checkboxes (see Python issue #2342), but separating and tagging is probably a better strategy.
from docs.
@BethanyG Thanks :) I'm excited about getting a really great superset of all these.
from docs.
As an update, we've largely done this with the x: labels, which are consumed by the website.
Do people feel there is value in adding further labels not used by the website, or should we consider this done?
from docs.
I think there are people who won't interact with the website to look for issues/tasks. I also think there are tasks that for whichever reason might not end up listed on the website, but still need to be tracked or addressed. Not sure if those categories warrant exercism-wide labels, or if they're fine being labeled on a track-by-track basis.
Will there be explanations of the x tags and/or links in the exercism/exercism README, or another GitHub-facing doc? That might help to direct contributors to run/click on the appropriate labels or filters to get lists that parallel the site.
from docs.
> Will there be explanations of the x tags and/or links in the exercism/exercism README, or another GitHub facing doc? That might help to direct contributors to run/click on the appropriate labels or filters to get lists that parallel the site.
When the x: tags are added, a (bot) comment will be added to the issue explaining them, and then subsequently updated when the labels are updated. So someone looking at the issue shouldn't have to look anywhere else to know what to do (in terms of the high-level categories). That said, I'd imagine most issues have descriptions detailed enough to supersede most of these comments anyway, so it's really an added layer of continuity from the website.
from docs.
@BethanyG We have this page: https://github.com/exercism/docs/blob/main/product/tasks.md
from docs.