exercism / lfe

Exercism exercises in Lisp Flavoured Erlang (LFE).

Home Page: https://exercism.org/tracks/lfe

License: MIT License

Languages: Shell 1.77%, Erlang 21.27%, Makefile 11.41%, LFE 65.56%
Topics: exercism-track, lisp, unmaintained, community-contributions-paused

lfe's Introduction

Exercism LFE Track

Lisp Flavoured Erlang (LFE) is one of many programming language tracks on exercism.org. This repo holds all the instructions, tests, code, and support files for LFE exercises that are currently under development or already implemented and available to students.

🌟 Track exercises and the test runner currently utilize LFE 2.1.3.

Currently, all exercises are open-ended practice exercises, intended for practising learned concepts, trying out new techniques, and, most importantly, playing around.

Contributing Guide

Here to suggest a new feature or a new exercise? We'd love it if you did that via our Exercism Community Forum.

Want to jump directly into the Exercism specifications and details?

lfe's People

Contributors

acook, adolfopa, agentofuser, angelikatyborska, austinlyons, benreyn, bnandras, cgrayson, defndaines, dependabot[bot], dkinzer, ejc123, erikschierboom, etrepum, exercism-bot, hvnsweeting, jimlynchcodes, kahgoh, kytrinyx, magthe, menketechnologies, milo-hyben, nobbz, oubiwann, petertseng, pfigue, robinhilliard, stevenproctor, tmcgilchrist, yurrriq

lfe's Issues

make test -> 0 tests

I wanted to give this track a try. To make sure I had the latest version of the exercise, I removed my leap folder, ran exercism f lfe leap, then cd ~/exercism/lfe/leap and make eunit.

After all the dependencies got pulled and compiled I see the following output:

===> Compiling leap
 ~~>    Finding .lfe files ...
===> Performing EUnit tests...

Finished in 0.006 seconds
0 tests

I'd expect to see something like "module leap not available" (I really don't know anything about LFE except that it is a Lisp compiled to the BEAM); something similar would have happened on the Erlang track in this situation. I'd also have expected to see 4 tests failing.
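
For context, a test module that the runner can pick up typically looks something like the hedged sketch below (assuming an ltest-style test module, which is one common LFE setup; the module and function names here are illustrative). If the test module is not compiled or not passed to rebar3's EUnit run, the result is a silent "0 tests" rather than a missing-module error.

;; Hedged sketch of a leap test module in an ltest-based layout.
(defmodule leap-tests
  (behaviour ltest-unit)
  (export all))

(include-lib "ltest/include/ltest-macros.lfe")

(deftest vanilla-leap-year
  (is-equal 'true (leap:leap-year 1996)))

(deftest any-old-year
  (is-equal 'false (leap:leap-year 1997)))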

[Important] The current website is about to enter maintenance mode to aid with v3 launch

TL;DR: At the end of Jan 2021, all tracks will enter v3 staging mode. Updates will no longer sync with the current live website, but will instead sync with the staging website. The LFE section of the v3 repo will be extracted and PR'd into this track (if appropriate). Further issues and information will follow over the coming weeks to prepare LFE for the launch of v3.

Over the last 12 months, we've all been hard at work developing Exercism v3. Up until this point, all v3 tracks have been under development in a single repository - the v3 repository. As we get close to launch, it is time for us to explode that monorepo back into the normal track repos. Therefore, at the end of this month (January 2021), we will copy each v3 track's contents from the v3 repository back to the corresponding track repository.

As v3 tracks are structured differently than v2 tracks, the current (v2) website cannot work with v3 tracks. To prevent the v2 website from breaking, we'll disable syncing between track repositories and the website. This will effectively put v2 in maintenance mode, where any changes in the track repos won't show up on the website. This will then allow tracks to work on preparing for the Exercism v3 launch.

Where possible, we will script the changes needed to prepare tracks for v3. For any manual changes that need to be happening, we will create issues on the corresponding track repositories. We will be providing lots of extra information about this in the coming weeks.

We're really excited to enter the next phase of building Exercism v3, and to finally get it launched! 🙂

bob: Update to clarify ambiguity regarding shouted questions

TL;DR: the problem specification for the Bob exercise has been updated. Consider updating the test suite for Bob to match. If you decide not to update the exercise, consider overriding description.md.


Details

The problem description for the Bob exercise lists four conditions:

  • asking a question
  • shouting
  • remaining silent
  • anything else

There's an ambiguity, however, for shouted questions: should they receive the "asking" response or the "shouting" response?

In exercism/problem-specifications#1025 this ambiguity was resolved by adding an additional rule for shouted questions.

If this track uses exercise generators to update test suites based on the canonical-data.json file from problem-specifications, then now would be a good time to regenerate 'bob'. If not, then it will require a manual update to the test case with input "WHAT THE HELL WERE YOU THINKING?".

See the most recent canonical-data.json file for the exact changes.

Remember to regenerate the exercise README after updating the test suite:

configlet generate . --only=bob --spec-path=<path to your local copy of the problem-specifications repository>

You can download the most recent configlet at https://github.com/exercism/configlet/releases/latest if you don't have it.

If, as track maintainers, you decide that you don't want to change the exercise, then please consider copying problem-specifications/exercises/bob/description.md into this track, putting it in exercises/bob/.meta/description.md and updating the description to match the current implementation. This will let us run the configlet README generation without having to worry about the bob README drifting from the implementation.
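
For illustration only, here is a hedged sketch of how the extra rule slots into the usual four-way dispatch. The module layout, the helper predicates shouting? and question?, and the response strings are assumptions, not this track's reference solution.

;; Hedged sketch: the shouted-question branch must come before the plain
;; shouting and question branches.
(defmodule bob
  (export (response-for 1)))

(defun response-for (prompt)
  (let ((trimmed (string:trim prompt)))
    (cond ((=:= "" trimmed) "Fine. Be that way!")
          ((andalso (shouting? trimmed) (question? trimmed))
           "Calm down, I know what I'm doing!")
          ((shouting? trimmed) "Whoa, chill out!")
          ((question? trimmed) "Sure.")
          ('true "Whatever."))))

(defun shouting? (prompt)
  ;; all-caps and contains at least one letter
  (andalso (=:= (string:uppercase prompt) prompt)
           (=/= (string:lowercase prompt) prompt)))

(defun question? (prompt)
  (lists:suffix "?" prompt))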

Pass explicit list of multiples in "Sum of Multiples" exercise rather than defaulting to 3 and 5

Hello, as part of exercism/problem-specifications#198 we'd like to make the sum of multiples exercise less confusing. Currently, the README specifies that if no multiples are given it should default to 3 and 5.

We'd like to remove this default, so that a list of multiples will always be specified by the caller. This makes the behavior explicit, avoiding surprising behavior and simplifying the problem.

Please make sure this track's tests for the sum-of-multiples problem do not expect such a default. Any tests that want to test behavior for multiples of [3, 5] should explicitly pass [3, 5] as the list of multiples.

After all tracks have completed this change, then exercism/problem-specifications#209 can be merged to remove the defaults from the README.

The reason we'd like this change to happen before changing the README is that the default behavior was very confusing for students to figure out. It wasn't clear from simply looking at the tests that the default should be 3 and 5, as seen in exercism/exercism#2654, so some had to resort to looking at the example solutions (which aren't served by exercism fetch, so they had to find them on GitHub). The default was added to the README to fix this confusion, but now we'd like the tests to be explicit so we can remove the default line from the README.

You can find the common test data at https://github.com/exercism/x-common/blob/master/sum-of-multiples.json, in case that is helpful.
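
As a hedged illustration, an updated test in an ltest-style module might look like the sketch below; the module and function names are assumptions, not this track's actual API.

;; Hedged sketch: the multiples (3 5) are passed explicitly rather than being a default.
(defmodule sum-of-multiples-tests
  (behaviour ltest-unit)
  (export all))

(include-lib "ltest/include/ltest-macros.lfe")

(deftest multiples-of-3-and-5-up-to-20
  ;; 3 + 5 + 6 + 9 + 10 + 12 + 15 + 18 = 78
  (is-equal 78 (sum-of-multiples:sum-of-multiples '(3 5) 20)))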

Moving from Travis to GitHub Actions

Hello 🙂

Over the last few months we've been transferring all our CI from Travis to GitHub Actions (GHA). We've found that GHA are easier to work with, more reliable, and much much faster.

Based on our success with GHA and increasing intermittent failures on Travis, we have now decided to try and remove Travis from Exercism's org altogether and shift everything to GHA. This issue acts as a call to action if your track is still using Travis.

For most CI checks this should be a straightforward transposition from Travis syntax to GHA syntax (see this PR for an example). However, if you do encounter any issues doing this, please ask on Slack, where lots of us now have experience with GHA, or post a comment here and I'll tag relevant people. This would also make a good Hacktoberfest issue for anyone interested in making their first contribution 🙂

If you've already switched this track to GHA, please feel free to close this issue and ignore it.

Thanks!

Ensure LFE track is ready for v2 launch

There are a number of things we're going to want to check before the v2 site goes live. There are notes below that flesh out all the checklist items.

  • The track has a page on the v2 site: https://v2.exercism.io/tracks/lfe
  • The track page has a short description under the name (not starting with TODO)
  • The "About" section is a friendly, colloquial, compelling introduction
  • The "About" section follows the formatting guidelines
  • The code example gives a good taste of the language and fits within the boundaries of the background image
  • There are exercises marked as core
  • Exercises have rough estimates of difficulty
  • Exercises have topics associated with them
  • The first exercise is auto_approve: true

Track landing page

The v2 site has a landing page for each track, which should make people want to join it. If the track page is missing, ping @kytrinyx to get it added.

Blurb

If the header of the page starts with TODO, then submit a pull request to https://github.com/exercism/lfe/blob/master/config.json with a blurb key. Remember to get configlet and run configlet fmt . from the root of the track before submitting.

About section

If the "About" section feels a bit dry, then submit a pull request to https://github.com/exercism/lfe/blob/master/docs/ABOUT.md with suggested tweaks.

Formatting guidelines

In order to work well with the design of the new site, we're restricting the formatting of the ABOUT.md. It can use:

  • Bold
  • Italics
  • Links
  • Bullet lists
  • Number lists

Additionally:

  • Each sentence should be on its own line
  • Paragraphs should be separated by an empty line
  • Explicit <br/> can be used to split a paragraph into lines without spacing between them; however, this is discouraged.

Code example

If the code example is too short, too wide, too long, or too uninteresting, submit a pull request to https://github.com/exercism/lfe/blob/master/docs/SNIPPET.txt with a suggested replacement.

Exercise metadata

Where the v1 site has a long, linear list of exercises, the v2 site has organized exercises into a small set of required exercises ("core").

If you update the track config, remember to get configlet and run configlet fmt . from the root of the track before submitting.

Topic and difficulty

Core exercises unlock optional additional exercises, which can be filtered by topic and difficulty. However, that will only work if we add topics and difficulties to the exercises in the track config, which is in https://github.com/exercism/lfe/blob/master/config.json

Auto-approval

We've currently made any hello-world exercises auto-approved in the backend of v2. This means that you don't need mentor approval in order to move forward when you've completed that exercise.

Not all tracks have a hello-world, and some tracks might want to auto approve other (or additional) exercises.

Track mentors

There are no bullet points for this one :)

As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback. Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback.

If you're interested in helping mentor the track, check out http://mentoring.exercism.io/

When all of the boxes are ticked off, please close the issue.

Tracking progress in exercism/meta#104

Fix the Hello World exercise so it passes with 'make test' on CI

I do not know if this is failing locally, but on Travis CI we are getting the following failure:

===> Compiling exercises/hello-world/src/hello-world.lfe failed
/home/travis/build/exercism/lfe/exercises/hello-world/src/hello-world.lfe:9999: unbound function: #(from 1)
make: *** [test] Error 1
The command "make test" exited with 2.
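
For reference, a hedged sketch of a minimal hello-world module that compiles cleanly is shown below; the exported hello/0 function and its return value are assumptions about what the test expects, not necessarily what the failing file contained.

;; Hedged sketch of a minimal hello-world implementation.
(defmodule hello-world
  (export (hello 0)))

(defun hello ()
  "Hello, World!")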

Verify "Largest Series Product" exercise implementation

There was some confusion in this exercise due to the ambiguous use of the term consecutive in the README. This could be taken to mean contiguous, as in consecutive by position, or as in consecutive numerically. The README has been fixed (exercism/problem-specifications#200).

Please verify that the exercise is implemented in this track correctly (that it finds series of contiguous numbers, not series of numbers that follow each other consecutively).

If it helps, the canonical inputs/outputs for the exercise can be found here:
https://github.com/exercism/x-common/blob/master/largest-series-product.json

If everything is fine, go ahead and just close this issue. If there's something to be done, then please describe the steps needed in order to close the issue.
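
As a hedged illustration, a test that pins down the contiguous interpretation could look like the sketch below; the module and function names are assumptions.

;; Hedged sketch: in "576802143" the winning pair (6 8) is adjacent by position
;; but not numerically consecutive, so this case distinguishes the two readings.
(defmodule largest-series-product-tests
  (behaviour ltest-unit)
  (export all))

(include-lib "ltest/include/ltest-macros.lfe")

(deftest largest-product-of-two-adjacent-digits
  (is-equal 48 (largest-series-product:largest-product "576802143" 2)))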

Move exercises to subdirectory

The problems api (x-api) now supports having exercises collected in a subdirectory
named exercises.

That is to say that instead of having a mix of bin, docs, and individual exercises,
we can have bin, docs, and exercises in the root of the repository, and all
the exercises collected in a subdirectory.

In other words, instead of this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
... etc

we can have something like this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc

This has already been deployed to production, so it's safe to make this change whenever you have time.

Problems getting started with LFE on Mac OSX

Following the directions found in http://help.exercism.io/getting-started-with-lfe.html, I am not seeing the rebar get-deps and rebar compile steps having any effect. The document implies that a deps directory should be created (given that we derive the value for ERL_LIBS from it).

There is no advice on where to run those commands from. I have tried from the following locations:

  • home directory
  • exercism exercises lfe directory
  • exercism exercises lfe/leap directory
  • Homebrew Cellar directory, which is where lfe is installed

Am I running it from an invalid location or has some other part of my install gone wrong?

NOTE: I did brew install lfe before seeing that the instructions said to follow the Erlang instructions first; however, brew install lfe did in fact install Erlang.

Homebrew installed lfe version 0.9.2 and erlang 17.5.

Gitter chat, common BEAM channel

In Exercism there are a couple of Gitter channels used to coordinate development on Exercism, the product.

There is also a global support channel, and a number of tracks across Exercism have their own support channel on gitter.im. Erlang has one as well, but it isn't used much.

Since Erlang, Elixir, and LFE share the same base technology (the BEAM VM), and in the case of LFE even the same standard library, I'd suggest joining forces in a single Gitter channel (the former Erlang channel).

I can't rename the room, but I have already changed its headline to "Exercism Exercises in BEAM Powered Languages (Erlang, Elixir, LFE)".

If you do not want to join the channel, feel free to decline and I will remove your language from the headline. Also, once joined, you can always split off into your own channel if the traffic explodes ;)

Remove obsolete version tracking assertions in exercises

Some tracks have added assertions to the exercise test suites that ensure that the solution has a hard-coded version in it.
In the old version of the site, this was useful, as it let commenters see what version of the test suite the code had been written against, and they wouldn't accidentally tell people that their code was wrong, when really the world had just moved on since it was submitted.

If this track does not have any assertions that track versions in the exercise tests, please close this issue.

If this track does have this bookkeeping code, then please remove it from all the exercises.

See exercism/exercism#4266 for the full explanation of this change.

Problems API is raising an error on the lfe track

Errno::ENOENT at /v2/exercises/lfe
No such file or directory @ rb_sysopen - ./problems/lfe/leap/deps

The deps file is a symlink to a file in the root of xlfe... but that file doesn't exist (presumably because it's in the .gitignore file).

I am temporarily adding an empty deps file so that the API errors stop: ba5fd1f

What was it like to learn LFE?

We’ve recently started a project to find the best way to design our tracks, in order to optimize the learning experience of students.

As a first step, we’ll be examining the ways in which languages are unique and the ways in which they are similar. For this, we’d really like to use the knowledge of everyone involved in the Exercism community (students, mentors, maintainers) to answer the following questions:

  1. How was your experience learning LFE? What was helpful while learning LFE? What did you struggle with? How did you tackle problems?
  2. In what ways did LFE differ from other languages you knew at the time? What was hard to learn? What did you have to unlearn? What syntax did you have to remap? What concepts carried over nicely?

Could you spare 5 minutes to help us by answering these questions? It would greatly help us improve the experience students have learning LFE :)

Note: this issue is not meant as a discussion, just as a place for people to post their own, personal experiences.

Want to keep your thoughts private but still help? Feel free to email me at [email protected]

Thank you!

Getting started needs update.

Currently the getting-started guide mentions a lot of rebar2 steps, but the Makefile uses rebar3, which makes many of the steps mentioned obsolete.

rna-transcription: don't transcribe both ways

I can't remember the history of this, but we ended up with a weird non-biological thing in the RNA transcription exercise, where some test suites also have tests for transcribing from RNA back to DNA. This makes no sense.

If this track does have tests for the reverse transcription, we should remove them, and also simplify the reference solution to match.

If this track doesn't have any tests for RNA->DNA transcription, then this issue can be closed.

See exercism/problem-specifications#148
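
For illustration, here is a hedged sketch of the one-way (DNA to RNA) transcription with no reverse mapping; the module and function names are assumptions, not this track's reference solution.

;; Hedged sketch: DNA -> RNA only; there is intentionally no RNA -> DNA function.
(defmodule rna-transcription
  (export (to-rna 1)))

(defun to-rna (dna)
  (lists:map #'transcribe/1 dna))

(defun transcribe
  ((#\G) #\C)
  ((#\C) #\G)
  ((#\T) #\A)
  ((#\A) #\U))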

Update config.json to match new specification

For the past three years, the ordering of exercises has been done based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.

In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.

It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.

To this end, we've decided to add a new key exercises in the config.json file, and deprecate the problems key.

See exercism/discussions#60 for details about this decision.

Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.

The process for deprecating the old problems array will be:

  • Update all of the track configs to contain the new exercises key, with whatever data we have.
  • Simultaneously change the website and tools to support both formats.
  • Once all of the tracks have added the exercises key, remove support for the old key in the site and tools.
  • Remove the old key from all of the track configs.

In the new format, each exercise is a JSON object with three properties:

  • slug: the identifier of the exercise
  • difficulty: a number from 1 to 10 where 1 is the easiest and 10 is the most difficult
  • topics: an array of strings describing topics relevant to the exercise. We maintain
    a list of common topics at https://github.com/exercism/x-common/blob/master/TOPICS.txt. Do not feel like you need to restrict yourself to this list;
    it's only there so that we don't end up with 20 variations on the same topic. Each
    language is different, and there will likely be topics specific to each language that will
    not make it onto the list.

The difficulty rating can be a very rough estimate.

The topics array can be empty if this analysis has not yet been done.

Example:

"exercises": [
  {
    "slug": "hello-world" ,
    "difficulty": 1,
    "topics": [
        "control-flow (if-statements)",
        "optional values",
        "text formatting"
    ]
  },
  {
    "difficulty": 3,
    "slug": "anagram",
    "topics": [
        "strings",
        "filtering"
    ]
  },
  {
    "difficulty": 10,
    "slug": "forth",
    "topics": [
        "parsing",
        "transforming",
        "stacks"
    ]
  }
]

It may be worth making the change in several passes:

  • Add the exercises key with the array of objects, where difficulty is 1 and topics is empty.
  • Update the difficulty settings to reflect a more accurate guess.
  • Add topics (perhaps one-by-one, in separate pull requests, in order to have useful discussions about each exercise).

binary: improve tests for invalid numbers

We should have separate tests for:

  • alphabetic characters at the beginning of a valid binary number
  • alphabetic characters at the end of a valid binary number
  • alphabetic characters in the middle of an otherwise valid binary number
  • invalid digits (e.g. 2)

If the test suite for binary has test cases that cover these edge cases, this issue can safely be closed.

See exercism/problem-specifications#95
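
As a hedged sketch of the four missing cases: the module and function names, and the convention of returning 0 for invalid input, are assumptions here; this track may well signal errors differently.

;; Hedged sketch; returning 0 for invalid input is an assumption, not necessarily
;; this track's convention.
(defmodule binary-tests
  (behaviour ltest-unit)
  (export all))

(include-lib "ltest/include/ltest-macros.lfe")

(deftest alphabetic-character-at-start
  (is-equal 0 (binary:to-decimal "a101")))

(deftest alphabetic-character-at-end
  (is-equal 0 (binary:to-decimal "101a")))

(deftest alphabetic-character-in-middle
  (is-equal 0 (binary:to-decimal "10a1")))

(deftest invalid-digit
  (is-equal 0 (binary:to-decimal "10210")))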

"point-mutations" is deprecated in favor of hamming

This happened a while back, and it was for really weird legacy reasons.

I've since fixed the underlying issues that caused the problem, but for consistency
it would be nice to rename point-mutation to hamming, so that all the tracks are using
the same exercise name.

Once the problem has been renamed, I can run a script on the website to point people's
existing point-mutations solutions to the new hamming exercise so that they'll be able
to review solutions to hamming, and people who solve the new hamming exercise can see
all the old ones.

Where are the LFE communities and enthusiasts?

As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback.

Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback. You can read more about this aspect of the new site here: http://mentoring.exercism.io/

To do this, we're going to need a lot more information about where we can find language enthusiasts.

  • Is LFE supported by one or more large organizations?
  • Does LFE have an official community manager?
  • Do you know of specific communities (online or offline) that are enthusiastic about LFE? (Chat communities, forums, meetups, student clubs, etc)
  • Are there popular conferences for LFE? (If so, what are some examples?)
  • Are there any organizations who are targeted specifically at getting certain subgroups or demographics interested in LFE? (e.g. kids, teenagers, career changers, people belonging to various groups that are typically underrepresented in tech?)
  • Are there specific groups or programs dedicated to mentoring people in LFE?
  • Are there popular newsletters for LFE?
  • Is LFE taught at programming bootcamps? (If so, what are some examples?)
  • Is LFE taught at universities? (If so, what are some examples?)

In other words: where do people care a lot and/or know a lot about LFE?

This is part of the project being tracked in exercism/meta#103

Syntax highlighting for lfe

This track has not yet specified the file format mapping
for syntax highlighting in exercism/meta#90

Please provide a mapping of file formats used by your track
in exercism/meta#90 (e.g. md = markdown)

This issue can be closed after this is completed. (Apologies
if you've already done this - this issue was generated
automatically).

Open discussion re: Windows support

The current version uses symlinks and I'm not sure how Windows will handle that. There are a few other issues with LFE/Erlang on Windows too.

Once we get a Windows user or if we feel motivated before then, we should make sure everything works as expected.

Launch Checklist

In order to launch we should have:

  • LFE as a submodule in x-api
  • at least 10 problems
  • a "how to get started" topic in the help repo repo (app/pages/languages/getting-started-with-lfe.md)
  • one to a handful of people willing to check exercism regularly (daily?) for nitpicks to ensure that the track gets off on the right foot
  • add track implementors and other designated nitpickers as mentors to the track
  • toggle "active" to true in config.json

Some tracks have been more successful than others, and I believe the key features of the successful tracks are:

  • Each submission receives feedback quickly, preferably within the first 24 hours.
  • The nitpicks do not direct users to do specific things, but rather ask questions challenging people to think about different aspects of their solution, or explore new aspects of the language.

For more about contributing to language tracks on exercism, check out the Problem API Contributing guide: https://github.com/exercism/x-api/blob/master/CONTRIBUTING.md

Syntax highlighting desires?

There is a current issue open about adding syntax highlighting to tracks that are missing it.
exercism/exercism#2988

Exercism uses Rouge for syntax highlighting http://rouge.jneen.net/

Is 'common_lisp' an appropriate highlighter to use for LFE?
If it's not, please look through the examples on the rouge website and let us know what you would like to use instead.

`make test` fails on hello world: compiling clj-seq.lfe failed

make test fails with the following:

/usr/bin/rebar3 eunit \
-m hello-world-tests
===> Verifying dependencies...
===> Compiling lfe-version
===> Compiling kla
===> Compiling clj
===> Compiling _build/test/lib/clj/src/clj-seq.lfe failed
/home/user/exercism/lfe/hello-world/_build/test/lib/clj/src/clj-seq.lfe:none: internal error in lint_module;
crash reason: {badmatch,[{var,235,'_'}]}

  in function  erl_lint:taint_stack_var/3 (erl_lint.erl, line 3373)
  in call from erl_lint:icrt_clause/3 (erl_lint.erl, line 3363)
  in call from lists:mapfoldl/3 (lists.erl, line 1358)
  in call from erl_lint:try_clauses/5 (erl_lint.erl, line 3343)
  in call from erl_lint:expr/3 (erl_lint.erl, line 2533)
  in call from erl_lint:exprs/3 (erl_lint.erl, line 2331)
  in call from erl_lint:clause/2 (erl_lint.erl, line 1593)
  in call from erl_lint:'-clauses/2-fun-0-'/2 (erl_lint.erl, line 1582)

make: *** [Makefile:20: test] Error 1

rebar3 lfe versions

===> Verifying dependencies...
(#(apps ())
 #(languages
   (#(lfe "2.0-dev")
    #(erlang "23")
    #(emulator "11.1")
    #(driver_version "3.3")))
 #(tooling (#(rebar "3.14.1") #(rebar3_lfe "0.2.0"))))

https://github.com/lfex/clj seems deprecated.

The master branch will be renamed to main

In line with our new org-wide policy, the master branch of this repo will be renamed to main. All open PRs will be automatically repointed.

GitHub will show you a notification about this when you look at this repo after renaming:

[screenshot: GitHub's notification that the default branch was renamed]

In case it doesn't, this is the command it suggests:

git branch -m master main
git fetch origin
git branch -u origin/main main

You may like to update the primary branch on your forks too, which you can do under Settings->Branches and clicking the pencil icon on the right-hand-side under Default Branch:

[screenshot: the Default Branch setting under Settings->Branches]

We will post a comment below when this is done. We expect it to happen within the next 12 hours.

Copy track icon into language track repository

Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in public/img/tracks/. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.

There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.

In order to support this change, each track will need to add its icon to the repository.

In other words, at the end of it you should have the following file:

./img/icon.png

See exercism/exercism#2925 for more details.

leap-year updates

  • Rename leap-year to leap-year?
  • Investigate pre-Gregorian dates and add test(s)
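
For the rename, a hedged sketch of the predicate is shown below; the pre-Gregorian question from the second bullet is deliberately left open, and the module layout is an assumption.

;; Hedged sketch of the renamed predicate; standard Gregorian rule only.
(defmodule leap
  (export (leap-year? 1)))

(defun leap-year? (year)
  (andalso (=:= 0 (rem year 4))
           (orelse (/= 0 (rem year 100))
                   (=:= 0 (rem year 400)))))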

Verify contents and format of track documentation

Each language track has documentation in the docs/ directory, which gets included on the site
on each track-specific set of pages under /languages.

We've added some general guidelines about how we'd like the track to be documented in exercism/exercism#3315
which can be found at https://github.com/exercism/exercism.io/blob/master/docs/writing-track-documentation.md

Please take a moment to look through the documentation about documentation, and make sure that
the track is following these guidelines. Pay particularly close attention to how to use images
in the markdown files.

Lastly, if you find that the guidelines are confusing or missing important details, then a pull request
would be greatly appreciated.

clock: canonical test data has been improved

The JSON file containing canonical inputs/outputs for the Clock exercise has gotten new data.

There are two situations that the original data didn't account for:

  • Sometimes people perform computation/mutation in the display method instead of in add. This means that you might have two copies of clock that are identical, and if you add 1440 minutes to one and 2880 minutes to the other, they display the same value but are not equal.
  • Sometimes people only account for one adjustment in either direction, meaning that if you add 1,000,000 minutes, then the clock would not end up with a valid display time.

If this track has a generator for the Clock exercise, go ahead and regenerate it now. If it doesn't, then please verify the implementation of the test suite against the new data. If any cases are missing, they should be added.

See exercism/problem-specifications#166
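
A hedged sketch of the first of the new cases follows; the clock API used here (create/2 and add-minutes/2) is an assumption about this track, not its actual interface.

;; Hedged sketch: adding whole days' worth of minutes must yield clocks that are
;; equal, not merely clocks that display the same string.
(defmodule clock-tests
  (behaviour ltest-unit)
  (export all))

(include-lib "ltest/include/ltest-macros.lfe")

(deftest clocks-with-same-time-after-different-whole-day-adds-are-equal
  (is-equal (clock:add-minutes (clock:create 10 30) 1440)
            (clock:add-minutes (clock:create 10 30) 2880)))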

Verify that nothing links to help.exercism.io

The old help site was deprecated in December 2015. We now have content that is displayed on the main exercism.io website, under each individual language on http://exercism.io/languages.

The content itself is maintained along with the language track itself, under the docs/ directory.

We decided on this approach since the maintainers of each individual language track are in the best position to review documentation about the language itself or the language track on Exercism.

Please verify that nothing in docs/ refers to the help.exercism.io site. It should instead point to http://exercism.io/languages/:track_id (at the moment the various tabs are not linkable, unfortunately; we may need to reorganize the pages in order to fix that).

Also, some language tracks reference help.exercism.io in the SETUP.md file, which gets included into the README of every single exercise in the track.

We may also have referenced non-track-specific content that lived on help.exercism.io. This content has probably been migrated to the Contributing Guide of the x-common repository. If it has not been migrated, it would be a great help if you opened an issue in x-common so that we can remedy the situation. If possible, please link to the old article in the deprecated help repository.

If nothing in this repository references help.exercism.io, then this can safely be closed.

Investigate track health and status of the track

I've used Sarah Sharp's FOSS Heartbeat project to generate stats for each of the language track repositories, as well as the x-common repository.

The Exercism heartbeat data is published here: https://exercism.github.io/heartbeat/

When looking at the data, please disregard any activity from me (kytrinyx), as I would like to get the language tracks to a point where they are entirely maintained by the community.

Please take a look at the heartbeat data for this track, and answer the following questions:

  • To what degree is the track maintained?
  • Who (if anyone) is merging pull requests?
  • Who (if anyone) is reviewing pull requests?
  • Is there someone who is not merging pull requests, but who comments on issues and pull requests, has thoughtful feedback, and is generally helpful? If so, maybe we can invite them to be a maintainer on the track.

I've made up the following scale:

  • ORPHANED - Nobody (other than me) has merged anything in the past year.
  • ENDANGERED - Somewhere between ORPHANED and AT RISK.
  • AT RISK - Two people (other than me) are actively discussing issues and reviewing and merging pull requests.
  • MAINTAINED - Three or more people (other than me) are actively discussing issues and reviewing and merging pull requests.

It would also be useful to know if there is a lot of activity on the track, or just the occasional issue or comment.

Please report the current status of the track, including your best guess on the above scale, back to the top-level issue in the discussions repository: exercism/discussions#97

Create stub files for all exercises

We have decided to require all file-based tracks to provide stubs for their exercises.

The lack of stub files creates an unnecessary pain point within Exercism, contributing a significant proportion of support requests, making things more complex for our students, and hindering our ability to automatically run test suites and provide automated analysis of solutions.

We believe that it's essential to understand error messages, know how to use an IDE, and create files. However, getting this right as you're just getting used to a language can be a frustrating distraction, as it can often require a lot of knowledge that tends to seep in over time. At the start, it can be challenging to google for all of these details: what file extension to use, what needs to be included, etc. Getting people up to speed with these things is not Exercism's focus, and we've decided that we are better served by removing this source of confusion, letting people get on with actually solving the exercises.

The original discussion for this is at exercism/discussions#238.

Therefore, we’d like this track to provide a stub file for each exercise.

  • If this track already provides stub files for all exercises, please close this issue.
  • If this track already has an open issue for creating stubs, then my apologies. Please close one as a duplicate.
  • Otherwise, please respond to this issue with useful details about what needs to be done to complete this task in this track so that people who are not familiar with the track may easily contribute.
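
As a rough, hedged illustration of what a stub could look like for this track (the exact path and module layout are assumptions):

;; Hedged sketch of a stub, e.g. exercises/leap/src/leap.lfe: it exports the
;; expected function but leaves the implementation to the student.
(defmodule leap
  (export (leap-year 1)))

(defun leap-year (year)
  ;; TODO: implement
  'undefined)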

Fix getting started instructions for lfe

Some exercise README templates contain links to pages which no longer exist in v2 Exercism.

For example, C++'s README template had a link to /languages/cpp for instructions on running tests. The correct URLs to use can be found in the 'Still stuck?' sidebar of exercise pages on the live site. You'll need to join the track and go to the first exercise to see them.

Please update any broken links in the 'config/exercise_readme.go.tmpl' file, and run 'configlet generate .' to generate new exercise READMEs with the fixes.

Instructions for generating READMEs with configlet can be found at:
https://github.com/exercism/docs/blob/master/language-tracks/exercises/anatomy/readmes.md#generating-a-readme

Instructions for installing configlet can be found at:
https://github.com/exercism/docs/blob/bc29a1884da6c401de6f3f211d03aabe53894318/language-tracks/launch/first-exercise.md#the-configlet-tool

Tracking exercism/exercism#4102

Recruiting additional maintainers for LFE

We're about to start a big push towards version 3 (v3) of Exercism. This is going to be a really exciting step forward for Exercism, with in-browser coding, new Concept Exercises with automated feedback, improved mentoring and much more.

This is going to be a big community effort, with the work spread out among hundreds of volunteers across Exercism. One key thing is going to be each track having enough maintainers who have the time to manage that community effort. We are therefore putting out a call for new maintainers to bolster our numbers. We're hoping that our existing maintainers will be able to act as mentors to the newer maintainers we add, and take on a parental role in the tracks.

If you are an existing maintainer, could you please reply to this letting us know whether you think you'll have time (2-3 hrs/week) to help with this over the next 6 months. If you won't have that time, but still want to be a maintainer and just help where you can instead, please tell us that too. If you have come to the end of the road as a maintainer, then we totally understand that and appreciate all your effort, so just let us know.

For anyone new who's interested in becoming a maintainer, thanks for your interest! Being an Exercism maintainer is a great opportunity to work with some other smart people, learn more about your language of choice, and gain skills and experience that are useful for growing your career in a technical-leadership direction. Please write a comment below introducing yourself, along with your Exercism handle, telling us why you're interested in becoming a maintainer and any relevant experience. We will then evaluate every application and contact you via your Exercism email address once we have finished the evaluation process.

Thank you!

See also exercism/exercism#5161

Override probot/stale defaults, if necessary

Per the discussion in exercism/discussions#128 we
will be installing the probot/stale integration on the Exercism organization on
April 10th, 2017.

By default, probot will comment on issues that are older than 60 days, warning
that they are stale. If there is no movement in 7 days, the bot will close the issue.
By default, anything with the labels security or pinned will not be closed by
probot.

If you wish to override these settings, create a .github/stale.yml file as described
in https://github.com/probot/stale#usage, and make sure that it is merged
before April 10th.

If the defaults are fine for this repository, then there is nothing further to do.
You may close this issue.
