exercism / elm
Exercism exercises in Elm.
Home Page: https://exercism.org/tracks/elm
License: MIT License
WIP tgecho@39adbba
Hi, I just did the ListOps exercise and I missed one test to cover the difference between foldl and foldr. Something along the lines of
, test "direction" (assertEqual [4,3,2,1] (foldl (::) [] [1..4]))
and
, test "direction" (assertEqual [1..4] (foldr (::) [] [1..4]))
What do you think?
For the past three years, the ordering of exercises has been done based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.
In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.
It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.
To this end, we've decided to add a new key, exercises, in the config.json file, and deprecate the problems key.
See exercism/discussions#60 for details about this decision.
Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.
The process for deprecating the old problems array will be:
In the new format, each exercise is a JSON object with three properties:
The difficulty rating can be a very rough estimate.
The topics array can be empty if this analysis has not yet been done.
Example:
"exercises": [
{
"slug": "hello-world" ,
"difficulty": 1,
"topics": [
"control-flow (if-statements)",
"optional values",
"text formatting"
]
},
{
"difficulty": 3,
"slug": "anagram",
"topics": [
"strings",
"filtering"
]
},
{
"difficulty": 10,
"slug": "forth",
"topics": [
"parsing",
"transforming",
"stacks"
]
}
]
It may be worth making the change in several passes:
Launch Checklist
In order to launch we should have:
- "active" set to true in config.json
The documentation lives in the docs/ directory here in this repository, and gets served to the site via the x-api. It should contain at minimum:
- INSTALLATION.md - about how to get the language set up locally.
- TESTS.md - about how to run the tests for the exercises.
Some nice to haves:
- ABOUT.md - a short, friendly blurb about the language. What types of problems does it solve really well? What is it typically used for?
- LEARNING.md - a few notes about where people might want to go to learn the language from scratch.
- RESOURCES.md - references and other useful resources.
Some tracks have been more successful than others, and I believe the key features of the successful tracks are:
For more about contributing to language tracks on exercism, check out the Problem API Contributing guide: https://github.com/exercism/x-api/blob/master/CONTRIBUTING.md
I'm not sure if it's a problem with how things are set up in this repo, my own installation or if it's related to the current ongoing issues with elm-reactor, but it doesn't seem to be working for me here at all.
For reference, I have been able to use it with other projects, so it's not completely broken on my computer. Might it make sense to switch to elm-test as the recommended method for now?
In progress
We need the following:
The only real code change that should be necessary is updating the module lines from the module Main (..) where form to the module Main exposing (..) form. We're not doing any signal stuff yet, so the rest of the upgrade doc shouldn't affect us: https://github.com/elm-lang/elm-platform/blob/master/upgrade-docs/0.17.md
It also looks like we may need to switch to elm-community/elm-test to gain 0.17 support. I'm traveling with sketchy internet access at the moment, so I won't be able to sort this out in the next day or two.
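The mechanical part of that module-line change could be scripted. A rough sketch with sed (the exact pattern is an assumption; run it on a copy first, and the sample file here is just for demonstration):

```shell
cd "$(mktemp -d)"
# Sample 0.16-style module header
printf 'module Main (..) where\n' > Main.elm
# Rewrite "module X (..) where" headers to the 0.17 "exposing" form,
# keeping a .bak backup of each touched file
sed -i.bak 's/^module \(.*\) (\(.*\)) where$/module \1 exposing (\2)/' *.elm
cat Main.elm
```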
It looks like we'll want the official logo SVG: https://github.com/elm-lang/elm-lang.org/blob/master/resources/logo.svg
Now that we have a decent starting set of exercises (pending PR merges), I'm wondering about a good starting order. I've generally thrown them in haphazardly, somewhat based on the order in the other tracks I've pulled them from.
The current order, as of #31 is this:
My questions for @kytrinyx, @parkerl and whoever else:
In the Triangle exercise readme (http://exercism.io/exercises/elm/triangle/readme), it says
"Tests are provided, delete one `skip` at a time."
But there are no skipped tests. I'm happy to fix it, but I couldn't find the readme in this repo or the exercism.io
repo. Can anyone point me in the right direction?
And thanks for all the work bringing Elm to Exercism! I'm loving the exercises.
$ ./runtests.sh
-bash: ./runtests.sh: Permission denied
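If the script itself is intact, the usual cause of that error is a lost executable bit; restoring it fixes the invocation. A sketch (the script contents here are simulated):

```shell
cd "$(mktemp -d)"
# Simulate a script that arrived without its executable bit
printf '#!/bin/sh\necho "tests would run here"\n' > runtests.sh
chmod +x runtests.sh      # restore the executable bit
./runtests.sh             # now runs instead of "Permission denied"
```

Alternatively, `bash runtests.sh` runs it through the interpreter without needing the bit at all.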
I didn't even think to check/ask, but it looks like this behavior was changed a few months ago: exercism/cli#276 :(
So... seeing as in my haste I've failed to create a clean/generic wrapper on the first try, I see a few obvious options:
- elm-test to ship. elm-test MyTests.elm actually works right now, it just has a bunch of extraneous garbage at the end of the output.
- bash runtests.sh.
- node runtests.js.
Thoughts @parkerl @lukewestby?
@kytrinyx do you already have this up and I just have the wrong url?
WIP tgecho@862d6e7
$ pwd
~/dev/exercism/elm/hello-world
$ npm install -g elm-test
$ elm-test HelloWorldTests.elm
Success! Compiled 1 module.
Successfully generated /var/folders/mv/dy0670cn3255t3fjlv5dwk1m0000gn/T/elm_test_116720-91005-5fq85q.m8bp22o6r.js
undefined:1929
throw new Error(
^
Error: You are giving module `Main` an argument in JavaScript.
This module does not take arguments though! You probably need to change the
initialization code to something like `Elm.Main.fullscreen()`
at init (eval at <anonymous> (/usr/local/lib/node_modules/elm-test/bin/elm-test:86:37), <anonymous>:1929:10)
at Object.eval [as callback] (eval at <anonymous> (/usr/local/lib/node_modules/elm-test/bin/elm-test:86:37), <anonymous>:1973:17)
at step (eval at <anonymous> (/usr/local/lib/node_modules/elm-test/bin/elm-test:86:37), <anonymous>:2613:39)
at Timeout.work [as _onTimeout] (eval at <anonymous> (/usr/local/lib/node_modules/elm-test/bin/elm-test:86:37), <anonymous>:2671:15)
at tryOnTimeout (timers.js:228:11)
at Timer.listOnTimeout (timers.js:202:5)
I'm not sure where to go from here...
I recall that the exercises came with a .bat/.sh script. Why have they been removed?
Also, elm-package.json defines "rtfeldman/node-test-runner" as a dependency. But I still need to $ npm install -g elm-test to have elm-test available as a command.
Is there a means to make locally installed binaries available within the project dir (i.e. elm-test)?
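One common workaround (assuming the runner is also installed locally via npm, without -g) is to put the project-local npm bin directory on PATH for the session, so the command resolves without a global install. A sketch, with the local binary simulated:

```shell
cd "$(mktemp -d)"
# Simulate a locally installed npm binary; a real project would get this
# from "npm install elm-test" (no -g flag)
mkdir -p node_modules/.bin
printf '#!/bin/sh\necho "local elm-test"\n' > node_modules/.bin/elm-test
chmod +x node_modules/.bin/elm-test
# Make project-local binaries take precedence for this shell session
export PATH="$PWD/node_modules/.bin:$PATH"
elm-test
```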
The exercise pages have a "Test Suite" tab, but it seems to be broken for the Elm exercises.
For example...
The Python test suite: http://exercism.io/exercises/python/hello-world
The Elm version is broken: http://exercism.io/exercises/elm/hello-world
Is there something we can/should tweak to fix this?
The problems api (x-api) now supports having exercises collected in a subdirectory named exercises.
That is to say that instead of having a mix of bin, docs, and individual exercises, we can have bin, docs, and exercises in the root of the repository, and all the exercises collected in a subdirectory.
In other words, instead of this:
x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│ └── fetch-configlet
├── bowling
│ ├── bowling_test.ext
│ └── example.ext
├── clock
│ ├── clock_test.ext
│ └── example.ext
├── config.json
└── docs
    ├── ABOUT.md
    └── img
... etc
we can have something like this:
x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│ └── fetch-configlet
├── config.json
├── docs
│ ├── ABOUT.md
│ └── img
├── exercises
│ ├── bowling
│ │ ├── bowling_test.ext
│ │ └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc
This has already been deployed to production, so it's safe to make this change whenever you have time.
WIP
Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in public/img/tracks/
. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.
There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.
In order to support this change, each track will need to:
- create an img/ directory at the root of this repository if it doesn't already exist, then
- add the track icon to that img/ directory, and importantly
- name it icon.png
In other words, at the end of it you should have the following file:
./img/icon.png
See exercism/exercism#2925 for more details.
Not started
Since things are still in an early state, does anyone have any objections to switching to ExerciseExample.elm
as the example naming convention? I'm working on adding a few exercises and it's a bit tedious since editors don't really like the .example extension. Also, it would make it possible to dev on a test by simply changing the import line to import BobExample exposing (hey)
.
Because we are being strict about formatting, we should fail the build if elm-format causes diffs.
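A minimal sketch of such a CI gate: run the formatter in place, then let git report whether anything changed. The elm-format invocation is commented out here since it assumes the tool is installed; the git part is what fails the build, demonstrated against a throwaway repo:

```shell
cd "$(mktemp -d)"
git init -q .
printf 'module Main exposing (main)\n' > Main.elm
git add Main.elm
git -c user.email=ci@example.com -c user.name=ci commit -qm "baseline"
# In CI: elm-format --yes .   (rewrites files in place)
# Fail the build if the formatter changed anything:
if git diff --exit-code -- '*.elm'; then
  echo "formatting clean"
else
  echo "formatting diffs found" && exit 1
fi
```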
So what did I do wrong here?
Output of elm-test
(slightly reformatted to make the obvious even more obvious!):
count one word: passed.
count one of each word: passed.
multiple occurrences of a word: FAILED.
Expected: Dict.fromList [("blue",1),("fish",4),("one",1),("red",1),("two",1)];
got: Dict.fromList [("blue",1),("fish",4),("one",1),("red",1),("two",1)]
ignore punctuation: FAILED.
Expected: Dict.fromList [("as",1),("car",1),("carpet",1),("java",1),("javascript",1)];
got: Dict.fromList [("as",1),("car",1),("carpet",1),("java",1),("javascript",1)]
include numbers: passed
normalize case: FAILED.
Expected: Dict.fromList [("go",3),("stop",2)];
got: Dict.fromList [("go",3),("stop",2)]
Since the results seem to be okay, I have already submitted my solution.
Not started
The README for raindrops mentions running npm test to run the tests instead of using elm-test. Does the README need to be updated?
Per the discussion in exercism/discussions#128 we
will be installing the probot/stale integration on the Exercism organization on
April 10th, 2017.
By default, probot will comment on issues that are older than 60 days, warning
that they are stale. If there is no movement in 7 days, the bot will close the issue.
By default, anything with the labels security or pinned will not be closed by probot.
If you wish to override these settings, create a .github/stale.yml file as described
in https://github.com/probot/stale#usage, and make sure that it is merged
before April 10th.
If the defaults are fine for this repository, then there is nothing further to do.
You may close this issue.
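For reference, a minimal `.github/stale.yml` overriding those defaults might look like this (values are illustrative; see the probot/stale usage docs for the full key list):

```yaml
# .github/stale.yml
daysUntilStale: 60   # days of inactivity before an issue is marked stale
daysUntilClose: 7    # days to wait after marking before closing
exemptLabels:
  - pinned
  - security
markComment: >
  This issue has been automatically marked as stale because it has not
  had recent activity. It will be closed if no further activity occurs.
```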
We have found that the Pangram tests miss edge cases allowing students to pass all of the current tests with an incorrect implementation.
To cover these cases we have added new tests to the Pangram test set. Those new tests were added in this commit
Since this track implements Pangram, please take a look at the new pangram.json file and see if your track should update its tests.
If you do need to update your tests, please refer to this issue in your PR. That helps us see which tracks still need to update their tests.
If your track is already up to date, go ahead and close this issue.
More details on this change are available in x-common issue 222.
Thank you for your help!
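As a sketch of the kind of edge case the new canonical tests target, a sentence that repeats many letters but omits exactly one should not count as a pangram. This particular case and the isPangram name are illustrative, not taken from pangram.json:

```elm
-- "the quick brown fox jumps over the lazy dog" is a pangram;
-- dropping the 'z' ("lay dog") leaves every other letter present,
-- so a naive length- or repetition-based check can pass incorrectly.
, test "sentence missing 'z' is not a pangram"
    (assert (not (isPangram "the quick brown fox jumps over the lay dog")))
```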
WIP: tgecho@dbef5d1
I've used Sarah Sharp's FOSS Heartbeat project to generate stats for each of the language track repositories, as well as the x-common repository.
The Exercism heartbeat data is published here: https://exercism.github.io/heartbeat/
When looking at the data, please disregard any activity from me (kytrinyx), as I would like to get the language tracks to a point where they are entirely maintained by the community.
Please take a look at the heartbeat data for this track, and answer the following questions:
I've made up the following scale:
It would also be useful to know if there is a lot of activity on the track, or just the occasional issue or comment.
Please report the current status of the track, including your best guess on the above scale, back to the top-level issue in the discussions repository: exercism/discussions#97
Hello!
Currently the Elm exercises are using an older version of Elm Test that is unmaintained and uses ugly hacks to report tests results to the user.
I believe we should update to use the latest version.
There are a few problems with this:
I can't actually figure out what this file is for. It doesn't seem to be used or referenced anywhere else. @parkerl ?
Hi,
I think I've noticed a mistake inside the Triangle test suit:
test "triangles violating triangle inequality are illegal 2"
(assertEqual (Err "Violates inequality") (triangleKind 2 4 2))
This one is a flat (degenerate) triangle (2 + 2 = 4) and doesn't violate the inequality.
I'm getting started with Elm exercises and am a complete newbie with Elm. Following the instructions, when I enter elm-test HelloWorldTests.elm
I get Could not find Elm compiler "elm-make". Is it installed?
I can run elm make HelloWorldTests.elm
successfully (it generates an index.html
file), so I think that means elm-make
is installed.
I'm on Windows if that makes a difference. I installed via npm install --global elm elm-test
Now that Elm 0.18 is out, is upgrading the exercises desirable? This is probably not critical, since by default we ask users to:
$ npm install --global [email protected] [email protected]
thus 0.17 is always the one run as far as exercism is concerned.
Regardless, I hope to outline the considerations about upgrading and start a discussion about how we handle it.
While the version bump in elm is minor, the accompanying packages all have major bumps
Thus, if we simply change the version line in elm-package.json to:
"elm-version": "0.18.0 <= v < 0.19.0
It works, but the other packages require major version bumps:
Error: I cannot find a set of packages that works with your constraints.
--> Your elm-package.json has the following dependency:
"elm-lang/core": "4.0.0 <= v < 5.0.0"
But none of the versions in that range work with Elm 0.18.0. I recommend
removing that dependency by hand and adding it back with:
elm-package install elm-lang/core 5.0.0
--> Your elm-package.json has the following dependency:
"elm-community/elm-test": "2.0.0 <= v < 3.0.0"
But none of the versions in that range work with Elm 0.18.0. I recommend
removing that dependency by hand and adding it back with:
elm-package install elm-community/elm-test 3.0.0
As far as I can tell, simply upgrading to 0.18 with the suggestions above would preclude backwards-compatibility with someone using 0.17, since the following change cannot be satisfied with 0.17:
- before: "elm-lang/core": "4.0.0 <= v < 5.0.0"
- after: "elm-lang/core": "5.0.0 <= v < 6.0.0"
Fair enough, to keep compatibility, we could do:
"elm-lang/core": "4.0.0 <= v < 6.0.0",
[...]
"elm-version": "0.17.0 <= v < 0.19.0"
This will be satisfied with elm-lang/core 5.0.0 in 0.18 and elm-lang/core 4.0.5 in 0.17.
The above seems nice in keeping both 0.17 and 0.18 for people, but our *Test.elm files are still going to be written in one style or the other. While the syntax changes in Elm are small, the changes in the packages that receive major version bumps might be larger. This is already the case with function signature changes in elm-test. It seems that we would have to move to 0.18 as a whole.
The steps are pretty much described in https://github.com/elm-lang/elm-platform/blob/master/upgrade-docs/0.18.md
In addition to that:
For instance, changing the pangram exercise to handle both:
{
  "version": "3.0.0",
  "summary": "Exercism problems in Elm.",
  "repository": "https://github.com/exercism/xelm.git",
  "license": "BSD3",
  "source-directories": [
    "."
  ],
  "exposed-modules": [],
  "dependencies": {
    "elm-lang/core": "4.0.0 <= v < 6.0.0",
    "elm-community/elm-test": "2.0.0 <= v < 4.0.0",
    "rtfeldman/node-test-runner": "2.0.0 <= v < 4.0.0"
  },
  "elm-version": "0.17.0 <= v < 0.19.0"
}
throws the error:
The definition of `main` does not match its type annotation.
52| main : Program Value
53| main =
54|> run emit tests
The type annotation for `main` says it is a:
Program Value
But the definition (shown above) is a:
Test.Runner.Node.TestProgram
Sure enough, changing it makes it work in 0.18 and related packages, but breaks 0.17:
-- NAMING ERROR ----------------------------------------------- PangramTests.elm
Cannot find type `Test.Runner.Node.TestProgram`.
52| main : Test.Runner.Node.TestProgram
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The qualifier `Test.Runner.Node` is not in scope.
I haven't run through all the exercises, but I expect to find errors like that in many of them.
This is recent, so I might have missed something; please be gentle if that is the case.
If we do decide to upgrade to 0.18, I'm willing to help!
WIP tgecho@cee1332
I'm not crazy about the formatting in these tests with long lines. They should probably be more consistent, but it seems wasteful to make them all take up 4-5 lines.
I just fetched the anagram exercise, and it did not contain instructions on how to run the tests. Here's the contents of the README as I received it.
# Anagram
Write a program that, given a word and a list of possible anagrams, selects the correct sublist.
Given `"listen"` and a list of candidates like `"enlists" "google"
"inlets" "banana"` the program should return a list containing
`"inlets"`.
## Source
Inspired by the Extreme Startup game [https://github.com/rchatley/extreme_startup](https://github.com/rchatley/extreme_startup)
## Submitting Incomplete Problems
It's possible to submit an incomplete solution so you can see how others have completed the exercise.
Not started
Since the new testing method was introduced, I have not been able to get my tests to run on the accumulate exercise. I was receiving this error:
C:\Dev\exercism.io\elm\hello-world>elm-test HelloWorldTests.elm
Success! Compiled 45 modules.
Successfully generated C:\Users\Clark\AppData\Local\Temp\elm_test_116726-4760-c8vvvm.js
Successfully compiled HelloWorldTests.elm
Running tests...
[stdin]:12394
if (typeof Elm === "undefined") { throw "elm-io config error: Elm is not defined. Make sure you call elm-io with a real Elm output file"}
^
elm-io config error: Elm is not defined. Make sure you call elm-io with a real Elm output file
I rolled back, deleted my old hello-world, and fetched again. I got the same error, but this was fixed on the hello-world exercise after I updated my Elm installation.
I switched back to my accumulate exercise and now I get a new error:
C:\Dev\exercism.io\elm\accumulate>elm-test AccumulateTests.elm
Success! Compiled 1 module.
Successfully generated C:\Users\Clark\AppData\Local\Temp\elm_test_116726-11332-k9pm22.js
undefined:1929
throw new Error(
^
Error: You are giving module `Main` an argument in JavaScript.
This module does not take arguments though! You probably need to change the
initialization code to something like `Elm.Main.fullscreen()`
at init (eval at <anonymous> (C:\Users\Clark\AppData\Roaming\npm\node_modules\elm-test\bin\elm-test:86:37), <anonymous>:1929:10)
at Object.eval [as callback] (eval at <anonymous> (C:\Users\Clark\AppData\Roaming\npm\node_modules\elm-test\bin\elm-test:86:37), <anonymous>:1973:17)
at step (eval at <anonymous> (C:\Users\Clark\AppData\Roaming\npm\node_modules\elm-test\bin\elm-test:86:37), <anonymous>:2613:39)
at work [as _onTimeout] (eval at <anonymous> (C:\Users\Clark\AppData\Roaming\npm\node_modules\elm-test\bin\elm-test:86:37), <anonymous>:2671:15)
at Timer.listOnTimeout (timers.js:92:15)
Each language track has documentation in the docs/ directory, which gets included on the site on each track-specific set of pages under /languages.
We've added some general guidelines about how we'd like the track to be documented in exercism/exercism#3315
which can be found at https://github.com/exercism/exercism.io/blob/master/docs/writing-track-documentation.md
Please take a moment to look through the documentation about documentation, and make sure that
the track is following these guidelines. Pay particularly close attention to how to use images
in the markdown files.
Lastly, if you find that the guidelines are confusing or missing important details, then a pull request
would be greatly appreciated.
The old help site was deprecated in December 2015. We now have content that is displayed on the main exercism.io website, under each individual language on http://exercism.io/languages.
The content itself is maintained along with each language track, under the docs/ directory.
We decided on this approach since the maintainers of each individual language track are in the best position to review documentation about the language itself or the language track on Exercism.
Please verify that nothing in docs/
refers to the help.exercism.io site. It should instead point to http://exercism.io/languages/:track_id (at the moment the various tabs are not linkable, unfortunately, we may need to reorganize the pages in order to fix that).
Also, some language tracks reference help.exercism.io
in the SETUP.md file, which gets included into the README of every single exercise in the track.
We may also have referenced non-track-specific content that lived on help.exercism.io. This content has probably been migrated to the Contributing Guide of the x-common repository. If it has not been migrated, it would be a great help if you opened an issue in x-common so that we can remedy the situation. If possible, please link to the old article in the deprecated help repository.
If nothing in this repository references help.exercism.io, then this can safely be closed.
One of the features of Elm I really like is all its front end magic, but I don't know how to write tests for that. I don't know if they are even possible (without, say, Selenium)?
I want to design a new exercise that calculates the number of seconds from the Jan 1 1970 (Unix epoch) moment till present; this would require getting the Date.now functionality from JS through a port. And I've no idea how to design that. Anybody want to help?
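A rough sketch of how the port wiring could look (Elm 0.17-style; every name here is hypothetical, and the JS side is shown only as a comment):

```elm
port module EpochClock exposing (..)

-- The JS embedding would answer each request with Date.now() milliseconds:
--   app.ports.requestNow.subscribe(function () {
--     app.ports.now.send(Date.now());
--   });

port requestNow : () -> Cmd msg

port now : (Float -> msg) -> Sub msg

-- The pure part, which an exercise could test without a browser:
secondsSinceEpoch : Float -> Int
secondsSinceEpoch millis =
    floor (millis / 1000)
```

Keeping the time-dependent part behind a port and testing only the pure conversion might be the practical compromise for an exercise.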
I can't remember the history of this, but we ended up with a weird non-biological thing in the RNA transcription exercise, where some test suites also have tests for transcribing from RNA back to DNA. This makes no sense.
If this track does have tests for the reverse transcription, we should remove them, and also simplify the reference solution to match.
If this track doesn't have any tests for RNA->DNA transcription, then this issue can be closed.
On the elm track, the 'Hello, World' exercise has a shell script that runs tests via elm-make. So far so good.
Then with 'Bob', you try the same thing to run the tests and you are referred to the 'Running the Tests' directions. Those say to run elm-test *Test.elm. I installed Elm in the way recommended on their website, but I don't have an elm-test anywhere. Looks like it is a third-party package that needs to be installed. Maybe we can indicate that and explain how to do it? Let me know if I can provide some text.
Thanks -- exercism is awesome!
WIP tgecho@db1226b
In progress