exercism / fortran
Exercism exercises in Fortran.
Home Page: https://exercism.org/tracks/fortran
License: MIT License
We’ve recently started a project to find the best way to design our tracks, in order to optimize the learning experience of students.
As a first step, we’ll be examining the ways in which languages are unique and the ways in which they are similar. For this, we’d really like to use the knowledge of everyone involved in the Exercism community (students, mentors, maintainers) to answer the following questions:
Could you spare 5 minutes to help us by answering these questions? It would greatly help us improve the experience students have learning Fortran :)
Note: this issue is not meant as a discussion, just as a place for people to post their own, personal experiences.
Want to keep your thoughts private but still help? Feel free to email me at [email protected]
Thank you!
https://github.com/orgs/exercism/teams/fortran if you have some time, it might be worth taking a look at the Windows build, as it looks like several of the exercise tests are failing there.
https://travis-ci.org/pclausen/fortran/builds/658030086?utm_medium=notification&utm_source=email
Not quite sure what is going wrong here... It seems to be something more central to the Travis script. The Fortran files build and execute successfully.
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The command "bin/fetch-configlet" exited with 2.
0.00s$ bin/configlet lint .
/home/travis/.travis/functions: line 109: bin/configlet: No such file or directory
The command "bin/configlet lint ." exited with 127.
This issue is part of the migration to v3. You can read full details about the various changes here.
Concept Exercises can have a status specified in the "status" field of their config.json entry, as specified in the spec. This status can be one of four values:
"wip": A work-in-progress exercise not ready for public consumption. Exercises with this tag will not be shown to students on the UI or be used for unlocking logic. They may appear for maintainers.
"beta": This signifies active exercises that are new and which we would like feedback on. We show a beta label on the site for these exercises, with a Call To Action of "Please give us feedback."
"active": The normal state of active exercises.
"deprecated": Exercises that are no longer shown to students who have not started them (not usable at this stage).
The "status" key can also be omitted, which is the equivalent of setting it to "active".
The "status" field of Concept Exercises in the config.json file should be updated to reflect the status of the Concept Exercises. See the spec for more information.
If your track doesn't have any Concept Exercises, this issue can be closed.
{
"exercises": {
"concept": [
{
"uuid": "93fbc7cf-3a7e-4450-ad22-e30129c36bb9",
"slug": "cars-assemble",
"name": "Cars, Assemble!",
"concepts": ["if-statements", "numbers"],
"prerequisites": ["basics"]
},
...
]
}
}
{
"exercises": {
"concept": [
{
"uuid": "93fbc7cf-3a7e-4450-ad22-e30129c36bb9",
"slug": "cars-assemble",
"name": "Cars, Assemble!",
"concepts": ["if-statements", "numbers"],
"prerequisites": ["basics"],
"status": "active"
},
...
]
}
}
So I have now set up a branch fortran-cmake in my fork:
https://github.com/pclausen/fortran/tree/fortran-cmake
The main CMakeLists.txt is a modified version of the one from https://github.com/exercism/cpp
First build only includes hello-world and kind of completes OK
https://travis-ci.org/pclausen/fortran/jobs/418407246
But I get an error: The command "bin/configlet lint ." exited with 1.
Do we need this configlet lint?
The next step is to convert leap or bob to use ctest and some asserts, which I am still thinking a bit about. When it gets a bit further I will create a pull request(?)
I also updated installation doc https://github.com/pclausen/fortran/blob/fortran-cmake/docs/INSTALLATION.md
Suggestions for improvements are very welcome. I don't know if you can see or commit to my branch. Let me know if you need access, and please also write how I can give you access.
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, one of the biggest changes is that we'll automatically check if a submitted solution passes all the tests.
We'll check this via a new, track-specific tool: the Test Runner. Each test runner is track-specific. When a new solution is submitted, we run the track's test runner, which outputs a JSON file that describes the test results.
The test runner must be able to run the test suites of both Concept Exercises and Practice Exercises. Depending on the test runner implementation, this could mean having to update the Practice Exercises to the format expected by the test runner.
Build a test runner for your track according to the spec.
If you are building a test runner from scratch, we have a starting guide and a generic test runner that can be used as the base for the new test runner.
If a test runner has already been built for this track, please check if it works on both Concept Exercises and Practice Exercises.
It can be very useful to check how other tracks have implemented their test runner.
Note that this is about the exercises (the test suites and code examples), not people's solutions.
Fortran really doesn't have a logo as far as I can tell. I'd love to see something really cool that looks like a punchcard, but I'm not exactly super good at illustration. Would love help here, but if not we'll end up with a simple pink/black fixed width "f".
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, tracks can be annotated with tags. This allows searching for tracks with a certain tag combination, making it easy for students to find an interesting track to join.
Tags are specified in the top-level "tags" field in the track's config.json file and are defined as an array of strings, as specified in the spec.
The "tags" field in the config.json file should be updated to contain the tags that are relevant to this track. The list of tags that can be used is listed in the spec.
{
"tags": [
"runtime/jvm",
"platform/windows",
"platform/linux",
"paradigm/declarative",
"paradigm/functional",
"paradigm/object_oriented"
]
}
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, each track must specify exactly six "key features". Exercism uses these features to highlight the most interesting, unique or "best" features of a language to a student.
Key features are specified in the top-level "key_features" field in the track's config.json file and are defined as an array of objects, as specified in the spec.
The "key_features" field in the config.json file should be updated to describe the six "key features" of this track. See the spec.
{
"key_features": [
{
"icon": "features-oop",
"title": "Modern",
"content": "C# is a modern, fast-evolving language."
},
{
"icon": "features-strongly-typed",
"title": "Cross-platform",
"content": "C# runs on almost any platform and chipset."
},
{
"icon": "features-functional",
"title": "Multi-paradigm",
"content": "C# is primarily an object-oriented language, but also has lots of functional features."
},
{
"icon": "features-lazy",
"title": "General purpose",
"content": "C# can be used for a wide variety of workloads, like websites, console applications, and even games."
},
{
"icon": "features-declarative",
"title": "Tooling",
"content": "C# has excellent tooling, with linting and advanced refactoring options built-in."
},
{
"icon": "features-generic",
"title": "Documentation",
"content": "Documentation is excellent and exhaustive, making it easy to get started with C#."
}
]
}
Hello 🙂
Over the last few months we've been transferring all our CI from Travis to GitHub Actions (GHA). We've found that GHA are easier to work with, more reliable, and much much faster.
Based on our success with GHA and increasing intermittent failures on Travis, we have now decided to try and remove Travis from Exercism's org altogether and shift everything to GHA. This issue acts as a call to action if your track is still using Travis.
For most CI checks this should be a matter of transposing Travis syntax to GHA syntax, and hopefully quite straightforward (see this PR for an example). However, if you do encounter any issues doing this, please ask on Slack, where lots of us now have experience with GHA, or post a comment here and I'll tag relevant people. This would also make a good Hacktoberfest issue for anyone interested in making their first contribution 🙂
If you've already switched this track to GHA, please feel free to close this issue and ignore it.
Thanks!
This issue is part of the migration to v3. You can read full details about the various changes here.
There are several new features in Exercism v3 for tracks to build. To selectively enable these features on the Exercism v3 website, each track must keep track of the status of the following features:
The status of these features is specified in the top-level "status" field in the track's config.json file, as specified in the spec.
The "status" field in the config.json file should be updated to indicate the status of the features for this track. The list of features is defined in the spec.
{
"status": {
"concept_exercises": true,
"test_runner": true,
"representer": false,
"analyzer": false
}
}
if(CMAKE_Fortran_COMPILER_ID MATCHES "Intel") # Intel fortran
if(WIN32)
set (CCMAKE_Fortran_FLAG ${CCMAKE_Fortran_FLAGS} "/warn:all")
else()
set (CMAKE_Fortran_FLAGS ${CCMAKE_Fortran_FLAGS} "-warn all")
endif()
CCMAKE should be CMAKE, in three places.
FLAG should be FLAGS.
FFLAGS should be used instead of the internal CMake variable CMAKE_Fortran_FLAGS. Setting the latter effectively blocks FFLAGS from being used at all.
Lower in the file there is a comment which reads GFrotran (yes, with a typo). It should be fixed as well.
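Putting those fixes together, the corrected block would presumably look something like this (a sketch only, not verified against the track's actual build):

```cmake
# Intel Fortran: enable all compiler warnings on both Windows and Unix.
# Note CMAKE (not CCMAKE) and FLAGS (not FLAG) throughout.
if(CMAKE_Fortran_COMPILER_ID MATCHES "Intel")
  if(WIN32)
    set(CMAKE_Fortran_FLAGS "${CMAKE_Fortran_FLAGS} /warn:all")
  else()
    set(CMAKE_Fortran_FLAGS "${CMAKE_Fortran_FLAGS} -warn all")
  endif()
endif()
```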
TL;DR: At the end of Jan 2021, all tracks will enter v3 staging mode. Updates will no longer sync with the current live website, but instead sync with the staging website. The Fortran section of the v3 repo will be extracted and PR'd into this track (if appropriate). Further issues and information will follow over the coming weeks to prepare Fortran for the launch of v3.
Over the last 12 months, we've all been hard at work developing Exercism v3. Up until this point, all v3 tracks have been under development in a single repository - the v3 repository. As we get close to launch, it is time for us to explode that monorepo back into the normal track repos. Therefore, at the end of this month (January 2021), we will copy the v3 tracks contents from the v3 repository back to the corresponding track repositories.
As v3 tracks are structured differently than v2 tracks, the current (v2) website cannot work with v3 tracks. To prevent the v2 website from breaking, we'll disable syncing between track repositories and the website. This will effectively put v2 in maintenance mode, where any changes in the track repos won't show up on the website. This will then allow tracks to work on preparing for the Exercism v3 launch.
Where possible, we will script the changes needed to prepare tracks for v3. For any manual changes that need to be happening, we will create issues on the corresponding track repositories. We will be providing lots of extra information about this in the coming weeks.
We're really excited to enter the next phase of building Exercism v3, and to finally get it launched! 🙂
Note: the launch checklist has been made obsolete by a brand new Launch Guide: https://github.com/exercism/docs/blob/master/language-tracks/launch/README.md
Let's keep the general discussions around what needs to be done here in this issue, and open new, actionable issues for next steps.
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, we're introducing a new (optional) tool: the representer. The goal of the representer is to take a solution and return a representation, which is an extraction of a solution to its essence with normalized names, comments, spacing, etc., but still uniquely identifying the approach taken. Two different ways of solving the same exercise must not have the same representation.
Each representer is track-specific. When a new solution is submitted, we run the track's representer, which outputs two JSON files that describe the representation.
Once we have a normalized representation for a solution, a team of vetted mentors will look at the solution and comment on it (if needed). These comments will then automatically be submitted to each new solution with the same representation. A notification will be sent for old solutions with a matching representation.
Each track should build a representer according to the spec. For tracks building a representer from scratch, we have a starting guide.
The representer is an optional tool though, which means that if a track does not have a representer, it will still function normally.
In Exercism v3, we are making increased use of our v2 analyzers. Analyzers automatically assess students' submissions and provide mentor-style commentary. They can be used to catch common mistakes and/or do complex solution analysis that can't easily be done directly in a test suite.
Each analyzer is track-specific. When a new solution is submitted, we run the track's analyzer, which outputs a JSON file that contains the analysis results.
In v2, analyzer comments were given to a mentor to pass to a student. In v3, the analyzers will normally output directly to students, although we have added an extra key to output suggestions to mentors. If your track already has an analyzer, the only requisite change is updating the outputted copy to be student-facing.
Each track should build an analyzer according to the spec. For tracks building an analyzer from scratch, we have a starting guide.
The analyzer is an optional tool though, which means that if a track does not have an analyzer, it will still function normally.
Build a representer for your track according to the spec. Check this page to help you get started with building a representer.
Note that the simplest representer is one that merely returns the solution's source code.
It can be very useful to check how other tracks have implemented their representer.
Build an analyzer for your track according to the spec. Check this page to help you get started with building an analyzer.
It can be very useful to check how other tracks have implemented their analyzer.
If you want to build both, we recommend starting by building the representer for the following reasons:
TL;DR: the problem specification for the Bob exercise has been updated. Consider updating the test suite for Bob to match. If you decide not to update the exercise, consider overriding description.md.
Details
The problem description for the Bob exercise lists four conditions:
There's an ambiguity, however, for shouted questions: should they receive the "asking" response or the "shouting" response?
In exercism/problem-specifications#1025 this ambiguity was resolved by adding an additional rule for shouted questions.
If this track uses exercise generators to update test suites based on the canonical-data.json file from problem-specifications, then now would be a good time to regenerate 'bob'. If not, then it will require a manual update to the test case with input "WHAT THE HELL WERE YOU THINKING?".
See the most recent canonical-data.json file for the exact changes.
Remember to regenerate the exercise README after updating the test suite:
configlet generate . --only=bob --spec-path=<path to your local copy of the problem-specifications repository>
You can download the most recent configlet at https://github.com/exercism/configlet/releases/latest if you don't have it.
If, as track maintainers, you decide that you don't want to change the exercise, then please consider copying problem-specifications/exercises/bob/description.md into this track, putting it in exercises/bob/.meta/description.md
and updating the description to match the current implementation. This will let us run the configlet README generation without having to worry about the bob README drifting from the implementation.
This was previously a comment on #227, but I'm moving it to a new issue because I think it's specific to how the test module is made available when testing in the editor.
Changes made to the test module in #226 and #227 are not applied when testing in the editor. When testing an incorrect solution to Sieve, there's still the "An error occurred while running your tests. This might mean..." message — this is the issue that #227 was intended to solve. For Saddle Points (with correct and incorrect solutions), the message is:
We received the following error when we ran your code:
/tmp/saddle-points/saddle_points_test.f:::
| character(MAX_RESULT_STRING_LEN) :: s
|
Error: GNU Extension: Symbol max_result_string_len is used before it is typed at ()
/tmp/saddle-points/saddle_points_test.f:::
| function pa_to_s(p) result(s)
|
Error: Function result s at () has no IMPLICIT type
This suggests that the editor is using the updated version of saddle_points_test.f90, but not the latest TesterMain.f90. I can reproduce the same error when working locally by using the old TesterMain.f90 with the new saddle_points_test.f90.
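For context, gfortran reports "used before it is typed" when a name like MAX_RESULT_STRING_LEN is not in scope at the point where it is used in a declaration, which is exactly what happens if the test module is compiled against an old tester module that lacks the parameter. A minimal illustration (hypothetical module and file names, not the track's actual sources):

```fortran
module tester_main
  implicit none
  ! In the old tester this parameter did not exist, so test modules
  ! referencing it failed to compile with "Symbol ... is used before
  ! it is typed".
  integer, parameter :: MAX_RESULT_STRING_LEN = 64
end module tester_main

module saddle_points_test
  use tester_main
  implicit none
contains
  function pa_to_s(p) result(s)
    integer, intent(in) :: p
    ! Compiles only when MAX_RESULT_STRING_LEN is visible via the
    ! use-associated tester module above.
    character(MAX_RESULT_STRING_LEN) :: s
    write (s, '(i0)') p
  end function pa_to_s
end module saddle_points_test
```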
When testing an incorrect solution to High Scores in the editor, failed tests are double-counted, so the same issue applies to #226 (i.e., the solution is being tested by the pre-#226 version of the test module).
Originally posted by @simisc in #227 (comment)
In Exercism v3, we are making increased use of our v2 analyzers. Analyzers automatically assess students' submissions and provide mentor-style commentary. They can be used to catch common mistakes and/or do complex solution analysis that can't easily be done directly in a test suite.
Each analyzer is track-specific. When a new solution is submitted, we run the track's analyzer, which outputs a JSON file that contains the analysis results.
In v2, analyzer comments were given to a mentor to pass to a student. In v3, the analyzers will normally output directly to students, although we have added an extra key to output suggestions to mentors. If your track already has an analyzer, the only requisite change is updating the outputted copy to be student-facing.
The analyzer is an optional tool though, which means that if a track does not have an analyzer, it will still function normally.
Build an analyzer for your track according to the spec. Check this page to help you get started with building an analyzer.
It can be very useful to check how other tracks have implemented their analyzer.
If your track already has a working analyzer, please close this issue and ensure that the .status.analyzer key in the track config.json file is set to true.
There is some overlap between the goals of the representer and the analyzer. If you want to build both, we recommend starting by building the representer for the following reasons:
This issue is part of the migration to v3. You can read full details about the various changes here.
The configlet tool has a lint
command that checks if a track's configuration files are properly structured - both syntactically and semantically. Misconfigured tracks may not sync correctly, may look wrong on the website, or may present a suboptimal user experience, so configlet's guards play an important part in maintaining the integrity of Exercism.
We're updating configlet to work with v3 tracks, which have a different set of requirements than v2 tracks.
The full list of rules that will be checked by the linter can be found in this spec.
⚠ Note that only a subset of the linting rules has been implemented at this moment. This means that while your track may be passing the checks at this moment, it might fail later. We thus strongly suggest you keep this issue open until we let you know otherwise.
Ensure that the track passes all the (v3 track) checks defined in configlet lint
.
To help verify that the track passes all the linting rules, the v3 preparation PR has added a GitHub Actions workflow that automatically runs configlet lint
.
It is also possible to run configlet lint
locally by running the ./bin/fetch-configlet
(or ./bin/fetch-configlet.ps1
) script to download a local copy of the configlet binary. Once downloaded, you can then do ./bin/configlet lint
to run the linting on your own machine.
The track's config.json file must have a non-empty blurb property. This property should contain a short description (less than or equal to 400 characters) of the language. See https://github.com/exercism/docs/blob/main/building/tracks/config-json.md.
There are some issues with the stub:
The m declaration doesn't have the dimension that it needs (:)
(r or c) are wrong for both row and column (they should be switched as it stands)
The A declaration is not needed
Also, I feel that the dims should be reversed in the argument/tests. Although Fortran uses column-major storage order, I think most people still think of the first dimension as the "rows" dimension if you have a two-dimensional array (matrix). Wikipedia agrees:
in Fortran, arrays are stored in column-major order, while the array indexes are still written row-first (colexicographical access order)
As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback.
Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback. You can read more about this aspect of the new site here: http://mentoring.exercism.io/
To do this, we're going to need a lot more information about where we can find language enthusiasts.
In other words: where do people care a lot and/or know a lot about Fortran?
This is part of the project being tracked in exercism/meta#103
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, students can now choose to work on exercises directly from their browser, instead of having to download exercises to their local machine. The track-specific settings for the in-browser editor are defined in the top-level "online_editor" field in the track's config.json file. This field is defined as an object with two fields:
"indent_style": the indent style, either "space" or "tab".
"indent_size": the indent size, which is an integer (e.g. 4).
You can find a full description of these fields in the spec.
The "online_editor" field should be updated to correspond to the track's best practices regarding indentation.
"online_editor": {
"indent_style": "space",
"indent_size": 4
}
This issue is part of the migration to v3. You can read full details about the various changes here.
To get your track ready for Exercism v3, the following needs to be done:
This issue may be automatically added to over time. While track maintainers should check off completed items, please do not add/edit items in the list.
Some exercise README templates contain links to pages which no longer exist in v2 Exercism.
For example, C++'s README template had a link to /languages/cpp for instructions on running tests. The correct URLs to use can be found in the 'Still stuck?' sidebar of exercise pages on the live site. You'll need to join the track and go to the first exercise to see them.
Please update any broken links in the 'config/exercise_readme.go.tmpl' file, and run 'configlet generate .' to generate new exercise READMEs with the fixes.
Instructions for generating READMEs with configlet can be found at:
https://github.com/exercism/docs/blob/master/language-tracks/exercises/anatomy/readmes.md#generating-a-readme
Instructions for installing configlet can be found at:
https://github.com/exercism/docs/blob/bc29a1884da6c401de6f3f211d03aabe53894318/language-tracks/launch/first-exercise.md#the-configlet-tool
Tracking exercism/exercism#4102
Right now, I've started with funit.
Pros:
Cons:
Given the list of other options, I'm not sure I'm seeing anything right now that meets what I would like:
http://fortranwiki.org/fortran/show/Unit+testing+frameworks
Ideally, I want something that allows you to write valid Fortran (so it can be syntax-checked first) and requires as little setup for a user as possible (i.e. no new languages).
So I want to write this...
module hello_test
use hello
character(20) :: expected_greeting
function setup
expected_greeting = 'Hello, World!'
end function setup
function test_hello
assert_equals( expected_greeting, greet() )
end function test_hello
end module hello_test
and be able to run it like...
$ xfunit hello_test.f90
.
1 test passed, 1 assertion
This may require parsing the test Fortran module to scan for methods like setup/teardown & test_*.
Implement a track test suite that can run both locally and on Travis CI. The track test suite should verify that each exercise makes sense, by running the exercise tests against the example solution.
Definition of terms
Background
When implementing an exercise test suite, we want to provide a good user experience for the people writing a solution to the exercise. People should not be confused or overwhelmed.
In most Exercism language tracks, we simulate Test-Driven Development (TDD) by implementing the tests in order of increasing complexity. We try to ensure that each test either
Many test frameworks will randomize the order of the tests when running them. This is an excellent practice, which helps ensure that subsequent tests are not dependent on side effects from earlier tests. However, in order to simulate TDD we want tests to run in the order that they are defined, and we want them to fail fast, that is to say, as soon as the test suite encounters a failure, we want the execution to stop. This ensures that the person implementing the solution sees only one error or failure message at a time, unless they make a change which causes prior tests to fail.
This is the same experience that they would get if they were implementing each new test themselves.
Most testing frameworks do not have the necessary configuration options to get this behavior directly, but they often do have a way of marking tests as skipped or pending. The mechanism for this will vary from language to language and from test framework to test framework.
Whatever the mechanism—functions, methods, annotations, directives, commenting out tests, or some other approach—these are changes made directly to the test file. The person solving the exercise will need to edit the test file in order to "activate" each subsequent test.
Any tests that are marked as skipped will not be verified by the track test suite unless special care is taken.
Additionally, in some programming languages, the name of the file containing the solution is hard-coded in the test suite, and the example solution is not named in the way that we expect people to name their files.
We will need to temporarily (and programmatically) edit the exercise test suites to ensure that all of their tests are active. We may also need to rename the example solution file(s) in order for the exercise test suite to run against it.
Avoiding accidental git check-ins
It's important that if we rewrite files in any way during a test run, these changes do not accidentally get checked in to the git repository.
Therefore, many language tracks write the track test suite in such a way that it copies the exercise to a temporary location outside of the git repository before editing or rewriting the exercise files during a test run.
Working around long-running track test suites
Usually as people are developing the track, they're focused on a single exercise. If running the entire track test suite against all of the exercises takes a long time, it is often worth making it possible to verify just one exercise at a time.
Example build file
The PHP track has created a Makefile. The Ruby track uses Rake, which is a tool written in Ruby, allowing the track maintainers to write custom code in the language of the track to customize the build with a Rakefile.
Fortran 77 only supported fixed format. This track is focused on Fortran 90 and free format.
We should mention somewhere that fixed format is still in use in older legacy systems.
Ref:
https://en.wikibooks.org/wiki/Fortran/Beginning_Fortran#Free_Form_and_Fixed_Form
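A short side-by-side example might help make the distinction concrete (a sketch following the classic column rules; not proposed exercise content). In free form, statements may start in any column:

```fortran
! Free form (Fortran 90 and later): comments start with '!',
! statements may begin in any column.
program greet
  implicit none
  print *, 'Hello, World!'
end program greet
```

In fixed form, the layout itself is significant:

```fortran
C     Fixed form (Fortran 77): 'C' in column 1 marks a comment,
C     columns 1-5 are for labels, column 6 for continuation,
C     and statements begin in column 7.
      PROGRAM GREET
      PRINT *, 'Hello, World!'
      END
```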
There have been multiple unrelated PRs making small tweaks to CI/CD
We should take some time (after getting more exercises etc) to revisit it and see if there's a way to simplify it or make it less fragile
Please check if your documentation files are still up-to-date.
The key documentation files to check are:
docs/ABOUT.md
docs/INSTALLATION.md
docs/LEARNING.md
docs/RESOURCES.md
docs/TESTS.md
exercises/shared/.docs/help.md
exercises/shared/.docs/tests.md
There might be more.
To help identify invalid links, we've automatically checked the links of all *.md
files in this repo.
This is the report of that check:
📝 Summary
🔍 Total...........49
✅ Successful......49
⏳ Timeouts.........0
🔀 Redirected.......0
👻 Excluded.........0
🚫 Errors...........0
There are a number of things we're going to want to check before the v2 site goes live. There are notes below that flesh out all the checklist items.
The v2 site has a landing page for each track, which should make people want to join it. If the track page is missing, ping @kytrinyx
to get it added.
If the header of the page starts with TODO
, then submit a pull request to https://github.com/exercism/fortran/blob/master/config.json with a blurb
key. Remember to get configlet and run configlet fmt .
from the root of the track before submitting.
If the "About" section feels a bit dry, then submit a pull request to https://github.com/exercism/fortran/blob/master/docs/ABOUT.md with suggested tweaks.
In order to work well with the design of the new site, we're restricting the formatting of the ABOUT.md
. It can use:
Additionally:
<br/> can be used to split a paragraph into lines without spacing between them; however, this is discouraged.
If the code example is too short, too wide, too long, or too uninteresting, submit a pull request to https://github.com/exercism/fortran/blob/master/docs/SNIPPET.txt with a suggested replacement.
Where the v1 site has a long, linear list of exercises, the v2 site has organized exercises into a small set of required exercises ("core").
If you update the track config, remember to get configlet and run configlet fmt .
from the root of the track before submitting.
Core exercises unlock optional additional exercises, which can be filtered by topic and difficulty; however, that will only work if we add topics and difficulties to the exercises in the track config, which is in https://github.com/exercism/fortran/blob/master/config.json
We've currently made any hello-world exercises auto-approved in the backend of v2. This means that you don't need mentor approval in order to move forward when you've completed that exercise.
Not all tracks have a hello-world, and some tracks might want to auto approve other (or additional) exercises.
There are no bullet points for this one :)
As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback. Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback.
If you're interested in helping mentor the track, check out http://mentoring.exercism.io/
When all of the boxes are ticked off, please close the issue.
Tracking progress in exercism/meta#104
Add task id to json output
Hello lovely maintainers 👋
We've recently added "tags" to student's solutions. These express the constructs, paradigms and techniques that a solution uses. We are going to be using these tags for lots of things including filtering, pointing a student to alternative approaches, and much more.
In order to do this, we've built out a full AST-based tagger in C#, which has allowed us to do things like detect recursion or bit shifting. We've set things up so other tracks can do the same for their languages, but it's a lot of work, and we've determined that it may actually be unnecessary. Instead we think that we can use machine learning to achieve tagging with good enough results. We've fine-tuned a model that can determine the correct tags for C# from the examples with a high success rate. It's also doing reasonably well in an untrained state for other languages. We think that with only a few examples per language, we can potentially get some quite good results, and that we can then refine things further as we go.
I released a new video on the Insiders page that talks through this in more detail.
We're going to be adding a fully-fledged UI in the coming weeks that will allow maintainers and mentors to tag solutions and create training sets for the neural networks, but to start with, we're hoping you would be willing to manually tag 20 solutions for this track. In this post we'll add 20 comments, each with a student's solution and the tags our model has generated. Your mission (should you choose to accept it) is to edit the tags on each issue, removing any incorrect ones and adding any that are missing. In order to build one model that performs well across languages, it's best if you stick as closely as possible to the C# tags. Those are listed here. If you want to add extra tags, that's totally fine, but please don't arbitrarily reword existing tags, even if you don't like what Erik's chosen, as it'll just make it less likely that your language gets the correct tags assigned by the neural network.
To summarise - there are two paths forward for this issue:
If you tell us you're not able or willing to help, or if no comment is added, we'll automatically crowd-source this in a week or so.
Finally, if you have questions or want to discuss things, it would be best done on the forum, so the knowledge can be shared across all maintainers in all tracks.
Thanks for your help! 💙
Some tracks have added assertions to the exercise test suites that ensure that the solution has a hard-coded version in it.
In the old version of the site, this was useful, as it let commenters see what version of the test suite the code had been written against, and they wouldn't accidentally tell people that their code was wrong, when really the world had just moved on since it was submitted.
If this track does not have any assertions that track versions in the exercise tests, please close this issue.
If this track does have this bookkeeping code, then please remove it from all the exercises.
See exercism/exercism#4266 for the full explanation of this change.
In line with our new org-wide policy, the `master` branch of this repo will be renamed to `main`. All open PRs will be automatically repointed.
GitHub will show you a notification about this when you look at this repo after renaming:
In case it doesn't, this is the command it suggests:
git branch -m master main
git fetch origin
git branch -u origin/main main
You may like to update the primary branch on your forks too, which you can do under Settings->Branches and clicking the pencil icon on the right-hand-side under Default Branch:
We will post a comment below when this is done. We expect it to happen within the next 12 hours.
Test runner needs correctly formatted output from TesterMain
see https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md
{
"version": 2,
"status": "fail",
"message": null,
"tests": [
{
"name": "Test that the thing works",
"status": "fail",
"message": "Expected 42 but got 123123",
"output": "Debugging information output by the user",
"test_code": "assert_equal 42, answerToTheUltimateQuestion()"
}
]
}
The current output is missing the "message" and "test_code" items:
{ "name" : "Test 23: non-question ending with whitespace",
"status": "fail" }
],
"version": 2,
"status": "fail"
}
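As a quick illustration, the expected shape can be checked mechanically. Below is a minimal sketch in Python (not part of the track's tooling) that reports which keys each test entry lacks, treating `message` and `test_code` as required per the interface above:

```python
import json

# Per-test keys named in the test-runner interface shown above.
REQUIRED_TEST_KEYS = {"name", "status", "message", "test_code"}

def missing_keys(results_json: str) -> dict:
    """Map each test's name to the spec keys its entry lacks."""
    results = json.loads(results_json)
    report = {}
    for i, test in enumerate(results.get("tests", [])):
        missing = sorted(REQUIRED_TEST_KEYS - test.keys())
        if missing:
            report[test.get("name", f"test {i}")] = missing
    return report

# The truncated excerpt above, reduced to its single test entry.
current = json.dumps({
    "version": 2,
    "status": "fail",
    "tests": [{"name": "Test 23: non-question ending with whitespace",
               "status": "fail"}],
})
print(missing_keys(current))
# → {'Test 23: non-question ending with whitespace': ['message', 'test_code']}
```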
This issue is part of the migration to v3. You can read full details about the various changes here.
Exercism v3 introduces a new type of exercise: Concept Exercises. All existing (V2) exercises will become Practice Exercises.
Concept Exercises and Practice Exercises are linked to each other via Concepts. Concepts are taught by Concept Exercises and practiced in Practice Exercises. Each Exercise (Concept or Practice) has prerequisites, which must be met to unlock an Exercise - once all the prerequisite Concepts have been "taught" by a Concept Exercise, the exercise itself becomes unlocked.
For example, in some languages completing the Concept Exercises that teach the "String Interpolation" and "Optional Parameters" concepts might then unlock the `two-fer` Practice Exercise.
Each Practice Exercise has two fields containing concepts: a `practices` field and a `prerequisites` field.
The `practices` key should list the slugs of Concepts that this Practice Exercise actively allows a student to practice. Some Concepts are broad and practiced by many exercises (e.g. `strings`). In those cases we recommend choosing a few good exercises that make people think about those Concepts in interesting ways. For example, exercises that require UTF-8, string concatenation, char enumeration, etc., would all be good examples.
The `prerequisites` key lists the Concepts whose Concept Exercises a student must have completed in order to access this Practice Exercise, e.g. `strings`, `optional-params`, `implicit-return`.
Where an exercise could be solved using multiple approaches (e.g. `loops` or `recursion`), the maintainer should choose the one approach that they would like to unlock the Exercise, considering the student's journey through the track. In the loops/recursion example, they might think this exercise is good early practice of `loops`, or they might prefer to leave it until later to teach recursion. They can also make use of an analyzer to prompt the student to try an alternative approach: "Nice work on solving this via loops. You might also like to try solving this using recursion."
Although ideally all Concepts should be taught by Concept Exercises, we recognise that it will take time for tracks to achieve that. Any Practice Exercises that have prerequisites which are not taught by Concept Exercises will become unlocked once the final Concept Exercise has been completed.
The `"practices"` field of each element in the `"exercises.practice"` field in the `config.json` file should be updated to contain the practice concepts. See the spec.
To help with identifying the practice concepts, the `"topics"` field can be used (if it has any contents). Once practices have been defined for a Practice Exercise, the `"topics"` field should be removed.
Each practice concept should have its own entry in the top-level `"concepts"` array. See the spec.
The `"prerequisites"` field of each element in the `"exercises.practice"` field in the `config.json` file should be updated to contain the prerequisite concepts. See the spec.
To help with identifying the prerequisites, the `"topics"` field can be used (if it has any contents). Once prerequisites have been defined for a Practice Exercise, the `"topics"` field should be removed.
Each prerequisite concept should have its own entry in the top-level `"concepts"` array. See the spec.
{
"exercises": {
"practice": [
{
"uuid": "8ba15933-29a2-49b1-a9ce-70474bad3007",
"slug": "leap",
"name": "Leap",
"practices": ["if-statements", "numbers", "operator-precedence"],
"prerequisites": ["if-statements", "numbers"],
"difficulty": 1
}
]
}
}
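Since every slug in `practices` and `prerequisites` must correspond to an entry in the top-level `"concepts"` array, a quick consistency check can catch omissions before configlet does. A minimal sketch in Python (a hypothetical helper, not part of configlet):

```python
import json

def unknown_concepts(config_json: str) -> set:
    """Slugs referenced by practices/prerequisites but absent from the concepts array."""
    config = json.loads(config_json)
    known = {c["slug"] for c in config.get("concepts", [])}
    referenced = set()
    for exercise in config.get("exercises", {}).get("practice", []):
        referenced.update(exercise.get("practices", []))
        referenced.update(exercise.get("prerequisites", []))
    return referenced - known

# Abbreviated config: "numbers" is referenced but has no concepts entry.
cfg = json.dumps({
    "concepts": [{"uuid": "...", "slug": "if-statements", "name": "If Statements"}],
    "exercises": {"practice": [{"slug": "leap",
                                "practices": ["if-statements", "numbers"],
                                "prerequisites": ["if-statements"]}]},
})
print(sorted(unknown_concepts(cfg)))  # → ['numbers']
```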
https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md#message
TesterLib writes expected_results.json incorrectly. The spec requires `message = null` when a test passes (status = pass), or no message key at all.
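The rule above can be expressed as a small check. A sketch in Python (hypothetical, assuming the runner's JSON output has been read into a string):

```python
import json

def bad_pass_messages(results_json: str) -> list:
    """Names of passing tests that carry a non-null message, violating the spec."""
    results = json.loads(results_json)
    return [t["name"]
            for t in results.get("tests", [])
            if t.get("status") == "pass" and t.get("message") is not None]

sample = json.dumps({
    "version": 2,
    "status": "pass",
    "tests": [{"name": "t1", "status": "pass", "message": "should be null"}],
})
print(bad_pass_messages(sample))  # → ['t1']
```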
We have decided to require all file-based tracks to provide stubs for their exercises.
The lack of stub files generates an unnecessary pain point within Exercism, contributing a significant proportion of support requests, making things more complex for our students, and hindering our ability to automatically run test suites and provide automated analysis of solutions.
We believe that it's essential to understand error messages, know how to use an IDE, and create files. However, getting this right as you're just getting used to a language can be a frustrating distraction, as it can often require a lot of knowledge that tends to seep in over time. At the start, it can be challenging to google for all of these details: what file extension to use, what needs to be included, etc. Getting people up to speed with these things is not Exercism's focus, and we've decided that we are better served by removing this source of confusion, letting people get on with actually solving the exercises.
The original discussion for this is at exercism/discussions#238.
Therefore, we’d like this track to provide a stub file for each exercise.
Run configlet lint
The lint command is under development.
Please re-run this command regularly to see if your track passes the latest linting rules.
Missing file:
/home/runner/work/fortran/fortran/docs/RESOURCES.md
Missing file:
/home/runner/work/fortran/fortran/docs/TESTS.md
Configlet detected at least one problem.
For more information on resolving the problems, please see the documentation:
https://github.com/exercism/docs/blob/main/building/configlet/lint.md
Error: Process completed with exit code 1.
We currently have 14 exercises; the target is 20. The following look fairly easy to do with Fortran (a few string operations):
Suggestions from @SaschaMann and Angelika Tyborska
Each track needs a file that contains track-specific instructions on how to manually run the tests. The contents of this document are only presented to the student when using the CLI. This file lives at `exercises/shared/.docs/tests.md`. You almost certainly already have this information, but need to move it to the correct place.
For v2 tracks, this information was (usually) included in the readme template found at `config/exercise_readme.go.tmpl`. As such, tracks can extract the test instructions from the `config/exercise_readme.go.tmpl` file to the `exercises/shared/.docs/tests.md` file.
See https://github.com/exercism/csharp/pull/1557/files for an example PR.
Each track needs a file that contains track-specific instructions on how to get help. The contents of this document are only presented to the student when using the CLI. This file lives at `exercises/shared/.docs/help.md`. You almost certainly already have this information, but need to move it to the correct place.
For v2 tracks, this information was (usually) included in the readme template found at `config/exercise_readme.go.tmpl`. As such, tracks can extract the help instructions from the `config/exercise_readme.go.tmpl` file to the `exercises/shared/.docs/help.md` file.
See https://github.com/exercism/csharp/pull/1557/files for an example PR.
Hello,
I thought I'd have a look at the Fortran track, and started the hello world program according to the instructions on the website:
exercism download --exercise=hello-world --track=fortran
Following the instructions in `hello_world_test.f90`, I get the following error:
❯ cmake ..
CMake Error at CMakeLists.txt:31 (file):
file COPY cannot find
"/Users/funnellt/Exercism/fortran/hello-world/../../testlib".
CMake Error at CMakeLists.txt:34 (add_subdirectory):
add_subdirectory given source "testlib" which is not an existing directory.
-- Configuring incomplete, errors occurred!
See also "/Users/funnellt/Exercism/fortran/hello-world/Debug/CMakeFiles/CMakeOutput.log".
And `testlib` is of course nowhere to be found. I could try to fix this if someone points me in the right direction.