exercism / python

Exercism exercises in Python.

Home Page: https://exercism.org/tracks/python

License: MIT License

Python 89.83% Shell 0.24% Jinja 9.92%
Topics: exercism-track, community-contributions-paused

python's Issues

Allow creative error messages

In several exercises the tests require that a solution raise an error for bad input.
Usually these tests also demand a specific error message; for example, this test from the octal exercise:

def test_8_is_seen_as_invalid(self):
    self.assertRaisesRegexp(ValueError, "^Invalid octal digit: 8$",
                            Octal, "8")

It demands that the error message is exactly "Invalid octal digit: 8".

This precludes students from coming up with good error messages of their own, which I believe to be an important skill in its own right.
In this example, a better error message might point to the index of the bad digit or might be more explicit about which digits are valid etc.

I think that better tests would only demand that a specific type of exception is raised (using assertRaises) or that maybe a certain keyword must be included in the error message (with assertRaisesRegexp).
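
A minimal sketch of what a looser test could look like, inside the existing test class (the "octal" keyword check is only an illustration, not a proposed convention):

def test_8_is_seen_as_invalid(self):
    # only require that the right exception type is raised ...
    self.assertRaises(ValueError, Octal, "8")

def test_8_is_seen_as_invalid_message(self):
    # ... or, slightly stricter, that the message mentions "octal"
    self.assertRaisesRegexp(ValueError, "octal", Octal, "8")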

What does everyone else think about this?

This might also be an issue in other languages…

Running All Unit Tests

I keep finding myself wondering what problem I was working on last. I thought it might be a good idea to run all of the unit tests to figure out where I am. I tried using the command python -m unitest discover and python -m unittest discover . '*_test.py', but none of the tests run.

Am I using the command wrong? Is there another way to run all the unit tests?
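
For what it's worth, unittest's default discovery pattern is test*.py, while this track names its files <exercise>_test.py, so the pattern has to be passed explicitly. Something like this may work (under Python 2, subdirectories may also need an __init__.py to be discovered):

# default pattern is 'test*.py', so point discovery at '*_test.py'
python -m unittest discover -s . -p '*_test.py'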

bob_test.py: invalid syntax in python3

All strings are unicode in python3, so u"..." is a syntax error. Patch:

--- bob_test.py~        2014-04-05 02:30:58.973471400 -0400
+++ bob_test.py 2014-04-05 02:43:40.582846400 -0400
@@ -72,12 +72,12 @@

     def test_shouting_with_umlauts(self):
         self.assertEqual(
-            'Woah, chill out!', self.bob.hey(u"\xdcML\xc4\xdcTS!")
+            'Woah, chill out!', self.bob.hey("\xdcML\xc4\xdcTS!")
         )

     def test_calmly_speaking_with_umlauts(self):
         self.assertEqual(
-            'Whatever.', self.bob.hey(u"\xdcML\xe4\xdcTS!")
+            'Whatever.', self.bob.hey("\xdcML\xe4\xdcTS!")
         )

     def test_shouting_with_no_exclamation_mark(self):

This patch might not work in python 2, but who cares. We should only be officially supporting python 3, anyway, since all new code should be written in python 3, and people new to python should be learning python 3. And anyway, python 3 is much nicer when it comes to handling different character encodings and raw byte sequences.

acronym: implement new exercise

The exercise acronym (md, yml) is a very easy one: ''.join(word[0].upper() for word in words.split()). Should we implement it, or should we add it to the "foregone" section of config.json?

Reworking Allergies

Looking through Allergies, I think it may make sense to do a rewrite. The exercise seems to be intended to practice using binary to represent state, but in execution we really use the Allergies class like a container. It's really a set. We look up allergies with the is_allergic_to method, with the allergy as a key. If I were writing the class, I'd probably make it work along these lines:

allergies = ['eggs', 'peanuts', 'shellfish', 'strawberries', 'tomatoes', 'chocolate', 'pollen', 'cats']
example = Allergies(*allergies)

if 'eggs' in example:
    print 'I am allergic to eggs'

if 'dogs' not in example:
    print 'I am not allergic to dogs'

example.add('dogs')
if 'dogs' in example:
    print 'I am allergic to dogs'

allergy_list = list(example)

Where the value of allergy_list is:
['eggs', 'peanuts', 'shellfish', 'strawberries', 'tomatoes', 'chocolate', 'pollen', 'cats', 'dogs']
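
Purely as an illustration of the interface above, a minimal sketch of a container-backed class (not a proposed example.py):

class Allergies(object):
    def __init__(self, *allergens):
        self._allergens = list(allergens)   # a list keeps insertion order

    def __contains__(self, allergen):
        return allergen in self._allergens

    def __iter__(self):
        return iter(self._allergens)

    def add(self, allergen):
        if allergen not in self._allergens:
            self._allergens.append(allergen)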

I'm a bit conflicted by this exercise. It causes problems with order as seen in #186 and makes adding or removing allergies harder than it should be.

Does anyone have any input?

Add missing stub/skeleton files

I only worked on the Python and Go tracks and I noticed a small difference. In the Go track (at least for the first 5 problems) there are always stub files, while in the Python track there are only stub files for hello-world and bob.
Maybe not every exercise needs a stub file, but most of them should have one, and definitely the first few. It would make the start easier and support TDD, because you wouldn't get ImportError: No module named <exercise> when running the tests directly.
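
For illustration, a stub only needs to define the names the test file imports; the names below are hypothetical:

# <exercise>.py -- lets <exercise>_test.py import it and report failures
# instead of an ImportError
def some_required_function():
    pass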

Python test files should not mask ImportErrors

Copied from exercism/exercism#1376 - reported by @hop

Please see the original discussion for details.


In case there is an import error in the implementation of the exercise, the misleading error message 'Could not find wordcount.py. Does it exist?' is emitted and the original error message is hidden.

I'm not entirely sure what a good alternative would look like, but something like this would be a start:

try:
    from foo import Bar
except ImportError as e:
    raise SystemExit('Could not import foo.py. '
                     'This was the error: ' + str(e))

Error when fetching anagram

Got this message when trying to run exercism fetch after submitting twelve-days:

Error parsing API response: [invalid character '<' looking for beginning of value]

Python3 compatibility

While many students already solve assignments with Python3, we currently don't know which test suites actually run with Python3 and which require manual adaptation by the students.

To fix this, I propose that we (@kytrinyx?) create a new development branch "py3k" and take the following steps:

  1. Configure Travis to use Python versions 2.7, 3.3 and, once available, 3.4.
  2. Determine from the build logs which exercises have compatibility issues and fix these one by one. Both the test suites and the example implementation should be fully compatible!
  3. Change the setup recommendation to include Python3.

Once this is done it should be possible to simply replace the master branch by py3k.
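
For step 1, the Travis configuration would presumably boil down to something like this (the script line is a placeholder; the issue doesn't specify how the tests are run, and "3.4" gets added to the list once Travis offers it):

language: python
python:
  - "2.7"
  - "3.3"
script: ./check-exercises  # placeholder for whatever runs all test suites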

Testing Text-Heavy Exercises

I'm working on the Twelve Days problem right now, and it's reminding me of something that's been annoying me as I've worked through the exercises.

The default AssertionError that prints to the terminal contains a diff of the two bodies of text, but the diff is hard to parse when you're dealing with large bodies of text, and what's extra/missing from your program's output isn't immediately apparent.

Is there another way to present test output for these kinds of exercises? Is there some extension of UnitTest that gives more useful feedback?
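
One small thing that may already help, assuming the tests use plain assertEqual on strings: unittest truncates long diffs by default, and setting maxDiff on the test case shows the whole thing. The names below are placeholders for this sketch:

import unittest

class TwelveDaysTest(unittest.TestCase):
    maxDiff = None  # show the complete diff instead of a truncated one

    def test_whole_song(self):
        # sing() and EXPECTED_SONG stand in for the student's function
        # and the expected text
        self.assertEqual(EXPECTED_SONG, sing())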

Change sum-of-multiples exercise to require a function instead of a class

At the core of the exercise is a simple mapping from a collection of numbers to a sum of multiples of those factors. This can be represented as a simple mathematical function and doesn't require any manipulation of an object.

The test suite should therefore require (and import) a function sum_of_multiples instead of a class.
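
Roughly, the tests would then look like this (the exact signature is only a suggestion):

import unittest
from sum_of_multiples import sum_of_multiples

class SumOfMultiplesTest(unittest.TestCase):
    def test_multiples_of_3_and_5_up_to_20(self):
        # 3 + 5 + 6 + 9 + 10 + 12 + 15 + 18
        self.assertEqual(78, sum_of_multiples(20, [3, 5]))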

For some discussion regarding the exercise see #139.

The README for octal is ruby-specific

This is the current README.md for the octal exercise as downloaded by exercism fetch python octal:

# Octal

Write a program that will convert a octal number, represented as a string (e.g. '1735263'), to its decimal equivalent using first principles (i.e. no, you may not use built-in ruby libraries or gems to accomplish the conversion).

The program should consider strings specifying an invalid octal as the value 0.

Tests are provided, delete one `skip` at a time.


## Source

All of Computer Science [view source](http://www.wolframalpha.com/input/?i=base+8)

What do I have to do to adapt it for Python?

Bob readme does not say how to run tests

I'm seeing many, many submissions to the bob problem that are malformed because they use class Bob instead of def bob. If the submitters ran the tests, this would be caught immediately.

As the first exercise, the bob README should give the command that runs the tests (python3 bob_test.py).

word-count: Normalization independent testing

When @wobh added a test for normalization in #207 @kytrinyx noted the following:

I like that it leaves the choice up to the implementer as to how to do normalization.

Sadly this is only true for that single test case; the other 10 tests all assume lower-case normalization.

The best fix would be if Python or a built-in package like unittest provided a case-insensitive string comparison function or something like it. I couldn't find one, but there has to be something; we can't be the first to have this problem.

I wrote a proof of concept, but since we can't ship additional modules with the CLI and it only works with Python 3, it doesn't do the trick.
https://gist.github.com/behrtam/2894facca0f35f642300
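
For the record, the rough idea is a helper that re-keys both the expected and the actual counts with one canonical fold before comparing, so it no longer matters which normalization the solution picked (illustrative only; not part of the gist above):

def fold_counts(counts):
    # re-key with lower case so lower-, upper- or title-case normalization
    # in the solution all compare equal
    return {word.lower(): n for word, n in counts.items()}

# in a test: self.assertEqual(fold_counts(expected), fold_counts(word_count(text)))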

word_count: test mixed case

I just saw a solution to the word_count exercise that, in my opinion, shouldn't pass all the tests. There is already one mixed-case test, but its specific word order lets this solution slip through:

def word_count(input):
    list = {}
    for word in input.split():
        if word in list:
            list[word]+=1
        elif word.lower() in list:
            list[word.lower()]+=1
        else:
            list[word]=1
    return list

>>> word_count('GO go Go')
{'GO': 1, 'go': 2}
>>> word_count('go Go GO')
{'go': 3}
>>>
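
A test along these lines would catch it; the upper-case variant has to come before the lower-case one, and the expected result assumes the lower-case normalization the other tests already use:

def test_mixed_case_unfavourable_order(self):
    # 'GO' appears before 'go', which trips up solutions that only
    # normalize a word once its lower-case form has been seen
    self.assertEqual({'go': 3}, word_count('GO go Go'))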

Is Sublist correct

Is this test actually correct?

def test_spread_sublist(self):
    multiples_of_3 = list(range(3, 200, 3))
    multiples_of_15 = list(range(3, 200, 15))
    self.assertEqual(UNEQUAL,
                     check_lists(multiples_of_15, multiples_of_3))

Shouldn't it be a sublist?
@sjakobi @betegelse

Inconsistency in Python Test for Bob Exercise

I was coding through the exercises and I noticed that I couldn't really pass a certain test:

def test_calmly_speaking_with_umlauts(self):
    self.assertEqual(
        'Whatever.', bob.hey('ÜMLäÜTS!')
    )

The code that I used:

    if what.isupper() or what[-1:] == "!" : 
        return "Whoa, chill out!"

According to the readme, if we yell at him (assuming that by yelling, you mean that the input either ends with an exclamation mark or is in upper case), Bob should respond with 'Whoa, chill out!'. Seeing as the input for the above test contains both uppercase letters and an exclamation mark, I don't understand why the expected response would be 'Whatever.'.

.. anyways, @kytrinyx, I really like the Exercism project and your book. Wish you all the best c:

Small text error in JS readme files

All of the ones I've downloaded say

Execute the tests with:

```bash
$ jasmine-node bob_test.spec.js
```

regardless of the actual problem test case filename.

I'd propose just leaving it at $ jasmine-node . which I run anyway.

Otherwise, great concept! Really having fun with the exercises.

Times and Multiplied By in Wordy

Do we really need to have both "times" and "multiplied by" supported in the Wordy exercise? It seems like one or the other would suffice.

Allergies test case issue

https://github.com/exercism/xpython/blob/master/allergies/allergies_test.py#L26
... to ...
https://github.com/exercism/xpython/blob/master/allergies/allergies_test.py#L36

TC: test_allergic_to_just_peanuts -- self.assertEqual(['peanuts'], Allergies(2).list)
TC: test_allergic_to_everything -- sum([2**x for x in range(0, 8)]) == 255
TC: test_ignore_non_allergen_score_parts -- self.assertEqual(['eggs'], Allergies(257).list)

257 - 255 == 2 == ['peanuts']
257 - 2**8 == 1 == ['eggs']

The last test case, where 257 is passed in, should arguably give the same result as the test case on line 26.

Ignore all tests except the first one by default

It'd be useful to have skip decorators (@unittest.skip) that ignore all but the first test. This way you can make the first test pass, delete the next skip decorator, make the code pass that test, and so on, instead of being overloaded by a barrage of failing tests.

I've noticed that this approach seems to be used in some language tracks (e.g. Rust), but it would be useful if this were standard across all tracks. I do this manually now, but it would be nice to have it out of the box. Another approach is to group tests for similar functionality together (i.e. in different TestCase classes) and then ignore all but the first class. That way you don't get error messages about functionality you're not yet trying to deal with.
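
For illustration, this is what it could look like in a test file, using the standard @unittest.skip decorator (none of this is an agreed-upon convention):

import unittest

class ExampleTest(unittest.TestCase):
    def test_first_thing(self):
        self.assertEqual(1, 1)

    @unittest.skip("remove this line once the test above passes")
    def test_second_thing(self):
        self.assertEqual(2, 2)

if __name__ == '__main__':
    unittest.main()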

Why do we skip tests?

The following exercises contain tests marked with the decorator
@unittest.skipUnless('NO_SKIP' in os.environ, "Not implemented yet"):

  • minesweeper
  • ocr-numbers
  • pascals-triangle
  • secret-handshake
  • wordy

In order to run all tests in these exercises the user is expected either to comment out/delete the decorators or to set the NO_SKIP environment variable.
While both methods are trivial on *NIX systems with the necessary knowledge of environment variables or a tool like awk, they are more troublesome on a computer running Windows.
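
For reference, setting the variable just for one test run looks something like this (the Windows variant assumes cmd.exe):

# *NIX shells
NO_SKIP=1 python wordy_test.py

# Windows cmd.exe
set NO_SKIP=1
python wordy_test.py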

I must admit that I've never quite understood why exactly we skip some tests anyway.
Can somebody explain this? What would be lost if we simply delete all these decorators?

Bob Tests produces different results between 2/3

Under Python 2: 'ÜMLäÜTS!'.isupper() is True, but u'ÜMLäÜTS!'.isupper() is False.

Under Python 3: 'ÜMLäÜTS!'.isupper() is False

The problem is that the string is using precomposed characters and under Python 2, str is encoded in the source code encoding of the tests (UTF-8).

Overhaul the Ocr Numbers exercise

The current test suite is pretty rudimentary compared to those on other language tracks like Go or Haskell.

Also, the ASCII numbers in the test suite should be well formatted.

There's some discussion regarding the exercise here: #137

Name nucleobases, not nucleosides

The primary nucleobases are cytosine (DNA and RNA), guanine (DNA and RNA), adenine (DNA and RNA), thymine (DNA) and uracil (RNA), abbreviated as C, G, A, T, and U, respectively. Because A, G, C, and T appear in the DNA, these molecules are called DNA-bases; A, G, C, and U are called RNA-bases. - Wikipedia

In other words, we should rename the values in the RNA transcription problem to reflect the following:

  • cytidine -> cytosine
  • guanosine -> guanine
  • adenosine -> adenine
  • thymidine -> thymine
  • uridine -> uracil

Make all exercises PEP8 compliant except line length

I would like to make all exercises PEP8 compliant except for line length (E501), like I already did with secret-handshake in 14b726c. I'm not sure what the best way to do this is: one pull request for everything, or a separate pull request per non-compliant exercise.

Exercises that are not compliant (roughly 5 problems each on average): ['wordy', 'word-count', 'poker', 'simple-cipher', 'nucleotide-count', 'nth-prime', 'ocr-numbers', 'house', 'sublist', 'pythagorean-triplet', 'roman-numerals', 'meetup', 'strain', 'saddle-points', 'secret-handshake', 'minesweeper', 'accumulate', 'triangle', 'bob', 'pascals-triangle', 'phone-number']
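
For checking, something along these lines should do it (assuming the flake8/pep8 checker, which the issue doesn't name):

# report everything except line-length (E501) violations
flake8 --ignore=E501 .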

Delete configlet binaries from history?

I made a really stupid choice a while back to commit the cross-compiled
binaries for configlet (the tool that sanity-checks the config.json
against the implemented problems) into the repository itself.

Those binaries are HUGE, and every time they change the entire 4 or 5 megs get
recommitted. This means that cloning the repository takes a ridiculously long
time.

I've added a script that can be run on travis to grab the latest release from
the configlet repository (bin/fetch-configlet), and travis is set up to run
this now instead of using the committed binary.

I would really like to thoroughly delete the binaries from the entire git
history, but this will break all the existing clones and forks.

The commands I would run are:

# ensure this happens on an up-to-date master
git checkout master && git fetch origin && git reset --hard origin/master

# delete from history
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bin/configlet-*' --prune-empty

# clean up
rm -rf .git/refs/original/
git reflog expire --all
git gc --aggressive --prune

# push up the new master, force override existing master branch
git push -fu origin master

If we do this everyone who has a fork will need to make sure that their master
is reset to the new upstream master:

git checkout master
git fetch upstream master
git reset --hard upstream/master
git push -fu origin master

We can at-mention (@) all the contributors and everyone who has a fork here in this
issue if we decide to do it.

The important question though, is: Is it worth doing?

Do you have any other suggestions of how to make sure this doesn't confuse people and break their
repository if we do proceed with this change?

largest-series-product - invalid test case

The following test case for this exercise doesn't appear to be valid:

    def test_identity(self):
        self.assertEqual(1, largest_product("", 0))

In the previous 'series' exercise, it was defined that passing a length argument of 0 should raise a ValueError. But in this case, a return value is expected. Further, the expected return value is 1, and I don't see any reason why 1 would be the proper response given these arguments.

How to set up a local dev environment

See issue exercism/exercism#2092 for an overview of Operation Welcome Contributors.


Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.

The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.

The README here should be language-specific, and can point to the contributing
guide for more context.

From the OpenHatch guide:

Here are common elements of setting up a development environment you’ll want your guide to address:

Preparing their computer
Make sure they’re familiar with their operating system’s tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch’s command line tutorial can be found here.
If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.
List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.

Downloading the source
Give detailed instructions on how to download the source of the project, including common missteps or obstacles.

How to view/test changes
Give instructions on how to view and test the changes they’ve made. This may vary depending on what they’ve changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.

Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.

Faulty test for Grade School

Problem

The README for the Grade School exercise states that the School object should be able to

[g]et a sorted list of all students in all grades. Grades should sort as 1, 2, 3, etc., and students within a grade should be sorted alphabetically by name.

However, the corresponding test does not enforce sorting by grade, since dict objects are unordered:

sorted_students = {
    3: ("Kyle",),
    4: ("Christopher", "Jennifer",),
    6: ("Kareem",)
}
self.assertEqual(sorted_students, self.school.sort())

Goal

This test should enforce the grade-wise ordering of the result of School.sort as stated in the README.

Proposed Solutions

I can think of several possible solutions:

  • change the sorted_students variable to a list of tuples, i.e.
    sorted_students = [(3, ("Kyle",)), (4, ("Christopher", "Jennifer")), (6, ("Kareem",))],
  • compare against an analogous OrderedDict, which would teach users about the collections module in the standard library,
  • pass the result of the sort() call to an OrderedDict, then make sure the result is correct (with from collections import OrderedDict at the top of the test file), i.e.
sorted_students = OrderedDict([(3, ("Kyle",)),
                               (4, ("Christopher", "Jennifer")),
                               (6, ("Kareem",))])
self.assertEqual(sorted_students, OrderedDict(self.school.sort()))

I'm partial to this last solution, since it allows sort to return any ordered iterable where each element is a 2-element iterable of the form (grade, student_tuple). I'd say this is pretty Pythonic; we don't care whether the return value is a list, tuple, OrderedDict, generator, etc., just that it's an ordered iterable with the values in the correct order.


Reorder problem list in config.json, deprecate some

The current order of the exercises doesn't seem to make much sense - "leap" and "etl" come very late, "hamming" and "point-mutations" are basically the same, etc.

For the reordering, are there other important criteria apart from difficulty?

I'd also like to focus the nitpicking community by deprecating some exercises whose core problem appears in other exercises.

gigasecond: use times (not dates) for inputs and outputs

A duration of a gigasecond should be measured in seconds, not
days.
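
In Python terms, the expected behaviour presumably amounts to something like this (the function name and signature here are only assumptions for the example):

from datetime import datetime, timedelta

def add_gigasecond(moment):
    # a gigasecond is exactly 10**9 seconds
    return moment + timedelta(seconds=10**9)

# e.g. add_gigasecond(datetime(2011, 4, 25)) == datetime(2043, 1, 1, 1, 46, 40)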

The gigasecond problem has been implemented in a number of languages,
and this issue has been generated for each of these language tracks.
This may already be fixed in this track, if so, please make a note of it
and close the issue.

There has been some discussion about whether or not gigaseconds should
take daylight savings time into account, and the conclusion was "no", since
not all locations observe daylight savings time.

accumulate: Remove deprecation status

The exercise accumulate is currently deprecated (config.json#L58), and I would like to change that. The only hint to the reason for the deprecation I could find was in the example.py:

# [op(x) for x in seq] would be nice but trivial

The README.md lists map() as restricted, which leaves us with a for loop or a list comprehension. While this might seem trivial (the xgo version, for example, looks just as trivial), I think we can score two good learning points here:

  • Most people who are new to Python don't know list comprehensions yet and will most likely start with a for loop, then receive a nit suggesting they look into list comprehensions.
  • Some people might not know that Python has anonymous functions (lambda) which they can discover in the test suite.

If we put this exercise towards the start of the track it might help some people to create simpler solutions for other exercises by applying those two concepts.

Implement Python exercises

Copied from exercism/exercism#272 reported by @BrianHicks


This is a placeholder issue for converting ruby/clojure exercises to Python.

(I'm thinking ruby/clojure since Python is a mix of OO and functional, depending on what's most appropriate)

Exercism fetch does not pull secret-handshake

From the CLI, if I have an empty secret-handshake folder and run exercism fetch, the test is not populated. exercism fetch python secret-handshake does work, though. Not sure if this is a CLI or xpython issue.

Allergies' "test_allergic_to_everything" implicitly requires a specific list ordering

def test_allergic_to_everything(self):
    self.assertEqual(
        ('eggs peanuts shellfish strawberries tomatoes '
         'chocolate pollen cats').split(),
        Allergies(255).list)

This requires that Allergies(255).list return items in exactly the same order as they are written above. If the order is not important, assertItemsEqual() is better. Unfortunately, in Python 3 this was renamed to assertCountEqual(), which does not exist in Python 2.7. So perhaps assertEqual(sorted(expected), sorted(actual)) would in fact be the best solution here.

Alternatively, if the requirement for identical ordering is intentional, it should be made clearer using assertSequenceEqual(). At present, the README merely states that it should return "All the allergens Tom is allergic to."

I believe this may also affect other Python exercises.

I am happy to submit a pull request for said changes, but would first like to know if there is any consensus on exactly what the correct behaviour should be: to require identical ordering or not?

Prefer functional style solutions where classes aren't necessary

As @0xae has demonstrated in #75, there are quite a few exercises that currently demand a class-based solution although they could be solved more idiomatically in a functional style.

I have singled out the following exercises that could be adapted in the same way as the bob exercise - although a few of these cases may be debatable:

  • rna-transcription
  • word-count
  • anagram
  • beer-song
  • nucleotide-counts
  • series
  • largest-series-product
  • octal
  • point-mutations
  • leap
  • gigasecond
  • triangle
  • scrabble-score
  • roman-numerals
  • binary

What does everybody think?

Font size perhaps too big in <blockquote>

I think the quoted text should have a similar (if not the same) size to the rest of the text in the nitpicks. See the image below:

[screenshot: blockquote rendering]

Instead, it could be a little gray and/or in italics and/or with a different background color.
