ufal / npfl114

Materials for the Deep Learning -- ÚFAL course NPFL114

License: Creative Commons Attribution Share Alike 4.0 International


npfl114's Introduction

Deep Learning – ÚFAL Course NPFL114

This repository contains materials for the Deep Learning course (ÚFAL course NPFL114).

All created content is available under the CC BY-SA 4.0 license, while all pre-existing materials (mostly images and excerpts from papers) are properly referenced and remain subject to their original licensing.

npfl114's People

Contributors

dahnj, dan-bart, daraghmeehan, darthdeus, foxik, hamalcij, henczati, jjdelvalle, kasnerz, kategerasimenko, kszabova, mabi12, mafi412, martinpopel, matospiso, mikulaszelinka, mpicek, okarlicek, patrikvalkovic, peskaf, petrroll, rattko, sejsel, shulda, simonmandlik, strakam, tommybark, vvolhejn, yokto13, zouharvi


npfl114's Issues

mnist package

In pca_first.py there is a line

from mnist import MNIST

but this mnist package is nowhere to be found.
Even after installing tensorflow and gym, the import still fails with errors.
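A quick way to check whether any `mnist` module is importable in the current environment (nothing here is specific to the course template):

```python
import importlib.util

# `find_spec` returns None when no importable module of that name exists
# anywhere on sys.path -- a quick diagnosis before blaming tf/gym.
spec = importlib.util.find_spec("mnist")
print("mnist found:" if spec else "mnist missing:", spec)
```

Most likely the import refers to a `mnist.py` module distributed alongside the assignment templates rather than to a PyPI package, so it would need to sit in the same directory as pca_first.py.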

lemmatizer BOW/EOW confusion

This comment (and a few others in the other files as well) mentions [EOW], but it is actually [BOW] in the results. This is because BOW and EOW are both defined as 1 in the MorphoDataset. I don't know whether this is intentional, but I am guessing it is not, because characters are added for both here.
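A minimal sketch of why identical ids make the two markers indistinguishable (the mapping below is illustrative, based only on the issue's claim that both are defined as 1):

```python
# Both markers map to the same integer id (as reported for MorphoDataset),
# so any id-to-string mapping can only ever yield one of the two names.
BOW, EOW = 1, 1
id_to_string = {0: "[PAD]", BOW: "[BOW]"}  # a later "[EOW]" entry would overwrite this
print(id_to_string[EOW])  # prints "[BOW]" even where [EOW] was intended
```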

Uppercase.py

args.logdir is not defined in the base template, making people lose data by not saving it.
The problem is masked by
os.environ.setdefault("TF_CPP_MIN_LOG_LEVEL", "2")  # Report only TF errors by default
which I think should NOT be used.
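For context, a sketch of the kind of logdir definition used by the other course templates (the arguments, script name, and exact format string below are illustrative, not a verbatim copy of the template):

```python
import argparse
import datetime
import os
import re

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", default=50, type=int)
parser.add_argument("--epochs", default=5, type=int)
args = parser.parse_args([])

# Build a unique logdir from the script name, a timestamp, and the
# abbreviated hyperparameters, so every run's data is saved separately.
args.logdir = os.path.join("logs", "{}-{}-{}".format(
    "uppercase",
    datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S"),
    ",".join("{}={}".format(re.sub("(.)[^_]*_?", r"\1", k), v)
             for k, v in sorted(vars(args).items())),
))
print(args.logdir)
```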

site bboxes_utils.py issue

The site's hyperlink for bboxes_utils.py does not work: it returns a 404 error, because lab06 does not contain the script.

Wildly different example outputs in `sequence_classification`

Hi,

the outputs in the 'Examples' section of the sequence_classification task are wildly different from what I am getting locally. The outputs in the 'Tests' section are identical to mine, and my solution also passed the tests in ReCodEx. Perhaps you used a different seed? I doubt there is such a huge difference between macOS and Linux.

Btw. it seems you swapped the order of 'Examples' and 'Tests' on the webpage :D

Example of the output I'm getting

$ python3 sequence_classification.py --rnn=LSTM --epochs=5 --hidden_layer=50

Epoch 1/5 loss: 0.6828 - accuracy: 0.5167 - val_loss: 0.6590 - val_accuracy: 0.5178
Epoch 2/5 loss: 0.6441 - accuracy: 0.5408 - val_loss: 0.6303 - val_accuracy: 0.5264
Epoch 3/5 loss: 0.6227 - accuracy: 0.5565 - val_loss: 0.6145 - val_accuracy: 0.5573
Epoch 4/5 loss: 0.6108 - accuracy: 0.5579 - val_loss: 0.6020 - val_accuracy: 0.5617
Epoch 5/5 loss: 0.5932 - accuracy: 0.5699 - val_loss: 0.5876 - val_accuracy: 0.5717
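The seed hypothesis is easy to illustrate; a sketch with numpy standing in for the actual TF initialization:

```python
import numpy as np

# Two runs with the same seed produce identical randomness, while
# different seeds diverge -- enough to explain differing example outputs
# even on the same OS.
def sample(seed):
    rng = np.random.RandomState(seed)
    return rng.standard_normal(3)

same = np.allclose(sample(42), sample(42))
diff = not np.allclose(sample(42), sample(43))
print(same, diff)
```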

Cheers.

Non-binary masks for CAGS dataset

The expected output of the model in cags_segmentation.py is a probabilistic mask of shape (batch_size, 224, 224, 1) with tf.float32 values in the [0, 1] range (i.e., probabilities).
Therefore, the ground-truth mask should have the corresponding shape (batch_size, 224, 224, 1) with binary 0/1 elements indicating mask presence, compared using the 0.5 threshold.
The comments were updated to warn that the mask's dtype is tf.uint8; however, I would still expect the masks to have binary 0/255 values. Nevertheless, when I plotted one of the masks, the tensor was full of various unique values and the mask was not binary. This might be hard to discover at first, since the masks at https://ufal.mff.cuni.cz/~straka/courses/npfl114/2223/demos/cags_train.html appear to be binary. I propose binarizing the masks inside the dataset itself, not only inside MaskIoUMetric; otherwise, for model training to work, users have to binarize the masks themselves, which in turn breaks the MaskIoUMetric (at least it did for me; I have not yet discovered why that was the case — see the row at the bottom of the screenshots).
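The proposed binarization is straightforward; a sketch with numpy standing in for the tf ops (the 0.5 threshold matches the comparison described above):

```python
import numpy as np

# uint8 masks with arbitrary values in 0..255 (as observed in the issue),
# thresholded at 0.5 after scaling to the 0..1 range.
mask = np.array([0, 3, 117, 200, 255], dtype=np.uint8)
binary = (mask.astype(np.float32) / 255.0 >= 0.5).astype(np.float32)
print(binary)  # [0. 0. 0. 1. 1.]
```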

[screenshots: example mask tensors with non-binary values]

pca_first.py throws IndentationError

Hello,

in the file https://github.com/ufal/npfl114/blob/master/labs/01/pca_first.py, this section:

    # TODO: Now run `args.iterations` of the power iteration algorithm.
    # Start with a vector of `cov.shape[0]` ones of type tf.float32 using `tf.ones`.
    v = None
    for i in range(args.iterations):
        # TODO: In the power iteration algorithm, we compute
        # 1. v = cov * v
        #    The matrix-vector multiplication can be computed using `tf.linalg.matvec`.
        # 2. s = l2_norm(v)
        #    The l2_norm can be computed using `tf.linalg.norm`.
        # 3. v = v / s

    # The `v` is now the eigenvector of the largest eigenvalue, `s`. We now
    # compute the explained variance, which is a ration of `s` and `total_variance`.
    explained_variance = s / total_variance

it would be nice to add something to the body of the for loop, for example:

    # TODO: Now run `args.iterations` of the power iteration algorithm.
    # Start with a vector of `cov.shape[0]` ones of type tf.float32 using `tf.ones`.
    v = None
    for i in range(args.iterations):
        # TODO: In the power iteration algorithm, we compute
        # 1. v = cov * v
        #    The matrix-vector multiplication can be computed using `tf.linalg.matvec`.
        # 2. s = l2_norm(v)
        #    The l2_norm can be computed using `tf.linalg.norm`.
        # 3. v = v / s
        ...

    # The `v` is now the eigenvector of the largest eigenvalue, `s`. We now
    # compute the explained variance, which is a ration of `s` and `total_variance`.
    explained_variance = s / total_variance

This way, the template is valid code that can be executed without throwing an error.
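For completeness, a sketch of what the filled-in loop computes, using numpy in place of the tf calls (`tf.linalg.matvec` → `@`, `tf.linalg.norm` → `np.linalg.norm`); the covariance matrix here is a made-up example, not data from the assignment:

```python
import numpy as np

# Toy covariance matrix with eigenvalues 2 and 1.
cov = np.array([[2.0, 0.0], [0.0, 1.0]], dtype=np.float32)
total_variance = np.trace(cov)

# Power iteration: start from a vector of ones, as the template instructs.
v = np.ones(cov.shape[0], dtype=np.float32)
for _ in range(100):
    v = cov @ v                # 1. v = cov * v
    s = np.linalg.norm(v)      # 2. s = l2_norm(v)
    v = v / s                  # 3. v = v / s

# `s` converges to the largest eigenvalue (2), so the explained
# variance is 2 / 3 for this example.
explained_variance = s / total_variance
print(explained_variance)
```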

Thanks :)
