
UNDAW Repository

Welcome to the repository of UNDAW (Unsupervised Adversarial Domain Adaptation Based on the Wasserstein Distance).

This is the repository for the method presented in the paper "Unsupervised Adversarial Domain Adaptation Based on the Wasserstein Distance," by K. Drossos, P. Magron, and T. Virtanen.

Our paper has been accepted for publication at the 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Mohonk Mountain House, New Paltz, NY.

You can find an online version of our paper at arXiv: https://arxiv.org/abs/1904.10678

If you use our method, please cite our paper.


Table of Contents

  1. Dependencies, pre-requisites, and setting up the code
  2. Reproduce the results of the paper
  3. Use the code with your own data
  4. Previous work
  5. Acknowledgement

Dependencies, pre-requisites, and setting up the code

To use our code, you first have to:

  • use Python 3.x and install all the required packages listed in the requirements file for pip or in the requirements file for Anaconda
  • download the data (the file AUDASC_features_labels.zip) from DOI
  • download the pre-trained non-adapted model (the file AUDASC_pretrained_weights.zip) from DOI and the adapted model from DOI (the adapted model is optional and is needed only if you want to reproduce the results of the paper)

Then:

  • unzip the file AUDASC_features_labels.zip. This will produce the following files, which will have to be placed inside the directory dataset/data:

    • test_features.p
    • test_scene_labels.p
    • training_features.p
    • training_scene_labels.p
    • validation_features.p
    • validation_scene_labels.p
  • unzip the file AUDASC_pretrained_weights.zip. This will produce the following files, which will have to be placed inside the directory pretrained_weights:

    • label_classifier.pytorch
    • LICENSE
    • non_adapted_cnn.pytorch
    • target_cnn.pytorch
  • unzip the file undaw.zip. This will produce the following file, which will have to be placed inside the directory outputs/img/models:

    • adapted_cnn.pt

That's it!
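
To verify that everything is in place, you can try loading one of the unzipped pickle files. The snippet below is only a minimal sanity check, assuming the files are plain pickle files located at the paths listed above; it is not part of the repository's code.

    import pickle
    from pathlib import Path

    # Quick check that the extracted features ended up in dataset/data.
    features_path = Path("dataset/data/training_features.p")
    with features_path.open("rb") as f:
        training_features = pickle.load(f)

    print(type(training_features))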

You can either refer to the reproduce the results of the paper section to reproduce the results presented in our paper, or to the use the code with your own data section if you want to use our code for your own task and/or with your own data.

Enjoy!


Reproduce the results of the paper

To reproduce the results of the paper, you have to:

If you find any problem doing the above, please let us know through the issues section of this repository.


Use the code with your own data

To use our code with your own data, you will have to:

  • provide a set of features to be used
  • modify the data_handlers._domain_dataset.DomainDataset class (an illustrative sketch of such a class is given after this list)
  • modify the modules that are used, which are in the modules package
  • modify the settings to be used (i.e. the settings file that you will use, which will be in the settings directory)
  • modify the settings reading for each of the modules, by modifying the functions in the helpers._models.py and helpers._modules_functions.py files
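
As a rough illustration of what such a dataset class looks like, the sketch below is a minimal PyTorch-style dataset that loads pickled features and labels and returns (features, label) pairs. It is only an assumption about the general shape of data_handlers._domain_dataset.DomainDataset, not the actual implementation, and the class name is hypothetical; adapt it to your own data format.

    import pickle
    from pathlib import Path

    import torch
    from torch.utils.data import Dataset

    class MyDomainDataset(Dataset):
        # Hypothetical stand-in for data_handlers._domain_dataset.DomainDataset.
        def __init__(self, features_file, labels_file):
            with Path(features_file).open("rb") as f:
                self.features = pickle.load(f)
            with Path(labels_file).open("rb") as f:
                self.labels = pickle.load(f)

        def __len__(self):
            return len(self.features)

        def __getitem__(self, index):
            # Return one (features, label) pair as tensors.
            x = torch.as_tensor(self.features[index], dtype=torch.float32)
            y = torch.as_tensor(self.labels[index], dtype=torch.long)
            return x, y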

To use the code with your new settings, you will have to place the new settings file in the settings directory and specify it on the command line when calling main.py. For example:

python scripts/main.py --config-file new_settings_file

Notice that the file name is given without its extension; only YAML files (i.e. files with the *.yaml extension) can be used.
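
In practice this means the name you pass is resolved to a YAML file inside the settings directory. The snippet below only illustrates that idea with PyYAML; the repository's actual settings reading lives in the helpers package (see the files mentioned above) and may differ.

    from pathlib import Path

    import yaml  # PyYAML

    def load_settings(config_name, settings_dir="settings"):
        # Resolve "<settings_dir>/<config_name>.yaml" and parse it into a dict.
        settings_path = Path(settings_dir) / (config_name + ".yaml")
        with settings_path.open("r") as f:
            return yaml.safe_load(f)

    settings = load_settings("new_settings_file")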

The processes (i.e. pre-training, adaptation, and evaluation) can be run with any module/neural network.

If you have any question, please ask it using the issues section of this repository.


Previous work

Our work is based on the following previous work:


Acknowledgement

  • Part of the computations leading to these results were performed on a TITAN-X GPU donated by NVIDIA to K. Drossos.
  • The authors wish to acknowledge CSC-IT Center for Science, Finland, for computational resources.
  • The research leading to these results has received funding from the European Research Council under the European Union’s H2020 Framework Programme through ERC Grant Agreement 637422 EVERYSOUND.
  • P. Magron is supported by the Academy of Finland, project no. 290190.
