
mldl's Projects

2021-cvpr-mvcln icon 2021-cvpr-mvcln

PyTorch implementation for Partially View-aligned Representation Learning with Noise-robust Contrastive Loss (CVPR 2021)

aaai2023-pvd icon aaai2023-pvd

Official implementation of PVD: "One is All: Bridging the Gap Between Neural Radiance Fields Architectures with Progressive Volume Distillation"

acne icon acne

Code release for CVPR 2020, "ACNe: Attentive Context Normalization for Robust Permutation-Equivariant Learning"

active-passive-losses icon active-passive-losses

Code for the ICML 2020 paper "Normalized Loss Functions for Deep Learning with Noisy Labels" (https://arxiv.org/abs/2006.13554)

actnn icon actnn

ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training
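ActNN's core idea is to store activations for the backward pass in compressed low-bit form and dequantize them on demand. As a rough illustration (not ActNN's actual scheme, which uses finer-grained per-group quantization with stochastic rounding), a minimal NumPy sketch of 2-bit uniform quantization with per-tensor min-max scaling:

```python
import numpy as np

def quantize_2bit(x):
    # Map activations onto the 2-bit grid {0, 1, 2, 3} using
    # per-tensor min-max scaling (a simplification of ActNN's scheme).
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 3.0 if hi > lo else 1.0
    q = np.clip(np.round((x - lo) / scale), 0, 3).astype(np.uint8)
    return q, lo, scale

def dequantize_2bit(q, lo, scale):
    # Reconstruct an approximation of the saved activation
    # when it is needed for the backward pass.
    return q.astype(np.float32) * scale + lo
```

With per-tensor scaling the reconstruction error is bounded by half a quantization step (`scale / 2`); the paper's per-group variant tightens this bound by giving each small group of values its own range.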

adab2n icon adab2n

Official Implementation of NeurIPS 2023 paper "Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation"

adabelief-optimizer icon adabelief-optimizer

Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients"
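AdaBelief's change relative to Adam is that the second-moment estimate tracks the squared deviation of the gradient from its own exponential moving average (the "belief") rather than the raw squared gradient. A minimal NumPy sketch of one update step, with bias correction; the default hyperparameters here follow common Adam conventions and are not necessarily the repository's:

```python
import numpy as np

def adabelief_step(theta, grad, m, s, t,
                   lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief update on parameter theta at step t (1-indexed)."""
    m = beta1 * m + (1 - beta1) * grad
    # Second moment of the *deviation* from the EMA, not of the raw gradient.
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps
    m_hat = m / (1 - beta1 ** t)   # bias correction
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    return theta, m, s
```

When the observed gradient stays close to its EMA, `s` is small and the effective step size grows, which is the "belief in observed gradients" the title refers to.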

adainf icon adainf

Official code for ICLR 2024 paper Do Generated Data Always Help Contrastive Learning?

adamp icon adamp

Slowing Down the Weight Norm Increase in Momentum-based Optimizers

adapool icon adapool

Exponential adaptive pooling for PyTorch

adaptively-customizing-activation-functions icon adaptively-customizing-activation-functions

Activation functions play a crucial role in modeling complex relationships and patterns in data: they give neural networks their nonlinearity and increase their ability to map inputs to response variables. This work proposes a methodology for adaptively customizing activation functions by adding only a few parameters to traditional activations such as Sigmoid, Tanh, and ReLU. To verify its effectiveness, theoretical and experimental analyses of convergence acceleration and performance improvement are presented, with experiments across network models (AlexNet, VGGNet, GoogLeNet, ResNet, and DenseNet) and datasets (CIFAR10, CIFAR100, miniImageNet, PASCAL VOC, and COCO). Further comparison experiments cover different optimization strategies (SGD, Momentum, AdaGrad, AdaDelta, and ADAM) and recognition tasks such as classification and detection. The results show that the methodology is simple yet yields significant gains in convergence speed, precision, and generalization, surpassing popular activations like ReLU and adaptive functions like Swish in nearly all experiments in terms of overall performance.
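As a rough illustration of the "few extra parameters" idea described above, one hypothetical parameterization wraps ReLU with two trainable scalars; this is an illustrative sketch, not the paper's actual parameterization:

```python
import numpy as np

def adaptive_relu(x, alpha=1.0, beta=1.0):
    # Hypothetical adaptive activation: scale the input by alpha and the
    # output by beta, both of which would be learned during training.
    # With alpha = beta = 1 this reduces to the plain ReLU baseline.
    return beta * np.maximum(alpha * x, 0.0)
```

Because the defaults recover the original activation exactly, such a scheme can be dropped into a pretrained network without changing its initial behavior, and the extra parameters then adapt during fine-tuning.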

addernet icon addernet

Code for the paper "AdderNet: Do We Really Need Multiplications in Deep Learning?"

ade-czsl icon ade-czsl

[CVPR 2023] Learning Attention as Disentangler for Compositional Zero-shot Learning

admrl icon admrl

Code for paper "Model-based Adversarial Meta-Reinforcement Learning" (https://arxiv.org/abs/2006.08875)

adv-ss-pretraining icon adv-ss-pretraining

[CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

adversarial icon adversarial

Code and hyperparameters for the paper "Generative Adversarial Networks"

adversarial-contrastive-learning icon adversarial-contrastive-learning

[NeurIPS 2020] “Adversarial Contrastive Learning: Harvesting More Robustness from Unsupervised Pre-Training”, Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang

agd icon agd

[ICML2020] "AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks" by Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, Zhangyang Wang

agvm icon agvm

Large-batch Optimization for Dense Visual Predictions (NeurIPS 2022)
