GaitFormer

Official repository for "Learning Gait Representations with Noisy Multi-Task Learning"

Adrian Cosma, Emilian Radoi

Abstract

Gait analysis has proven to be a reliable way to perform person identification without relying on subject cooperation. Walking is a biometric that does not change significantly over short periods of time and can be regarded as unique to each person. So far, the study of gait analysis has focused mostly on identification and demographics estimation, without considering many of the pedestrian attributes that appearance-based methods rely on. In this work, alongside gait-based person identification, we explore pedestrian attribute identification solely from movement patterns. We propose DenseGait, the largest dataset for pretraining gait analysis systems, containing 217K anonymized tracklets annotated automatically with 42 appearance attributes. DenseGait is constructed by automatically processing video streams and offers the full array of gait covariates present in the real world. We make the dataset available to the research community. Additionally, we propose GaitFormer, a transformer-based model that, after pretraining in a multi-task fashion on DenseGait, achieves 92.5% accuracy on CASIA-B and 85.33% on FVG without utilizing any manually annotated data. This corresponds to a +14.2% and +9.67% accuracy increase compared to similar methods. Moreover, GaitFormer is able to accurately identify gender information and a multitude of appearance attributes utilizing only movement patterns.

Getting Started

In this work, we propose DenseGait, an automatically gathered dataset of 217K pose sequences annotated with 42 appearance attributes, and GaitFormer, a transformer model for gait recognition that operates on sequences of skeletons.

The 42 appearance attributes and the automatic annotation process are illustrated in the figures of our paper; refer to the paper for details.

DenseGait can be downloaded at https://bit.ly/3SLO8RW. The dataset is under open, credentialized access. To request access, email Adrian Cosma at cosma.i.adrian@gmail.com.

The implementation for GaitFormer can be found in models/gaitformer.py.
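
For intuition, the snippet below is a minimal sketch of how a transformer encoder can operate on skeleton sequences, which is the general idea behind GaitFormer. The class name, joint count, layer sizes, and the attribute head shown here are illustrative assumptions, not the repository's actual code; see models/gaitformer.py for the real implementation.

```python
# Minimal sketch of a transformer over skeleton sequences (illustrative only;
# all dimensions and names are assumptions, not the repository's implementation).
import torch
import torch.nn as nn

class SkeletonTransformer(nn.Module):
    def __init__(self, num_joints=18, coord_dim=2, d_model=128, n_heads=4,
                 n_layers=4, max_len=64, num_attributes=42):
        super().__init__()
        # Each frame is a flattened skeleton of num_joints * coord_dim values.
        self.frame_embed = nn.Linear(num_joints * coord_dim, d_model)
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Hypothetical multi-task head predicting the 42 appearance attributes.
        self.attribute_head = nn.Linear(d_model, num_attributes)

    def forward(self, skeletons):
        # skeletons: (batch, time, num_joints * coord_dim)
        x = self.frame_embed(skeletons) + self.pos_embed[:, :skeletons.size(1)]
        x = self.encoder(x)
        embedding = x.mean(dim=1)  # temporal average pooling -> gait embedding
        attributes = self.attribute_head(embedding)
        return embedding, attributes

# Usage: a batch of 8 sequences, 64 frames each, 18 joints with (x, y) coordinates.
model = SkeletonTransformer()
poses = torch.randn(8, 64, 18 * 2)
embedding, attributes = model(poses)
print(embedding.shape, attributes.shape)  # torch.Size([8, 128]) torch.Size([8, 42])
```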

This repo uses acumen-template to organise the project and wandb.ai for experiment tracking. The ST-GCN implementation is adapted from https://github.com/yysijie/st-gcn.
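
As a quick illustration of the experiment-tracking setup, the sketch below shows basic wandb usage; the project name, config values, and logged metrics are placeholders rather than this repository's actual configuration.

```python
# Hedged sketch of wandb experiment tracking; project name, config, and metrics
# are placeholders, not the repository's actual setup.
import wandb

# mode="offline" lets the sketch run without a wandb account.
run = wandb.init(project="gaitformer", mode="offline",
                 config={"lr": 1e-4, "batch_size": 64})
for epoch in range(10):
    loss = 1.0 / (epoch + 1)  # placeholder; a real training loss would go here
    wandb.log({"epoch": epoch, "train/loss": loss})
wandb.finish()
```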

Citation

If you find our work useful, please cite:

Learning Gait Representations with Noisy Multi-Task Learning

@Article{cosma22gaitformer,
  AUTHOR = {Cosma, Adrian and Radoi, Emilian},
  TITLE = {Learning Gait Representations with Noisy Multi-Task Learning},
  JOURNAL = {Sensors},
  VOLUME = {22},
  YEAR = {2022},
  NUMBER = {18},
  ARTICLE-NUMBER = {6803},
  URL = {https://www.mdpi.com/1424-8220/22/18/6803},
  ISSN = {1424-8220},
  DOI = {10.3390/s22186803}
}

This work builds on our previous paper, WildGait: Learning Gait Representations from Raw Surveillance Streams. Please consider citing it as well:

@Article{cosma20wildgait,
  AUTHOR = {Cosma, Adrian and Radoi, Ion Emilian},
  TITLE = {WildGait: Learning Gait Representations from Raw Surveillance Streams},
  JOURNAL = {Sensors},
  VOLUME = {21},
  YEAR = {2021},
  NUMBER = {24},
  ARTICLE-NUMBER = {8387},
  URL = {https://www.mdpi.com/1424-8220/21/24/8387},
  PubMedID = {34960479},
  ISSN = {1424-8220},
  DOI = {10.3390/s21248387}
}

License

This work is released under the CC BY-NC-ND 4.0 License (Attribution-NonCommercial-NoDerivatives).

Issues

Pipeline for reproducibility

Hi, I am trying to reproduce the baseline described in the Psymo dataset paper in order to see its results and how it works. I would be very thankful if you could provide some instructions on how to use the scripts for training on a reduced set (focused on 40 persons) of the Psymo dataset. Thanks a lot in advance.
