
prowdiy / segland


This project forked from lizhuohong/segland


Generalized Few-Shot Meets Remote Sensing: Discovering Novel Classes in Land Cover Mapping via Hybrid Semantic Segmentation Framework

License: MIT License

Shell 1.23% Python 98.77%


SegLand: Discovering Novel Classes in Land Cover Mapping via Hybrid Semantic Segmentation Framework

Land-cover mapping is one of the vital applications in Earth observation. As natural and human activities change the landscape, land-cover maps need to be updated rapidly. However, discovering newly appeared land-cover types beyond existing classification systems is still a non-trivial task, hindered by the varying scales of complex land objects and by insufficient labeled data over wide geographic areas. To address these limitations, we propose a generalized few-shot segmentation-based framework, named SegLand, to update novel classes in high-resolution land-cover mapping.

SegLand was accepted by the CVPR 2024 L3D-IVU workshop and scored 🚀1st place in the OpenEarthMap Land Cover Mapping Few-Shot Challenge🚀. See you at CVPR (Seattle, 17 June)!

Contact me at [email protected]

Our previous works:

  • Paraformer (L2HNet V2): accepted by CVPR 2024 (Highlight), a hybrid CNN-ViT framework for HR land-cover mapping using LR labels. Code
  • L2HNet V1: accepted by ISPRS P&RS in 2022, the low-to-high network for HR land-cover mapping using LR labels.
  • SinoLC-1: accepted by ESSD in 2023, the first 1-m-resolution national-scale land-cover map of China. Data
  • BuildingMap: accepted by IGARSS 2024 (Oral), identifying the function of every building in urban areas. Data

Training Instructions

To train and test SegLand on the contest dataset, follow the steps below (illustrative command sketches for each step are given after the list):
  1. Dataset and project preprocessing
  • Replace 'YOUR_PROJECT_ROOT' in ./scripts/train_oem.sh with your POP project directory;
  • Download the OEM train set, unzip it, and replace 'YOUR_PATH_FOR_OEM_TRAIN_DATA' in ./scripts/train_oem.sh;
  • Download the OEM test set, unzip it, and replace 'YOUR_PATH_FOR_OEM_TEST_DATA' in ./scripts/evaluate_oem_base.sh and ./scripts/evaluate_oem.sh. (train.txt, val.txt, all_5shot_seed123.txt (the support-set list), and test.txt are already set according to the released data list and need no modification.)
  2. Base-class training and evaluation
  • Train the base model by running CUDA_VISIBLE_DEVICES=0 bash ./scripts/train_oem.sh; the model and the log file will be stored in ./model_saved_base;
  • Evaluate the trained base model by running CUDA_VISIBLE_DEVICES=0 bash ./scripts/evaluate_oem_base.sh, replacing 'RESTORE_PATH' with your own saved checkpoint path; the output prediction maps and the log file will be stored in ./output;
  3. Novel-class updating and evaluation
  • Run python gen_new_samples_for_new_class.py to generate new samples with the CutMix operation; the generated samples and their list are stored in 'YOUR_PATH_OF_CUTMIX_SAMPLES'. Copy the samples to 'YOUR_PATH_FOR_OEM_TRAIN_DATA' and append the list to all_5shot_seed123.txt;
  • Update the trained base model by running CUDA_VISIBLE_DEVICES=0 bash ./scripts/ft_oem.sh, replacing 'RESTORE_PATH' with your own saved checkpoint path; the model and the log file will be stored in ./model_saved_ft;
  • Evaluate the updated model by running CUDA_VISIBLE_DEVICES=0 bash ./scripts/evaluate_oem.sh, replacing 'RESTORE_PATH' with your own saved checkpoint path; the output prediction maps and the log file will be stored in ./output;
  4. Output transformation and probability-map fusion
  • Run python trans.py to transform the output maps into the format required by the competition; the output will be stored in ./upload;
  • (Optional) If multiple probability outputs (in *.mat format) are generated, fuse them by running python fusemat.py, replacing all the 'PATH_OF_PROBABILITY_MAP_*' placeholders with your own paths (generated under ./output/prob).
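
A minimal sketch of step 1, assuming the placeholders are substituted with GNU sed; the concrete paths are hypothetical and must point to your own project root and unzipped OEM data:

```bash
# Step 1 (sketch): substitute the placeholder paths in the run scripts.
# All three paths below are examples, not part of the repository.
PROJECT_ROOT=/home/user/SegLand      # your POP project directory
OEM_TRAIN=/data/oem/trainset         # unzipped OEM train set
OEM_TEST=/data/oem/testset           # unzipped OEM test set

sed -i "s|YOUR_PROJECT_ROOT|${PROJECT_ROOT}|g" ./scripts/train_oem.sh
sed -i "s|YOUR_PATH_FOR_OEM_TRAIN_DATA|${OEM_TRAIN}|g" ./scripts/train_oem.sh
sed -i "s|YOUR_PATH_FOR_OEM_TEST_DATA|${OEM_TEST}|g" ./scripts/evaluate_oem_base.sh ./scripts/evaluate_oem.sh
```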
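
Step 2 might then look as follows on a single GPU; editing 'RESTORE_PATH' with sed is only one convenient option, and the checkpoint filename is a placeholder:

```bash
# Step 2 (sketch): train the base model, then evaluate it on the base classes.
CUDA_VISIBLE_DEVICES=0 bash ./scripts/train_oem.sh    # checkpoints and log -> ./model_saved_base

# Point the evaluation script at your own checkpoint (filename is hypothetical).
sed -i "s|RESTORE_PATH|./model_saved_base/your_checkpoint.pth|g" ./scripts/evaluate_oem_base.sh
CUDA_VISIBLE_DEVICES=0 bash ./scripts/evaluate_oem_base.sh    # predictions and log -> ./output
```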
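
Step 3 in the same style; the name of the generated sample list and the location of all_5shot_seed123.txt are assumptions and should be checked against what gen_new_samples_for_new_class.py actually writes:

```bash
# Step 3 (sketch): generate CutMix samples, merge them into the training data,
# then fine-tune the base model on the novel classes and evaluate it.
OEM_TRAIN=/data/oem/trainset            # same example path as in the step-1 sketch
CUTMIX_DIR=/data/oem/cutmix_samples     # stands in for 'YOUR_PATH_OF_CUTMIX_SAMPLES'

python gen_new_samples_for_new_class.py       # writes samples and a sample list into ${CUTMIX_DIR}
cp -r "${CUTMIX_DIR}"/* "${OEM_TRAIN}"/       # copy the generated samples into the train set
cat "${CUTMIX_DIR}"/new_samples_list.txt >> "${OEM_TRAIN}"/all_5shot_seed123.txt   # list name/location assumed

sed -i "s|RESTORE_PATH|./model_saved_base/your_checkpoint.pth|g" ./scripts/ft_oem.sh
CUDA_VISIBLE_DEVICES=0 bash ./scripts/ft_oem.sh          # fine-tuned model and log -> ./model_saved_ft

sed -i "s|RESTORE_PATH|./model_saved_ft/your_checkpoint.pth|g" ./scripts/evaluate_oem.sh
CUDA_VISIBLE_DEVICES=0 bash ./scripts/evaluate_oem.sh    # predictions and log -> ./output
```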
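
And step 4, with the fusion paths shown as sed substitutions; it is assumed here that the 'PATH_OF_PROBABILITY_MAP_*' placeholders live in fusemat.py, and the *.mat filenames are examples:

```bash
# Step 4 (sketch): convert the predictions to the submission format, then
# (optionally) fuse several probability maps produced under ./output/prob.
python trans.py                     # submission-ready maps -> ./upload

# Optional fusion of multiple *.mat probability outputs (paths are examples).
sed -i "s|PATH_OF_PROBABILITY_MAP_1|./output/prob/run1.mat|g" fusemat.py
sed -i "s|PATH_OF_PROBABILITY_MAP_2|./output/prob/run2.mat|g" fusemat.py
python fusemat.py
```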
