
open-mmlab / mmcv

5.7K stars · 84.0 · 1.6K forks · 14.04 MB

OpenMMLab Computer Vision Foundation

Home Page: https://mmcv.readthedocs.io/en/latest/

License: Apache License 2.0

Python 43.04% C++ 36.00% Dockerfile 0.08% Cuda 20.65% Objective-C++ 0.22% C 0.01%

mmcv's People

Contributors

cokedong, daavoo, dcnsw, defei-coder, dreamerlin, grimoire, haochenye, hejm37, hellock, innerlee, johnson-wang, jshilong, luopeichao, lxxxxr, ly015, meowzheng, mzr1996, nbei, oceanpang, runningleon, teamwong111, triple-mu, v-qjqs, xvjiarui, ycxioooong, yhcao6, yl-1993, zhouzaida, zwwwayne, zytx121


mmcv's Issues

about cascade rcnn

I'm looking into Cascade R-CNN and noticed that in bbox_head all three stages set reg_class_agnostic to True, whereas in the plain R-CNN head this option is False, so the head regresses a box for every class (81 in COCO). I wonder why the third stage of the cascade bbox_head is not set to False so that I could get the same kind of result as with the plain R-CNN head, or whether the cascade handles this somewhere else. Thanks.
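
For context, a hedged sketch of the config fragment in question (abridged and illustrative, not the exact mmdetection config): bbox_head is a list of three stage heads, and each one sets reg_class_agnostic=True.

bbox_head = [
    dict(type='SharedFCBBoxHead', num_classes=81, reg_class_agnostic=True),  # stage 1
    dict(type='SharedFCBBoxHead', num_classes=81, reg_class_agnostic=True),  # stage 2
    dict(type='SharedFCBBoxHead', num_classes=81, reg_class_agnostic=True),  # stage 3
]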

mmcv numpy error

ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject

import error after install mmcv 0.2.8

Installing mmcv succeeds:
git clone https://github.com/open-mmlab/mmcv.git
cd mmcv
pip install .

but importing it fails:

import mmcv
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xgf/mmcv/mmcv/__init__.py", line 7, in <module>
    from .video import *
  File "/home/xgf/mmcv/mmcv/video/__init__.py", line 3, in <module>
    from .optflow import (flowread, flowwrite, quantize_flow,
  File "/home/xgf/mmcv/mmcv/video/optflow.py", line 6, in <module>
    from mmcv._ext import flow_warp_c
ModuleNotFoundError: No module named 'mmcv._ext'

Can anyone help me? Thanks.
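
One diagnostic sketch that may narrow this down (an assumption about the likely cause, not a confirmed fix): check where Python is importing mmcv from and whether a compiled _ext extension actually sits next to it. If the import resolves to the source checkout rather than the installed package, the extension may simply not have been built there.

import glob
import importlib.util
import os

# Locate the mmcv package that would be imported and list any compiled
# extension files next to it (diagnostic only).
spec = importlib.util.find_spec('mmcv')
print('mmcv package:', spec.origin)
pkg_dir = os.path.dirname(spec.origin)
print('compiled _ext files:', glob.glob(os.path.join(pkg_dir, '_ext*')))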

Install from source hangs.

I tried to install mmcv from source via python setup.py install and pip install . -vvv, but both got stuck.
Here is the output of pip install . -vvv:

Created temporary directory: /tmp/pip-ephem-wheel-cache-_h5nxl5e
Created temporary directory: /tmp/pip-req-tracker-oen4tfry
Created requirements tracker '/tmp/pip-req-tracker-oen4tfry'
Created temporary directory: /tmp/pip-install-3wqlrtts
Processing /data/wanggu/software/mmcv
  Created temporary directory: /tmp/pip-req-build-o9kocm8j
  Added file:///data/wanggu/software/mmcv to build tracker '/tmp/pip-req-tracker-oen4tfry'
  Running setup.py (path:/tmp/pip-req-build-o9kocm8j/setup.py) egg_info for package from file:///data/wanggu/software/mmcv
    Running command python setup.py egg_info

runner/utils.py bugfix

Could you please make calling dist.is_initialized() conditional on dist.is_available()? Not all platforms (e.g. Windows) support it.

if torch.__version__ < '1.0':
    initialized = dist._initialized
else:
    if dist.is_available():
        initialized = dist.is_initialized()
    else:
        initialized = 0

Thanks.

About video sub-module

Two options:

  1. Move the video submodule out and merge it into the mmvideo repo, or

  2. Keep some basic video-related functionality in mmcv and let mmvideo focus on video analytics frameworks such as TSN.

@hellock Please talk to Yuanjun and Yue to figure this out.

Config can not be used in python multiprocessing spawn context

The following code reproduces the error.

from multiprocessing import Process
from mmcv import Config
import multiprocessing as mp


def f(cfg):
    print(cfg)


if __name__ == '__main__':
    mp.set_start_method('spawn')
    cfgs = Config(dict(a=1, b=dict(b1=[0, 1])))
    process_list = []
    for _ in range(3):
        p = Process(target=f, args=(cfgs, ))
        p.start()
        process_list.append(p)
    for p in process_list:
        p.join()

Running this code causes the following error:

Traceback (most recent call last):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 1, in <module>
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/albert/Softwares/anaconda3/envs/py36/lib/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
  File "/home/albert/Softwares/anaconda3/envs/py36/lib/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
  File "/home/albert/Softwares/anaconda3/envs/py36/lib/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
    exitcode = _main(fd)
  File "/home/albert/Softwares/anaconda3/envs/py36/lib/python3.6/multiprocessing/spawn.py", line 115, in _main
  File "/home/albert/Softwares/anaconda3/envs/py36/lib/python3.6/multiprocessing/spawn.py", line 115, in _main
    exitcode = _main(fd)
  File "/home/albert/Softwares/anaconda3/envs/py36/lib/python3.6/multiprocessing/spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
    self = reduction.pickle.load(from_parent)
  File "/home/albert/Softwares/mmcv-0.2.14/mmcv/utils/config.py", line 143, in __getattr__
  File "/home/albert/Softwares/mmcv-0.2.14/mmcv/utils/config.py", line 143, in __getattr__
    self = reduction.pickle.load(from_parent)
  File "/home/albert/Softwares/mmcv-0.2.14/mmcv/utils/config.py", line 143, in __getattr__
    return getattr(self._cfg_dict, name)
    return getattr(self._cfg_dict, name)
  File "/home/albert/Softwares/mmcv-0.2.14/mmcv/utils/config.py", line 143, in __getattr__
  File "/home/albert/Softwares/mmcv-0.2.14/mmcv/utils/config.py", line 143, in __getattr__
    return getattr(self._cfg_dict, name)
  File "/home/albert/Softwares/mmcv-0.2.14/mmcv/utils/config.py", line 143, in __getattr__
    return getattr(self._cfg_dict, name)
    return getattr(self._cfg_dict, name)
  File "/home/albert/Softwares/mmcv-0.2.14/mmcv/utils/config.py", line 143, in __getattr__
  File "/home/albert/Softwares/mmcv-0.2.14/mmcv/utils/config.py", line 143, in __getattr__
    return getattr(self._cfg_dict, name)
  File "/home/albert/Softwares/mmcv-0.2.14/mmcv/utils/config.py", line 143, in __getattr__
    return getattr(self._cfg_dict, name)
    return getattr(self._cfg_dict, name)
  [Previous line repeated 329 more times]
  [Previous line repeated 329 more times]
RecursionError: maximum recursion depth exceeded while calling a Python object
RecursionError: maximum recursion depth exceeded while calling a Python object
    return getattr(self._cfg_dict, name)
  [Previous line repeated 329 more times]
RecursionError: maximum recursion depth exceeded while calling a Python object
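
A hedged sketch of what seems to be happening, with an illustrative guard (DemoConfig is a stand-in, not the actual mmcv.Config code): when the spawned process unpickles the Config, __getattr__ runs before _cfg_dict has been restored, and looking up self._cfg_dict re-enters __getattr__ indefinitely.

class DemoConfig:
    """Illustrative stand-in for mmcv.Config (assumed structure, not the real class)."""

    def __init__(self, cfg_dict):
        super().__setattr__('_cfg_dict', cfg_dict)

    def __getattr__(self, name):
        # Guard: during unpickling, _cfg_dict may not exist yet; without this
        # check, looking up self._cfg_dict re-enters __getattr__ and recurses.
        if name == '_cfg_dict':
            raise AttributeError(name)
        return getattr(self._cfg_dict, name)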

Non-portable latest checkpoint symlink

Currently, the latest checkpoint created by runner.py is a symlink to the full path of the target checkpoint, so renaming or moving the original folder results in an invalid symlink.

Should we change

filename = osp.join(out_dir, filename_tmpl.format(self.epoch + 1))
linkname = osp.join(out_dir, 'latest.pth')
...
mmcv.symlink(filename, linkname)

to

filename = osp.join(filename_tmpl.format(self.epoch + 1))
linkname = osp.join(out_dir, 'latest.pth')
...
mmcv.symlink(filename, linkname)

to make the link relative and therefore robust to moving or renaming?

If that is OK, I can create a pull request.

how to use syncbn in mmcv

When I use SyncBN in PyTorch, the following error occurs:
raise AttributeError('SyncBatchNorm is only supported within torch.nn.parallel.DistributedDataParallel')

I have downloaded the latest mmcv; does the latest mmcv not yet support SyncBN in PyTorch?
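
For reference, a minimal sketch using the PyTorch built-in SyncBatchNorm (available in PyTorch >= 1.1); it assumes a distributed process group is already initialized since, as the error says, SyncBN only runs under DistributedDataParallel. The helper name and its arguments are placeholders, not an mmcv API.

import torch
from torch.nn.parallel import DistributedDataParallel

def wrap_with_syncbn(model, local_rank):
    # Swap every BatchNorm layer for SyncBatchNorm, then wrap in DDP; SyncBN only
    # runs inside DistributedDataParallel with an initialized process group.
    model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
    return DistributedDataParallel(model.cuda(local_rank), device_ids=[local_rank])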

object of type 'DataContainer' has no len()

I am running test.py with the config grid_rcnn_gn_head_r50_fpn_2x.py, and it raised TypeError: object of type 'DataContainer' has no len() while executing the line for img_id in range(len(img_metas)): in /mmdetection/mmdet/models/anchor_heads/anchor_head.py. I have printed the type of img_metas: <class 'mmcv.parallel.data_container.DataContainer'>; it is not a list.

Could not find a version that satisfies the requirement opencv-python (from mmcv==0.2.13) (from versions:

bozhao@bozhao-desktop:~/workspace/mmcv$ pip3 install -e .
Obtaining file:///home/bozhao/workspace/mmcv
Collecting numpy>=1.11.1 (from mmcv==0.2.13)
Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))': /simple/numpy/
Using cached https://files.pythonhosted.org/packages/cb/79/96df883cd6df0c86cb010e6f4ff790b7a30a45016a9509c94ea72c8695cd/numpy-1.17.1.zip
Collecting pyyaml (from mmcv==0.2.13)
Using cached https://files.pythonhosted.org/packages/e3/e8/b3212641ee2718d556df0f23f78de8303f068fe29cdaa7a91018849582fe/PyYAML-5.1.2.tar.gz
Collecting six (from mmcv==0.2.13)
Using cached https://files.pythonhosted.org/packages/73/fb/00a976f728d0d1fecfe898238ce23f502a721c0ac0ecfedb80e0d88c64e9/six-1.12.0-py2.py3-none-any.whl
Collecting addict (from mmcv==0.2.13)
Using cached https://files.pythonhosted.org/packages/14/6f/beb258220417c1a0fe11e842f2e012a1be7eeeaa72a1d10ba17a804da367/addict-2.2.1-py3-none-any.whl
Collecting requests (from mmcv==0.2.13)
Using cached https://files.pythonhosted.org/packages/51/bd/23c926cd341ea6b7dd0b2a00aba99ae0f828be89d72b2190f27c11d4b7fb/requests-2.22.0-py2.py3-none-any.whl
Collecting opencv-python (from mmcv==0.2.13)
Could not find a version that satisfies the requirement opencv-python (from mmcv==0.2.13) (from versions: )
No matching distribution found for opencv-python (from mmcv==0.2.13)

What is the problem?
I have installed opencv-python, and there is no problem when using import cv2.

The function of latest.pth generated by mmcv

What is the purpose of the latest.pth file that mmcv generates during mmdetection training? It only causes trouble when training in Colab: I cannot set work_dir to a Google Drive path directly because that triggers an os.symlink error, and I can only work around it by changing the source code.

pytorch 1.3 error

    model = init_detector(args.config, args.checkpoint, device="cpu")
  File "/usr/local/lib/python3.5/dist-packages/mmdet-1.0rc0+aab283c-py3.5-linux-x86_64.egg/mmdet/apis/inference.py", line 36, in init_detector
    checkpoint = load_checkpoint(model, checkpoint)
  File "/usr/local/lib/python3.5/dist-packages/mmcv/runner/checkpoint.py", line 188, in load_checkpoint
    load_state_dict(model, state_dict, strict, logger)
  File "/usr/local/lib/python3.5/dist-packages/mmcv/runner/checkpoint.py", line 96, in load_state_dict
    rank, _ = get_dist_info()
  File "/usr/local/lib/python3.5/dist-packages/mmcv/runner/utils.py", line 21, in get_dist_info
    initialized = dist.is_initialized()
AttributeError: module 'torch.distributed' has no attribute 'is_initialized'

Any idea?

Improve the interface of FileHandler

  1. Change abstract methods from static methods to instance methods. This allows future extensions to include options in instances.

  2. dump_to_file can have a default implementation based on dump_to_fileobject.

  3. Provide an interface to register handlers. I would suggest introducing a class decorator for this purpose; see the sketch below.
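
A hedged sketch of item 3 (the names _handlers, register_handler and JsonHandler are illustrative, not the actual mmcv API): a class decorator that maps file suffixes to handler instances, which also fits item 1, since handlers become ordinary objects that can carry per-instance options.

import json

_handlers = {}

def register_handler(*suffixes):
    def wrap(handler_cls):
        # Register one instance of the handler per file suffix.
        for suffix in suffixes:
            _handlers[suffix] = handler_cls()
        return handler_cls
    return wrap

@register_handler('json')
class JsonHandler:
    def dump_to_str(self, obj, **kwargs):
        return json.dumps(obj, **kwargs)

print(_handlers['json'].dump_to_str({'a': 1}))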

assert len(indices) == self.total_size error during multi-GPU training

I am trying to train on my dataset with 8 GPUs. However, after calling ./dist_train.sh, this assertion error appears:

Traceback (most recent call last):
  File "./tools/train.py", line 113, in <module>
    main()
  File "./tools/train.py", line 109, in main
    logger=logger)
  File "/mmdetection/mmdet/apis/train.py", line 58, in train_detector
    _dist_train(model, dataset, cfg, validate=validate)
  File "/mmdetection/mmdet/apis/train.py", line 186, in _dist_train
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 358, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 260, in train
    for i, data_batch in enumerate(data_loader):
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 193, in __iter__
    return _DataLoaderIter(self)
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 493, in __init__
    self._put_indices()
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 591, in _put_indices
    indices = next(self.sample_iter, None)
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 172, in __iter__
    for idx in self.sampler:
  File "/mmdetection/mmdet/datasets/loader/sampler.py", line 138, in __iter__
    assert len(indices) == self.total_size
...

In the config I tried various values for imgs_per_gpu and workers_per_gpu; currently they are:
imgs_per_gpu=2, workers_per_gpu=2,
but no setting worked. Single-GPU training works well.

What is the meaning of this assert?
Thanks!

ZeroDivisionError: float division by zero

I am using mmcv.track_iter_progress() in a file verification task on Windows 10.
The error occurred while running the task for the second time.
I think it is due to the file system caching in RAM, which makes the second run fast enough that the first progress update happens with zero elapsed time.

  File "D:\code\***\tools\dataset_verify.py", line 71, in <module>
    main()
  File "D:\code\***\tools\dataset_verify.py", line 62, in main
    for f in mmcv.track_iter_progress(files):
  File "D:\Anaconda3\envs\open-mmlab\lib\site-packages\mmcv\utils\progressbar.py", line 206, in track_iter_progress
    prog_bar.update()
  File "D:\Anaconda3\envs\open-mmlab\lib\site-packages\mmcv\utils\progressbar.py", line 47, in update
    fps = self.completed / elapsed
ZeroDivisionError: float division by zero
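
A minimal sketch of one possible guard (illustrative, not the actual progressbar.py fix): when iterations are fast enough, the first update can land in the same clock tick as the start, making elapsed exactly zero.

import time

def safe_fps(completed, start_time):
    # Avoid dividing by zero when the first update happens within the same
    # clock tick as the start of the progress bar.
    elapsed = time.time() - start_time
    return completed / elapsed if elapsed > 0 else float('inf')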

Suggest modifying the image display function from cv2 to something else

When I first tested the demo code from mmdetection in a Jupyter notebook, it led to a dead kernel.
https://github.com/open-mmlab/mmdetection#test-images

import mmcv
from mmcv.runner import load_checkpoint
from mmdet.models import build_detector
from mmdet.apis import inference_detector, show_result

cfg = mmcv.Config.fromfile('configs/faster_rcnn_r50_fpn_1x.py')
cfg.model.pretrained = None

# construct the model and load checkpoint
model = build_detector(cfg.model, test_cfg=cfg.test_cfg)
_ = load_checkpoint(model, 'https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/faster_rcnn_r50_fpn_1x_20181010-3d1b3351.pth')

# test a single image
img = mmcv.imread('test.jpg')
result = inference_detector(model, img, cfg)
show_result(img, result)

# test a list of images
imgs = ['test1.jpg', 'test2.jpg']
for i, result in enumerate(inference_detector(model, imgs, cfg, device='cuda:0')):
    print(i, imgs[i])
    show_result(imgs[i], result)

I found that this was caused by mmcv.visualization.image.imshow_det_bboxes:
https://github.com/open-mmlab/mmcv/blob/master/mmcv/visualization/image.py#L69

The default parameter show=True tries to render the image in a new window, which crashes Jupyter or a headless server session.

Would it be better if it returned the image instead of calling cv2.imshow, or rendered it with plt.imshow?
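
For comparison, a hedged sketch of rendering with matplotlib instead of a cv2 window, which works in Jupyter and on headless servers; it assumes img is a BGR numpy array that already has the detections drawn (for example by calling the visualization with show=False).

import matplotlib.pyplot as plt
import mmcv

img = mmcv.imread('test.jpg')    # placeholder: any BGR image with boxes already drawn
plt.imshow(mmcv.bgr2rgb(img))    # convert BGR -> RGB for matplotlib
plt.axis('off')
plt.show()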

Adding running loss functionality

Hi!

I have been trying to add a running loss in mmcv/runner/runner.py but I can't see any of my changes taking effect. Please let me know what to do.

Thanks!

mmcv error

My environment is macOS Mojave 10.14.4, Anaconda 4.4.0, Python 3.6.1.
I directly ran pip install mmcv and got:

Running setup.py clean for mmcv
Failed to build mmcv
Installing collected packages: mmcv
Running setup.py install for mmcv ... error

and:

In file included from ./mmcv/video/optflow_warp/flow_warp.cpp:1:
./mmcv/video/optflow_warp/flow_warp.hpp:3:10: fatal error: 'iostream' file not found
#include <iostream>

Can anybody help? Thank you very much.

shape check of impad function

In this line:
Instead of

for i in range(len(shape) - 1):
    assert shape[i] >= img.shape[i]

should it be

for i in range(len(shape)):
    assert shape[i] >= img.shape[i]

to ensure that every dimension of the input shape is less than or equal to the pad shape?

Should the progress bar go to stderr?

Sometimes I want to capture the output of some of my downstream scripts. The progress bar is only meant as an indication and does not really belong with the things I want to capture on stdout.
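
A minimal sketch of the suggestion (illustrative, not the actual ProgressBar implementation): send progress output to sys.stderr so stdout stays clean for capture.

import sys

def write_progress(msg, file=sys.stderr):
    # Progress/indication output goes to stderr; real results stay on stdout.
    file.write(msg)
    file.flush()

write_progress('[>>>>>     ] 5/10, 2.0 task/s\r')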

UnboundLocalError: local variable 'checkpoint' referenced before assignment

Traceback (most recent call last):
  File "tools/train.py", line 110, in <module>
    main()
  File "tools/train.py", line 86, in main
    cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
  File "/home/workspaces/code/Detection/mmdetection/mmdet/models/builder.py", line 43, in build_detector
    return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/home/workspaces/code/Detection/mmdetection/mmdet/models/builder.py", line 15, in build
    return build_from_cfg(cfg, registry, default_args)
  File "/home/workspaces/code/Detection/mmdetection/mmdet/utils/registry.py", line 76, in build_from_cfg
    return obj_cls(**args)
  File "/home/workspaces/code/Detection/mmdetection/mmdet/models/detectors/cascade_rcnn.py", line 87, in __init__
    self.init_weights(pretrained=pretrained)
  File "/home/workspaces/code/Detection/mmdetection/mmdet/models/detectors/cascade_rcnn.py", line 95, in init_weights
    self.backbone.init_weights(pretrained=pretrained)
  File "/home/workspaces/code/Detection/mmdetection/mmdet/models/backbones/resnet.py", line 499, in init_weights
    load_checkpoint(self, pretrained, strict=False, logger=logger)
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/checkpoint.py", line 163, in load_checkpoint
    checkpoint = load_url_dist(model_urls[model_name])
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/checkpoint.py", line 120, in load_url_dist
    return checkpoint
UnboundLocalError: local variable 'checkpoint' referenced before assignment

Can I use Adam optimizer together with FixedLrUpdaterHook?

Hi! I'd like to use mmcv.runner in my project to do the babysitting during training. I want to use the Adam optimizer and a warmup strategy; can I use a FixedLrUpdaterHook in mmcv.runner together with the Adam optimizer directly? It seems that FixedLrUpdaterHook sets the lr back to the original value every epoch.

Hello, a few questions about focal loss

I would like to ask whether focal loss has already been wrapped up somewhere; I could not find the part of focal loss that computes the partial derivatives for gradient descent.
Can I go into the corresponding function and modify its hyper-parameter definitions, rather than only changing the hyper-parameters type='FocalLoss', use_sigmoid=True, gamma=2.0, alpha=0.25? I would rather modify the underlying definition. Please tell me whether this is possible, or whether it is already encapsulated inside.
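
For reference, a hedged sketch of the config fragment those hyper-parameters come from (mmdetection-style and illustrative only); changing the loss behaviour beyond these values would mean editing the registered FocalLoss implementation itself rather than the config:

loss_cls = dict(
    type='FocalLoss',
    use_sigmoid=True,
    gamma=2.0,
    alpha=0.25)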

imrescale error when the input image shape is arbitrary

When resizing images while keeping the aspect ratio, imrescale seems to calculate a wrong scale factor in some cases, for example: (600, 1200) -> (1024, 608).

max_long_edge = max(scale)
max_short_edge = min(scale)
scale_factor = min(max_long_edge / max(h, w), max_short_edge / min(h, w))

Assuming the input scale is (width, height), the scale factor should be

scale_factor = min(float(scale[1]) / h, float(scale[0]) / w)

weights_to_cpu() does not take None val into consideration

In mmcv/runner/checkpoint.py, line 192:
def weights_to_cpu(state_dict):
    """Copy a model state_dict to cpu.

    Args:
        state_dict (OrderedDict): Model weights on GPU.

    Returns:
        OrderedDict: Model weights on GPU.
    """
    state_dict_cpu = OrderedDict()
    for key, val in state_dict.items():
        state_dict_cpu[key] = val.cpu()
    return state_dict_cpu

If val is None, this will raise an error.
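
A hedged sketch of the suggested guard (not the actual mmcv patch): pass None values through unchanged instead of calling .cpu() on them.

from collections import OrderedDict

def weights_to_cpu(state_dict):
    """Copy a model state_dict to CPU, passing None values through unchanged."""
    state_dict_cpu = OrderedDict()
    for key, val in state_dict.items():
        state_dict_cpu[key] = val.cpu() if val is not None else None
    return state_dict_cpu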

./compile.sh error with cuda path

My environment is:
  Ubuntu 18.04.2
  gcc 5.5
  Python 3.6.5
  PyTorch 1.0
  CUDA 10.0
When I run compile.sh, it fails with:

unable to execute '/usr/local/cuda-10.0:/usr/local/cuda-10.0:/usr/local/cuda-10.0:/bin/nvcc': No such file or directory
error: command '/usr/local/cuda-10.0:/usr/local/cuda-10.0:/usr/local/cuda-10.0:/bin/nvcc' failed with exit status 1
And I have added CUDA to the path. In my .bashrc, the CUDA-related entries are:
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH

export CUDA_HOME=/usr/local/cuda-10.0:$CUDA_HOME

export PATH=/usr/local/cuda-10.0/bin:$PATH

I can't solve this problem. Could you help me?
Thank you very much!
