
liuruijin17 / lstr


This is the official repository of End-to-end Lane Shape Prediction with Transformers.

License: BSD 3-Clause "New" or "Revised" License

Python 100.00%

lstr's People

Contributors

liuruijin17


lstr's Issues

How to utilize multi-gpu

Hi, I'm trying to train the model with four GPUs by setting
batch size: [32]
chunk size: [8, 8, 8, 8]

Is this correct? It's not working quite well...
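For reference, a quick sanity check of such a config (a sketch only; it assumes the CornerNet-style "batch_size" and "chunk_sizes" entries under "system" in config/LSTR.json, so adjust if the actual keys differ): the chunk list should have one entry per GPU and its entries should sum to the global batch size.

import json

# Sketch only: assumes config/LSTR.json keeps CornerNet-style "batch_size" and
# "chunk_sizes" keys under "system"; adjust the path/keys if the repo differs.
with open("config/LSTR.json") as f:
    cfg = json.load(f)["system"]

batch_size  = cfg["batch_size"]    # e.g. 32
chunk_sizes = cfg["chunk_sizes"]   # e.g. [8, 8, 8, 8] for four GPUs

# chunk_sizes[i] is the slice of the global batch sent to GPU i.
assert len(chunk_sizes) == 4, "expected one entry per visible GPU"
assert sum(chunk_sizes) == batch_size, "chunk sizes must sum to batch_size"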

cannot load LSTR_500000.pkl

File "/home/xx/ailab/LSTR/db/tusimple.py", line 114, in init
self._load_data()
File "/home/xx/ailab/LSTR/db/tusimple.py", line 137, in _load_data
self.max_points) = pickle.load(f)
TypeError: 'int' object is not iterable

f=open('./cache/nnet/LSTR/LSTR_500000.pkl','rb')
pickle.load(f)
119547037146038801333356
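For reference, a generic (not LSTR-specific) sanity check of the downloaded file before db/tusimple.py tries to unpack it; the path is the one from the snippet above.

import os
import pickle

# Generic sanity check: confirm the file on disk is a real pickle of plausible
# size before the loader tries to unpack it into several fields.
path = './cache/nnet/LSTR/LSTR_500000.pkl'
print('size on disk:', os.path.getsize(path), 'bytes')   # a tiny file suggests a bad or partial download

with open(path, 'rb') as f:
    obj = pickle.load(f)
print('unpickled type:', type(obj))   # the loader expects something it can unpack, not a bare int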

pitch accuracy

I have a question: how accurate is the pitch that the transformer model predicts at inference time?

training error

loading from cache file: ./cache\tusimple_['label_data_0313', 'label_data_0601'].pkl
loading from cache file: ./cache\tusimple_['label_data_0531'].pkl
No cache file found...
Now transforming annotations...
len of training db: 3268
len of testing db: 358
freeze the pretrained network: False
start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "D:\ML\LSTR-main\LSTR-main\train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "D:\ML\LSTR-main\LSTR-main\sample\tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "D:\ML\LSTR-main\LSTR-main\sample\tusimple.py", line 45, in kp_detection
line_strings.clip_out_of_image_()
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\imgaug\augmentables\lines.py", line 2048, in clip_out_of_image_
for ls in self.line_strings
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\imgaug\augmentables\lines.py", line 2049, in <listcomp>
for ls_clipped in ls.clip_out_of_image(self.shape)]
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\imgaug\augmentables\lines.py", line 550, in clip_out_of_image
intersections = self.find_intersections_with(edges)
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\imgaug\augmentables\lines.py", line 634, in find_intersections_with
import shapely.geometry
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\shapely\geometry\__init__.py", line 4, in <module>
from .base import CAP_STYLE, JOIN_STYLE
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\shapely\geometry\base.py", line 18, in <module>
from shapely.coords import CoordinateSequence
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\shapely\coords.py", line 8, in <module>
from shapely.geos import lgeos
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\shapely\geos.py", line 145, in <module>
lgeos = CDLL(os.path.join(sys.prefix, 'Library', 'bin', 'geos_c.dll'))
File "D:\software\Miniconda\envs\torch-1.6\lib\ctypes\__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found.
Process Process-1:
Traceback (most recent call last):
File "D:\software\Miniconda\envs\torch-1.6\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "D:\software\Miniconda\envs\torch-1.6\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "D:\ML\LSTR-main\LSTR-main\train.py", line 54, in prefetch_data
raise e
File "D:\ML\LSTR-main\LSTR-main\train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "D:\ML\LSTR-main\LSTR-main\sample\tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "D:\ML\LSTR-main\LSTR-main\sample\tusimple.py", line 45, in kp_detection
line_strings.clip_out_of_image_()
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\imgaug\augmentables\lines.py", line 2048, in clip_out_of_image_
for ls in self.line_strings
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\imgaug\augmentables\lines.py", line 2049, in <listcomp>
for ls_clipped in ls.clip_out_of_image(self.shape)]
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\imgaug\augmentables\lines.py", line 550, in clip_out_of_image
intersections = self.find_intersections_with(edges)
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\imgaug\augmentables\lines.py", line 634, in find_intersections_with
import shapely.geometry
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\shapely\geometry\__init__.py", line 4, in <module>
from .base import CAP_STYLE, JOIN_STYLE
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\shapely\geometry\base.py", line 18, in <module>
from shapely.coords import CoordinateSequence
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\shapely\coords.py", line 8, in <module>
from shapely.geos import lgeos
File "D:\software\Miniconda\envs\torch-1.6\lib\site-packages\shapely\geos.py", line 145, in <module>
lgeos = CDLL(os.path.join(sys.prefix, 'Library', 'bin', 'geos_c.dll'))
File "D:\software\Miniconda\envs\torch-1.6\lib\ctypes\__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found.
start prefetching data...

model's robustness is so bad?

I ran the demo on the TuSimple test data by following the master branch (TuSimple) readme.md with the default model you provided, and the result is fine. But when I use that model on my own test data or on the CULane test data, the performance is very bad. So I switched to the CULane branch and tested the CULane model on TuSimple data and on my own data; the result is also very bad.
I know that different datasets were captured by different cameras, so each dataset corresponds to different camera parameters. Should the model's polynomial coefficients be aligned with the test data's camera parameters when testing on a different dataset?

Question about FPS?

I noticed the FPS reported in your paper is 420, but when I test the model with batch size 1 I get about 120 FPS on a 2080 Ti.
Was the reported result measured with batch size 16? Why does repeating the images to increase the batch size make it run faster?
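For context, a sketch with a stand-in model (assumptions: a toy nn.Sequential, not the paper's benchmark code) showing why batching usually raises images/second (throughput) even though the latency of a single forward pass grows:

import time
import torch
import torch.nn as nn

# Stand-in model only; the point is the measurement pattern, not the network.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1),
).cuda().eval()

def images_per_second(batch_size, iters=50):
    x = torch.randn(batch_size, 3, 360, 640, device="cuda")
    with torch.no_grad():
        for _ in range(10):              # warm-up passes
            model(x)
        torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return batch_size * iters / (time.time() - t0)

print("bs=1 :", images_per_second(1))
print("bs=16:", images_per_second(16))   # typically more images/sec, even though each forward takes longer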

A question about the parameters predicted by the model

Hi, I'd like to ask a question:
According to Equations (3) and (4) in the paper, the predicted parameters b'', f'', and b''' should satisfy b'' * f'' = b'''. If the values the model predicts do not satisfy this identity, how is that handled? I don't see it handled anywhere in the code.
Thanks!

How does the model perform on CurveLanes datasets?

Great and meaningful work! From the results in the paper, the lane line detection fits well. I think the advantage of this model lies in predicting curved lanes, so I can't wait to see how it performs on the CurveLanes dataset, which has more complex and higher-curvature lanes than TuSimple. I am glad to help as much as I can.

How can I train on my own dataset?

@liuruijin17 How can I use my own data? I created a JSON file and wrote in my data paths.
[screenshot: Dingtalk_20210413093733]

I used the command:
"python test.py LSTR --testiter 500000 --modality eval --split testing --debug"
and got a wrong result, as shown below:
[screenshot: AEBSdata_1492626047222176976_0_3]

Unable to run train.py. Please help as soon as possible.

Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 54, in prefetch_data
raise e
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/content/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/content/TuSimple/LSTR/sample/tusimple.py", line 44, in kp_detection
img, line_strings, mask = db.transform(image=img, line_strings=line_strings, segmentation_maps=mask)
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/meta.py", line 1888, in call
return self.augment(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/meta.py", line 1859, in augment
batch_aug = self.augment_batch(batch, hooks=hooks)
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/meta.py", line 424, in augment_batch
batch = batch.to_normalized_batch()
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmentables/batches.py", line 213, in to_normalized_batch
self.segmentation_maps_unaug, shapes),
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmentables/normalization.py", line 206, in normalize_segmentation_maps
"SegmentationMapOnImage")
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmentables/normalization.py", line 62, in _assert_single_array_ndim
to_ntype, shape_str, ndim, arr.ndim,)
ValueError: Tried to convert an array to list of SegmentationMapOnImage. Expected that array to be of shape (N,H,W), i.e. 3-dimensional, but got 4 dimensions instead.

Optimize model parameters according to DETR.

Did you run a grid search over the enc_layers and dec_layers parameters?
According to DETR, more encoder and decoder layers are beneficial. Maybe enc_layers and dec_layers should be set to at least 3.

A simple question.

I'm sorry, I'm a beginner. I want to know how you implement the Hungarian fitting loss. Please explain it in detail (e.g., in detr_loss, lines 25 to 47). Thank you very much.
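For orientation, here is a minimal, generic sketch of DETR-style Hungarian matching (made-up weights and shapes; this is not the repo's detr_loss code): a cost matrix between predictions and ground-truth lanes is built from class probabilities and the curve-parameter distance, scipy picks the one-to-one assignment, and the losses are then computed only on the matched pairs.

import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_logits, pred_curves, tgt_labels, tgt_curves,
                    w_class=1.0, w_curve=5.0):
    # pred_logits: (num_queries, num_channels), pred_curves: (num_queries, D)
    # tgt_labels:  (num_targets,),              tgt_curves:  (num_targets, D)
    prob = pred_logits.softmax(-1)                           # (Q, C)
    cost_class = -prob[:, tgt_labels]                        # (Q, T): high prob -> low cost
    cost_curve = torch.cdist(pred_curves, tgt_curves, p=1)   # (Q, T): L1 on curve parameters
    cost = w_class * cost_class + w_curve * cost_curve
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    return rows, cols   # query i <-> ground-truth lane j; losses use only these pairs

# Toy usage with random tensors
q, t, d = 7, 2, 4
rows, cols = hungarian_match(torch.randn(q, 3), torch.randn(q, d),
                             torch.tensor([1, 1]), torch.randn(t, d))
print(rows, cols)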

Label own dataset

Hi, how should I annotate my own dataset in the same format as TuSimple? I want to test the model on my own dataset.
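For reference, the format itself is standard TuSimple, while the file name and paths below are made up: each line of a label_data_*.json file is one JSON object giving the shared y coordinates and each lane's x coordinate at those rows, with -2 where the lane is absent.

import json

# One annotation per line, JSON-encoded, as TuSimple ships it. The y grid below
# is only an example; use whatever rows you actually label.
h_samples = list(range(240, 720, 10))          # shared y coordinates (example grid)
record = {
    "raw_file": "clips/my_clip/0001/20.jpg",   # image path relative to the data root (made up)
    "h_samples": h_samples,
    "lanes": [
        [-2] * 10 + [632, 626, 620, 614] + [-2] * (len(h_samples) - 14),  # one lane's x values
        [-2] * 12 + [350, 356, 362, 368] + [-2] * (len(h_samples) - 16),  # another lane
    ],
}
with open("label_data_mydata.json", "a") as f:
    f.write(json.dumps(record) + "\n")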

About the backbone

@liuruijin17 Hi! I'm re-implementing your method.
I'm checking the modifications you made to the ResNet-18 backbone. Apart from the width change, there also seems to be one BasicBlock dropped at layer1; the other settings (not counting the frozen BN and the width change) are exactly the same as He's ResNet-18. Could you check whether I'm correct?
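To make the block-count question concrete, a sketch of only that change using torchvision (the reduced channel widths and frozen BN mentioned above are not reproduced here, and this is not the repo's backbone code):

from torchvision.models.resnet import ResNet, BasicBlock

# ResNet-18-style network with a single BasicBlock in layer1 instead of two:
# layers=[1, 2, 2, 2] versus the standard [2, 2, 2, 2].
backbone = ResNet(BasicBlock, layers=[1, 2, 2, 2], num_classes=1000)
print(sum(p.numel() for p in backbone.parameters()))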

Detect other images

Hello, I have a question: if I want to run detection on another dataset (e.g., images from my car's DVR), do I need to retrain the model? I used the CULane model to detect and the results are not good.
Thanks!

Testing with my own data

Many thanks for your work. I want to test with my own images (their size is different from TuSimple's), but the results I get are way off. Which parameters should I modify?

Train error

I'd like to ask: when I run train.py with threads = 4 and the batch size changed to 4,
RAM usage still climbs to 16 GB and then the computer crashes.
May I ask what your PC configuration is?

How does the model perform on CULane datasets?

Thanks, great and meaningful work! I got a good result on the TuSimple dataset. Have you trained the model on the CULane dataset? How does it perform on CULane? I am glad to help as much as I can.

Something wrong with the CE loss?

Hi,
In dataset:
lanes[lane_pos, 0] = category #which is 1 in the code

while in loss_label, target_classes is all 1 when self.num_classes == 1:

...
target_classes = torch.full(src_logits.shape[:2], self.num_classes, dtype=torch.int64, device=src_logits.device)
target_classes[idx] = target_classes_o
loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight)

So loss_ce can always be 0 there.
Thanks for your work; hoping for a reply.
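To make the shapes concrete, a self-contained toy version of the quoted snippet with made-up values (batch=2, 7 queries, 3 logit channels); it only reproduces the computation and is not a claim about whether the repo's loss is degenerate.

import torch
import torch.nn.functional as F

num_classes = 1                                   # as in the issue
src_logits = torch.randn(2, 7, 3)                 # (batch, queries, logit channels)
empty_weight = torch.ones(3)                      # stand-in for self.empty_weight in this toy

idx = (torch.tensor([0, 0, 1]), torch.tensor([2, 5, 3]))   # matched (batch, query) pairs
target_classes_o = torch.tensor([1, 1, 1])                  # dataset labels, all 1

target_classes = torch.full(src_logits.shape[:2], num_classes, dtype=torch.int64)
target_classes[idx] = target_classes_o            # matched queries also receive label 1 here
loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, empty_weight)
print(loss_ce)                                    # nonzero for random logits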

0%| | 0/500000 [00:00<?, ?it/s]\

Traceback (most recent call last):
File "/home/goodldz/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/goodldz/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 54, in prefetch_data
raise e
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/media/goodldz/xiaoliu/LSTR-main/LSTR-main/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/media/goodldz/xiaoliu/LSTR-main/LSTR-main/sample/tusimple.py", line 38, in kp_detection
mask = np.ones((1, img.shape[0], img.shape[1], 1), dtype=np.bool)
AttributeError: 'NoneType' object has no attribute 'shape'
Total parameters: 765787
MACs: 574.280M
setting learning rate to: 0.0001
training start...
0%| | 0/500000 [00:00<?, ?it/s]\

Does anyone know how to solve this problem?

How to generate label_data_0313.json?

I downloaded the TuSimple test set; the dataset looks like this:

.
├── clips
│   ├── 0530
│   ├── 0531
│   └── 0601
├── readme.md
└── test_tasks_0627.json

Where can I find these files:

label_data_0313.json
label_data_0531.json
label_data_0601.json

No cache file found.

loading from cache file: ./cache\tusimple_['label_data_0313', 'label_data_0601', 'label_data_0531'].pkl
No cache file found...

FileNotFoundError: [Errno 2] No such file or directory: '../../TuSimple\LaneDetection\label_data_0313.json'
why?

Why 3 classes instead of 2?

@liuruijin17 Hi! I have a small question: would you kindly tell me why you use 3 classes? The labels are all 1 (lane) / 2 (no object), so it seems 2 classes should suffice. A related question: could one directly use a BCE loss here, with only a 1-dim output from class_embed?
I've checked the DETR repo for discussions on single-class detection and still can't figure out why you use a 3-dim output at class_embed.

EDIT:
Is it related to the COCO index starting at 1?

Questions about LSTR's performance.

I just tested the performance, and the result doesn't seem that fast, only about 100 FPS. Is there something wrong with the testing method? Here's the testing code below.

speed_simple
import torch
import time
import numpy as np
from nnet.py_factory import NetworkFactory
from config import system_configs
import os
import json

cfg_file = os.path.join(system_configs.config_dir, "LSTR.json")
print("cfg_file: {}".format(cfg_file))
with open(cfg_file, "r") as f:
    configs = json.load(f)

configs["system"]["snapshot_name"] = "LSTR"
system_configs.update_config(configs["system"])

torch.cuda.set_device(0)
torch.backends.cudnn.benchmark = True
nnet = NetworkFactory()
nnet.load_params(500000)
nnet.cuda()
nnet.eval_mode()

input_size = [360, 640]
images = torch.zeros((1, 3, input_size[0], input_size[1]), dtype=torch.float32).cuda() + 1
masks = torch.ones((1, 1, input_size[0], input_size[1]), dtype=torch.float32).cuda() + 2

# warm-up passes so CUDA initialization and cuDNN autotuning don't count towards the timing
for i in range(10):
    outputs, weights = nnet.test([images, masks])

# timed passes: synchronize before and after each forward so the wall-clock
# measurement reflects GPU work, keeping per-iteration times in t_all
t_all = []
for i in range(200):
    torch.cuda.synchronize(0)
    t1 = time.time()
    outputs, weights = nnet.test([images, masks])
    torch.cuda.synchronize(0)
    t2 = time.time()
    t_all.append(t2 - t1)

print('average time:', np.mean(t_all) / 1)
print('average fps:', 1 / np.mean(t_all))

print('fastest time:', min(t_all) / 1)
print('fastest fps:', 1 / min(t_all))

print('slowest time:', max(t_all) / 1)
print('slowest fps:', 1 / max(t_all))


Question about equation (10)?

Thanks for your great work!
I have a question about Equation (10): I think it should be X = Z·u / f_u according to the perspective projection. Moreover, it seems to ignore the c_u and c_v parameters of the camera intrinsics.
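For reference, the standard pinhole relation the question appeals to (standard background only, not a statement about what Eq. (10) intends; whether c_u and c_v are dropped by a normalization assumption is for the authors to confirm):

% Pinhole projection and its inversion for a point (X, Y, Z) in camera coordinates
u = f_u \frac{X}{Z} + c_u, \qquad v = f_v \frac{Y}{Z} + c_v
\quad\Longrightarrow\quad
X = \frac{(u - c_u)\, Z}{f_u}, \qquad Y = \frac{(v - c_v)\, Z}{f_v}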

Training fails on Windows 10

The code trains fine on Linux, but when I try to train it on Windows 10 with the same steps it fails with the error below. Any idea what is wrong here?

training start...
  0%|                                   | 0/500000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 233, in <module>
    train(training_dbs, validation_db, args.start_iter, args.freeze) # 0
  File "train.py", line 158, in train
    = nnet.train(iteration, save, viz_split, **training)
  File "C:\Users\mamoo\OneDrive\Documents\GitHub\TuSimple\LSTR\nnet\py_factory.py", line 111, in train
    loss_kp = self.network(iteration,
  File "C:\Users\mamoo\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\mamoo\OneDrive\Documents\GitHub\TuSimple\LSTR\models\py_utils\data_parallel.py", line 66, in forward
    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids, self.chunk_sizes)
  File "C:\Users\mamoo\OneDrive\Documents\GitHub\TuSimple\LSTR\models\py_utils\data_parallel.py", line 77, in scatter
    return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim, chunk_sizes=self.chunk_sizes)
  File "C:\Users\mamoo\OneDrive\Documents\GitHub\TuSimple\LSTR\models\py_utils\scatter_gather.py", line 30, in scatter_kwargs
    inputs = scatter(inputs, target_gpus, dim, chunk_sizes) if inputs else []
  File "C:\Users\mamoo\OneDrive\Documents\GitHub\TuSimple\LSTR\models\py_utils\scatter_gather.py", line 25, in scatter
    return scatter_map(inputs)
  File "C:\Users\mamoo\OneDrive\Documents\GitHub\TuSimple\LSTR\models\py_utils\scatter_gather.py", line 18, in scatter_map
    return list(zip(*map(scatter_map, obj)))
  File "C:\Users\mamoo\OneDrive\Documents\GitHub\TuSimple\LSTR\models\py_utils\scatter_gather.py", line 20, in scatter_map
    return list(map(list, zip(*map(scatter_map, obj))))
  File "C:\Users\mamoo\OneDrive\Documents\GitHub\TuSimple\LSTR\models\py_utils\scatter_gather.py", line 15, in scatter_map
    return Scatter.apply(target_gpus, chunk_sizes, dim, obj)
  File "C:\Users\mamoo\anaconda3\lib\site-packages\torch\nn\parallel\_functions.py", line 93, in forward
    outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
  File "C:\Users\mamoo\anaconda3\lib\site-packages\torch\nn\parallel\comm.py", line 189, in scatter
    return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: start (0) + length (16) exceeds dimension size (1).

API for testing other data?

This is great work! Quite a light model with quite fast speed! Just wondering whether you provide any API for testing images that others have collected?

training error

I tried to train the model on CurveLanes. I changed some parameters, and then the errors below occurred.

loading all datasets TUSIMPLE...
using 4 threads
loading from cache file: ./cache/tusimple_['train'].pkl
loading from cache file: ./cache/tusimple_['train'].pkl
loading from cache file: ./cache/tusimple_['train'].pkl
loading from cache file: ./cache/tusimple_['train'].pkl
loading from cache file: ./cache/tusimple_['val'].pkl
len of training db: 92794
len of testing db: 17015
freeze the pretrained network: False
building model...
Total parameters: 765787
MACs: 574.280M
setting learning rate to: 1e-05
training starts from iteration 500001 with learning_rate 1e-05
training start...

0%| | 0/10000 [00:00<?, ?it/s]

[ 0/10000] eta: 9 days, 7:23:18 lr: 0.000010 class_error: 56.05 loss: 28.0187 (28.0187) loss_ce: 9.5054 (9.5054) loss_lowers: 0.5512 (0.5512) loss_uppers: 0.2334 (0.2334) loss_curves: 3.5105 (3.5105) loss_ce_0: 10.0556 (10.0556) loss_lowers_0: 0.5423 (0.5423) loss_uppers_0: 0.2343 (0.2343) loss_curves_0: 3.3861 (3.3861) loss_ce_unscaled: 3.1685 (3.1685) class_error_unscaled: 56.0491 (56.0491) loss_lowers_unscaled: 0.2756 (0.2756) loss_uppers_unscaled: 0.1167 (0.1167) loss_curves_unscaled: 0.7021 (0.7021) cardinality_error_unscaled: 2.3333 (2.3333) loss_ce_0_unscaled: 3.3519 (3.3519) loss_lowers_0_unscaled: 0.2711 (0.2711) loss_uppers_0_unscaled: 0.1171 (0.1171) loss_curves_0_unscaled: 0.6772 (0.6772) cardinality_error_0_unscaled: 2.3917 (2.3917) time: 80.4198 data: 0.0001 max mem: 8305

[tqdm progress-bar lines trimmed; the periodic loss summaries below are kept as posted]

[ 10/10000] eta: 2 days, 12:20:59 lr: 0.000010 class_error: 46.38 loss: 28.0187 (27.6038) loss_ce: 8.9655 (8.7563) loss_lowers: 0.5512 (0.5520) loss_uppers: 0.2219 (0.2197) loss_curves: 3.6420 (3.9598) loss_ce_0: 9.7788 (9.5492) loss_lowers_0: 0.5423 (0.5451) loss_uppers_0: 0.2240 (0.2220) loss_curves_0: 3.4391 (3.7996) loss_ce_unscaled: 2.9885 (2.9188) class_error_unscaled: 53.5714 (52.6482) loss_lowers_unscaled: 0.2756 (0.2760) loss_uppers_unscaled: 0.1110 (0.1099) loss_curves_unscaled: 0.7284 (0.7920) cardinality_error_unscaled: 2.2375 (2.2125) loss_ce_0_unscaled: 3.2596 (3.1831) loss_lowers_0_unscaled: 0.2711 (0.2725) loss_uppers_0_unscaled: 0.1120 (0.1110) loss_curves_0_unscaled: 0.6878 (0.7599) cardinality_error_0_unscaled: 2.3042 (2.2674) time: 21.7477 data: 0.0023 max mem: 8405


[ 20/10000] eta: 2 days, 12:34:38 lr: 0.000010 class_error: 42.19 loss: 25.0865 (25.8781) loss_ce: 7.8072 (8.0656) loss_lowers: 0.5494 (0.5500) loss_uppers: 0.2083 (0.2108) loss_curves: 3.3714 (3.7720) loss_ce_0: 8.4600 (8.8677) loss_lowers_0: 0.5413 (0.5434) loss_uppers_0: 0.2113 (0.2143) loss_curves_0: 3.3045 (3.6543) loss_ce_unscaled: 2.6024 (2.6885) class_error_unscaled: 49.1356 (49.6023) loss_lowers_unscaled: 0.2747 (0.2750) loss_uppers_unscaled: 0.1042 (0.1054) loss_curves_unscaled: 0.6743 (0.7544) cardinality_error_unscaled: 2.1042 (2.0935) loss_ce_0_unscaled: 2.8200 (2.9559) loss_lowers_0_unscaled: 0.2707 (0.2717) loss_uppers_0_unscaled: 0.1057 (0.1071) loss_curves_0_unscaled: 0.6609 (0.7309) cardinality_error_0_unscaled: 2.1583 (2.1643) time: 18.9232 data: 0.0020 max mem: 8422


[ 30/10000] eta: 2 days, 9:04:02 lr: 0.000010 class_error: 37.11 loss: 22.2942 (24.2119) loss_ce: 6.2356 (7.3237) loss_lowers: 0.5474 (0.5494) loss_uppers: 0.1872 (0.2017) loss_curves: 3.3493 (3.6920) loss_ce_0: 6.9071 (8.0938) loss_lowers_0: 0.5434 (0.5434) loss_uppers_0: 0.1968 (0.2068) loss_curves_0: 3.2570 (3.6011) loss_ce_unscaled: 2.0785 (2.4412) class_error_unscaled: 42.7374 (46.7383) loss_lowers_unscaled: 0.2737 (0.2747) loss_uppers_unscaled: 0.0936 (0.1008) loss_curves_unscaled: 0.6699 (0.7384) cardinality_error_unscaled: 1.7458 (1.9647) loss_ce_0_unscaled: 2.3024 (2.6979) loss_lowers_0_unscaled: 0.2717 (0.2717) loss_uppers_0_unscaled: 0.0984 (0.1034) loss_curves_0_unscaled: 0.6514 (0.7202) cardinality_error_0_unscaled: 1.8583 (2.0434) time: 19.9782 data: 0.0016 max mem: 8422


[ 40/10000] eta: 2 days, 7:59:32 lr: 0.000010 class_error: 31.95 loss: 18.4215 (22.3800) loss_ce: 4.9553 (6.6601) loss_lowers: 0.5474 (0.5488) loss_uppers: 0.1759 (0.1945) loss_curves: 2.8598 (3.4533) loss_ce_0: 5.4973 (7.3892) loss_lowers_0: 0.5446 (0.5435) loss_uppers_0: 0.1824 (0.2004) loss_curves_0: 2.8902 (3.3903) loss_ce_unscaled: 1.6518 (2.2200) class_error_unscaled: 37.4188 (44.0451) loss_lowers_unscaled: 0.2737 (0.2744) loss_uppers_unscaled: 0.0879 (0.0973) loss_curves_unscaled: 0.5720 (0.6907) cardinality_error_unscaled: 1.5875 (1.8561) loss_ce_0_unscaled: 1.8324 (2.4631) loss_lowers_0_unscaled: 0.2723 (0.2717) loss_uppers_0_unscaled: 0.0912 (0.1002) loss_curves_0_unscaled: 0.5780 (0.6781) cardinality_error_0_unscaled: 1.7000 (1.9488) time: 18.5441 data: 0.0018 max mem: 8422


[ 50/10000] eta: 2 days, 7:51:52 lr: 0.000010 class_error: 26.33 loss: 15.7494 (20.9579) loss_ce: 4.2598 (6.1101) loss_lowers: 0.5415 (0.5471) loss_uppers: 0.1678 (0.1885) loss_curves: 2.5450 (3.3171) loss_ce_0: 4.7282 (6.7795) loss_lowers_0: 0.5397 (0.5425) loss_uppers_0: 0.1765 (0.1944) loss_curves_0: 2.5641 (3.2787) loss_ce_unscaled: 1.4199 (2.0367) class_error_unscaled: 34.1420 (41.4404) loss_lowers_unscaled: 0.2708 (0.2736) loss_uppers_unscaled: 0.0839 (0.0942) loss_curves_unscaled: 0.5090 (0.6634) cardinality_error_unscaled: 1.4958 (1.7900) loss_ce_0_unscaled: 1.5761 (2.2598) loss_lowers_0_unscaled: 0.2699 (0.2712) loss_uppers_0_unscaled: 0.0882 (0.0972) loss_curves_0_unscaled: 0.5128 (0.6557) cardinality_error_0_unscaled: 1.6292 (1.8821) time: 19.6019 data: 0.0018 max mem: 8439


[ 60/10000] eta: 2 days, 5:53:45 lr: 0.000010 class_error: 22.85 loss: 13.0635 (19.5814) loss_ce: 3.4124 (5.6438) loss_lowers: 0.5318 (0.5447) loss_uppers: 0.1535 (0.1823) loss_curves: 2.2930 (3.1334) loss_ce_0: 3.6814 (6.2375) loss_lowers_0: 0.5329 (0.5408) loss_uppers_0: 0.1587 (0.1880) loss_curves_0: 2.3868 (3.1109) loss_ce_unscaled: 1.1375 (1.8813) class_error_unscaled: 25.7618 (38.4689) loss_lowers_unscaled: 0.2659 (0.2723) loss_uppers_unscaled: 0.0767 (0.0912) loss_curves_unscaled: 0.4586 (0.6267) cardinality_error_unscaled: 1.4750 (1.7335) loss_ce_0_unscaled: 1.2271 (2.0792) loss_lowers_0_unscaled: 0.2664 (0.2704) loss_uppers_0_unscaled: 0.0793 (0.0940) loss_curves_0_unscaled: 0.4774 (0.6222) cardinality_error_0_unscaled: 1.5542 (1.8153) time: 18.0469 data: 0.0014 max mem: 8439


[ 70/10000] eta: 2 days, 5:56:29 lr: 0.000010 class_error: 17.71 loss: 12.2602 (18.5166) loss_ce: 3.1350 (5.2774) loss_lowers: 0.5300 (0.5429) loss_uppers: 0.1485 (0.1772) loss_curves: 2.1565 (2.9975) loss_ce_0: 3.3350 (5.8074) loss_lowers_0: 0.5291 (0.5396) loss_uppers_0: 0.1547 (0.1830) loss_curves_0: 2.2314 (2.9917) loss_ce_unscaled: 1.0450 (1.7591) class_error_unscaled: 20.7650 (35.8245) loss_lowers_unscaled: 0.2650 (0.2714) loss_uppers_unscaled: 0.0742 (0.0886) loss_curves_unscaled: 0.4313 (0.5995) cardinality_error_unscaled: 1.4417 (1.6981) loss_ce_0_unscaled: 1.1117 (1.9358) loss_lowers_0_unscaled: 0.2645 (0.2698) loss_uppers_0_unscaled: 0.0773 (0.0915) loss_curves_0_unscaled: 0.4463 (0.5983) cardinality_error_0_unscaled: 1.4458 (1.7646) time: 17.8820 data: 0.0025 max mem: 8439


[ 80/10000] eta: 2 days, 5:41:27 lr: 0.000010 class_error: 17.87 loss: 11.8092 (17.7214) loss_ce: 3.0373 (5.0065) loss_lowers: 0.5300 (0.5416) loss_uppers: 0.1434 (0.1727) loss_curves: 2.0666 (2.8988) loss_ce_0: 3.1440 (5.4792) loss_lowers_0: 0.5311 (0.5389) loss_uppers_0: 0.1518 (0.1786) loss_curves_0: 2.1665 (2.9052) loss_ce_unscaled: 1.0124 (1.6688) class_error_unscaled: 18.9608 (33.6154) loss_lowers_unscaled: 0.2650 (0.2708) loss_uppers_unscaled: 0.0717 (0.0863) loss_curves_unscaled: 0.4133 (0.5798) cardinality_error_unscaled: 1.5125 (1.6780) loss_ce_0_unscaled: 1.0480 (1.8264) loss_lowers_0_unscaled: 0.2656 (0.2695) loss_uppers_0_unscaled: 0.0759 (0.0893) loss_curves_0_unscaled: 0.4333 (0.5810) cardinality_error_0_unscaled: 1.4667 (1.7300) time: 19.3776 data: 0.0029 max mem: 8439

start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 56, in kp_detection
label[:, 1][...] = np.min(label[:, 1])
File "<array_function internals>", line 6, in amin
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2746, in amin
keepdims=keepdims, initial=initial, where=where)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity
Process Process-3:
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 54, in prefetch_data
raise e
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 56, in kp_detection
label[:, 1][...] = np.min(label[:, 1])
File "<array_function internals>", line 6, in amin
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2746, in amin
keepdims=keepdims, initial=initial, where=where)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity


[ 90/10000] eta: 2 days, 6:31:59 lr: 0.000010 class_error: 17.53 loss: 11.3433 (16.9882) loss_ce: 2.9765 (4.7805) loss_lowers: 0.5248 (0.5400) loss_uppers: 0.1374 (0.1687) loss_curves: 1.9169 (2.7849) loss_ce_0: 2.9991 (5.2023) loss_lowers_0: 0.5281 (0.5380) loss_uppers_0: 0.1422 (0.1744) loss_curves_0: 2.0014 (2.7994) loss_ce_unscaled: 0.9922 (1.5935) class_error_unscaled: 17.5306 (31.7672) loss_lowers_unscaled: 0.2624 (0.2700) loss_uppers_unscaled: 0.0687 (0.0843) loss_curves_unscaled: 0.3834 (0.5570) cardinality_error_unscaled: 1.5083 (1.6624) loss_ce_0_unscaled: 0.9997 (1.7341) loss_lowers_0_unscaled: 0.2640 (0.2690) loss_uppers_0_unscaled: 0.0711 (0.0872) loss_curves_0_unscaled: 0.4003 (0.5599) cardinality_error_0_unscaled: 1.4667 (1.6991) time: 20.7130 data: 0.0020 max mem: 8439

start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/resource_sharer.py", line 149, in _serve
send(conn, destination_pid)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/resource_sharer.py", line 50, in send
reduction.send_handle(conn, new_fd, pid)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/reduction.py", line 176, in send_handle
with socket.fromfd(conn.fileno(), socket.AF_UNIX, socket.SOCK_STREAM) as s:
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/socket.py", line 460, in fromfd
nfd = dup(fd)
OSError: [Errno 24] Too many open files
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 337, in reduce_storage
df = multiprocessing.reduction.DupFd(fd)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/reduction.py", line 191, in DupFd
return resource_sharer.DupFd(fd)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/resource_sharer.py", line 48, in init
new_fd = os.dup(fd)
OSError: [Errno 24] Too many open files
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 333, in reduce_storage
fd, size = storage.share_fd()
RuntimeError: unable to open shared memory object </torch_50890_1850420892> in read-write mode
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 333, in reduce_storage
fd, size = storage.share_fd()
RuntimeError: unable to open shared memory object </torch_50890_1322717930> in read-write mode
Traceback (most recent call last):
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 38, in kp_detection
AttributeError: 'NoneType' object has no attribute 'shape'
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 54, in prefetch_data
raise e
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 38, in kp_detection
mask = np.ones((1, img.shape[0], img.shape[1], 1), dtype=np.bool)
AttributeError: 'NoneType' object has no attribute 'shape'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/util.py", line 262, in _run_finalizers
finalizer()
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/util.py", line 186, in call
res = self._callback(*self._args, **self._kwargs)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/shutil.py", line 486, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/shutil.py", line 408, in _rmtree_safe_fd
onerror(os.listdir, path, sys.exc_info())
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/shutil.py", line 405, in _rmtree_safe_fd
names = os.listdir(topfd)
OSError: [Errno 24] Too many open files: '/tmp/pymp-adpczg8j'
Process Process-5:
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 54, in prefetch_data
raise e
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 38, in kp_detection
mask = np.ones((1, img.shape[0], img.shape[1], 1), dtype=np.bool)
AttributeError: 'NoneType' object has no attribute 'shape'

[VAL LOG] [Saving training and evaluating images...]


[ 100/10000] eta: 2 days, 7:35:54 lr: 0.000010 class_error: 17.55 loss: 11.0255 (16.3756) loss_ce: 2.8995 (4.5726) loss_lowers: 0.5222 (0.5374) loss_uppers: 0.1366 (0.1654) loss_curves: 1.8644 (2.7088) loss_ce_0: 2.9395 (4.9538) loss_lowers_0: 0.5252 (0.5359) loss_uppers_0: 0.1409 (0.1709) loss_curves_0: 1.9471 (2.7308) loss_ce_unscaled: 0.9665 (1.5242) class_error_unscaled: 17.6895 (30.2719) loss_lowers_unscaled: 0.2611 (0.2687) loss_uppers_unscaled: 0.0683 (0.0827) loss_curves_unscaled: 0.3729 (0.5418) cardinality_error_unscaled: 1.4917 (1.6444) loss_ce_0_unscaled: 0.9798 (1.6513) loss_lowers_0_unscaled: 0.2626 (0.2680) loss_uppers_0_unscaled: 0.0705 (0.0855) loss_curves_0_unscaled: 0.3894 (0.5462) cardinality_error_0_unscaled: 1.4542 (1.6717) time: 23.1861 data: 0.0018 max mem: 8629


[ 110/10000] eta: 2 days, 9:05:35 lr: 0.000010 class_error: 17.36 loss: 10.6776 (15.8492) loss_ce: 2.7136 (4.4056) loss_lowers: 0.5213 (0.5361) loss_uppers: 0.1366 (0.1628) loss_curves: 1.8028 (2.6266) loss_ce_0: 2.8543 (4.7616) loss_lowers_0: 0.5248 (0.5352) loss_uppers_0: 0.1406 (0.1682) loss_curves_0: 1.8755 (2.6531) loss_ce_unscaled: 0.9045 (1.4685) class_error_unscaled: 17.5522 (29.1298) loss_lowers_unscaled: 0.2607 (0.2680) loss_uppers_unscaled: 0.0683 (0.0814) loss_curves_unscaled: 0.3606 (0.5253) cardinality_error_unscaled: 1.4917 (1.6331) loss_ce_0_unscaled: 0.9514 (1.5872) loss_lowers_0_unscaled: 0.2624 (0.2676) loss_uppers_0_unscaled: 0.0703 (0.0841) loss_curves_0_unscaled: 0.3751 (0.5306) cardinality_error_0_unscaled: 1.4583 (1.6532) time: 25.2043 data: 0.0019 max mem: 8629


[ 120/10000] eta: 2 days, 11:26:42 lr: 0.000010 class_error: 16.46 loss: 10.3174 (15.4277) loss_ce: 2.6559 (4.2625) loss_lowers: 0.5186 (0.5345) loss_uppers: 0.1374 (0.1608) loss_curves: 1.7806 (2.5720) loss_ce_0: 2.7467 (4.5955) loss_lowers_0: 0.5262 (0.5340) loss_uppers_0: 0.1386 (0.1658) loss_curves_0: 1.8500 (2.6026) loss_ce_unscaled: 0.8853 (1.4208) class_error_unscaled: 16.9953 (28.1814) loss_lowers_unscaled: 0.2593 (0.2672) loss_uppers_unscaled: 0.0687 (0.0804) loss_curves_unscaled: 0.3561 (0.5144) cardinality_error_unscaled: 1.4750 (1.6194) loss_ce_0_unscaled: 0.9156 (1.5318) loss_lowers_0_unscaled: 0.2631 (0.2670) loss_uppers_0_unscaled: 0.0693 (0.0829) loss_curves_0_unscaled: 0.3700 (0.5205) cardinality_error_0_unscaled: 1.4583 (1.6346) time: 28.9453 data: 0.0019 max mem: 8629


[ 130/10000] eta: 2 days, 12:24:31 lr: 0.000010 class_error: 18.69 loss: 10.1321 (15.0607) loss_ce: 2.6237 (4.1363) loss_lowers: 0.5084 (0.5320) loss_uppers: 0.1374 (0.1588) loss_curves: 1.7518 (2.5290) loss_ce_0: 2.6835 (4.4473) loss_lowers_0: 0.5156 (0.5321) loss_uppers_0: 0.1386 (0.1636) loss_curves_0: 1.8061 (2.5615) loss_ce_unscaled: 0.8746 (1.3788) class_error_unscaled: 16.8843 (27.3191) loss_lowers_unscaled: 0.2542 (0.2660) loss_uppers_unscaled: 0.0687 (0.0794) loss_curves_unscaled: 0.3504 (0.5058) cardinality_error_unscaled: 1.4708 (1.6100) loss_ce_0_unscaled: 0.8945 (1.4824) loss_lowers_0_unscaled: 0.2578 (0.2661) loss_uppers_0_unscaled: 0.0693 (0.0818) loss_curves_0_unscaled: 0.3612 (0.5123) cardinality_error_0_unscaled: 1.4167 (1.6194) time: 28.9792 data: 0.0016 max mem: 8629


[ 140/10000] eta: 2 days, 13:27:56 lr: 0.000010 class_error: 17.10 loss: 9.8577 (14.7119) loss_ce: 2.4952 (4.0209) loss_lowers: 0.4854 (0.5285) loss_uppers: 0.1391 (0.1576) loss_curves: 1.7576 (2.4837) loss_ce_0: 2.5230 (4.3126) loss_lowers_0: 0.4903 (0.5290) loss_uppers_0: 0.1386 (0.1621) loss_curves_0: 1.8047 (2.5176) loss_ce_unscaled: 0.8317 (1.3403) class_error_unscaled: 16.8831 (26.6091) loss_lowers_unscaled: 0.2427 (0.2643) loss_uppers_unscaled: 0.0696 (0.0788) loss_curves_unscaled: 0.3515 (0.4967) cardinality_error_unscaled: 1.4917 (1.6029) loss_ce_0_unscaled: 0.8410 (1.4375) loss_lowers_0_unscaled: 0.2451 (0.2645) loss_uppers_0_unscaled: 0.0693 (0.0810) loss_curves_0_unscaled: 0.3609 (0.5035) cardinality_error_0_unscaled: 1.4125 (1.6061) time: 27.1708 data: 0.0016 max mem: 8629


[ 150/10000] eta: 2 days, 14:37:38 lr: 0.000010 class_error: 16.70 loss: 9.7313 (14.4185) loss_ce: 2.4731 (3.9181) loss_lowers: 0.4807 (0.5256) loss_uppers: 0.1393 (0.1563) loss_curves: 1.7584 (2.4511) loss_ce_0: 2.4997 (4.1941) loss_lowers_0: 0.4867 (0.5264) loss_uppers_0: 0.1402 (0.1606) loss_curves_0: 1.8047 (2.4862) loss_ce_unscaled: 0.8244 (1.3060) class_error_unscaled: 17.0956 (25.9839) loss_lowers_unscaled: 0.2404 (0.2628) loss_uppers_unscaled: 0.0697 (0.0782) loss_curves_unscaled: 0.3517 (0.4902) cardinality_error_unscaled: 1.4708 (1.5938) loss_ce_0_unscaled: 0.8332 (1.3980) loss_lowers_0_unscaled: 0.2434 (0.2632) loss_uppers_0_unscaled: 0.0701 (0.0803) loss_curves_0_unscaled: 0.3609 (0.4972) cardinality_error_0_unscaled: 1.4125 (1.5941) time: 28.4934 data: 0.0016 max mem: 8629


[ 160/10000] eta: 2 days, 15:33:26 lr: 0.000010 class_error: 16.28 loss: 9.6637 (14.1664) loss_ce: 2.4511 (3.8260) loss_lowers: 0.4706 (0.5218) loss_uppers: 0.1395 (0.1555) loss_curves: 1.7757 (2.4274) loss_ce_0: 2.4997 (4.0907) loss_lowers_0: 0.4767 (0.5230) loss_uppers_0: 0.1412 (0.1595) loss_curves_0: 1.8060 (2.4625) loss_ce_unscaled: 0.8170 (1.2753) class_error_unscaled: 16.9794 (25.4463) loss_lowers_unscaled: 0.2353 (0.2609) loss_uppers_unscaled: 0.0698 (0.0777) loss_curves_unscaled: 0.3551 (0.4855) cardinality_error_unscaled: 1.4542 (1.5872) loss_ce_0_unscaled: 0.8332 (1.3636) loss_lowers_0_unscaled: 0.2383 (0.2615) loss_uppers_0_unscaled: 0.0706 (0.0798) loss_curves_0_unscaled: 0.3612 (0.4925) cardinality_error_0_unscaled: 1.4083 (1.5823) time: 28.9692 data: 0.0018 max mem: 8629

start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 56, in kp_detection
label[:, 1][...] = np.min(label[:, 1])
File "<array_function internals>", line 6, in amin
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2746, in amin
keepdims=keepdims, initial=initial, where=where)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity
Process Process-2:
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 54, in prefetch_data
raise e
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 56, in kp_detection
label[:, 1][...] = np.min(label[:, 1])
File "<array_function internals>", line 6, in amin
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2746, in amin
keepdims=keepdims, initial=initial, where=where)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity


[ 170/10000] eta: 2 days, 17:00:32 lr: 0.000010 class_error: 17.83 loss: 9.4990 (13.8913) loss_ce: 2.3993 (3.7408) loss_lowers: 0.4620 (0.5182) loss_uppers: 0.1395 (0.1546) loss_curves: 1.7304 (2.3853) loss_ce_0: 2.4440 (3.9940) loss_lowers_0: 0.4686 (0.5197) loss_uppers_0: 0.1412 (0.1583) loss_curves_0: 1.7671 (2.4203) loss_ce_unscaled: 0.7998 (1.2469) class_error_unscaled: 17.6152 (24.9868) loss_lowers_unscaled: 0.2310 (0.2591) loss_uppers_unscaled: 0.0698 (0.0773) loss_curves_unscaled: 0.3461 (0.4771) cardinality_error_unscaled: 1.4708 (1.5807) loss_ce_0_unscaled: 0.8147 (1.3313) loss_lowers_0_unscaled: 0.2343 (0.2599) loss_uppers_0_unscaled: 0.0706 (0.0792) loss_curves_0_unscaled: 0.3534 (0.4841) cardinality_error_0_unscaled: 1.4208 (1.5745) time: 30.7453 data: 0.0017 max mem: 8629

start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 56, in kp_detection
label[:, 1][...] = np.min(label[:, 1])
File "<array_function internals>", line 6, in amin
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2746, in amin
keepdims=keepdims, initial=initial, where=where)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity
Process Process-4:
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 54, in prefetch_data
raise e
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 56, in kp_detection
label[:, 1][...] = np.min(label[:, 1])
File "<array_function internals>", line 6, in amin
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2746, in amin
keepdims=keepdims, initial=initial, where=where)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity


[ 180/10000] eta: 2 days, 23:03:32 lr: 0.000010 class_error: 16.71 loss: 9.3920 (13.6690) loss_ce: 2.3256 (3.6634) loss_lowers: 0.4540 (0.5144) loss_uppers: 0.1397 (0.1538) loss_curves: 1.7051 (2.3612) loss_ce_0: 2.3976 (3.9062) loss_lowers_0: 0.4621 (0.5162) loss_uppers_0: 0.1368 (0.1573) loss_curves_0: 1.7376 (2.3965) loss_ce_unscaled: 0.7752 (1.2211) class_error_unscaled: 17.4545 (24.5537) loss_lowers_unscaled: 0.2270 (0.2572) loss_uppers_unscaled: 0.0698 (0.0769) loss_curves_unscaled: 0.3410 (0.4722) cardinality_error_unscaled: 1.4750 (1.5769) loss_ce_0_unscaled: 0.7992 (1.3021) loss_lowers_0_unscaled: 0.2311 (0.2581) loss_uppers_0_unscaled: 0.0684 (0.0787) loss_curves_0_unscaled: 0.3475 (0.4793) cardinality_error_0_unscaled: 1.4292 (1.5651) time: 48.5700 data: 0.0016 max mem: 8629

2%|▎ | 180/10000 [1:18:35<183:19:03, 67.20s/it]
2%|▎ | 181/10000 [1:18:35<187:00:56, 68.57s/it]
2%|▎ | 182/10000 [1:19:45<188:18:58, 69.05s/it]
2%|▎ | 183/10000 [1:20:55<189:08:09, 69.36s/it]
2%|▎ | 184/10000 [1:22:05<189:37:14, 69.54s/it]
2%|▎ | 185/10000 [1:23:16<191:15:05, 70.15s/it]
2%|▎ | 186/10000 [1:24:26<190:19:00, 69.81s/it]
2%|▎ | 187/10000 [1:25:35<190:07:44, 69.75s/it]
2%|▍ | 188/10000 [1:26:44<189:21:48, 69.48s/it]
2%|▍ | 189/10000 [1:27:55<190:51:39, 70.03s/it]
2%|▍ | 190/10000 [1:29:05<190:58:46, 70.08s/it]

[ 190/10000] eta: 3 days, 5:15:47 lr: 0.000010 class_error: 15.40 loss: 9.2294 (13.4512) loss_ce: 2.2715 (3.5904) loss_lowers: 0.4437 (0.5106) loss_uppers: 0.1408 (0.1532) loss_curves: 1.7065 (2.3338) loss_ce_0: 2.3265 (3.8251) loss_lowers_0: 0.4501 (0.5127) loss_uppers_0: 0.1421 (0.1565) loss_curves_0: 1.7540 (2.3690) loss_ce_unscaled: 0.7572 (1.1968) class_error_unscaled: 16.7144 (24.1520) loss_lowers_unscaled: 0.2219 (0.2553) loss_uppers_unscaled: 0.0704 (0.0766) loss_curves_unscaled: 0.3413 (0.4668) cardinality_error_unscaled: 1.4708 (1.5708) loss_ce_0_unscaled: 0.7755 (1.2750) loss_lowers_0_unscaled: 0.2251 (0.2563) loss_uppers_0_unscaled: 0.0710 (0.0782) loss_curves_0_unscaled: 0.3508 (0.4738) cardinality_error_0_unscaled: 1.4125 (1.5568) time: 67.2167 data: 0.0018 max mem: 8629


[VAL LOG] [Saving training and evaluating images...]


[ 200/10000] eta: 3 days, 10:49:51 lr: 0.000010 class_error: 16.77 loss: 9.1541 (13.2323) loss_ce: 2.2418 (3.5185) loss_lowers: 0.4286 (0.5057) loss_uppers: 0.1424 (0.1527) loss_curves: 1.7101 (2.3061) loss_ce_0: 2.3205 (3.7451) loss_lowers_0: 0.4346 (0.5080) loss_uppers_0: 0.1420 (0.1557) loss_curves_0: 1.7353 (2.3406) loss_ce_unscaled: 0.7473 (1.1728) class_error_unscaled: 16.7276 (23.7820) loss_lowers_unscaled: 0.2143 (0.2528) loss_uppers_unscaled: 0.0712 (0.0764) loss_curves_unscaled: 0.3420 (0.4612) cardinality_error_unscaled: 1.4583 (1.5653) loss_ce_0_unscaled: 0.7735 (1.2484) loss_lowers_0_unscaled: 0.2173 (0.2540) loss_uppers_0_unscaled: 0.0710 (0.0779) loss_curves_0_unscaled: 0.3471 (0.4681) cardinality_error_0_unscaled: 1.4292 (1.5509) time: 70.0441 data: 0.0015 max mem: 8634

start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 56, in kp_detection
label[:, 1][...] = np.min(label[:, 1])
File "<array_function internals>", line 6, in amin
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2746, in amin
keepdims=keepdims, initial=initial, where=where)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity
Process Process-1:
Traceback (most recent call last):
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 54, in prefetch_data
raise e
File "train.py", line 50, in prefetch_data
data, ind = sample_data(db, ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 80, in sample_data
return globals()[system_configs.sampling_function](db, k_ind)
File "/mnt/sdd/luchengyu/lstr/TuSimple/LSTR/sample/tusimple.py", line 56, in kp_detection
label[:, 1][...] = np.min(label[:, 1])
File "<array_function internals>", line 6, in amin
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2746, in amin
keepdims=keepdims, initial=initial, where=where)
File "/home/luchengyu/anaconda3/envs/lstr/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity
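For what it's worth, a minimal reproduction of the ValueError in the tracebacks above, plus one possible guard (skipping samples whose label array is empty is only an assumption, not the repo's official fix for the CurveLanes conversion):

import numpy as np

label = np.zeros((0, 2))                      # a sample that kept no lane points after filtering
try:
    label[:, 1][...] = np.min(label[:, 1])    # same call as sample/tusimple.py line 56
except ValueError as e:
    print(e)                                  # "zero-size array to reduction operation minimum ..."

if label.shape[0] > 0:                        # guard: only touch samples with at least one lane point
    label[:, 1][...] = np.min(label[:, 1])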

What do the masks mean?

Thank you for sharing your code!
I am confused by
masks = xs[1] # B 1 360 640
Why is a mask needed as input? I can't find it in the paper.
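As general DETR background (not necessarily LSTR's exact reason for its fixed-size TuSimple inputs), the mask marks padded pixels when variable-size images are batched, so positional encodings and attention can ignore them; a small sketch:

import torch

def pad_and_mask(images):                        # images: list of (3, H, W) tensors
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    batch = torch.zeros(len(images), 3, max_h, max_w)
    mask = torch.ones(len(images), max_h, max_w, dtype=torch.bool)   # True = padding
    for i, img in enumerate(images):
        _, h, w = img.shape
        batch[i, :, :h, :w] = img
        mask[i, :h, :w] = False                  # real pixels
    return batch, mask

batch, mask = pad_and_mask([torch.randn(3, 360, 640), torch.randn(3, 300, 500)])
print(batch.shape, mask.shape)                   # torch.Size([2, 3, 360, 640]) torch.Size([2, 360, 640])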

How to create my own dataset

Could you please tell me how to convert my own dataset into the TuSimple format? That dataset is too large and I haven't downloaded it, so I don't know what its JSON looks like. When I used LaneNet before, I organized the data as img, gt_binary_image, gt_instance_image, train.txt, and val.txt. I'm not sure how things should be laid out here; any help would be appreciated.

Is the trained model valid only for the same camera?

Hello, is your trained model only useful for images produced by a fixed camera? If there are multiple cameras with different angles, is it impossible to use the same set of parameters, since your algorithm involves the camera's pitch angle?

Parallel assumption and complex topologies

Hi Ruijin,

First of all, congratulations on the great work! The results look amazing, and I particularly liked the equation derivation part.

If my understanding is correct, the paper uses a shape-consistency constraint which basically assumes that the lanes are parallel polynomials. This perhaps covers 90% of the cases, but it does not seem to address more complex topologies such as merges and splits. Any insights you could share on this?

Is training and running lstr on macOS possible?

Hello folks,
I downloaded the TuSimple lane detection challenge dataset (github.com/TuSimple/tusimple-benchmark) and the LSTR code (github.com/liuruijin17/LSTR). However, setting up the environment (conda env create --name lstr --file environment.txt) failed with "Solving environment: failed", since LSTR targets Ubuntu 16.04 while I use macOS Big Sur. Does anybody have an idea how to solve this? Thanks!
