
Comments (13)

loicland commented on June 18, 2024

Which fold are you using for validation?

from superpoint_graph.

LuZaiJiaoXiaL commented on June 18, 2024

Thank you for your quick reply. I use my own dataset: a photogrammetric dataset capturing city objects. With other models under parameter settings similar to S3DIS, I can achieve relatively good results, around mIoU = 0.6, but not with SPGraph.

I guess there are two possibilities.

  1. Because I am using a higher version of PyTorch, two kinds of user warnings appeared when I trained the model:

/model1/SPGraph1/superpoint_graph-ssp-spg_n1/learning/../learning/ecc/GraphConvModule.py:38: UserWarning: An output with one or more elements was resized since it had shape [6030, 32], which does not match the required output shape [6030, 1, 32].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /opt/conda/conda-bld/pytorch_1634272178570/work/aten/src/ATen/native/Resize.cpp:23.)
torch.bmm(f_a(a) if f_a else a, f_b(b) if f_b else b, out=out)
/model1/SPGraph1/superpoint_graph-ssp-spg_n1/learning/../learning/ecc/GraphConvModule.py:147: UserWarning: Warning: source/destination slices have same size but different shape for an index operation. This behavior is deprecated.
(Triggered internally at /opt/conda/conda-bld/pytorch_1634272178570/work/aten/src/ATen/native/cuda/Indexing.cu:324.)
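The first warning points at an `out=` tensor whose shape no longer matches what `torch.bmm` produces. A minimal sketch of the fix (illustrative shapes, not the actual SPG tensors) is to allocate `out` with the 3-D shape bmm expects and squeeze afterwards:

```python
import torch

# Illustrative shapes only: bmm of [N, 1, K] x [N, K, M] yields [N, 1, M],
# so `out` must be allocated 3-D rather than [N, M] to avoid the
# deprecated implicit resize mentioned in the warning.
a = torch.randn(6, 1, 8)
b = torch.randn(6, 8, 32)
out = torch.empty(6, 1, 32)   # matches bmm's output shape exactly
torch.bmm(a, b, out=out)
result = out.squeeze(1)       # back to [6, 32] for downstream code
```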

By the way, I also cloned the weights in the forward pass; otherwise the original code raised an in-place error. I am not sure whether this modification could cause something wrong.

class RNNGraphConvModule(nn.Module):
    ...
    def forward(self, hx):
        # get graph structure information tensors
        idxn, idxe, degs, degs_gpu, edgefeats = self._gci.get_buffers()
        edge_indexes = self._gci.get_pyg_buffers()

        # evaluate and reshape filter weights (shared among RNN iterations)
        weights = self._fnet(edgefeats)
        weights_clone = weights.clone()

        nc = hx.size(1)
        assert hx.dim() == 2 and weights_clone.dim() == 2 and weights_clone.size(1) in [nc, nc * nc]
        if weights_clone.size(1) != nc:
            weights_clone = weights_clone.view(-1, nc, nc)

        # repeatedly evaluate RNN cell
        hxs = [hx]
        if self._isLSTM:
            cx = Variable(hx.data.new(hx.size()).fill_(0))

        for r in range(self._nrepeats):
            if self.use_pyg:
                input = self.nn(hx, edge_indexes, weights_clone.clone())
            else:
                input = ecc.GraphConvFunction.apply(hx, weights_clone.clone(), nc, nc, idxn, idxe,
                                                    degs, degs_gpu, self._edge_mem_limit)
            if self._isLSTM:
                hx, cx = self._cell(input, (hx, cx))
            else:
                hx = self._cell(input, hx)
            hxs.append(hx)

        return torch.cat(hxs, 1) if self._cat_all else hx
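The in-place error that motivates the `.clone()` calls can be reproduced with a minimal, self-contained sketch (not SPG code): modifying a tensor that autograd saved for the backward pass raises a RuntimeError, while modifying a clone leaves the saved tensor intact.

```python
import torch

# A tensor saved for backward is modified in place: backward() fails.
w = torch.ones(3, requires_grad=True)
y = w * 2
z = y * y            # multiplication saves `y` for the backward pass
y.add_(1)            # in-place edit bumps y's version counter
try:
    z.sum().backward()
    raised = False
except RuntimeError:  # "... modified by an inplace operation"
    raised = True

# Cloning first keeps the saved tensor untouched, so backward succeeds.
w2 = torch.ones(3, requires_grad=True)
y2 = w2 * 2
y2.clone().add_(1)   # the in-place edit happens on the copy
(y2 * y2).sum().backward()
```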
  2. My dataset uses geographic coordinates in meters, which means the points are not centered before preprocessing. Could that matter for the quick overfitting issue?

I use xyzrgbelpsv as point cloud attributes.

Best,

Eric.


loicland commented on June 18, 2024

Can't hurt to center your data spatially and not give absolute coordinates, but I doubt that's the issue.
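As a minimal sketch (plain NumPy, hypothetical helper name), centering simply means subtracting each tile's centroid before preprocessing, so the network never sees large absolute geographic offsets:

```python
import numpy as np

def center_xyz(points):
    """points: (N, 3) xyz coordinates in meters; hypothetical helper."""
    return points - points.mean(axis=0)

# Geographic coordinates with large absolute values...
tile = np.array([[500000.0, 4500000.0, 120.0],
                 [500010.0, 4500005.0, 125.0]])
centered = center_xyz(tile)   # ...become small offsets around zero
```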

Can you tell me more about the data? Is it composed of a few large scenes or many small scenes? If the former, it could help to set spg_augm_hardcutoff to a lower value (to prevent the same graphs from being sampled over and over again), maybe 256, and activate all augmentations.

By the way, we are releasing a new, modern, and stable version of SPG next month, which should fix most of these issues (this code was written when PyTorch 0.3 had just come out...).


LuZaiJiaoXiaL commented on June 18, 2024

The dataset is composed of large urban scenes, each of which covers 125 m × 125 m, like this.

image

I have tested the code on S3DIS and the training process worked well there, so the modified code should be all right.

I am trying to set spg_augm_hardcutoff to a lower value and activate all augmentations.

Here you can find the training log.

Epoch 103/600 :
ecoph-------> 103
iou:[0.6566380912236217, 0.3800617829837083, 0.8940360448718363, 0.32101390892380727, 0.8850787977369635, 0.9492109481372457, 0.3596640085759631]
-> Train Loss: 0.6213 Train accuracy: 77.63%
iou:[0.4404596478715731, 0.011887909967392758, 0.7420587573541609, 0.01480105582168345, 0.5449726410808915, 0.49260565491602665, 0.2737869854971336]
-> Val Loss: 1.2844 Val accuracy: 75.81% Val oAcc: 75.41% Val IoU: 36.01% best ioU: 44.61%
Epoch 104/600 :
ecoph-------> 104
iou:[0.6589372578435381, 0.3817101438933501, 0.8933283303487887, 0.327000814225088, 0.8872410908121436, 0.9449670599515143, 0.3677511173869743]
-> Train Loss: 0.6130 Train accuracy: 78.10%
iou:[0.5027424335029329, 0.006355913782159967, 0.7281921311961872, 0.09818156270856995, 0.47333669955350305, 0.9396267271602392, 0.2552578877485897]
-> Val Loss: 1.3079 Val accuracy: 73.05% Val oAcc: 75.77% Val IoU: 42.91% best ioU: 44.61%
Epoch 105/600 :
ecoph-------> 105
iou:[0.6615522876953676, 0.3697841735218824, 0.8947551720160777, 0.2901540882132645, 0.8870370823608509, 0.9533809983512918, 0.36271826760800224]
-> Train Loss: 0.6126 Train accuracy: 78.10%
iou:[0.49125010649136136, 0.0, 0.7492919684482543, 0.11733278536936197, 0.5351337283570262, 0.9454390890361077, 0.159220589727361]
-> Val Loss: 1.0288 Val accuracy: 74.99% Val oAcc: 76.98% Val IoU: 42.82% best ioU: 44.61%
Epoch 106/600 :
ecoph-------> 106
iou:[0.6661328882464304, 0.3844873178783362, 0.8970037247384871, 0.2851588462693653, 0.88903764325098, 0.955135507001192, 0.37493692213314256]
-> Train Loss: 0.6119 Train accuracy: 78.17%
iou:[0.5060049828316692, 0.009148324376133913, 0.7180178914474473, 0.12324348652297845, 0.4963218107869462, 0.8831731634060321, 0.028927363071391158]
-> Val Loss: 1.4117 Val accuracy: 72.52% Val oAcc: 75.53% Val IoU: 39.50% best ioU: 44.61%
Epoch 107/600 :
ecoph-------> 107
iou:[0.6693749557301371, 0.402991629133051, 0.8974858975513205, 0.33841541604393466, 0.893175000271439, 0.9470561735997238, 0.3713003871729778]
-> Train Loss: 0.6083 Train accuracy: 78.08%
iou:[0.37095995673474136, 0.1081313163364059, 0.6923371044578516, 0.051245013069197966, 0.3527589400005447, 0.876163649390725, 0.12827342742445547]
-> Val Loss: 1.3401 Val accuracy: 70.13% Val oAcc: 67.96% Val IoU: 36.86% best ioU: 44.61%
Epoch 108/600 :
ecoph-------> 108
iou:[0.6677409718952819, 0.424706218665792, 0.9031530205010774, 0.32805150535108796, 0.8905602884572038, 0.9347215723413527, 0.3826377260423606]
-> Train Loss: 0.6003 Train accuracy: 78.52%
iou:[0.5245886508949194, 0.004659905577898234, 0.7153302259672607, 0.14323750584679595, 0.38662487534486967, 0.9143180272459347, 0.19319895942380585]
-> Val Loss: 1.2102 Val accuracy: 72.61% Val oAcc: 73.68% Val IoU: 41.17% best ioU: 44.61%
Epoch 109/600 :
ecoph-------> 109
iou:[0.6594039313695325, 0.3956099239462756, 0.8936706286663746, 0.29427287269016006, 0.8855307837934447, 0.9383831192347523, 0.38088675029971825]
-> Train Loss: 0.6170 Train accuracy: 78.02%
iou:[0.3225183155223509, 0.008661651158186498, 0.6607797929418523, 0.03248192012236956, 0.2833599632658436, 0.8333932950988899, 0.1422310548832261]
-> Val Loss: 1.9173 Val accuracy: 60.97% Val oAcc: 61.40% Val IoU: 32.62% best ioU: 44.61%
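For reference, the printed Val IoU is simply the mean of the per-class IoU list; e.g. for epoch 109:

```python
# Per-class validation IoU printed for epoch 109 above.
val_iou = [0.3225183155223509, 0.008661651158186498, 0.6607797929418523,
           0.03248192012236956, 0.2833599632658436, 0.8333932950988899,
           0.1422310548832261]
miou = sum(val_iou) / len(val_iou)
print(f"Val IoU: {100 * miou:.2f}%")  # matches the logged 32.62%
```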

The training loss decreased continuously, whereas the validation loss fluctuated around 1.2 from the start.

Do you have any suggestions?

'Can't hurt to center your data spatially and not give absolute coordinates, but I doubt that's the issue.' Do you mean I should center the data before preprocessing? I did not do that before.

Best,

Eric.


loicland commented on June 18, 2024

The partition seems more or less okay (a bit coarse maybe), and training is reasonable. How many tiles do you have in your dataset?

The drastic dimension reduction of SPG can be a source of overfitting. Instead of 100 million points from 10 scenes, you suddenly have 10 smallish graphs that can easily be rote-learned during training.

Steps to mitigate this are as follows:

  • choose a finer partition by decreasing the reg_strength
  • add xyz coordinates to each point before partitioning (see the learned partition for how to do this; pretty straightforward)
  • sample smaller graphs (maybe even 128 nodes)
  • sample fewer points per superpoint (to make overfitting harder)
  • add augmentations.
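The second bullet could look roughly like this (illustrative names, not SPG's actual API; see the learned-partition code for the real implementation): append centered, rescaled xyz to the per-point features so position information reaches the partition step.

```python
import numpy as np

def add_xyz_features(xyz, feats, weight=0.1):
    """Illustrative sketch: concatenate rescaled, centered xyz to features.

    xyz:   (N, 3) point coordinates in meters
    feats: (N, F) existing per-point features (e.g. rgb, elevation)
    """
    centered = xyz - xyz.mean(axis=0)
    scale = np.abs(centered).max()
    if scale == 0:
        scale = 1.0
    # weight keeps coordinates from dominating the other features
    return np.hstack([feats, weight * centered / scale])

xyz = np.random.rand(100, 3) * 125.0       # a 125 m x 125 m tile
feats = np.random.rand(100, 4)             # e.g. rgb + elevation
augmented = add_xyz_features(xyz, feats)   # shape (100, 7)
```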

In the upcoming version of SPG, we have new tools to prevent this pitfall. Soon™!


jing-zhao9 commented on June 18, 2024

Hello professor, when will your latest version of SPG be released?


loicland commented on June 18, 2024

Mid May!


LuZaiJiaoXiaL commented on June 18, 2024

> Mid May!

Hi, professor! May I know if the latest version has been released?

Best,

Eric


loicland commented on June 18, 2024

Sorry, we ran into some delays. We will release it by next week. I'll let you know personally.


LuZaiJiaoXiaL commented on June 18, 2024

> Sorry, we ran into some delays. We will release it by next week. I'll let you know personally.

Thanks, professor. Glad to know it.


LuZaiJiaoXiaL commented on June 18, 2024

> Sorry, we ran into some delays. We will release it by next week. I'll let you know personally.

Hi, professor. Is there any update?


loicland commented on June 18, 2024

It's up!

https://github.com/drprojects/superpoint_transformer


loicland commented on June 18, 2024

Hi!

We are releasing a new version of SuperPoint Graph called SuperPoint Transformer (SPT).

https://github.com/drprojects/superpoint_transformer

It is better in every way:

✨ SPT in numbers ✨
📊 SOTA results: 76.0 mIoU S3DIS 6-Fold, 63.5 mIoU on KITTI-360 Val, 79.6 mIoU on DALES
🦋 212k parameters only!
⚡ Trains on S3DIS in 3h on 1 GPU
Preprocessing is 7× faster than SPG!
🚀 Easy install (no more boost!)

If you are interested in lightweight, high-performance 3D deep learning, you should check it out. In the meantime, we will finally retire SPG and stop maintaining this repo.

