shrediquette / PIVlab
Particle Image Velocimetry for Matlab, official repository
Home Page: https://shrediquette.github.io/PIVlab/
License: MIT License
This checkbox might just write "inf" to the existing vector skip edit box.
Helpful for BOS and also for optical flow.
If I want to generate 10000 pairs of synthetic images, which function should I use?
Should probably be added in "Modify plot appearance". But this panel is quite full already...
But how...? Popup window where the user can enter a list of coordinates?
When users scroll through frames, the polyline should always be redrawn until it is cleared. So users can extract data using the same polyline in different frames.
Verify
It should be possible to read videos in PIVlab. Best option imho: Add a button next to the "Load Images" button: "Load Video". That should open a small window where a filename can be selected, and something like "skip n frames" to reduce the frame rate. The video must be treated the same way that image files are currently treated.
Current situation:
if strcmp(ext,'.b16')
    currentimage=f_readB16(filepath{selected});
else
    currentimage=imread(filepath{selected});
end
Add something like:
if is_video
    currentimage=read(videoobject,selected); % videoobject created once via VideoReader
end
The VideoReader function was introduced in R2010b, so this feature cannot be available in R2010a.
Many people are asking for the correlation coefficient. I am not really sure what the correlation coefficient can tell you, as it can't discriminate between a strong background signal and a strong particle image correlation. However, I am currently implementing it like this:
if lastpass==1 && get_correlationmap == 1 % does not work with only 1 pass
    correlation_map=zeros(size(typevector));
    for cor_i=1:size(image1_cut,3) % one coefficient per interrogation area
        correlation_map(cor_i)=corr2(image1_cut(:,:,cor_i),image2_cut(:,:,cor_i));
    end
    correlation_map = permute(reshape(correlation_map, size(xtable')), [2 1 3]);
end
The challenge is to integrate this in the GUI in a suitable way....
Support for multi-layer TIFFs should be added. However, I have never used this kind of TIFF, so I don't know how many different types exist. Ideally, someone would provide several multi-layer TIFFs from different origins so I can test this.
Can Matlab do this? Does it depend on the codecs that are installed on the specific computer?
As nicely shown in invariant2_factor.m
in https://github.com/Tianshu-Liu/OpenOpticalFlow
But the question is how this is best implemented... It doesn't fit into the GUI right now, because it is an operation that processes a series of frames, but the result is a spatially resolved scalar...
Try/catch for every setting / item for better backwards compatibility. Stack Exchange has a solution (google "sequential try catch end block for matlab").
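A minimal sketch of the sequential try/catch idea (the setting names and the `saved` struct are illustrative placeholders, not actual PIVlab internals): each setting is restored independently, so one missing field in an old session file does not abort the rest of the import.

```matlab
% Illustrative sketch: restore each setting independently, so a missing
% field in an old session file cannot abort the whole import.
settings = {'intarea', 'step', 'subpixfinder'}; % hypothetical setting names
for k = 1:numel(settings)
    try
        handles.(settings{k}) = saved.(settings{k});
    catch
        % keep the default for settings that old session files don't have
    end
end
```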
In plot settings, under "derived parameters appearance": add a checkbox that reduces the number of colormap entries using interp1 with 'nearest' interpolation.
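A minimal sketch of the idea, assuming a standard built-in colormap (not PIVlab's actual plotting code):

```matlab
% Reduce a 256-entry colormap to n discrete colors using interp1 'nearest'
cmap = parula(256);
n = 10;                                        % number of discrete entries
small = cmap(round(linspace(1, 256, n)), :);   % pick n representative colors
discrete = interp1(linspace(0, 1, n), small, linspace(0, 1, 256), 'nearest');
colormap(discrete);                            % plot now shows n color bands
```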
Try if this works with PIVlab somehow:
https://sites.google.com/site/matpihome/matlab-gui-framework/unit-test-gui
corr2 should be used. However, the calculation probably takes quite some time for every interrogation area... So is it feasible to implement?
The Quickviewer is a tool that can help during PIV data acquisition. It is used to quickly validate that the recording settings (exposure, pulse separation etc.) are good. I am adding this because I need it for our PIV measurement campaigns. It allows the user to load an image pair (shortcut 'l' <-- this is a small letter "L"), to perform a PIV analysis of the selected ROI (shortcut 'p'), or to maximize the ROI (shortcut 'm'). Histogram, correlation coefficient and vector map are displayed.
Maybe it makes sense to add a "pass zero" that just correlates the full image A with the full image B. That results in the global shift of the full image. This could be applied as the result of the first pass, and makes it possible to start the multi-pass approach with smaller initial windows even if there are HUGE displacements.
In the past, quite a number of users had a way too large displacement and hence didn't get results. A pass zero could help to get rid of this typical issue.
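A minimal sketch of such a pass zero, assuming equally sized full frames imageA and imageB (integer-pixel precision is enough here, since later passes refine the result):

```matlab
% "Pass zero" sketch: estimate the global shift between the full frames
% via FFT cross-correlation, then use it to offset the first real pass.
A = double(imageA); B = double(imageB);        % imageA/imageB: full frames
c = fftshift(real(ifft2(conj(fft2(A)) .* fft2(B))));
[~, idx] = max(c(:));                          % correlation peak = shift
[py, px] = ind2sub(size(c), idx);
shift_y = py - (floor(size(A, 1) / 2) + 1);    % zero shift maps to center
shift_x = px - (floor(size(A, 2) / 2) + 1);
```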
https://github.com/Shrediquette/PIVlab/blob/main/PIVlab_commandline.m is not ready to run. For example, https://github.com/Shrediquette/PIVlab/blob/main/PIVlab_commandline.m#L16 has s=cell(11,2)
but should be s=cell(13,2)
since version 2.01; also it doesn't save the results, etc. Anyway, I have modified the script to make it a standalone program with parallel processing options (which was critical because I had a TB of images to process). If pull requests are welcome, I will submit one.
...so it can be used as a guide to draw a mask or select a ROI.
From Openopticalflow-PIV, as discussed with the author.
Show an indicator that tells the user whether PIVlab is currently doing something or not.
A--B, A--C, A--D
PIVlab uses polygons to store masks. These are converted to black-and-white bitmaps just before the PIV analysis. Maybe it is feasible to change all the mask handling to pure bitmaps? That would make it much easier and more straightforward for users to draw their own masks.
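For reference, the polygon-to-bitmap conversion is essentially a poly2mask call (a sketch with placeholder variable names, not the actual PIVlab code):

```matlab
% Convert one mask polygon (vertex lists xv, yv) to a logical bitmap
% with the same size as the PIV image:
bw = poly2mask(xv, yv, image_height, image_width);
% Storing bw directly would let users paint masks instead of clicking polygons.
```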
Line 224 in 8754e44
...so people without the toolbox can use PIVlab.
... doesn't work, because piv_fftensemble uses imread. Must be replaced.
Add a loop to check the image dimensions. How much time does that take...?
Search for line: disp('temp workaround')
Something in the calculation goes wrong when the step is larger than the interrogation area.
Provide an example script to use ensemble correlation with the command line.
Only the first frame gets analyzed when I use the "analyze all frames" option; the remaining frames aren't analyzed, and I have to do them one by one, which is time-consuming. Please resolve this.
https://doi.org/10.1088/0957-0233/24/4/045302 , page 6 shows the process:
After image deformation: multiply the images, then threshold. White pixels represent particles that are present in both A and B. Use a subpixel estimator to find the displacement of each particle in A and B. The "mismatch" is then a measure for the uncertainty (after some statistical processing that I still have to look at).
Matlab code is available here:
http://piv.de/uncertainty/UncertaintyCodes/ParticleDisparity_Code.zip
... but it is probably better to code it on my own to understand what's going on.
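A rough sketch of the first steps, as I understand them from the paper (the threshold and the variable names are placeholders, not the reference implementation):

```matlab
% Particle disparity sketch: after image deformation, multiply the frames
% and threshold; surviving blobs are particles present in both A and B.
prod_img = double(A_def) .* double(B_def);   % A_def/B_def: deformed frames
bw = prod_img > 0.5 * max(prod_img(:));      % placeholder threshold
% For each blob in bw, a subpixel (e.g. Gaussian) peak fit in A and in B
% gives two positions; their difference ("disparity") feeds the statistics.
```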
Sliderdisp is a very ugly function that needs to be split up into several sub-functions for a better overview. I think that speed can also be optimized. This will have a direct effect on overall performance (including data export, data exploration etc.).
Check if vorticity has the standard rotational sense.
Add button that copies all relevant settings of PIVlab to the clipboard, so it can be shared / cited more easily.
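clipboard('copy', ...) should cover this; a sketch with made-up setting names (the format string is illustrative):

```matlab
% Copy a human-readable settings summary to the system clipboard
s = sprintf('PIVlab: int. area %d px, step %d px, %d passes, %s deformation', ...
    intarea, step, passes, imdeform);  % hypothetical variable names
clipboard('copy', s);                  % ready to paste into a paper or email
```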
PIVlab already has image capturing implemented. With a fast computer and the Parallel Computing Toolbox (splitting an image pair into four or eight subregions), it could be possible to do 1-5 Hz realtime PIV. I'll soon buy a nice Asus ROG Flow X13 as my birthday present and test this. Might be helpful to tune the experimental setup on the fly.
Throws error message, seems to have installed itself silently...
Maybe it would make sense to enable the user to use free units, and not only m/s.
Because "mean" does not accept multiple dimensions in older releases:
??? Error using ==> sum
Dimension argument must be a positive integer scalar within indexing range.
Error in ==> mean at 30
y = sum(x,dim)/size(x,dim);
Error in ==> piv_FFTmulti at 655
bg_sig=(1-emptymatrix).*mean(result_conv,1:2); %zeros in middle, average correlation value in the remaining space
Error in ==> PIVlab_GUI>AnalyzeSingle_Callback at 4689
[x, y, u, v, typevector,correlation_map] = piv_FFTmulti(image1,image2,interrogationarea, step, subpixfinder, mask, roirect,passes,int2,int3,int4,imdeform,repeat,mask_auto,do_pad);
??? Error while evaluating uicontrol Callback
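The vector dimension argument mean(x,1:2) needs R2018b or newer; on older releases it can be replaced by nested calls:

```matlab
% Backward-compatible replacement for mean(result_conv, 1:2):
bg_sig = (1 - emptymatrix) .* mean(mean(result_conv, 1), 2);
% zeros in middle, average correlation value in the remaining space
```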
In FFT multi. Writes correlation matrix to workspace
Maybe it is possible to automatically ignore regions with low contrast (that wouldn't give any reasonable correlation matrix anyway). Maybe, before analysis, we can determine the contrast per interrogation area (slightly low-pass the interrogation area, then subtract the minimum intensity from the maximum). Interrogation areas that are below 5% of the average contrast level are automatically assigned a mask (or simply assigned NaN). This could save a lot of work masking images.
I think the best way would be to have an extra function that is run before cross-correlation, and that automatically generates a mask. But this enhancement will be slow I think.
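A sketch of such a pre-analysis contrast check (the 5% threshold, the smoothing kernel, and the variable names are placeholders, not PIVlab internals):

```matlab
% Flag low-contrast interrogation areas before cross-correlation.
% int_areas: h-by-w-by-n stack of interrogation areas (like image1_cut)
n = size(int_areas, 3);
contrast = zeros(n, 1);
for k = 1:n
    win = conv2(double(int_areas(:,:,k)), ones(3)/9, 'same'); % slight lowpass
    contrast(k) = max(win(:)) - min(win(:));
end
low_contrast = contrast < 0.05 * mean(contrast); % candidates for auto-masking
```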
Currently, the background subtraction tool subtracts the background from every image and saves the resulting image. This generates a lot of unnecessary data.
It should be changed, so that only the background images are saved, and they are subtracted on the fly in the GUI.
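A sketch of the on-the-fly variant, assuming the background image is kept in memory (or loaded once) instead of writing subtracted copies to disk:

```matlab
% Subtract the stored background when an image is loaded for display or
% analysis, instead of saving subtracted image files:
currentimage = imread(filepath{selected});
currentimage = currentimage - background;  % uint8 arithmetic clips at zero
```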