<WRAP center round todo 60%>
Very WIP page (as of 3rd of April 2025). You can help by asking questions in the Telegram group or in person on Mondays.
</WRAP>
+ | |||
+ | {{tamiwiki: | ||
+ | |||
+ | < | ||
+ | flowchart TB | ||
+ | subgraph sg1[" | ||
+ | direction LR | ||
+ | vf[(Video Frames)] --> undistort[" | ||
+ | fd[" | ||
+ | | ||
+ | subgraph fm[" | ||
+ | direction LR | ||
+ | curframe@{ shape: circle, label: "Frame t" } | ||
+ | lcpairs[(" | ||
+ | mne[" | ||
+ | islcp{" | ||
+ | mlcp[" | ||
+ | findtransitivepairs[" | ||
+ | imgpairs[(Successfully Matched Image Pairs)] | ||
+ | |||
+ | curframe --> mne | ||
+ | lcpairs -..- islcp | ||
+ | mne --> islcp | ||
+ | mlcp --> findtransitivepairs | ||
+ | islcp --Yes--> mlcp | ||
+ | islcp -->|No| findtransitivepairs | ||
+ | findtransitivepairs --> imgpairs | ||
+ | end | ||
+ | |||
+ | |||
+ | imgpairs --> imgpair[(" | ||
+ | subgraph ComputeEssentialMatrix[" | ||
+ | direction LR | ||
+ | | ||
+ | imgpair --> ransacLoop[" | ||
+ | | ||
+ | subgraph RANSACProcess[" | ||
+ | direction LR | ||
+ | ransacLoop --> randomSample[" | ||
+ | randomSample --> computeE[" | ||
+ | computeE --> countInliers[" | ||
+ | countInliers --> updateBest[" | ||
+ | updateBest --> checkIteration{" | ||
+ | reached?" | ||
+ | checkIteration -->|No| ransacLoop | ||
+ | end | ||
+ | | ||
+ | checkIteration -->|Yes| output[(" | ||
+ | end | ||
+ | | ||
+ | output --> pgo[" | ||
+ | end | ||
+ | |||
+ | | ||
+ | subgraph sg2[" | ||
+ | direction LR | ||
+ | pm[" | ||
+ | nde[" | ||
+ | pm & nde --> preprocessing | ||
+ | |||
+ | subgraph ConfidenceWeightedDepthCorrection[" | ||
+ | direction LR | ||
+ | | ||
+ | preprocessing[" | ||
+ | | ||
+ | subgraph RANSACProcessPolyfit[" | ||
+ | ransacLoopPolyfit --> sampleSelection | ||
+ | sampleSelection[" | ||
+ | from depth maps"] --> weightSamples | ||
+ | | ||
+ | weightSamples[" | ||
+ | confidence map values" | ||
+ | | ||
+ | fitModel[" | ||
+ | to weighted samples" | ||
+ | | ||
+ | evaluateModel[" | ||
+ | | ||
+ | checkConvergence{" | ||
+ | | ||
+ | checkConvergence -->|Yes| bestModel[" | ||
+ | offset model" | ||
+ | end | ||
+ | | ||
+ | bestModel --> applyCorrection[" | ||
+ | | ||
+ | applyCorrection --> outputPolyfit[" | ||
+ | end | ||
+ | |||
+ | outputPolyfit --> rgbdpcd[" | ||
+ | gsd --> kde[" | ||
+ | kde --> storergbd[(RGBD Images)] | ||
+ | end | ||
+ | |||
+ | subgraph sg3[" | ||
+ | direction LR | ||
+ | tsdf --> gltfq[" | ||
+ | end | ||
+ | |||
+ | subgraph tsdf[" | ||
+ | direction LR | ||
+ | storergbd2[(RGBD Images)] | ||
+ | gpuintgr[" | ||
+ | isvram{" | ||
+ | cpuintgr[" | ||
+ | exportmesh[(GLTF Mesh)] | ||
+ | |||
+ | storergbd2 -.- gpuintgr | ||
+ | gpuintgr --> isvram | ||
+ | isvram --Low--> gpuintgr | ||
+ | isvram --High--> | ||
+ | cpuintgr --> gpuintgr | ||
+ | gpuintgr -.Move Data.-o cpuintgr | ||
+ | |||
+ | cpuintgr --> exportmesh | ||
+ | end | ||
+ | |||
+ | sg1 --> sg2 --> sg3 | ||
+ | </ | ||
3d scan of tami
After that I switched to SuperPoint for feature detection and LightGlue for feature matching, which seems to be a fairly popular combo currently.
+ | |||
+ | {{tamiwiki: | ||
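
A minimal sketch of that combo, following LightGlue's documented API (the frame paths and keypoint budget are placeholder assumptions, not the values used for this scan):

<code python>
# Sketch: SuperPoint keypoints + LightGlue matching for one image pair,
# following the usage documented in github.com/cvg/LightGlue.
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)  # keypoint budget is a guess
matcher = LightGlue(features="superpoint").eval().to(device)

image0 = load_image("frames/000100.png").to(device)  # hypothetical frame paths
image1 = load_image("frames/000101.png").to(device)

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
matches01 = matcher({"image0": feats0, "image1": feats1})
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]  # drop batch dim

matches = matches01["matches"]                # (K, 2) indices into the two keypoint sets
points0 = feats0["keypoints"][matches[:, 0]]  # matched pixel coordinates in image0
points1 = feats1["keypoints"][matches[:, 1]]  # matched pixel coordinates in image1
</code>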
First I matched every frame with its 30 nearest frames (by time).
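
One way to enumerate those pairs is a simple sliding window over the frame indices (a sketch; the actual pairing code may differ):

<code python>
# Pair each frame with the `window` frames that follow it in time, so
# every pair within a 30-frame window is matched exactly once.
def temporal_pairs(num_frames: int, window: int = 30) -> list[tuple[int, int]]:
    return [(i, j)
            for i in range(num_frames)
            for j in range(i + 1, min(i + 1 + window, num_frames))]

print(temporal_pairs(5, window=2))  # [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
</code>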
I was able to smooth the depth by rasterizing the median depth (according to RaDe-GS).
+ | |||
+ | {{tamiwiki: | ||
But I still had some outliers in the depth data.
But the problem there was that the outliers were clearly still visible in the depth data; they had just been pulled into the distribution and blended into it.
After plotting the rasterized median depth from the gaussian splats as a frequency histogram, I could see that problematic images have two distinct spikes and a long tail of depths.
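
A quick way to see this is to histogram the rasterized depth values (the file name here is a placeholder):

<code python>
# Sketch: histogram a rasterized median-depth frame to spot bimodal depth.
import numpy as np
import matplotlib.pyplot as plt

depth = np.load("median_depth_frame.npy")        # hypothetical depth raster
valid = depth[np.isfinite(depth) & (depth > 0)]  # ignore empty pixels
plt.hist(valid.ravel(), bins=256)
plt.xlabel("depth")
plt.ylabel("pixel count")
plt.show()  # problematic frames show two spikes and a long tail
</code>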
+ | |||
+ | I was able to fit a kernel density estimate to the depth data and then I manually found cutoff value where if the density after the global peak becomes lower it means that were past the primary peak any depth beyond that is an outlier. | ||
+ | |||
+ | {{tamiwiki: | ||
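
A minimal sketch of that cutoff search, assuming SciPy's gaussian_kde and a hand-tuned density floor (the floor value below is a placeholder, not the one used on the actual scan):

<code python>
import numpy as np
from scipy.stats import gaussian_kde

def depth_cutoff(depths: np.ndarray, density_floor: float, grid_size: int = 512) -> float:
    """Walk right from the global KDE peak; cut where density drops below the floor."""
    kde = gaussian_kde(depths)
    grid = np.linspace(depths.min(), depths.max(), grid_size)
    density = kde(grid)
    for i in range(int(density.argmax()), grid_size):
        if density[i] < density_floor:
            return float(grid[i])  # depths beyond this are outliers
    return float(grid[-1])         # no dip found: keep everything

# Synthetic example: a primary surface peak plus a far outlier spike.
depths = np.concatenate([np.random.normal(2.0, 0.1, 5000),
                         np.random.normal(6.0, 0.5, 300)])
cutoff = depth_cutoff(depths, density_floor=0.02)
inliers = depths[depths <= cutoff]
</code>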
+ | |||
+ | After removing the depth outliers I was able to get much cleaner results | ||
+ | |||
+ | To get a mesh from the depth images I used TSDF integration, | ||
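
One possible implementation is Open3D's legacy TSDF pipeline; this is a sketch under assumed intrinsics, depth scale, and frame list, not the exact code used here:

<code python>
import numpy as np
import open3d as o3d

intrinsic = o3d.camera.PinholeCameraIntrinsic(1920, 1080, 1000.0, 1000.0, 960.0, 540.0)
volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.005,  # 5 mm voxels; 1 mm detail needs far more memory
    sdf_trunc=0.02,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

# frames: hypothetical (color, depth, 4x4 cam-to-world) tuples,
# where color/depth are open3d.geometry.Image objects.
for color, depth, cam_to_world in frames:
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=5.0,
        convert_rgb_to_intensity=False)
    volume.integrate(rgbd, intrinsic, np.linalg.inv(cam_to_world))  # expects world-to-camera

mesh = volume.extract_triangle_mesh()
o3d.io.write_triangle_mesh("tami.glb", mesh)
</code>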
But the GPU VRAM wasn't enough for me to extract mesh detail down to 1mm, and running the integration purely on the CPU was too slow.
So I ended up computing the TSDF volume in batches on the GPU and then merging the batches onto a uniform voxel grid on the CPU; where the grids overlapped I used trilinear interpolation.
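
A sketch of that merge, assuming per-batch TSDF and weight grids that share a voxel size but have their own origins (all names and the grid layout are assumptions):

<code python>
import numpy as np
from scipy.ndimage import map_coordinates

def merge_batch(global_tsdf, global_w, global_origin, voxel,
                batch_tsdf, batch_w, batch_origin):
    """Blend one GPU-batch TSDF grid into the uniform global grid (in place)."""
    # Global voxel centres expressed in batch-grid index space.
    idx = np.indices(global_tsdf.shape, dtype=np.float32)
    coords = [(idx[d] * voxel + global_origin[d] - batch_origin[d]) / voxel
              for d in range(3)]
    # Trilinear resampling (order=1) of the batch grid at those points.
    t = map_coordinates(batch_tsdf, coords, order=1, cval=np.nan)
    w = map_coordinates(batch_w, coords, order=1, cval=0.0)
    valid = ~np.isnan(t) & (w > 0)
    # Same weighted running average that TSDF integration itself uses.
    total = global_w[valid] + w[valid]
    global_tsdf[valid] = (global_tsdf[valid] * global_w[valid] + t[valid] * w[valid]) / total
    global_w[valid] = total
</code>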
#TODO mesh compression
voxelization