Hi all,
Basically, I'm working on a computer vision/AI application that takes DAVID's structured-light scans and does some extra processing on them (it's for a small augmented reality project). My program also needs to know where the camera and projector pair is located so it can place things more accurately. It would save me a HUGE amount of time if I could reuse the extrinsic parameters stored in the calibration files! However, I'm having trouble understanding the calibration parameters.
I did some searching on the forum and read through this extremely helpful thread (viewtopic.php?t=1641), but I seem to be misunderstanding something. I understand that NOA represents a rotation matrix and P is a translation, but when I use the numbers in the .cal files with some basic test cases, the transformed coordinates come out wrong. I thought NOA was a rotation from the world system to the camera system, but it looks like it might be the opposite direction instead? Using these transformations to convert between projector, world, and camera coordinates does not seem to work, yet the same conversions do work for matrices I construct by hand, which implies I'm misreading the files.

Also, this is more out of curiosity, but why does the projector .cal file provide what appear to be two identical sets of parameters (except that one of them has a 'w' homogeneous coordinate)? I don't know if maybe I'm digging too deep here, but any help would be greatly appreciated!
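In case it helps, here is how I'm testing the two possible conventions. This is just a minimal sketch with made-up NOA/P values (not numbers from an actual .cal file): if NOA and P map world coordinates into the camera frame, the forward transform applies; if they describe the camera's pose in the world, the inverse of that same rigid transform is what's needed.

```python
# Hypothetical NOA/P values for illustration only -- NOT taken from a real
# DAVID .cal file. NOA here is a simple 90-degree rotation about z.
NOA = [[0.0, -1.0, 0.0],
       [1.0,  0.0, 0.0],
       [0.0,  0.0, 1.0]]
P = [10.0, 0.0, 5.0]

def transform(R, t, p):
    """Forward rigid transform: p' = R * p + t."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def inverse_transform(R, t, p):
    """Inverse rigid transform: p' = R^T * (p - t).
    This is what applies if (R, t) describe the camera's pose in the world
    rather than a world-to-camera mapping."""
    d = [p[k] - t[k] for k in range(3)]
    return [sum(R[j][i] * d[j] for j in range(3)) for i in range(3)]

# Round trip: the inverse should recover the original point.
p_world = [1.0, 2.0, 3.0]
p_cam = transform(NOA, P, p_world)          # -> [8.0, 1.0, 8.0]
p_back = inverse_transform(NOA, P, p_cam)   # -> [1.0, 2.0, 3.0]
```

So my question boils down to: which of these two directions do the NOA/P values in the .cal files correspond to?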