Hi everyone,

I want to share a problem I am having with a two-camera setup. I am using David 4.

My aim is to collect ground truth stereo data. So, here is what I need:

- two pictures

- the two associated 3D maps

- the intrinsics of both cameras

- a transformation matrix M12 to pass from the first camera coordinate system to the second one

Together with the 3D map, the matrix M12 allows me to map any point of image 1 into image 2.
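To make the goal concrete, here is a small sketch of how M12 could be composed from the two extrinsics. This is Python/NumPy rather than my Matlab code, and it assumes T_w_to_c1 and T_w_to_c2 are 4x4 world-to-camera matrices (the names are mine, not from the David files):

```python
import numpy as np

def compose_m12(T_w_to_c1, T_w_to_c2):
    """Relative transform taking Cam1 coordinates to Cam2 coordinates.
    Assumes both inputs are 4x4 world-to-camera matrices."""
    return T_w_to_c2 @ np.linalg.inv(T_w_to_c1)

# Toy example: Cam2 sees the world shifted 10 cm along x (narrow baseline).
T_w_to_c1 = np.eye(4)
T_w_to_c2 = np.eye(4)
T_w_to_c2[0, 3] = -0.10

M12 = compose_m12(T_w_to_c1, T_w_to_c2)
p_c1 = np.array([0.0, 0.0, 1.0, 1.0])   # a point 1 m in front of Cam1
p_c2 = M12 @ p_c1                        # the same point in Cam2 coordinates
```

If the matrices from camera.xml turn out to be camera-to-world instead, the inverse would go on the other factor.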

My problem is that I am not able to compute the correct transformation matrix.

My questions are:

- in which coordinate system does the "Shape Fusion" tab give the reconstructions? The camera system or the world (= calibration pattern) system?

- any idea how I can obtain this transformation matrix?

Here is some information about what I did:

1. Setup

As I want stereo data, I am using two cameras and the David 3D software to acquire 3D scans of the same object in the same position, but from two different points of view (Cam1 and Cam2).

The distance between the two cameras (~10 cm) is small because I want a narrow baseline.

2. Collecting Data

Both cameras are calibrated with the calibration pattern ***at the same place***.

Why do I calibrate the two cameras with the same calibration pattern at the same place?

According to http://wiki.david-3d.com/calibration :

"the world coordinate system is the same as the calibration pattern coordinate system. "

I thought this would simplify my problem.

In the "Shape Fusion" tab, I noticed that the 2 scans are shifted.

One question is: in which coordinate system does the "Shape Fusion" tab give the reconstructions? The camera system or the world (= calibration pattern) system?

In spite of this shift, I decided to move on.

3. Changes of coordinate system

According to this post: http://forum.david-3d.com/viewtopic.php?f=15&t=6178

"You can interpret them as a 4x4 homogeneous transformation matrix (add bottom row "w", always: 0 0 0 1) which represents the position and orientation of the camera/projector."

I am using Matlab to prepare this data:

- using the two camera.xml files, I computed the transformations from world to Cam1 (T_w_to_c1) and from world to Cam2 (T_w_to_c2)

- I extracted the meshes from the two .obj files and put the 3D points into their respective camera coordinate systems

- I transformed the points of view 1 into the world coordinate system using inv(T_w_to_c1)

- I transformed the view-1 points from the world coordinate system into the second camera coordinate system using T_w_to_c2

- I projected these 3D points into the image of view 2 using the intrinsic parameters of Cam2
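The steps above can be sketched as follows. Again this is Python/NumPy instead of my Matlab, the function name is mine, and the toy values (intrinsics, identity extrinsics) are invented just to make it runnable:

```python
import numpy as np

def project_view1_into_view2(pts_c1, T_w_to_c1, T_w_to_c2, K2):
    """Map Nx3 points given in Cam1 coordinates into pixel coordinates
    of image 2, following the steps listed above."""
    n = pts_c1.shape[0]
    pts_h = np.hstack([pts_c1, np.ones((n, 1))])       # homogeneous coords
    pts_w = (np.linalg.inv(T_w_to_c1) @ pts_h.T).T     # Cam1 -> world
    pts_c2 = (T_w_to_c2 @ pts_w.T).T                   # world -> Cam2
    uvw = (K2 @ pts_c2[:, :3].T).T                     # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]                    # divide by depth

# Toy check with identical extrinsics, so the result depends only on K2.
K2 = np.array([[800.0, 0.0, 320.0],
               [0.0, 800.0, 240.0],
               [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 2.0]])   # a point 2 m in front of both cameras
uv = project_view1_into_view2(pts, T, T, K2)  # lands at the principal point
```

If a shift appears even in a controlled case like this, that would point to the extrinsics (their direction, or the Shape Fusion coordinate system) rather than the projection itself.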

However, I keep noticing a shift.

Do not hesitate to ask me for more information.

Thanks in advance,

Regards,