As far as I know, there is no improvement in detail with faster video cards. I think the reconstruction engine is based on the IR point cloud of the sensor (in my case the Kinect). So don't buy better hardware for more detail; buy better hardware so you can scan faster and avoid the annoying "beep" of lost shape tracking. But keep in mind that this software is a newborn, so maybe in the near future new and great possibilities will come true... we will see.
BUT: keep in mind one important thing. This kind of technology, using these kinds of sensors, CAN'T achieve results like DAVID or other professional software/approaches.
About small objects: you can scan only a volume of 1 cubic meter at a time (at the present date), and you MUST keep the sensor at least 40 cm away from the nearest scanned surface, because of the technology of the sensor.
If you change/add lenses, you must recalibrate the sensor (not possible right now).
In the blog the team wrote that in the future a customizable calibration could be possible, but, if I'm right, the amount of detail is strictly limited by the technology used.
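To give an idea of why the sensor, not the GPU, is the bottleneck: the random depth error of the Kinect v1 is commonly modeled as growing with the square of the distance to the surface. The coefficient below is a rough approximation from published characterizations, not an official spec, and the function name is just for illustration:

```python
# Back-of-envelope sketch: approximate Kinect v1 random depth noise.
# Assumes the commonly cited quadratic error model (noise ~ k * z^2);
# k = 1.5e-3 is an approximation, not a manufacturer figure.

def depth_error_mm(distance_m, k=1.5e-3):
    """Approximate random depth error in mm at a given distance in meters."""
    return k * (distance_m ** 2) * 1000.0

for d in (0.4, 1.0, 2.0, 4.0):
    print(f"{d:.1f} m -> ~{depth_error_mm(d):.1f} mm depth noise")
```

So even at the minimum ~40 cm working distance the noise is already near the millimeter range, and it gets much worse as you move away, which is why a faster video card can't add detail that the sensor never captured.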
In the meantime, have you already tried the high-resolution mode? (Sincerely, not so "high-res" compared with the standard mode...)
In the DOS prompt type:
- Code:
reconstructme.exe --realtime --highres
Try it and report back (especially your hardware specs), thanks.
P.S.: I hope that, if this software grows up, it could be integrated into our DAVID workflow: DAVID is great software, and it works over a limited area at a time (sized as we want). Davidians who have some practice with the alignment workflow know how hard it is to align all the scans quickly and precisely, especially with big objects and huge numbers of scans. If we want to keep the resolution of our scans high, we must keep the camera close to the surface (or have the highest-resolution cameras). So we obtain lots of detailed scans that, in many cases, need to be merged with the Free and then the Precise alignment tools (I mean not with an automated or semi-automated alignment process). Also, we must keep a portion of the scanned area in common, shared between two scans, because the alignment needs this overlap.
But if we already have a raw base mesh (as Gunter suggested a long time ago, about the DAVID process) that we can capture very quickly and without any manual alignment, we can place our meshes on top of it and also avoid big overlaps between the scans. Obviously, first we need to be sure that ReconstructMe creates geometric data that is correct and not deformed.
I suppose this could be, or better, should be, a real boost for our projects with the DAVID laser or SL scanner.