I am currently working on my final capstone project to graduate. This capstone project will be showcased at the end of the school year at the annual capstone fair on April 27th.
The goal of my project is to build a stage for an object or person where a head-to-toe 3D scan can be captured (in this case by Microsoft Kinects), optimized with open-source software, and sent to a 3D printer (e.g. MakerBot Replicator, Ultimaker, etc.) to be printed as a full 3D model. Originally the plan was to use 3 Kinects to capture point clouds, then mesh them together and clean the result up in Blender before printing. With ReconstructMe, there is the possibility of skipping the point-cloud stage and capturing a real-time 3D model. I was considering putting the Kinect on a radial dolly track to move it through a full 360 degrees.
I understand that you can't use the automatic volume stitching to get a full scan of a person, but couldn't you use ReconstructMe together with MeshLab: capture overlapping volumes with ReconstructMe, then use MeshLab's alignment system to register those overlapping volumes and merge them into a full human scan?
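For what it's worth, MeshLab's alignment tool is based on ICP-style registration, and the core step of each ICP iteration is a rigid-body fit between corresponding points in the overlap region of two scans. Below is a minimal sketch of that step (the Kabsch algorithm) in NumPy, using synthetic random points as a stand-in for two overlapping Kinect captures; the function name `rigid_fit` and the test data are my own, not part of any MeshLab API:

```python
import numpy as np

def rigid_fit(src, dst):
    """Find rotation R and translation t minimizing ||R @ p + t - q||
    over corresponding point pairs (Kabsch algorithm via SVD).
    src, dst: (N, 3) arrays of corresponding points."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic overlap: cloud_b is a rotated + shifted copy of cloud_a,
# standing in for the shared region of two partial scans.
rng = np.random.default_rng(0)
cloud_a = rng.random((100, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
cloud_b = cloud_a @ R_true.T + np.array([0.5, -0.2, 1.0])

R, t = rigid_fit(cloud_a, cloud_b)
aligned = cloud_a @ R.T + t   # cloud_a transformed onto cloud_b
```

Real scans only approximately overlap and correspondences aren't known in advance, which is why MeshLab has you roughly place the meshes (or pick point pairs) first and then lets ICP refine the fit, so enough overlap between the captured volumes is the main thing to plan for.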