
3D Scanning Made Easy

http://skanect.manctl.com/

Demonstration of RGBDemo 0.7 on Ubuntu 12.04 - 3D reconstruction using Kinect. The steps are: installing libfreenect; installing OpenNI, SensorKinect, the Avin2 Sensor patch, and NITE (note: only the 1.5.x versions work with the RGBDemo source, not the GitHub ones); installing PCL 1.6; installing OpenCV 2.3.1; installing RGBDemo; installing Qt. Note: the PCL 1.6, OpenCV 2.3.1 and Qt libraries should be installed from the Ubuntu repositories, otherwise there will be a conflict. Getting started with OpenKinect (libfreenect): here is the list of Ubuntu Kinect installation commands. Install the following dependencies; the first line in each of the listings below takes care of this step.
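
Once libfreenect is installed, a quick way to confirm the Kinect is actually talking to it is to grab a single depth frame. A minimal sketch, assuming the optional Python freenect bindings were built alongside the library:

    # Sanity check for a libfreenect install (assumes the optional Python
    # "freenect" bindings were built along with the library, and a Kinect is plugged in).
    import freenect

    # Grab a single depth frame; returns (depth, timestamp), where depth is a
    # 480x640 array of raw 11-bit values.
    depth, timestamp = freenect.sync_get_depth()
    print("depth frame shape:", depth.shape)          # expected (480, 640)
    print("raw depth range:", depth.min(), depth.max())

    # Shut down the synchronous capture loop when done.
    freenect.sync_stop()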

From Kinect to MakerBot (Make: Projects). The Open Kinect movement has given us some amazing tools to capture the physical world. With some open source software, a few simple steps, and an occasional not-so-simple step here and there, you can print what your Kinect can see. You've got a Kinect, and you've got a MakerBot. This guide explains how to scan something using the Kinect, and then print it on the MakerBot.

SLAMDemo - rtabmap - demo of 6DoF RGB-D SLAM using RTAB-Map (Real-Time Appearance-Based Mapping). In this example, we use a hand-held Kinect sensor: we incrementally assemble the map from the 6D visual odometry, and when a loop closure is detected by RTAB-Map, TORO is used to optimize the map with the new constraint. We did two loops in the IntRoLab robotics laboratory of the 3IT. The first image below is the final map without optimizations. The second image is the final map with TORO optimizations using the constraints detected by RTAB-Map. The third and fourth images are closer views of the map before and after optimization. The ROS bag for the demo is here: kinect_720.bag.
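
The bag is normally replayed straight into RTAB-Map with the standard ROS tools; purely as an illustration, the sketch below (assuming a ROS 1 install with the rosbag Python API, and making no assumptions about which topics were recorded) lists what the bag contains before you replay it:

    # Inspect the demo bag before replaying it into RTAB-Map
    # (assumes a ROS 1 install with the rosbag Python API).
    import rosbag

    bag = rosbag.Bag('kinect_720.bag')
    counts = {}
    # Count messages per topic to see what the Kinect recorded.
    for topic, msg, t in bag.read_messages():
        counts[topic] = counts.get(topic, 0) + 1
    bag.close()

    for topic, n in sorted(counts.items()):
        print(topic, n)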

Shaking some sense into using multiple Kinects with Shake 'n' Sense. This is one of those weird things that you just wouldn't expect until you see it... Shake 'n' Sense makes Kinects work together! Microsoft Research has discovered that shaking Kinects, far from making them fall apart, makes them work together. See it in action in the video. This is one of those ideas that, once you have seen it, you can't believe you didn't think of it first. The only barrier to thinking of it is that you might not be thinking big enough.

Rendering results with Meshlab. Posted: 26th February 2012 by hackengineer in Computer Vision. Tags: 3D, meshlab, point cloud. Meshlab is pretty great for 3D point clouds, and it's free! Here are a few steps that really help make the point clouds look good. Open Meshlab and open the .xyz file from the 3D camera. Delete any points that look like they don't belong (if you only see a small group of points, you are probably zoomed out really far due to a rogue point; delete it and zoom in). Orient the point cloud so that it represents the original scene (looking straight at it). We will now compute the normals for each point.
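
Meshlab's normal-computation filter does this for you; as a rough idea of what it computes, the sketch below (plain NumPy with a brute-force neighbour search, so only suitable for small clouds) fits a plane to each point's nearest neighbours and takes the plane normal:

    # Rough per-point normal estimation for a small .xyz cloud: fit a plane to
    # the k nearest neighbours of each point and take the plane normal.
    # Brute-force O(n^2) search -- an illustration, not a replacement for Meshlab.
    import numpy as np

    def estimate_normals(points, k=10):
        normals = np.zeros_like(points)
        for i, p in enumerate(points):
            # k nearest neighbours of p (including p itself).
            d = np.linalg.norm(points - p, axis=1)
            nbrs = points[np.argsort(d)[:k]]
            # Smallest eigenvector of the neighbourhood covariance = plane normal.
            cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
            eigvals, eigvecs = np.linalg.eigh(cov)
            # Note: the sign is arbitrary; Meshlab additionally orients normals consistently.
            normals[i] = eigvecs[:, 0]
        return normals

    points = np.loadtxt("scan.xyz")[:, :3]   # x y z per line (extra columns ignored)
    normals = estimate_normals(points)
    np.savetxt("scan_with_normals.xyz", np.hstack([points, normals]))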

Skanect by Manctl. Contact: you can email us at skanect@occipital.com or get help from the Skanect community in the Skanect Google group. Tutorials: Automate your Meshlab workflow with MLX filter scripts. Meshlab is a great program for loading and editing XYZ point cloud data and creating polygon meshes. It also does a good job as a 3D file format converter. After you have used Meshlab for a while, you will typically apply the same filter settings over and over again for every project.
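
One way to stop repeating yourself is to save those filters as an .mlx script and run it in batch through meshlabserver. A small sketch, assuming meshlabserver is on the PATH and that filters.mlx was saved out of Meshlab's current filter script dialog (the file and folder names here are made up):

    # Batch-apply a saved Meshlab filter script (.mlx) to every .xyz file in a
    # folder, using meshlabserver's command-line interface.
    # Assumes meshlabserver is on the PATH and filters.mlx was exported from Meshlab.
    import glob
    import subprocess

    for xyz in glob.glob("scans/*.xyz"):
        out = xyz.replace(".xyz", ".ply")
        subprocess.run(
            ["meshlabserver", "-i", xyz, "-o", out, "-s", "filters.mlx"],
            check=True,
        )
        print("wrote", out)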

ReconstructMe and Realistic 3D Scans. May 10, 2012, in Scanning, by Tim Owens. I've been playing recently with a great new piece of software called ReconstructMe, which is free for now and Windows only. It uses the Xbox Kinect (an incredibly worthwhile investment for 3D scanning) to create 3D models.

3D Scanner: KinectFusion - Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera [28C3], by CCCen. This project investigates techniques to track the 6DOF position of handheld depth-sensing cameras, such as Kinect, as they move through space, and to perform high-quality 3D surface reconstructions for interaction. While depth cameras are not conceptually new, Kinect has made such sensors accessible to all.
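
Pipelines like KinectFusion start by back-projecting each raw depth image into a 3D point cloud in camera coordinates before tracking and fusing it. A bare-bones sketch of that first step, using typical published (not calibrated) intrinsics for the Kinect's 640x480 depth camera:

    # Back-project a Kinect depth image (in millimetres) into a 3D point cloud --
    # the first step of pipelines like KinectFusion. Intrinsics are typical
    # published values for the 640x480 depth camera, not a real calibration.
    import numpy as np

    FX = FY = 575.8          # focal length in pixels (approximate)
    CX, CY = 319.5, 239.5    # principal point (image centre)

    def depth_to_points(depth_mm):
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float32) / 1000.0        # metres
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        pts = np.dstack([x, y, z]).reshape(-1, 3)
        return pts[pts[:, 2] > 0]                       # drop pixels with no depth reading

    # Example with a synthetic flat "wall" 1.5 m away.
    fake_depth = np.full((480, 640), 1500, dtype=np.uint16)
    print(depth_to_points(fake_depth).shape)            # (307200, 3)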

123D Scanner - Home made 3D Scanner. Hey - have a look at my new project HERE. In this project I built a 3D scanner that enables generating 3D models of physical objects. The files can later be viewed in 3D software (GLC Player, SketchUp, Rhino, and similar), manipulated into an .STL file, and 3D printed. The software for this project is completely free; I am using Autodesk's 123D Catch (link: 123D Catch). 123D Catch is a great piece of software: it requires taking many photos of an object from all around it and uploading them into the software, and it returns a 3D file.

3D Printed Photograph. All of these 3D models were generated algorithmically from Processing using the ModelBuilder library by Marius Watz. This library allows you to save 3D geometries in the STL file format; STL files that form a watertight mesh can be printed by a 3D printer. To get started using this code yourself, download the latest version of the ModelBuilder library, unzip the file, and copy the folder into Processing's "libraries" folder. If you have installed the predecessor to the ModelBuilder library (called the Unlekker library), you will need to delete it.
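
ModelBuilder handles the export inside Processing; to show what a printable STL actually boils down to, here is a tiny stand-alone sketch (plain Python, made-up file name) that writes a watertight tetrahedron as an ASCII STL, one facet record per triangle:

    # Write a watertight tetrahedron as an ASCII STL -- the same file format
    # ModelBuilder exports from Processing. Plain Python, illustration only.
    import numpy as np

    # Four vertices and the four outward-facing triangles of a tetrahedron,
    # listed counter-clockwise when viewed from outside the solid.
    V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    faces = [(1, 2, 3), (0, 2, 1), (0, 1, 3), (0, 3, 2)]

    with open("tetra.stl", "w") as f:
        f.write("solid tetra\n")
        for a, b, c in faces:
            # Facet normal from the cross product of two edges.
            n = np.cross(V[b] - V[a], V[c] - V[a])
            n = n / np.linalg.norm(n)
            f.write("  facet normal %f %f %f\n" % tuple(n))
            f.write("    outer loop\n")
            for v in (V[a], V[b], V[c]):
                f.write("      vertex %f %f %f\n" % tuple(v))
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write("endsolid tetra\n")

Because every edge is shared by exactly two facets, the resulting mesh is watertight and can be sliced and printed as-is.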
