
Oliver Kreylos' Research and Development Homepage - Kinect Hacking

Open Frameworks + Kinect + Sound | Ben McChesney's Blog While attending FITC San Francisco I saw Theo Watson talk about his work with the creative coding library known as openFrameworks. While I didn't have much experience in C++ beyond a couple of simple test apps written to learn OpenGL, I thought it would be a good experience to learn a language completely different from ActionScript. Below is the result I ended up with after a very productive day of tinkering: Kinect Sound Experiment with Open Frameworks from Ben McChesney on Vimeo. What is openFrameworks? The most accurate term I've heard to describe it is "processing on crack". Download and install Xcode for Mac. Download openFrameworks FAT 0061 and unzip it into my work folder (though it will work anywhere). Compile any example project under openframeworksFolder/apps/examples/ to make things easier for yourself and to get developing quickly. Then it's time to start playing with the Kinect. Running it looked like this: "Looking good Tex" Neat! But what's next?

Interactive Media Division Throughout the course of my degree, one debate raised in our very first class meeting of our first year was the concept of traditional authorial narrative vs. emergent narrative. Traditional authorial narrative is what we've come to know from film-based, non-interactive media, whereas emergent narrative is procedurally generated by way of a designed system. As I head toward the end of my second year, it's less of a balanced argument: traditional narrative in games (ludus, as named by Roger Caillois in Man, Play and Games) seems to be leaning on its predecessors as a crutch, while systemic narrative (paidia) is beginning to show the uniqueness of the new medium that we're witnessing mature before our eyes. Now, as more and more games transition to paidia-based mechanics, traditional narrative in gameplay might start to be viewed as a skeuomorphic artifact from a pre-interactive era. This actually has to do with the concept of "free will".

FaceCube: Copy Real Life with a Kinect and 3D Printer by nrp The process is currently multi-step, but I hope to have it down to one button press in the future. First, run facecube.py, which brings up a psychedelic preview image showing the closest 10 cm of stuff to the Kinect. Use the up and down arrow keys to adjust that distance threshold. Pressing spacebar toggles pausing capture, making it easier to pick objects. Click on an object in the preview to segment it out. Everything else will disappear; clicking elsewhere will clear the choice. You can then open the PLY file in MeshLab to turn it into a solid STL. You can then open the STL in OpenSCAD or Blender and scale and modify it to your heart's (or printer's) content. Since all of the cool kids are apparently doing it, I've put this stuff into a GitHub repository: git clone git@github.com:nrpatel/FaceCube.git
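
The capture-and-threshold step is easy to approximate with the same libfreenect Python wrapper the project uses. The sketch below is not nrp's actual facecube.py; the raw threshold value is a made-up placeholder, tuned in the real tool with the arrow keys:

```python
# Minimal sketch of FaceCube-style depth segmentation: grab one depth
# frame via libfreenect's sync interface and keep only pixels closer
# than a raw-depth threshold. Not nrp's code; RAW_THRESHOLD is invented.
import freenect
import numpy as np

RAW_THRESHOLD = 600  # hypothetical 11-bit raw value; smaller = closer

def grab_foreground():
    depth, _ = freenect.sync_get_depth()   # 480x640 array of raw depth values
    mask = depth < RAW_THRESHOLD           # pixels nearer than the threshold
    foreground = np.where(mask, depth, 0)  # zero out everything farther away
    return foreground, mask

if __name__ == "__main__":
    fg, mask = grab_foreground()
    print("foreground pixels:", int(mask.sum()))
```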

Nicolas Burrus Homepage - Kinect Calibration Calibrating the depth and color camera Here is a preliminary semi-automatic way to calibrate the Kinect depth sensor and the RGB camera to enable a mapping between them. It is basically a standard stereo calibration technique; the main difficulty comes from the depth image, which cannot detect patterns on a flat surface. Thus, the pattern has to be created using depth differences. Here I used a rectangular piece of cardboard cut around a chessboard printed on an A3 sheet of paper. Calibration of the color camera intrinsics: the color camera intrinsics can be calibrated using standard chessboard recognition. Calibration of the depth camera intrinsics: this is done by extracting the corners of the chessboard on the depth image and storing them. Transformation of raw depth values into meters: raw depth values are integers between 0 and 2047. Stereo calibration is then performed between the color and depth cameras.
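
The raw-to-meters conversion and the back-projection of a depth pixel to a 3D point each reduce to a few lines. This sketch transcribes the widely quoted formulas from Burrus's page into Python; the default intrinsics are the values he reports for his own device and stand in as placeholders only — calibrate your own Kinect rather than reusing them:

```python
# Raw Kinect depth (0-2047) to meters, using the polynomial fit quoted
# on Burrus's calibration page, plus standard pinhole back-projection.
# The default fx/fy/cx/cy are his example intrinsics, not universal.
def raw_depth_to_meters(raw_depth):
    """Convert an 11-bit raw depth value to meters (0.0 = no reading)."""
    if raw_depth < 2047:
        return 1.0 / (raw_depth * -0.0030711016 + 3.3309495161)
    return 0.0  # 2047 marks pixels with no depth estimate

def depth_pixel_to_3d(u, v, raw_depth, fx=594.21, fy=591.04, cx=339.31, cy=242.74):
    """Back-project depth pixel (u, v) into camera-space coordinates."""
    z = raw_depth_to_meters(raw_depth)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z
```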

FaceCube: Copy Real Life with a Kinect and 3D Printer This project is a tangent off of something cool I've been hacking on in small pieces over the last few months. I probably would not have gone down this tangent had it not been for the recent publication of Fabricate Yourself. Nothing irks me more than when someone does something cool and then releases only a description and pictures of it. Thus, I've written FaceCube, my own open source take on automatic creation of solid models of real life objects using the libfreenect python wrapper, pygame, NumPy, MeshLab, and OpenSCAD. Download: git clone git@github.com:nrpatel/FaceCube.git (facecube.py, meshing.mlx)

avin2/SensorKinect - GitHub

How Motion Detection Works in Xbox Kinect | Gadget Lab The prototype for Microsoft's Kinect camera and microphone famously cost $30,000. At midnight Thursday morning, you'll be able to buy it for $150 as an Xbox 360 peripheral. Microsoft is projecting that it will sell 5 million units between now and Christmas. We'll have more details and a review of the system soon, but for now it's worth taking some time to think about how it all works. Kinect's camera is powered by both hardware and software. Older software programs used differences in color and texture to distinguish objects from their backgrounds; newer depth cameras instead measure distance directly. Time-of-flight works like sonar: if you know how long the light takes to return, you know how far away an object is. Using an infrared generator also partially solves the problem of ambient light. PrimeSense and Kinect go one step further and encode information in the near-IR light. With this tech, Kinect can distinguish objects' depth within 1 centimeter and their height and width within 3 mm. Story continues …
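
The sonar analogy boils down to one line of arithmetic: distance is half the round trip light covers in the measured interval. A toy example (not code from the article):

```python
# Toy time-of-flight arithmetic: half the round-trip distance light
# travels in the measured interval.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_seconds):
    return C * round_trip_seconds / 2.0

print(tof_distance_m(13.3e-9))  # a ~13.3 ns round trip is roughly 2 m away
```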

Flexible Action and Articulated Skeleton Toolkit Contributors Evan A. Suma, Belinda Lange, Skip Rizzo, David Krum, and Mark Bolas Project Email Address: faast@ict.usc.edu Downloads: 32-bit (recommended for most users), 64-bit (for advanced users). Note from Evan Suma, the developer of FAAST: I have recently transitioned to a faculty position at USC, and unfortunately that means I have very limited time for further development of the toolkit. You may also view our online video gallery, which contains videos that demonstrate FAAST's capabilities, as well as interesting applications that use the toolkit. Have a Kinect for Windows v2? We have developed an experimental version of FAAST with support for the Kinect for Windows v2, available for download here (64-bit only). Recent News December 12, 2013 FAAST 1.2 has been released, adding compatibility for Windows 8. Summary FAAST is middleware that facilitates integrating full-body control with games and VR applications, using either OpenNI or the Microsoft Kinect for Windows skeleton tracking software.
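
FAAST itself is a Windows toolkit configured through its GUI, but the core idea — turning tracked skeleton poses into emulated input events — can be sketched in a few lines. Everything below is hypothetical illustration, not FAAST's API: the joint dictionary and the binding rule are invented, and pynput stands in for FAAST's Windows input emulation:

```python
# Hypothetical sketch of what FAAST-style middleware does: read skeleton
# joints from some tracker and emulate a key press while a pose rule
# holds. Not FAAST code; the joint source and the binding are invented.
from pynput.keyboard import Controller

keyboard = Controller()
forward_held = False

def apply_binding(joints):
    """joints: dict mapping joint name -> (x, y, z) position in meters."""
    global forward_held
    # Invented binding: left hand raised above the head holds 'w' down,
    # the way FAAST bindings map body poses to game controls.
    raised = joints["left_hand"][1] > joints["head"][1]
    if raised and not forward_held:
        keyboard.press("w")
        forward_held = True
    elif not raised and forward_held:
        keyboard.release("w")
        forward_held = False
```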

Synapse for Kinect SYNAPSE for Kinect Update: There's some newer Kinect hardware out there, "Kinect for Windows". This hardware is slightly different, and doesn't work with Synapse. Be careful when purchasing; Synapse only supports "Kinect for Xbox". Update to the update: There appears to also be newer "Kinect for Xbox" hardware out there.

KinEmote User Forums • Index page

CL KB > nui About On November 6th, 2010, AlexP was the first to "hack" Microsoft's new Kinect for use on Windows 7, and after a great response from the community we are continuing our research and development into creating a stable platform for the NUI Audio, Camera and Motor devices, and to provide useful samples and documentation. News Nov 4th - Kinect Released. Nov 6th - Got Kinect (First Communication with Device). Nov 7th - Accelerometer and Motor Video Posted. Nov 8th - Full Color and Depth Sensor Access - First Camera Tests Video Posted. Nov 9th - Using CV on Depth Sensing - Color & Depth Video Posted - First Docs Added - Basic Interface Specs. Nov 16th - First Preview Release of NUI Platform - Kinect Driver Installer Available. To start, we have a WPF/C# (.NET 3.5) Visual Studio 2010 sample application as well as a C API (CLNUIDevice.h, DLL, LIB), and we plan on extending the SDK similar to our CL Eye SDK, which has samples for C/C++/C#, Java and DirectShow.

Main Page kinect Cannot load information on name: kinect for distros electric, fuerte, groovy, and hydro, which means that it is not yet in our index. Please see this page for information on how to submit your repository to our index.
