People

This page lists the people currently involved in the OpenKinect community and development effort. If you're starting a new project or working on a particular new feature, check here to make sure someone else isn't already working on it!

Joshua Blake (JoshB) is the OpenKinect community founder and lead. He is responsible for bringing together everyone interested in OpenKinect and tries to coordinate the project's efforts and the people who work on them. He also serves as a point of contact between the OpenKinect community and other projects, companies, and the general public.

Administration

This includes the mailing list and PR efforts such as Twitter and blogs.

Joshua Blake (JoshB)
Seth Sandler (cerupcat), assists with Google Group mailing list apps
Peter Finn (Nink), runs @openkinect on Twitter
Hector Martin (marcan), runs the openkinect.org server and wiki software

Regional Meetup Coordinators

This includes people organizing events and meetups in cities around the world.

Repo maintainers
Creating SL Animations using the Kinect

I have been experimenting of late with the Xbox Kinect as a cheap mocap source to generate BioVision Hierarchy (BVH) files for upload into Second Life, and the results are pretty encouraging.

What You Need

Hardware

A standalone Kinect, with power supply (this comes as standard when purchased as a standalone unit). If you buy it as part of an Xbox bundle, you will need to buy an additional cable, sold separately, to connect it to the PC.

Software

Brekel Kinect 3D Scanner
OpenNI
SensorKinect Drivers
NITE User Tracking Module
Bvhacker

Procedure

For my procedure I used a laptop running 32-bit Windows XP, although Brekel Kinect is confirmed to also work with Windows 7 x86 and x64 as well as XP x64 and x86. No Mac/Linux version of Brekel Kinect is planned.

1. Download the following: Brekel Kinect 3D Scanner v0.36, OpenNI Alpha Build for Windows v1.0.0.23, PrimeSensor v5.0.0 (modules for OpenNI).
2. Click the Downloads button and choose the zip file.
3. When it asks if it can connect to Windows Update, choose "No, not this time."

Have fun,
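For orientation, a BVH file is plain text in two parts: a HIERARCHY section describing the skeleton, and a MOTION section with one line of channel values per frame. The fragment below is a minimal illustrative sketch of the format, not actual Brekel output; the joint names and offsets are made up for the example.

```
HIERARCHY
ROOT hip
{
  OFFSET 0.00 0.00 0.00
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT abdomen
  {
    OFFSET 0.00 3.42 0.00
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.00 3.00 0.00
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.00 40.00 0.00 0.0 0.0 0.0 0.0 0.0 0.0
0.00 40.10 0.00 0.0 0.0 0.0 5.0 0.0 0.0
```

Tools like Bvhacker let you inspect and retarget exactly these two sections before uploading to Second Life.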
simple-openni - OpenNI library for Processing

This project is a simple OpenNI and NITE wrapper for Processing. Not all functions of OpenNI are supported; it is meant to deliver simple access to the functionality of this library. For a detailed list of changes see the ChangeLog.

Version 1.96
Support for Win32/64, OSX32/64, Linux64
Installation is now much simpler
--- OpenNI2 ---

Version 0.26
Added auto-calibration; you now only need to enter the scene to get the skeleton data, without the psi pose
Updated the examples to enable auto-calibration (User, User3d)
Unified the SimpleOpenNI distribution library; from now on there is only one library distribution for OSX, Windows and Linux

Older logs

This example shows how to display the depth map and the camera image:

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup(){
  size(1280, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();   // enable the depth stream
  context.enableRGB();     // enable the rgb camera
}

void draw(){
  // update the cam
  context.update();
  // draw depthImageMap
  image(context.depthImage(), 0, 0);
  // draw camera
  image(context.rgbImage(), context.depthWidth() + 10, 0);
}
Introduction to OpenKinect and as3Kinect

This article functions as an introduction to building OpenKinect and as3kinect projects. This is the first in a series of articles on this topic. This opening article attempts to answer the following questions:

What is Kinect, and what can it do?
When and how did OpenKinect get started?

What is Kinect, and What Can It Do?

A Kinect is a hardware device that has two cameras: an RGB camera and an infrared depth camera. In addition, the Kinect has an array of built-in microphones that can be used to capture voice input, and a motor that tilts the device up and down so it can capture motion across a wider field of view.

When and How Did OpenKinect Get Started?

On November 10, 2010, Microsoft released the Kinect in Europe. A few hours after that, the libfreenect project was born, headed by Joshua Blake and maintained by Hector Martin and Kyle Machulis. As an ActionScript developer for a few years now, with little experience in C, I initiated the quest to make this happen.

What Is libfreenect?

#include "libfreenect.h"
Graphics and Animation - Mac OS X Technology Overview

Sprite Kit

Sprite Kit is a powerful graphics framework for 2D games such as side-scrolling shooters, puzzle games, and platformers. A flexible API lets developers control sprite attributes such as position, size, rotation, gravity, and mass.

Scene Kit

Scene Kit is a high-level Objective-C framework that enables your app to efficiently load, manipulate, and render 3D scenes.

Core Animation

Core Animation lets you build dynamic, animated user experiences using an easy programming model based on compositing independent layers of media.

Core Image

Core Image is, simply put, “image effects made easy.”

Quartz

Quartz provides essential graphics services for applications in two integral parts: the Quartz 2D graphics API and the Quartz Extreme windowing environment.

OpenGL

OpenGL provides the GPU-accelerated foundation for OS X by powering Core Animation, Scene Kit, Sprite Kit, and Quartz Extreme.
One Year Anniversary For the Kinect, Over 10 Million Units Shipped, A Game Changer in the World of Entertainment - buildsmartrobots

The Microsoft Kinect was released in the US on November 4, 2010. It was hacked on November 10, 2010. For the one or two people on the internet who have not heard about the Xbox Kinect, it is an interactive gaming device for the Microsoft Xbox gaming system.

Structured light from the Kinect

Structured light is simply light projected in a known, unique pattern; depth can be inferred from how that pattern deforms across the scene. The data from a 3D camera is called a point cloud. A point cloud is a data structure used to represent a collection of multi-dimensional points and is commonly used to represent three-dimensional data. Point clouds can be used to create range images by assigning a color to the Z axis.

Libfreenect

OpenKinect is an open community of people interested in making use of the amazing Xbox Kinect hardware with our PCs and other devices.

Summary
SYNAPSE for Kinect

intrael - Computer vision for the web

Intrael is a server that provides an HTTP interface for the MS Kinect. It processes the depth stream coming from the device and thresholds it based on fixed depth ranges or a reference background frame. It then measures several properties of the blobs it finds and provides them to network clients wrapped as JSON arrays. These can be retrieved through polling with XHRs or real streaming with Server-Sent Events. Using nothing more than plain AJAX, computer vision can be performed directly in the browser.

The data provided for each blob consists of 3D points for the extremes of the x, y, and z axes plus the geometric center of the object, along with depth readings of the corresponding points on the (thresholded) background. The raw outputs from the cameras are also provided as either JPEG images, MJPEG streams, or uncompressed binary frames. The server works well even on low-powered platforms like the BeagleBoard, making it ideal for embedded devices.