Kinect tutorial 1: First steps - Robotica

This series of tutorials explains the use of a depth camera like the Kinect for "serious" research purposes. As you may know, the Kinect is in fact an affordable depth sensor, developed with technology from PrimeSense and based on the infrared structured-light method. It also has a regular color camera (which makes it an RGB-D device), a microphone array and a motorized tilt. Its use is not limited to playing with an Xbox 360 console: you can plug it into a computer and use it like any other sensor. Since its release in November 2010 it has gained a lot of popularity, especially among the scientific community. The new Xbox One ships with an upgraded version of Kinect with enhanced resolution that is able to detect your facial expression and track every one of your fingers, but the PC development-ready version will not be released until 2014.

NOTE: The tutorials are written for Linux platforms. You will need the following: Precompiled PCL for Ubuntu
2. Installing the Kinect Sensor · sanghi/metalab_rgbdemo Wiki

Important Note: The following instructions are to be executed in the given order.

Ubuntu Version Note: These instructions were tested on Ubuntu 10.10, 11.04, 11.10 and 12.04.

Install OpenNI: The OpenNI packages are located here. Use the drop-down menu to select the appropriate version (32- or 64-bit), then download the matching development edition and save it.

Install the Kinect Driver: The latest version of the driver for the Kinect can be found here.

Install NITE: The NITE packages are located here. Use the drop-down menu to select the appropriate OpenNI-compliant middleware binaries version (32- or 64-bit), then download the matching development edition and save it.

Final Steps: At this point, the Kinect should be connected to the Ubuntu PC. Run the samples from ~/Nite/Samples/Bin/x86-Release.

Mac OS X: These instructions were tested on Mac OS X 10.7. Initial steps: Xcode and MacPorts need to be installed. It may be necessary to uninstall libusb and reinstall it with the +universal option. sudo .
1. Kinect Controls Windows 7 – Win&I
2. Kinect Hacks Are Going Way Too Far | Utopian Frontiers Foundation

Ever heard of Microsoft Kinect, OpenNI, PrimeSense NITE? I'm not a program guy who writes code, nor a hacker. I'm just drawing your attention to stuff I admittedly don't fully understand nor, if you'll pardon the pun, kinect with. Yet I bet many of you do kinect with Xbox 360 and related software/hardware. That's where the hacking is happening, and it has some people worried for the future of gaming and exactly what we're doing and/or about to do with interactive gaming and related technologies. Here's some background info as provided with the Video Games Awesome Trailer Tuesday piece on hacking, which is the second video you'll be watching - START background info - Kinect for Xbox 360, or simply Kinect (originally known by the code name Project Natal), is a "controller-free gaming and entertainment experience" by Microsoft for the Xbox 360 video game platform, and may later be supported by PCs via Windows 8. END background info. Find further information on
Finger Tracking with Kinect SDK (and the Kinect for XBox 360 Device) | Coding4Fun Kinect Projects

Now we all know that only the Kinect for Windows device is officially supported when using the Kinect for Windows SDK, but that said, when I saw this project with its academic-level documentation, let alone the great code, I just had to highlight it here. This project explains, step by step, how to perform finger and hand tracking with the Kinect for XBOX using the official Kinect SDK. Finger and hand tracking fall within the field of human-computer interaction (HCI). The goal of this work is to describe a robust and efficient method to track the hand and the fingers in real time using the Kinect device, while taking into account the limitations of the Kinect for XBOX, such as the lack of NEAR MODE. First of all, the project focuses on developing a Windows application using only libraries and functions that the XBOX device supports. Project Information URL: Project Source URL:
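Fingertip detection of this kind is usually run on a depth-segmented hand contour. As an illustration of one common ingredient of such pipelines (the k-curvature test — not necessarily this project's exact method; all names below are hypothetical), here is a minimal sketch: a contour point is a fingertip candidate when the angle between the vectors to its k-th neighbors on either side is sharp.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// k-curvature: angle (in degrees) at contour point i between the vectors
// to its k-th predecessor and k-th successor along the closed contour.
double kCurvatureDeg(const std::vector<Pt>& c, std::size_t i, std::size_t k) {
    const double kPi = std::acos(-1.0);
    std::size_t n = c.size();
    const Pt& p = c[i];
    const Pt& a = c[(i + n - k) % n];
    const Pt& b = c[(i + k) % n];
    double ux = a.x - p.x, uy = a.y - p.y;
    double vx = b.x - p.x, vy = b.y - p.y;
    double dot = ux * vx + uy * vy;
    double nu = std::hypot(ux, uy), nv = std::hypot(vx, vy);
    return std::acos(dot / (nu * nv)) * 180.0 / kPi;
}

// A contour point is a fingertip candidate when its k-curvature is sharp.
std::vector<std::size_t> fingertipCandidates(const std::vector<Pt>& c,
                                             std::size_t k, double maxAngleDeg) {
    std::vector<std::size_t> out;
    for (std::size_t i = 0; i < c.size(); ++i)
        if (kCurvatureDeg(c, i, k) < maxAngleDeg) out.push_back(i);
    return out;
}
```

On a real hand contour, k is typically well above 1 (tens of pixels) so that noise does not create spurious sharp angles; candidates are then usually filtered by depth (fingertips must point away from the palm center).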
Kinect 3D Hand Tracking

Note: This page regards the demo version alone. Please find the library version here. This work won 1st prize at the CHALEARN Gesture Recognition demonstration competition (check also this link). The competition was organized in conjunction with ICPR 2012 (Tsukuba, Japan, Nov. 2012). By downloading this demo you agree to the terms described in this license. Note: You can download the 3D Hand Tracking library here.

Strict system requirements:
- PC with at least 1 GB of RAM
- 64-bit Windows OS
- CUDA-enabled GPU (Compute Capability 1.0 or newer) with 256 MB of RAM and the latest drivers in place

Also, if you are interested in performing a live demo, make sure that you have installed the x64 version of your RGB-D camera driver. The demo itself is provided as an installable package of Windows binaries: Kinect 3D Hand Tracking Windows 7 x64. This demo relies on a few 3rd-party dependencies. Running the live Kinect Hand Tracking demo: please start it from the Start menu or the Desktop.
Finger Tracking with Kinect SDK for XBOX - CodePlex Project Directory

Kinect+OpenNI study notes 8 (analysis of Robert Walter's hand-extraction code) - tornadomeet

Preface: In general, the first step of gesture recognition is hand localization, i.e. extracting the region where the hand is. This article extracts the hand using the Kinect: first the human silhouette is found via the Kinect, then the points of the silhouette related to the hand are located, a region satisfying certain requirements is extracted around those points, and the region obtained after filtering is the hand. Since I need to do research in this area, this article only reads his code and adds a little analysis. Development environment: OpenNI + OpenCV.

Experiment notes: Hand localization and extraction rely on OpenNI and OpenCV: the localization part uses OpenNI's skeleton tracking, and the hand extraction uses OpenCV's contour-related functions. The OpenCV functions used for hand extraction are explained as follows:

void convexHull(InputArray points, OutputArray hull, bool clockwise=false, bool returnPoints=true)

This function finds the convex hull of the input point set points. Parameter 1 is the input point set. Parameter 2 receives the output hull: it is a vector; if its element type is an integer, it holds the indices of the hull points within the original input set; if its element type is Point, it holds the hull points themselves. Parameter 3 sets the orientation of the output hull: true means clockwise. Parameter 4 selects whether point coordinates are output, and only takes effect when parameter 2 is of type Mat; if parameter 2 is a vector, its element type decides whether coordinates or input indices are returned, and parameter 4 has no effect.

void convexityDefects(InputArray contour, InputArray convexhull, OutputArray convexityDefects)

Given the input contour and its convex hull, this function detects the contour's convexity defects. A convexity defect structure contains 4 elements: the defect start point, the defect end point, the contour point farthest from the hull edge, and that farthest distance. A schematic of a convexity defect is shown below:

void findContours(InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset=Point())

Experimental results:

References:
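What convexityDefects computes can be illustrated without OpenCV: for each hull edge, the defect depth is the largest perpendicular distance from the contour points lying between that edge's endpoints. The following is a minimal sketch of that idea (hypothetical names; it assumes the hull indices follow contour order, as OpenCV's do), not OpenCV's actual implementation:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

struct Defect {           // mirrors the 4 elements of an OpenCV defect
    std::size_t start;    // index of the defect's start point on the contour
    std::size_t end;      // index of the defect's end point
    std::size_t farthest; // contour point farthest from the hull edge
    double depth;         // distance of that point from the edge
};

// Perpendicular distance of p from the line through a and b.
double lineDist(const Pt& a, const Pt& b, const Pt& p) {
    double cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    return std::fabs(cross) / std::hypot(b.x - a.x, b.y - a.y);
}

// For each consecutive pair of hull indices, find the contour point between
// them that lies deepest inside the hull -- that is the convexity defect.
std::vector<Defect> convexityDefectsSketch(const std::vector<Pt>& contour,
                                           const std::vector<std::size_t>& hull) {
    std::vector<Defect> defects;
    std::size_t n = contour.size();
    for (std::size_t h = 0; h < hull.size(); ++h) {
        std::size_t s = hull[h], e = hull[(h + 1) % hull.size()];
        Defect d{s, e, s, 0.0};
        for (std::size_t i = (s + 1) % n; i != e; i = (i + 1) % n) {
            double dist = lineDist(contour[s], contour[e], contour[i]);
            if (dist > d.depth) { d.depth = dist; d.farthest = i; }
        }
        if (d.depth > 0.0) defects.push_back(d);  // skip edges with no dent
    }
    return defects;
}
```

In hand analysis these defects are exactly the valleys between fingers, which is why the article's pipeline pairs convexHull with convexityDefects.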
Kinect+OpenNI study notes (Baidu Wenku search results):
Kinect+OpenNI study notes 14 (on the Kinect's depth information) — wenku.baidu.com/link?...
Kinect+OpenNI study notes 8 (analysis of Robert Walter's hand-extraction code) — wenku.baidu.com/link?...
Kinect+OpenNI study notes 7 (hand tracking with OpenNI's built-in classes) — wenku.baidu.com/link?...
patriciogonzalezvivo/KinectCoreVision

Kinect Open Source Programming Secrets

Kinect Open Source Programming Secrets (KOPS) is the only book that explains the official Java wrappers for OpenNI and NITE. (If you want installation instructions, scroll down this page a little.) The main drawback of using the PrimeSense Java wrappers is their lack of documentation. This book covers programming topics not found elsewhere. Early (sometimes very early) draft versions of KOPS's chapters can be downloaded from here (see the links below). If you're looking for Killer Game Programming in Java, then it's here.

What this Book is Not About: I'm concentrating on the Kinect, without including chapters explaining OOP concepts such as classes, objects, and inheritance. More importantly, I don't have the space to seriously explain the topics of 3D graphics or computer vision. Dr.
Kinect

This page is a starting point for learning how to use a Kinect RGB-D depth camera (originally for Microsoft's Xbox 360) with OpenCV. Sam Muscroft has successfully incorporated the raw depth, the RGB-aligned depth map and the RGB output from the Kinect sensor into an OpenCV project using the Windows CL NUI Platform. RGB output, as you'd expect, is stored in an 8-bit 3-channel matrix. Depth needs to be stored in a 16-bit 1-channel matrix. He found the easiest way to output the data (depth & RGB) was to create an image header of the appropriate bit depth and number of channels and populate it with the data returned from the open-source Kinect API. Using the C++ API from nuigroup in Windows, the data can be accessed using the following:

// Raw depth data: 640x480 pixels, one 16-bit value per pixel (1 channel)
PUSHORT rawData = (PUSHORT) malloc(640 * 480 * sizeof(USHORT));
GetNUICameraDepthFrameRAW(KinectCamera, rawData);

In OpenCV you can then use the data this way: To access Kinect using the NUI library, visit . Using Kinect with ROS
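The raw values in that 16-bit depth matrix are 11-bit sensor readings, not distances. A widely cited empirical conversion for the Kinect v1 is Stéphane Magnenat's fit (the constants below are his fitted values, not an official calibration):

```cpp
#include <cmath>
#include <cstdint>

// Convert a raw 11-bit Kinect v1 depth reading to meters using Magnenat's
// empirical formula: depth = 0.1236 * tan(raw / 2842.5 + 1.1863).
// The raw value 2047 means "no reading" (shadow or out of range).
double rawDepthToMeters(std::uint16_t raw) {
    if (raw >= 2047) return 0.0;  // invalid measurement
    return 0.1236 * std::tan(raw / 2842.5 + 1.1863);
}
```

For example, a raw reading around 800 comes out at roughly 1.2 m; larger raw values map to larger distances, with resolution degrading as the range grows.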
Ray Chambers | Using Innovation as well as the Kinect In Education

Want to make a basic Kinect SDK App… now you can :) | Ray Chambers

So I've been making Kinect apps since the summer and I have been enjoying helping other teachers and seeing the impact it has had. My next step is to encourage other teachers to try and jump into the world of the Kinect SDK and to attempt to actually make some applications themselves, as a small introduction to the Kinect SDK and to the world of programming.

Click Here – To find out what you will need to install before you make your first app
Click Here – To find out about common problems and how to solve them in seconds
Click Here – To go to my SkyDrive and download the Lesson1Sample

Once you have all of the system requirements and all of the installs, just sit back and follow the videos below (available in HD).

About raychambers: I am currently teaching ICT at Uppingham Community College in Uppingham, Rutland.