Emptied Gestures: Physical Movement Translated into Symmetrical Charcoal Drawings by Heather Hansen
Photo by Bryan Tarnowski; photo by Spencer Hansen at Ochi Gallery.
Splayed across a giant paper canvas with pieces of charcoal firmly grasped in each hand, Heather Hansen begins a grueling physical routine. Her body contorts into carefully choreographed gestures as her drawing implements grate across the floor, the long trails resulting in a permanent record of her physical movements. Hansen most recently took part in a group exhibition, The Value of a Line, at Ochi Gallery in Ketchum, Idaho, which runs through March 31, 2014.
Kinect Toolbox - Home
Cinematics case study: Mass Effect 3
Earlier this year, Budapest studio Digic Pictures produced a stunning three-minute trailer for the BioWare game Mass Effect 3. The trailer, dubbed 'Take Earth Back', tells the story of an alien invasion as Earth is attacked by the game's Reapers. We go in-depth with Digic to show how the cinematic was made - in stereo - featuring behind-the-scenes video breakdowns, images and commentary from several of the artists involved.
Above: watch 'Take Earth Back'
Motion capture
Artists: Csaba Kovari (Mocap TD), Istvan Gindele (Mocap TD), Gyorgy Toth (animator)
We used Vicon's T160 camera system to record all motion for this piece. In general we capture two to three, or sometimes even four, actors' movements at once, together with their props (swords, shields, etc.). Usually the mocap shooting days are preceded by rehearsal days; for example, if we have a two-day mocap shooting session, then the actors need at least two to three days of rehearsal with the director.
Character animation
Earth shot
Blood effects
Simulated Kinect
The Simulated Kinect supports the following features: depth image, RGB image, and tilt. The simulated Kinect behaves very much the same as a real Kinect sensor. You can specify the resolutions of the depth and RGB cameras independently. The simulated Kinect matches the range limitations of the real Kinect sensor: the minimum range is 800 mm and the maximum is 4000 mm. As with a real Kinect, the simulated Kinect can see through glass. You can use a simulated Kinect on your own simulated robot, or you can use the Simulated Reference Platform, which already has a Kinect attached. Refer to the Kinect Services for RDS document for more information on using a Kinect.
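The range limits above can be emulated in a few lines. The sketch below is illustrative only and is not part of the RDS API: it masks a simulated depth image to the sensor's valid 800-4000 mm window, reporting out-of-range readings as 0, the conventional "unknown depth" value.

```python
# Sketch: emulating the Kinect's depth range limits on a simulated
# depth image. Readings outside the valid 800-4000 mm window are
# reported as 0 (the sensor's "unknown depth" value).
# This helper is illustrative only; it is not part of the RDS API.

MIN_DEPTH_MM = 800
MAX_DEPTH_MM = 4000

def apply_depth_range(depth_mm):
    """Replace out-of-range readings in a row-major depth image
    (a list of lists of millimetre values) with 0."""
    return [
        [d if MIN_DEPTH_MM <= d <= MAX_DEPTH_MM else 0 for d in row]
        for row in depth_mm
    ]

frame = [[500, 1200, 3999], [4500, 800, 4000]]
print(apply_depth_range(frame))  # [[0, 1200, 3999], [0, 800, 4000]]
```

Note that both endpoints are valid: 800 mm and 4000 mm readings pass through unchanged, matching the inclusive range stated above.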
IIC_kinesthetic_cognition
IIC. Kinesthetic Spatial Cognition
"Kinesthetic spatial cognition" can be defined as the perception, memory, and recall of spatial information via the kinesthetic perceptual-motor system.
IIC.10 Spatial Cognition versus Verbal Cognition
A great deal of research has demonstrated that spatial cognitive processes and verbal cognitive processes use separate cognitive resources. Other evidence for multi-channel models comes from studies of patients with neurological disease or injury. The right-brain spatial, left-brain verbal specialisation is not a fixed relationship but appears to be based on more fundamental differences in the processing styles of the two cerebral hemispheres, such as the sequential processing of the left hemisphere versus the holistic processing of the right (Bradshaw and Nettleton, 1981; Luria, 1970; Trevarthen, 1978). However, in some cases, attaching verbal labels to stimuli does not necessarily improve memory for those stimuli.
Kinect Fusion
Kinect for Windows 1.7, 1.8
Kinect Fusion provides 3D object scanning and model creation using a Kinect for Windows sensor. The user can paint a scene with the Kinect camera and simultaneously see, and interact with, a detailed 3D model of the scene. Kinect Fusion can be run at interactive rates on supported GPUs, and can run at non-interactive rates on a variety of hardware; running at non-interactive rates may allow larger volume reconstructions.
Ensure you have compatible hardware (see the Tech Specs section below). Kinect Fusion can process data either on a DirectX 11 compatible GPU with C++ AMP, or on the CPU, by setting the reconstruction processor type during reconstruction volume creation. The minimum video-card requirement for GPU-based reconstruction has not been specifically tested for Kinect Fusion 1.8.
Processing pipeline
The first stage is depth map conversion.
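Depth map conversion, the first pipeline stage, back-projects each raw depth reading into a camera-space 3D point. Below is a minimal sketch of that step using the standard pinhole camera model; the intrinsics are placeholder values, not the real Kinect calibration, and this is not the SDK's actual implementation.

```python
# Sketch of the first Kinect Fusion pipeline stage: converting a raw
# depth reading (millimetres) into a camera-space 3D point via the
# pinhole camera model. The intrinsics below are placeholder values,
# not the real Kinect calibration.

FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5   # principal point (assumed)

def depth_to_vertex(u, v, depth_mm):
    """Back-project pixel (u, v) with the given depth into a 3D point,
    in metres, in the camera coordinate frame."""
    z = depth_mm / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return (x, y, z)

# A pixel at the principal point maps straight down the optical axis:
print(depth_to_vertex(319.5, 239.5, 1000))  # (0.0, 0.0, 1.0)
```

Applying this to every pixel yields the vertex map that later stages of the pipeline align and fuse into the reconstruction volume.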
Photography: Else Ernestine Neulander-Simon (a.k.a. Yva)
Else Ernestine Neulander-Simon (a.k.a. Yva) was a German photographer. Yva came from a Jewish middle-class family and worked for many of the illustrated magazines and periodicals of the time. Towards the end of the 1920s, Yva began focusing on the commercial side of photography, specializing in advertising photography. Her innovative, experimental multiple exposures became a hallmark of her work.
Getting Started with Visual Studio
Kinect for Windows 1.5, 1.6, 1.7, 1.8
To work with the samples or to develop your own applications with the SDK, you can use Visual Studio 2010 or Visual Studio 2012, including the Express editions (which are available at no cost). Developers should be familiar with the following development environment and languages to take advantage of the SDK features.
Visual Studio 2010 or Visual Studio 2012: either version, including the Express editions, can be used to develop applications with the SDK.
C#, C++, or Visual Basic: samples are available in all three languages.
Application development for Windows 7 and Windows 8: the SDK has a minimum dependency of Windows 7 and is supported on Windows 8 as well.
Follow the product's setup procedure to install Visual Studio 2010 Express or Visual Studio 2012 Express.
The Geek Movement » UIST 2009 SIC: Laban Gestures for Expressive Keyboarding
On October 5th, we participated in the Student Innovation Contest at the 2009 User Interface Software and Technology conference (UIST) in Victoria, BC. Student teams were given about a month to develop a novel use for a pressure-sensitive keyboard developed by Microsoft Research, and all the entries were demonstrated and voted upon at the conference. Our submission is detailed below and in this demo video.
Laban Gestures for Expressive Keyboarding
Karen Tanenbaum, Josh Tanenbaum & Johnny Rodgers, Simon Fraser University, School of Interactive Arts + Technology
Keyboards tend to be discrete input devices, capable only of isolated, on/off interactions; a pressure-sensitive keyboard opens them up to continuous, expressive input. Theatre and dance use movement frameworks, such as Laban Movement Analysis, to understand the expressive qualities of gesture. To situate our gestures in a mood space, we adopted Russell's classic "circumplex" model of affect. By combining these two models, we have arrived at a framework for expressive gestures. We believe that this framework has applications wherever computation and emotion intersect.
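As a rough illustration of how such a framework might be wired up, the sketch below maps keystroke pressure and tempo onto Laban-style effort qualities (weight and time) and then into a quadrant of Russell's circumplex (valence and arousal). The function name, thresholds, and labels are all hypothetical assumptions for illustration, not the team's actual implementation.

```python
# Hypothetical sketch of the mapping described above: keystroke
# pressure and tempo are read as Laban-style effort qualities
# (weight and time) and placed in Russell's circumplex model of
# affect (valence and arousal). Thresholds and labels are
# illustrative assumptions, not the team's actual implementation.

def classify_mood(pressure, keys_per_sec):
    """Map normalized key pressure (0-1) and typing speed to a
    quadrant label of the circumplex model."""
    arousal = "high" if keys_per_sec > 4 else "low"          # Laban "time"
    valence = "negative" if pressure > 0.7 else "positive"   # Laban "weight"
    labels = {
        ("positive", "high"): "excited",
        ("positive", "low"): "calm",
        ("negative", "high"): "tense",
        ("negative", "low"): "bored",
    }
    return labels[(valence, arousal)]

print(classify_mood(0.2, 6.0))  # light, fast typing -> "excited"
print(classify_mood(0.9, 1.0))  # heavy, slow typing -> "bored"
```

In a real system the continuous pressure and timing signals would place the gesture at a point within the circumplex rather than snapping it to one of four quadrants.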
Kinect Fusion Explorer-WPF C# Sample
Kinect for Windows 1.7, 1.8
This sample demonstrates additional features of Kinect Fusion for 3D reconstruction, allowing adjustment of many reconstruction parameters and export of reconstructed meshes. To run the sample, you must have the Kinect for Windows SDK installed; to compile it, you must also have the Developer Toolkit installed. If you need help loading a sample in Visual Studio, or using Visual Studio to compile, run, or debug, see Opening, Building, and Running Samples in Visual Studio.