Datavisualization.ch Selected Tools

Blur Studio on the state of the CG Industry
By Dave Baker, Neil Blevins, Pablo Hadis and Scott Kirvan

Introduction

In our aim to analyze the evolution of the CG industry, we have had the great pleasure of talking to David Stinnett, co-founder of the legendary Blur Studio. Blur is a privileged place to ask our questions: it is one of the longest-running studios (as its "Blur.com" URL attests), a central player in the CG business, a consistent producer of top-quality work, and a first-hand witness to the changes in the industry. For Max Underground, in the year of its 10th anniversary, this article is a way of going back to its roots. Blur turned 15 years old last April: it was founded in 1995 by David Stinnett, Tim Miller and Duane Powell, who wanted a place to work without the bothersome daily interferences and bureaucracies of a management-oriented mindset. Without further ado, we present an interview with David Stinnett on the past, present and future of the CG industry.

The Interview

The exact day?
Scientists construct first map of how the brain organizes everything we see

Our eyes may be our window to the world, but how do we make sense of the thousands of images that flood our retinas each day? Scientists at the University of California, Berkeley, have found that the brain is wired to organize all the categories of objects and actions that we see, and they have created the first interactive map of how the brain arranges these groupings.

Alex Huth explains the science of how the brain organizes visual categories.

The result, achieved through computational models of brain imaging data collected while the subjects watched hours of movie clips, is what researchers call "a continuous semantic space." Some relationships between categories make sense (humans and animals share the same "semantic neighborhood") while others (hallways and buckets) are less obvious. "Our methods open a door that will quickly lead to a more complete and detailed understanding of how the brain is organized."
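The idea of a "continuous semantic space" can be illustrated with a toy sketch (this is not the study's actual pipeline or data): if each visual category is represented by a vector of per-voxel response weights, categories whose weight patterns are similar end up as neighbors in the space. All names and numbers below are made up for illustration.

```python
import numpy as np

# Toy stand-in for the study's data: each row is a visual category,
# each column a hypothetical voxel's response weight to that category.
categories = ["human", "animal", "hallway", "bucket"]
weights = np.array([
    [0.9, 0.8, 0.1, 0.2],   # human
    [0.8, 0.9, 0.2, 0.1],   # animal
    [0.1, 0.2, 0.9, 0.3],   # hallway
    [0.2, 0.1, 0.4, 0.8],   # bucket
])

def cosine(a, b):
    """Similarity of two categories' voxel weight patterns."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Categories with similar weight patterns share a "semantic neighborhood".
print(cosine(weights[0], weights[1]))  # human vs. animal: high
print(cosine(weights[0], weights[2]))  # human vs. hallway: low
```

In this picture, "humans" and "animals" land close together because many of the same (toy) voxels respond to both, while "hallway" sits far from either.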
Kinect MoCap Animation in After Effects — Part 1: Getting Started | Victoria Nece

This tutorial is now obsolete. Check out the new KinectToPin website for the latest version of the software and how to use it — it's dramatically easier now.

Hello, I'm Victoria Nece. I'm a documentary animator, and today I'm going to show you how to use your Kinect to animate a digital puppet like this one in After Effects. If you have a Kinect that came with your Xbox, the first thing you'll need to do is buy an adapter so you can plug it into your computer's USB port. You don't need to get the official Microsoft one — I got a knockoff version from Amazon for six bucks and it's working just fine. Next you'll need to install a ton of different software. Here's a quick overview of how it's all going to work. Then on the After Effects side of things, you'll set up a skeletal rig for a layered 2D puppet and apply the tracking data to bring it to life. It's not an easy process, but the results are worth it.

Required Software:
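The core idea behind applying tracking data to a skeletal rig can be sketched in a few lines (a simplified illustration, not KinectToPin's actual code): each frame, the tracker reports joint positions, and each puppet layer takes its rotation from the direction between its parent joint and its child joint. The joint names and coordinates below are hypothetical.

```python
import math

# Hypothetical per-frame joint positions (x, y) as a Kinect tracker
# might report them; the names and values are illustrative only.
frame = {
    "elbow": (100.0, 200.0),
    "wrist": (160.0, 140.0),
}

def layer_rotation(parent, child):
    """Angle (degrees) a 2D puppet layer should take so that it
    points from its parent joint toward its child joint."""
    dx = child[0] - parent[0]
    dy = child[1] - parent[1]
    return math.degrees(math.atan2(dy, dx))

# The forearm layer is anchored at the elbow; its rotation keyframe
# for this frame comes from the elbow-to-wrist direction.
angle = layer_rotation(frame["elbow"], frame["wrist"])
print(round(angle, 1))  # -45.0
```

Repeating this per joint, per frame, turns a stream of tracked positions into rotation keyframes for every layer of the puppet.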
Daniel Shiffman

The Microsoft Kinect sensor is a peripheral device (designed for Xbox and Windows PCs) that functions much like a webcam. However, in addition to providing an RGB image, it also provides a depth map: for every pixel seen by the sensor, the Kinect measures its distance from the sensor. This makes a variety of computer vision problems, like background removal, blob detection, and more, easy and fun! The Kinect sensor itself only measures color and depth.

What hardware do I need? First you need a "stand-alone" Kinect: the Standalone Kinect Sensor v1. Some additional notes about different models: Kinect 1414: This is the original Kinect and works with the library documented on this page in the Processing 3.0 beta series.

SimpleOpenNI: You could also consider using the SimpleOpenNI library and reading Greg Borenstein's Making Things See book.

I'm ready to get started right now. What is Processing? What if I don't want to use Processing? What code do I write?

import org.openkinect.processing.*;
Kinect kinect;
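The reason a depth map makes problems like background removal "easy" is that segmentation collapses into a simple threshold on distance. Here is a minimal, language-agnostic sketch of that idea (the library above is for Processing; this toy example, with made-up depth values, just illustrates the principle):

```python
import numpy as np

# A tiny fake depth map in millimeters, standing in for what the
# Kinect reports per pixel; the values are made up for illustration.
depth = np.array([
    [ 800,  820, 2500],
    [ 810, 2600, 2550],
    [2700, 2650, 2600],
])

# Background removal becomes a threshold on distance:
# keep only pixels closer than 1 meter to the sensor.
foreground_mask = depth < 1000
print(foreground_mask.sum())  # 3 foreground pixels
```

With an ordinary webcam you would need color keying or machine learning to separate a person from the background; with per-pixel depth, one comparison does it.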
All posts — Marcin Ignac

Data Art with Plask and WebGL @ Resonate: My talk at Resonate'13 about Plask and how I use it for making data-driven visualizations.
Fast Dynamic Geometry in WebGL: Looking for a fast way to update mesh data dynamically.
Piddle: Urine test strip analysis app.
Evolving Tools @ FITC: My talk at FITC Amsterdam about the process behind some of my data visualization and generative art projects, and Plask.
Ting Browser: Experimental browsing interface for digital library resources.
Bring Your Own Beamer: BYOB is a "series of exhibitions hosting artists and their beamers".
Bookmarks as metadata: Every time we bookmark a website we not only save it for later but add a piece of information to the page itself.
Timeline.js: A compact JavaScript animation library with a GUI timeline for fast editing.
SimpleGUI: SimpleGUI is a new code block developed by me for the Cinder library.
Cindermedusae - making generative creatures: Cindermedusae is quite a special project for me.
Effects in Delta
ProjectedQuads source code
How to make a 3D scan with pictures and the PPT GUI

More than ever before, 3D models have become a "physical" part of our lives, as we can see on the internet with 3D printing services. After downloading and unzipping, you have to edit the ppt_gui_start file, putting in the right path to the program (in orange). Now, if you are on Linux, you only need to run the edited script: $ .

Once the program is open, click on "Check Camera Database". With the Terminal/Prompt at your side, click on "Select Photos Path". Choose the path and then click on "Open". Click on "Run" and wait a little. If all is OK, you'll see a message in the Terminal: "Camera is already inserted into the database". If not, you can customize it with this video tutorial.

Now, make a copy of the path.
1) Go to "Run Bundler".
2) Paste it at "Select Photos Path".

1) To get a good scan quality, click on "Scale Photos with a Scaling Factor"; by default, the value will be 1.
2) Click on "Run". Wait a few minutes; the program will solve the point clouds.

1) Paste the path in "Select Bundler Output Path".
2) Click on "Run

So:
NVIDIA® DIGITS™ DevBox | NVIDIA Developer

Deep learning is one of the fastest-growing segments of the machine learning/artificial intelligence field and a key area of innovation in computing. With researchers creating new deep learning algorithms and industries producing and collecting unprecedented amounts of data, computational capability is the key to unlocking insights from data. GPUs have brought tremendous value to deep learning research over the past couple of years.

The DIGITS DevBox combines the world's best hardware, software, and systems engineering:

Four TITAN X GPUs with 7 TFlops of single precision, 336.5 GB/s of memory bandwidth, and 12 GB of memory per board
NVIDIA DIGITS software providing powerful design, training, and visualization of deep neural networks for image classification
Pre-installed standard Ubuntu 14.04 with Caffe, Torch, Theano, BIDMach, cuDNN v2, and CUDA 7.0
A single deskside machine that plugs into a standard wall plug, with superior PCIe topology

*Monitor, keyboard, and mouse not included