Face Detection and Face Recognition with Real-time Training from a Camera
To improve recognition performance, there are many things that can be improved here, some of them fairly easy to implement. For example, you could add color processing, edge detection, and so on. You can usually improve face recognition accuracy by using more input images, at least 50 per person, taken from different angles and under different lighting conditions. You could create mirrored copies of your facial images, so that you have twice as many training images and no bias toward the left or right side. You could translate, resize, or rotate your facial images slightly to produce many alternative training images, so that recognition becomes less sensitive to exact conditions. You could add image noise to generate more training images and improve tolerance to noise. This is also why you often get very bad results if you don't use good preprocessing on your images.
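The augmentation ideas above (mirroring, small rotations, added noise) are easy to prototype. Below is a minimal sketch using OpenCV and NumPy in Python; the file name face.png and the parameter values are illustrative assumptions, not something from the original article.

import cv2
import numpy as np

img = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input face image

# 1. Mirror copy: doubles the training set and removes left/right bias.
mirrored = cv2.flip(img, 1)

# 2. Slight rotation (here +5 degrees) around the image center.
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), 5, 1.0)
rotated = cv2.warpAffine(img, M, (w, h))

# 3. Additive Gaussian noise to improve tolerance to noisy input.
noise = np.random.normal(0, 10, img.shape)
noisy = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

for name, variant in [('mirror', mirrored), ('rot5', rotated), ('noisy', noisy)]:
    cv2.imwrite('face_%s.png' % name, variant)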
Creating a haar cascade classifier aka haar training
In the previous posts, I used haar cascade XML files for the detection of faces, eyes, etc. In this post, I am going to show you how to create your own haar cascade classifier XML files. It took me a total of 16 hours to do it; hopefully you can do it even faster by following this post. Note: the steps below are only for Linux OpenCV users. If you are a Windows user, use this link. For most of the work that is going to come, you will need these Linux executable files. Before I start, remember two important definitions. Positive images: these images contain the object to be detected. Negative images: anything at all can be present in these, except the object to be detected. It's easier to explain with an example. First of all, I took photographs of three of my pens along with some background; the pictures looked like the one below. I took a total of 7 photographs (I didn't count how many I took of each of the three pens) with my 2MP camera phone and loaded them into my computer.
1. find .
Pen detector
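The truncated find command above is typically used to build the plain-text listing files that the haar-training tools consume. Here is a rough Python equivalent of that step; the directory names positives/ and negatives/ and the output file names are assumptions made for illustration, not taken from the original post.

import os

def write_listing(image_dir, out_file):
    # Collect every .jpg/.png under image_dir, one relative path per line,
    # which is what the find command in the tutorial produces.
    with open(out_file, 'w') as f:
        for root, _, files in os.walk(image_dir):
            for name in sorted(files):
                if name.lower().endswith(('.jpg', '.jpeg', '.png')):
                    f.write(os.path.join(root, name) + '\n')

# Negative images: a plain list of file paths is enough.
write_listing('negatives', 'negatives.txt')

# Positive images: the info file passed to the sample-creation tool additionally
# needs the object count and bounding box per line ("path count x y width height"),
# which has to be added by hand or with a marking tool before training.
write_listing('positives', 'positives_raw.txt')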
Emgu CV: OpenCV in .NET (C#, VB, C++ and more)
Real-time object detection in OpenCV using SURF
Object detection (or rather, recognition) is one of the fundamental problems in computer vision, and a lot of techniques have come up to solve it. Invariably all of them employ machine learning, because the computer has to first 'learn' that a particular bunch of pixels with particular properties is called a 'book', remember that information, and use it in the future to say whether a query image contains a book or not. You should know two terms before reading on. Training images are the images which the detector uses to learn information. Query images are the images in which the detector, after learning, is supposed to detect the object(s). Generally, our aim in such experiments is to achieve robust object recognition even when the object in a query image is at a different size or angle than in the training images. A naive approach is template matching, but because of the sheer volume of pixels it processes, it is slow and requires a lot of memory. A short description of the matching strategy follows.
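To make the training-image/query-image distinction concrete, here is a rough Python sketch of descriptor matching with SURF (the article itself uses Emgu CV in C#). It assumes an OpenCV build where cv2.SURF is available, as in the OpenCV 2.4-era Python bindings (newer contrib builds expose it as cv2.xfeatures2d.SURF_create), and the file names are placeholders.

import cv2

train = cv2.imread('training_object.png', cv2.IMREAD_GRAYSCALE)  # image of the object alone
query = cv2.imread('query_scene.png', cv2.IMREAD_GRAYSCALE)      # scene that may contain it

surf = cv2.SURF(400)  # hessian threshold; higher gives fewer, stronger keypoints
kp1, des1 = surf.detectAndCompute(train, None)
kp2, des2 = surf.detectAndCompute(query, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print('good matches: %d' % len(good))
# Many good matches clustered together suggest the trained object appears
# in the query image, even at a different scale or rotation.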
Cascade Classifier — OpenCV v2.4.2 documentation
Goal: In this tutorial you will learn how to use the CascadeClassifier class to detect objects in a video stream. Code: the code for this tutorial is shown below. Result: here is the result of running the code above, using the video stream of a built-in webcam as input. Remember to copy the files haarcascade_frontalface_alt.xml and haarcascade_eye_tree_eyeglasses.xml into your current directory. Help and Feedback: You did not find what you were looking for?
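The tutorial's code is in C++; the snippet below is a minimal Python sketch of the same idea (face and eye cascades applied to a webcam stream), assuming the two XML files mentioned above are in the current directory.

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye_tree_eyeglasses.xml')

cap = cv2.VideoCapture(0)  # built-in webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3, minSize=(60, 60))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]  # search for eyes only inside the detected face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow('detection', frame)
    if cv2.waitKey(10) == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()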
SURF in OpenCV « Achu's TechBlog
Let us now see what SURF is. SURF stands for Speeded Up Robust Features. It is an algorithm which extracts unique keypoints and descriptors from an image. More details on the algorithm can be found here, and a note on its implementation in OpenCV can be found here. [Image: SURF keypoints of my palm] A set of SURF keypoints and descriptors can be extracted from an image and then used later to detect the same image. Object detection using SURF is scale and rotation invariant, which makes it very powerful. The OpenCV library provides an example of such detection called find_obj.cpp. The explanation of the code is straightforward. [Image: SURF keypoints of my mobile phone, captured while holding the phone] Here are a few more screenshots of object recognition using SURF.
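As a rough illustration of extracting and visualising SURF keypoints in Python (the blog's own examples use the C/C++ find_obj.cpp sample; cv2.SURF is assumed to be available, as in the OpenCV 2.4-era bindings, and the file names are placeholders):

import cv2

img = cv2.imread('palm.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder file name

surf = cv2.SURF(400)  # hessian threshold; higher gives fewer, stronger keypoints
keypoints, descriptors = surf.detectAndCompute(img, None)
print('found %d keypoints' % len(keypoints))

# Draw the keypoints: circle size reflects scale, the line shows orientation.
out = cv2.drawKeypoints(img, keypoints, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('palm_keypoints.jpg', out)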
FaceDetection
Note: This tutorial uses the OpenCV 1 interface and (as far as I can tell) is not compatible with the version of haarcascade_frontalface_alt.xml included in the OpenCV 2 source code. See the OpenCV 2 version of the tutorial, which is compatible with the current XML files. How to compile and run facedetect.c is one of the frequently asked questions in the OpenCV Yahoo! Group. Haar-like features: what are they? A recognition process can be much more efficient if it is based on the detection of features that encode some information about the class to be detected. The object detector of OpenCV was initially proposed by Paul Viola and improved by Rainer Lienhart. After a classifier is trained, it can be applied to a region of interest (of the same size as used during training) in an input image. OK, here is my commented facedetect.c file. Hope you understood what the code meant. On Linux
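To give a flavour of what a Haar-like feature is (the excerpt only alludes to it): each feature is the difference between the pixel sums of adjacent rectangles, and an integral image makes every rectangle sum a constant-time lookup. The Python sketch below is purely illustrative and not taken from the tutorial; the image file and window coordinates are made-up assumptions.

import cv2
import numpy as np

img = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)  # placeholder image
ii = cv2.integral(img)  # integral image: ii[y, x] = sum of img[:y, :x]

def rect_sum(x, y, w, h):
    # Sum of pixels in the rectangle (x, y, w, h) using four integral-image lookups.
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# A two-rectangle ("edge") Haar-like feature: bright upper half versus darker
# lower half, e.g. forehead versus eye region. The trained classifier thresholds
# thousands of such feature values inside each region of interest.
x, y, w, h = 10, 10, 24, 24  # hypothetical window inside the image
feature = rect_sum(x, y, w, h // 2) - rect_sum(x, y + h // 2, w, h // 2)
print('feature value:', feature)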
PyBrain
Creative Inspiration: 10 Principles of Design
Graphic design is much more than learning how to use the tools within Photoshop. It requires an intimate understanding of the relationship between different objects. This series of paper art poster designs by Efil Türk covers 10 design principles that are core to any designer's success.
1. "Balance as a design principle places the parts of a visual in an aesthetically pleasing arrangement."
2. "Visual hierarchy is the order in which the human eye perceives what it sees."
3. "Pattern uses the art elements in planned or random repetition to enhance surfaces or paintings."
4. "Rhythm is the repetition of visual movement of the elements: colors, shapes, values, forms, spaces, texture."
5. "Space is an empty place or surface in or around a work of art."
6. "Proportion refers to the relative size and scale of the various elements in a design."
7. "It creates a focal point in a design; it is how we bring attention to what is most important."
8. 9. 10.
Explore Python, machine learning, and the NLTK library
The challenge: use machine learning to categorize RSS feeds. I was recently given the assignment to create an RSS feed categorization subsystem for a client. The goal was to read dozens or even hundreds of RSS feeds and automatically categorize their many articles into one of dozens of predefined subject areas. The client suggested using machine learning, perhaps with Apache Mahout and Hadoop, as she had recently read articles about those technologies. What is machine learning? My first question was, "what exactly is machine learning?" Classification. The Mahout and Ruby detours: armed with an understanding of what machine learning is, the next step was to determine how to implement it. Finding Python and the NLTK: I continued to search for a solution and kept encountering "Python" in the result sets. I decided to pursue a Python solution after I found elegant coding examples such as: print feedparser.parse(" Getting up to speed on Python: pip
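The article goes on to build an NLTK classifier for the feed articles, but that code is not reproduced in this excerpt. As a hedged, minimal sketch of the general approach (bag-of-words features fed to NLTK's Naive Bayes classifier; the tiny training set here is invented purely for illustration):

import nltk

def word_features(text):
    # Very simple bag-of-words feature extractor.
    return {word.lower(): True for word in text.split()}

# Toy training data: (article text, category) pairs. A real system would use
# labelled articles pulled from the RSS feeds, e.g. via the feedparser module.
train = [
    ("new python release improves machine learning libraries", "technology"),
    ("team wins championship after dramatic final match", "sports"),
    ("central bank raises interest rates again", "finance"),
]
train_set = [(word_features(text), label) for text, label in train]

classifier = nltk.NaiveBayesClassifier.train(train_set)
print(classifier.classify(word_features("python library for learning from data")))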
Machine Learning in Python Has Never Been Easier!
At BigML we believe that over the next few years automated, data-driven decisions and data-driven applications are going to change the world. In fact, we think it will be the biggest shift in business efficiency since the dawn of the office calculator, when individuals had "Computer" listed as the title on their business cards. We want to help people rapidly and easily create predictive models using their datasets, no matter what size they are. Our easy-to-use, public API is a great step in that direction, but bindings for popular languages are obviously a big bonus. Thus, we are very happy to announce an open source Python binding to BigML.io, the BigML REST API. The BigML Python module makes it extremely easy to programmatically manage BigML sources, datasets, models and predictions. Just like magic! We have tried to build a very simple binding, just wrapping all the HTTP requests and responses to BigML.io within one class. Beautiful models: see our former post about it.
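A sketch of the source, dataset, model, prediction flow the announcement describes, assuming the bigml package is installed and credentials are set via the BIGML_USERNAME and BIGML_API_KEY environment variables; the method names follow BigML's published examples, and the CSV path and input fields here are made up for illustration.

from bigml.api import BigML

api = BigML()  # picks up credentials from the environment

# Each step wraps one REST call to BigML.io.
source = api.create_source('./data/iris.csv')   # upload the raw data
dataset = api.create_dataset(source)            # turn it into a dataset
model = api.create_model(dataset)               # train a model on the dataset
prediction = api.create_prediction(model, {'petal length': 4.2, 'petal width': 1.3})

api.pprint(prediction)  # print the predicted value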
Milk: Machine Learning Toolkit for Python
This is the code that I use for my research projects. Where can I get it? Github, as usual. easy_install milk or pip install milk if you use those tools. Examples: here is how to test how well you can classify some features/labels data, measured by cross-validation:

import numpy as np
import milk
features = np.random.rand(100, 10)  # 2d array of features: 100 examples of 10 features each
labels = np.zeros(100)
features[50:] += .5
labels[50:] = 1
confusion_matrix, names = milk.nfoldcrossvalidation(features, labels)
print 'Accuracy:', confusion_matrix.trace()/float(confusion_matrix.sum())

If you want to use a classifier, you instantiate a learner object and call its train() method. Features: Pythonic interface to libSVM.
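The excerpt cuts off before the learner example. The following is a minimal sketch continuing from the arrays above, based on milk's documented default classifier; treat the exact call names as an assumption of this sketch rather than a quote from the project page.

import numpy as np
import milk

# Same toy data as above: two classes separated by an offset of 0.5.
features = np.random.rand(100, 10)
labels = np.zeros(100)
features[50:] += .5
labels[50:] = 1

learner = milk.defaultclassifier()   # picks a reasonable default pipeline
model = learner.train(features, labels)

example = np.random.rand(10) + .5    # a new, unseen example
print 'Predicted label:', model.apply(example)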
PyML - machine learning in Python — PyML v0.7.3 documentation
PythonBooks - Learn Python the easy way!