Accentuate.us: Machine Learning for Complex Language Entry

Editor's note: We'd like to invite people with interesting machine learning and data analysis applications to explain the techniques that are working for them in the real world on real data. Accentuate.us is an open-source browser add-on that uses machine learning techniques to make it easier for people around the world to communicate.

Authors: Kevin Scannell and Michael Schade

Many languages around the world use the familiar Latin alphabet (A-Z), but in order to represent the sounds of the language accurately, their writing systems employ diacritical marks and other special characters. For example: Vietnamese (Mọi người đều có quyền tự do ngôn luận và bày tỏ quan điểm), Hawaiian (Ua noa i nā kānaka apau ke kūʻokoʻa o ka manaʻo a me ka hōʻike ʻana i ka manaʻo), Ewe (Amesiame kpɔ mɔ abu tame le eɖokui si eye wòaɖe eƒe susu agblɔ faa mɔxexe manɔmee), and hundreds of others. It is easiest to describe our algorithm with an example.
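The authors' worked example is not included in this excerpt. As a rough illustration of the general idea only (not necessarily Accentuate.us's actual method), here is a minimal sketch of diacritic restoration treated as a word-level prediction problem: strip an accented training corpus down to plain ASCII, remember the most frequent accented form for each stripped word, and substitute that form at input time. The function names and the tiny corpus are hypothetical.

    import unicodedata
    from collections import Counter, defaultdict

    def strip_diacritics(word):
        # Decompose, drop combining marks, and map letters (like Vietnamese
        # "đ") that do not decompose into a base letter plus marks.
        decomposed = unicodedata.normalize("NFD", word)
        stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
        return stripped.replace("đ", "d").replace("Đ", "D")

    def train(corpus_words):
        # For every ASCII-stripped key, count the accented forms it came from.
        counts = defaultdict(Counter)
        for w in corpus_words:
            counts[strip_diacritics(w.lower())][w.lower()] += 1
        # Keep only the most frequent accented form for each key.
        return {key: c.most_common(1)[0][0] for key, c in counts.items()}

    def restore_diacritics(text, model):
        # Replace each word with its most likely accented form, if one is known.
        return " ".join(model.get(strip_diacritics(w.lower()), w) for w in text.split())

    # Hypothetical toy corpus; a real system would train on a large text collection.
    corpus = "mọi người đều có quyền tự do ngôn luận".split()
    model = train(corpus)
    print(restore_diacritics("moi nguoi deu co quyen tu do ngon luan", model))

A production system would of course use far more context than a single word, but the sketch shows why large amounts of accented text in the target language are the key ingredient.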
Face Detection and Face Recognition with Real-time Training from a Camera

To improve the recognition performance, there are many things that can be improved here, some of them fairly easy to implement. For example, you could add color processing, edge detection, and so on. You can usually improve face recognition accuracy by using more input images, at least 50 per person, ideally by taking more photos of each person, particularly from different angles and lighting conditions. If you can't take more photos, there are several simple techniques you can use to obtain more training images by generating new images from your existing ones. You could create mirror copies of your facial images, so that you will have twice as many training images and the training set won't have a bias towards left or right. You could translate, resize, or rotate your facial images slightly to produce many alternative images for training, so that the recognizer is less sensitive to exact conditions. You could add image noise to have more training images that improve the tolerance to noise.
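These augmentation ideas (mirroring, small geometric jitter, added noise) are easy to sketch with OpenCV and NumPy. The following is a minimal illustration, assuming faces have already been cropped to grayscale arrays; the function name, parameter values, and file name are mine, not the article's.

    import cv2
    import numpy as np

    def augment_face(img, shift=2, angle=5.0, noise_sigma=8.0):
        """Return a few extra training variants of one cropped face image."""
        h, w = img.shape[:2]
        variants = []

        # 1. Mirror copy removes any left/right bias.
        variants.append(cv2.flip(img, 1))

        # 2. Small translation and rotation make training less sensitive
        #    to exact alignment.
        m_shift = np.float32([[1, 0, shift], [0, 1, shift]])
        variants.append(cv2.warpAffine(img, m_shift, (w, h)))
        m_rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        variants.append(cv2.warpAffine(img, m_rot, (w, h)))

        # 3. Additive Gaussian noise improves tolerance to sensor noise.
        noisy = img.astype(np.float32) + np.random.normal(0, noise_sigma, img.shape)
        variants.append(np.clip(noisy, 0, 255).astype(np.uint8))

        return variants

    # Example: expand one face image into several training samples.
    face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    extra = augment_face(face)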
Computing Your Skill

Summary: I describe how the TrueSkill algorithm works using concepts you're already familiar with. TrueSkill is used on Xbox Live to rank and match players, and it serves as a great way to understand how statistical machine learning is actually applied today. I've also created an open source project where I implemented TrueSkill three different times, in increasing complexity and capability.

Introduction

It seemed easy enough: I wanted to create a database to track the skill levels of my coworkers in chess and foosball. But there's a problem. Machine learning is a hot area in computer science, but it's intimidating. "Not knowing something doesn't mean you're dumb; it just means you don't know it." I learned that the problem isn't the difficulty of the ideas themselves, but rather that the ideas make too big of a jump from the math that we typically learn in school.

Skill ≈ Probability of Winning

Skill is tricky to measure. The key idea is that a single skill number is meaningless. See?
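The "skill ≈ probability of winning" idea can be made concrete with the standard TrueSkill-style win probability for two players whose skills are modeled as Gaussians: P(A beats B) = Φ((μA − μB) / sqrt(2β² + σA² + σB²)). The sketch below is a generic illustration of that formula; the β default and the example ratings are illustrative, not values from the article.

    import math

    def win_probability(mu_a, sigma_a, mu_b, sigma_b, beta=25.0 / 6):
        """P(A beats B) when each skill is a Gaussian belief (mu, sigma)
        and each game performance adds noise with standard deviation beta."""
        delta = mu_a - mu_b
        denom = math.sqrt(2 * beta**2 + sigma_a**2 + sigma_b**2)
        # Standard normal CDF via erf.
        return 0.5 * (1 + math.erf(delta / (denom * math.sqrt(2))))

    # Illustrative ratings (TrueSkill's defaults are mu=25, sigma=25/3).
    print(win_probability(30, 2.5, 25, 8.3))

Notice that the answer depends on the uncertainties as well as the means, which is exactly why a single skill number on its own is not enough.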
CameraCapture

Here is a simple framework to connect to a camera and show the images in a window.

Sarin Sukumar A, DSP Engineer - sarinsukumar@gmail.com

This article shows how to control the camera parameters from your program. The user can control the output format of the camera (YUY2, RGB, etc.), brightness, exposure, autofocus, zoom, white balance, and so on. I have done this for a USB camera with OpenCV 2.0.

    bool CvCaptureCAM_DShow::open( int _index )
    {
        bool result = false;
        long min = 0, max = 0, currentValue = 0, flags = 0, defaultValue = 0, stepAmnt = 0;
        close();

    #ifdef DEFAULT
        VI.deviceSetupWithSubtype(_index, 640, 480, _YUY2);
    #endif
    #ifdef MEDIUM
        VI.deviceSetupWithSubtype(_index, 1280, 1024, _YUY2);
    #endif
    #ifdef ABOVE_MEDIUM
        VI.deviceSetupWithSubtype(_index, 1600, 1200, _YUY2);
    #endif
    #ifdef HIGH
        VI.deviceSetupWithSubtype(_index, 2048, 1536, _YUY2);
    #endif

        //VI.showSettingsWindow(_index);

        // custom code
        if( ! getVideoSettingFilter()

Make this line of code like this:
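As a side note, later OpenCV versions expose many of these parameters through the standard VideoCapture property interface, which may be enough for simple cases without modifying the DirectShow source. A minimal sketch follows; the property values are arbitrary examples, and which properties actually take effect depends on the camera driver and capture backend.

    import cv2

    cap = cv2.VideoCapture(0)  # first USB camera

    # Request a capture size; the driver may pick the nearest supported mode.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1024)

    # These controls are driver-dependent; set() returns False if unsupported.
    cap.set(cv2.CAP_PROP_BRIGHTNESS, 128)
    cap.set(cv2.CAP_PROP_EXPOSURE, -4)
    cap.set(cv2.CAP_PROP_AUTOFOCUS, 1)

    ok, frame = cap.read()
    if ok:
        cv2.imshow("camera", frame)
        cv2.waitKey(0)
    cap.release()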
LIBSVM -- A Library for Support Vector Machines

Chih-Chung Chang and Chih-Jen Lin

Version 3.18 released on April Fools' Day, 2014. LIBSVM tools provides many extensions of LIBSVM. We now have a nice page, LIBSVM data sets, providing problems in LIBSVM format. A practical guide to SVM classification is available now! To see the importance of parameter selection, please see our guide for beginners. Using LIBSVM, our group is the winner of the IJCNN 2001 Challenge (two of the three competitions), the EUNITE worldwide competition on electricity load prediction, the NIPS 2003 feature selection challenge (third place), the WCCI 2008 Causation and Prediction challenge (one of the two winners), and the Active Learning Challenge 2010 (2nd place).

Introduction

LIBSVM is integrated software for support vector classification (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR), and distribution estimation (one-class SVM). Since version 2.8, it implements an SMO-type algorithm proposed in this paper: R.

Download LIBSVM
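As a quick taste of what C-SVC training looks like, here is a minimal sketch using the svmutil Python interface that ships with LIBSVM (in recent pip packages the import path is libsvm.svmutil instead). The toy data and the parameter string are illustrative only; for real problems, follow the parameter-selection advice in the beginners' guide.

    from svmutil import svm_problem, svm_parameter, svm_train, svm_predict

    # Two tiny sparse training examples: labels +1 and -1,
    # features given as {index: value} dictionaries.
    y = [1, -1]
    x = [{1: 1, 2: 1}, {1: -1, 2: -1}]

    prob = svm_problem(y, x)
    param = svm_parameter('-t 2 -c 4')   # RBF kernel, C = 4
    model = svm_train(prob, param)

    # Predict on the training points; svm_predict reports accuracy.
    labels, accuracy, values = svm_predict(y, x, model)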
drive a webcam with python

I bought a USB webcam off of eBay quite some time ago, and I decided to connect it to my telescope with a little bit of hardware hackery. I'll have to see about posting a writeup on how I did that at a later time. Anyway, when I installed my camera software, I quickly found how horrible the program was. It gave a tiny preview of what the camera saw, and had no way of capturing images or video without waaaay too many clicks of the mouse. That's when I decided to write my own in Python. The main libraries that I ended up using were VideoCapture, PIL, and pygame. Here's the code: I decided to use pygame in order to build this because it can actually handle the fps that I need for video. A couple of noteworthy points: the function on line 15 is simply there to help automate displaying information on the screen. If you're trying to write a webcam app of your own, I hope this gets you pointed in the right direction.
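The author's original listing is not reproduced in this excerpt. As a rough sketch of how the three libraries fit together (VideoCapture grabs a PIL image from the camera on Windows, and pygame displays it at a usable frame rate), something along these lines works; the window size and loop structure are my own guesses, not the original code.

    import pygame
    from VideoCapture import Device  # Windows-only webcam library

    pygame.init()
    cam = Device()                            # first attached camera
    screen = pygame.display.set_mode((640, 480))
    pygame.display.set_caption("webcam")

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        img = cam.getImage()                  # PIL image from the camera
        # tobytes() on Pillow; very old PIL uses tostring() instead.
        frame = pygame.image.frombuffer(img.tobytes(), img.size, "RGB")
        screen.blit(frame, (0, 0))
        pygame.display.flip()

    pygame.quit()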
Evolving Objects (EO): Evolutionary Computation Framework

Capturing frames from a webcam on Linux :: Joseph Perla

Not many people are trying to capture images from their webcam using Python under Linux and blogging about it. In fact, I could find nobody who did that. I found people capturing images using Python under Windows, people capturing images using C under Linux, and finally some people capturing images with Python under Linux but not blogging about it. I wrote this instructional post to help people who want to start processing images from a webcam using the great Python language and a stable Linux operating system. There is a very good library for capturing images in Windows called VideoCapture. It works, and a number of people have blogged about using it. There are a number of very old libraries which were meant to help with capturing images on Linux: libfg, two separate versions of pyv4l, and pyv4l2. Finally, I learned that OpenCV has an interface to V4L/V4L2. Plus, OpenCV has very complete Python bindings. This is example utility code.
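The post's utility code is not reproduced in this excerpt. As a small stand-in, here is a sketch of grabbing a frame from a V4L2 webcam through OpenCV's Python bindings; it uses the current cv2 API rather than the older bindings the post would have used, so treat it as an approximation of the approach rather than the original code.

    import cv2

    def grab_frame(device_index=0, filename="frame.png"):
        """Capture a single frame from the webcam and save it to disk."""
        cap = cv2.VideoCapture(device_index)   # /dev/video0 via V4L2 on Linux
        if not cap.isOpened():
            raise RuntimeError("could not open webcam")
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("could not read a frame")
        cv2.imwrite(filename, frame)
        return frame

    if __name__ == "__main__":
        grab_frame()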
Theory | food-bot.com

And so I set out to solve the ultimate problem, a problem that, if solved effectively, could revolutionize the lives of thousands of college students across the country. As I began formalizing and exploring the problem, I realized it is far less simple than it might first appear, and not unlike a famous challenging problem in computer science. This page explains various aspects of the free food problem as well as various strategies for solving it.

problem statement

Given an arbitrary document, d, determine whether d contains information about a free food event, and if so, return an array of correctly-associated information about each event (date/time, location, and food type). An important aspect of this problem is the correct classification of an arbitrary document as either free food or non-free food. My solution uses basic ideas from AI, especially the idea of Maximum Likelihood Estimation. Let F be the Free Food category, N the Non-Free Food category, and D the number of documents used in training. And so, the maximum-likelihood estimate of the prior is P(F) = (number of Free Food training documents) / D.
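The page's derivation continues beyond this excerpt. To illustrate how such maximum-likelihood estimates plug into a simple free-food classifier, here is a sketch of a naive Bayes text classifier over word counts; the training data, add-one smoothing, and function names are hypothetical, and this is a generic textbook construction rather than food-bot's actual code.

    import math
    from collections import Counter

    def train(docs):
        """docs: list of (text, label) pairs with label in {"free", "not_free"}.
        Returns MLE priors and add-one-smoothed per-class word probabilities."""
        priors = Counter(label for _, label in docs)
        word_counts = {"free": Counter(), "not_free": Counter()}
        for text, label in docs:
            word_counts[label].update(text.lower().split())
        total_docs = sum(priors.values())
        vocab = set(w for c in word_counts.values() for w in c)
        model = {}
        for label, counts in word_counts.items():
            total = sum(counts.values())
            model[label] = {
                "prior": priors[label] / total_docs,   # MLE prior P(category)
                "word_prob": {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab},
                "default": 1 / (total + len(vocab)),   # unseen words
            }
        return model

    def classify(text, model):
        """Pick the category with the highest log posterior."""
        scores = {}
        for label, params in model.items():
            score = math.log(params["prior"])
            for w in text.lower().split():
                score += math.log(params["word_prob"].get(w, params["default"]))
            scores[label] = score
        return max(scores, key=scores.get)

    docs = [("free pizza in the lounge at 6pm", "free"),
            ("exam review session tomorrow", "not_free")]
    model = train(docs)
    print(classify("pizza and snacks provided", model))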
Installing OpenCV 2.2 in Ubuntu 11.04 – Sebastian Montabone

Many people have used my previous tutorial about installing OpenCV 2.1 in Ubuntu 9.10. In the comments of that post, I noticed great interest in using OpenCV with Python and the Intel Threading Building Blocks (TBB). Since new versions of OpenCV and Ubuntu are available, I decided to create a new post with detailed instructions for installing the latest version of OpenCV, 2.2, in the latest version of Ubuntu, 11.04, with Python and TBB support.

UPDATE: Now you can use my new guide to install OpenCV 2.4.1 in Ubuntu 12.04 LTS.

First, you need to install many dependencies, such as support for reading and writing image files, drawing on the screen, some needed tools, etc. This step is very easy: you only need to write the following command in the Terminal. Now we need to get and compile the ffmpeg source code so that video files work properly with OpenCV. The next step is to get the OpenCV 2.2 code. Now we have to generate the Makefile by using cmake, and then you have to configure OpenCV.
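The actual shell commands from the post are not included in this excerpt. Once the build and install finish, one quick way to confirm that the Python bindings work is to import the module and grab a frame. The snippet below assumes the old-style cv module that shipped with OpenCV 2.2; if you install a newer version, adapt it to the cv2 API instead.

    import cv  # OpenCV 2.2-era Python bindings

    # Grab one frame from the first webcam and save it, just to confirm
    # that OpenCV and its Python support were built correctly.
    capture = cv.CaptureFromCAM(0)
    frame = cv.QueryFrame(capture)
    if frame is None:
        print("could not read a frame; check the camera and the build options")
    else:
        cv.SaveImage("test_frame.png", frame)
        print("OpenCV Python bindings are working")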