Object Recognition Bot (above) and Zooming Bot (below)
This is a bot that recognizes objects in an image. If possible, it will zoom in on the image. It is like Facebook's photo zoom feature, but more powerful. The project has several working demos online here, here, and here (YouTube). I wanted to post this bot before ForgetFaceBot because people always ask, "How can I have a bot that recognizes faces in pictures and then posts about them on Facebook?"
This is different from that. You upload an image to the bot, and it will process the image if it has not been processed before. Uploaded images are sent to the Google Cloud Vision API for processing, so you will need a Gmail account if you want to use this code. The code uses Google's powerful TensorFlow machine learning platform. More details about how this works here.
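For reference, here is a minimal sketch of that upload-then-process flow using the google-cloud-vision Python client. The hash-based "already processed" cache and the file path are my own assumptions; the post doesn't show the bot's storage layer.

```python
# pip install google-cloud-vision
# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
import hashlib
from google.cloud import vision

seen_hashes = set()  # hypothetical cache of already-processed images

def process_image(path: str):
    with open(path, "rb") as f:
        content = f.read()

    # Skip images that were processed before (the "only once" behavior).
    digest = hashlib.sha256(content).hexdigest()
    if digest in seen_hashes:
        return None
    seen_hashes.add(digest)

    # Send the raw bytes to Cloud Vision for label detection.
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=content))
    return [(label.description, label.score) for label in response.label_annotations]

print(process_image("photo.jpg"))  # e.g. [('Dog', 0.97), ('Snout', 0.83), ...]
```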
Zooming Bot (below)
The idea behind a zooming bot is simple: take a user-supplied image and, when possible, show all the available thumbnails for that photo. When clicked, each thumbnail links to that location in the main photo, so the user can navigate between different areas of interest. The zooming bot I've made is unique in that it remembers your previous zoom positions and thumbnails: if you zoomed into some area of an image earlier and return to that page later, it will show that same area again, along with one or more new images (the newer images are also linked).
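A minimal sketch of that "remember where you zoomed" behavior might look like the following; keying by image hash and keeping the history in memory are assumptions on my part, since the post doesn't show the bot's internals:

```python
import hashlib
from collections import defaultdict

# Hypothetical store: image hash -> list of (x, y, width, height) zoom regions.
zoom_history = defaultdict(list)

def image_key(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def record_zoom(image_bytes: bytes, region: tuple) -> None:
    """Remember a region the user zoomed into for this image."""
    zoom_history[image_key(image_bytes)].append(region)

def previous_zooms(image_bytes: bytes) -> list:
    """On a revisit, return earlier zoom regions so they can be re-shown."""
    return zoom_history[image_key(image_bytes)]

# Usage: record a zoom, then recover it on the next visit.
data = open("photo.jpg", "rb").read()
record_zoom(data, (120, 80, 300, 200))
print(previous_zooms(data))  # [(120, 80, 300, 200)]
```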
ForgetFaceBot (demo) (above) and Vision – This Adds Image Recognition to Your Facebook Page (below)
This is a bot that uses a vision API to post about the faces it sees on Facebook. It is similar to Google's photo search feature, but it works for any image you want.
ForgetFaceBot pulls your entire history of photos posted to your Facebook page, along with their time/date stamps. The facial detection algorithm looks through every single one of those photos and takes note of every face found in them. It then keeps all photos that contain at least two faces (or just one face, if no photo contains more than one person).
Everything else falls into the background category. The bot then searches for other pictures that look like they're from the same event as the detected images. If any are found, it searches those pictures for faces as well, and if one matches a person detected in the first set of photos, the bot posts a status about that person on your Facebook page. The details of how this all works can be found here.
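The post doesn't include ForgetFaceBot's source, but the core "does this face match someone from the first set?" step could be sketched with the open-source face_recognition library (my substitution, not necessarily what the bot uses):

```python
# pip install face_recognition
import face_recognition

def faces_in(path: str) -> list:
    """Return one 128-d encoding per face found in the image."""
    image = face_recognition.load_image_file(path)
    return face_recognition.face_encodings(image)

# First set: photos that contained at least two faces (paths are illustrative).
known_encodings = [enc for p in ["party1.jpg", "party2.jpg"] for enc in faces_in(p)]

# Candidate photo that looks like it's from the same event.
for candidate in faces_in("same_event.jpg"):
    matches = face_recognition.compare_faces(known_encodings, candidate, tolerance=0.6)
    if any(matches):
        print("Match: this person appears in the earlier photos.")
```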
Vision – This Adds Image Recognition to Your Facebook Page (above)
Here is a simple demo I made using Google's Cloud Vision API to detect objects around you through your webcam. It's really interesting because you use Google's machine learning tools to teach it what a dog looks like, and then ask it questions like "Is my dog in this room?" or "What color is my dog?" I've added a simple GUI to the demo so you have a visual sense of how it works. It uses JavaScript for the client-side code and Cloud Vision API calls.
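The demo itself runs in the browser, but the grab-a-frame-and-ask-Vision loop is easy to sketch in Python; using OpenCV for the webcam capture is my own substitution, not part of the original demo:

```python
# pip install opencv-python google-cloud-vision
import cv2
from google.cloud import vision

client = vision.ImageAnnotatorClient()
camera = cv2.VideoCapture(0)  # default webcam

ok, frame = camera.read()
if ok:
    # Encode the frame as JPEG bytes and ask Vision which objects it sees.
    _, jpeg = cv2.imencode(".jpg", frame)
    response = client.object_localization(image=vision.Image(content=jpeg.tobytes()))
    for obj in response.localized_object_annotations:
        print(f"{obj.name}: {obj.score:.2f}")  # e.g. "Dog: 0.91"

camera.release()
```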
This is not my project, but I thought I'd share it anyway. When you hear the word "vision", what's the first thing that comes to mind? Many people think about computers that are able to see. If you read any article on deep learning, at some point there will be something along the lines of "computer vision is when X happens in AI".
But in this project, called Vision, by Kavya Joshi, they actually make computer vision tangible in a way you can interact with yourself! As per one of their blog posts: "How do we make it work? We started off with a number of ideas and began building the prototype. Broadly we wanted to create an interface that would let us see what was going on in the complex algorithm inside TensorFlow." In this project they used Google's TensorFlow library, which you have probably heard of if you have been following deep learning recently.
They also took up an interesting challenge: they wanted to encourage people from across the world to take part in the research, so they developed a system for machine learning research called Crowd Research 1.0! If you want to learn more about this project, you can go here.
How Robots Can Use Neural Networks to See As People Do
In this blog post, they talk about a robot's visual neural network and how hard it is to make it behave more like a human's. "Some robots can see, but they don't look very closely; we're working on teaching our robot to do both at the same time." They give a great example: if eyes were cameras, people would look really bad at taking pictures, because your eyes move around while searching for something important and only "take the picture" once you've found it.
When someone hears the term convolutional neural network, it can be very hard to understand what it actually means. There are many different kinds of layers in a neural network, and the convolutional layer is just one of them. The word "convolution" sounds difficult because there is a lot of math behind it, but in the end it is not complicated to explain.
The best way to explain it is through an example where you have two sets of data, about cats and dogs for instance. Each set has different attributes relating to cats or dogs, so if someone wanted to find out whether the two sets were similar, they could look at each attribute individually while also comparing the sets side by side.
From that side-by-side comparison, you would notice attributes that are similar in both sets, which means the sets are more likely to be related than if those attributes differed. That is what the convolutional layer does: it looks at your image and can tell whether any one spot is similar to another spot, so given an entire picture, the network scans across the image in a grid pattern and compares all of these little dots (pixels) for similarities.
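To make the grid-scan idea concrete, here is a tiny NumPy sketch of a single convolutional filter sliding across an image; the 3x3 edge-detecting kernel and the toy image are illustrative choices of mine, not anything from the post:

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a small kernel over the image in a grid pattern,
    scoring how strongly each patch resembles the kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # similarity score for this spot
    return out

# A vertical-edge kernel applied to a toy 5x5 "image" with an edge in it.
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)
print(convolve2d(image, kernel))  # largest values where the edge sits
```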
Calculate the Gini Coefficient with Python!
The Gini coefficient, which is used to measure income inequality, has been around for a long time now, and it makes sense that someone would build an online calculator so people can actually find out what the Gini coefficient is in their area. Income inequality is determined by drawing a Lorenz curve, with the cumulative share of the population along the bottom axis and the cumulative share of income along the vertical axis.
You then determine how equal or unequal a society is by comparing the two axes. Under perfect equality the Lorenz curve would be a straight diagonal line, but that's rarely the case: even if everyone starts off at, say, 3 dollars, some make more money than others over time. There are other measures of inequality similar to the Gini coefficient as well.
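Since the heading promises Python, here is a minimal sketch of computing the Gini coefficient from a list of incomes; this is the standard closed form derived from the Lorenz curve, not the linked calculator's actual code:

```python
import numpy as np

def gini(incomes) -> float:
    """Gini coefficient of an income distribution:
    0 means perfect equality, values near 1 mean extreme inequality."""
    x = np.sort(np.asarray(incomes, dtype=float))  # incomes, ascending
    n = x.size
    ranks = np.arange(1, n + 1)
    # Closed form from the Lorenz curve:
    # G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n
    return (2.0 * np.sum(ranks * x)) / (n * np.sum(x)) - (n + 1.0) / n

print(gini([3, 3, 3, 3]))     # 0.0: everyone earns the same 3 dollars
print(gini([1, 2, 3, 1000]))  # ~0.75: one person holds nearly all the income
```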