What is machine learning? We've surely heard the term float around in recent years as one of the hottest buzzwords to hit the technosphere, but does anyone really know what it means? Well, from the name alone we can deduce that it has something to do with machines learning, which is undoubtedly true. However, let's look at a more formal definition: "Machine learning is a field of computer science that uses statistical techniques to give computer systems the ability to 'learn' with data, without being explicitly programmed."
The magic of this field of study lies in the very last segment of that definition: "without being explicitly programmed." Traditionally in computer science, whenever we needed a machine to do something, we'd write out a clear-cut instruction set in the form of code. The machine would then follow each step to the letter, locked into a rigid mold with no room for deviation.
There isn't anything wrong with this approach either; as a matter of fact, it's crucial in applications such as airplane software. You definitely would not want any degree of deviation in a several-ton machine tens of thousands of feet off the ground! However, there are certain applications in which machine learning may well outperform its traditional counterpart. Let's consider a pattern recognition problem: fraud detection. Many organizations write rule-based software to uncover known fraud patterns. However, rules alone are not effective at covering unknown patterns, adapting to new fraud schemes, or keeping up with criminals' increasingly sophisticated techniques. This is where machine learning becomes vital.
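To make that concrete, here is a minimal sketch of rule-free fraud flagging using scikit-learn's IsolationForest, an unsupervised anomaly detector. The transaction features and values below are made up purely for illustration; they don't come from any real fraud system.

```python
# A minimal sketch: flagging an anomalous transaction with an unsupervised
# model instead of hand-written rules. All feature values are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: transaction amount, hour of day, transactions in the last 24h
normal = rng.normal(loc=[50, 14, 3], scale=[20, 4, 1], size=(1000, 3))
suspicious = np.array([[4800, 3, 40]])  # huge amount, odd hour, high velocity

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns 1 for inliers and -1 for anomalies
print(model.predict(suspicious))  # expected: [-1], flagged with no explicit rule
```

The point of the sketch: nobody wrote a rule saying "amounts over $4,000 at 3 a.m. are fraud"; the model learned the shape of normal activity from data and flagged the deviation on its own.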
What Are the Applications of Machine Learning?
Machine learning has already seeped into a plethora of industries, giving rise to a wide variety of applications.
Personal Assistants
I'm sure we've all come into contact with Siri, who will (within reason) do whatever you ask. This is a direct example of a virtual personal assistant, whose primary function is to help you find information. Want Mexican restaurants within a 10-mile radius? Tomorrow's weather? The events you have scheduled this weekend? Siri, or any virtual assistant, will do this for you. Under the hood, assistants like Siri look up the information, remember your related queries, and send commands to other resources (like phone apps) to collect data.
Video Surveillance
When we see or think of surveillance cameras, we think of the security guard in the control room, watching (or sometimes not!) our every move. But what if humans were no longer needed? Some sophisticated surveillance systems today are powered by AI, making it possible to detect crime before it happens! With computer vision thrown into the mix, the machine can track suspicious behavior such as people standing motionless for an extended period, stumbling, and so on. The system can then alert human employees, who handle the situation accordingly. When reported activities are confirmed to be genuine, that feedback continuously improves the surveillance service.
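As one simplified illustration of the "standing motionless" behavior mentioned above, here is a sketch of a loitering check. It assumes a hypothetical upstream computer-vision tracker that already yields (x, y) positions per frame; the thresholds are invented for the example, not taken from any real product.

```python
# A minimal loitering check (illustrative only): flag a tracked person who
# stays nearly motionless for too long. Assumes an upstream tracker supplies
# one (x, y) position per video frame for each person.
from math import hypot

FPS = 30                # frames per second of the camera feed (assumed)
MAX_DRIFT_PX = 15       # movement below this counts as "motionless"
ALERT_AFTER_SEC = 120   # alert after two minutes without movement

def loitering_alert(track: list[tuple[float, float]]) -> bool:
    """track is a list of (x, y) positions, one per frame, for one person."""
    window = FPS * ALERT_AFTER_SEC
    if len(track) < window:
        return False
    recent = track[-window:]
    x0, y0 = recent[0]
    # If every recent position stays within MAX_DRIFT_PX of the window's
    # start, the person has been effectively motionless the whole time.
    return all(hypot(x - x0, y - y0) <= MAX_DRIFT_PX for x, y in recent)
```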
Social Media
Perhaps the most easily relatable, we see a lot of machine learning in social media as well. For example, look at Instagram. Whenever you hit the explore icon, there's a section labeled 'For You' that gives you a personalized feed based on the pictures and videos you've interacted with in the past. A few more examples:
- People You May Know. Facebook uses ML algorithms to suggest people you may know based on the friends you connect with, the profiles you frequently visit, your interests, workplace, groups, and more (a minimal sketch of the mutual-friends signal follows this list).
- Face recognition. Whenever you post a picture of yourself and a few friends, Facebook can recognize their faces. It does this by measuring unique facial features and their geometry in the picture and matching them against people on your friends list.
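Facebook's real People You May Know system weighs many signals, but the simplest one, mutual friends, is easy to sketch. The toy social graph and ranking below are my own illustration, not Facebook's actual algorithm.

```python
# A minimal mutual-friends sketch: rank non-friends by how many friends
# you share with them. The graph data here is invented for illustration.
from collections import Counter

friends = {
    "you": {"ana", "ben", "cho"},
    "ana": {"you", "ben", "dia"},
    "ben": {"you", "ana", "dia", "eli"},
    "cho": {"you", "eli"},
    "dia": {"ana", "ben"},
    "eli": {"ben", "cho"},
}

def suggestions(user: str) -> list[tuple[str, int]]:
    counts: Counter[str] = Counter()
    for friend in friends[user]:
        for candidate in friends[friend]:
            if candidate != user and candidate not in friends[user]:
                counts[candidate] += 1  # one shared friend found
    return counts.most_common()

print(suggestions("you"))  # -> [('dia', 2), ('eli', 2)]
```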
How Machines Are Taught to Learn
Tanenbaum, an esteemed researcher and professor, as well as dean at the Advanced School of Computing and Imaging, says “We’re trying to take one of the oldest dreams of AI seriously: that you could build a machine that grows into intelligence the way a human does—starts like a baby and learns like a child.”
Is this possible, though? Can we teach a machine like a baby? Well, the first distinction is that babies are born with instincts that help them develop common sense, something that is still lost on AI. A child does not learn the way a computer does, with explicit rules to identify an apple such as color == red, 7 cm <= size <= 8 cm, shape == round. And the more nuanced cases that such rules would miss, like a mutated apple that comes out orange, would certainly not fool a child.
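Those apple rules translate directly into code, which makes their brittleness easy to see. A tiny sketch (values invented for illustration):

```python
# Why explicit rules are brittle: a hard-coded "is it an apple?" check
# misses the orange mutant apple that a child would still recognize.
def is_apple_by_rule(color: str, size_cm: float, shape: str) -> bool:
    return color == "red" and 7 <= size_cm <= 8 and shape == "round"

print(is_apple_by_rule("red", 7.5, "round"))     # True: the textbook apple
print(is_apple_by_rule("orange", 7.5, "round"))  # False: the mutant slips by
```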
Humans are not hardwired, but neither are we born as blank slates, unable to reason about the world around us. We have predispositions that help us draw conclusions about the world, even without being continuously trained by some algorithm. This is the critical difference between AI and humans, and it makes it difficult to teach a machine in a way that mirrors how we naturally learn.
However, significant strides have been taken toward eventually reaching this level of artificial complexity. Researchers at Google's AI company, DeepMind, have developed a program that takes on a range of tasks, performing almost as well as a human, and (critically!) have devised a way for the machine not to forget how it solved problems in the past, so it can use that knowledge to tackle new ones. To do this, the researchers drew upon neuroscience studies showing that animals learn continually by preserving the brain connections that maintain previously acquired skills.
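The DeepMind work described here corresponds to a published technique called elastic weight consolidation (EWC): a penalty term that anchors the parameters most important to old tasks while leaving the rest free to change. Here is a minimal numpy sketch of that penalty; all values are made up, and a real system would estimate the importance (Fisher) terms from old-task data.

```python
# A minimal sketch of the elastic weight consolidation (EWC) penalty.
# Values are invented; real systems compute 'fisher' from task-A data.
import numpy as np

theta   = np.array([0.9, -0.2, 1.5])  # current parameters (training task B)
theta_a = np.array([1.0,  0.0, 1.4])  # parameters learned on old task A
fisher  = np.array([5.0,  0.1, 3.0])  # importance of each parameter to task A
lam     = 0.5                         # how strongly to protect old knowledge

def ewc_penalty(theta, theta_a, fisher, lam):
    # Parameters that mattered for task A (high Fisher value) are pulled
    # back toward their old values; unimportant ones can move freely.
    return (lam / 2) * np.sum(fisher * (theta - theta_a) ** 2)

# total_loss = task_b_loss(theta) + ewc_penalty(theta, theta_a, fisher, lam)
print(ewc_penalty(theta, theta_a, fisher, lam))
```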
James Kirkpatrick at DeepMind summarizes where research stands on this remarkable problem: "We know that sequential learning is important, but we haven't gotten to the next stage yet, which is to demonstrate the learning that humans and animals can do. That is still way off. But we know that one thing that was considered to be a big block is not insurmountable."
How to Build an Effective Machine Learning Model
There is a systematic way to build machine learning models while maintaining efficiency and avoiding clunkiness. Soon enough, you'll have insight into how to create your own!
- Review the data first. Before even starting to develop a model, examine the data. Preparing a training data set takes time, and mistakes are common. Save yourself a headache by checking for problematic elements, like stray symbols, and verifying that you have the correct data in the right form (see the first sketch after this list).
- Detect rare events. Misclassifying rare events is a common issue when training on imbalanced data. To counteract this, deliberately rebalance the training data set by oversampling the rare class or undersampling the common one. The more balanced data, with its higher ratio of rare events, helps your algorithm learn to isolate abnormalities (see the second sketch after this list).
- Keep your model simple. As a rule of thumb, keep your model as simple as possible. Nothing elicits confusion and errors like a learning model too complicated to understand. Answer some basic questions, like what "good" and "bad" mean to your system (if attributes X, Y, and Z look like this, label it bad; otherwise, good) and how the model will integrate into your application.
- Combine a lot of models. Data scientists often use algorithms like gradient boosting and random forests to build lots of models quickly. Different algorithms fit different data sets, and you can detect the best fit by seeing which modeling method classifies the validation data best. Place more emphasis on that method (see the final sketch after this list, which also illustrates the next tip).
- Balance generalization. Generalization is the answer to overfitting, an unwanted result of overtraining the model on its training data: the model appears to classify perfectly until it is presented with new data and fails miserably. Good generalization mitigates this by ensuring the model also fits unseen data.
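First, the data review from the opening tip. This sketch uses pandas with a hypothetical CSV and column names; swap in your own file and fields.

```python
# A quick data-review sketch (hypothetical file and column names): look for
# missing values, stray symbols, and wrong dtypes before any modeling.
import pandas as pd

df = pd.read_csv("transactions.csv")  # hypothetical training data

print(df.dtypes)        # is each column the type you expect?
print(df.isna().sum())  # missing values per column
print(df.describe())    # ranges that reveal impossible values

# Flag rows where a numeric-looking column contains stray symbols
bad = df[~df["amount"].astype(str).str.match(r"^-?\d+(\.\d+)?$")]
print(bad.head())
```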
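Next, rebalancing for rare events. This sketch oversamples a made-up 2% minority class with scikit-learn's resample utility; class sizes are invented for illustration.

```python
# A minimal rebalancing sketch: oversample the rare class (with replacement)
# so the model sees it often enough to learn its shape. Data is synthetic.
import numpy as np
from sklearn.utils import resample

X = np.random.rand(1000, 4)
y = np.array([1] * 20 + [0] * 980)  # 2% rare events

X_rare, X_common = X[y == 1], X[y == 0]
X_rare_up = resample(X_rare, n_samples=len(X_common), replace=True,
                     random_state=0)  # oversample with replacement

X_balanced = np.vstack([X_rare_up, X_common])
y_balanced = np.array([1] * len(X_rare_up) + [0] * len(X_common))
print(np.bincount(y_balanced))  # -> [980 980]
```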
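Finally, a sketch covering the last two tips together: train several model types, compare them on held-out validation data, and watch the train/validation gap as an overfitting signal. The data is synthetic and the three model choices are illustrative.

```python
# Compare several models on validation data; a large train/validation gap
# signals overfitting, and the best validation score picks the winner.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25,
                                            random_state=0)

models = {
    "logistic":          LogisticRegression(max_iter=1000),
    "random_forest":     RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    train_acc = model.score(X_tr, y_tr)
    val_acc = model.score(X_val, y_val)
    print(f"{name}: train={train_acc:.3f}  val={val_acc:.3f}")
```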