Artificial intelligence, in computer science, is concerned with intelligence demonstrated by machines: the idea that computers should be able to understand or mimic the human mind in order to make decisions.
The ultimate question is whether a machine can possess a kind of general intelligence similar to a human’s. But even as machines grow more capable across a variety of tasks, they still require substantial manual human involvement.
With the growing fields of machine learning and deep learning, we are now closer than ever to creating such “intelligent agents”. Deep learning has revolutionized fields such as machine perception, image synthesis, natural language understanding, and control.
But almost all of these successes rely on supervised learning, an arduous process: collecting massive amounts of data, cleaning it up, manually labeling it, training and tuning a model purpose-built for one specific classification or regression task, and then using that model to predict labels for unseen data.
And as soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. A significant issue in supervised learning is that high-quality data is often hard to come by, and labeling millions of data objects is costly, time-intensive, and in many cases infeasible.
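To make the contrast concrete, here is a minimal sketch of the conventional supervised pipeline described above. It uses scikit-learn and its bundled digits dataset, both chosen here purely for illustration:

```python
# A minimal sketch of the conventional supervised pipeline (illustrative only;
# real projects start from raw, unlabeled data that someone must first label).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Every sample here already carries a human-assigned label -- the costly step.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a purpose-built classifier for this one task.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predict labels for unseen data; accuracy holds only while that data
# resembles the training distribution.
print("held-out accuracy:", clf.score(X_test, y_test))
```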
Self-supervised learning is a recent ML technique whose central idea is to develop a deep learning system that can “learn to fill in the blanks”. It can be understood as autonomous supervised learning: a representation-learning approach that removes the prerequisite of having humans label the data.
Self-supervised models extract supervisory signals from the relevant context and structure naturally embedded in the data itself. Fortunately, the approach is not limited to learning from visual cues or associated metadata in images and videos; it has use cases beyond computer vision.
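As a hypothetical illustration of “learning to fill in the blanks” (a toy sketch, not a method from this article), here is a small PyTorch example that masks one token in each input sequence and trains a network to predict it. The random token stream is a placeholder; in practice, real corpora supply the learnable structure:

```python
# Toy "fill in the blanks" pretext task: mask one position in each sequence
# and train the network to recover it. The targets come from the data itself,
# so no human annotation is involved. (Hypothetical toy model for illustration;
# the random tokens here carry no real structure to learn.)
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, BATCH = 32, 8, 64
MASK_ID = 0                                  # token 0 is reserved as [MASK]

model = nn.Sequential(
    nn.Embedding(VOCAB, 16),
    nn.Flatten(),                            # (BATCH, SEQ_LEN * 16)
    nn.Linear(16 * SEQ_LEN, 64),
    nn.ReLU(),
    nn.Linear(64, VOCAB),                    # logits over the masked token's id
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    seq = torch.randint(1, VOCAB, (BATCH, SEQ_LEN))  # "unlabeled" input batch
    pos = torch.randint(0, SEQ_LEN, (BATCH,))        # position to blank out
    target = seq[torch.arange(BATCH), pos]           # ground truth from the data
    masked = seq.clone()
    masked[torch.arange(BATCH), pos] = MASK_ID       # create the "blank"
    loss = loss_fn(model(masked), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key point is that the training target is read off the input itself, so no human annotation enters the loop.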
It may very well be the future of AI:
According to some leading AI researchers, it has the potential to improve a network’s robustness and uncertainty-estimation ability while reducing the cost of training machine learning models.
In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limitations of current deep learning methods and said, “This is what’s going to allow our AI systems, deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge.”
LeCun added, “The next revolution in AI will not be supervised, nor purely reinforced.”
Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it is hard to predict which approach will succeed in sparking the next AI revolution (or whether we will end up adopting an entirely different technique). But here is what we know about LeCun’s masterplan.