
Re: None

Thursday, 03/04/2021 12:16:47 PM

Facebook taught a computer vision system how to supervise its own learning process
By: Engadget | March 4, 2021

• The techniques that taught AI to translate speech are being applied to visual tasks

As impressively capable as AI systems are these days, teaching machines to perform various tasks, whether it’s translating speech in real time or accurately differentiating between chihuahuas and blueberry muffins, still involves some amount of hand-holding and data curation by the humans training them. However, the emergence of self-supervised learning (SSL) methods, which have already revolutionized natural language processing, could hold the key to imbuing AI with some much-needed common sense. Facebook’s AI research division (FAIR) has now, for the first time, applied SSL to computer vision training on random, uncurated images.

“We’ve developed SEER (SElf-supERvised), a new billion-parameter self-supervised computer vision model that can learn from any random group of images on the internet, without the need for careful curation and labeling that goes into most computer vision training today,” Facebook AI researchers wrote in a blog post Thursday. In SEER’s case, Facebook showed it more than a billion random, unlabeled and uncurated public Instagram images.

Under supervised learning schemes, Facebook AI head scientist Yann LeCun told Engadget, “to recognize speech you need to label the words that were pronounced; if you want to translate you need to have parallel text. To recognize images you need to have labels for every image.”

Unsupervised learning, on the other hand, “is the idea of a problem of trying to train a system to represent images in appropriate ways, without requiring labeled images,” LeCun explained. One such method is joint embedding, wherein a neural network is presented with a pair of nearly identical images — an original and a slightly modified, distorted copy. “You train the system so that whatever vectors are produced by those two elements should be as close to each other as possible,” LeCun said. “Then, the problem is to make sure that when the system is shown two images that are different, it produces different vectors, different ‘embeddings’ as we call them. The very natural way to do this is to randomly pick millions of pairs of images that you know are different, run them through the network and hope for the best.” However, contrastive methods such as this tend to be very resource- and time-intensive given the scale of the necessary training data.
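
To make that joint-embedding idea concrete, here is a minimal, hypothetical PyTorch sketch (not FAIR’s actual training code): a toy encoder maps two views of the same images to nearby unit vectors, and an InfoNCE-style contrastive loss treats every other image in the batch as a negative pair.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy stand-in for a real backbone such as a RegNet."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        # L2-normalize so similarity is just a dot product
        return F.normalize(self.net(x), dim=1)

def contrastive_loss(z1, z2, temperature=0.1):
    # Row i of z1 and row i of z2 are two views of the same image (positives);
    # every other pairing in the batch serves as a negative.
    logits = z1 @ z2.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

encoder = TinyEncoder()
view_a = torch.randn(8, 3, 64, 64)   # pretend: original images
view_b = torch.randn(8, 3, 64, 64)   # pretend: distorted copies of the same images
loss = contrastive_loss(encoder(view_a), encoder(view_b))
loss.backward()
```

In a real system the encoder would be a large backbone and the two views would come from heavy random augmentations of real photos; as LeCun notes above, the cost of pushing millions of such pairs through the network is exactly what makes contrastive training so expensive.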

Applying the same SSL techniques used in NLP to computer vision poses additional challenges. As LeCun notes, semantic language concepts are easily broken up into words and discrete phrases. “But with images, the algorithm must decide which pixel belongs to which concept. Furthermore, the same concept will vary greatly between images, such as a cat in different poses or viewed from different angles,” he wrote. “We need to look at a lot of images to grasp the variation around a single concept.”

And in order for this training method to be effective, researchers needed both an algorithm flexible enough to learn from large numbers of unannotated images and a convolutional network capable of learning effectively from that enormous volume of data. Facebook found the former in the recently released SwAV, which “uses online clustering to rapidly group images with similar visual concepts and leverage their similarities,” six times faster than the previous state of the art, per LeCun. The latter came in the form of RegNets, a family of convolutional networks that can scale to billions (if not trillions) of parameters while adapting to the available computing resources.
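
The SEER backbone itself is distributed through VISSL rather than torchvision, but as a rough illustration of the RegNet-as-feature-extractor pattern (and assuming a torchvision release recent enough to ship the RegNet family), something along these lines produces one embedding per image:

```python
import torch
from torchvision import models

# Illustrative only: a much smaller RegNet than SEER's billion-parameter model.
backbone = models.regnet_y_16gf()        # assumes a torchvision version that includes RegNets
backbone.fc = torch.nn.Identity()        # drop the classification head, keep the trunk
backbone.eval()

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)  # stand-in batch of RGB images
    embeddings = backbone(images)          # one feature vector per image
print(embeddings.shape)                    # (4, feature_dim)
```

SEER departs from this toy example by scaling the trunk to roughly a billion parameters and training it with SwAV-style online clustering instead of labels.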

The results of this new system are quite impressive. After its billion-parameter pre-training session, SEER managed to outperform state-of-the-art self-supervised systems on ImageNet, notching 84.2-percent top-1 accuracy. Even when it was trained using just 10-percent of the original dataset, SEER achieved 77.9-percent accuracy. And when using only 1-percent of the OG dataset, SEER still managed a respectable 60.5-percent top-1 accuracy.
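
As a rough, schematic picture of what those low-label evaluations involve (illustrative stand-ins only; the actual protocol is defined by the SEER paper and VISSL), the idea is to take the pretrained backbone, attach a fresh classification head, and fine-tune on the reduced labeled subset:

```python
import torch
import torch.nn as nn

num_classes = 1000                                  # ImageNet-sized label space
backbone = nn.Sequential(                           # stand-in for a pretrained encoder
    nn.Conv2d(3, 16, 3, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, num_classes)                   # fresh head trained on the small subset
opt = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters()), lr=0.01)

# Pretend this batch comes from the 10% (or 1%) labeled subset of the dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))

loss = nn.functional.cross_entropy(head(backbone(images)), labels)
opt.zero_grad()
loss.backward()
opt.step()
```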

Essentially, this research shows that, as with NLP training, self-supervised learning methods can be effectively applied to computer vision applications. With that added flexibility, Facebook and other social media platforms should be better equipped to deal with banned content.

“What we'd like to have and what we have to some extent already, but we need to improve, is a universal image understanding system,” LeCun said. “So a system that, whenever you upload a photo or image on Facebook, computes one of those embeddings and from that we can tell you this is a cat picture or it is, you know, terrorist propaganda.”

As with its other AI research, LeCun’s team is releasing both its research and SEER’s training library, dubbed VISSL, under an open source license. If you’re interested in giving the system a whirl, head over to the VISSL website for additional documentation and to grab its GitHub code.

Read Full Story »»»



DiscoverGold

Information posted to this board is not meant to suggest any specific action, but to point out the technical signs that can help our readers make their own specific decisions. Caveat emptor!
• DiscoverGold
