To generate more accurate, appealing, and personalized insights for users, Chief Executive Mark Zuckerberg has decided to launch an artificial intelligence research lab. The world's leading social network will use the project to enrich its news feed, offering users more relevant insights based on the content they share on their profiles.
Why artificial intelligence?
Zuckerberg and Mike Schroepfer, the social network's chief technology officer, attended the latest Neural Information Processing Systems conference along with New York University professor Yann LeCun, who was recently chosen to lead Facebook's new artificial intelligence research lab. During the conference, Zuckerberg talked about the project and explained that the company has been assembling a team of the best people in the field (perhaps one reason he and his team attended the summit was to scout for clever minds) in order to make the network's news feed more efficient.
Furthermore, he announced the acquisition of Mobile Technologies, a speech recognition and machine translation firm that will help Facebook expand its work from photo recognition to voice. According to Zuckerberg, the main purpose is to build services that are more natural to interact with and that can help solve far more problems than any existing technology.
The mastermind behind the project
NYU professor Yann LeCun is one of the world's leading machine learning scientists, and the social network has chosen him to head its artificial intelligence research lab, which will be split across its Menlo Park headquarters, a new AI lab built a block from NYU's campus in Manhattan, and its London offices. LeCun has been renowned for his work in artificial intelligence since the 1980s, when he developed an early version of the "back-propagation algorithm," which later became the standard way to train artificial neural networks. While working at AT&T Bell Laboratories, he created the convolutional network model, which imitates the visual cortex of living beings to build a pattern recognition system for machines. The model was later applied to handwriting recognition and optical character recognition, and many banks used it to read checks in the late 1990s and early 2000s.

All in all, LeCun's expertise lies in "deep learning" image and speech recognition systems; he has also directed his research toward visual navigation systems for self-driving cars, drones, and autonomous ground robots. It is quite clear why Facebook would benefit from his knowledge: his experience will contribute tremendously to improving how the network's news feed recognizes exactly what people want to see, how they want to organize their photos, and who knows what more!
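The core idea behind LeCun's convolutional networks can be sketched in a few lines: a small filter (kernel) slides across an image and responds strongly wherever its pattern appears, much like edge-sensitive cells in the visual cortex. The following is purely an illustrative sketch in Python with NumPy, not any of Facebook's actual code; the example image and kernel are made up for demonstration:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image
    and take a weighted sum at each position -- the basic building block
    of a convolutional network."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy vertical-edge detector: it responds where pixel intensity
# changes from left to right.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
edge_kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

response = conv2d(image, edge_kernel)
print(response)  # strongest response in the middle column, where the edge lies
```

In a full network, many such kernels are learned from data via back-propagation rather than hand-designed, and their responses are stacked in layers to recognize increasingly complex patterns.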