Artificial Intelligence (AI) has emerged as one of the most disruptive forces behind the digital transformation of business. At Microsoft, we believe everyone—developers, data scientists and enterprises—should have access to the benefits of AI to augment human ingenuity in unique and differentiated ways. We’ve been conducting research in AI for more than two decades and infusing it into our products and services. Now we’re bringing it to everyone through simple, yet powerful tools. One of those tools is Microsoft Cognitive Services, a collection of cloud-hosted APIs that let developers easily add AI capabilities for vision, speech, language, knowledge and search into applications, across devices and platforms such as iOS, Android and Windows.
To date, more than a million developers have discovered and tried our Cognitive Services, and many new customers are harnessing the power of AI. One of them is major auto insurance provider Progressive, best known for Flo, its iconic spokesperson. The company wanted to take advantage of customers’ increasing use of mobile channels to interact with its brand. Progressive used Microsoft Azure Bot Service and Cognitive Services to quickly and easily build the Flo Chatbot—currently available on Facebook Messenger—which answers customer questions, provides quotes and even offers a bit of witty banter in Flo’s well-known style.
Today, we’re announcing new milestones for Cognitive Services vision and search services in Azure.
Bringing vision capabilities to every developer
For years, Microsoft researchers have been pushing the boundaries of computer vision, building systems that identify images ever more accurately. The following milestones are just a few examples of how we’re integrating those research advances into our enterprise services.
Today, I’m pleased to announce the public preview of Custom Vision service on the Azure Portal (Figure 1). Microsoft Custom Vision service makes it possible for developers to easily train a classifier with their own data, export the models, embed these custom classifiers directly in their applications, and run them offline in real time on iOS, Android and many other edge devices. We built Custom Vision with state-of-the-art machine learning that offers developers the ability to train their own classifier to recognize what matters in their scenarios.
With a couple of clicks, Custom Vision service can be used for a variety of scenarios: retailers can create models that auto-classify images from their catalogs (dresses vs. shoes, etc.), social sites can more effectively filter and classify images of specific products, and national parks can detect whether images from cameras include wild animals. Last month, we also announced that Custom Vision service can export models to the CoreML format for iOS 11 and to the TensorFlow format for Android. The exported models are optimized for the constraints of a mobile device, so classification on the device happens in real time.
Figure 1: Custom Vision service, now available in Azure preview
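To make the workflow above concrete, here is a minimal sketch of consuming a trained classifier through the Custom Vision prediction endpoint. The endpoint URL, project ID placeholder, key placeholder, and JSON field names are assumptions for illustration (taken from the public API shape, not from this announcement); substitute the values shown for your own project in the Custom Vision portal.

```python
# Hypothetical endpoint and key -- replace with your project's values
# from the Custom Vision portal. These are illustrative assumptions.
PREDICTION_URL = (
    "https://southcentralus.api.cognitive.microsoft.com/"
    "customvision/v2.0/Prediction/<project-id>/url"
)
PREDICTION_KEY = "<your-prediction-key>"

def build_prediction_request(image_url):
    """Assemble headers and body for a classify-by-URL call,
    ready to pass to an HTTP client of your choice."""
    return {
        "url": PREDICTION_URL,
        "headers": {
            "Prediction-Key": PREDICTION_KEY,
            "Content-Type": "application/json",
        },
        "json": {"Url": image_url},
    }

def top_prediction(prediction_response, threshold=0.5):
    """Return the highest-probability tag name from a prediction
    response, or None if nothing clears the threshold."""
    preds = prediction_response.get("predictions", [])
    best = max(preds, key=lambda p: p["probability"], default=None)
    if best is None or best["probability"] < threshold:
        return None
    return best["tagName"]
```

A retailer’s catalog pipeline, for instance, could feed each product image URL through `build_prediction_request`, post it with any HTTP library, and route the item by the tag `top_prediction` returns.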
The Face API is a generally available cloud-based service that provides face and emotion recognition. It detects the location and attributes of human faces and emotions in an image, which can be used to personalize user experiences. With the Face API, developers can help determine if two faces belong to the same person, identify previously tagged people, find similar-looking faces in a collection, and find or group photos of the same person from a collection.
Starting today, the Face API integrates several improvements, including million-scale recognition to better support customers’ large-scale vision scenarios (Figure 2). Million-scale recognition adds a new type of person group that holds up to a million people, and a new type of face list that holds up to a million faces. With this update, developers can teach the Face API to recognize up to 1 million people and still get lightning-fast responses.
Figure 2: The Face API now integrates several improvements, including million-scale recognition
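As a rough sketch of how the Face API’s detection call might be consumed, the snippet below builds a detect request and reduces the response to face rectangles plus the strongest emotion per face. The endpoint region, key placeholder, and response field names are assumptions based on the service’s public REST shape, not details from this announcement.

```python
# Hypothetical endpoint and key -- replace with your Azure resource's
# region and subscription key. These are illustrative assumptions.
FACE_DETECT_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
SUBSCRIPTION_KEY = "<your-face-api-key>"

def build_detect_request(image_url, attributes=("age", "gender", "emotion")):
    """Assemble headers, params, and body for a detect-by-URL call,
    ready to pass to an HTTP client of your choice."""
    return {
        "url": FACE_DETECT_URL,
        "headers": {
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        "params": {"returnFaceAttributes": ",".join(attributes)},
        "json": {"url": image_url},
    }

def summarize_faces(detect_response):
    """Reduce a detect response (a list of face objects) to each face's
    bounding rectangle plus its highest-scoring emotion."""
    summaries = []
    for face in detect_response:
        emotions = face["faceAttributes"]["emotion"]
        summaries.append({
            "rect": face["faceRectangle"],
            "emotion": max(emotions, key=emotions.get),
        })
    return summaries
```

From here, the face IDs returned by a real detect call can be passed to the identify operation against a person group, which is where the new million-person groups come into play.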
Harnessing search capabilities for every developer
Another key area of AI investment has been search: everyone around the globe can use Bing Search to gather rich information from the web, and we’re also empowering developers to leverage that capability through multiple search APIs. With a few lines of code, developers can embed search into any app to help users find the right information across the knowledge of the planet.
Part of the search capabilities of Cognitive Services, Bing Entity Search brings rich context about people, places, things and local businesses to any app, blog or website for a more engaging user experience. I’m also pleased to announce that Bing Entity Search is now generally available on the Azure Portal.
With Bing Entity Search, developers can identify the most relevant entity for a searched term and surface primary details about that entity (Figure 3). Entities span multiple international markets and entity types, including famous people, places, movies, TV shows, video games and books.
Many scenarios can be covered with Bing Entity Search: for instance, a messaging app could provide an entity snapshot of a restaurant, making it easier for a group to plan an evening. A social media app could augment users’ photos with information about the locations of each photo. A news app could provide entity snapshots for entities in the article.
Figure 3: Augmenting content with entity search results
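The scenarios above boil down to one pattern: send a query, then pull the dominant entity out of the response. Here is a minimal sketch of that pattern; the endpoint URL, key placeholder, and response field names are assumptions drawn from the API’s public JSON shape rather than from this announcement.

```python
# Hypothetical endpoint and key -- replace with your own subscription
# key from the Azure Portal. These are illustrative assumptions.
ENTITY_SEARCH_URL = "https://api.cognitive.microsoft.com/bing/v7.0/entities"
SUBSCRIPTION_KEY = "<your-entity-search-key>"

def build_entity_request(query, market="en-US"):
    """Assemble headers and query parameters for an entity search call,
    ready to pass to an HTTP client of your choice."""
    return {
        "url": ENTITY_SEARCH_URL,
        "headers": {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        "params": {"q": query, "mkt": market},
    }

def primary_entity(search_response):
    """Pick the entity flagged as dominant in a response, falling back
    to the first entity; return its name and description, or None."""
    entities = search_response.get("entities", {}).get("value", [])
    for ent in entities:
        info = ent.get("entityPresentationInfo", {})
        if info.get("entityScenario") == "DominantEntity":
            return {"name": ent.get("name"),
                    "description": ent.get("description")}
    if entities:
        return {"name": entities[0].get("name"),
                "description": entities[0].get("description")}
    return None
```

A messaging app, for example, could call `primary_entity` on the response for a restaurant name and render the returned name and description as an inline snapshot card.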
Today’s milestones illustrate our commitment to making our AI platform suitable for every business scenario, with enterprise-grade tools that make application development easier while respecting customers’ data.
I invite you to visit www.azure.com/ai to learn more about how AI can augment and empower your digital transformation efforts. We’ve also launched the AI School to help developers get up to speed with these AI technologies.