Microsoft Azure Cognitive Services is a set of cloud-based services (SDKs, APIs, and tools) that help developers incorporate machine learning and AI into their existing applications, making those applications more engaging and intelligent.
There are several types of Cognitive Services available through Azure. These services are sometimes referred to as Artificial Intelligence (AI) services. The main categories are described below.
These services are used to help apps understand user commands, recognize the context of text for better search results, fix spelling and grammar errors, translate text into another language, and make suggestions for what to type next as a user is typing. The following are some of the APIs under Language AI:
- Bing Spell Check API performs contextual grammar and spell checking.
- Language Understanding service (LUIS) helps understand what a person wants in their own words. It uses machine learning to understand natural language from the user and extract relevant information.
- Linguistic Analysis APIs provide access to natural language processing (NLP) tools that identify the structure of text. Currently, they provide three types of analysis – sentence separation and tokenization, part-of-speech tagging, and constituency parsing.
- Text Analytics API helps perform sentiment analysis, key phrase extraction, and language detection over raw text data.
- Microsoft Translator Text API helps perform text-to-text language translation and supports more than 60 languages.
- Microsoft Web Language Model API is a REST-based cloud service that provides tools for natural language processing on web-scale data.
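As a concrete illustration, the Text Analytics API accepts a JSON body containing a list of documents, each with an id, language, and text. The sketch below builds such a request body and the authentication headers; the region in the endpoint URL and the API version are assumptions, so substitute the values from your own Azure resource.

```python
import json

# The region ("westus") and API version in this URL are assumptions;
# replace them with the endpoint of your own Text Analytics resource.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.1/sentiment"

def build_sentiment_request(texts, language="en"):
    """Build the JSON body the sentiment endpoint expects:
    a list of documents, each with an id, language, and text."""
    return {
        "documents": [
            {"id": str(i + 1), "language": language, "text": t}
            for i, t in enumerate(texts)
        ]
    }

def build_headers(subscription_key):
    # Azure Cognitive Services authenticate with this subscription-key header.
    return {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    }

body = build_sentiment_request(["I loved the movie.", "The service was slow."])
print(json.dumps(body, indent=2))
```

The service's response scores each document's sentiment between 0 (negative) and 1 (positive), keyed by the same ids sent in the request.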
These services are used for real-time speech translation, helping to overcome speech barriers such as speech impediments, thick accents, and background noise. They can also convert speech to text and text to speech. The following are the APIs under Speech AI:
- Bing Speech API enables developers to build speech-enabled features in their applications, like voice command control, user dialog using natural speech conversation, and speech transcription and dictation. It supports both speech-to-text and text-to-speech conversion.
- Custom Speech Service enables you to create customized language models and acoustic models tailored to your application and your users. The two main components of the service are the acoustic model, a classifier that labels short fragments of audio as one of a number of phonemes (sound units) in a given language, and the language model, a probability distribution over sequences of words.
- Speaker Recognition APIs provide advanced algorithms for speaker verification and speaker identification. In simple words, they help identify who is talking. The Speaker Verification API can automatically verify and authenticate users using their voice, while the Speaker Identification API can identify which enrolled person is speaking within a group.
- Translator Speech API provides a streaming API to transcribe conversational speech from one language into text in another language.
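The text-to-speech side of these services takes its input as SSML (Speech Synthesis Markup Language) rather than plain text. Below is a minimal sketch of wrapping a sentence in an SSML envelope; the voice name used here is an assumption, so check the voices available in your Speech resource before relying on it.

```python
# A minimal sketch of building an SSML payload for a text-to-speech request.
# The default voice name below is an assumption; pick a voice listed for
# your own Speech resource.
def build_ssml(text, lang="en-US",
               voice="Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)"):
    """Wrap plain text in the SSML envelope a text-to-speech endpoint expects."""
    return (
        f"<speak version='1.0' xml:lang='{lang}'>"
        f"<voice xml:lang='{lang}' name='{voice}'>{text}</voice>"
        "</speak>"
    )

ssml = build_ssml("Hello from Azure Speech.")
print(ssml)
```

The resulting string would be POSTed as the request body, with the response returned as an audio stream in the format named by the request's output-format header.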
These services are used to help apps identify and tag people’s faces in photographs. They can recognize emotions, automatically moderate content, and index images and videos. The following are the APIs under Vision AI:
- Computer Vision API helps to process images with advanced algorithms and returns information about them; for example, it can analyze an image and extract the text it contains (OCR).
- Content Moderator provides machine-assisted moderation of user-generated content on online and social media websites, chat and messaging platforms, enterprise environments, gaming platforms, and peer communication platforms.
- Custom Vision Service helps build custom image classifiers. It makes it easy and fast to build, deploy, and improve an image classifier.
- Emotion API returns the confidence across a set of emotions for each face in a particular image.
- Face API enables face attribute detection and face recognition.
- Video Indexer uses AI to analyse, edit and process videos within your app. Some of the functionalities include visual text recognition (extracts text displayed in the video), face tracking and identification (detects faces in a video), and voice activity detection (separates background noise from voice activity).
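To make the Computer Vision OCR call concrete, the sketch below assembles the URL, headers, and JSON body for analyzing a remote image. The region and API version in the base URL, and the example image URL, are assumptions; replace them with your own resource's values.

```python
import json
from urllib.parse import urlencode

# The region ("westus") and API version in this URL are assumptions;
# use the endpoint of your own Computer Vision resource.
BASE = "https://westus.api.cognitive.microsoft.com/vision/v2.0/ocr"

def build_ocr_request(image_url, language="unk", detect_orientation=True):
    """Return (url, headers, body) for an OCR request on a remote image."""
    params = urlencode({
        "language": language,  # "unk" asks the service to auto-detect the language
        "detectOrientation": str(detect_orientation).lower(),
    })
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-key>",  # placeholder, not a real key
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url})
    return f"{BASE}?{params}", headers, body

url, headers, body = build_ocr_request("https://example.com/sign.jpg")
print(url)
```

The response describes the detected text as regions, lines, and words with bounding boxes, which the app can then reassemble or overlay on the image.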
These services are used to recommend items to customers, convert complex information into simple answers, and help make interactive search more natural. They can also integrate academic information into apps and make simple decisions on their own. These services learn through experience, so they get smarter and more precise over time.
- Custom Decision Service converts your content into features for machine learning. Some applications include personalizing video content on a media portal, optimizing ad placements on a web page, or ranking recommended items on a shopping website.
- QnA Maker is a tool that helps you create your own question-and-answer (FAQ-style) chatbot from existing content.
These services are used for more precise and accurate results when searching news articles, images, videos, websites, and documents. They provide intelligent autosuggest options and are integrated with the Bing search engine.
- Bing Autosuggest API: Give your app intelligent autosuggest options for searches.
- Bing Image Search API: Bring advanced image and metadata search to your app.
- Bing News Search API: Link your users to robust and timely news searches.
- Bing Video Search API: Trending videos, detailed metadata, and rich results.
- Bing Web Search API: Connect powerful search to your apps.
- Bing Entity Search API: Brings information about entities that Bing determines are relevant to a user’s query.
- Bing Custom Search API: Build tailored web search experiences, with customisation, sorting, and filtering of results.
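All of the Bing Search APIs above follow the same pattern: a GET request with the query in the `q` parameter and the subscription key in a header. The sketch below builds a Bing Web Search request; the endpoint host is an assumption, so check the endpoint shown for your key in the Azure portal.

```python
from urllib.parse import urlencode

# The endpoint host below is an assumption; use the endpoint shown
# alongside your key in the Azure portal.
SEARCH_ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/search"

def build_search_request(query, count=10, market="en-US"):
    """Return (url, headers) for a Bing Web Search GET request."""
    params = urlencode({"q": query, "count": count, "mkt": market})
    headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}  # placeholder
    return f"{SEARCH_ENDPOINT}?{params}", headers

url, headers = build_search_request("azure cognitive services")
print(url)
```

The other Bing endpoints (images, news, videos, entities) differ mainly in the URL path and the shape of the JSON they return, so one small helper like this covers most of the family.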