Artificial Intelligence is a branch of computer science that focuses on creating applications that mimic the cognitive functions we associate with the human mind, such as learning and problem solving. The goal of an AI application is to analyze its environment and take actions that maximize its chances of success. AI does this through the use of mathematics, computational intelligence, statistics, and probability. Artificial Intelligence has use cases in almost every business or field of study and is commonly applied to problems such as search, reasoning, knowledge representation, learning, natural language processing, computer vision, and robotics.
Cloud Computing and Big Data are merging into a trend that utilizes both remote computing and large-scale computation: Cognitive Computing. Massive data sets of the world around us are compiled every second (images, videos, audio, and text) and we need to quickly and accurately sift through that data to reach meaningful conclusions. Microsoft Cognitive Services implement cognitive computing and employ machine learning to provide actionable insights using vision, speech, language, knowledge, and search APIs.
Bots interact with your users naturally wherever they are – from your website or app to Cortana, Skype, Office 365 mail, Slack, Facebook Messenger, Skype for Business and more. Cognitive Services enable your bot to see, hear, interpret and interact in more human ways. Azure Bot Service provides a foundation for building custom bots to allow humans to interact with machines in productive ways.
Effective cognitive computing requires easy-to-use service endpoints that apps can call with images, audio, and other media and data, and that return straightforward, usable results from sophisticated cognitive systems. Let your students get their hands on cognitive algorithms quickly using Microsoft’s Cognitive Services APIs. There are many entry points into the exploration of cognitive computing.
The Computer Vision API identifies people and objects with a reported level of confidence. It identifies individuals: what they look like, what they are wearing, their approximate age and demographic, what they are doing, and whether they are part of a group. It identifies objects, such as buildings, houses, natural features such as rivers or mountains, or household items such as dinner rolls or flowers, and places them in a context such as a city, a plate of bread, or a train station. Tags denote the notable aspects of the image, the most prominent or identifiable elements that help determine what the image is “about”.
MS Learn: Explore Computer Vision
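As a quick illustration, here is a minimal sketch of calling the Computer Vision analyze operation over REST from Node.js 18+; the endpoint, API version, and key shown are placeholders, so check the current Computer Vision documentation for the exact values for your resource.

```typescript
// Sketch: request tags and a description for an image via the
// Computer Vision REST API. Endpoint and key are placeholders.
const endpoint = "https://<your-resource>.cognitiveservices.azure.com";
const key = "<your-subscription-key>";

async function analyzeImage(imageUrl: string): Promise<void> {
  const response = await fetch(
    `${endpoint}/vision/v3.2/analyze?visualFeatures=Tags,Description`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: imageUrl }),
    }
  );
  const result = await response.json();
  // The caption and tags indicate what the image is "about".
  console.log(result.description?.captions?.[0]?.text);
  console.log(result.tags?.map((t: { name: string }) => t.name));
}

analyzeImage("https://example.com/sample.jpg").catch(console.error);
```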
The Face API imbues your apps with the ability to identify a person using an image of their face. The API compares two images containing faces and reports on how well they match up. This is accomplished using proportions of the head, hair color, and facial landmarks such as eyes, eyebrows, nose, and lips.
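The gist of that comparison can be sketched against the Face REST API's detect and verify operations. The resource endpoint and key below are placeholders, and access to the Face service is gated, so treat this as an illustration rather than a ready-to-run sample.

```typescript
// Sketch: detect a face in each image to obtain face IDs, then ask the
// verify operation how confident it is that the two faces are the same person.
const endpoint = "https://<your-face-resource>.cognitiveservices.azure.com";
const key = "<your-subscription-key>";

async function detectFaceId(imageUrl: string): Promise<string> {
  const response = await fetch(`${endpoint}/face/v1.0/detect?returnFaceId=true`, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json" },
    body: JSON.stringify({ url: imageUrl }),
  });
  const faces = await response.json();
  return faces[0].faceId; // assumes at least one face was found
}

async function verifySamePerson(urlA: string, urlB: string): Promise<void> {
  const [faceId1, faceId2] = await Promise.all([detectFaceId(urlA), detectFaceId(urlB)]);
  const response = await fetch(`${endpoint}/face/v1.0/verify`, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json" },
    body: JSON.stringify({ faceId1, faceId2 }),
  });
  const { isIdentical, confidence } = await response.json();
  console.log(`Same person: ${isIdentical} (confidence ${confidence})`);
}
```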
The Custom Speech API provides a powerful speech recognition system exposing acoustic models and language models for customization. Identifying and verifying a particular speaker is a next step in speech cognition and is provided by the Speaker Recognition API.
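As a starting point before any customization, a plain speech-to-text call with the Speech SDK for JavaScript (the microsoft-cognitiveservices-speech-sdk npm package) might look like the sketch below; the key, region, and file name are placeholders.

```typescript
// Sketch: recognize speech from a WAV file with the baseline model.
// A Custom Speech deployment would additionally set speechConfig.endpointId
// to point the recognizer at your customized model.
import * as fs from "fs";
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

const speechConfig = sdk.SpeechConfig.fromSubscription("<your-key>", "<your-region>");
const audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync("question.wav"));
const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

recognizer.recognizeOnceAsync((result) => {
  // result.text holds the transcription when recognition succeeds.
  console.log(result.text);
  recognizer.close();
});
```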
While speech recognition determines what a person is saying, language understanding extracts deeper meaning such as topic, sentiment, and desire. Build custom language models to interpret what a person wants using the Language Understanding Intelligent Service (LUIS). Learn to map human utterances in natural language to entities and intents to know what object or person someone is talking about, how they feel about it, and what they would like to see happen with it.
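Once a LUIS app is published, querying it for an utterance's top intent and entities is a single REST call. The sketch below follows the v3.0 prediction API; the prediction endpoint, app ID, and key are placeholders, and the sample utterance is purely illustrative.

```typescript
// Sketch: send an utterance to a published LUIS app and read back
// the top-scoring intent and the entities it extracted.
const predictionEndpoint = "https://<your-luis-resource>.cognitiveservices.azure.com";
const appId = "<your-app-id>";
const predictionKey = "<your-prediction-key>";

async function interpret(utterance: string): Promise<void> {
  const url =
    `${predictionEndpoint}/luis/prediction/v3.0/apps/${appId}/slots/production/predict` +
    `?subscription-key=${predictionKey}&query=${encodeURIComponent(utterance)}`;
  const response = await fetch(url);
  const result = await response.json();
  console.log("Top intent:", result.prediction.topIntent);
  console.log("Entities:", result.prediction.entities);
}

interpret("Book me a flight to Cairo next Tuesday").catch(console.error);
```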
Explore the ability to search complex data using natural language queries using the Knowledge Exploration Server (KES). Define your own data schema and populate it with your data. Construct query grammars used to parse language requests and extract and filter data, then host your query engine as a service online. Employ natural language understanding to evaluate queries, offer intelligent recommendations, query auto-completion, and semantic search.
Azure Search allows you to use the same tools that Bing and Office use to perform search operations and extract information across large amounts of data.
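A single full-text query against an existing search index can be sketched over REST as shown below; the service name, index name, API version, and query key are placeholders for your own resource.

```typescript
// Sketch: run a full-text query against an Azure Search index over REST.
const serviceName = "<your-search-service>";
const indexName = "<your-index>";
const queryKey = "<your-query-key>";

async function search(query: string): Promise<void> {
  const url =
    `https://${serviceName}.search.windows.net/indexes/${indexName}/docs` +
    `?api-version=2020-06-30&search=${encodeURIComponent(query)}`;
  const response = await fetch(url, { headers: { "api-key": queryKey } });
  const { value } = await response.json();
  // Each hit is a document from the index, scored by relevance.
  value.forEach((doc: Record<string, unknown>) => console.log(doc));
}

search("rooms with a sea view").catch(console.error);
```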
A bot is an app that users interact with in a conversational way. Bot conversations can range from a basic guided dialog with pre-defined responses to a sophisticated interactive experience that leverages cognitive computing to determine user desires and sentiments.
Building a bot requires a development toolkit and a testing environment. Get started with the Bot Framework, a platform for building, testing, and deploying bots. It includes a Bot Builder SDK with support for .NET, Node.js, and REST. Bot Builder conversations can use simple text or rich cards that contain text, images, and action buttons. Manage and deploy your bot with the Bot Framework portal, which provides a central repository for your bots and a way to deploy them to a web page.
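The core of a Bot Builder bot is an activity handler that reacts to incoming activities. A minimal echo bot with the Node.js SDK (the botbuilder npm package) might look like the sketch below, which leaves out the hosting and adapter wiring that the Bot Framework templates generate for you.

```typescript
// Sketch: a minimal echo bot built on the Bot Builder SDK for Node.js.
// The hosting piece (HTTP server plus adapter) is omitted here.
import { ActivityHandler, TurnContext } from "botbuilder";

export class EchoBot extends ActivityHandler {
  constructor() {
    super();

    // Runs for every message the user sends to the bot.
    this.onMessage(async (context: TurnContext, next) => {
      await context.sendActivity(`You said: "${context.activity.text}"`);
      await next(); // let any further handlers run
    });

    // Greet users as they join the conversation.
    this.onMembersAdded(async (context, next) => {
      await context.sendActivity("Hello! Send me a message and I will echo it back.");
      await next();
    });
  }
}
```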
Build bots quickly using Azure Bot Service, an online tool for bot development built upon the Bot Framework. Choose from a range of templates: a basic interaction, highly structured forms that guide particular conversations such as ordering a sandwich, natural language understanding to determine user intent, proactive alerts to notify users of events, and an FAQ template to answer users’ most common questions.
Build bots in your browser without the need for a text editor or source control, or choose the continuous integration option and use your own source code control such as GitHub, Bitbucket, or Visual Studio. After developing and testing your bot, deploy it to pre-configured channels such as Teams, Skype, or Web Chat, as well as Bing, Cortana, Facebook Messenger, Kik, and Slack.
Azure Bot Service is an implementation of the Bot Framework using Azure Functions, which allows your bot to run in serverless, scalable containers.
QnA Maker gives you the ability to move your documentation and knowledge-base to the cloud, thereby allowing you to create a fully functional question and answer service.
You create the questions and the answers, and you train the solution to recognize typical questions from your users. You provide the answers, and the service does the rest of the work for you.
When connected to your chatbot, you can deploy a fully functional customer service solution as a first line of response, quickly helping your customers while limiting the number of calls your customer service representatives have to take directly.
As your users continue to query the system, the system will leverage machine learning to get better at responding to the questions as they come in.
As QnA Maker is a hosted application, you can choose the tier that is appropriate to your needs. QnA Maker also supports over 50 languages, so you can choose the language that resonates best with your users.
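Under the hood, a published knowledge base answers questions through a single generateAnswer call. The sketch below shows the shape of that request; the hostname, knowledge base ID, and endpoint key are placeholders taken from the deployment details of your own QnA Maker resource, and the sample question is illustrative.

```typescript
// Sketch: ask a published QnA Maker knowledge base a question and print
// the best-matching answer along with its confidence score.
const host = "https://<your-qna-resource>.azurewebsites.net";
const kbId = "<your-knowledge-base-id>";
const endpointKey = "<your-endpoint-key>";

async function askKnowledgeBase(question: string): Promise<void> {
  const response = await fetch(`${host}/qnamaker/knowledgebases/${kbId}/generateAnswer`, {
    method: "POST",
    headers: {
      Authorization: `EndpointKey ${endpointKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ question, top: 1 }),
  });
  const { answers } = await response.json();
  const best = answers?.[0];
  console.log(`${best?.answer} (score: ${best?.score})`);
}

askKnowledgeBase("What are your opening hours?").catch(console.error);
```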
Now that you have researched and evaluated the tools Azure offers for working with AI and bots, it is time to put one into action.
To complete this lesson, you will work through a challenge to create an Azure Web App Bot that leverages QnA Maker to automatically respond to questions.
Good Luck, and Have Fun!
Exercise 2: Create an Azure Web App Bot
Exercise 3: Deploy a bot with Visual Studio Code