Check If the Dishwasher Is Clean With Google Assistant
by Toglefritz in Circuits > Gadgets
You know when you go to reheat that leftover lasagna in the microwave and you need a plate to put it on. So you open the cabinet where the plates are stored but… Oh no! There are no plates left. Next, you open the dishwasher, but then you are faced with a quandary: is the dishwasher clean or dirty?
You take a plate from the dishwasher and inspect it for signs of dirtiness. You notice a speck of detritus along the rim. Did the dishwasher just do a sub-par job, or did it not run at all? You take a second plate from the dishwasher to inspect it as well…
WAIT! Just ask your Google Home if the dishwasher is clean or dirty:
“Hey, Google, is the dishwasher clean?”
“The dishwasher is clean.”
Perfect, now you can reheat and enjoy your lasagna.
With the Dishes Action for the Google Assistant, simply tell your Google Home or other Google Assistant-equipped device when you load and unload the dishwasher. Then, when you or somebody else in your house goes to retrieve a dish, the Google Assistant will be there to let them know either that the dishes are clean or that nobody ran the machine, so the dishes are dirty.
In this Instructable
This Instructable will teach you how to build a simple "app" for the Google Assistant that will track and report whether your dishwasher is clean or dirty. Instead of using detective work to determine if the dish you are about to eat from is actually clean, you will be able to simply ask Google. The Action will work anywhere the Google Assistant is found: on a Google Home, on your Android phone, on your Wear OS watch, and anywhere else you can talk to Google Assistant. The overall process demonstrated in this Instructable can be used to track all kinds of things around your house.
Create Your Actions on Google Project
Creating Actions on Google, that is to say the "apps" that Google Assistant uses to communicate with users, might seem intimidating at first. After all, the Google Assistant is an incredibly sophisticated system powered by machine learning, AI, natural language processing, and all kinds of other high-level and cutting-edge computing technologies. But actually, getting involved in this emerging ecosystem is not too difficult at all.
Creating a new Project
The first step is creating a new Actions on Google project. All actions are designed, deployed, and maintained in the Actions on Google Console. Head over there now.
There are some useful links across the top of the page if you want to dig into a bit more documentation before getting started. Otherwise, click the Add/import project button in the middle of the page. You will be presented with a dialog box to enter the name of your new project. After entering a name, click the button to create the project.
You will then be prompted to choose a category for your new project. If your project fits neatly into a particular category, feel free to select one. Otherwise, you can skip this step if you find it difficult to shoehorn your project into a category.
Create an Invocation for Your Project
Now that you have your new Actions on Google project created, you will be taken to the Overview page of your project. This is actually a useful page because it contains a kind of roadmap for all the tasks needed to launch your Action.
The first step in building out your Action is setting up an Invocation. An Invocation is essentially how the user "launches" your action. Enter the Invocation section by selecting Invocation under the Setup section of the menu on the left side of the screen.
There are three pieces of information to fill out in this section. The first is the Invocation Name. When a user wants to use your Action, they would say something like "OK Google, talk to [your Invocation Name]." You want your Invocation Name to be easily remembered by your users so they can use your Action easily. When you enter your Invocation Name into the appropriate field, the Console will show an example of the verbal phrase a user would say to invoke your Action.
Next is the Directory Title. The Directory Title is used as the title for your Action in the Google Assistant directory. Potential new users of your Action might discover your action in the directory.
Finally, you can choose the voice Google Assistant will use for your Action. This field is purely a matter of preference. If you've played with the different voices on your Google Home or other Google Assistant device, you already know how these sound.
When you are done completing all three sections of the Invocation area, click the Save button in the upper-right corner of the screen.
As you make changes in the Actions on Google Console, your progress will be tracked on the Overview page.
Create an Action for Your Project
As shown in the roadmap on the Overview page, the next step in building our Action is to create the Action itself. This is where the real work of creating a new Action for Google Assistant begins. Start by selecting the Actions button from the main menu. Then, assuming you haven't already created an action, click the button to add one.
The Actions on Google Console has a number of built-in intents and templates that you can use as a starting point for your own actions. Otherwise, you can create a new intent from scratch as we will do in this Instructable. Choose the best option for your Action and then click Build.
Dialogflow
Developers (a group to which you now belong!) for the Google Assistant use a specialized tool for creating the logic Google Assistant uses to interact with users, called Dialogflow. When you click the Build button, you will be taken to the Dialogflow tool. If you have never used the tool before, you will be prompted to grant Dialogflow certain access to your Google account. Just review this information and continue until you enter the actual Dialogflow tool.
To get started building the logic behind your new Action, you will first need to create an Agent. Do so using the button in the upper-right corner of the screen. It will take Dialogflow just a moment or two to create the Agent.
Create an Intent for Your Action
With your Agent created, the next step is to create an Intent within Dialogflow. "An intent represents a mapping between what a user says and what action should be taken by your software." Intents are the logic behind the Google Assistant. Intents allow Google Assistant to understand what a user says, and respond appropriately, processing any additional logic or actions in between.
So, for example, when you say "OK Google, what's the weather today?" An Intent allows Google Assistant to interpret what you said, determining that you are requesting information about the weather; use resources at the disposal of the Assistant to retrieve weather information; and to respond to your query in a structured way that you can understand.
This all sounds really complicated, but it really is not bad at all.
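To make the idea concrete, here is a minimal sketch, in plain JavaScript, of what an Intent does conceptually: map whatever the user said to the right piece of logic. The real Google Assistant uses machine-learned natural language understanding, not keyword matching, and the intent names and responses here are purely illustrative.

```javascript
// Toy model of intent matching. Each "intent" pairs a name with the
// keywords that suggest it and a handler that produces a response.
const intents = [
  { name: 'get_weather', keywords: ['weather', 'forecast'], handler: () => 'It is sunny today.' },
  { name: 'tell_joke',   keywords: ['joke', 'funny'],       handler: () => 'Why did the plate blush? It saw the dish soap.' },
];

// Find the first intent whose keywords appear in the utterance.
// Google's real system generalizes far beyond literal keyword hits.
function matchIntent(utterance) {
  const text = utterance.toLowerCase();
  return intents.find(i => i.keywords.some(k => text.includes(k))) || null;
}

const intent = matchIntent("OK Google, what's the weather today?");
console.log(intent.name);      // get_weather
console.log(intent.handler()); // It is sunny today.
```

The point is simply that an Intent is the bridge between an utterance and the code (or canned response) that answers it.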
Create your Intent
To get started creating your first Intent click the button in the upper-right corner of the screen or select the "create the first one" link in the middle of the page. For the example Action in this Instructable, I will be creating three Intents. One is used to set the status of the dishwasher to clean, and one is used to set the dishwasher status to dirty. The other Intent is used to get the status of the dishwasher.
First of all, create and name each Intent your Action will need. At the top of the page there is a text field for entering a name for your Intent. After naming your Intent, click the Save button. If you go back to the main Intents page, you will find a list of your Intents, along with a few default Intents.
Add Training Phrases to Your Intent
Depending upon the complexity of your Intent, there are several different fields you will fill out. We will start with a field all Intents require, Training Phrases. Training phrases are fed into Google Assistant's AI-powered natural language processing engine to allow Assistant to understand what users say. In the Training Phrases section, enter things a user might say when using your Action. You don't need to worry about entering every single phrase a user could potentially say. Rather, enter a few examples that will allow Google Assistant to extrapolate the user's intent if they say something close to the phrases you enter, even if it does not match exactly. You only need one training phrase, but including a couple will help Google Assistant better understand your users.
When you are done entering phrases, click the Save button again. You will notice a message in the lower-left corner of the screen indicating that the system has begun training Google Assistant.
Create an Entity for Your Intent
You could create an Intent that is a simple call and response. For example, you might create an Intent (this is one that is used a lot) in which the user says something like "OK Google, tell me a joke" and Google Assistant responds with a joke. In this example, you would have a training phrase capturing the user's intention to hear a joke, and you would have a joke the Google Assistant would say in response (more on responses in a moment).
However, the most interesting Actions incorporate some kind of logic to make the system more flexible. In the example Action in this Instructable, Google Assistant will track whether the dishes in the dishwasher are clean or dirty. This additional information is stored in an Entity. Entities can be a very powerful and flexible way of adding intelligence to Google Assistant's responses to users. In this Instructable, however, we will create a simple Entity that will basically act as a variable. To create an Entity, select Entities from the menu on the left side of the page.
First of all, give your Entity a name using the field at the top of the page. In the next step, we will plug this name into our Intent to allow the Intent to access and/or change the entity value.
Speaking of values, in the fields below the name field, enter the possible values for your Entity. In this Instructable, the Entity can have two values: clean or dirty.
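Conceptually, an Entity at its simplest is just a named set of allowed values, and recognizing an Entity in an utterance means spotting one of those values in what the user said. The sketch below models that in plain JavaScript; the entity name matches the one used later in this Instructable, but the matching logic is a stand-in for Dialogflow's real recognizer.

```javascript
// A minimal model of an entity: a name plus its allowed values.
const dishwasherStatusEntity = {
  name: 'dishwasher_status',
  values: ['clean', 'dirty'],
};

// Return the first entity value found in the utterance, or null.
// Dialogflow also handles synonyms and fuzzy matches; this does not.
function extractEntityValue(entity, utterance) {
  const words = utterance.toLowerCase().split(/\W+/);
  return entity.values.find(v => words.includes(v)) || null;
}
```

So an utterance like "The dishwasher is clean" yields the value "clean", while a phrase with no entity value yields nothing.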
After giving your Entity a name and values, click the Save button in the upper-right corner of the screen.
Add the Entity to Your Intent
Now that you've created an Entity to store information needed for the Google Assistant to respond to a user, return to the Intent you added Training Phrases to earlier. Below the Training Phrases section, there is a section entitled Action and Parameters. It is in this section that your Intent will interact with your Entity.
First, supply a Parameter Name. This is a label for the piece of information the Google Assistant is trying to obtain from the user. In this Instructable, the user will basically be telling Google Assistant the value of the parameter. However, if you build a more complex Action, Google Assistant may have to ask the user for parameters. For example, if the user told Google Assistant, "OK Google, add to my shopping list," you might have a parameter containing the name of a grocery item. Google Assistant might then ask the user for the grocery item to add since the user did not provide this information straight away. For the purposes of this Instructable, we will just enter a parameter name for tracking the status of the dishwasher.
The next field, Entity, will contain the name of the Entity we just created. To reference the entity, type @ followed by its name (for example, @dishwasher_status).
The last field (or at least the last field we will be discussing for the simple Action in this Instructable), Value, contains the value to set for the parameter. In this case, for the "Set Dishwasher Status to Dirty" Intent, we will set the status parameter to dirty when the user says "OK Google, the dishes are dirty" or a similar phrase.
Google can actually use its natural language processing chops to automatically obtain a value for your parameters from whatever the user says to Google Assistant. Try adding another Training Phrase using one of the values for your Entity. When you press enter, Dialogflow will automatically tie the phrase to an Entity value.
For example, in the Dishes Action in this Instructable, if I give Dialogflow the Training Phrase "The dishwasher is clean" and press enter, Dialogflow will automatically recognize the word "clean" and attach it to the Entity.
Once you are done hooking up your Entity and your Intent, click the Save button.
Create Simple Responses for Your Intents
The final step before our Intent is complete is to add Responses. This section is fairly easy to understand. Responses are the messages delivered back to the user, either by text, Google Assistant voice, or both, after the Intent is processed.
Some Responses are very simple. For example, in the "Set Dishwasher Status to Dirty" Intent, after the user tells Google Assistant that the dishes are dirty, the Assistant basically echoes this information back to confirm. The image in this step shows this simple type of Response. To create simple Responses like these, simply enter the response in the field at the bottom of the Intent screen.
Then click the Save button to save your work.
What about more complex responses?
Now, where things get just a tad trickier is when you want to create Responses that are not just phrases Google Assistant echoes back to the user. What if you want to make the Responses dynamic so that they change depending upon information previously given to Google Assistant?
For example, in this Instructable we have an Action for tracking whether the dishwasher is clean or dirty. We have one Intent that sets the dishwasher status to dirty, and Google Assistant echos back a simple confirmation. We have another Intent that sets the dishwasher status to clean, and Google Assistant again echos back a confirmation.
But then we have the other Intent that will allow the user to obtain the current status of the dishwasher. The Response to this Intent is not just a canned phrase for Google Assistant to echo back. For this Intent, the Response will change depending upon the status of the dishwasher. To create this kind of dynamic Response we will need to add one more element to our other Intents, a Context. Let's do this in its own step.
Add a Context to Your Intents
As the previous step teased, we will need to add a Context to our Intents in order to allow them to share information. If you hover over the little icon next to the Context field inside your Intents, it will give a very useful piece of information:
Can be used to "remember" parameter values, so they can be passed between intents.
This is exactly what we want to do!
There are two types of Contexts: input contexts and output contexts. These basically determine the direction in which your parameters are shared. An Intent can share parameters with other Intents; this is an output context. An Intent can also retrieve parameters shared by other Intents; this is an input context. Or an Intent can do both.
So, for the Intents used to set parameters, in this example the "Set Dishwasher Status to Clean" and "Set Dishwasher Status to Dirty" Intents, set an output context to whatever you want to call your Context.
Then, in the Intent that will be retrieving parameter values, in this case the "Get Dishwasher Status" Intent, set the input context to match.
Of course, after each change to your Intents, make sure to click the Save button.
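The flow of information through a Context can be sketched in plain JavaScript. This is a stand-in model, not the Dialogflow internals: the "dishes" context name matches the one used in this Instructable, and the plain object stands in for the context store Dialogflow maintains during a conversation.

```javascript
// A stand-in for Dialogflow's per-conversation context store.
const contexts = {};

// The "Set Dishwasher Status" intents write a parameter into an
// OUTPUT context named 'dishes'.
function setStatus(status) {
  contexts['dishes'] = { dishwasher_status: status };
}

// The "Get Dishwasher Status" intent reads the parameter back out of
// the matching INPUT context.
function getStatus() {
  const ctx = contexts['dishes'];
  return ctx ? ctx.dishwasher_status : null;
}

setStatus('clean');
console.log(getStatus()); // clean
```

Note that the store only exists for the lifetime of the conversation, which foreshadows the bug discussed a few steps from now.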
Create Dynamic Responses for Your Intents
Now that your Intents have input and output contexts, they will be able to share parameter values with each other. That means your Responses can be dynamic, changing their content based on user-supplied information.
Get the Entity Value from the Context
So, to enable an Intent to retrieve an Entity value and use it in its Response, first we need to set a parameter to contain the information. Just like before, set the Parameter Name to be anything you'd like. Then, in the Entity field, link the parameter to your Entity. Here comes the tricky bit: rather than getting the value for the parameter from the user, we will get the previously-set value. So, in the Value field, use the format below.
#<context>.<entity>
So, for the example in this Instructable, I would enter a value to get the dishwasher_status entity from the input context, like so:
#dishes.dishwasher_status
Use the Parameter Value in the Response
Now that you have a parameter to store the Entity value set by a different Intent, we can use that value in the Response. Rather than simply typing a canned Response, we will use a reference to the Parameter Name we entered earlier. So, your Response will look something like:
The dishes are $status
where the parameter name will be filled in from the Entity value.
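The substitution Dialogflow performs here can be sketched in a few lines of JavaScript. The template syntax matches the Response above; the fill function itself is a hypothetical stand-in for what Dialogflow does internally when it renders your Response.

```javascript
// Replace each $name reference in a response template with the
// corresponding parameter value.
function fillResponse(template, params) {
  return template.replace(/\$(\w+)/g, (match, name) =>
    params[name] !== undefined ? params[name] : match);
}

console.log(fillResponse('The dishes are $status', { status: 'clean' }));
// The dishes are clean
```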
Hit that Save button again to make certain you save this tricky bit of work.
Test Your Action
By this point you've probably noticed the panel on the right side of the screen. This area allows you to test your Action by typing in example queries to Google Assistant and getting responses back. Along the way, you can observe how the Action is setting and getting parameter values.
It is very important to thoroughly test your Action before moving on. Simply type in queries and make certain Google Assistant responds as expected.
Add Your Action to Your Project
Alright! We've been fiddling around in Dialogflow for quite a while now. By this point we've created an Action that encompasses several different Intents. The user can interact with Google Assistant and Google Assistant will respond back appropriately.
Now we need to add our newly-created Action to our Project. Unless you closed it, your Project in the Actions on Google Console should still be hanging out in another tab. Hop back over there. From the Actions section, you should.... not see the Action you just spent so long creating. Not to worry though, just press the button.
Then, right when you thought you were done using Dialogflow, you will be prompted to go back and enable integration between Dialogflow and Google Assistant. Fortunately this is really easy. In Dialogflow, choose the Integrations tab from the menu on the left. Then, in the first section at the top of the page, click "Integration Settings."
You will be presented with a form for linking your Action to Google Assistant. In the second section, simply fill in your Intents. When you are done, press "Test."
You will then be sent back to the Actions on Google Console, where your Intents should now appear in the Actions menu where they were missing before.
Test Your Action (Again)
Alright, so we've done some great work so far but before we start on the final few items shown in the tracker on the Overview tab, we will go ahead and test the entire Google Assistant action from within the Actions on Google Console. To access the testing system, click the Simulator tab in the left navigation menu.
The testing we did earlier in Dialogflow is great for testing the logic between Intents and Responses, but the Simulator allows you to test your Action in an environment that very closely matches the behavior of a real Google Assistant device.
In the Simulator tab, in the section on the right, there are several different device form factors to test against. In this case, we will choose the middle icon, the speaker, which will test your Action as if it were running on a Google Assistant-equipped speaker, like the Google Home.
Then, in the panel on the left, you can type queries to the Google Assistant as if you were speaking them aloud. The Simulator will offer suggested queries. The first important test you will want to run is telling Google Assistant to talk to your Action; this is like opening an app on a mobile device. Simply type "Talk to Dishes: Dishwasher Status."
The Simulator will give you a verbal response to your queries just like a Google Home would, plus it will give you a written response. Test all the various Intents you designed in Dialogflow to make sure they all work.
Everything seems to be working, right? Well...
A Bug in our Action
In its current state, our new Action for Google Assistant works well. It recognizes utterances from the user and translates those queries into intents using natural language processing so it can recognize commands even if the user says them in various different ways.
You could just move on to deploying your new Action and call it a day. However, there is a problem affecting the broader usability of our new Action. Try the following test. Refresh your browser to get a clean instance of the Simulator. Then, as before, type "Talk to Dishes: Dishwasher Status." But this time, rather than first telling Google Assistant whether the dishes are clean or dirty, just ask it straight away if the dishwasher is clean. Your Action will break. The Google Assistant will not be able to get an answer to your question and you will be presented with an error code in the right panel: MalformedResponse
What's going on here?
Remember how we set up a Context for our Intents that allowed them to share parameters? Well, the thing is, the Context only exists while Google Assistant has your Action "open." As soon as Google Assistant concludes its conversation with your Action, the Context disappears. So, when you go straight to asking Google Assistant about the dishwasher status without first giving it a status, it is looking for a parameter that does not exist.
Solving this problem will require slightly more advanced tools, and it will require some coding (it is fairly remarkable that we've gotten this far without writing any code). So put your coding hat on if you want to continue and make your Action perform perfectly by giving it the ability to save information between conversations. Don't worry, it is not too complicated.
Enable Webhook Calls for Intents
Rather than just writing responses in Dialogflow, we will use Webhooks instead. Basically, instead of using pre-written, fixed responses, we will enable Google Assistant to use responses generated by code we will write. In order to enable Google Assistant to use coded rather than fixed responses, we will have to enable Webhook calls for each of our three Intents.
So, for each of the Intents, at the very bottom of the page, there is a section entitled Fulfillment. Within this section, there is a toggle for enabling webhook calls. Remember to click the Save button after flipping on the switch for each Intent.
Create Fulfillment Code
Alright, with all of the Intents set to use webhook calls in order to respond to user queries based on code, we need to actually write the code. Start by accessing the Fulfillment tab from the left navigation.
Next we need to enable the Inline Editor using the toggle switch next to that section. When you do this, the coding section below the switch will be enabled. If you want to simply copy/paste the code, you will find it below. Just place it into the coding area and press the Deploy button below the code entry area. It will take Dialogflow a few minutes to deploy your code.
How does the Code Work?
The code is not particularly difficult to understand. Let's start from the bottom. At the bottom of the code, you will find a series of intentMap.set()
statements. These statements tell the code which function to run when a particular Intent is triggered. For example, the statement, intentMap.set('Set Dishwasher Status to Clean', setStatusCleanHandler);
says to trigger the function called setStatusCleanHandler()
when the "Set Dishwasher Status to Clean" Intent is triggered.
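The intentMap pattern can be sketched without the Dialogflow library at all: it is simply a Map from intent names to handler functions, plus a dispatcher that looks up and runs the right one. The handler names below mirror the fulfillment code, but the response strings and the dispatch function are illustrative stand-ins.

```javascript
// A plain Map from intent names to handler functions, mirroring the
// intentMap.set() calls in the fulfillment code.
const intentMap = new Map();
intentMap.set('Set Dishwasher Status to Clean', () => 'Nice work! The dishes are clean.');
intentMap.set('Set Dishwasher Status to Dirty', () => 'Okay, the dishes are dirty.');

// Look up the handler for a triggered intent and run it. The real
// WebhookClient does this (and much more) when a request arrives.
function dispatch(intentName) {
  const handler = intentMap.get(intentName);
  if (!handler) throw new Error(`No handler for intent: ${intentName}`);
  return handler();
}

console.log(dispatch('Set Dishwasher Status to Clean'));
// Nice work! The dishes are clean.
```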
Above this section, there are a series of functions, one for each of the Intents in our Action, including the default Intents. For the Intents that set the dishwasher status to clean or dirty, the functions look like this:
function setStatusCleanHandler(agent) {
  let conv = agent.conv(); // Get Actions on Google library conv instance
  conv.data.status = 1; // Save the status: 1 is clean, 0 is dirty
  conv.close('Nice work! The dishes are clean.'); // Use Actions on Google library
  agent.add(conv); // Add Actions on Google library responses to your agent's response
}
The secret sauce here is the conv.data.status = 1;
line. This line saves the dishwasher status in such a way that it will be available between separate conversations with Google Assistant.
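The behavior the Action relies on can be modeled with a plain persistent store. In this sketch the library's saving mechanism is replaced with an ordinary object keyed by a hypothetical user ID; the point is only that the status written in one conversation is still readable in the next.

```javascript
// Stand-in for storage that outlives a single conversation.
const userStorage = {};

// Called when a "set status" intent fires: record the status so it
// survives after the conversation with the Action ends.
function saveStatus(userId, status) {
  userStorage[userId] = { status };
}

// Called when the "get status" intent fires, possibly in a brand-new
// conversation: read the last saved status, or null if none was set.
function loadStatus(userId) {
  const data = userStorage[userId];
  return data ? data.status : null;
}
```

A null result is exactly the case the if/else in getStatusHandler has to cope with: a user who asks about the dishwasher before ever setting a status.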
The function to set the dishwasher status to dirty is basically identical to the function above. The third function, the one used to get the dishwasher status, is below:
function getStatusHandler(agent) {
  let conv = agent.conv(); // Get Actions on Google library conv instance
  // If the status is clean, tell the user
  if (conv.data.status == 1) {
    conv.close('The dishwasher is clean.');
  } else {
    conv.close('Oh no. The dishes are dirty.');
  }
  agent.add(conv); // Add Actions on Google library responses to your agent's response
}
This function is very simple as well. An if/else statement looks up the saved dishwasher status and tells the user whether the dishes are clean or dirty.
All three of these functions contain conv.close
statements. These contain the verbal responses Google Assistant will deliver to the user. They will also close the Dishes Action.
Fill in Directory Information
We are so, so close to releasing our new Action to an eagerly waiting public. However, before we can release our Action, we need to fill in some information that will be used in our Action's listing in the Directory, which is like the Play Store but for Google Assistant Actions.
From the left navigation, select the Directory information tab. If you take a look at the Overview tab, there will also be a link to this section under the "Get Ready for Deployment" section. The form in the Directory information section will be used to capture all the information Google needs to publish your Action in the Directory. As you fill out the form, you will find a preview of the listing for your Action on the right side of the screen.
After you finish filling out the form, hit the Save button; this time, the button is located at the bottom of the page.
Deploy Your Action
Alright, the time has finally come to release our brand new Action for the Actions on Google platform. All we need to do now is submit our work for review so that, assuming everything works correctly, it can be released for public use. Exciting!
From the left navigation select the Release tab. There is not much to do here. If you want to test your app with a smaller, more controlled group before releasing it to the general population, you can create an Alpha or Beta release for your app. Otherwise, if you are ready to release your app to the world at large, hit the button to submit your Action for review. After a couple of business days, you will get a status update about the review process. You will be able to track the status of your request at the bottom of the Release tab.
Extra Credit: Modifying Welcome Responses
You've probably noticed by now, after testing your Action a few times, that Google Assistant will greet the user when they invoke your Action. Well, in this step we will customize those greetings. The standard greetings are things like "hello," "greetings," "hi," and so on. To add your own welcome messages, we will hop back into Dialogflow and access the Intents screen.
You may have noticed during some of our previous work, that Dialogflow has a pre-set Intent called Default Welcome Intent. This is a special Intent that runs when the user first "opens" your Action. Open this Intent.
Just like the other Intents, there are a list of responses Google Assistant will use when the Intent is triggered. To add your own custom greetings to your Action, simply change these Responses to your own messages.
As usual, click Save when done.