2020: Omnichannel presentation

Planning and writing for voice UI

First there was Siri. And then Cortana, Alexa, and Nest. Voice UI is the new hot thing – but how do you write in the right voice for… voice?

This session is for strategists and UX content creators alike. Planning and writing for a voice interaction means considering the complete experience – across all channels. In this session, you’ll learn how to plan for it, as well as tools to make the process easier.

 

What you’ll learn


  • How voice UI is different from written content
  • How to build a strategy that accounts for voice UI use cases
  • Why your company voice needs to change for voice UI
  • How to write and test content for voice UI



OmniXConf 2020 – Marli Mesibov – transcript (powered by Happy Scribe)


Hello and good afternoon, and welcome back to this last session of the day before the final Q&A panel. I'm super excited because we've got Marli Mesibov from Mad*Pow, which is one of my favorite agencies ever in the States. They do really good work. And she is VP of Content Strategy there. She will tell you everything about it. So I'm excited for two reasons. One is those things that you see on the slide right at the moment, but also something about Marli.

That's her superpower: she knits these beautiful things, and there is a lovely, lovely story to go with those knits that are her trademark. And also because, like most of you, I'm used to things talking back at me from my car, my phone, my watch, all these objects that used to be inanimate, that now have a personality and have a voice. But the really interesting question is, how do they get the voice that they get? And if you want to be working in that field, how do you need to think about the conversations that we have with these objects?

I don't want to spoil it or give you any misleading information, so I will hand it over to Marli, who is going to talk to us about planning and writing for voice UI. Just remember that you have a handy-dandy Q&A button on your dashboards, so please write in your questions as they come up. If we have time at the end of the session, we'll get to ask them to Marli, or you can ask them at the end of today in the final Q&A session.

Marli, it's up to you now. Thank you so much, Alberta. As Alberta said, I am the VP of Content Strategy at Mad*Pow. And the reason that I've been at Mad*Pow for five years now is that our work focuses on people. It focuses on complex industries like health care and finance. Particularly in the US with health care, there can be so much complexity to it and so much that people need to do.

But there's also a very low level of understanding around it, because it's so complex and yet it is so important. Right? We think about what people need to succeed in their day-to-day life. And I love that thought of healthy, wealthy, and wise. Right? That's what we all need to be. And yet there's so much complexity to it. But the other piece is that you can't take something like health, as we are all finding very much so today with COVID, and separate it from the rest of life.

You can't have health care over here in one little pocket and the rest of your life separate and expect it to work that way. If we could, then we would all be in Amsterdam right now together. But instead, because the health of the world has suddenly become at risk, we are all experiencing what it's like to deal with a chronic condition or an autoimmune illness, or any sort of situation where suddenly you're very, very aware of all the things that are naturally part of your life that you normally don't need to think about.

And that kind of brings me to our topic today, which is another thing that has been sometimes developed in isolation but is really a part of an overarching experience, and that is these devices that live in our kitchens and our bedrooms, on our phones, in our cars. Now, I don't know about you, but there is one in my house just a few feet away. And if I refer to it by its name, it will start responding.

So rather than calling it Google or Alexa or Siri, I will be referring to it as a Charlie. That way, hopefully, if any of you are listening to this on speaker, you also won't get a little voice piping up in the back: "How can I help you?" And I love having a Charlie in my house. There are a lot of things that it does that are absolutely great, but this is also a common situation in my house.

My husband will be cooking, and we probably use the Charlie for timers more than anything else, because while he's cooking, his hands are in something. He'll say, "Hey Charlie, set a timer for ten minutes." "OK, timer for ten minutes, starting now." A couple of minutes go by. Now maybe he's placing something in the oven and checking on something else, and he says, "Hey Charlie, how much time is left on that timer?" "I'm sorry, you have no current timers."

"OK. Charlie, set a timer for seven minutes." "Timer for seven minutes, starting now." A few more minutes go by. Now there's something else being mixed, there's checking on something in the oven and something on the stove, and he says, "Hey Charlie, how much time is left on the timer?" "You have two timers set. Which one would you like to check?" Then my husband starts swearing. Charlie asks if that's a song he wants to play.

It's a mess. And ultimately, what it comes down to is this: out of all of the voice recognition progress we've gone through, as of 2017, which is now three years ago, they were really proud of the fact that the error rate was down to less than five percent. The thing is that since 2017, I have not been able to find any new statistics saying that we've gotten any better.

So as far as I can tell, we are still at about just below a five percent error rate. And that sounds pretty good, but it actually means that out of every 20 times that you give a command to your Charlie, one of those is going to be misunderstood, which is why it's such a common occurrence for my husband to start swearing at Charlie. But there's more to it than that, because, you see, my husband has a pretty broadly Americanized white male accent.
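Marli's one-in-twenty figure compounds quickly across a cooking session. As a rough back-of-the-envelope sketch (assuming each command is misrecognized independently, which real sessions aren't quite):

```python
def p_at_least_one_error(error_rate: float, commands: int) -> float:
    """Probability that at least one of `commands` voice commands is misheard,
    assuming independent recognition attempts (a simplification)."""
    return 1 - (1 - error_rate) ** commands

print(round(p_at_least_one_error(0.05, 1), 3))   # 0.05  -> one in twenty
print(round(p_at_least_one_error(0.05, 10), 3))  # 0.401 -> a ten-command session
```

So even a "reliable" 5% per-command error rate means roughly a 40% chance that a ten-command session contains at least one swear-inducing misunderstanding.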

And speech recognition actually performs worse for me, because my voice is at a higher timbre, and when Charlies were built, they were predominantly built and tested by people who spoke like my husband. So what does that mean for those of us who rely on them? And I'll get into this a little more, because ultimately most of us are not building Charlies from scratch, but we are reliant on them. You see, this has become a key element of an omnichannel experience.

This voice UI is how people are getting information about what to trust in terms of COVID, and where to go to protest for Black Lives Matter. Our Charlies are so much a part of our lives. They are part of how we cook, how we learn what our doctor said, and how we start a checklist for what we need to pick up or what we want to order from the grocery store. And if you look at the UX pyramid, we have a really great opportunity to make our experiences of our Charlies better.

Because at the bottom of the pyramid is functional. That means that, quite simply, the technology exists. Then we start moving into reliable. And once we hit that five percent error rate, wow, that's pretty reliable. It's not perfect, but it's reliable enough to start making sure that it can be used in the right situations, so that it's not going to mess up your cooking. But we're not up in the more subjective characteristics of the experience yet. We haven't gotten to the point where it's convenient or enjoyable or meaningful and significant in our lives.

And yet, if you watch the commercials for many of the Charlies, they talk about that delight, right? That significance, that meaningful aspect: create memories, be part of your life. What we're aiming for is what one of my favorite books talks about. I'm a big Broadway nerd, and I absolutely adore West Side Story. And one of the things that I love is what I read about the original West Side Story on Broadway.

It was this quote from Jean Rosenthal, who was the lighting designer who designed that show along with many other amazing shows. And she points out that her job as a lighting designer is essentially not to make people say, "What great lighting." She wants people to say, "What a beautiful sunset," or even, "Did you notice how he looked at her?" If her lighting is appropriate, nobody's talking about the lighting. She says that when no one notices a thing on stage except the actors.

Then you've done your job as it should be done. There's another quote very similar to this: "Tomorrow's devices should be unobtrusive, something that dissolves into your life." This is from a product leader at Spotify. And that's what we're going for here. We don't want people to say, "What a great Charlie." We want them to say, "Gosh, cooking is so much easier," or even, "This is the best roast I've ever made."

Because the Charlie helped make it so. That's not to say we haven't done anything in terms of voice technology. Look back at the early 1980s, or even 1971, which was the first time that a call system worked where you could call in and get a response to your voice; there was one created by IBM in 1971. Then we get to the late 1980s, when dictation software like Dragon, which some people may be familiar with, was being created.

We get into the 2000s: the first voice search app for the iPhone was launched in 2008. We started getting our Charlies in 2014, and in 2016, Microsoft announced that speech recognition had reached human parity, which I had a little trouble believing, because I don't think there's actually a five percent error rate when people understand each other. But what do I know? I'm just a content strategist. Anyhow, we've come a long way, but we've still got farther to go.

And one of the ways I think we can do that is by clarifying what we mean when we talk about voice UI. You see, there's voice UI that can listen to you, like dictation software, Google Voice, anything that does a transcript. These have been around longer; for a long time, the accessibility world has been making sure that can happen, working on all sorts of adaptations for it. Then there's the UI that can listen and respond.

And that's sort of this newer world that's existed predominantly since 2014. That is all of our Charlies. These are things that need software that will tell them what to respond to, that need spreadsheets and organization and a content strategy behind not only taking what was said and trying to put those words on paper, but taking what was said and figuring out what's appropriate in response, given the context. And that's really what we're getting at today: not just our chatbots, which many of you may be building and which require some of the same understanding, but really something bigger than that.

So when we talk about our Charlies, the broader term for those might be AI assistants. And unless you work for Google or Microsoft or Amazon, you're probably not building your own AI assistant. What we're doing is more in the skills section, the apps. I have an example here: Alexa skills. Right? These are essentially downloadable software for these AI assistants. It's typically transactional, but it can also be informational or educational. They get downloaded.

They need to work with the voice of the actual assistant, but it's a voice and tone that needs to match your organization. And they need to respond to trigger words, the same way chatbots and the AI assistants do. But it's a smaller piece of it. So if you're not doing anything with voice UI, if you just came here to learn what it's about, you might have this idea in your head that we're building the next C-3PO, that we're creating Jarvis. And we're not, really.

We're actually building a product. We're building software that C-3PO would be able to download so that he could understand one more language. And for good products, we need to think about what the purpose is. We need to make sure that it's comfortable to interact with, thinking back to that UX pyramid. Yes, C-3PO needs to be significant and meaningful; he's got a personality. But the app that we're building?

We are quite simply trying to create content that works. We don't need to be creating something delightful. We keep trying to jump to the top of the UX pyramid, but we're not even consistently reliable and usable yet. So let's go back to basics and focus on what it means to create voice UI that is usable and intuitive and really human. Let's make it useful to people. Here are a couple of tips, the actionable part of my talk.

I want to make sure that when you go back to work later this afternoon or next week or whenever, and somebody says to you, "I think we need voice UI," or "We should be the next Amazon," or whatever it might be, you've got some ideas around what would make that useful, what would make that informative, what would make that usable. As we said, there are the chatbots, there's the voice UI, and there are the assistants themselves.

They may all constitute conversational interfaces, and so some of the advice I have here is going to work for all of them. There are things that chatbots can certainly learn from the way that we approach voice UI, and vice versa. But I'm also going to try to call out what makes voice UI special or different. The first thing, though, is to think of it as a conversation. Some of you may already be doing this. When we talk about UX writing, we talk about making sure there's a conversation, where headers are the questions and paragraphs are the answers.

Right? You think about what the end user says and what you respond. When it comes to voice UI, the end user is actually saying something. And thinking about it this way, writing out the variations in what someone might ask and how you would respond, that's the way to set your end user up for success. Also, when you write things out in a conversational way, you're using language more naturally. You're using language that your audience will recognize.
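The "write out the variations" step can be sketched in code. This is a minimal, hypothetical illustration, not a real skill platform's API: intent names and phrasings are made up, and a real system would use trained language understanding rather than substring matching.

```python
# Map many utterance variations onto a small set of intents.
# Intent names and phrasings are illustrative assumptions.
VARIATIONS = {
    "check_timer": [
        "how much time is left",
        "how long is left on the timer",
        "check the timer",
    ],
    "set_timer": [
        "set a timer for",
        "start a timer for",
    ],
}

def match_intent(utterance: str) -> str:
    """Return the first intent whose known phrasing appears in the utterance."""
    text = utterance.lower()
    for intent, phrasings in VARIATIONS.items():
        if any(p in text for p in phrasings):
            return intent
    return "fallback"

print(match_intent("Hey Charlie, how much time is left on that timer?"))  # check_timer
```

The content-strategy work is the table itself: enumerating the ways your audience actually phrases a request, in their language, before any code is written.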

You end up getting away from saying "screen" or "device" and instead start saying things like "laptop" or "phone." And you start thinking about the tone: why is someone asking this question, and do you need to be comforting here, or simply factual? One example comes from something that's almost pretending to be a chatbot but is essentially just screens, not even dynamic content. Lemonade, the insurance company, does a great job of creating conversation, and I think there's a lot we can learn from how they write out their conversation as we build this into voice UI.

So everything's written as a question and then an answer. In this case, our chatbot, our guide, whatever you might call her, introduces herself as Maya and just asks if you're ready to go. She doesn't ask, "What's your name?" But she does ask for a first name and last name through the form fields. And once you say, "Let's do this," moving forward, she reinforces that she heard you. She says, "Great to meet you, Marli."

Not just "great to meet you." So there's some dynamic content there, and that way I have a little more trust, a little more faith that when I provide more information, we'll move forward. There's also an assumption of context here, because when I enter my home address, she doesn't say, "Do you rent or own at your home address?" She says, "Do you rent or own it?" That question only makes sense if it specifically follows the one before it.
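Both of those Lemonade moves, echoing captured details and leaning on conversational context, can be sketched as simple response templates. The field names here are assumptions for illustration, not Lemonade's actual implementation:

```python
def greet(first_name: str) -> str:
    # Echo the captured name back: "Great to meet you, Marli" builds more
    # trust than a generic "Great to meet you."
    return f"Great to meet you, {first_name}."

def ownership_question(previous_field: str) -> str:
    # "Do you rent or own it?" only works directly after an address prompt,
    # where "it" has a clear antecedent. Otherwise, spell it out.
    if previous_field == "home_address":
        return "Do you rent or own it?"
    return "Do you rent or own your home?"
```

The branch in `ownership_question` is the writer's modularity decision made explicit: the short, natural phrasing is only safe when the flow guarantees its context.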

These are things that we need to consider when we build a voice UI. Is the next question that you might ask always going to come next? Or are you creating multiple different questions and answers that could come together or stand alone? How modular is this content? Which brings me to the second recommendation. Our second piece of advice here is to make sure that you're using natural language. This is important not only for that conversational element, but also because there isn't a lot of trust built yet.

So when I say natural language, the reason I say that, instead of saying "keep it short" or "provide all the information," is because we're actually trying to hit a middle ground here. On the one hand, you do need to provide more information. As I said, trust is low. We want to do what Lemonade does and reinforce that we heard someone correctly.

When somebody says, "What's the capital of Alaska?" we can't just say "Juneau," because that doesn't let them know that we definitely heard they were asking for the capital of Alaska. Someday we might be able to do that; we're not there yet. So today, when someone says, "What's the capital of Alaska?" you want to respond with "The capital of Alaska is Juneau." At the same time, we as humans are not great at just listening to things. There's a reason that even in a talk like this, I've got something visual for you to look at. Many people try to take notes during something like this.
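The "restate what you heard" pattern is easy to express as a response template. A minimal sketch, with a tiny made-up lookup table standing in for a real knowledge source:

```python
CAPITALS = {"Alaska": "Juneau", "Oregon": "Salem"}  # hypothetical stand-in data

def answer_capital(state: str) -> str:
    capital = CAPITALS.get(state)
    if capital is None:
        return f"I'm not sure about the capital of {state}."
    # Restating the entity confirms what was heard without rambling:
    # "The capital of Alaska is Juneau," not just "Juneau."
    return f"The capital of {state} is {capital}."
```

Note that even the failure case echoes the recognized entity, so the user can tell whether the system misheard the question or simply lacks the answer.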

It's really hard to just listen and retain information. And so if we went on too long and said, "The capital of Alaska, which is a state within the United States...", it's just not going to stay with somebody. Much like in writing, we vary sentence structure. It's not all short, or else it will sound choppy. It can't all be long either. And with the written word, a lot of times we'll use something brief and then go into more detail.

With voice UI, we really need to accept it for what it is. Right? That's a big part of OmnichannelX: using the right channel at the right time. And we need to acknowledge that often, with voice UI, people are looking for a short answer, a quick answer. It is transactional in that way. Think of cars, for one thing. Think of when somebody is driving and is asking their voice UI, whether it's built into the car or on their phone.

Think of the fact that they are driving. We don't want to take up all their time, and we don't want to let their mind wander. We want to give them the answer and let them move forward. But there's more to it than that. We also need to think about the fact that today we've got a whole bunch of different commands being created by different people in different situations. When you're building voice UI, look at what already exists, and help your end user not have to remember different commands for eight different ways to order pizza.

For example, in the Pizza Hut app, you have to say, "Charlie, ask Pizza Hut to place an order." But in the Domino's Pizza app, you ask Charlie to place your "easy order." Trying to remember that back and forth, and dealing with the potential error messages when you do one versus the other, is super frustrating. We need to think about this to make it easier. Piece of advice number three: let's talk about context. Context is complicated.

When you're on a website, you've got navigation that tells you where you are. Sometimes you've got dynamic content simply because we know where you came from. You've got the fact that you can see the headers and the titles. We don't have any of that with voice UI. We need to create context for the end user. So like I said before, if somebody asks, "What's the capital of Alaska?" you can provide some context by saying, "The capital of Alaska is..." Right? We need to think of these as modular content.

If you missed Carrie Hane's talk earlier, I highly recommend checking out the recording, or checking out her book. She talks a lot about structuring content, and that's the nuts and bolts and the tech side of what we're not going to get into so much here, but it's what this conversation is built on. So when somebody has no headers, no images, and their user flow could be coming from literally anywhere, we can't make assumptions about the context that they have.

We need to instead sometimes literally map out every single possible context, so that we can make sure that our voice UI is appropriate regardless of where they're coming from. We can also provide context for end users by thinking through those flows, like I said, mapping out all of the different reasons someone might ask something, and then making a conscious choice to say: we are going to provide additional information in this direction, because we think that's the most likely or the most helpful in this case.

For example, in the States, you can't just see any doctor. Hashtag broken health care system. You need to see a doctor who takes your insurance. So when somebody asks Charlie, "Where's the nearest doctor to me?" they probably want to know what insurance that doctor takes, or they might be asking for the nearest doctor who takes their insurance. And that's something they may not even realize when they're asking, or may not know about. It's an area where our Charlies can actually increase the value of what they're providing by offering additional context and saying: the nearest doctor is 0.3 miles away.

"This doctor takes the following insurance. Do you want to find a doctor close to you who takes a specific insurance?" Right? We can provide some additional context and kind of guide our listener. Similarly, if somebody is asking, "Where can I get the flu shot?" they probably want to know a place that has them in stock. Now, whether or not that's something we can find out in that moment is another question. But going through these flows is how we identify what kind of experience we want to be providing.
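Marli's doctor example follows a three-part shape: answer the literal question, surface the context the user probably needs next, then offer a follow-up. A hedged sketch of that response template, with hypothetical data and wording:

```python
def nearest_doctor_response(miles: float, insurers: list[str]) -> str:
    """Answer the literal question first, then guide the listener with the
    context (insurance) they likely need but may not have asked for."""
    plans = ", ".join(insurers)
    return (
        f"The nearest doctor is {miles} miles away. "
        f"This doctor takes the following insurance: {plans}. "
        "Do you want to find a doctor close to you who takes a specific insurance?"
    )
```

The design choice here is the trailing question: rather than guessing the user's insurance, the response opens a branch the user can take or ignore.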

One thing that Google does really well online, I've noticed: when I type in "library hours," I will now get a message that says "library open until 9:00 p.m." and notes that hours may have changed due to COVID. That's something that we can actually program. If we are creating an app for Starbucks, or for our company or our store or whatever, we can make sure that there's almost an error message, but again, with that additional context: when somebody asks and we know it's a holiday, we can provide that additional information of "holiday hours may be in effect" or "COVID hours may be in effect."

All of these are ways that we can provide a better experience. All right, number four. I mentioned earlier that there are aspects of voice UI that actually started in the accessibility community. And yet somehow, when it comes to voice UI, accessibility is one of the biggest challenges I see. In designing for the written word, we talk about things like captions or alt tags or making sure there's a transcript; we make sure there are ways for people to, quote, see without their eyes, navigate without their hands, hear without their ears. But with voice UI, accessibility is about how we handle the situations where we fail.

It's about making sure, if you are building your own C-3PO, building your own Charlie, that you're thinking about who is testing it and who is creating it, making sure it understands different accents, different ways of saying things, understands lists. But for those of us who are not doing that, for the rest of us, we're building essentially apps, right? Skills and things like that that can be downloaded.

We can actually create our tagging systems so that our Charlies can understand multiple different ways of asking for something. We can make sure that we say something more interesting than just "I didn't understand that." We can also make sure that, whenever possible, we offer options rather than saying "I didn't understand that" over and over again, because that's a pretty good way to get people really frustrated or overwhelmed. If you get the opportunity, though we won't spend our limited time today on it, I'd invite you to watch a particular video.
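A common way to avoid the repeated "I didn't understand that" loop is an escalating fallback: each failed attempt moves toward concrete options. A hedged sketch, with made-up prompt wording:

```python
# Escalating fallback prompts: start gentle, end with explicit options.
# The wording is illustrative, not from any real skill.
FALLBACK_PROMPTS = [
    "Sorry, I didn't catch that.",
    "I'm still not sure. You can say 'set a timer' or 'check my timer'.",
    "I can set timers, check timers, or cancel them. Which would you like?",
]

def fallback_prompt(attempt: int) -> str:
    """Return the prompt for the nth failed attempt, capped at the last one."""
    return FALLBACK_PROMPTS[min(attempt, len(FALLBACK_PROMPTS) - 1)]
```

By the third failure, the user is no longer guessing what the system can hear; they're choosing from a short menu.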

There's a great comedy sketch from the show Burnistoun about a voice recognition elevator. The entire premise is essentially: what if there's an elevator that has no buttons, only works with voice, and doesn't understand Scottish accents? It's hilarious. It's not hilarious when this kind of thing happens in the real world instead of in a comedy video. Lastly, my last piece of advice here is to remember that voice quite literally sounds different. There are things that you see written down that seem quite normal, but when you hear them, they sound different.

We'll catch some of that through piece of advice number one, make it conversational, and piece of advice number two, use natural language. But you also have to think about what somebody means, and how that's going to be different based on where they are and why they're using voice UI. NPR is a great example of this. Typically with NPR, when somebody says, "I want to listen to," let's say, "Planet Money," we might bring them on a web screen to a list of all Planet Money episodes.

But when somebody is asking via voice UI, the assumption can be made that they want to listen to the latest episode, because NPR is a radio station, and if they were using the actual radio, they would simply be listening to an episode. So this is sort of the next level, right? I said before, we don't have context. This brings us to how we create context, how we say, not just as an assumption, "this is what people want," but as a choice: "this is what we have experienced people wanting in a similar context." And so we are going to create an easier way for them to get to what they want.

So, quick time check. Excellent, got about 15 minutes left. And there's just enough time for me to talk through one of the specific areas of creating voice UI, which is how you write this conversational, natural-language, contextual, accessible voice UI.

It should be useful and usable, too. And then I'll have a couple of minutes left for questions. The reason I want to go over this particular area is because one of the biggest questions I tend to get is: do you need a separate voice and tone for your voice UI? And the answer is no, but you do need to make sure that your voice and tone reflects not only the written word, but also your voice UI. Here's how to do it.

First of all, for people who have not asked this question, who in fact are saying, "Yeah, I know that phrase, voice and tone, but what have you been talking about?" Here's how I typically define it. The voice is the personality. As a human being, my voice, my personality, is always going to be the same. It doesn't matter where I am, who I'm speaking to, what I'm doing. I'm always Marli. But my tone is going to change.

I don't speak the same way to you as I do to my cat. I use a very different tone of voice for each of those. Firstly, because she's less interested in content strategy and more interested in being the cutest, most adorable little thing, who somehow makes me talk like I've lost my mind. I also don't use the same tone of voice when speaking about content strategy to other people who have a detailed knowledge of it as when I'm explaining what I do to a friend of my mom's.

So tone will always have those main elements of your voice, of your personality, but will change based on the scenario, based on the situation. And so although your organization's voice should not change between voice UI and the written word, the tone absolutely should. Tone changes in two ways: it changes based on the channel, and it changes based on the emotion. We'll get to both. If you're looking for examples, by the way, some people do really good work with this.
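One voice, many tones, varied by channel and emotion, can be represented as a simple lookup. The tone matrix below is entirely made up for illustration; a real one would come from your style guide:

```python
# Same brand voice, different tone per (channel, emotional context).
# All strings are hypothetical examples, not from any real style guide.
TONE_MATRIX = {
    ("voice_ui", "routine"): "Done. Your payment went through.",
    ("voice_ui", "stressful"): "You're all set. That payment went through, so you're covered.",
    ("web", "routine"): "Payment complete. We've emailed you a receipt.",
}

def respond(channel: str, emotion: str) -> str:
    # Fall back to the plainest written tone if no variant is defined.
    return TONE_MATRIX.get((channel, emotion), TONE_MATRIX[("web", "routine")])
```

Note what stays constant across the cells (plain words, reassurance first) and what shifts (length, warmth): that constant part is the voice, the shifting part is the tone.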

I've found that 18F has a great example, and you'll notice that they call out their voice, the things that are part of their personality all the time, simply because it's government communication, and then separately the tone, which is going to change based on the type of writing. Now, I would hope that they also add voice UI. I have a six-step process that I use for creating a voice. It starts with our content goals. Then a message architecture, or principles, as you may call it.

Then creating those voice aspects, testing them, building out the tones, and then adding the nuts and bolts. Let's go through them. First, the content goals. This is going to be the same whether you've done it a million times or whether you're doing it for the first time, whether you are only creating for voice UI, only creating for the written word, or doing a combination thereof. You want to know your audience. You want to know your audience's goals.

You want to know your business objectives. If this sounds really familiar and you're thinking, "that's just content strategy," it is. You can't create a voice and tone without knowing those basics of any type of content strategy. It's particularly important here to think about who those audiences are in terms of what they know, what they're comfortable with, and maybe even what language aspects they use. Right? So if there is terminology that you know they do or don't use in relation to what you do or what your organization says, that's a good thing to note in this initial identification of your audience, your audience goals, and your content goals.

It's also good to note things within your business objectives: not only "we want more people to purchase our product" or "we want to become the best of the best in this field" or whatever, but very specific areas, like "we want our audience to understand these elements," because that will help with identifying whether we're going to go totally with the language that our audience speaks or we're going to try to include some education there. One thing that this will also help you consider is: is voice UI the best way to spend your time?

Is it a channel that is appropriate? Keep in mind, when we talk about OmnichannelX, it's about making choices. Omnichannel doesn't mean we're everywhere. It means we're everywhere that our audience is, and we're everywhere that is helpful and useful to our audience. So if you've got a higher-up who's saying "we should do voice UI," the thing to do at this point is to come back to your content goals and identify how voice UI will help your audience reach their goals and how it will help you meet your business objectives.

You can ask some questions here as you're doing this, around what your company does, why you started the company, what companies you admire. These are all good ways to get at good ideas for those goals and really get your thoughts flowing. I also like to think, as I mentioned before, about the idea of my personality as a human being. I like thinking about a brand as a character, ideally not a human being, because human beings can change, can suddenly start saying terrible things on Twitter.

Whereas a character is typically a finished product. They're also a little more two-dimensional, which is better for organizations, because organizations aren't human beings. And so we can think about how we would describe them, what kinds of content suit them, how we want people to feel when interacting with our app. That brings us to this idea of message architecture. I've more recently been hearing this called design principles, or content principles, or just principles, and in some ways I find that's an easier way of thinking about it.

But I still love the idea of building an architecture. I have this image in my head of our message. Architecture is like the pillars that hold up our building. And the message architecture essentially tells us if I like what those principles are, what is something for us to run ideas by and say? Does this decision support our idea? So when you come up with those principles, those pillars, those message architecture themes. Each one should be something like.

"We are technologically savvy." And then you need to identify what that means in terms of those audience goals. Right? How does that mean people are going to describe us? How will they see us? How will we know that we are technologically savvy? And what does that mean from a content design standpoint? Maybe it means we are so tech savvy that we exist in all of the latest technological places, like voice UI. So knowing what those pillars are, what those principles are, will help us identify which channels we should be in, among other things.

One example here would be: say "we are supportive" is one of the pillars — you know, the four pillars, the six pillars, no more than that — that define us. Then people will say that we are a caring, passionate, and friendly community. Right? Maybe we help people with their finances, so they'll say, "They helped me pay my bills." "The coaches have a genuine desire to pay forward their own rewarding experience."

OK. That tells us: if we're going to be supportive, and people think that we're always there for them, but we don't have a customer support line that is 100 percent always available — and if there's any chance that's ever not going to be the case — then we need a chatbot, so that we have something available to give people help at all times. And we need to make sure that we are always explaining things, and that we are being as personalized as possible, because you can't be supportive and come across as robotic or pre-set.
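One lightweight way to make a pillar actionable — a sketch of my own, not something shown in the talk; the pillar name, perceptions, and implications below are illustrative, based on the financial example — is to record each pillar together with the audience perceptions it should produce and the content commitments it implies, so you have something concrete to run decisions by:

```python
from dataclasses import dataclass

@dataclass
class Pillar:
    """One message-architecture pillar: a principle to run decisions by."""
    name: str               # e.g. "supportive"
    perceptions: list[str]  # how the audience should describe us
    implications: list[str] # concrete content/channel commitments

# Illustrative pillar, loosely following the talk's financial example.
supportive = Pillar(
    name="supportive",
    perceptions=[
        "a caring, passionate, and friendly community",
        "they helped me pay my bills",
    ],
    implications=[
        "chatbot available whenever live support is not",
        "always explain, never sound robotic or pre-set",
    ],
)

def supports(pillar: Pillar, decision: str) -> bool:
    """Crude check: does a proposed decision echo any of the pillar's
    implications? A real review is a conversation, not string matching."""
    words = set(decision.lower().split())
    return any(words & set(i.lower().split()) for i in pillar.implications)

print(supports(supportive, "add a chatbot for after-hours help"))  # prints True
```

The check function is deliberately crude; the point is that writing the implications down at all gives the team a shared artifact to argue against.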

Now we're ready to create the voice itself. Now, people say, "Well, I just created those pillars — isn't that the same as voice?" Not necessarily. Like I said, one of your pillars might be "tech savvy" or something like that; that's not a voice element. But now that you know those pillars, you can think about what adjectives define you. You're going to whittle them down to six or eight, but you may start with as many as 50 when you're just throwing things out, and you're going to brainstorm these based on those pillars.

So now you need to think about how you create this voice. It's going to need to work across all channels. So you don't want a voice element that's like: "We're excited. We're happy." Well, that doesn't work when you're showing an error message. You don't want to be excited in an error message. You don't want to be happy when you're giving someone bad news. And so for each element, each adjective that defines your voice,

think about why it matters. Think about how you do it — what types of things come into play — and then come up with a couple of examples: what it sounds like in social media, what it sounds like in voice UI. And come up with an example of what it doesn't sound like. This may sound like we're getting into tone, and we're treading pretty close, but people are not going to know how that voice works until you identify what you have in your head. Because really, when you say "we sound interesting," that's going to mean something different to everyone.

Well, when you say "we sound interesting, and it sounds like this, and not like that" — then people can follow it. So one example here, going with our financial example: we're going to sound empathetic. We do it by expressing sympathy and understanding for members' decisions, beliefs, and motivations. What it sounds like, written: "Of course you want to pay your bills, but sometimes life gets in the way. Let's make a list of some of the reasons you might not pay your bills on time." What it doesn't sound like: "Pay your bills on time. Otherwise, you're causing more problems for yourself."

And if you think the "not" sounds ridiculous, that's OK. You often want to create ridiculous "nots," because they will actually help clarify things. The next thing you want to do is test. Testing is very important with voice UI. And if you're wondering how you test a voice: there's actually a method, created in 1984, called the Wizard of Oz approach. In this method, you have somebody who is essentially hiding behind a curtain, or under a table, or somewhere out of sight.

And they respond using a script. You also need a separate note taker who can identify what is working and what isn't. And you need somebody to moderate — to act as a facilitator — sitting with the tester, as opposed to with the person holding the script. Now we're ready to create those tones. You want to think through what those scenarios are: What does the first interaction sound like? How does it sound when we give an error message?

Bad news, good news, congratulations, explaining things, setting goals. And how do those tones sound different when they're in social media, on a blog, on the website, in voice UI, in person, on the phone? So for each tone, similarly to what we did for the voice elements, we're going to identify what the tone is and why it's appropriate for this scenario, how we do it, and what it sounds like. One place that did a great job of this — it no longer exists — is the old MailChimp Voice and Tone guide.

And if you're like, "I'm tired of people saying that the MailChimp Voice and Tone guide is the ultimate example" — I'm sorry, they just did it best. You'll notice that among the content types they created, they had some that were by emotion, like "success message," and some that were by channel, like "blog" or "video tutorial." I think that was a mistake. I think they would have been better off having "success message" and then having examples within that for blog, video, social media, etc.

But I do like that they pulled out content types in some way. I also like that they called out what they envision the end user is thinking or saying — this is that same idea, the conversational, natural approach that we talked about before — and they identify what someone might be feeling, and that helps you understand why the response is what it is. The last thing I'll say on this before we move on (I only left about a minute for questions, I'm sorry) is editorial guidelines.

And that's important, because there are some elements that need to be consistent across voice and written content, and there are other things that are going to change. For example, it may be appropriate to use abbreviations in written text, but not in voice UI. Grade level: it's sort of a known best practice that a seventh-grade reading level is appropriate for most situations — when we're not talking about highly complex information, and when we're not talking to professionals and academics in the field.

But there hasn't been a best practice created for voice UI yet. So what grade level are you writing for at that point? We've got to test it. We have to learn more. This is an opportunity for you to identify what is best for your organization and start creating your own editorial guidelines. Then there are things that should be the same across channels: you need to know whether you're using slang; you need to know what pronouns you use. Overarchingly, your editorial guidelines are there to help you create clear and concise information, and you're going to need examples for that in voice and in written text.
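Since there's no established grade-level target for voice UI yet, one place to start testing is simply measuring what you already write. Here's a minimal sketch — my example, not anything from the talk — of the standard Flesch-Kincaid grade-level formula, using a deliberately rough syllable-counting heuristic (vowel groups with a silent-e adjustment):

```python
import re

def syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, with a silent-e adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # Drop a trailing silent "e" ("bake"), but keep "-le"/"-ee" endings.
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syl / len(words)) - 15.59

print(round(fk_grade("Of course you want to pay your bills."), 1))
```

Heuristic syllable counters like this one are wrong on plenty of English words, so treat the output as a trend line across drafts, not an exact grade.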

So with all that in mind — I know we've gone through a lot, and I'll be available for further conversation later on LinkedIn, on the Slack channels, on Twitter, and via email; you can always find me — just remember: voice UI is not a silver bullet. It's a tool. And we have an opportunity to use it well, and to make it not teetering on delightful while barely understanding people, but truly useful and usable. If you create a voice for your voice UI using this advice and these tips, if you do your research and you do the work,

you can have tremendous ROI. You can have just a fantastic experience and make your creations more omnichannel. Thank you so much.

And thank you so much, Marli. That was fantastic. As you said, there is no more room for questions at the moment. However, for all those burning questions that I'm sure are in your minds, be sure to join at 5:45 in Room 3, where Marli can answer all of your, you know, burning questions about the nets and everything else that she's been working on.

Thank you again so much. And we will see you a little later. Bye.

Great. Be back in 15. Bye bye.