Sole designer; partnered with a designer from Azure, a Product Manager, a content designer, and a dev team.
This project was a key stepping stone for OpenAI integrations and generative AI concepts within PVA, as the first instance in which a bot could be generated entirely by AI.
Power Virtual Agents was a Microsoft Power Platform product that helps users (generally enterprise-facing) build chatbots for their business needs. PVA, with its conversational nature, had been dabbling in AI for a few years, and when GPT-3 blew up, PVA, along with the rest of Microsoft, jumped at the chance to beat the market on GPT and OpenAI integrations.
This work was PVA's first step toward exploring generative AI and how it could create a bot. This feature explores connecting PVA, via an API connector service, to Azure OpenAI Studio, and pushing that connection all the way through to a "generative AI" node in the authoring canvas.
The authoring canvas is a major piece of PVA's selling point: a low-code place to define how a chatbot talks, using nodes, connections, entities, and more. I co-owned this area with a dear friend, and holy, it was such a complicated area. As PVA progressed into the GPT/AI world, we looked to eventually simplify this mapping; even so, it remains the visual that tells users exactly how their bot will respond to different queries.
A connector is a pre-made API connection, ready for use once credentials are given, provided by a whole infrastructure available within the entire Power Platform. For example, you could connect to Outlook to allow a Power Apps app to send or read information for the app to use, or use Outlook as a means to send emails in an app. In this case, the feature crew added another connector to the collection, an Azure OpenAI Studio connector, where you can pass info freely between Azure OpenAI Studio and PVA, once enabled.
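The connector concept can be sketched in plain code. This is an illustrative model only — the class and field names below are hypothetical, not actual Power Platform APIs — but it captures the idea: a connector is a named, credentialed wrapper around an API that becomes usable once credentials are supplied.

```python
from dataclasses import dataclass, field


@dataclass
class Connector:
    """Hypothetical sketch of a Power Platform connector."""
    name: str                  # e.g. "Azure OpenAI Studio"
    endpoint: str              # base URL the connector talks to
    credentials: dict = field(default_factory=dict)

    def is_enabled(self) -> bool:
        # A connector is ready for use once credentials are given
        return bool(self.credentials)


aoai = Connector(
    name="Azure OpenAI Studio",
    endpoint="https://example.openai.azure.com",
)
assert not aoai.is_enabled()          # not usable yet: no credentials
aoai.credentials["api_key"] = "<key>"
assert aoai.is_enabled()              # credentials supplied, ready for use
```

In this mental model, the feature crew's work was adding one more `Connector` to the catalog, with Azure OpenAI Studio as the `endpoint`.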
Generative AI is a term for the use of AI to generate, or create, something based on some form of prompt. For example, when you type "cat riding horse" in Midjourney, you'd get an image of, well, a cat riding a horse. Here, we were hoping to make chatbots.
- To create a seamless end to end experience of using Azure OpenAI Studio to deploying the bot as a PVA bot.
- To enable the ability to connect Azure OpenAI Studio to PVA from the authoring canvas.
As usual, after kickstarting the conversation with my PM on the expectations and the main problems to solve, I led the exploration on the main flow first. For me, pen and paper is the quickest way to visualize thoughts and get high-level conversations started.
At this point in my career, integration flows in the Power Platform were rather familiar to me; some past, still-valid research was top of mind when thinking about the initial flow: starting in Azure OpenAI Studio and somehow getting the user to the PVA authoring canvas, and vice versa.
1. Users in integration/growth scenarios usually click prompts to explore. It is critical to get users from the context of one product to the other in as few steps as humanly possible.
2. Users in general do not enjoy context-switching. How do we make this as painless as possible? Also, when switching contexts, it's essential to open the new experience in a new tab, especially if the original context was a "maker's space".
So, as shown above, I started to draft this rough flowchart that got conversations started. Overall, the high-level flows and questions checked out pretty smoothly, so we moved on to creating some mocks using this flow as the frame.
As mentioned, I started to map out preliminary screens to the flow chart made above:
For me, beginning stages are always messy; clarity and reasoning find their way in as the feature progresses. Starting here, some major questions began to surface from the screens and screenshots provided. Sticky notes are a way for me to capture live discussions with PM and engineering partners during our meetings (and also a great tool to ensure everyone feels their feedback, thoughts, and concerns were heard).
E2E From Azure OpenAI Studio to PVA
First, let's consider the first step in starting from Azure OpenAI Studio:
Starting strong from within the context of Azure OpenAI Studio: the Azure OpenAI Studio crew showed us this screenshot and told us they had decided on the term "Deploy" and the content "As a Power Virtual Agent". Right off the bat, we saw an opportunity to improve the experience by adding a documentation link or other learning material (prior research showed that users are hesitant to switch contexts unless given at least some preview of, or information about, what they're engaging with). Next, in the Power Platform, an action like this isn't necessarily called "Deploy", or what another Azure PM was pushing for, "Publish" ("Publish" has a whole hairball reputation in the Power Platform that I can get into if you'd like). We also don't call our "bot" a "Power Virtual Agent".
We also had some other questions about how exactly this space worked, since we were starting to consider how, as a connector, Azure OpenAI Studio would need its input and output variables represented in the authoring canvas. At this point, since I'd worked on an integration piece in Azure before, I reached out to some old buddies to get an Azure designer's input, since Azure's design patterns and reasoning are unique.
Second, assuming all goes well, we knew there would need to be a licensing or FRE experience here, if this was the user's first time in PVA:
Here I just grabbed an existing experience for what we show when a user doesn't have a PVA license. At the bare minimum, we knew users would have to encounter this, at least until post-GA (general availability). If the user somehow had a PVA license (research showed most Azure users did not), we'd skip this step.
Next, we knew that users would need to configure the bot (naming it, designating what language it hosted), so a screen like the following had to show at this point before moving forward:
This brought up a lot of questions:
- At this point, do users even know what's happening? Is there anything we can do to help them? Should we say we know they're coming from Azure? Are they... lost? (I ask this because in past, similar integration experiences, this is the exact moment where we see a huge drop-off for these reasons.)
- ... When do we create a connection to Azure OpenAI Studio?
- Additionally, what if this isn't just their first time through this flow, but also their first time in PVA?
Moving forward, we'd definitely need some form of treatment here, or much confusion would be had.
Next up, I looked at the inevitable "connection creation" experience.
This particular step, since we were so tight on time, had the technical constraint that we HAD to use this modal for connection creation, since it's leveraged from another Power Platform product. Later on, this experience would be worked out of the iframe, but for now, it needed to be here somehow.
Lastly, we have the authoring canvas where we see the results of the connection.
Here, I pulled a screen using the "Create generative answers" node (which at that time was the one node that did everything genAI) and added the section at the bottom of the pane on the right. The pane at this time looked... unfavorable in terms of hierarchy, so I used this feature as an opportunity to fix the structure while developers were working in this area.
I also added a very rough teaching-bubble stand-in because, although not required, we had existing research stating that users feel the impact of work done in the authoring canvas, and of any generative AI experience in PVA, by testing the flow in our test chat immediately after. Especially if this is the user's first time in PVA, they'll need to know where this testing experience is.
E2E Connecting to Azure OpenAI from within PVA
Similar to the screen above, we knew that starting from PVA would require using this pane and approximately this experience in the authoring canvas. Some questions came to mind here: how do we review all the data sources in this pane? How can we make this pane more scalable? How can we enable the creation of connections from here, and selecting the right model from Azure OpenAI Studio?
From here, it became clear that I needed to reach out to various other designers on my team to coordinate, communicate, and collaborate for changes on this pane, and also another page that hosted all generative AI related data sources.
Many reviews and discussions followed. Based on the flowchart and wireframe breakdown, I'll just go over what happened for each major section, and bring it all together in the end.
Starting from within Azure OpenAI Studio
After starting conversations with the Azure OpenAI Studio team, we identified a few areas to align and develop:
1. Content alignment on "Deploy" and "A Power Virtual Agent". With some pushing for "Publish", this became a terminology war, where I consulted our content designer and Azure's to find an unbiased middle ground.
2. How can we educate the user about the feature to make the jump less daunting?
3. After finding out we needed some form of ID from this particular model, how and where can the user easily find this ID and know to add it to PVA?
On #1, we landed on "Deploy to" as our final content for the button: we'd identified that it wasn't a "publish" scenario, and although "Deploy" was commonly used in Azure, on its own it wasn't considered enough to convey that it would open a new window or product. The PVA and Azure content designers came together and gave us this string, which I supported. As for the term shown when the dropdown was open, we changed that to "a new Power Virtual Agents bot", because a "create new" experience is involved in this flow, and when referring to other products within Microsoft, we need to say the full product name before mentioning that product's file type.
See below:
On #2, we realized we needed to surface terms of service, since generative AI was being used in a different product and data could potentially be passed between products. Along with needing that learning moment, and the fact that switching contexts straight from the dropdown was too daunting, the Azure designer and I decided to use this modal (following Azure's native patterns) as a pit stop for users to confirm that this is what they wanted to do, and to see the legal agreement before proceeding.
This also includes some notation with EUDB/Schrems disclaimers, and a very clear "Continue in Power Virtual Agents" with a link-out icon so the user can anticipate the context change.
On #3, I learned from the Azure PM and designer that Azure has a very established pattern for finding the API key, and they were confident that any Azure user would know exactly where to go to find it. For future-proofing, I worked with the designer to make adjustments to that experience, ensured it was developed with the Azure team, and documented it in the flow for reference.
This modal is invoked from the "View sample code" command you'll see on the left, right under "ChatGPT playground (preview)".
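For context on what that ID and key ultimately feed into: below is a rough, hypothetical sketch of how a resource name and deployment ID typically combine into an Azure OpenAI request URL. The resource and deployment names are made up, and the `api-version` value is an assumption — it changes over time.

```python
def build_endpoint(resource: str, deployment: str,
                   api_version: str = "2023-05-15") -> str:
    """Illustrative only: compose an Azure OpenAI chat-completions URL
    from a resource name and a deployment ID."""
    return (f"https://{resource}.openai.azure.com"
            f"/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")


# Hypothetical names for illustration
url = build_endpoint("contoso", "my-gpt-deployment")
```

The deployment ID is the piece the user copies out of Azure OpenAI Studio; paired with the API key (sent as a request header), it's everything a downstream product needs to reach that specific model.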
Licensing modal
As for this discussion, the result was simple: we had a dependency on another feature that was almost done, so we kept the screen as is, in case the user didn't have a PVA license (which we found was highly likely; people seldom have both licenses). The reason it's important to eventually remove this step is that, per past research, trial modals are a big cause of cross-product drop-off. Since we can grab this user's region information from Azure, we could, and eventually did, get rid of this step.
Connection and bot creation
I ended up trying my best to consolidate these two steps, especially since we're coming from Azure, for two main reasons: fewer steps is always better, but also, between iterations, it was really awkward to be prompted to "connect to Azure OpenAI" when the user was just there, and started this flow by "deploying to a Power Virtual Agents bot".
The first two screens are quick loading screens with content that shows what's being made and when:
The second two screens show that the user's language is already prefilled from the context of their model in Azure OpenAI Studio. We also landed on a welcome message bar acknowledging that they're still in the flow they started from Azure, so there's no question that their information is being passed over and nothing is lost.
The other two screens with the red bars show messaging during specific error scenarios, which I'll skip here.
In PVA
Now that we're in the context of PVA, we had feedback that landing directly on the authoring canvas could be really discouraging for Azure users, since we knew Azure and PVA users usually didn't intersect. We instead opted to show a success message bar up top indicating the flow is finished and the bot is ready for testing. If the user wants to dig into how it was connected, they can follow the link in the success message bar.
If the user clicks that bar, they'll see this design. On the right, you'll see all the different states, interactions, and levels to see. I'll spare you the details.
As a bonus task, I worked with one of my colleagues to establish first-run experience (FRE) flows, since at this time, features tangential to this work didn't have any of these flows worked out, and we found it essential to work on them to ensure they didn't overlap.
For now, I'll spare details again since it was technically a side quest. Feel free to review it in the figma link at the end.
Adding the model to a PVA bot
The authoring canvas, although low-code, is where concepts get really technical.
Starting off, the design still seems familiar to the initial mock, just a bit more stylized.
Unfortunately, this feature couldn't ship with the data source pane rework, but I was able to rework the pane as a supplement to a different workstream shortly after, along with figuring out a better information hierarchy in the generative answers node (both of which, to the designer's credit, were designed extremely fast due to a high-stakes but way-too-rushed feature).
Now we get into exactly what it looks like to add and manage connectors.
Once the "add" button is invoked, we open a callout (a commonly used component in PVA for sub-interactions) to surface a list of available connections in the current environment. An environment is, in short, a creation "folder" where you can make various things without impacting other environments (pertaining to application lifecycle management, which we don't need to know too much about for this project). Unfortunately, we weren't able to fit connection management (creation/deletion) into PVA for this scope, since it was too costly for developers, so we included a link out to Power Apps to configure connections; awkward, I know, but it was the only solution we had at the time.
Once added, the connection (after a connector is activated) can be selected in the list, and will show as it does on the right.
If the user decides to click "Edit parameters" in the connection's card, they'll see the following experience:
"Parameters" is a term used in Azure OpenAI Studio to describe any configuration on the model. These parameters mostly live in the right pane of Azure OpenAI Studio, but can exist in other hidden areas of that page. We consolidated them here to let the user edit the model without leaving the context of PVA, and gave a disclaimer that editing here doesn't edit the model, just this instance used in the bot.
The definitions in the tooltips (documented on the right) are lengthy because we pulled the definitions of these terms directly from Azure; it would be inefficient to create our own disconnected definitions, since Azure OpenAI Studio may change over time.
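The "edit here doesn't edit the model" behavior can be sketched as a simple overlay of per-bot overrides on top of the model's defaults. This is a conceptual sketch, not the actual implementation; the parameter names are real Azure OpenAI sampling parameters, but the default values shown are assumptions.

```python
# Assumed model-side defaults (illustrative values)
MODEL_DEFAULTS = {"temperature": 1.0, "top_p": 1.0, "max_tokens": 256}


def effective_parameters(bot_overrides: dict) -> dict:
    """Layer this bot instance's overrides on top of the model defaults,
    leaving the model's own configuration in Azure untouched."""
    return {**MODEL_DEFAULTS, **bot_overrides}


bot_params = effective_parameters({"temperature": 0.2})
assert bot_params["temperature"] == 0.2       # applies to this bot only
assert MODEL_DEFAULTS["temperature"] == 1.0   # model defaults unchanged
```

The design point the disclaimer communicates is exactly this one-way merge: the bot reads from the model's configuration, but never writes back to it.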
With these discussions, as mentioned earlier, we came to the following designs for the AI capabilities page, where we store all data used in a PVA bot, as another area where we'd need to accommodate connections. As mentioned, we eventually realized we didn't have the funds and dev time to build connection management into PVA yet, so we added a designated link on this page to clearly communicate to users: yes, you're on the right page to see what data is connected to your bot, but here's a handy link to where you can configure your Azure OpenAI Studio connection.
This full-view screen shows the AI capabilities page, where you'll see a section for "Azure OpenAI Service on your data". The button clearly communicates that invoking it will open Power Apps.
We finally made it! Here's the final flows for this project: Link to Figma: Azure OpenAI on your data
Sometimes, when working in such a large org, let alone company, you'll find projects like this where net-new research isn't necessarily needed, because prior research on very similar features tells you just enough about what to look out for. I was pretty happy with the level of communication, planning, and compromise that happened to get this feature out the door, while also setting up a game plan for usability improvements for the next time this project gets picked up. My PM and I were in full sync and communicated well, while our developers got many checkpoints for feedback and insights from their investigations into technical limitations. After this project, it proved easy and exciting to push for those unfinished, "too expensive" experiences.
Also, as usual, I particularly enjoyed working with designers from other teams and orgs. My collaborator from Azure was such a delight to work with, and I learned more about how Azure does things differently in general.