Sole designer partnered with my manager, a product manager, a content designer, and a dev team.
This was a high-visibility, high-stakes project that laid the groundwork for PVA's metamorphosis into Microsoft Copilot Studio as it exists today.
Power Virtual Agents (PVA) was a Microsoft Power Platform product that helped users (generally enterprise-facing) build chatbots for their business needs. Given its conversational nature, PVA had been dabbling in AI for a few years, and when GPT-3 blew up, PVA, along with the rest of Microsoft, jumped at the chance to beat the market on GPT and OpenAI integrations. This work was the first step PVA took toward becoming what's now widely known as Microsoft Copilot Studio.
There was one specific problem that PMs and developers loved to bring up: the time it took to create a "custom connector," and eventually to map an action to it. In simple terms, a connector is an API with a pre-made connection to it (built by Microsoft) that someone can use to interact with a service. A custom connector lets a user call any service of their choosing through our connector/connection infrastructure (across the Power Platform overall, there were over 1,000 native connectors available for use).
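To make the concept concrete, here's a minimal sketch of the idea behind a custom connector and action mapping. All names here are hypothetical illustrations, not the actual Power Platform implementation: the point is that the maker registers a service's base URL and auth once, names its endpoints as "actions," and then invokes them without hand-writing each request.

```python
class CustomConnector:
    """Illustrative sketch of a 'custom connector': a thin wrapper that
    stores a service's base URL and credentials once, then exposes its
    endpoints as named actions. (Hypothetical structure; not how the
    Power Platform actually implements connectors.)"""

    def __init__(self, base_url, api_key=None):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key
        self.actions = {}  # friendly action name -> endpoint path

    def map_action(self, name, path):
        # "Mapping an action" = giving a friendly name to a raw endpoint.
        self.actions[name] = path

    def invoke(self, name):
        # Prepare the call the maker would otherwise hand-write each time.
        # (A real connector would also handle HTTP methods, bodies, retries.)
        url = f"{self.base_url}{self.actions[name]}"
        headers = {"Authorization": f"Bearer {self.api_key}"} if self.api_key else {}
        return url, headers


# Hypothetical usage: wrap a weather service and name one action.
weather = CustomConnector("https://api.example.com/", api_key="demo-key")
weather.map_action("GetForecast", "/v1/forecast")
url, headers = weather.invoke("GetForecast")
print(url)  # https://api.example.com/v1/forecast
```

The pain point in the text is exactly the setup above: describing the service, its auth, and every action by hand was slow, which is what the GPT-powered flow aimed to streamline.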
Another problem on our minds: after creating the bot (eventually called a copilot), how could a user review exactly how AI pulled these connectors and assets together? We wanted to break open the "black box" that AI tends to be, and build trust so users know they're in control and understand how AI put things together. These two concepts proved important to our Power Platform users in the rising age of AI.
To create a proof of concept for a complete re-imagining of PVA with GPT technology that will excite executives and leadership at Microsoft and also streamline tedious creation points for the bot maker.
To show a bot testing experience that gives clarity, visibility, and transparency to the bot maker of what's happening behind the scenes with AI.
To see how the product's structure would change if we fully incorporated GPT technology as a main means of creating bots.
Around early 2023, the tech industry, Microsoft included, was hit hard by the economic climate stemming from the pandemic and surrounding issues. In short, this project was a big gamble for PVA to prove itself especially important at Microsoft.
For the screens shown plus the video above, I was given only a week and a half in total to pull this work together. Sure, work-life balance was absolutely non-existent during this time, but I believed the concept came out crisp and clean, as good as I could make it given the short timeline and lack of research. It succeeded in earning PVA an important spot at the Microsoft table, eventually transforming it into Microsoft Copilot Studio.
Some notes on research
There was no research study conducted during this initial concept work (you can probably imagine how much this pains me), but existing research was taken into account when thinking about this concept.
Users are generally excited about GPT integration in Power Virtual Agents, and overall are curious and optimistic about it.
Users and admins appreciate the level of disclosure PVA has given them about the cautions of using AI, and how best to use AI responsibly.
Users and admins would still like a way to turn off AI features in general to keep a manual/traditional method of working.
Users like to feel that they're still in control of the information being put out by the bot, and that they play a large role in the bot creation flow.
Although this proof of concept explored what PVA would look like with a full, futuristic dive into GPT integration (which ultimately shaped the final direction PVA, now MCS, went), during the feature finalization process our feature crew would consider how to slowly ease users toward that final destination.
This was just a proof of concept at the time, turned reality later.
There are many things I would've liked to reconsider when going through the feature finalization process; there are some experience gaps in this proof of concept.
Content design wasn't finalized in this project.
We had help from our beloved content designer, but terminology may differ from what's available today, depending on how feature finalization goes.
We had to move fast, so we had many discussion meetings to land a concept and a scenario to focus on. Early sketches and diagrams were crucial to keep the crew aligned and on the same page.
We knew that we wanted a flow that starts with the user asking for what they wanted in the bot. From there, there were discussions of "how much should we show? Should we hide functionality and abilities in a black box?".
Personally, I believe in transparency and clarity for the user, so we can guarantee the user is kept in the loop at all times during any process in a product (this also reflects the user research we'd gotten before this concept work). I created the two following sketches to help communicate our thoughts and final takeaways from discussions. Sketching flows is, in my opinion, the most efficient way to communicate where my thinking is headed in the "what are we working towards?" phase, without burning time on mocks. Notes from follow-up reviews with the feature crew are also shown on the sketch image.
The concept of a "capability" came to fruition; a capability was considered "an ability that, from using generative AI, the bot can now do, since we're able to string those actions together". Naturally, this concept came with its own questions: How is it made? How can a user configure a capability? Do they want to configure a capability? How can a user be able to understand, and maybe edit, how it's made?
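The "stringing actions together" idea can be sketched in a few lines. This is purely an illustration under assumed names (not the actual PVA/MCS data model): a capability is an ordered chain of connector actions, and its "wiring" can be read back out in plain terms, which is what the transparency questions above are about.

```python
class Capability:
    """Illustrative sketch: a 'capability' strings existing connector
    actions together so the bot can do something new end to end.
    (Hypothetical structure, not the real product's data model.)"""

    def __init__(self, name, steps):
        self.name = name
        self.steps = steps  # ordered (connector, action) pairs

    def describe_wiring(self):
        # The kind of transparency a maker would want surfaced:
        # each step the AI wired up, listed in order.
        return [f"{connector} -> {action}" for connector, action in self.steps]


# Hypothetical example: a maker asks for "let customers check an order
# and get a shipping update," and the AI chains two connector actions.
cap = Capability(
    "Order status updates",
    steps=[("OrdersConnector", "GetOrder"), ("ShippingConnector", "GetTracking")],
)
print(cap.describe_wiring())
```

A readable wiring list like this is one possible answer to "how can a user understand, and maybe edit, how it's made?"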
As our discussions matured, I found it important to strengthen the storytelling aspect, and the concept overall, by including a few screens of what it'd look like to test the "capability" after creation. This brings the story full circle: not only is it easy to create, but as the bot maker you get to see the very ability you just made, how it works, and how it's wired.
In the future, I'd love to be able to edit the wiring during test mode. I'd also like to run a research study on testing during the creation flow, so the bot maker can be reassured that the "ability" they asked for is what they get when they go to finalize it.
Finally, we ended up with the approximate flow to present to stakeholders:
Creation (create page)
↓
Configuration
↓
Add additional connector (to show ability to customize)
↓
Completion (consider testing in creation flow some other time)
↓
Test the experience
The concept morphed slightly between discussions.
Since it was such a fast-paced project, I kept very open and constant messaging streams with product leadership, my feature crew, and my manager to ensure we were all on the same page at the same time, and that everyone knew exactly what was happening and when. Clear communication, transparency, and visibility are absolutely essential for high-visibility, fast-turnaround design work. This working style, plus my ability to stay creative under high pressure, put me in the envisioning seat for PVA very often.
Creation started as part of a larger artifact (seen here as "capabilities") and eventually morphed into a conceptualization of the existing "create" page pattern, since the flow can create any component based on the description. I chose a banner to host the description entry point since we wanted to grab a lot of attention for this ability.
Version 1:
Version 2:
Configuration changed mental models between discussions. With a description being the only input in this presentation of the concept, I removed the selective starting pattern and leaned on the high-visibility entry point on the "create" page. I do believe that in the future, there'd need to be additional creation points, like the new components page shown.
Version 1:
Version 2:
Version 3 (final):
Testing initially started as a details page where you could see the "wiring" explicitly written out as settings, but we decided to go with a more familiar "GPT" test chat mode, especially for storytelling purposes.
First, I explored what it'd look like to click on one of the capabilities to see what's within it.
Version 1:
Version 2 became more of an exploration of pushing PVA's test chat to a new level, where you can see what's wired up while chatting with the bot:
Version 3 (final):
And as a bonus, a reimagining of this feature's impact on the existing components in PVA (the IA needed rework to accommodate this mental model shift).
Before rework:
After rework:
At the last second, we cut the creation experience from the presentation, so I'll include it here along with the final video we presented.
Although I didn't favor the pace we went at, I was really happy with the quality of work I turned around in such a short time; ultimately, it helped earn PVA its widely known new name, Microsoft Copilot Studio. I was able to consider all stakeholders and provide input throughout. With something this high-stakes, there's always the challenge of balancing conflicting feedback and the wants and needs of different stakeholders, and I handled that well. I was honored to have been chosen for a project with such high impact and implications.
There's plenty here that needs additional thinking, like how we can get deeper into the details of the wiring, additional creation entry points, and impacts on the authoring canvas. I'd also like to explore a "wiring map," where you can visually see what's connected to what, and find out whether that's something users would be interested in.
In the end, everyone was happy with the result, project delivered, and PVA went on to become the product that powers Microsoft Copilot.