Adding tests to the test library was not an ideal experience due to its heavy use of modals. The modal-based flow was originally a product of legacy code, but when time became available to rewrite that code, an embedded experience became possible.
Product Designer
2 weeks
The company originally started with just a few people and one main developer. In order to move quickly and build a product that could win funding, a great deal of the application was built with standardized but difficult-to-customize components, such as highly inflexible tables.
This resulted in a lot of modals. Any time the user wanted to add or edit something, instead of acting directly on the page, a modal would pop up for them to interact with. Modals get the job done, but they should really only be used for alerts or specific messaging with limited inputs. A seamless in-page experience is less jarring for the user and helps them keep context.
When new feature work slowed down and resources were dedicated to updating old code, the opportunity arose to remove the old structured tables and use custom components. I created designs that allow for both flows users can take to add tests: suggested and manual.
If a user selects suggested tests, the Qualiti Intelligence AI will prepare three tests, which the user can then add or remove. They can also generate similar tests from a suggested test, or manually edit it instead. If a user just wants to add their own manual test, they have that option as well.
The idea of embedding test additions into the library came from competitive research. The product team discovered a competitor with an impressive experience for adding test steps within the test view itself. After creating an updated experience for our users to add test steps, I realized they could benefit from a similar experience within the test library.
At this stage, I would normally perform more thorough research with actual users; however, this was not prioritized within the team, and our limited resources did not allow for it.
After starting with the foundational elements, such as branding, spacing, and colors, I then added components and elements to this library slowly over time while working on the Qualiti Portal redesign. I prioritized creating clean and clear reusable elements in the designs and made sure any changes were made on the master component that lives in the library.
I wanted to create an experience that felt like part of the user's workflow and was as efficient and clear as possible. It was important to me that users could work quickly, understand what they were adding, and easily distinguish it from what was already there.
After presenting this to the team, I received feedback that the option for users to select a credential was missing. This is required for Qualiti Intelligence to know how to log in to the user's application and generate steps. I resolved this by adding credential selection to both the suggested and manual test flows.
When the test library used the old legacy table, each row had a checkbox so users could select and act on multiple tests, and they could also click anywhere on the row to go to that test's detailed view. Checkboxes are a small target to interact with and can take up valuable space in a table, so I wanted to find a way to move away from them while I had the opportunity.
I struggled for a bit with how to let users select multiple rows to act on them while still being able to easily click through to a test's detailed view. I eventually landed on a pattern where selecting a row highlights that row and its actions, and users can do this with multiple rows at once. When a row is selected or hovered, its action buttons appear, including one that goes to the detail view.
Some use cases were not addressed in this design: Can a user add multiple manual tests at once? Is there a way to add more than three suggested tests at once? If a user edits a suggested test, does the AI learn from that, and can it still generate steps? To further refine this design, I would address these scenarios along with a few others not listed here.
I would also add specific tags to tests generated by the AI versus ones made manually. Ideally, I would conduct user research first to see whether there is a need to distinguish the two before proceeding.