Big steps have been made since we first started our design system project. It has become a well-oiled process, and everyone is familiar with the tool. We were curious to know how the rest of the team felt about working with it. Time for some internal research! Through a series of interviews, card sorting sessions and tests, we learned how to make our design system a tool the whole team would love to use.
Atomic design
At Moxio, we have been developing our own design system over the past year. If you haven’t heard of the term: a design system is a popular method to catalogue all the components and elements used in an application. It’s a handy tool that helps designers and developers work together and maintain consistency in the UI as products evolve. Essentially, it’s a library of elements used in a digital product, often with guidelines describing how to use any given element, which we call style guides.
Check out Annemiek’s comprehensive blog about it from December 2017 - she explains our need for a design system and how we went about setting one up. This project has been going strong: as we add components and write style guides, it becomes easier for our teams to grab an element to use in any given context.
However, as our system grew, we realised we needed to re-evaluate how we categorised elements. Our list of style guides was growing longer by the day. We had been using Brad Frost’s Atomic Design structure, where elements are sorted into the categories atoms, molecules and organisms.
Basically, this meant that:
- Very small elements which could not be reduced to anything smaller were sorted under the category “atoms”
- Larger elements which were combinations of atoms were sorted under “molecules”
- Even larger elements, which were combinations of (atoms and) molecules, were labelled “organisms”
- We used templates to show examples of all variations of a widget in greater detail
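To make this concrete, here’s a rough sketch of how that hierarchy could be expressed in code - the element names are hypothetical examples, not our exact catalogue:

```typescript
// Illustrative sketch of the atomic hierarchy - element names are
// hypothetical examples, not our actual catalogue.
type AtomicLevel = "atom" | "molecule" | "organism";

const catalogue: Record<AtomicLevel, string[]> = {
  atom: ["Button", "Icon", "Label"],   // cannot be reduced to anything smaller
  molecule: ["Search field"],          // e.g. an input combined with a button
  organism: ["Header bar"],            // combines molecules (and atoms)
};
```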
We assumed this would be clear to everyone, but we’d never really checked whether it was. Since we had so many documented elements, our menu navigation became very long - it felt cluttered. Our design system team decided to review the current method of categorising these elements: how could we sort them into clear, distinct categories? Did everyone else at Moxio understand and like using the atomic design principles? How could we guarantee ease of use over time? It was time for some internal user research!
Research setup
We used a combination of methods to figure out what the rest of the team needed: interviews, card sorting sessions and a round of validation.
We wanted to involve the whole team in this exercise, so that everyone would feel comfortable using the design system – making it a tool people want to use in their daily workflow, something we all share responsibility for. Our aim was to uncover our teammates’ mental models, to define a new conceptual model of how to organise information in our design system.
What is a mental model?
A mental model describes the way an individual thinks about and structures information. It’s a personal interpretation of a thing or a situation. That’s part of the reason why assumptions can’t be trusted - they’re based on your own mental model, your own perspective shaped by your personal experience. We didn’t know if the Atomic Design principles matched people’s internal representation of how elements were sorted - was there a way to organise them that would match (most) people’s mental models?
What is a conceptual model?
A conceptual model is the actual representation of a product’s interface. Where the mental model is internal and individual, the conceptual model is the tangible interface, which should match the user’s mental model. For more information about this, check out uxmag’s article.
To find out more about how our people think and what makes them tick, we started with interviews to explore the current understanding and experience of the style guides.
Interviews
Why interviews?
To uncover mental models, we started out by, well, simply talking to each other. We didn’t know how people had been using the design system, let alone how they felt about searching for style guides... We had given demos of the system, but how were people using it? What did they like about it? What could be improved? What kind of information did they need from style guides? To find out, we started with some short interviews with teammates who already had some experience using the style guides.
We chose these people because we needed to do a sanity check - was this what people needed? How did they view and use the style guides? Based on these findings and insights from seasoned users, we would be able to define how to continue our internal research into categorising the elements.
Findings
It turned out that the style guides as such were already quite useful, though the content could use some small improvements. For instance, people wanted clearer use case examples of when to use a given component, and when not to use it.
This feedback brought up a discussion in our design system team – should we also spend time documenting examples of what not to do? Rather than focusing on negative examples, we decided to spend extra time developing writing guidelines for the style guides themselves. We defined a set of rules for tone of voice, how to organise the content and how to describe the context of use. A few layout changes were made to improve readability.
The preliminary findings showed us that our teammates were interested in helping to make the overall design system easier to use and more valuable for them. This helped us decide how to continue: by using card sorting methods, we hoped to involve many people in depth and uncover patterns of expectations for where to find elements. We moved on to the next part of our internal research: open card sorting sessions.
Card sorting
Open card sorting
We decided to go for an open card sorting exercise, in which no predefined categories are set. We had been using “Atoms – Molecules – Organisms” as main categories, following Brad Frost’s Atomic Design principles. Our design system team liked the use of larger main categories, but we weren’t convinced that these names made it clear where each element belonged. One of the goals for this exercise was to check our gut feeling with other teammates, while defining a clear subcategorisation for the elements themselves.
A total of 9 individual card sorting sessions were held, with people both familiar and unfamiliar with the design system. Those with the least experience with the design system served as a control group, to verify that the categorisation could be understood by new colleagues in the future.
Most components and elements currently in the design system were printed out on small cards. That gave us 56 cards, which is around the advised maximum for an open card sort. Any more would make the session too complex and result in participant fatigue, something you can read more about in Optimal Workshop’s guidelines. Fatigue was the opposite of what we wanted to achieve – we wanted people to enjoy using the design system, not get fed up thinking about it!
If you do have more than 60 elements to sort into cards, pick the most important ones and stick to between 40 and 60 cards. Once you have a clear structure, it’ll be easier to sort the remaining elements not included in the exercise.
Participants were asked to sort them into categories of their choosing and then name the categories.
First results
After a few sessions, some categories already started to appear. However, there seemed to be a big difference in participants’ starting points. We saw them as split into two clans, which we named “Purists” and “Contextualists”:
- “Purists” – those who chose a categorisation based on a pure definition of an element – what it is, what it is made of. Such definitions are generally stable and development-driven.
- “Contextualists” – those who preferred a categorisation based on functionality of an element – what it does, when it is used. Such definitions rely on the context of use and can be multi-faceted. It is user-centred and gives more meaning to a component.
Since components can sometimes be used in different contexts, most context-oriented participants eventually got stuck trying to place one multi-faceted element in a single category.
The problem “contextualists” faced could be traced back to a wish to sort elements into a recognisable pattern based on general definitions, as well as a desire to say something more about the underlying relationships between elements – how they are to be used, how they work together. This uncovered a hidden need to see how elements relate to each other – a need we could cater for in the content pages rather than in the menu navigation.
Another challenge lies in the context of use of the style guide library, our design system. There are at least two distinct usages for the design system:
- Search for a specific component. The developer or designer knows which component they need. This implies previous knowledge and experience using the design system. Such a categorisation is not prone to change.
- Search for a component for a specific problem. The developer or designer does not know which component to use. They know what some functional requirements are. In that case, they might look for an element that provides feedback about the status of an object after it has been edited.
A categorisation based on context is more “human” than a development-driven categorisation, which is systematic and based on solid facts - not subjective experience. However, there are a couple of caveats:
- No single function for all elements. A single functional definition of each element is not always possible, meaning one element could fit into several categories based on context of use.
- Definitions are subjective, as well as subject to change over time. The success of such a categorisation relies heavily on everyone having the same mental model with regards to naming and sorting elements. For instance, does a “key-value input” belong with other inputs, or in a section for “Forms” (which says a lot more about when key-values are used)?
- Prioritising functionalities. This last one is related to the first point: once we’ve defined all (current) possible functionalities of an element, how do we distil the single most important one?
Although very interesting on a cultural level, the contextualists were not the most prominent group in the research. Out of our 9 sessions, 3 participants were clearly contextual in their way of thinking. Others hinted at it but decided to use a more purist approach. This allowed them to define a single categorisation based on the structure of an element, rather than all possible uses for it.
Data and decisions
It seemed a context-based definition would be difficult to apply and to keep using in the long run. People who started out enthusiastically sorting elements according to context of use were soon left stranded in the complexity of trying to fit everything into single categories. Would this be reflected in the data?
After a total of 9 sessions, we saw enough similarities and patterns emerge that we decided to tally the results. We’re a small team of 15 and, by then, had spoken with most users of the design system through the interviews and the card sorting sessions.
The categories of each session were entered into a spreadsheet. This gave us an initial raw count of 47 categories. Of these 47, 9 were contextually oriented: ‘Feedback’, ‘Status’, ‘Workflow’, ‘Browsing’, ‘Helping tools’, ‘Verbal’, ‘Non-verbal’, ‘Mutable or Choices’ and ‘Immutable’.
In some of these examples, the naming shows a distinct subjectivity and interpretation, the different meanings people apply to digital elements – highly interesting from a user researcher’s point of view, although challenging to apply for a more general categorisation. For this reason, we chose to let go of this contextual approach. Instead, we would move forward with the “purist” logic of categorising elements based on what an element is, from a technical point of view.
Despite the many categories, we found similarities in how often an element appeared in any given section. Patterns started to emerge. To sort these into distinguishable clusters, we merged categories which used similar names for similar purposes.
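As an illustration, a tally-and-merge step like ours could look roughly like the sketch below - the alias names are made up, and our actual analysis happened in a spreadsheet rather than in code:

```typescript
// Hypothetical sketch of the tally-and-merge step; the alias names
// are illustrative, not the categories from our actual sessions.
type Session = Record<string, string[]>; // category name -> card names

// Map similar names used for similar purposes onto one cluster.
const aliases: Record<string, string> = {
  "Clickables": "Buttons",
  "Buttons & links": "Buttons",
  "Text styles": "Typography",
};

// cluster -> (card -> number of sessions that placed the card there)
function tally(sessions: Session[]): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  for (const session of sessions) {
    for (const [rawName, cards] of Object.entries(session)) {
      const cluster = aliases[rawName] ?? rawName;
      const perCard = counts.get(cluster) ?? new Map<string, number>();
      for (const card of cards) {
        perCard.set(card, (perCard.get(card) ?? 0) + 1);
      }
      counts.set(cluster, perCard);
    }
  }
  return counts;
}
```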
In doing so, a clearer categorisation started to emerge: 3 main categories, with 12 subsections – including 2 sections named ‘Other’ for elements which did not fit into a unique category.
It's elementary, my dear Watson!
Do you recall the Atomic Design principles we had been using until now? While most participants appreciated the hierarchical breakdown into 3 categories, the terms Atoms, Molecules and Organisms did not make the sorting logic immediately clear. People with a basic understanding of science will know the differences between these 3 concepts, but the translation to digital components is what gave this distinction a learning curve. It made the entire exercise seem a little…too much.
We didn’t want a learning curve. We hoped to achieve a level of simplicity that would make our design system a logical, easy-to-use tool that enriched everyone’s workflow. The same meaning could be conveyed by a naming convention such as “Small – Medium – Large” (although what counts as Medium, and what as Large, could still be a subjective issue).
We renamed it “Basics – Components – Views,” hoping this would be clearer, as these names had been made up by participants during the sessions. It was essentially the same kind of categorisation as the atomic design principles, but we hoped this semantic change would make it clear what each category collected.
- “Basics” referred to all the base elements, often related to style: typography, icons, colours…
- "Components" collected all other elements, from simple to more complex. Things like Inputs, Buttons, Bars were gathered under this header.
- "Views" consisted of larger application elements. During the card sorting, it became clear that such a category was necessary, with sections being named "Application XL" and "Externals" to describe near-cmplete parts of the application.
We hoped this naming convention would make it easier to sort the elements into the newly defined subsections than when following Brad Frost’s Atomic Design principles.
Testing
Using the newly defined subsections and their three main categories, we asked a couple of colleagues to participate in a closed card sorting exercise. A closed card sort is one where we provide predefined categories and ask participants to sort the cards into those. Our hope was that the outcome would match the categorisation from the open card sorting research. These sessions went much faster than the previous ones.
There was, however, some doubt as to the naming of two main categories. “Views” conveyed a subjective meaning, ‘how you look at things,’ which did not make it immediately clear what should fit in this section. It was either too abstract or too subjective, so we knew we would need to change it to something that conveyed, in practical terms, what this section collected.
To clarify this, we changed the term “Views” to “Modules”, as this seemed to convey the correct meaning more successfully. It also fit better in the developer vocabulary, so we hoped it would be more easily recognised. The series was now: Basics - Components - Modules.
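Expressed as a simple data structure, the resulting menu could look something like this - a minimal sketch in which the subsection lists are partly hypothetical, based on the examples mentioned above:

```typescript
// Minimal sketch of the final structure; subsection names beyond the
// ones mentioned in this post are illustrative, not our real menu.
interface Category {
  name: string;
  subsections: string[];
}

const designSystemNav: Category[] = [
  { name: "Basics", subsections: ["Typography", "Icons", "Colours"] },
  { name: "Components", subsections: ["Inputs", "Buttons", "Bars", "Other"] },
  { name: "Modules", subsections: ["Application", "Externals", "Other"] },
];
```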
Despite the initial confusion over naming conventions, the overall results of the closed card sorts were satisfying: participants could easily and quickly perform the card sort. The categorisation into subsections was clear and easily recognisable.
Final thoughts
All in all, this has been an insightful research experiment. By involving the rest of the team in this process, we let them take charge and tell us what they felt was logical.
Some participants’ natural instincts were to label elements based on the context of use – this shows how comfortable they were with the elements themselves. This logic would be interesting for a definitive list of components, one which would not be subject to change. For Moxio, a company with various product offerings, where elements can be expanded over time, this contextual definition logic did not seem the most fruitful. Still, it was an interesting and insightful line of thought to uncover.
It was nice to see that the design system itself was already very much ingrained in the collective mind. People knew what it was and what it included. They had already been using it for some time. The findings were all the richer for it. We also learned that there was a need to know how elements relate to each other, to improve findability. This came up during the initial interviews and the card sorting sessions - this was the need the contextualists were responding to. How does one element relate to another?
One instance of this, which we came across earlier this week, is the relationship between checkboxes and tables. Checkboxes are presented as standalone components under a subsection named Inputs. In the style guide for checkboxes, we included an example of the input in a table cell - yet this was not included in the table style guide. The design system team discussed this and decided to move the example to the Table style guide and simply refer to it from the checkbox style guide. We could do this in the introduction, making sure the description included this contextual information, and use a section for “Related components” which displays in-page links.
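Such a “Related components” section could be driven by a small piece of metadata per style guide - a sketch under assumed names, since our actual setup may differ:

```typescript
// Hypothetical style guide metadata for cross-referencing - the field
// and component names are assumptions for illustration only.
interface StyleGuideMeta {
  component: string;
  section: string;
  relatedComponents: string[]; // rendered as in-page "Related components" links
}

const checkboxGuide: StyleGuideMeta = {
  component: "Checkbox",
  section: "Inputs",
  relatedComponents: ["Table"], // the in-table example now lives in the Table guide
};
```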
Our new sorting logic also meant a new design for the menu - applying an accordion effect made navigating the design system a lot easier. We also included a search input: people had been Ctrl+F’ing to get to the right component, so we added search as a real feature in the navigation.
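The search itself can stay very simple - a minimal sketch of filtering menu entries by name, assuming a flat list of entries (not necessarily our actual implementation):

```typescript
// Minimal sketch of the menu search - assumes a flat list of entries,
// not necessarily how our navigation is actually implemented.
interface MenuEntry {
  name: string;
  category: "Basics" | "Components" | "Modules";
}

function searchMenu(entries: MenuEntry[], query: string): MenuEntry[] {
  const needle = query.trim().toLowerCase();
  if (needle === "") return entries; // an empty query shows the full menu
  return entries.filter((entry) => entry.name.toLowerCase().includes(needle));
}
```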
The purpose of this internal research was to re-evaluate our current organisation of content in our design system, to see if any changes were needed to make the system more future-proof and accessible. By holding open card sort sessions, we uncovered hidden mental models and got a peek into people’s expectations. This allowed us to define a new categorisation, test it and see mostly positive results. We’re not sure if everything will stay in the same category, but it’s good enough to test it in real life.
So, for now, we have a new design system structure – let’s put it to practice and see how it holds up!