Service Design with Focus Brands
 - UX Research Project

Focus Brands was one of several industry partners that collaborated with the Georgia Tech HCI Program over the Fall '18 semester.  Our group of four students took on the challenge of working with their Business Intelligence (BI) Team, which was seeking to improve its communication within the company and with external clients, as well as the team's overall operational workflow.  We employed a variety of research techniques as we worked with the small BI Team, eventually following the research to a money-saving service design solution that leveraged off-the-shelf products (Jira and Typeform) the team could put into action immediately, rather than settling for the lengthy development time and high cost of a custom design solution.  In addition to UX research and design, I provided overall project management for the team, creating meeting agendas, managing task distribution, and setting team deadlines. 

All of the detailed information regarding the Focus Brands BI Team's work processes is protected under NDA; however, I outline the research methods and design methodology we used during the project in the sections below.

AUG 2018  -  DEC 2018 (14 Weeks)

Project Management, UX Research, User Testing, UX Prototyping, Service Design (Jira and Typeform)

Trello, RealtimeBoard, Adobe XD, Jira, Typeform, Competitive Analysis, Semi-structured Interviews, Remote Interviews, Contextual Inquiry, Task Analysis, User Testing, Expert Evaluation, Journey Mapping, Personas
Research and Design Methodology
     Our team followed the user-centered design process throughout the project, using research to inform every stage.  Initial research uncovered challenges in the problem space and set the stage for informed brainstorming, which produced a wide array of possible design solutions.  From that point, we narrowed down our brainstormed solutions and selected three for additional development: an AI bot, a custom support application, and a service design solution built on tools the team already knew.  We sketched these concepts, elicited peer feedback, conducted an accessibility review, and then presented them to the users for feedback.  The user feedback sessions were extremely informative, highlighting a foundational requirement: the users desired an immediately deployable solution.

     Based on the feedback we received, I built a wireframe in Adobe XD that simulated how Typeform would be used, which we then presented in the next user feedback session.  With the help of another team member and using our research as a guide, I developed proposed changes to the BI Team's workflow in Jira.  After the wireframe feedback session, I used Typeform to build a functional prototype and assisted another team member in using Jira to build a testable prototype for the service design solution.  We then conducted user testing as a team, recorded the results, and identified the next steps for deploying the solution.  Lastly, to wrap up our collaboration, I presented the team's research and testing results to the Focus Brands team and provided documentation so their UX and IT teams could begin immediate implementation.
Research Techniques 
      At the start of the project, the BI Team Lead gave us a brief summary of the communication processes they believed could be improved.  However, the team lead is just one stakeholder in the problem space.  To develop a comprehensive understanding of the issues and their context, we conducted in-person interviews with the team lead, team members, and clients.  The first round of interviews focused on discovering the problems affecting the team and its clients, while later rounds focused on answering the additional questions that arose from the initial research.

     To add depth and context to our understanding of the problem space, we conducted a contextual inquiry with one of the team members, reviewing how the team operates at a granular level.  This technique allowed us to follow the BI Team's processes from start to finish, and it surfaced many steps that had not come up during the interviews. 

     The remote interviews occurred a little later in the project than the in-person interviews, as BI Team and client availability for in-person interviews was reduced for a short period due to their operational tempo.  We conducted remote interviews with both team members and clients during this time, gaining additional insight that informed the design process described above.  It also gave me the opportunity to conduct interviews over the phone, which in my opinion is more difficult than interviewing in person.  Body language, facial expressions, and other cues that can prompt follow-up questions during an in-person interview are simply not present during a remote one.

     Our team also conducted a task analysis to outline the BI Team's processes in a logical fashion.  Laying out the steps the team had to take to help its clients allowed us to see where the team might be inefficient simply because of the process it was using.  The task analysis also allowed us to design a solution that removed some of the extra steps the BI Team was going through while supporting its clients.

     To interpret the data we collected during research, we used a variety of techniques, including affinity notes and mapping, categorical grouping, personas, and a user journey map.  These artifacts kept our team focused throughout the project on who our users were, what they needed, and what they were experiencing.  Unfortunately, almost everything on the personas and user journey map is covered by the NDA, which is why I have not included pictures of them in this section.
User Testing and Evaluation
      User feedback sessions were conducted both in person and remotely.  For the first session, I presented the sketched concepts to the BI Team, walking them through how each solution would work and pointing out key features.  After walking them through each concept, I asked a set of questions in a structured format, closing with open-ended questions so they could provide any feedback my questions did not cover.  The feedback we received informed the next step in our design process.  For the second session, my teammate presented the BI Team with our wireframes for Jira and Typeform, letting them work through the wireframes themselves before guiding them through a structured feedback session.  The feedback we gathered was used to complete our prototype, which was then used in user testing.

     Our team split user testing into two groups: clients and BI Team members.  One teammate and I administered the Jira user testing with the BI Team members, while the other two members of our team handled the client user testing.  In both cases, we used benchmark tasks to evaluate how well the system worked.  For Jira, as BI Team members worked through the benchmark tasks, we updated the system dynamically to give them immediate feedback on their choices, playing the role of supervisor or client as necessary.  We also asked the BI Team members to use the think-aloud technique, gaining additional insight from hearing them talk through their reasoning.  Our teammates administering the client user testing used similar techniques.

     In a traditional UX sense, expert evaluation allows an expert to review a solution and identify common problems or potential roadblocks a user could experience.  These design problems can then be fixed by the design or development team prior to user testing, yielding more valuable feedback once testing begins.  For our solution, expert evaluation offered the same benefits.  Although experts with both service design and Jira experience are harder to locate than standard heuristic evaluators, we did find one expert to aid in the evaluation.  Through his inspection of the design, we fixed several problems and added several features we would otherwise have missed.  Moreover, some of his feedback was immediately adopted by the BI Team, further proving the worth of conducting expert evaluation.

     Accessibility and universal design were not an afterthought during this project; they were included from the start.  At every step of our design process, we thought about making our solution as inclusive as possible.  During sketching, wireframing, and prototyping, we conducted intentional reviews of the proposed solution's accessibility.  For the later wireframe and prototype steps, we also applied current accessibility standards and electronic tools to check for potential problems related to color blindness, low or no vision, tremors, and dyslexia.  To aid our team, we used the Stark plugin for Adobe XD and the Funkify Disability Simulator.
