
McKinsey & Company

The business value of design

    • Medium

      Great products do less, but better

        A few years later, the team behind the product concludes it has to do more. Features are added, new use cases are covered, and functionality becomes more sophisticated, to the point where the product tries to do so many different things that its value proposition starts to dilute amongst the plethora of features we have created.

– We can’t remove this feature. Customers would complain if we did.
– How many customers?
– It’s a small number. But they can be really vocal about it.

That’s understandable. Every team wants to build a great product and, by doing so, have a higher number of happy users. But adding new features to the product is a common way of buying short-term, artificial happiness. Think about the most successful products you know, the ones you use every day as a customer: Twitter, Lyft, Venmo, Slack. One single value proposition, articulated through the various product layers: features, architecture, interactions, usability, branding, communications.

Dieter Rams, the German industrial designer born in 1932, has become one of the most recognized and influential designers of the 20th century. A firm believer in Functionalism, his rational vision of design is summarized by his famous phrase: “Less, but better”. Once in a while, try to step away from the day-to-day of feature development and ask yourself: “If I were to start building my product from scratch, which 3 features would I include in it today?”

        Fabricio Teixeira reminds us of the basic evolution of products.
        Successful products usually start from a good ...

      • Medium

        The design mistakes we continue to make

          The saying that “good design is obvious” is pretty damn old, and I am sure it took different shapes in previous centuries, referring to good food, music, architecture, clothes, philosophy and everything else. We forget that the human mind changes slowly, and that what we know about human behaviour will not become outdated for at least 50 years or so. To design great products, we need to stay consistent with a handful of principles, and we should be reminded of them at least once a month until we live and breathe good design. The human brain’s capacity doesn’t change from one year to the next, so the insights from studying human behaviour have a very long shelf life.

What was difficult for users twenty years ago continues to be difficult today — J. Nielsen

Steve Krug laid out some useful principles back in 2000, after the dot-com boom, and they are still valuable and relevant today. Even in his revised edition, nothing changed. Yes, you will tell me that the looks are more modern and websites are more organised and advanced (no more Flash!). But my point is that nothing has changed in human behaviour. We will always want the principle “don’t make me think” applied to any type of product we interact with, whether it is a microwave, TV, smartphone or car. The reason is that we are on a mission, and we only look for the thing that interests us. For example, I rarely find myself reading all the text on the homepage of a product website. Why? Because most web users are trying to get something done, and done quickly. We do not have time to read more than necessary. And yet we still put in a lot of text because we think people need to know it, or, as some designers say, because “it adds to the experience”.

Another important aspect that helps people scan a page is a proper visual hierarchy.
We have to make it clear that the appearance of a page portrays the relationships between its elements, and there are a couple of principles for that.

We believe that people always want something new and more. But we forget that there are already so many applications on the market, each demanding our time. Each has different interactions, and we need to learn every one of them. And our mind blows up: “Oh man, another app to learn?!” As designers, when asked to design something new, we are tempted to reinvent the wheel, because doing something like everyone else somehow feels wrong. We were hired to do something different. Not to mention that the industry rarely offers awards and praise for designing something that makes “the best use of conventions”. But our job is to make stuff clear and obvious. If obvious is not an option, then at least self-explanatory.

The main thing you need to know about instructions is that nobody is going to read them. We should aim to remove instructions and make everything self-explanatory. When they are necessary, cut as much as possible (but, really, nobody is going to read them). We muddle through. Take IKEA as an example. If you asked an average person to assemble an IKEA wardrobe, I am sure they would get it right most of the time. Why? In most cases, it is apparent how it should be assembled when we have a clear picture in front of us. And even when people do look at the instructions, there are no words, only images.

For most people, it is not essential to know or understand how your product works. Not because they are not intelligent, but merely because they do not care. And once they nail down the use of your product, they will rarely switch to something else. Take the Apple AirPods as an example. We can all admit that they are the worst-sounding earbuds for the price you pay. But when I look at how people interact with them, I understand the real reason why people buy them.
They do not make you think about why something is not working. You don’t even notice they use new technology. I look at how my mom interacts with them, and she has never asked me what technology is behind them or how they work. She knows that whenever she opens the case near her device, they are going to connect. It is that easy.

My favourite one. We designers love giving users subtle effects and adding beautiful delights, right? Well, what if I told you that your users don’t care? No matter how much they tell you they do, they don’t. First time? Yes. Second? OK. Third? Really, how many times do I have to see this until it’s enough? Why is that? Life is a much more stressful and demanding environment than an app’s delights and subtle effects. Say you are a father: your kid is screaming because he wants ice cream, the dog is barking because somebody is ringing at the front door, and you are trying to book a quick ticket for a train that leaves in 40 minutes. In that specific moment, people will not give a f* about your subtle cues. We should still use them, but not when they kill the user flow.

A focus group is a small group of people sitting around a table and discussing things: their opinions about the product, past experiences, feelings and reactions to new concepts. Focus groups are great for determining what your audience wants. A usability test is about watching one person at a time trying to use something (your product, in this case); you ask them to perform specific actions to see if you need to fix something in your concepts. So focus groups are about listening, and usability tests are about watching.

All of us who design digital products have a moment when we say: “I am a user too, so I know what is good or bad.” Because of that, we tend to have strong feelings about what we like and don’t like. We enjoy using products with ______, or we think that _____ is a big pain.
And when we work on a team, it tends to be hard to check those feelings at the door. The result is a room full of people with strong personal feelings about what it takes to design a great product. We tend to think that most users are like us. It is not productive, and it will not add any value, to ask questions such as: “Do people like drop-down menus?” The right question is: “Does this drop-down menu, with these words, in this context, on this page, create a good experience for the people who are likely to use the site?” If we focus on what people like, we lose focus and energy. Usability testing will erase any “likes” and show you what needs to be done.

The point is that every question that pops into users’ heads while using your product adds to their cognitive workload. It distracts their attention from “why am I here” and “what do I need to do”. And as a rule, people don’t enjoy solving puzzles when they merely want to know whether a button is clickable. Every time you make a user tap on something that does not work, or that looks like a button or link but isn’t, it adds to the pile of questions. This happens because whoever built the product did not care enough.

Make sure to share in the comments the small design mistakes we make on a daily basis in our products (whether apps or websites) so others can learn from them too. This article was written based on the revised version of the book Don’t Make Me Think by Steve Krug.

If you are a blogger, a writer or simply a human who loves to write in their spare time, you might want to check out Shosho, something I created to encourage creative souls to write even more and better. Shosho is a styling app which helps you remove buzzwords, jargon and hard-to-read words from your writing. I would appreciate it if you gave it a try.

          Revisiting the famous book "Don't Make Me Think": what are the mistakes we keep making?

        • Nngroup

          Flat UI Elements Attract Less Attention and Cause Uncertainty

            Summary: Flat interfaces often use weak signifiers. In an eyetracking experiment comparing different kinds of clickability clues, UIs with weak signifiers required more user effort than strong ones. The popularity of flat design in digital interfaces has coincided with a scarcity of signifiers. Many modern UIs have ripped out the perceptible cues that users rely on to understand what is clickable. Using eyetracking equipment to track and visualize users’ eye movements across interfaces, we investigated how strong clickability signifiers (traditional clues such as underlined, blue text or a glossy 3D button) and weak or absent signifiers (for example, linked text styled as static text or a ghost button) impact the ways users process and understand web pages. There are many factors that influence a user’s interaction with an interface. To directly investigate the differences between traditional, weak, and absent signifiers in the visual treatments of interactive elements, we needed to remove any confounding variables. We took 9 web pages from live websites and modified them to create two nearly identical versions of each page, with the same layout, content and visual style. The two versions differed only in the use of strong, weak, or absent signifiers for interactive elements (buttons, links, tabs, sliders). In some cases, that meant taking an already flat page and adding shadows, gradients, and text treatments to add depth and increase the strength of the clickability signifiers. In other cases, we took a page that already had strong, traditional signifiers, and we created an ultraflat version. We were careful that the modifications we provided were reasonable and realistic. We chose these interfaces as study materials because, for the most part, they’re decent designs that are representative of the better sites on the web. 
We set out to isolate the differences between signifier-rich and signifier-poor interfaces, not to evaluate the design of these sites. For each of the stimuli pairs, we wrote a short task to direct the user’s attention to a specific interactive element on the page. For example, for the hotel site, the task was: “You will see a page from a hotel website. Reserve this hotel room. Please tell us when you have found where you would click.” We conducted a quantitative experiment using eyetracking equipment and a desktop computer. We recruited 71 general web-users to participate in the experiment. Each participant was presented with one version of the 9 sites and given the corresponding task for that page. As soon as participants saw the target UI element that they wanted to click to complete the task, they said “I found it” and stopped. We tracked the eye movements of the participants as they were performing these tasks. We measured the number of fixations on each page, as well as the task time. (A fixation happens when the gaze lingers on a spot of interest on the page). Both of these measures reflect user effort: the more fixations and time spent doing the task, the higher the processing effort, and the more difficult the task. In addition, we created heatmap visualizations by aggregating the areas that participants looked at the most on the pages. The study was conducted as a between-subjects design — each participant saw only one version of each page. We randomized assignment to either version of each page, as well as the order in which participants saw the pages. (See our course on Measuring User Experience for more on designing quantitative studies.) All participants began with a practice task on the same stimulus to make sure they understood the instructions before they began the real tasks. Especially with quantitative studies like this one, it’s a good idea to use a practice task to ensure that participants understand the instructions. 
(It’s also best to conduct pilot testing before even starting the real study to iron out any methodology issues.) This experiment was not a usability study. Our goal was to see how users processed individual page designs, and how easily they could find the target elements, not to identify usability problems in the designs. (Usability studies of live sites rarely involve a single page on a website; most often, people are asked to navigate through an entire site to accomplish a goal.)

The results showed that, when looking at a design with weak signifiers, users spent more time looking at the page, and they had to look at more elements on the page. Since this experiment used targeted findability tasks, more time and effort spent looking around the page are not good. These findings don’t mean that users were more “engaged” with the pages. Instead, they suggest that participants struggled to locate the element they wanted, or weren’t confident when they first saw it.

A 22% longer task time for the weak-signifier designs may seem terrible. But remember that our metrics reflect time spent while looking for where to click. The tasks we measured were very specific and represent just a small component of real web tasks. In regular web use, people spend more time on other task aspects such as reading the information on a page. When you add in these other aspects, the slowdown for a full task (such as shopping for a new pair of shoes) would often be less than the 22% we measured. On the other hand, the increased click uncertainty in weak-signifier designs is likely to cause people to click the wrong thing occasionally — something we didn’t measure in this study. Recovering from incorrect clicks can easily consume more time, especially since users don’t always realize their mistake immediately. Beyond the actual time wasted, the emotional impact of increased uncertainty and decreased empowerment is an example of how mediocre experience design can hurt brand perception.
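As a back-of-the-envelope illustration of how a slowdown figure like the 22% is computed, here is a minimal Python sketch using invented per-participant task times (the study's raw data are not published):

```python
from statistics import mean

# Hypothetical per-participant task times in seconds for the two
# between-subjects groups; these numbers are invented for illustration.
strong_times = [8.1, 7.4, 9.0, 6.8, 7.9, 8.5]    # strong-signifier version
weak_times   = [9.6, 9.2, 10.4, 8.8, 9.9, 10.3]  # weak-signifier version

def relative_slowdown(baseline, variant):
    """Percentage increase of the variant's mean over the baseline's mean."""
    return (mean(variant) - mean(baseline)) / mean(baseline) * 100

slowdown = relative_slowdown(strong_times, weak_times)
```

With these made-up numbers the weak-signifier group comes out about 22% slower, mirroring the magnitude the article reports for the looking-for-where-to-click portion of the task.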
Heatmaps are quantitative visualizations that aggregate the number and duration of eye fixations on a stimulus (the UI). They can be created from the gaze data of many participants, as long as they all look at the same stimulus with the same task. Heatmaps based on all participants’ data convey important information about the page areas that are relevant for the task (provided that the number of participants is high enough). In our color coding, the red areas were those which received the most and longest fixations. Orange, yellow, and purple areas received less attention, and areas with no overlay color were not viewed by any of the test participants.

When comparing the two versions of each page pair (strong signifiers vs. weak signifiers), we found that the pages fell into two groups: those with nearly identical user gaze patterns for the two versions, and those with different user gaze patterns (as indicated by the heatmaps). Of the pages we tested, 6 out of the 9 pairs had different user gaze patterns. With the exception of the signifier strength, we eliminated all other variations in page design within a given pair, so we can conclude that the signifiers changed how users processed the page in their task. One major overarching difference emerged when comparing the 6 pairs of pages: the weak-signifier versions of the pages resulted in a broader distribution of fixations across the page; people had to look around more. This result reinforced our findings that weak-signifier pages required more fixations and more time than strong-signifier ones. This difference suggests that participants had to consider more potentially interactive elements in the weak-signifier versions. Because the target elements (links, tabs, buttons, sliders) lacked strong, traditional signifiers, they didn’t have the same power to draw the participants’ attention or confidence.
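The aggregation behind such heatmaps can be sketched in a few lines of Python. This is a toy illustration, not NN/g's tooling: fixation coordinates and durations are binned into grid cells, and the cell with the largest total duration corresponds to a "red" area.

```python
from collections import defaultdict

# Hypothetical fixation samples pooled across participants:
# (x, y) screen coordinates in pixels, fixation duration in ms.
fixations = [
    (120, 80, 250), (125, 82, 310), (122, 79, 200),  # cluster on one element
    (400, 300, 180), (398, 305, 220),                # cluster elsewhere
]

def heatmap(points, cell=50):
    """Total fixation duration per cell of a cell-by-cell pixel grid."""
    grid = defaultdict(int)
    for x, y, duration in points:
        grid[(x // cell, y // cell)] += duration
    return dict(grid)

hm = heatmap(fixations)
hottest = max(hm, key=hm.get)  # the cell that would be colored red
```

A real heatmap additionally smooths these bins and maps the totals onto the red-to-purple color scale described above.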
In many cases, participants fixated on the target element, but then moved on to other elements on the page — presumably because they hadn’t immediately recognized it as the solution to the task.

Of the six sites, one page pair displayed a particularly dramatic difference in the heatmaps. The original interface used to create the stimuli was a zig-zag layout from a fine-jewelry website. The page layout featured three sections, each with a heading, short paragraph of text, product image, and text link. To create the strong version of the page, the text links were given a traditional link treatment: blue color and underlined text. To create the weak version, we took inspiration from a common tactic of ultraflat designs, and made the text links identical to the static text. The placement of the text links (below the paragraphs) was left the same in both stimuli. The weak-signifier version showed red areas on the primary navigation, as well as on the 3 Year: Pearl heading. In contrast, the target link received most fixations in the strong-signifier variant. When we inspected the individual-participant data, we discovered that many users (9 of the 24 participants) who saw the weak-signifier version stopped at the subheading, and never looked at the text link. They believed they could click on that subheading to reach the pearl jewelry and didn’t continue down to see the link. In the strong-signifier version, 86% of participants (25 out of 29) first fixated on the heading, and then moved to the Shop Pearl target link. In the weak version, only 50% (12 out of 24) followed this pattern. (This difference is statistically significant; p < 0.005.) The links styled like static text didn’t draw users’ eyes down from the subheading, while the strong, traditionally styled links did. 3 of the 9 sites resulted in no differences in the gaze patterns between strong and weak signifiers.
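The reported significance can be sanity-checked with a standard two-proportion z-test on the published counts (25 of 29 vs. 12 of 24). This is my own sketch of the arithmetic, not the authors' analysis code, and it does land below the stated p < 0.005:

```python
from math import sqrt, erf

def two_proportion_z(hits1, n1, hits2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 25/29 participants followed the heading-to-link pattern in the strong
# version vs. 12/24 in the weak version.
z, p = two_proportion_z(25, 29, 12, 24)  # p comes out below 0.005
```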
Why are these three page pairs nearly identical, while the other six pairs showed substantial differences? One of the stimulus pairs had in-line text links as the target element: light purple, nonunderlined links vs. traditional blue, underlined links. In this pair, the weak-stimulus heatmap only showed a very slightly wider distribution of fixations on the paragraph containing the target link. This suggests that the low-contrast presentation of in-line links, compared with regular text, may be a slightly weaker signifier, but not perceptibly so. In the case of Brilliant Earth, however, the lack of contrasting color for links had a big impact, as shown above. We can speculate that there is a contrast continuum: the stronger the color contrast between links and surrounding text, the higher the chance that users will recognize them. If we had used a light grey highlight color in the weak version of Ally Bank, we might expect to see a more dramatic difference in the gaze patterns. As long as in-line text links are presented in a contrasting color, users will recognize their purpose, even without an underline. The other two stimulus pairs with no discernible heatmap differences between the weak and strong versions had some traits in common when compared to the rest of the stimuli.

We want our users to have experiences that are easy, seamless, and enjoyable. Users need to be able to look at a page, and understand their options immediately. They need to be able to glance at what they’re looking for and know instantly, “Yep, that’s it.” The problem is not that users never see a weakly signified UI element. It’s that even when they do see the weak element, they don’t feel confident that it is what they want, so they keep looking around the page. Designs with weak clickability signifiers waste users’ time: people need to look at more UI elements and spend more time on the page, as captured by heatmaps, average counts of fixations, and average task time.
These findings all suggest that with weak signifiers, users are getting less of that feeling of empowerment and decisiveness. They’re experiencing click uncertainty. These findings also confirm that flat or flat-ish designs can work better in certain conditions than others. As we saw in this experiment, the potential negative consequences of weak signifiers are diminished when the site has a low information density, a traditional or consistent layout, and places important interactive elements where they stand out from surrounding elements. Ideally, to avoid click uncertainty, all three of those criteria should be met, not just one or two. A site with a substantial amount of potentially overwhelming content, or radically new page layouts or patterns, should proceed with caution when adopting an ultraflat design. These characteristics echo our recommendations for adopting a flat UI without damaging the interaction on your site. Notice that those characteristics are also just good, basic UX design best practice: visual simplicity, external consistency, clear visual hierarchy, and contrast.

In general, if you have an experienced UX team that cares about user research, you’ll do better with a flat design than product teams that don’t. If your designs are already strong, any potential weakness introduced by flat design will be mitigated. If you’re conducting regular user research, any mistakes you make in implementing a flat UI will be identified and corrected. To get comparable, interpretable results from this experiment, we had to ask users to do very focused, short tasks on a single page. In real life, users don’t do tasks that way. They arrive at your site, and don’t know who you are or what you do. They navigate to pages, and don’t know for sure that they’ll find what they’re looking for there. They explore offerings and options. Remember that there’s a difference between findability and discoverability.
Strong signifiers are helpful in situations where users care about finding something specific. They’re absolutely crucial in situations where you care that users discover a feature that they didn’t know existed.

            An eyetracking study compared different kinds of clickability cues. The results show that...

          • BBVA Data & Analytics

            Experience Design in the Machine Learning Era

              Traditionally, the experience of a digital service follows pre-defined user journeys with clear states and actions. Until recently, it has been the designer’s job to create these linear workflows and transform them into understandable and unobtrusive experiences. This is the story of how that practice is about to change.

              Over the last 6 months, I have been working in a rather unique position at BBVA Data & Analytics, a center of excellence in financial data analysis. My job is to make the design of user experiences reach a new frontier with the emergence of machine learning techniques. My responsibility — among other things — is to bring holistic experience design to teams of data scientists and make it an essential part of the lifecycle of algorithmic solutions (e.g. predictive models, recommender systems). In parallel, I perform creative and strategic reviews of the experiences that design teams produce (e.g. online banking, online shopping, smart decision making) to steer their evolution into a future of “artificial intelligence”. Practically, I boost partnerships between teams of designers and data scientists to envision desirable and feasible experiences powered by data and algorithms.

              Nowadays, the design of many digital services relies not only on data manipulation and information design but also on systems that learn from their users. If you opened the hood of these systems, you would see behavioral data (e.g. human interactions, transactions with systems) fed as context to algorithms that generate knowledge. An interface communicates that knowledge to enrich an experience. Ideally, that experience seeks explicit user actions or implicit sensor events to create a feedback loop that feeds the algorithm with learning material.

              Discover Weekly is Spotify’s automated music-recommendation “data engine”, which brings two hours of custom-made music recommendations, tailored specifically to each Spotify user, every Monday.
The Discover Weekly recommender system leverages the millions of playlists that Spotify users create. It gives extra weight to the company’s own expert playlists and to those with more followers. The algorithm attempts to augment a person’s listening habits with those of people with similar tastes. It does this in three main tasks. A typical Discover Weekly playlist recommends 30 songs, a set big enough for users to discover music that matches their personal taste among the inevitable false positives. That experience provokes the curation of thousands of new playlists that are fed back into the algorithm a week later to generate new recommendations.

These feedback-loop mechanisms typically offer ways to personalize, optimize or automate existing services. They also create opportunities to design new experiences based on recommendations, predictions or contextualization. At BBVA Data & Analytics I came up with a first non-comprehensive list. We have seen that recommender systems help discover the known unknowns or even the unknown unknowns. For instance, Spotify helps people discover music through a personalized experience built on the match between an individual’s listening behavior and the listening behavior of hundreds of thousands of other individuals.

That type of experience has at least three major design challenges. First, recommender systems have a tendency to create a “filter bubble” that limits suggestions (e.g. products, restaurants, news items, people to connect with) to a world that is strictly linked to a profile built on past behaviors. In response, data scientists must sometimes tweak their algorithms to be less accurate and add a dose of randomness to the suggestions. Second, it is also good design practice to leave an open door for users to reshape aspects of the profile that influences the discovery. I would call that feature “profile detox”. Amazon, for example, allows users to remove items that might negatively influence their recommendations.
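A weighted co-occurrence recommender of the kind described, where playlists count more if they are editorial or popular, can be sketched as follows. This is a toy model with invented data, not Spotify's actual system:

```python
# Toy playlists: the songs they contain and a weight standing in for
# editorial status or follower count (invented values).
playlists = [
    {"songs": {"a", "b", "c"}, "weight": 3.0},  # expert/popular playlist
    {"songs": {"b", "d"},      "weight": 1.0},
    {"songs": {"a", "d", "e"}, "weight": 1.0},
]

def recommend(user_songs, playlists, k=2):
    """Score songs the user hasn't heard by weighted co-occurrence."""
    scores = {}
    for pl in playlists:
        overlap = len(pl["songs"] & user_songs)
        if overlap == 0:
            continue  # playlist shares nothing with the user's taste
        for song in pl["songs"] - user_songs:
            scores[song] = scores.get(song, 0.0) + pl["weight"] * overlap
    return sorted(scores, key=scores.get, reverse=True)[:k]

picks = recommend({"a", "b"}, playlists)  # songs co-occurring with a and b
```

Songs that appear alongside the user's songs in heavily weighted playlists rise to the top, which is the intuition behind augmenting one person's habits with those of similar listeners.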
Imagine customers who purchase gifts for others; those gifts are not necessarily good material for future personalized recommendations. Finally, organizations that rely on subjective recommendations, like Spotify, now enlist humans to give more subjectivity and diversity to the suggested music. This approach of using humans to clean datasets or mitigate the limitations of machine learning algorithms is commonly called “Human Computation” or “Interactive Machine Learning”.

Data and algorithms also provide means to personalize decision making. For instance, at BBVA Data & Analytics we developed advanced techniques to advise BBVA customers on their finances. For example, we consider the temporal evolution of account balances to segment savings behaviors. With that technique we are able to personalize investment opportunities according to each customer’s capacity to save money. This type of algorithm, which leads to decision making, needs to learn to become more precise, simply because it often relies on datasets that give only a partial perspective of reality. In the case of financial advisory, a customer could operate multiple accounts with other banks, preventing a clear view of saving behaviors. It proved to be good design practice to let users signal, implicitly or explicitly, when information is poor. It is the data scientist’s responsibility to express the types of feedback that enrich their models, and the designer’s job to find ways to make that feedback part of the experience.

Traditionally, the design of computer programs follows a binary logic, with an explicit, finite set of concrete and predictable states translated into a workflow. Machine learning algorithms change this with their inherent fuzzy logic. They are designed to look for patterns within a set of sample behaviors to probabilistically approximate the rules of these behaviors (see Machine Learning for Designers for a more detailed introduction to the topic). This approach comes with a certain degree of imprecision and unpredictable behaviors.
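A crude version of segmenting savings behavior from the evolution of balances might fit a trend to each customer's monthly balance series and label it. The threshold and segment names here are invented for illustration, not BBVA's actual model:

```python
def trend(balances):
    """Least-squares slope of a balance series (units per month)."""
    n = len(balances)
    mean_x = (n - 1) / 2
    mean_b = sum(balances) / n
    cov = sum((x - mean_x) * (b - mean_b) for x, b in enumerate(balances))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

def segment(balances, threshold=50.0):
    """Label a customer by balance trend (hypothetical segment names)."""
    return "saver" if trend(balances) > threshold else "non-saver"
```

A production model would use far richer features (income regularity, seasonality, transfers between accounts), but the idea of deriving a behavioral segment from the temporal evolution of balances is the same.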
Machine learning systems often return some information on the confidence of the information given. For example, the booking platform Kayak predicts the evolution of prices from the analysis of historical price changes. Its “farecasting” algorithm is designed to return confidence on whether it is a favorable moment to purchase a ticket (see The Machine Learning Behind Farecast). A data scientist is naturally inclined to measure how accurately the algorithm predicts a value: “We predict this fare will be x.” That ‘prediction’ is in fact information based on historical trends. Yet predicting is not the same as informing, and a designer must consider how well such a prediction can support a user action: “Buy! This fare is likely to increase.” The ‘likely’, together with an overview of the price trend, is an example of a “beautiful seam” in the user experience, a notion coined by Mark Weiser at the Xerox Palo Alto Research Center and further developed by Chalmers and MacColl as seamful design. Seamful design is about exploiting failures and limitations to improve the experience, for instance by allowing users to report poor recommendations. DJ Patil describes subtle techniques in Data Jujitsu.

The ideal for an algorithm is to deliver high precision and recall scores. Unfortunately, precision and recall often work against each other, and there is often a need to make design decisions around the trade-off between them. For instance, in Spotify’s Discover Weekly, a design decision had to be taken on the size of the playlist according to the performance of the recommender system. A large playlist highlights Spotify’s confidence in delivering a rather large inventory of 30 songs, a set wide enough to increase the opportunities for users to stumble on perfect recommendations.

Today, what we read online is based on our own behaviors and the behaviors of other users. Algorithms typically score the relevance of social and news content.
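The precision/recall trade-off behind a choice like the 30-song playlist can be made concrete with a toy calculation (the ranking and relevance sets below are invented): a longer list raises recall at the cost of precision, and a shorter list does the opposite.

```python
def precision_recall_at_k(ranked, relevant, k):
    """Precision and recall of the top-k items of a ranked list."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / k, hits / len(relevant)

ranked = ["s1", "s2", "s3", "s4", "s5", "s6"]  # recommender's ranking
relevant = {"s1", "s3", "s6"}                  # songs the user actually likes

p2, r2 = precision_recall_at_k(ranked, relevant, 2)  # short list
p5, r5 = precision_recall_at_k(ranked, relevant, 5)  # long list
```

Here the short list is more precise (p2 > p5) while the long list recovers more of the relevant songs (r5 > r2), which is exactly the trade-off a playlist-size decision has to settle.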
The aim of these relevance-scoring algorithms is to promote content for higher engagement or to send notifications that create habits. Obviously, these actions taken on our behalf are not necessarily in our own interest. In the attention economy, both designers and data scientists should learn from the anxieties, obsessions, phobias, stress and other mental burdens of connected humans. Source: The Global Village and its Discomforts. Photo courtesy of Nicolas Nova.

Arguably, we have entered the attention economy, and major online services are fighting to hook people and grab their attention for as long as possible. Their business is to keep users active as long and as frequently as possible on their platforms. This leads to the development of sticky, needy experiences that often play with emotions like Fear of Missing Out (FoMO) or other obsessions to dope user engagement. The actors of the attention economy also use techniques that promote addiction, such as variable schedule rewards: the exact same mechanism used in slot machines. The resulting experience promotes the service’s interest (the casino), hooking people into endlessly searching for the next reward. Our mobile phones have become slot machines of notifications, alerts, messages, retweets and likes, which some of us check on average 150 times per day, if not more.

Today, designers can use data and algorithms to exploit the cognitive vulnerabilities of people in their everyday lives. That new power raises the need for new design principles in the age of machine learning (see The ethics of good design: A principle for the connected age). There are opportunities to design a radically different experience than engagement. Indeed, an organization like a bank has the advantage of being a business that runs on data and does not need customers to spend the maximum amount of time with its services. Tristan Harris’ Time Well Spent movement is particularly inspiring in that sense.
He promotes the type of experience that uses data to be super-relevant or to stay silent: technology that protects the user's focus and respects people's time. Twitter's "While you were away…" is a compelling example of that practice. Other services are good at suggesting moments to engage with them. Instead of measuring user retention, that type of experience focuses on how relevant the interactions are. Data scientists are good at detecting normal behavior and abnormal situations. At BBVA Data & Analytics we are working to promote peace of mind for BBVA customers with mechanisms that give a general sense of awareness when things are fine and that surface more detailed information in abnormal situations. More generally, we believe the current generation of machine learning brings new powers to society, but also increases the responsibility of its creators. Algorithmic bias exists and may be inherent to the data sources. Consequently, there is a particular need to make algorithms more legible for people and auditable by regulators, so their implications can be understood. Practically, this means the knowledge an algorithm produces should safeguard the interests of its users, and the results of its evaluation and the criteria used should be explained. In the previous section we saw that experiences powered by machine learning are not linear or based on static business and design rules. They evolve with human behavior, their models constantly updated by streams of data. Each product or service becomes almost like a living, breathing thing. Or as people at Google would say: "It's a different kind of engineering". I would argue that it is also a different kind of design. For instance, Amazon describes Echo's braininess as something that "continually learns and adds more functionality over time". This description highlights the need to design the experience for systems that learn from human behavior. 
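The "be super-relevant or be silent" pattern can be sketched in a few lines. This is a hypothetical mechanism (the function name, toy data and z-score rule are my assumptions, not BBVA's actual system): notifications stay suppressed while a metric tracks its own history, and only a strong deviation triggers a detailed alert.

```python
# Hypothetical sketch: stay silent while a metric behaves normally,
# and surface a detailed alert only when it deviates strongly from
# its recent history (a simple z-score test).
from statistics import mean, stdev

def check_spending(history, latest, z_threshold=3.0):
    """Return None (silence) when `latest` looks normal, else an alert."""
    mu, sigma = mean(history), stdev(history)
    z = (latest - mu) / sigma if sigma else 0.0
    if abs(z) < z_threshold:
        return None  # all is fine: no notification, peace of mind
    return f"Unusual activity: {latest:.0f} is {z:.1f} std devs from normal"

weekly_spend = [210, 190, 205, 220, 198, 215, 207]
print(check_spending(weekly_spend, 212))   # a normal week stays silent
print(check_spending(weekly_spend, 900))   # an abnormal week raises an alert
```

The design choice here is that silence is the default output, which inverts the engagement-driven logic described above: the system earns trust precisely by not speaking most of the time.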
Consequently, beyond considering the first contact and the onboarding experience, that type of product or service requires thinking about its use after one hour, one day, one year, and so on. If you look at the promotional video of the Edyn garden sensor, you will notice the experience evolving: from creating new habits of taking care of a garden, to communicating the unknown unknowns about plants, to conveying peace of mind on the key metrics, to guaranteeing time well spent through some level of watering automation. That type of data product requires a responsible design that considers the moments when things start to disappoint, embarrass, annoy, or stop working or being useful. The design of the "offboarding experience" could become almost as important as the "onboarding experience". For instance, allegedly a third of Fitbit users stop wearing the device within six months. What happens to these millions of abandoned connected objects? What happens to the data and intelligence about the individual that they produced? What are the opportunities to use them in different experiences? Products whose experience evolves from behavioral data constantly feeding algorithms (e.g. Fitbit) are living products that inevitably also have a tendency to die. Source: The Life and Death of Data Products. There are new ways to imagine the relationship after a digital break-up with a product. Digital services operate across an increasingly vast ecosystem of things and channels, but user data tends to be more centralized. Think about the notion of portable reputation, which lets people use a service based on a relationship measured by another service. Looking a bit further into the near future, recent breakthroughs in Natural Language Processing, Knowledge Representation, Voice Recognition and Natural Language Generation could create subtler and stronger relations with machines. In a few iterations, Amazon Echo might start to be much more nurturing. 
It is a potential evolution that anthropologist Genevieve Bell foresees as a shift from human-computer interactions to human-computer relationships in The next wave of AI is rooted in human culture and history: "So the frame there is not about recommendations, which is where much of AI is now, but is actually about nurture and care. If those become the buzzwords, then you sit in this very interesting moment of being able to pivot from talking about human-computer interactions to human-computer relationships." — Genevieve Bell In this section we have seen that algorithms are getting closer to our everyday lives and that data provides the context for an evolving relationship. The implications of that evolution require a more intense collaboration between design and data science. My experience so far envisioning experiences with data and algorithms shows that it is a different practice from current human-centered design. At BBVA Data & Analytics, the role of data scientists has been elevated from reactive model and A/B test developers to proactive partners who think about the implications of their work. Our single data science team breaks into sub-teams that partner more directly with engineers, designers, and product managers. When shaping an experience, we exploit thick data, the qualitative information that provides insights into people's lives (see Why Big Data Needs Thick Data); big data, the aggregated behavioral data of millions of people; and the small data that each individual generates. Classically, designers focus on defining the experience of the service, feature or product. They nest the concept within the larger ecosystem that relates to it. Data scientists develop the algorithms that will support that experience and measure it with A/B testing. 
In my first few weeks at BBVA Data & Analytics, I found designers and data scientists often stuck in deadlocked exchanges. The main issue was the lack of a shared understanding of each other's practice and objectives. For instance, designers transform a context into a form of experience, while data scientists transform a context, with data and models, into knowledge. Designers often adopt a path that adapts to a changing context and new appreciations. Data scientists employ processes similar to human-centered design, but theirs are more mechanical and less organic: they strictly follow the scientific method, with its cyclical process of constant refinement. A properly formulated research question helps define the hypothesis and the types of models to develop in the prototyping phase. The models are the algorithms that get evaluated before they are deployed to production into what we call at BBVA Data & Analytics a "data engine". Whenever the experience supported by the "data engine" does not perform as expected, the problem needs to be reformulated to continue the cyclical process of refinement. The scientific method resembles any design approach that forms new appreciations as iterations become necessary. Yet it is not an open-ended process: it has a clear start and end, but no definite timeline. Data scientist Neal Lathia argues that "cross-disciplinary work is hard, until you're speaking the same language". Additionally, I believe designers and data scientists must immerse themselves in each other's practice to build a common rhythm. So far, I have codified several important touchpoints for designers and data scientists to produce a meaningful user experience powered by algorithms. This intertwined collaboration illustrates a new type of design that I am trying to articulate. 
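The evaluation step of that cyclical process can be sketched with a standard two-proportion z-test, the kind of check behind the A/B tests mentioned earlier. The conversion numbers and the 1.96 cutoff are illustrative assumptions, not BBVA's actual evaluation criteria:

```python
# Hedged sketch of an A/B evaluation step: a two-proportion z-test
# deciding whether a new "data engine" variant (B) truly outperforms
# the current one (A), or whether the hypothesis should be
# reformulated and the cycle repeated.
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Toy experiment: variant B converts 460/4000 users vs A's 400/4000.
z = two_proportion_z(400, 4000, 460, 4000)
# |z| > 1.96 means significant at the usual 5% level: keep variant B;
# otherwise reformulate the problem and iterate again.
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

In the cyclical process described above, a non-significant result is not a dead end but the trigger for reformulating the research question.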
In a recent article, Harry West, CEO at frog, suggested the term 'design of system behavior': "Human-centered design has expanded from the design of objects (industrial design) to the design of experiences (adding interaction design, visual design, and the design of spaces) and the next step will be the design of system behavior: the design of the algorithms that determine the behavior of automated or intelligent systems" — Harry West So far I have argued that "living experiences" emerge at the crossroads of data science and design. An indispensable first step for designers and data scientists is to establish a tangible vision and its outcomes (e.g. experience, solution, priorities, goals, scope and awareness of feasibility). Airbnb Director of Product Jonathan Golden calls this a vision-driven product management approach: "Your company vision is what you want the world to look like in five-plus years — outcomes are the team mandates that will help you get there." — Jonathan Golden However, that conceptualization phase requires visions that live as more than flat, perfect things for boardroom PowerPoints. Therefore, one of my approaches is to engage the design/science partnership in producing Design Fictions. It has similarities with Amazon's 'Working Backwards' process as described by Werner Vogels: "You start with your customer and work your way backwards until you get to the minimum set of technology requirements to satisfy what you try to achieve. The goal is to drive simplicity through a continuous, explicit customer focus." — Werner Vogels Thinking by doing with Design Fiction creates potential futures of a technology to clarify the present. Schema inspired by the Futures Cone and Matt Jones: Jumping to the End — Practical Design Fiction. Design Fiction aims at making tangible the evolution of technologies, the language used to describe them, the rituals, the magic moments, the frustrations, and, why not, the "offboarding experience". 
It helps the different stakeholders of a project engage with essential questions to understand what the desired experience means and why the team should build it. What are the implications of purchasing that next-generation garden sensor? What can you do with it? What aren't you allowed to do? What won't you do anymore? How does a human interact with that technology the first time, and then routinely after a month, a year, or more? Creative and tangible answers to these questions can come to life before a project even starts, through the creation of fictional customer reviews, user manuals, press releases and ads. That material is a way to bring the future into the present, or as we say at the Near Future Laboratory: "The Design Fictions act as a totem for discussion and evaluation of changes that could bend visions of the desirable and planning of what is necessary." At BBVA Data & Analytics, this means that I gather data scientists and designers with the objective of creating a tangible vision of their research agenda. First, we map the ongoing lines of investigation. Then we project their evolution two or three iterations ahead, wondering: What would the resulting technology look like? Where could it be used? Who would use it, and for what type of experience? Each participant uses the template of a fictional ad to tell stories with practical answers to these questions. Together we group them into future concepts. We collect all the material and promote the most promising concepts. After that, we share these results internally in a series of paper and video advertisements that describe the main features, attributes and characteristics of the experience from our point of view (the feasible) and the user's point of view (the desirable). This type of fictional material allows both designers and data scientists to feel and gain a practical understanding of the technology and its experience. 
The results help build credibility, enlist support, counter skepticism, create momentum and share a common vision. Finally, feedback from people with different perspectives allows us to anticipate opportunities and challenges. With the advance of machine learning and "artificial intelligence" (AI), it becomes the responsibility of both designers and data scientists to understand how to shape experiences that improve lives. Or as Greg Borenstein argues in Power to the People: How One Unknown Group of Researchers Holds the Key to Using AI to Solve Real Human Problems: "What's needed for AI's wide adoption is an understanding of how to build interfaces that put the power of these systems in the hands of their human users." — Greg Borenstein That kind of design of system behavior represents a future built on a tight partnership between design and data science. So far in this journey of creating meaningful experiences in the machine learning era, I can articulate the following characteristics:
This is an extended transcript of a talk I gave at the Design Wednesdays event at the BBVA Innovation Center in Madrid on September 21, 2016. Many thanks to the BBVA Design team for their invitation and the quality of the organization!

              This article by Fabien Girardin discusses his role as a designer at "BBVA Data and ...

            • Medium

              The researcher's journey: leveling up as a user researcher

                On January 3, 2011, I delivered the greatest usability study ever conducted. It was, truly, an incomparable study, with a detailed report that would leave academics everywhere singing its praise from the rooftops. For the first time, my college professors would have been proud of my work. The report was perfect: beautifully typeset in LaTeX, fully hyperlinked, and methodologically reproducible. This world-shattering report was delivered and then…nothing happened. Two years following the report, none of its original findings had translated into product action. Anything that did change was the result of subsequent, duplicate efforts. As a newly-minted researcher, the burning question of "why didn't that work?" kicked off a six-year journey that's still pressing on. At a higher level, the question is about doing useful, effective work: "what does it take for research to positively influence product and design, and how do I do that?"
There’s no simple answer; it’s a broad interplay of dynamics involving people, processes, and structure. Effective research hinges on organizational ways of working and the team’s desire to learn—a spoiler: there are times when, due to the structure and culture of your organization, it just won’t work. Success also lies in your own mastery of the research process, technique, and your ability to influence the team.
Here we'll focus on the last pieces, charting the growth of the researcher-as-individual-contributor from junior, to mid-level, to senior researcher. To make it easier to assess your own progress, we'll evaluate growth on three axes: A quick baseline: in terms of "doing research" (the defining trait of a researcher) we're talking about execution, orchestration, or facilitation of the sometimes-linear process of: We use ownership of the process as a proxy for growth and maturity in the role. Your journey begins in the middle and slowly spreads outward to embrace the whole cycle: from basic mechanics and execution toward projects, program-level initiatives, and higher-order strategy. It's an exciting ride. A junior researcher starts with prepackaged questions or predefined methods and executes on defined units of work. It's hard to understand potential outcomes when you don't have clear experience that relates your execution to specific kinds of output. Each project is a new opportunity to build experience across contexts and methods, to learn the types of things you'll find and how they feed into product. When a product manager says "we should do a usability test" or a designer "needs to talk to customers," it's a chance to hone basic skills. Growth comes through experience and reflection, forcing yourself to ask "why do we want to do this?", "what are we really trying to find out?", and "is this the right way to get there?" Every interview in and of itself can be a hurdle, and it's hard to see the forest (project) for the trees (each instance of execution). As a junior-level researcher, you must become competent in executing on the basics: Research synthesis outputs are facts, incidents, and simple behavioral insights. You're ready to move on when these basic pieces can be smartly combined and deployed to ensure a successful project. Junior researchers strive to empower the organization with insights and answer the questions at hand. 
Your influence develops on the strength of that execution, and the sense of judgment you hone as you learn what your users need and how they do their work: Failure to have an impact (e.g. the aforementioned report of January 3, 2011) is especially instructive: "It's so clear to me that X is true, and I believe we should do Y. But nobody else sees it — what's happening?" There's an outside-in perspective flip that precedes growth, akin to the ideas underlying the practice of service design. It's not about the great studies that you can do, it's about finding out what the team needs to push work forward. By now, you've developed a sense for product and design, and can deliver strong, evidence-based recommendations. This speaks to a new level of technical competence (deriving meaningful insights and connecting them to design or product strategy) as well as influence (being seen as a respected, measured point of view and a valid source of insight). Facility with a range of methods allows you to assess trade-offs and select methods with a reliable sense of the outcome. Understanding how projects run, you work backwards from expected outcomes and proactively plan your efforts. From the project-level vantage point, you draw on well-developed basics in new ways and adapt methods to the project at hand. It's here you take ownership of the project, working as a research partner to push design and product outcomes. Mid-level growth in execution also extends basic reporting to employ more robust methods of synthesis and communication: At mid-level, you work within the organization to reframe team questions and incite action with results. 
You realize that it's never enough to run a good project and deliver great insights: no matter how "true" or "logical" your findings, they will not promote themselves if you don't bring the team along. Understanding what comes out of different research methods, paired with a keen sense of how the organization works, is the next step in ensuring positive product and design impact through research. My shift from mid-level to senior level started in late March of 2015, when my consulting team delivered a scientific disaster-modeling system for a client. They had tried to redesign an on-premise solution for the cloud, spending millions of dollars and two years shipping a system their customers wouldn't accept. It wasn't usable, attempted to do everything but could do nothing well, and it ignored pages of feedback customers felt were essential. Given the messy context of the project, I ran a user-centered discovery and testing program designed to force focus on the project and help pave the way for successful delivery. During discovery with customer proxies and subject-matter experts, we built a set of personas encapsulating the goals, needs, and workflow scenarios of the system's main users. Within our client and with their top customers, we socialized the persona "Daniel" as our primary target: we claimed that if V1 could solve for Daniel's specific needs (without specifying how), all parties would see real and immediate value from the system. Slowly, with open lines of feedback and iteration, client and customers agreed that Daniel represented their core and most pressing needs. We aligned on a goal: if, by a specified date, our system could support Daniel's target scenarios, the project's first phase would be a success. We tested conceptual and functional prototypes with the client's customers, learning and iterating until real users could achieve Daniel's core tasks in the system. 
The customers, especially non-user buyers, invariably piled on feedback outside the bounds of our V1 scope (much like before). With clear alignment on Daniel's needs, we could address feedback honestly and openly, maintaining focus in development: "Given what you've seen so far, do you believe [this input] would help Daniel with [goal] in [target scenario]?" The client and their customers came to trust and respect our team's ability to act, or not, on their feedback with a clear lens. Phase 1 ended as a success. As a senior researcher, you leverage learning in new ways beyond specific project work. Organizations already spin off more data and knowledge (in nice functional silos) than any team can make use of — you look to unlock this knowledge, frame rich stories, and foster broad alignment. This is higher-order impact at a 'research program' level that must also be balanced with project execution. At the senior level, you look to the higher-order purpose of every project, request, and activity, often suggesting your own project work based on perceived team needs. Your sphere of awareness shifts from a pure focus on user behavior, needs, and context to encompass the organizational reality that supports or stifles meaningful work: Owning the edges of the process entails a focus on understanding what the organization needs, and ensuring it leads to meaningful action. It's work that may go far beyond the standard role descriptor of user research: your job is to wrest fruit from the garden of knowledge, but, if it's not productive, you may need to shovel fertilizer. Moving to the program level, ownership of the research process requires you to work in regular partnership with other teams to employ projects strategically. 
Senior-level technical competence is tied tightly to ways of disseminating knowledge, increasing alignment, and ultimately fostering higher-order impact than any individual project may achieve: At the senior level, your work introduces new language, shapes the organization's thinking about users, context, and work, and directs organizational inquiry to align with strategic priorities: This includes understanding how to use individual projects to inject structure and clarity into product development and to turn learning into broader organizational understanding. As you follow the path of research, a logical extension of the function includes centralizing, framing, exposing, and continually communicating your organization's point of view on the industry and the users' needs in context. Empowering teams to learn on their own and ensuring meaningful compassion for users' context and needs — at organizational scale — is the beginning of "and beyond." 
It does not, however, reduce the need for project-level execution; this will always remain a difficult balance. It may mean a strategic individual contributor role, building out a research function to take on the higher order work, or something else, entirely…
This snapshot is based on personal experience, researching researchers, related reading, and some light extrapolation. If you are a researcher on the journey, at “beyond,” or managing this journey for others, I’d love to hear about your experience. 
Special thanks to Abhik Pramanik, Christiana Lackner, Chantal Jandard, Alissa Briggs, and the PlanGrid design team.

                After delivering a world-shattering report on usability that was fully hyperlinked with reproducible methodology, the...

              • UX Planet

                Color, psychology and design

Color is a beautiful thing that creates different emotions in humans. We see things and differentiate similar objects with the use of color, and we feel colors as qualities of an object that create different emotions when seen. Colors are produced in the visual system of the human brain: strictly speaking, colors do not exist in the outside world. We create colors with our brains, which means color is subjective in nature, not objective. In design, color acts as a key element for grabbing the user's attention, and it is one of the easiest aspects for users to remember when encountering something new. The colors of a design always connect to the branding of the product: product designers use color as a way to communicate what the product is about, and users' purchase decisions often depend heavily on color. There are some findings that are quite important when it comes to color psychology. According to a study by Joe Hallock, there are significant differences in color preference between genders. The study looked at most and least favorite colors: blue was strongly favored by both men and women, while orange was the most disliked color for both. The study also found that men preferred bold colors and women preferred soft colors. These findings help explain why product designers tend to use blue as much as they can in their applications and use orange sparingly. But it is always good to use colors that do more than appeal to users' likability: they should also improve the quality of the experience and support user behavior. When it comes to application design, most people take in color before making a purchase. For example, G-Shock wristwatches are famous for their hard use and durability. 
When users go to the G-Shock website, they feel that exact sense of trust in what the wristwatch stands for. Color brings out the personality of a user when it comes to using applications: here you can see that G-Shock uses bold colors that easily grab the attention of people who like to wear cool things rather than look extremely professional. Product design is not just about being understandable but also about being discoverable. Our brains like to focus on brands that are immediately recognizable. To make a product look engaging and recognizable, you have to use colors that align with your business ideas, personality and emotion, and that differ from your competitors. Many studies have shown that color is a key factor when dealing with direct competitors. The use of color is especially common in the food and restaurant industry, where designers base the appearance of the product heavily on unique colors. McDonald's, KFC, Starbucks and other famous dine-in chains with many branches overseas focus heavily on their unique colors and designs. The important thing is to understand and focus on the customer's reaction to the colors rather than on the colors themselves: your colors should achieve the goal of what you are trying to give to the customers. There are many articles about how color impacts design. In my study, I have found the following examples of how color psychology has shaped design. Blue is one of the most commonly used colors in product design. Blue is considered to evoke emotions such as trust, safety and relaxation. Blue has different shades, each creating a different set of emotions: light blue creates calm and makes the user feel refreshed. Blue is also associated with happiness. 
A clear blue sky gives a feeling of happiness and friendliness, and that friendliness builds trust with the user. Pink is a color related to candy and sugary items. It is often called a "girl's color," but pink is not as feminine as you might think: it is a color of playfulness and joy. Black is one of the most desired colors in the spectrum. Black represents power and formality and is considered the strongest color of the spectrum; black fonts have persisted from the print age into the electronic age thanks to their ability to convey power more effectively than other colors. Red gives us a sense of importance and also notifies us of danger. Red is often used in design where the user should pay special attention: in traffic lights, for example, red signals pedestrians to stop crossing and vehicles to stop moving forward. At the same time, red is a symbol of love and passion, but most of the time it is used where the user needs to pay immediate attention. Green, for obvious reasons, is a color humans connect to the environment, trees and plants. Organizations that sell organic food and beverages often use green in their applications, and since this color is so natural to our eyes, it grabs attention when properly used. We often assume color has everything to do with branding and nothing to do with the user's emotions, but we can clearly use colors to trigger different emotions and gain the upper hand over direct competitors. Knowledge of color psychology helps designers see past common misconceptions about color, and it also makes clear that there is no universal "best" color to use in design. 
We should always focus on who we are designing for, and get their ideas and feedback in the early stages of the design process, to create a design that better supports the user experience.

                  Users are attracted by color, making it an important part of UX design. Joe Hallock's research has shown that ...