Where am I? 6 data and user insights that took our UX to the next level
by Niki Forecast, Senior Product Designer at what3words
Last month our team released v.4.0 of the what3words app for iOS and Android. The defining feature was a complete overhaul of the design and UX — a result of analysing our app data from Firebase and a great deal of user testing.
As one of the Senior Product Designers at what3words, I support the Product Team in evolving the design and UX of our products. These include the what3words app for iOS and Android, online map, 3WordPhoto app, and Developer Portal. Day to day I create rapid prototypes to demo new features and user flows using Axure RP. I also produce high-fidelity designs in Sketch.
In June this year, I presented the logic behind our overhaul at the Women of Silicon Roundabout. Here are the highlights:
The ‘I just don’t know why I’m here’ update
The first and most obvious change from v.3.0 to v.4.0 is what happens when you open the app for the first time. Previously, you were thrown in at the deep end: you were presented with a map, a pin and a 3 word address. Sometimes you would see a grid, sometimes you wouldn’t. And there were a lot of icons.
In the past, this was enough. But more and more, users download our app without any knowledge of our tech. Today, we need an on-boarding experience that explains our system succinctly to this audience.
Our first solution showed a search field and asked users to search for a location. The idea was to educate the user through the process of finding their first 3 word address, and then show them how to use it.
Except users with no prior knowledge of what3words didn’t understand why they needed to search for a location. They hadn’t seen the grid or a 3 word address at this point, so their experience was confusing. ‘I really just don’t know why I’m here’, one user said, during our user testing sessions.
We learnt that we first needed to show users a map with a grid; the exact location didn’t really matter. We also wanted to get users to click on a square so they could understand that each one has a unique 3 word address. Then, we could show them how to search for their own 3 word address, which they would hopefully be far more motivated to do once they knew what that actually was.
The localisation rabbit hole
An interesting lesson came from the changes to the page that displays when the user clicks the search bar. v.3.0 was looking cluttered, so we created a stripped back version that just showed examples of different addresses that could be entered, working on the assumption that this was the most useful information.
While we could localise these addresses for all our languages, in America for example, showing an address on the West coast wouldn’t feel very relevant to those using the app on the East coast. Our team wants the app to feel localised as much as possible, but we were suddenly staring down a massive rabbit hole — faced with the challenge of displaying different example addresses for every single region, in every single country, in every single language…
Instead, we developed one line of copy that was just as effective an explanation, and easy to translate: “Tip: Try entering your home street address, or a town, or place name.”
The ‘there were a lot of icons’ update
As I mentioned before, v.3.0 contained a lot of icons. UI designers love icons because they’re incredibly useful when space is precious, particularly on small phone screens. But we soon discovered that different cultures, countries, and generations all have their own interpretations of what specific icons mean. Although we couldn’t get away from them completely, we realised it’s far better to have icons accompanied by text for the key actions that we need our users to understand and use.
The ‘where on earth am I?’ update
On v.3.0 of the app, no matter how big the region you searched for, you always ended up at a magnified grid level. For example, if you searched for Oxford, you would be presented with a street in the middle of Oxford.
The logic was that what3words makes most sense to new users when they can see the grid with a square selected, and its 3 word address. But the first thing anyone did when they reached this screen was zoom out, as it almost never displayed the area they were looking for.
So, we came up with another option, in which our app would behave in a similar way to map apps, with a wider view of the region appearing on the screen. To encourage next steps, we added an instruction at the bottom of the screen telling the user to zoom in until they saw the grid.
Unfortunately, the major problem here was that the message at the bottom of the screen was completely ignored by the majority of our users in testing. This meant they didn’t understand that they had to zoom in, so they never found the grid or a 3 word address.
After some more experimentation, we arrived at the third option shown above. We learnt that by bringing our message to the top of the screen and making it more prominent, more people actually read it and followed the instruction. This is something we’re still experimenting with, as there are so many ways to explain what3words.
A/B Testing
Often when you open apps for the first time, you’ll see an introductory carousel — a few slides of images with copy explaining key features. This seemed sensible and we decided that if the cool kids were doing it, we probably should too. So we set about creating our own carousel.
However, we suddenly realised that although we thought the content of the carousel made sense and was engaging, many users might find it a distraction or irrelevant, which could result in their disengagement before they even reached the main part of the app.
This was a really difficult thing to test, as the slightly artificial nature of user testing sessions means it can be difficult to gauge true reactions; in a testing session a user is focused on the task at hand and would continue past a carousel, even if they find it confusing.
In real life, when there are other distractions to contend with, a small barrier like this could result in disengagement.
Our conclusion was: when in doubt, get data. We’re currently running an A/B test where half of new users will see the carousel and half won’t. The data will be able to tell us how well each group goes on to use the rest of the app based on our key metrics. This will determine how effective the carousel is as an onboarding tool. We’ll keep you posted on how this one goes…
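In practice, tools like Firebase handle the group assignment for you, but the underlying idea is simple: each user is deterministically bucketed into a variant so they see the same experience every time they open the app. A minimal sketch of that bucketing logic in Python, with a hypothetical `carousel_variant` helper (not what3words’ actual implementation):

```python
import hashlib

def carousel_variant(user_id: str, split: float = 0.5) -> str:
    """Deterministically assign a user to 'carousel' or 'no_carousel'.

    Hashing the user ID, rather than choosing randomly per launch,
    keeps each user in the same group across sessions.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    # Map the first 8 hex digits to a value in [0, 1]
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "carousel" if bucket < split else "no_carousel"

# The assignment is stable for a given user ID:
assert carousel_variant("user-123") == carousel_variant("user-123")
```

Because the hash is uniform, roughly half of all users land in each group, and the per-group metrics can then be compared.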
The ‘who even types these days’ update
The ability to use voice entry was available in v.3.0, but research told us this was a far more valuable and well used feature than we’d anticipated.
We thought it would be a useful feature for those who don’t like spelling, or people using the app hands-free, but we were pleased to see many of our users discovering the ease and speed of speaking a 3 word address, and choosing to use this feature over typing.
This motivated us to make what3words voice entry accessible from the main screen, and we’ve enhanced the experience with sound cues.
We’re also working on optical character recognition (OCR) technology. This means that while flicking through your Mongolia Lonely Planet Guide, you’ll be able to scan the 3 word address of a listing and it will open up on the what3words map. This, combined with voice entry, means that you’ll never need to type in a 3 word address, massively increasing the accessibility of the app.
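One piece that makes OCR and voice entry feasible is that a 3 word address has a very recognisable shape: three words separated by full stops, often written with a `///` prefix. As an illustrative sketch only (the real validation against word lists and languages is done by the what3words API, and `looks_like_3wa` is a hypothetical helper), a cheap pre-filter for OCR output might look like:

```python
import re

# Three lowercase words separated by dots, optionally prefixed "///".
# This is a rough format check, not real what3words validation.
THREE_WORD_RE = re.compile(r"^(?:///)?[a-z]+\.[a-z]+\.[a-z]+$")

def looks_like_3wa(text: str) -> bool:
    """Cheap pre-filter before sending OCR'd text to a real lookup."""
    return THREE_WORD_RE.match(text.strip()) is not None

# "filled.count.soap" is the well-known example address for the
# what3words London office.
assert looks_like_3wa("///filled.count.soap")
assert not looks_like_3wa("not a 3 word address")
```

A filter like this lets the scanner discard most of the text on a page and only send plausible candidates on for a proper lookup.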
Since releasing v.4.0 in June, v.4.0.2 is already live as we’ve made some tweaks and ironed out a few bugs. Although the jump from v.3.0 to v.4.0 has been a big iteration, we are generally more focused on smaller, more rapid iterations.
We believe the evolution of digital products should be the result of incremental changes. That said, because our user base has evolved significantly since v.3.0 was released, we needed a top-down evaluation of the UX to cater for this growing audience.
Join our mission
Interested in joining the what3words team? We’re a talented bunch of people dedicated to changing the world. Check out our open positions here.