WhatTheFont

Font identification app for mobile and desktop

WhatTheFont is an instant font identification tool from MyFonts, the world’s largest font store. As designers, we often see a great-looking font in use, in print, or in an image, and have no idea what it is. And unless you happen to have a designer friend who’s great at knowing all the fonts, you’re out of luck… unless you use an app like WhatTheFont.

WhatTheFont exists as a webapp and a mobile app. I designed the experiences for both and worked closely with the developers during implementation, including acting in an official Product Owner capacity for the mobile app.

Date

2017

Company

MyFonts

Industry

Designer productivity tool linked to ecommerce website

My Contributions

Visual Design

UI & UX Design

Product Ownership

Demo of how the mobile app works

Screenshots of my design for WhatTheFont for iOS, released in October 2017. We also simultaneously released an Android version.

Originally released in 2009, the initial iteration of WhatTheFont quickly became a major traffic driver to MyFonts and held significant value for the business. By the mid-2010s, nearly a quarter of the inbound traffic to the site came in through WhatTheFont, and the service was used 1.5 million times a month.
The original WhatTheFont ran on early-2000s code and relied on the user manually identifying key letters in a font sample, after which the system would try to match those letters against the fonts in its database using vector outlines.

WhatTheFont1, aka the original WhatTheFont experience, 2009–2017. This is how the interface looked before I worked on it.

In 2017, as machine learning was advancing rapidly, the company decided to build a new backend for WhatTheFont, one that leveraged a deep learning network to identify fonts using computer vision. This project was internally known as WhatTheFont2, and the screenshots that follow show the interface I designed for it.
The new AI-powered font identification engine meant that I could radically simplify the user experience and remove several manual steps that had been required in the original version.

My design for WhatTheFont on the web, 2017.

The new WhatTheFont, powered by deep learning, did away with almost all of the manual user steps. On the back end, the neural network looks at an image, detects the text, and is able to identify the font used from even a very short sample.
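As a rough illustration of that flow (the names and types below are my own hypothetical shorthand, not the production API), it can be thought of as two stages: detection, which finds the pieces of text in an image, and identification, which matches one piece of text against the font database.

```typescript
// Hypothetical sketch of the two-stage flow. None of these names come from
// the real WhatTheFont codebase; they only illustrate the shape of the system.

interface TextRegion {
  id: string;
  x: number;      // bounding box of a detected piece of text,
  y: number;      // in image pixel coordinates
  width: number;
  height: number;
}

interface FontMatch {
  fontName: string;   // e.g. "Futura Bold"
  confidence: number; // classifier score, 0..1
}

// Stage 1: computer vision finds the pieces of text in the uploaded image.
declare function detectText(image: Blob): Promise<TextRegion[]>;

// Stage 2: the deep learning model identifies the font for one region.
declare function identifyFont(image: Blob, region: TextRegion): Promise<FontMatch[]>;
```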

At the time we released this version in 2017, the user experience represented cutting-edge AI-powered search. As I update this page in 2025, it's an understatement to say that the field of AI has grown exponentially. I strongly suspect that if WhatTheFont were reworked today, the experience could be simplified even further.

Challenges
In 2017, machine learning had recently made huge strides, and computer vision had progressed drastically compared to what it had been just a few years earlier. The newly built font identification neural net could identify most fonts in its database with high accuracy, quickly, and without human intervention. It was so good, in fact, that one of the main design challenges turned out to be intentionally slowing things down, because of limits on the number of API requests we could reasonably make at one time.

My original concept for the UX was a simple two-step process: the user uploads a photo, and the app tells them what fonts are used in it, just like asking your smart designer friend to identify fonts for you. Sounds straightforward, right? Well, in practice, that intended user flow turned out to be a little too good to be true.

In testing, we discovered that although it was technically possible for the font identification neural net to identify every piece of text in a complex image at once, doing so generated too many API requests to the server and slowed the whole app down badly. This technical limitation ended up shaping the final design of the app. My challenge was to create a user experience with just the right amount of friction: one that balanced technical needs with user needs while keeping the experience as smooth as possible.

Computers don’t see text the same way we do. While a human may look at this image and intuitively know that every word in it is set in the same font, a computer looks at it and assumes that each word could be a different font and that each one needs to be identified separately.
So in the case of a complex image, like a page from a book, the server would get bogged down as the system tried to individually identify each piece of text spotted by the computer vision API.
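To make that bottleneck concrete, here is roughly what the “identify everything at once” approach looks like, reusing the hypothetical helpers from the sketch above; every detected word becomes its own identification request.

```typescript
// Naive flow (illustrative only): identify every detected region at once.
// A page from a book can contain hundreds of words, so this fires hundreds
// of concurrent identification requests against the server.
async function identifyEverything(image: Blob): Promise<FontMatch[][]> {
  const regions = await detectText(image);
  return Promise.all(regions.map((region) => identifyFont(image, region)));
}
```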

My solution was to have the user select a single piece of text to identify. I was originally concerned that users would want to select all the text at once, but in usability testing I found that they adjusted to this workflow easily. We helped them along by pre-selecting one of the pieces of text that was detected automatically. This worked well in practice, and users were able to tap through to their font results with ease.
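Sketched in the same hypothetical terms as above, the shipped flow sends exactly one identification request per lookup: detection still finds every piece of text, but only the region the user confirms (or taps to change) goes to the identification model.

```typescript
// Final flow (illustrative only): detect all regions, pre-select one,
// and identify only the region the user ends up choosing.
async function identifySelected(image: Blob): Promise<FontMatch[]> {
  const regions = await detectText(image);
  if (regions.length === 0) {
    throw new Error("No text detected in the image");
  }

  // Pre-select a detected region so that, most of the time, the user can
  // simply tap through. (Picking the largest box is just one plausible
  // heuristic; the real pre-selection logic isn't documented here.)
  const preselected = regions.reduce((a, b) =>
    a.width * a.height >= b.width * b.height ? a : b
  );

  // The UI step where the user keeps the pre-selection or taps another region.
  const chosen = await waitForUserSelection(regions, preselected);

  // Only one identification request reaches the server per lookup.
  return identifyFont(image, chosen);
}

// Stand-in for the selection UI; implementation not shown.
declare function waitForUserSelection(
  regions: TextRegion[],
  preselected: TextRegion
): Promise<TextRegion>;
```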

At the end of the day, as user experience designers, it’s not enough to understand how the user wants to use the product; we also need to understand the benefits and limitations of the technologies we work with. I was lucky enough to work with an engaged, passionate team who were excited to help me understand how the new technology they had built worked, and it was through that collaboration that I was able to come up with design solutions to meet the technical challenges.