Why Google+ works (UXtraordinary blog preview)

Excerpt from UXtraordinary:

I am thrilled to see Andy Hertzfeld’s social circles concept implemented so beautifully in Google+. My semi-educated guess is that empowering users to define access to themselves according to their own purposes and needs—in context—will build engagement and strong loyalty. Why? Because it comes naturally.

Too many social media sites, including many social networks, pay only lip service to how users think about groups, let alone to empowering user privacy. The business sees the user as one of its members, and builds a mental model in which the user is the center of a series of widening circles. “Empowerment” over content privacy is typically limited to permissions control within those circles.

Users don’t see themselves that way. People think of sharing information in terms of a constantly shifting calculus of need, purpose, and ability. We trust some friends more than others; we have acquaintances who know a great deal about us but whom we barely know. It’s my belief that these clashing mental models are one of the primary reasons social media and social networks fail to engage.

Mental models of user grouping


How not to be overwhelmed by data

When dealing with vast amounts of data, how to prioritize it? Lois Beckett reports this approach, from John O’Neil, curator of the New York Times’ topic pages:

The most pressing criterion for what gets updated first, O’Neil said, is whether “we would feel stupid not having it there.”

Guess that’s as good a standard as any ;–)

Beckett, Lois (23 Feb. 2011). The context-based news cycle: editor John O’Neil on the future of The New York Times’ Topics Pages, Nieman Journalism Lab, Harvard.


About those Facebook messages…

Discover Magazine shared the basics of Facebook’s new messaging service. The highlights:

  • Everything, from texting to email to IM to Facebook posts, is served in one large thread.
  • Everything is saved. Mark Zuckerberg is reported as saying, “Five years from now, a user can have this full rich history with your friends and the users around you.”
  • Extremely large attachments and storage are allowed.
  • Microsoft is pairing up to allow document viewing of a variety of file types.
  • Facebook will prioritize your content, based on your social network and other indicators.
  • The data is Facebook’s as well as yours—content will be used to guide personalized advertising, etc.

I don’t see why what I currently do – forward everything to Gmail and label it as flexibly as I want – isn’t just as good. This allows me to:

  • Track things as far back as 15 Nov. 2004, when I started Gmail. I certainly don’t mind not having Facebook posts captured in it—there’s too much spam there already—but I do have notifications in Gmail, with content, from a variety of sites I use to communicate with friends, colleagues, etc. Personally, I dislike the One Giant Thread approach, possibly because it diminishes my enjoyment of a thing by distracting me.
  • Allow Google to prioritize email.
  • Receive targeted advertising according to my content, which I don’t thrill to—but it’s certainly not original to Facebook.
  • Save extremely large files.
  • View a broad variety of files in the integrated Google Docs.

It seems none of this is new, and that using Facebook, I’d be starting from scratch re: richness of content, content taxonomy, etc. Why on earth would I switch?


The W3C semantic data extractor reveals the logic behind your HTML/XHTML code.


Why does this matter? Because as search engines and other information trawlers grapple with more and more content, the ability to parse and understand that content continues to become more and more important.
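The idea can be illustrated with a toy extractor. This is a hypothetical sketch using Python’s standard html.parser, not the W3C tool itself: it pulls the heading outline out of a page, the sort of structural logic a semantic extractor reveals to search engines and other trawlers.

```python
# Minimal illustration of semantic data extraction (not the W3C tool itself):
# pull the heading outline out of an HTML document, the way an information
# trawler might begin to parse a page's logic.
from html.parser import HTMLParser

class OutlineParser(HTMLParser):
    """Collect (level, text) pairs for every h1-h6 element."""
    def __init__(self):
        super().__init__()
        self._current = None   # heading level we are inside, if any
        self._buffer = []
        self.outline = []      # list of (level, heading text)

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self._current = int(tag[1])
            self._buffer = []

    def handle_data(self, data):
        if self._current is not None:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if self._current is not None and tag == f"h{self._current}":
            self.outline.append((self._current, "".join(self._buffer).strip()))
            self._current = None

page = """
<html><body>
  <h1>All About Birds</h1>
  <h2>Family Corvidae</h2>
  <h3>Steller's Jay</h3>
</body></html>
"""

parser = OutlineParser()
parser.feed(page)
for level, text in parser.outline:
    print("  " * (level - 1) + text)
```

A page with a semantically sound heading hierarchy yields a clean, indented outline; a page styled with bare divs yields nothing, which is the whole point.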


Envisioning a new title bar

Devin Coldewey had a fascinating suggestion on UXMag: turn the URL field into an active breadcrumb field. It’s a wonderful idea, but it ignores the usefulness of the URL bar for bots, and for those sites still showing the actual URL (most), it’s a helpful clue to visitors: yes, I really am on the site to which I thought I was linking.

But Coldewey’s concept, transforming part of the browser into something offering new functionality, was inspired, as was the bread crumb suggestion. So here’s what I proposed:

I love the concept. But if we’re going to re-conceptualize portions of the browser, why not the browser title area? You might do the following:

–Leave the URL in the URL bar. It’s an important place to verify you are where you think you linked, and this is helpful for bots as well.

–Add functionality to the title bar to link to breadcrumbs. It’s already good SEO for titles to go from specific to broad, e.g.:

     Steller’s Jay | Family Corvidae | All About Birds

So, we could add functionality with links, where a browser with title breadcrumb capability will turn those into links, and a browser without will merely read them as a title. The code could simply check browser version, and if breadcrumb title f(x) were true, it would display the above as links, thusly:

     Steller’s Jay | Family Corvidae | All About Birds

Love the idea of leveraging different areas to be more functional. Pity that “nav” is already snapped up by HTML5 ;–)
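The parsing behind this idea can be sketched in a few lines. This is a hypothetical illustration, not a real browser feature: the function name, the slug scheme, and the URLs are all invented for the example.

```python
# Hypothetical sketch of the title-to-breadcrumb idea: a breadcrumb-capable
# browser could split a pipe-delimited title into linked crumbs, while an
# older browser would simply show the plain title. The slug scheme below is
# an assumption for illustration only.
def title_to_breadcrumbs(title, base_url):
    """Split a specific-to-broad title into broad-to-specific crumbs."""
    segments = [s.strip() for s in title.split("|") if s.strip()]
    segments.reverse()  # broadest category first, the page itself last
    crumbs = []
    for label in segments:
        slug = label.lower().replace(" ", "-").replace("'", "")
        crumbs.append((label, f"{base_url}/{slug}"))  # (link text, link target)
    return crumbs

title = "Steller's Jay | Family Corvidae | All About Birds"
for label, href in title_to_breadcrumbs(title, "https://www.allaboutbirds.org"):
    print(f"{label} -> {href}")
```

A browser without the capability loses nothing: it reads the same string as an ordinary title, which is what makes the convention backward-compatible.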

P.S. All About Birds is one of my favorite sites, run by the Cornell Lab of Ornithology. Enjoy!

Coldewey, D. Making the URL Bar Useful Again: Where the breadcrumb should have been all along. UX Magazine, 9 September 2010.


Am I there yet? How progress bar dynamics drive users

Gavin Davies wrote a nice piece, Are we nearly there yet? The role of the progress bar on the web, discussing the four requirements of a useful progress bar in tracking software task completion. Per Davies, a good progress bar should be:

  • Accurate – watching a bar fill up gradually only to chug to a halt at around 90% can infuriate all but the most Zen. Worse still on the hair ripping scale are bars that fill up, only to empty and begin anew!
  • Responsive and smooth – the bar should be updated regularly to show that things are still working. This means that, on the web, we should update progress bars via Ajax rather than hefty page reloads. Research shows that a linear, consistent progress increase is better than the bar jerking around like a malfunctioning robot dancer.
  • Precise – the bar should show an estimate of time remaining, and perhaps other data such as percent or file size remaining so the user knows if he or she should start any long books in the interim.
  • Appropriate – before using a progress bar, consider carefully whether it is appropriate, both in terms of User Experience and technical feasibility.

I’d expand on this and add that the progress bar is useful beyond tracking the completion of a software task; it also works for personal accomplishments: “Am I there yet?” moments, if you will. For example, the LinkedIn progress bar prompts profile completeness.

The progress bar is actually a game element, which triggers the twin desires to complete and to compete. It’s invaluable in educational, social media, and other contexts. As Seth Priebatsch of SCVNGR said, “Humans love progress bars. If you see a progress bar, you want to complete it.”

The same requirements Davies suggests still apply, but the nature of the progression dynamic changes. Instead of time remaining, the user may have tasks remaining, or user-generated content to supply, or a certain amount of time using the application or exploring the site. These progress bar milestones can themselves incorporate game elements, becoming a quest for which progress bar completion is only one of many rewards.
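As a sketch of the milestone dynamic, here is a hypothetical, LinkedIn-style profile-completeness calculation. The milestone names and weights are invented for illustration; the point is that progress is driven by tasks completed rather than time elapsed.

```python
# Milestone-style progress: percent complete comes from weighted tasks, not
# elapsed time. Milestone names and weights are invented for illustration.
MILESTONES = {
    "add_photo": 15,
    "write_summary": 20,
    "add_position": 25,
    "add_education": 15,
    "get_recommendation": 25,
}  # weights sum to 100

def profile_progress(completed):
    """Return percent complete and the milestones still remaining."""
    done = sum(MILESTONES[m] for m in completed if m in MILESTONES)
    remaining = [m for m in MILESTONES if m not in completed]
    return min(done, 100), remaining

percent, todo = profile_progress({"add_photo", "write_summary"})
print(f"Profile {percent}% complete; next up: {', '.join(todo)}")
```

Surfacing the remaining milestones alongside the percentage is what turns the bar into a quest: the user always knows the next step toward completion.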

Apart from games themselves, game mechanics (or gamification) have primarily been used in the educational field, although they’ve been spreading through interaction design since the nineties. For those interested in exploring the field, a good start is Clark Aldrich, a true guru of designing “serious games” and simulations.

Cross-posted from UXtraordinary, my professional blog. Related UXtraordinary post: Fun is Fundamental.


Making all your points with graphs

James Hrynyshyn, a science journalist, pointed out on the Class M blog that a recent climate research graph was poorly designed.

The paper, Drought-Induced Reduction in Global Terrestrial Net Primary Production from 2000 Through 2009, demonstrated that anomalous CO2 and anomalous NPP (net primary production, described by Robert Simmon of NASA’s Earth Observatory as “a measure of the amount of carbon a plant takes from the atmosphere and uses to grow”) were negatively correlated. In other words, not only was increased CO2 not acting as “plant food,” it was undermining NPP overall.

In order to demonstrate just how strong this correlation was, the researchers inverted CO2 emissions on their graph:

Original NPP and inverted CO2 graph as shared on Class M science blog

As Hrynyshyn pointed out, this could readily mislead non-scientists (or the occasional absent-minded scientist) into thinking the CO2 anomaly was positively correlated with the NPP anomaly, instead of the opposite. That would be a significant misunderstanding, and in a politically controversial area such as climate change, a serious problem.

Robert Simmon, of NASA’s Earth Observatory, provided an alternative graph, clearly demonstrating the negative correlation:

Robert Simmon's version of NPP and inverted CO2 graph, as shared on NASA's Earth Observatory blog

But if I understand this correctly, the original graph had a useful purpose; it just went about it poorly. The point of the original graph was not to mislead about the sign of the correlation, but to demonstrate its strength. That is a useful visualization in the right context; it just can’t stand by itself.

So why not add a third line? Show the actual anomalous NPP and CO2 numbers with differently colored solid lines, as in the second version, and add a clearly different third line (perhaps dotted, but in the same color as the CO2 line to associate them), labeled CO2 (Inverted to demonstrate absolute correlation).

Revised NPP, CO2, and inverted CO2 graph, showing both actual data and degree of correlation

If you show the actual numbers clearly, then equally clearly distinguish the inversion, you can make both points without misleading, or allowing your graph to be misused.
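To make the statistics behind the graph concrete, here is a toy demonstration with invented numbers (not the paper’s data): Pearson’s r is negative for NPP versus CO2, and flips to positive once CO2 is inverted, which is exactly what the inverted line is meant to convey visually.

```python
# Toy demonstration of the graphing point (invented numbers, not the paper's
# data): anomalous NPP and anomalous CO2 are negatively correlated, so
# plotting CO2 *inverted* makes the two curves track each other visually.
# Computing Pearson's r both ways shows what the inverted line really says.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

npp_anomaly = [0.5, 0.2, -0.1, -0.4, -0.6, -0.3, 0.1]   # made-up values
co2_anomaly = [-0.4, -0.2, 0.0, 0.5, 0.7, 0.2, -0.1]    # made-up values
co2_inverted = [-v for v in co2_anomaly]

print(f"NPP vs CO2:          r = {pearson(npp_anomaly, co2_anomaly):+.2f}")
print(f"NPP vs inverted CO2: r = {pearson(npp_anomaly, co2_inverted):+.2f}")
```

Inverting a series flips the sign of r without changing its magnitude, so the inverted line honestly shows the strength of the relationship while the labeled actual line preserves its direction.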

Note: I posted this suggestion as a comment on the Earth Observatory post, and David Powell, another commenter, expressed concern that the line could still be misunderstood. Powell wrote, “people would assume that a third line meant a third set of data and not just the same data plotted differently.” To show how I think that can be avoided, I created this example, which I think clearly distinguishes the inversion from the actual data.


Simplicity is not a goal but a tool

Simplicity in design is not a goal but a tool. The goal is the need of the moment: to sell a product, to express an opinion, to teach a concept, to entertain. While elegance and optimal function in design frequently overlap with simplicity, there are times when simplicity is not only impossible but actively hurts usability. Yet many designers do not understand this, and over the years I’ve seen the desire to “keep it simple, stupid” lead to poor UX.

I was therefore glad to see Francisco Inchauste’s well-thought-out, longer take on Einstein’s “as simple as possible, but no simpler” remark.

From the column:

As an interactive designer, my first instinct is to simplify things. There is beauty in a clean and functional interface. But through experience I’ve found that sometimes I can’t remove every piece of complexity in an application. The complexity may be unavoidably inherent to the workflow and tasks that need to be performed, or in the density of the information that needs to be presented. By balancing complexity and what the user needs, I have been able to continue to create successful user experiences.

Plus, as I’ve commented before, messy is fun!

[Cross-posted from UXtraordinary.]


All over the world on Sunday, May 2, at 15:00 UTC, photographers took shots wherever they were. Some were planned, some not (there are rainbows, etc., that could not be arranged).

The New York Times has gathered it all into a global gallery – stacks of photos reaching out to the sky, browsable by stack. Beautiful images, and once again, beautiful UX from the New York Times.


The biggest barrier to UX implementation

My response to a LinkedIn UX Professionals question, Why they don’t like to spend or invest in the User Experience tasks?

My personal experience has been that ignorance is the largest barrier to UX implementation. While there are many exceptions, too often developers, marketers, executive management, and others with a large degree of control over UX strategy and tactical development feel that user experience is simply “common sense.” Because they are themselves users, they believe they have insight into the process. This is natural.

It’s the responsibility of UX professionals to educate them and evangelize the value of user experience. (Though it’s nice if you can get executive support, it’s frequently not there.) At my current company, I approached this from several angles:

  • I held one-on-one meetings with stakeholders and others, seeking to understand their needs and start a conversation about possible UX solutions.
  • I wrote and presented brown bags, open to all, on subjects like Why Taxonomy Matters: Taxonomy and the User Experience, in order to promote understanding of UX and its considerations.
  • I introduced concepts designed to make people think more from the user perspective. For example, like many sites, we’re interested in user-generated content. I expanded this to user-generated experience (a concept I’d already developed from previous social media work and user analysis), and measured and discussed user-generated activity. The point, of course, was that thinking about user activity required thinking about user flow and perspective. Eventually key stakeholders were talking about UGA as a matter of course, and we even discovered ways to convert some UGA into UGC.

This was successful enough that UX became a standard consideration in not just design, but product strategy. It is of course beyond your control what others do with your information – but you have to provide it!

People understand success. Show your co-workers and management how UX solves their problems. Provide numbers, using performance indicators that matter to your audience. Present before/after case studies. Remember to focus on solutions, not problems (never show a problem for which you don’t have a suggested solution). In short, provide the best possible user experience for your internal customers.


Ahmed Kamal, the person who posed the question, responded positively to my comment:

Alex O’Neal, I raise my hat! I appreciate it really! your comments are really touching, reflecting a real long experience, comprehensive and concluding the problem and how to solve it!!

Aw, shucks :–)

[Cross-posted on UXtraordinary.]


The BBC’s style guide. An exceptional, best-in-class example of what a style guide should do.

Also of interest: a BBC blog post on their goal of a new global visual language.


Julia Rubiner of Editorial Emergency wrote a great follow-up to her article, Stop Abbreviation Abuse Now! (to which I replied). It’s a nice article, and not just because Rubiner called me her “favorite taxonomist” in it ;–)

Read Call-back: My Epic Fail Failure.


Good blog from founder on IxDA Sturm und Drang

Off-site comment capture of the moment:

David Malouf, one of the founders of IxDA (the Interaction Design Association), has heard a lot of grief about a recent site redesign. He shared his thoughts on the subject in an open letter. Definitely worth reading.

My comment:

Personally, I see great improvement in the new site. I wasn’t able to contribute to it myself, but I’m impressed at what was done through donated, as-possible time by many people.

I think a lot of the hostility is probably misdirected energy from the economy, a lack of empowerment at work, a need to set themselves apart in the job market, etc. Sadly, it’s always easier to put down another’s work and show “expertise” with criticism, rather than to do something well yourself. (This is not to say that criticism isn’t a valid skill, just that when it exists in a vacuum it’s not very helpful.)

What about raising funds via IxDA-backed training? Leverage local IxDA members to provide reliable training in specific areas. IxDA sets the standards and accepts the payments; the trainer gets the street cred of IxDA backing, and half the money (via PayPal or whatever).

I’ll volunteer to author/present on how taxonomy affects interaction design :–)


Evolutional UX

[Cross-posted from UXtraordinary.]

I subscribe to the school of evolutional design. In evolution, species change not to reach for some progressively-closer-to-perfection goal, but in response to each other and their ever-changing environment. My user experience must do likewise.

Rather than reach for pixel-perfect, which is relatively unattainable outside of print (and is probably only “perfect” to myself and possibly my client), I reach for what’s best for my users, which is in the interests of my client. I expect that “best” to change as my users change, and as my client’s services/products change. This approach makes it much easier to design for UX.

Part of evolutional design is stepping away from the graceful degradation concept. The goal is not degraded experience, however graceful, but differently adapted experience. In other words, it’s not necessary that one version of a design be best. Two or three versions can be equally good, so long as the experience is valuable. Think of the differences simply resizing a window can have on well-planned liquid design, without hurting usability. Are the different sizes bad? Of course not.

This approach strongly supports behavioral design, in which design focuses on the behavior and environment of the user. You might be designing for mobile, or a laptop, or video, or an e-newsletter; you might be designing for people being enticed to cross a pay wall, or people who have already paid and are enjoying your service. You might be appealing to different demographics in different contexts. Evolutional UX thinks in terms of adaptation within the digital (and occasionally analog) ecology.

Evolutional UX also reminds the designer that she herself is part of an evolving class of worker, with many species appearing and adapting and mutating and occasionally dying out. We must adapt, or fall out of the game—and the best way to do that is to design for your ever-changing audience and their ever-changing tools.

And now, some words of wisdom from that foremost evolutional ecologist, Dr. Seuss. Just replace the “nitch” spelling with “niche” and you’ve got sound ecological theory, as every hermit crab knows.

And NUH is the letter I use to spell Nutches,
Who live in small caves, known as Nitches, for hutches.
These Nutches have troubles, the biggest of which is
The fact there are many more Nutches than Nitches.
Each Nutch in a Nitch knows that some other Nutch
Would like to move into his Nitch very much.
So each Nutch in a Nitch has to watch that small Nitch
Or Nutches who haven’t got Nitches will snitch.


Technological evolution in the Cambrian age

Another comment left in the wild, this time in response to the deeply wonderful Irving Wladawsky-Berger’s post about The Data Center in the Cambrian Age. I strongly recommend it. In the meantime, here’s what I had to say:

Great post! Reminds me of Stephen Jay Gould’s Wonderful Life, which discusses the explosion of fauna of the Burgess Shale. That book transformed my understanding not just of biology, but of creativity and human development. Since then I’ve observed this effect in other areas.

For example, movies of the ’20s and ’30s used techniques set aside in later decades, as the industry determined what they thought most appealed to the market. Ironically, some of these were then called innovative when re-used by more modern directors. Typography has gone through a similar pattern of evolution as well.

The interesting thing about an explosion of human technological innovation is that, unlike competing animal species, whose success is due as much to chance as to adaptation, humans can at least partly evaluate the value of a new idea. But in the marketplace, established companies using older approaches can crush new ideas and better approaches. Humans have to leverage the internet, governments, and their purchasing power to make sure we know *all* our options, and can choose the best one for our needs, not the one best for an abstract corporate entity’s profit line.


Page title versus Call to action: who wins?

Off-site comment of the moment:

An IxDA member posted a question about whether or not the header of a page should take precedence over the primary call to action. My response is below. Essentially, I believe the very act of forcing a choice between the two is a mistake.

Re: [IxDA Discuss] Visual Importance of Page Titles
Tue, 24 Feb 2009 13:52:37

I think you can, and should, have both. The page title can be obvious and clear to the user, while there can also be a clear call to action (your marketing splash). Your title is not only the description of the page’s content, it’s part of the navigational scent of the site, and as such it can’t be left out. This in no way stops you from having a clear “center” to the page’s content. In fact, having a primary focus to your content is better than scattering the focus.

If marketing wants to leave the title off completely, that’s a problem not only in usability but in SEO. There’s a reason the H1 tags are important to search engines – they’re important to users! And as search engines overcome the Heisenbergian issue of depth of analysis vs breadth of pages, things like semantically sound page construction will become even more important.


P.S. As an aside, I would recommend trying to eliminate either-or questions in design. Frequently there are many choices, not just two, and in the few cases where there truly are only two options, the attempt to find more will be the exception that proves the rule. :–) The tyranny of dichotomy limits much more than it resolves.


Messy is fun: stepping away from Occam’s Razor

The scientific method is the most popular form of scientific inquiry, because it provides measurable testing of a given hypothesis. This means that once an experiment is performed, whether the results were negative or positive, the foundation on which you are building your understanding is a little more solid, and your perspective a little broader. The only failed experiment is a poorly designed one.

So, how to design a good experiment? The nuts and bolts of a given test will vary according to the need at hand, but before you even go about determining what variable to study, take a step back and look at the context. The context in which you are placing your experiment will determine what you’re looking for and what variables you choose. The more limited the system you’re operating in, the easier your test choices will be, but the more likely you are to miss something useful. Think big. Think complicated. Then narrow things down.

But, some say, simple is good! What about Occam’s razor and the law of parsimony (entities should not be unnecessarily multiplied)?

Occam’s razor is a much-loved approach that helps make judgment calls when no other options are available. It’s an excellent rule of thumb for interpreting uncertain results. Applying Occam’s razor, you can act “as if” and move on to the next question, and go back if it doesn’t work out.

Still, too many people tend to use it to set up the context of the question, unconsciously limiting the kind of question they can ask and limiting the data they can study. It’s okay to do this consciously, by focusing on a simple portion of a larger whole, but not in a knee-jerk fashion because “simple is better.” Precisely because of this, several scientists and mathematicians have suggested anti-razors. These do not necessarily undermine Occam’s razor. Instead, they phrase things in a manner that helps keep you focused on the big picture.

Some responses to Occam’s concept include these:

Einstein: Everything should be as simple as possible, but no simpler.

Leibniz: The variety of beings should not rashly be diminished.

Menger: Entities must not be reduced to the point of inadequacy.

My point is not that Occam’s razor is not a good choice in making many decisions, but that one must be aware that there are alternative views. Like choosing the correct taxonomy in systematics, choosing different, equally valid analytic approaches to understand any given question can radically change the dialogue. In fact, one can think of anti-razors as alternative taxonomies for thought: ones that let you freely think about the messy things, the variables you can’t measure, the different perspectives that change the very language of your studies. You’ll understand your question better, because you’ll think about it more than one way (thank you, Dr. Minsky). And while you’ll need to pick simple situations to test your ideas, the variety and kind of situations you can look at will be greatly expanded.

Plus, messy is fun.

Note: Some of these thoughts sprang from a letter to the editor I posted on Salon.

Cross-posted on