Collaborative design

I’ve been talking about this a bit with a few people, and my friend Cory recently1 sent me an essay from way back in 2002 by Havoc Pennington on why free software often has poorly designed UI.

One of his points is that “designers can’t submit code patches.” I have long felt this was a major problem facing not just UI designers in OSS, but any designers working collaboratively.

Basically, the reason a relatively flat collaborative style works so well for software is that it’s atomic. Code is made up of functions and objects, which in turn are made up of lines, which in turn are made up of characters. The way most languages are designed, lines are the perfect atomic unit for versioning, which is why version management tools like Git work so well for large teams. When someone makes a change to someone else’s code, it’s really easy to see exactly what the change was.
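The atomicity argument can be made concrete with a line-based diff. Here is a minimal sketch using Python’s standard difflib (rather than any particular version control tool, whose output is similar in spirit): a one-line change in a function shows up as exactly one removed line and one added line.

```python
import difflib

# Two versions of the same function; only the middle line differs.
before = """def greet(name):
    message = "Hello, " + name
    print(message)
""".splitlines(keepends=True)

after = """def greet(name):
    message = f"Hello, {name}!"
    print(message)
""".splitlines(keepends=True)

# A unified diff pinpoints exactly which lines changed, which is what
# makes line-oriented review and merging tractable for large teams.
diff = list(difflib.unified_diff(before, after,
                                 fromfile="a/greet.py", tofile="b/greet.py"))
print("".join(diff))
```

The unchanged lines act as context; only the changed line is marked with `-`/`+`, so a reviewer who has never seen the file can still evaluate the change in isolation.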

That makes it really easy for someone who’s never contributed to a project before to fix bugs and make small changes. UI design2 is different. Like any kind of design, it requires big-picture vision. And that is exactly what top-down organizational structures or solo designers are good at.

Is this just a matter of tools that are inadequate for the task of distributed design, or is this really a fundamental aspect of design that makes it poorly suited to large, distributed teams?

  1. Apparently I started this post on April 27, 2011 despite not getting around to finishing it till now.

  2. UI design is distinct from UI implementation. Nudging something a few pixels, slightly modifying the behavior of UI elements, etc., is UI implementation, and is something that is at least somewhat atomic.

Adium Weather Sparklines

Today’s programming diversion: weather sparklines in Adium!

Weather sparklines screenshot

Get it and make it better.

In defense of Google

Social media has exploded this afternoon with people upset about Google shutting down Google Reader. Well, I’m about to do something I very rarely do: defend Google.

As you might assume, I don’t object to proprietary services; I object to proprietary data and lock-in. Even services you pay for can be shut down, though it’s more likely when providing said service isn’t aligned with a company’s business model. By letting people export their feed lists, Google is doing this responsibly. (RSS itself is, obviously, an open format.)

Even if you run something like TT-RSS, the hosting provider (which you pay) could stop operating. Host from a box in your living room? Great, until your ISP caps your upload bandwidth. Autonomy is a lovely idea, but unless you conduct all of your communication via ham radio1, you can pretty much forget about it.

Which reminds me: why are web apps such a good idea in the first place? Just use a native feed reader. (You know, local binary, the whole nine yards.)

  1. For the record, my call sign is KC8TKP. ;)

Ideology versus the user experience

The open-source community taking ideological stances to the detriment of users and the user experience is harmful and counterproductive.

Adium, a popular chat/instant messaging client for Mac that uses libpurple as a backend, supports the XMPP chat protocol. XMPP is a standard, so any implementation should theoretically conform to it. But when the most widely used implementation is XMPP-compatible rather than compliant, developers should feel obligated to produce software that behaves the way users broadly expect.

Google Chat (aka Gtalk aka Gchat aka that little chat thingy in the corner of most people’s email) is based on the XMPP protocol, but the way it handles statuses is different from the XMPP specification. Specifically, Google’s Idle is XMPP’s Away, Google’s Away is XMPP’s Do not disturb, and Google’s invisibility simply doesn’t work.
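XMPP presence defines a small set of `<show/>` values (`chat`, `away`, `xa`, `dnd`, with no `<show/>` meaning available), and Google Talk collapsed that set. A minimal sketch of the kind of translation shim a client would need, where the function and mapping names are my own illustration, not libpurple’s actual API:

```python
# Google Talk statuses mapped onto XMPP <show/> values (RFC 6121),
# per the mismatch described above:
#   Google "Idle" behaves like XMPP "away"
#   Google "Away" behaves like XMPP "dnd" (Do Not Disturb)
GOOGLE_TO_XMPP = {
    "idle": "away",
    "away": "dnd",
}

def google_status_to_xmpp(status: str) -> str:
    """Translate a Google Talk status into the nearest XMPP <show/> value."""
    status = status.lower()
    if status == "invisible":
        # Invisibility has no XMPP <show/> equivalent, which is part of
        # why it simply didn't work over standard XMPP clients.
        raise ValueError("invisibility has no XMPP <show/> equivalent")
    return GOOGLE_TO_XMPP.get(status, status)

print(google_status_to_xmpp("Idle"))  # -> away
```

The translation is lossy in both directions (XMPP’s `xa` and `dnd` have no distinct Google counterparts), which is exactly the “conjoined features” problem the libpurple quote below complains about.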

Adium’s developers have taken the stance that “Adium has never supported invisibility on GTalk and will not as long as libpurple does not support it. They dislike the way it works (for good reasons #p11433).”

The “good reasons” given by the libpurple team come down to developer ideology:

Popularity and user-base have never been the sole (or even particularly major) driving factors in pidgin development (as a whole, individual developers can be motivated by anything they want to be motivated by).

How would you present a configurable option for something like this? Where would you put it? What would you call it? “Enable shared status support and invisibility”? No one will understand that. It is exactly the conjoined nature of these two features that I dislike the most.

The fact that Google Talk decided the choices of available statuses in the XMPP protocol were too great and “simplified” them down thereby breaking things like DND and XA is most unfortunate, but not exactly a greatly motivating factor for working around that brokenness by introducing as many side-effects as problems one removes.

To me, this is as if the web developers of ten or fifteen years ago simply decided that they didn’t care if what they produced looked wrong for the vast majority of users because IE was wrong. Just because you don’t like the way something is implemented doesn’t mean you should break the user experience.

When “doing it right” and “making the user happy” are at odds, the open-source world needs to redefine “doing it right”.

Technology, concentration, and delayed gratification

Yesterday I received the following email from a friend who has been babysitting for her younger cousins while their parents are out of town.

I’d like to take this opportunity to be grateful that I was not born a few years later, so as a student my will to do homework was not undermined by portable technology: when I was 13, I had to use a desktop computer to procrastinate and play online, instant messaging, etc. No privacy, the chair wasn’t comfy — it was the best I had. My 13-year-old cousin has been struggling to write a PARAGRAPH for a paper for almost a week. His mother instructed me to take away his iPhone, so he couldn’t play games on it instead of work. He didn’t fight me, returned to his room and played on his tablet instead. When I took that away a few days ago, he produced a second tablet. I confiscated that today. He has now written 6 sentences, and I feel disproportionately proud.

Technology (and parents who provide it so lavishly!) is making concentration even harder, and ADD does not need this kind of help. Oy.

As someone who struggled (and continues to struggle) with concentration, I had no trouble as a kid finding ways not to get work done, but rather than blaming technology, I had no one to blame but myself: I was (am…?) the world champion at staring off into space. For me, an important life lesson was that if I did my work now, and I mean really sat down and tried hard to do it now, I could do other things later. It was Parkinson’s Law in action. I put it into effect in high school by signing up for as many extracurricular activities as I could, which forced me to do my work more efficiently. I kept it up through college, and while the workload in grad school is much more unevenly distributed over time, I still try to keep myself busy. If I don’t, I’ll procrastinate until the end of time.

This has nothing to do with technology, though, and everything to do with behavioral psychology. People are bad at intertemporal choice. We hate delayed gratification, but part of growing up is learning how to live with it: we can’t do the fun things right now all of the time. That is the lesson parents should be teaching their children. If a kid (or adult) doesn’t want to do something, they will always find a way to avoid it. Technology might make finding distractions easier, but ultimately the agency lies with us, not our toys.

Book of Mormon production review

Last night I saw the Chicago production of The Book of Mormon! As expected, it was really fun and really high energy. And, of course, hilarious. It was amazingly written and produced, but I (of course) have some critiques.

The staging was full of fun little subtleties and jokes, and the pit and pit mix were quite good. The set and lighting design were also quite good. I particularly liked the lighting in the second half of Sal Tlay Ka Siti. I don’t usually like fake stars, but it was very tasteful and actually rather beautiful.

However, the tightness among the singers (mostly the Elders) and between them and the pit left something to be desired. I feel like it got a little better in the second half, but they got off to a weak start. The Africans seemed tighter, at least. I could have used a little more vocal in the mix, or just a little more high end on the vocals to add some definition. That actually may have been over-corrected, because at first it felt like the front fills were a little harsh from where we were sitting (fourth row of the dress circle). Not that the mixer can hear that from where they sit. Of course, more enunciation would have done the trick, too.

And the best vocal work in the show (by far, I thought) was Nabulungi.

As a side comment/thought: does anyone know who owns the rights to the show? As far as I can tell, it’s independent, which means they had to pay cash for the rights to characters from Walt Disney, Star Wars, Lord of the Rings, and Star Trek. That can’t be cheap.

White flight, or early adopters of a new old economy?

What follows is my response to Whitney Erin Boesel’s post, and her response to me. I encourage you to read her post and comment first.

Thanks for your reply. I’m glad to know there’s a bit of hyperbole going on in this post!

You’re absolutely right that developer culture/values isn’t raceless, but then again, neither is any occupation or community of practice.

While you may be right that the short-term effect may be a return to the early culture — and demographics — of Twitter, I think the other side is that we (I’m a backer) see ourselves on the forefront of an enlightened movement that (speaking for myself, anyway) we hope will spread beyond those “enlightened few”.

Now, before you accuse me of being elitist, let me point out that such “enlightenment” stems from awareness, which is influenced by circumstance, which naturally varies with SES, occupation, community of practice, etc. I’m not personally a Twitter developer, but I’m active enough in the community to be aware of the issues that have angered Dalton and others. Developers’ grievances with Twitter aren’t that they aren’t being catered to hand and foot, and they certainly aren’t that the culture created by Twitter users has shifted, but that Twitter is being hostile toward them as a result of trying to monetize a free-to-use system. Twitter app developers made Twitter what it is by creating the ecosystem that made it so useful in the early days. Now Twitter is turning toward advertisers and turning its back on developers. It’s the much-written-about shift from platform to media company.

And as a PhD student, I also think a lot about issues of data ownership and privacy, particularly with respect to the corporatocracy. So, like you, but, importantly, unlike many “regular” users regardless of race or SES, an opportunity to disrupt the “you’re the product” economy struck me as immensely appealing.

The “geek culture/values” you write about are rooted in wanting to keep Twitter a content- and user-agnostic platform, not in caring who the users are. I don’t know about you, but to me, “the beauty of a follow model” has been that I can choose — or, dare I say, curate — my Twitter community. And that’s the whole point: the way I use Twitter is as a platform, as infrastructure. It’s not a platform company’s job to curate the content I see; that’s the job of a media company. That’s why developers — and many users, like me — are upset.

As with any platform, my daily life isn’t impacted by who else uses the infrastructure. I don’t particularly care who else has a phone, who else uses electricity, or who else drives a car. What I care about in a macro sense is equality of access.

That distinction, I think, is what is somewhat lost in your post. Is the cost to get in high right now? Yes. Remember the cell phone commercial in the mid-’90s that was a take-off on the Grey Poupon ads? “Do you have a cellular phone?” “Well so do I!” I don’t have the data in front of me, but I think the latest Pew numbers show that 50% of blacks have smartphones, while only 30-something% of whites do. If less than $5/month is unaffordable to someone with a smartphone, I’ll be honest: I’d question that person’s priorities.

So, like I said in my original comment, I absolutely am concerned about online privacy becoming a privilege rather than a right. We’re at the very beginning of what I hope will be a broader market shift away from treating personal information as currency and back to treating, well, currency as currency. New sociotechnical systems will have to be built to support that. Let those who are traditionally early adopters be the ones who have to put up with the bugs, the fail whales (or whatever they’ll be called), and everything else that goes along with immature systems. They know what they’re getting into. And, yes, they can afford to put up the “are you serious” money.

But down the road a little bit? “Are you on” “Well so am I!”


Google Doodle Moog

I’ll admit it: hearing about today’s Google Doodle Moog synth pulled me in from DuckDuckGo. After messing around a bit this morning, I think I’ve figured out all the knobs in the Oscillators section. I’m hoping to figure out the details of the other sections later.

Basically, there are three oscillators. The two big knobs control the tuning offset from the primary (i.e. nonadjustable) oscillator. The three knobs on the left control the octave of the fundamental of each of the three (measured in feet as a reference to tube length in an organ, I’m assuming); the knobs on the right switch the shape of the wave (sawtooth, square, etc.) of each oscillator, affecting the harmonic structure.
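The footage convention turns into simple arithmetic: 8′ is the reference octave, and each halving of the nominal pipe length doubles the pitch, with detune applied as an equal-tempered ratio on top. Here is a sketch of that math; the 440 Hz reference and the semitone-based detune knob are my assumptions, not the Doodle’s actual internals.

```python
A4 = 440.0  # assumed reference pitch at the 8' setting

def oscillator_freq(footage: float, detune_semitones: float = 0.0) -> float:
    """Frequency of an oscillator given its footage setting and detune offset."""
    octave_ratio = 8.0 / footage  # 4' -> one octave up, 16' -> one octave down
    detune_ratio = 2.0 ** (detune_semitones / 12.0)  # equal-tempered offset
    return A4 * octave_ratio * detune_ratio

print(oscillator_freq(8))   # 440.0
print(oscillator_freq(4))   # 880.0 (one octave up)
print(oscillator_freq(16))  # 220.0 (one octave down)
```

On this model the two big tuning knobs just scale `detune_ratio` for the second and third oscillators, so slight nonzero offsets produce the beating and thickness the instrument is known for.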

Notes in versus

A summary of my notes from Patrick’s MTS talk:

  • ICT skill vs. interactional skill
  • Skills vs. norms
  • Norms vs. affordances
  • Tech properties vs. norms
  • Functionality vs. norms
  • Materiality vs. practice
  • Knowledge vs. practice
  • Understanding vs. enacting

Social Norms and Cyberasociality

Back in the day, it was assumed that people couldn’t form social relationships online because as a medium, text didn’t transmit the nonverbal cues necessary to support relationship development and maintenance. Then, in the mid-1990s, Joe Walther proposed the Social Information Processing (SIP) model of relationship development.

A big piece of SIP is the claim that the rate of social information transmission through text is lower than through other, more cue-rich media (like face-to-face), but that over time just as much social information can be transmitted through a text-based channel. The model goes on to suggest that this is possible because users adapt the limited medium of text in ways that enable richer communication, using what have come to be called CMC cues (e.g. capitalization, letter repetition, emoticons, chronemics). I call this the temporal cue density hypothesis, and it’s what I’m working on testing empirically now.

Studies that look for CMC-cue effects on social outcomes such as trust, likability, and rapport (e.g., Byron & Baldridge, 2007; Walther & D’Addario, 2001) generally work like this:

  • show someone a message
  • ask them what they thought of the message sender
  • manipulate the cues
  • show someone else the message
  • ask them what they thought of the message sender
  • do math

Now, the simplicity of this story may be about to be disrupted. Studies like these all have an implicit underlying assumption: all, or at least most, people within a culture interpret social cues in similar ways1. Therefore, interpretation of CMC cues is assumed to be universal.

Cyberasociality, an empirically backed concept proposed by Zeynep Tufekci, holds that some people’s inability or unwillingness to feel socially engaged through online media is a fundamental social-psychological, or even perceptual, trait.

She describes it this way: language is a primarily aural construct, with reading and writing added on top as a brain-hack of visual symbolic abstraction, and some people, regardless of other cognitive abilities, have difficulty reading because of dyslexia. In much the same way, sociality evolved as a primarily — and primally — face-to-face ability. Like literacy, being social in text with abstract representations of other people is a brain hack, and one that not everyone’s brain is equally suited to perform.

If this is true, the very conception of online social norms as, well, normative may be broken.

  1. This type of study does often test for interactions with personality traits like extraversion, but in light of Cyberasociality, those traits may not be the real reason for differences between subjects in the same experimental condition.