Today’s programming diversion: weather sparklines in Adium!
Social media has exploded this afternoon with people upset about Google shutting down Google Reader. Well, I’m about to do something I very rarely do: defend Google.
As you might assume by my support of App.net, I don’t object to proprietary services; I object to proprietary data and lock-in. Even services you pay for can be shut down, though it’s more likely when providing said service isn’t aligned with a company’s business model. By letting people export their feed lists, Google is doing this responsibly. (RSS itself is, obviously, an open format.)
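For what it’s worth, the exported feed list is just OPML, so moving to another reader really is easy. Here’s a minimal sketch (my own, not anything Google ships) of pulling feed URLs out of such an export; the filename is hypothetical:

```python
import xml.etree.ElementTree as ET

# Pull every feed URL out of an OPML export.
# "subscriptions.xml" is a hypothetical filename; adjust to taste.
tree = ET.parse("subscriptions.xml")
feeds = [
    outline.get("xmlUrl")
    for outline in tree.iter("outline")
    if outline.get("xmlUrl") is not None
]
for url in feeds:
    print(url)
```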
Even if you run something like TT-RSS, the hosting provider (which you pay) could stop operating. Host from a box in your living room? Great, until your ISP caps your upload bandwidth. Autonomy is a lovely idea, but unless you conduct all of your communication via ham radio[1], you can pretty much forget about it.
Which reminds me: why are web apps such a good idea in the first place? Just use a native feed reader. (You know, local binary, the whole nine yards.)
[1] For the record, my call sign is KC8TKP. ;)
When the open-source community takes ideological stances at the expense of users and the user experience, the result is counterproductive for everyone.
Adium, a popular chat/instant messaging client for Mac that uses libpurple as a backend, supports the XMPP chat protocol. XMPP is a standard, so any implementation should theoretically conform to that standard. But when the most widely used implementation is XMPP-compatible without being XMPP-compliant, developers should feel obligated to produce software that behaves in the broadly accepted way, even if that way deviates from the spec.
Google Chat (aka Gtalk aka Gchat aka that little chat thingy in the corner of most people’s email) is based on the XMPP protocol, but the way it handles statuses is different from the XMPP specification. Specifically, Google’s Idle is XMPP’s Away, Google’s Away is XMPP’s Do not disturb, and Google’s invisibility simply doesn’t work.
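To make the mismatch concrete, here’s a minimal sketch (my own illustration, not Adium or libpurple code; the function name is hypothetical) of what translating Google’s statuses into XMPP presence values looks like:

```python
# XMPP presence <show/> values per RFC 6121; an absent <show/> means "available".
XMPP_SHOW_VALUES = {"away", "chat", "dnd", "xa"}

# How Google Talk repurposes them, per the mapping described above:
#   Google "Idle" goes on the wire as XMPP "away"
#   Google "Away" goes on the wire as XMPP "dnd"
#   Google invisibility has no working XMPP mapping at all
GOOGLE_TO_XMPP = {
    "idle": "away",
    "away": "dnd",
}

def xmpp_show_for_google_status(status):
    """Return the <show/> value to send for a Google-style status,
    or None for plain "available" (no <show/> element at all)."""
    if status == "available":
        return None
    if status == "invisible":
        raise ValueError("invisibility has no working XMPP mapping on Google Talk")
    show = GOOGLE_TO_XMPP[status]
    assert show in XMPP_SHOW_VALUES
    return show
```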
Adium’s developers have taken the stance that “Adium has never supported invisibility on GTalk and will not as long as libpurple does not support it. They dislike the way it works (for good reasons #p11433).”
The “good reasons” given by the libpurple team come down to developer ideology:
Popularity and user-base have never been the sole (or even particularly major) driving factors in pidgin development (as a whole, individual developers can be motivated by anything they want to be motivated by).
How would you present a configurable option for something like this? Where would you put it? What would you call it? “Enable shared status support and invisibility”? No one will understand that. It is exactly the conjoined nature of these two features that I dislike the most.
The fact that Google Talk decided the choices of available statuses in the XMPP protocol were too great and “simplified” them down thereby breaking things like DND and XA is most unfortunate, but not exactly a greatly motivating factor for working around that brokenness by introducing as many side-effects as problems one removes.
To me, this is as if the web developers of ten or fifteen years ago had simply decided that they didn’t care whether what they produced looked wrong for the vast majority of users, because IE was the one that was wrong. Just because you don’t like the way something is implemented doesn’t mean you should break the user experience.
When “doing it right” and “making the user happy” are at odds, the open-source world needs to redefine “doing it right”.
Yesterday I received the following email from a friend who has been babysitting for her younger cousins while their parents are out of town.
I’d like to take this opportunity to be grateful that I was not born a few years later, so as a student my will to do homework was not undermined by portable technology: when I was 13, I had to use a desktop computer to procrastinate and play online, instant messaging, etc. No privacy, the chair wasn’t comfy — it was the best I had. My 13-year-old cousin has been struggling to write a PARAGRAPH for a paper for almost a week. His mother instructed me to take away his iPhone, so he couldn’t play games on it instead of working. He didn’t fight me; he returned to his room and played on his tablet instead. When I took that away a few days ago, he produced a second tablet. I confiscated that today. He has now written 6 sentences, and I feel disproportionately proud.
Technology (and parents who provide it so lavishly!) is making concentration even harder, and ADD does not need this kind of help. Oy.
As someone who struggled (and continues to struggle) with concentration, I too had no problem not getting work done as a kid, but rather than blaming technology, I had no one to blame but myself: I was (am…?) the world champion at staring off into space. For me, an important life lesson was that if I did my work now — and I mean really sat down and tried hard to do it now — I could do other things later. It was Parkinson’s Law: work expands to fill the time available for its completion, so I shrank the time available. I put that into effect in high school by signing up for as many extracurricular activities as I could, which forced me to do my work more efficiently. I kept it up through college, and while the workload in grad school is much more unevenly distributed over time, I still try to keep myself busy. If I don’t, I’ll procrastinate until the end of time.
This has nothing to do with technology, though, and everything to do with behavioral psychology. People are bad at intertemporal choice. We hate delayed gratification, but part of growing up is learning how to live with it: we can’t do the fun things right now all of the time. That is the lesson parents should be teaching their children. If a kid (or adult) doesn’t want to do something, they will always find a way to avoid it. Technology might make finding distractions easier, but ultimately the agency lies with us, not our toys.
Thanks for your reply. I’m glad to know there’s a bit of hyperbole going on in this post!
You’re absolutely right that developer culture/values isn’t raceless, but then again, neither is any occupation or community of practice.
While you may be right that the short-term effect may be a return to the early culture — and demographics — of Twitter, I think the other side is that we (I’m a backer of App.net) see ourselves on the forefront of an enlightened movement that (speaking for myself, anyway) we hope will spread beyond those “enlightened few”.
Now, before you accuse me of being elitist, let me point out that such “enlightenment” stems from awareness, which is influenced by circumstance, which naturally varies with SES, occupation, community of practice, etc. I’m not personally a Twitter developer, but I’m active enough in the community to be aware of the issues that have angered Dalton and others. Developers’ grievances with Twitter aren’t that they aren’t being catered to hand and foot, and they certainly aren’t that the culture created by Twitter users has shifted, but that Twitter is being hostile toward them as a result of trying to monetize a free-to-use system. Twitter app developers made Twitter what it is by creating the ecosystem that made it so useful in the early days. Now Twitter is turning toward advertisers, turning its back on developers. It’s the much-written-about shift from platform to media company.
And as a PhD student, I also think a lot about issues of data ownership and privacy, particularly with respect to the corporatocracy. So, like you but, importantly, unlike many “regular” users regardless of race or SES, I found an opportunity to disrupt the “you’re the product” economy immensely appealing.
The “geek culture/values” you write about are about wanting to keep Twitter a content- and user-agnostic platform, not about caring who the users are. I don’t know about you, but to me, “the beauty of a follow model” has been that I can choose — or, dare I say, curate — my Twitter community. And that’s the whole point: the way I use Twitter is as a platform, as infrastructure. It’s not a platform company’s job to curate the content I see; that’s the job of a media company. That’s why developers — and many users, like me — are upset.
As with any platform, my daily life isn’t impacted by who else uses the infrastructure. I don’t particularly care who else has a phone, who else uses electricity, or who else drives a car. What I care about in a macro sense is equality of access.
That distinction, I think, is what is somewhat lost in your post. Is the cost to get in high right now? Yes. Remember the cell phone commercial in the mid-’90s that was a take-off on the Grey Poupon ads? “Do you have a cellular phone?” “Well so do I!” I don’t have the data in front of me, but I think the latest Pew numbers show that 50% of blacks have smartphones, while only 30-something% of whites do. If less than $5/month is unaffordable to someone with a smartphone, I’ll be honest: I’d question that person’s priorities.
So, like I said in my original comment, I absolutely am concerned about online privacy becoming a privilege rather than a right. We’re at the very beginning of what I hope will be a broader market shift away from treating personal information as currency and back to treating, well, currency as currency. New sociotechnical systems will have to be built to support that. Let those who are traditionally early adopters be the ones who have to put up with the bugs, the fail whales (or whatever they’ll be called), and everything else that goes along with immature systems. They know what they’re getting into. And, yes, they can afford to put up the “are you serious” money.
But down the road a little bit? “Are you on App.net?” “Well so am I!”
Back in the day, it was assumed that people couldn’t form social relationships online because as a medium, text didn’t transmit the nonverbal cues necessary to support relationship development and maintenance. Then, in the mid-1990s, Joe Walther proposed the Social Information Processing (SIP) model of relationship development.
A big piece of SIP was that the rate of social information transmission in text is lower than in other, more cue-rich media (like face-to-face conversation), but that over time just as much social information can be transmitted through a text-based channel. SIP goes on to suggest that this is possible because users adapt the limited medium of text in ways that enable richer communication, using what have come to be called CMC cues (e.g., capitalization, letter repetition, emoticons, chronemics). I call this the temporal cue density hypothesis, and it’s what I’m working on empirically testing now.
The standard study design goes something like this:

- show someone a message
- ask them what they thought of the message sender
- manipulate the cues
- show someone else the message
- ask them what they thought of the message sender
- do math (a toy sketch of the last two steps follows)
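For the curious, here’s what the “manipulate the cues” and “do math” steps might look like in practice. This is a toy sketch of my own (the cue patterns, ratings, and names are all hypothetical, and it assumes SciPy is available), not code from any actual study:

```python
import re
from scipy import stats

# Toy detectors for two classic CMC cues: letter repetition and emoticons.
LETTER_REPEAT = re.compile(r"(\w)\1{2,}")        # "sooooo", "hmmmm"
EMOTICON = re.compile(r"[:;=][\-']?[\)\(DP]")    # ":)", ";-)", ":D"

def strip_cues(message):
    """The 'manipulate the cues' step: remove cues to create a control message."""
    message = LETTER_REPEAT.sub(lambda m: m.group(1), message)
    return EMOTICON.sub("", message).strip()

original = "That talk was sooooo interesting :)"
control = strip_cues(original)   # "That talk was so interesting"

# Hypothetical 1-7 warmth ratings from two groups of raters,
# one group per version of the message.
ratings_with_cues = [6, 7, 5, 6, 7, 6, 5, 7]
ratings_without_cues = [4, 5, 4, 3, 5, 4, 4, 5]

# The 'do math' step: compare impressions across conditions.
t, p = stats.ttest_ind(ratings_with_cues, ratings_without_cues)
print(f"t = {t:.2f}, p = {p:.4f}")
```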
Now, the simplicity of this story may be about to be disrupted. Studies like these all have an implicit underlying assumption: all, or at least most, people within a culture interpret social cues in similar ways. Therefore, interpretation of CMC cues is assumed to be universal.
Cyberasociality is an empirically backed concept proposed by Zeynep Tufekci, which holds that some people’s inability or unwillingness to feel socially engaged by online media is a fundamental social-psychological, or even perceptual, trait.
She describes it this way: language is a primarily aural construct, with reading and writing added on top as a brain hack of visual symbolic abstraction, and some people, regardless of other cognitive abilities, have difficulty reading because of dyslexia. In much the same way, sociality evolved as a primarily — and primally — face-to-face ability. Like literacy, being social in text with abstract representations of other people is a brain hack, and one that not everyone’s brain is equally suited to perform.
If this is true, the very conception of online social norms as, well, normative may be broken.
We’ve been hearing a lot of bad stuff about Facebook lately, and I’ve been giving Facebook plenty of grief myself. I hate that I’m their product, not their customer; I hate what it does to my sanity (it’s too easy to become reliant on it for social affirmation); and I hate what it can do to my ability to focus (brb, checking FB…).
In this post, though, I want to address the other side of an internal debate: why I am still on Facebook. The primary reason is that, put simply, I derive utility from the service. Lately, the cost–benefit analysis has been coming down on the side of keeping my account open. (The other reason, of course, is that I need to have access to Facebook professionally.)
For someone relatively new to a community, Facebook plays three important roles: phatic communication, event awareness, and ad-hoc organizing.
Many of the people I’ve been meeting, I’ve met through events in the Jewish community. That means I’m affiliating myself with a community that I’ll only see once a week, and that is at least forty-five minutes away by train. The phatic function of Facebook posts can be a way to establish stronger connections — or at the very least help ensure I exist more than just once a week.
Chicago has a rather dynamic community; there’s almost always some sort of service or dinner or event to attend on Friday nights. The way to find out about these events, though, is almost exclusively through Facebook. Finding out about a group or organization and Liking it is, if not the only way, certainly the most efficient way to stay in the loop about goings-on. Plus, many request RSVPs so they can plan appropriately. Ad-hoc organizing (e.g. “Anyone want to go to…”) happens less often, but it does happen, usually in the realm of finding out about shows to attend.
And of course, as one whose academic interests span the user interfaces, social behavior, and broader implications of systems like Facebook, I do have something of a professional obligation to at least keep tabs on what’s happening in the world of Facebook. Or at least that’s what I tell myself. ;)
Personally, though, I eagerly await the day I feel comfortable enough to close my Facebook account.
Have any economists modeled the consuming public/workforce as a public good?
It seems to me that corporations are playing a game in which each individually wants to pay less money and employ fewer people, while simultaneously hoping that other corporations will keep employing people and paying them enough to maintain a customer base for its products. In other words, a social contract.
What we’re seeing now is the result of too many corporations defecting over the past 30 years. A tragedy of the commons, where we’re the commons.
The flip side of this chain reaction, of course, is that consumers demand lower and lower prices because they can’t afford what they used to. In order to compete, companies are forced to send manufacturing jobs to countries where labor costs are lower, so even more people can’t afford what they used to.
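To make the defection logic concrete, here’s a toy model (entirely my own, with made-up numbers) in which each firm’s revenue depends on total employment (the commons), but each firm saves money individually by cutting payroll:

```python
# Toy model: N identical firms. Each firm's revenue is a share of total
# wages paid economy-wide (the commons), but each firm's costs are its
# own payroll. All numbers are purely illustrative.

N_FIRMS = 10
WAGE = 1.0            # cost per worker employed
SPEND_RATE = 0.15     # fraction of total wages that flows back to each firm

def profit(my_workers, total_workers):
    revenue = SPEND_RATE * WAGE * total_workers   # demand from the commons
    return revenue - WAGE * my_workers

# Everyone employs 100 workers:
everyone_cooperates = profit(100, N_FIRMS * 100)   # 150 - 100 = 50

# I cut payroll to zero while the other nine keep employing:
i_defect = profit(0, 9 * 100)                      # 135 - 0 = 135

# Everyone cuts payroll:
everyone_defects = profit(0, 0)                    # 0 - 0 = 0

print(everyone_cooperates, i_defect, everyone_defects)   # 50.0 135.0 0.0
```

Defecting dominates individually (135 beats 50), yet mutual defection (0) leaves every firm worse off than mutual cooperation (50). That’s the tragedy.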
How do we stop it?
Ah, the buddy list. Remember when we actually liked advertising to our friends that we were online, and maybe even wanted to chat? That was high-tech — in 1995. The buddy list (also known as presence) is a kind of social transparency, and while we still need social transparency mechanisms built into our communications media, presence is no longer the appropriate mechanism. Presence comes from a time when the normal state of affairs was that you were unavailable, usually because in order to be available, you had to be at a desktop computer with a modem, and had to dial in to your ISP. Available meant connected, and connected meant available. When always-on connections were still novel, the away message became all the rage. (Remember when, in undergrad, we would regularly leave our computers on all night as answering machines?) And presence became more sophisticated, using not just away messages, but idle states and times. But in many cases, just being visible on a buddy list is too much presence.
At the other end of the spectrum, historically speaking, was SMS. Because phones travel with their owners, it was assumed that one was always connected (and therefore available) via SMS, so presence was unnecessary. Yet people aren’t (or at least don’t want to be) always available.
Now that the nominal assumption is one of connectedness, connectedness and availability can no longer be assumed to be the same. And because connectedness is the assumed state, it doesn’t need to be advertised.
This, it seems to me, sets the historical context for a new (except for BBM) trend displacing presence: notifications of engagement. Rather than an explicitly articulated status, action (or inaction) by the receiver signals availability to the sender. These notifications do away with status, but provide the social transparency needed to manage sender expectations. Or, more simply, the sender can see whether their message has been received and read.
While right now this is almost exclusively used in mobile-to-mobile systems (BBM, Kik, WhatsApp, etc.), it has always bothered me that there is no desktop client for any of these systems. Finally, Apple — who pioneered always-available, presence-free, telephone-like communication with FaceTime — is poised to bring such a system to the desktop (as well as iOS) with iMessage. It’s instant messaging without presence, with delivery, read, and typing notifications, that works on the desktop and on mobile devices.
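The underlying idea is simple enough to sketch. This is a hypothetical illustration of my own (not Apple’s actual design): nobody advertises a status; the sender’s client just tracks what has happened to each message after the fact:

```python
from enum import Enum, auto

# Presence-free social transparency: the receiver never broadcasts a
# status, and the sender learns about engagement per message instead.
class MessageState(Enum):
    SENT = auto()
    DELIVERED = auto()   # reached the receiver's device
    READ = auto()        # receiver actually opened it

class Conversation:
    def __init__(self):
        self.outgoing = {}            # message id -> MessageState
        self.peer_is_typing = False   # the third engagement signal

    def send(self, msg_id):
        self.outgoing[msg_id] = MessageState.SENT

    def on_receipt(self, msg_id, state):
        # Receipts only ever move forward: SENT -> DELIVERED -> READ.
        if state.value > self.outgoing[msg_id].value:
            self.outgoing[msg_id] = state

c = Conversation()
c.send(1)
c.on_receipt(1, MessageState.DELIVERED)
c.on_receipt(1, MessageState.READ)
print(c.outgoing[1])   # MessageState.READ
```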
Personally, I can’t wait.
The next time you think, “Oh, I should tweet that!”, don’t. Experience life as private moments rather than as a performance. #mindfulness
– Me, oddly unironically on Twitter
Some astute observers of Noah (often called “friends” or “stalkers”) may have noticed that I have been tweeting much less than usual. This is deliberate, and I have to admit, I like it.
I like it for two reasons. First, and this is a little embarrassing to admit, there’s a component of self-validation that goes along with tweeting. I put myself out there, and I want to know that people appreciate what I have to say. By tweeting more, I hope for (and even sometimes get) more @replies, click-throughs, and retweets. Tweeting sets into motion a whole set of other behaviors: engaging in more Twitter conversations, checking Favstar.fm, checking to see if I’ve been retweeted, checking click-through stats on bit.ly. Sure, it’s nice to be loved, but constantly hitting reload to see if I’m getting the kind of social affirmation I’m looking for is neither healthy nor a good use of time. Less tweeting means less potentially coming back at me, and that can be a good thing.
Second, Twitter changed the way I live, or at least the way I conceive of life. With Twitter, especially when used for personal rather than professional content, I found myself constantly thinking, “Ooh! I should tweet that!” Have a clever thought? “Ooh, I should tweet that!” Doing something other people would think is cool? “Ooh, I should tweet that!” Just get some exciting news? “Ooh, I should tweet that!” Read an interesting article? “Ooh, I should tweet that!” It’s ridiculous, really. Life becomes performative rather than introspective.
Enter Day One. It’s like Twitter, but to yourself, and with no character limit. Brilliant! Hmm…I think there’s a name for such a thing. Oh, right…a journal! I’ve never been much of a journaler or diarist, but this thing I can do. Day One gets part of the credit: I’d love to see an analysis of the app and how its design can influence and encourage behavior…but that’s a different story. (Hint: the small size of the quick entry box makes it feel more Twitter-like and less intimidating.)
But the bigger reason I think I’m so into Day One is that tweeting has trained me to live not only performatively, but with a critical, reflective eye. So many of my “Ooh, I should tweet that!” moments I don’t actually tweet, either because I don’t think my audience would be interested, or because they just plain aren’t appropriate for Twitter (or anyplace else outside my own brain, for that matter). But with an outlet for them, those thoughts are captured.
And despite the mental barrier to entry being lowered by the nice, small text box and by Twitter having established norms of observation, reflection, and conciseness, I often find myself expanding on those thoughts, blowing through Twitter’s character limit, sometimes even going on for hundreds of words.
And the cool thing is that, even without the possibility of social feedback, tweeting to myself is just as emotionally rewarding as tweeting to the world — if not more so.