Speculative Diction

Digital moralism


On Wednesday this week, my Twitter feed was swamped first with posts about the newly elected Pope (which I expected). What I didn’t expect was that by the time evening rolled around, the Pope tweets were being eclipsed by reactions to Google’s decision to “kill” its RSS aggregation tool, Reader.

Now, I use Reader a lot, every day, to sort through piles of higher-education news, so I was annoyed by this announcement. It means I need to seek out a new tool and set it up, not just for my personal use but for the professional accounts I run as well. Thankfully, feeds can be exported, so the actual transfer shouldn’t be a big deal. There are other options available, and more are being built. For me the issue is more the irrationality of dismantling a perfectly good tool (as when Tweetdeck was bought and destroyed by Twitter), but I’m leaving that aside for now.
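(As an aside, for anyone curious what the export actually involves: a Reader subscription list comes out as an OPML file, a simple XML format most aggregators can read, so pulling the feed URLs back out can even be scripted. A minimal sketch using only Python’s standard library — the sample document and feed URL here are illustrative, standing in for a real exported file:)

```python
# Minimal sketch: extract feed URLs from an exported OPML subscription
# list (the XML format Google Reader and most aggregators use).
# SAMPLE_OPML stands in for the contents of a real exported file.
import xml.etree.ElementTree as ET

SAMPLE_OPML = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <body>
    <outline text="Higher ed" title="Higher ed">
      <outline type="rss" text="University Affairs"
               xmlUrl="https://www.universityaffairs.ca/feed/" />
    </outline>
  </body>
</opml>"""

def feed_urls(opml_text):
    """Return the xmlUrl of every RSS outline in an OPML document."""
    root = ET.fromstring(opml_text)
    return [node.attrib["xmlUrl"]
            for node in root.iter("outline")  # outlines may be nested
            if node.get("type") == "rss"]

print(feed_urls(SAMPLE_OPML))
```

A new aggregator would import the same OPML file directly, so the scripting is optional — the point is only that the subscriptions themselves are portable.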

What I want to address is the theme of digital moralism, which is of course nothing new, but which made another appearance in the Google Reader discussion. Some of the online responses I saw were both predictable and deeply frustrating in a specific way. The line of argument often begins with “I told you so”, as in, “I told you that using a tool from an Evil Corporation like Google would come to no good”. This is followed by, “If you just do X” (get your own website or server; write your own app), your problem is solved. Then: “What, you don’t know how to code? Everyone should know how to code. Why not teach yourself? It’s easy.”

There is an ethical edge to the responses that begins to come through as a judgement. Do you really want to support big corporations that dominate the Internet and academic publishing? Are you really so lazy that you can’t take time to investigate all your options, or learn how to create your own, instead of using commercial tools?

Given that ethical edge, it’s worth asking why, other than laziness, so many people might be using these tools, and why they may not have the resources (time, money) or even the desire to choose differently. This has been addressed in a number of helpful posts, including those from Miriam Posner and Lee Skallerup Bessette, which discuss access as it relates to gender and other forms of privilege as part of the context of coding and of recognition in the Digital Humanities. Ernesto Priego has also written about the various facets (and degrees) of academics’ “digital literacy”. These discussions have become more visible as part of the ongoing construction of disciplinary boundaries in that field: whose work is most valued as DH work, and why?

Another argument frequently raised is that we should make coding a part of the school curriculum. Perhaps if many education systems were not already struggling with their current responsibilities, that would be an option. But it would require a curricular redesign that presupposes an awful lot of resources on the part of public schools and teachers. It also adds to the responsibilization of schools for solving problems that a particular group sees as absolutely pressing but which, in comparison with other issues, may not be the most urgent. I’m not saying curriculum shouldn’t change to reflect the realities of daily interactions with technology; I think it should. But the way the solution is framed also needs to take into account the context of schooling, and the political struggles often involved in claiming certain subjects as “essential” over others.

That’s one interesting thing about digital moralism. It may be the “right” thing, but no one is won over to the cause when they feel chastised for using a non-preferred tool, or for publishing in non-open-access (OA) journals, or for thinking “Python” refers to Monty Python. I am pro-OA, and against corporate monopolies, and I still feel alienated by high-handed retorts that assume everyone has what is necessary to implement the solutions considered most appropriate. What I need is another option, something other than just “learn to code”, something that takes into account my context and its potential limitations.

In my case, the main reason I don’t have a personal blog at all is that my blog is here, on the University Affairs website. This is why it’s a problem when I’m told to “just post your papers on your personal blog, instead of on [insert existing web tool here]”. I would love to know how to program, mainly because I’d like to build the Ultimate Twitter App, but this is a daunting task for a complete beginner, and when I barely have time to work on my dissertation, it’s also not realistic. In fact, it’s only relatively recently that I have had the means to access any of this information at all, and I freely admit that most of my self-discipline has been (and still is) tied up in my PhD work.

If you have the capacity and resources to do things on your own rather than using pre-fab online tools, then I congratulate you and also thank you for any contributions you make. Those of us without coding skills need to be appreciative of the work done by those who have them; without it, we could not do what we do online. We can also support their efforts in other ways. But do we all have the opportunity to learn the skills they have? Yes–and no. The availability of resources online is not enough. This is not just about whether we “really” want to gain skills; framing it in those terms is a means of assigning responsibility and then making a judgement about other people’s commitment to a cause. It also assumes, as Trent M. Kays points out, that we cannot have any understanding and appreciation of tools–or use them critically–without also knowing how to create them ourselves; and while I believe creation brings a special appreciation, it’s not the only kind.

Melonie Fullick
Melonie Fullick is a PhD candidate at York University. The topic of her dissertation is Canadian post-secondary education policy and its effects on the institutional environment in universities.

  1. Jo VanEvery / March 15, 2013 at 17:10

Seriously, there is learning to code, and there is knowing when it makes sense for most people to use professionals and off-the-shelf solutions.

    I get annoyed when a style of jeans that I like and fits me well gets discontinued by the manufacturer. I know how to sew and I’m good at it. I still would rather buy jeans than make them myself. I see no difference between this and what you report here.

    We can’t all be good at everything. We all have limited time and attention. “Make your own” is not a real answer.

  2. Holden / March 18, 2013 at 07:51

Yes, “learn to code” isn’t helpful, although I do think it’s not unreasonable to expect some proficiency. I don’t expect every driver to know how to rebuild an engine, but I do figure they’ll know how to fill the gas tank or perhaps check the oil.

  3. Russell Campbell / March 20, 2013 at 14:11

    I find this discussion fascinating, not only for the computer science aspects, but for the implications of managing multiple proficiencies. It becomes an exercise in frustration trying to balance any other significant interests outside my Ph.D. program, and I have a lot of them.

Teaching programming skills does not actually require anything more than paper and pencil; with these alone, one can, ideally, learn the *limits* of computers and their computational power. Realistically, it can be taught as a supplement to mathematics: most people learn long division, for example, which happens to be an algorithm. Hopefully, elementary and high schools will adapt at some point, but I don’t think it is necessary to segregate it into its own subject throughout all grade levels.

    One professor in my department has gone as far as claiming the current study of Computer Science to be the equivalent of studying English during its initial popularity. But when research is starting to augment human brains with hardware, the comparison between the two subjects begins to fade. I’ve found I can hardly keep track of smart phones surpassing desktops, let alone the important new developments in technology, even with RSS feeds. Hyperbole, but it’s gradually becoming less and less so, and ironically, I’m starting to prefer printed books for more thoughtful, digested information.

    Always a pleasure to read your articles, Melonie!
