That's how we get (just for example) all those theory-class Digital Triumphalist types (your Jay Rosens, Mathew Ingrams, and Jeff Jarvises) constantly crowing about how great social media is while also ignoring or downplaying all the awful stuff it brings us -- hate, lies, inanity.
But it's one thing to observe problems, and another to propose solutions that either wouldn't work or would be worse than the problems themselves. Morozov, a tech-skeptical writer and researcher, drew attention in January when he called on Google to "flag" search results that linked readers to, for example, crazy conspiracy sites and false information. Of course, such a scheme would be utterly unworkable for lots of reasons, chief among them that it would mean putting Google employees, or perhaps some kind of Board of Accuracy empaneled by Google, or an algorithm created by Google, in charge of determining what is true and what isn't. Scary stuff.
On Sunday, the New York Times published Morozov's latest idea: to bring in outside "auditors" to clean up the algorithms that Google, Facebook, Apple and many other companies use for things like autocompleting search results or filtering "obscene" content. Google's autocomplete, for example, filters out "Lolita," which in effect puts fans of Nabokov in the same category as pedophiles. There have been a whole bunch of incidents of Facebook cutting off pages because its algorithm detected "obscenity" where there was none. Morozov cites the example of the New Yorker's page being blocked because it posted a cartoon depicting Adam and Eve. Apple's iBooks rendered the title of Naomi Wolf's Vagina: A New Biography with "Vagina" spelled "V****a."
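The over-blocking in these incidents is the predictable output of context-blind filtering: a blocklist matcher masks a listed word wherever it appears, whether in a medical term, a novel's title, or actual obscenity. Here is a minimal sketch of that kind of filter -- the blocklist entries and the `mask` function are hypothetical illustrations, not any company's actual code:

```python
import re

# Hypothetical blocklist; real systems use far larger lists,
# but the failure mode is the same.
BLOCKLIST = {"vagina", "lolita"}

def mask(text: str) -> str:
    """Replace each blocklisted word with its first and last letters
    plus asterisks, with no regard for context."""
    def censor(match: re.Match) -> str:
        word = match.group(0)
        return word[0] + "*" * (len(word) - 2) + word[-1]

    pattern = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in BLOCKLIST) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(censor, text)

# A book title and a literary classic get censored exactly like slurs would:
print(mask("Vagina: A New Biography"))  # V****a: A New Biography
print(mask("Nabokov wrote Lolita."))    # Nabokov wrote L****a.
```

The filter has no way to distinguish a biography's title from obscenity, which is precisely why fixing individual embarrassments one at a time never fixes the underlying approach.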
Ridiculous, of course. But is the solution really to have outside auditors come in to "fix" such problems? And would it work in any case? This seems like something for the marketplace of ideas to fix: Apple was embarrassed by its mistake, so it was corrected, and, hopefully, the algorithm was tweaked to make such things less likely to happen again. Still, they probably will -- algorithms are by nature imperfect.
It might seem counterintuitive that Morozov, who is constantly on the prowl for ways to "clean up" the Internet, is here seemingly calling for ways to make it more open and less censored. But that's not really what he's doing. He is also constantly on the prowl for ways the Internet can be managed, and even controlled, and this is just another example of that.
In calling for "auditors" to periodically examine algorithms, he cites the example of disasters in financial markets caused by algorithmic trading earlier this year, after which "authorities" in Hong Kong and Australia called for periodic audits of the trading algorithms. "Why couldn't auditors do the same to Google?" he asks.
His use of the preposition "to" is interesting, for it reveals that he at least subconsciously knows (hopes?) that his idea would amount to a penalty for Google. But his use of the noun "authorities" is downright disturbing. Even if he doesn't want the government to impose audits of Internet algorithms, it's clear that he favors some kind of authoritarian rule over how the Internet operates. One difference between allowing the unfettered use of trading algorithms in financial markets and allowing them for screening Internet content is that the former can result in economic calamity, while the latter only momentarily annoys or inconveniences people.
Morozov at some point has to accept the fact that the Internet is basically just a mirror of humanity, and that lots of humans are venal, or are morons, or are emotionally maldeveloped, or some combination of those things. He also has to accept that technology is imperfect (though fixable). We have to deal with these truths, but we can't just legislate or engineer them away -- at least not by fiat.