Anderson’s editorial “All That Glisters Is Not Gold: Web 2.0 and the Librarian” appeared five years ago, in the December 2007 issue of the Journal of Librarianship and Information Science. It was intended to promote discussion on “rationalizing the implications” of Web 2.0 for libraries. Given the accelerated pace at which the digital world has evolved since, Anderson’s editorial seems quaint, even nostalgic. It predates the Gibbon-esque Decline and Fall of social media (see below), the financial crisis of 2008 (and beyond), the hacktivist group Anonymous, and the Mexican standoff between copyright lobbyists, copyleftists, and peer-to-peer pirates.
As part of his definition of Web 2.0, Anderson rightly highlights the constant iteration cycle common to Web 2.0 services as an important characteristic. The evolving nature of information within this context has been one of the driving forces behind the exponential increase in its usage. For example, English Wikipedia is edited on average 180,000 times every day, 72 hours of video are uploaded to YouTube every minute, and an average of 4,000 tweets are created every second. In simple terms, the internet you use tomorrow will be a vastly different place from the internet you use today. In 2011, Ian Hickson announced that HTML5 would from then on be known simply as HTML, as the working group was preparing for the next iteration of the HTML standard before the previous one had even been published (Hickson, 2011). The speed at which the living Web grows and evolves is the thrust of Anderson’s conversation about libraries as part of the Web 2.0 phenomenon. Information professionals are being asked to grow and evolve at the same pace, though, as some point out, such speedy professional development is unsustainable: “One person captured the difficulty of fitting professional development into a busy schedule: ‘I could always learn more about everything I do, but there are serious time constraints.’ Five people recommended that they or their coworkers should be cloned.” (Burke, 2009, p. 7)
Perhaps the most interesting eventuality Anderson overlooked is the malfeasance of institutions and organisations in possession of user-generated data. Profiteering from freely given private data is a grave concern at the moment. Telefónica’s Dynamic Insights unit, which sells anonymised data from mobile phone customers, is one example, as are Facebook’s shaky attempts to monetise social media. No one could have anticipated the leviathan that personal digital data would become. With each keystroke our ‘digital dossier’ becomes fatter with exploitable data. To use Anderson’s term, not every river of users enjoys being “fished.” And just because there is more data to fish from does not necessarily mean the fish are more edible: for every credit card purchase there are hundreds of tweets about the weather, Instagrams of breakfast, and seemingly indecipherable acronyms (YOLO, IMHO, BRB, etc.).
Anderson incorrectly dates and attributes the invention of the term Web 2.0 to Dale Dougherty in 2004. In fact, the term first appeared in 1999, in the April issue of Print magazine (DiNucci, 1999). It was in this article that DiNucci began to use the term and to explore what the next iteration of the World Wide Web might look and feel like, importantly predicting web pages that would behave like applications. If Berners-Lee first proposed what would grow to be the World Wide Web in 1989 and DiNucci anticipated Web 2.0 in 1999, it follows that the next iteration may already be taking place. In fact, the seeds for the Semantic Web (or Web 3.0, as it is sometimes known) were planted at about the same time as those for Web 2.0; some ideas just take longer to germinate (Bikakis, Tsinaraki, Gioldasis, Stavrakantonakis, & Christodoulakis, 2012). O’Reilly (and Anderson) emphasised user-generated content and practices such as peer production, folksonomy, and viral marketing as central to the idea of Web 2.0 (O’Reilly, 2005). Indeed, the big names in the Web 2.0 game wouldn’t exist were it not for their users, who steadfastly contribute the very content that makes each service usable and worthwhile. The next step, developed by the World Wide Web Consortium (W3C), is the removal of human effort from the equation entirely. The Semantic Web is conceived of as the sum of all data available, processed by machines instead of people (Berners-Lee, Hendler, & Lassila, 2001).
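To make that idea concrete, here is a minimal sketch of Semantic Web data using Python and the rdflib library. The library choice and the example URI are my own illustrative assumptions, not anything prescribed by Berners-Lee, Hendler, and Lassila (2001); the point is simply to show a statement a machine can store, merge, and query without human interpretation.

```python
# A minimal sketch of Semantic Web data: describing Anderson's editorial
# as RDF triples. Assumes the third-party rdflib package
# (pip install rdflib); the URI below is a hypothetical example.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()
editorial = URIRef("http://example.org/articles/all-that-glisters")

# Each add() call asserts one (subject, predicate, object) triple,
# here using the standard Dublin Core vocabulary for metadata.
g.add((editorial, DC.title,
       Literal("All That Glisters Is Not Gold: Web 2.0 and the Librarian")))
g.add((editorial, DC.creator, Literal("Anderson")))
g.add((editorial, DC.date, Literal("2007-12")))

# Serialise the graph as Turtle, a standard RDF syntax; any RDF-aware
# application elsewhere on the Web could parse and reuse this as-is.
print(g.serialize(format="turtle"))
```

Because the vocabulary is shared and the syntax is standard, software rather than crowds can do the connecting: this is the sense in which the Semantic Web removes human effort from the equation.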
While Anderson’s comments on the place of Web 2.0 in libraries were intended to generate a fruitful discussion on the merits of novel technology in the information services industry, the conversation about Web 2.0 and libraries ended a long time ago. Predictions about the way that Web 2.0 would change our lives, for better and worse, have largely been borne out. Where Web 2.0 relied on users to generate the data, create the connections, and rank the significance of that data, the Semantic Web computes this for us. What does that imply for information professionals working in libraries?
References
Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities. Scientific American, May 2001.
Bikakis, N., Tsinaraki, C., Gioldasis, N., Stavrakantonakis, I., & Christodoulakis, S. (2012). The XML and Semantic Web worlds: Technologies, interoperability and integration. A survey of the state of the art. In I. E. Anagnostopoulos, M. Bieliková, P. Mylonas, & N. Tsapatsoulis (Eds.), Semantic hyper/multimedia adaptation: Schemes and applications. New York: Springer.
Burke, J. J. (2009). Neal-Schuman library technology companion: A basic guide for library staff (3rd ed.). New York: Neal-Schuman Publishers.
DiNucci, D. (1999). Media: Fragmented future. Print, 53(4), 32, 221-222.
Hickson, I. (2011). HTML is the new HTML5. Retrieved from http://blog.whatwg.org/html-is-the-new-html5
O'Reilly, T. (2005). What is Web 2.0: Design patterns and business models for the next generation of software. O'Reilly. Retrieved from http://oreilly.com/lpt/a/6228