Discussion:
Frank Chimero: Refragmentation
nettime_utilities
2014-10-01 01:32:23 UTC
Permalink
< http://frankchimero.com/blog/refragmentation/ >

Refragmentation

Aug 28, 2014

Sometimes it's worth reading the comments. For instance, take this
comment[1] on the web's consolidation from Mike Caulfield.[2]

You look in 1993 and see Guido Van Rossum and Berners-Lee arguing
that instead of an IMG tag there should be a general "include", that
would allow you to pull together pieces of multiple sites together
from multiple MIME types. Twenty years later, there's still no
include.
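
For illustration only, a general-purpose transclusion element along the lines Van Rossum and Berners-Lee argued for might have looked like this; the <include> element, its attributes, and the URLs are hypothetical, since nothing of the kind was ever standardized:

```html
<!-- Hypothetical markup, never part of any HTML specification. -->
<!-- Where <img> can embed only images, a generic include could pull -->
<!-- in content of any MIME type from any server: -->
<include src="https://example.org/essays/refragmentation.html"
         type="text/html"></include>
<include src="https://example.org/data/listings.csv"
         type="text/csv"></include>
```

Transclusion of this kind exists today only as server-side includes or JavaScript fetches, i.e. on servers rather than in the markup spec.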

You see Shirky and Weinberger talking in 2003 about how the web was
designed to connect pages, not people, and the groups forming were
essentially hacks on top of that. But that power to connect people
doesn't get built into the protocols, or the browser, or HTML. It
gets built on servers.

It's almost like the web's inability to connect people, places, and
things was the ultimate carve-out for corporations. [I]f the
connections have to live on a single server (or server cluster) then
the company who controls that server wins.

The lack of an <include> tag led to Pinterest. No method to connect
people created Facebook. RSS's confusing interfaces contributed to
Twitter's success. Any gargantuan web company's core value is a
response to limitations of the protocol (connection), markup spec
(description), or browsers (interface). Without proper connective
tissue, consolidation becomes necessary to address these unmet needs.
That, of course, leads to too much power in too few places. The door
opens to potential exploitation, invasive surveillance, and a fragility
that undermines the entire ethos of the internet.

[Edit: APIs were at first a patchwork to resolve the shortcomings of
protocols. They let data flow from place to place, but ultimately APIs
are an allowed opening to a private dataset -- a privatized protocol. The
halcyon days of Web 2.0 were a short-lived window of benevolence that
eventually closed.]

If a fifth of the planet signs on to Facebook each month, why shouldn't
a neutral version of its functionality be built into the protocol,
markup spec, and browsers that drive the distributed web? (An argument
could be made that Facebook is already trying to do the inverse -- turning
the internet into Facebook -- with its internet.org campaign.) We
should view the size and success of these companies as clear calls to
recreate their products' core functionality and weave it into the
fabric of the web.

What if tech companies were field research for the protocol? This may
be a dream, but it's our only hope to refragment the web.

Frank Chimero is a professional designer, amateur human, and
intelligent idiot.

(c) 2014 Frank Chimero, or whoever made the things I didn't make.


[1] http://cogdogblog.com/2014/08/27/dont-be-a-platform-pawn/comment-page-1/#comment-680854

[2] http://hapgood.us/about/
Florian Cramer
2014-10-01 13:54:51 UTC
Permalink
Post by nettime_utilities
The lack of an <include> tag led to Pinterest. No method to connect
people created Facebook. RSS's confusing interfaces contributed to
Twitter's success. Any gargantuan web company's core value is a
response to limitations of the protocol (connection), markup spec
(description), or browsers (interface). Without proper connective
tissue, consolidation becomes necessary to address these unmet needs.
That, of course, leads to too much power in too few places. The door
opens to potential exploitation, invasive surveillance, and a fragility
that undermines the entire ethos of the internet.

The argument that insufficient protocol semantics lead to "walled
gardens" has been made for years by computer scientist and W3C member
Steven Pemberton.

However, it's not realistic to think that richer markup or protocols would
solve these problems, because they don't solve the "issue" of identification
and trust between users. Facebook, Ebay, etc. are not only de facto
third-layer protocol extensions (if one considers TCP/IP the first and http
the second layer of the web); they are also identity and trust brokers.
At a conference in Amsterdam, Pemberton gave the example of semantic web
tags that could obsolete Ebay, simply because they would give users
sufficiently precise and searchable tags for marking up their own private
sales offer on their personal homepage. But nothing would ensure that the
tags wouldn't be used as spam, and that the seller identities weren't
fraudulent.
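
Pemberton's Ebay example can be sketched with schema.org microdata (a real, existing vocabulary; the product, price, and currency below are invented placeholders). A private sales offer on a personal homepage becomes machine-readable without any central marketplace:

```html
<!-- A sale offer on a personal homepage, marked up with schema.org
     microdata so that any crawler could find and aggregate it -->
<div itemscope itemtype="https://schema.org/Offer">
  <span itemprop="itemOffered" itemscope itemtype="https://schema.org/Product">
    <span itemprop="name">Used touring bicycle</span>
  </span>
  <span itemprop="price" content="250.00">250</span>
  <span itemprop="priceCurrency" content="EUR">euros</span>
  <link itemprop="availability" href="https://schema.org/InStock">
</div>
```

As Cramer notes, nothing in such markup establishes that the offer is genuine or the seller trustworthy; that identity- and trust-brokering function is precisely what the markup cannot replace.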

RSS is a case in point. The interface is neither confusing nor really hard
to use for people who just want to follow and read feeds. But the reason
for Twitter's success is the social filtering which RSS doesn't offer. And
the social filtering function, in turn, relies on Twitter's function as an
identity broker. Conversely, the lack of an <include> tag did _not_ lead to
Pinterest because Pinterest is being used by people who cannot, or do not
want to, write HTML. It is the old problem in computer and information
science that solutions are being thought up from their back-end, not their
front-end, and often, usable front-ends cannot be developed because they
weren't considered in the back-end's design. (Case in point: XML is,
theoretically, the working and tested solution for any kind of document
processing; but not so in practice because there is, 17 years after its
invention, not a single user-friendly universal text program for editing
XML.)
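
For comparison, the RSS that Cramer defends is itself a small format: a complete, valid RSS 2.0 feed fits in a dozen lines (the titles and URLs below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Weblog</title>
    <link>https://example.org/</link>
    <description>A placeholder feed for illustration</description>
    <item>
      <title>Refragmentation</title>
      <link>https://example.org/blog/refragmentation/</link>
      <pubDate>Thu, 28 Aug 2014 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```

What the format lacks is not usability but a social layer: there is no field here for who follows whom or who vouches for what -- the functions Twitter centralized.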

-F