Viewsdesk - chasing the global public sphere

Technologies · 16:45, February 27, 2011

German Member of Parliament Malte Spitz (blog) sued his mobile phone operator, T-Mobile, to obtain access to the data (Google translation) they had on him. After a lengthy legal process he won, and received about half of the data they had in their possession.

Relating to "data" in the abstract can be difficult, because without context it is hard to grasp what the data actually reveals. Spitz, together with the newspaper Zeit Online, decided to make an attempt at providing that context.

The result is an absolutely fantastic page where you can retrace Spitz's whereabouts. By combining geospatial data with other publicly accessible information, such as Twitter posts, it is an effective display of the power and insight the Data Retention Directive gives to whoever can access the data. (The page is in German, but the Google translation should be enough. Press the Play button, and adjust the speed using the slider labeled Geschwindigkeit.)
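To make the mechanics concrete, here is a minimal Python sketch of the kind of merge such a page relies on: pairing retained location records with public posts by timestamp. The records and field layout below are made up for illustration – this is not Zeit Online's actual code.

    from datetime import datetime

    # Hypothetical location records derived from the retained data: (timestamp, latitude, longitude)
    locations = [
        (datetime(2009, 10, 12, 8, 15), 52.5200, 13.4050),
        (datetime(2009, 10, 12, 12, 40), 52.5075, 13.4261),
    ]

    # Hypothetical public posts: (timestamp, text)
    tweets = [
        (datetime(2009, 10, 12, 12, 35), "On my way to the committee meeting."),
    ]

    def nearest_location(timestamp, records):
        """Return the retained location record closest in time to a public post."""
        return min(records, key=lambda rec: abs((rec[0] - timestamp).total_seconds()))

    for posted_at, text in tweets:
        _, lat, lon = nearest_location(posted_at, locations)
        print(f"{posted_at:%Y-%m-%d %H:%M} near ({lat}, {lon}): {text}")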

What They Know
The phone companies store what are called Call Data Records, or CDRs for short, containing metadata on the services they provide, such as the service type, the telephone numbers involved, the duration of the call and which cell towers relayed it. CDRs were originally designed for internal use, primarily accounting, but are now stored for a minimum of six months to provide records for law enforcement.
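As a rough sketch of what a single record might carry, based on the fields described above (the field names and values are illustrative, not any operator's actual schema):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CallDataRecord:
        service_type: str      # e.g. "voice", "sms" or "data"
        calling_number: str
        called_number: str
        start_time: datetime
        duration_seconds: int
        cell_tower_id: str     # which tower relayed the call, i.e. a rough location

    record = CallDataRecord(
        service_type="voice",
        calling_number="+49170XXXXXXX",
        called_number="+4930XXXXXXX",
        start_time=datetime(2009, 10, 12, 8, 15),
        duration_seconds=240,
        cell_tower_id="262-01-XXXX",
    )
    print(record)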

Without going into too much detail on the Data Retention Directive (2006/24/EC), it is worth noting that its scope goes well beyond what Spitz received from T-Mobile: data on all text messages is also retained, as are the URLs of visited web pages and records of sent emails.

It would be interesting to replicate Herr Spitz's move in other countries to see the reactions from the authorities and the phone companies. The amount of location data you would get on yourself would certainly put the likes of Gowalla or Foursquare out of business in a heartbeat!

Whom Do We Trust
What really worries me is how much we trust our authorities. Even though Europe is quite stable politically, we seem to have forgotten many of the lessons of our past and the importance of not centralizing too much information about people in the hands of a few. Remember that political stability, historically speaking, is an anomaly. Even in Europe today, certain developments should remind us that we should never take a democratic consensus for granted. There is also the lesson we learn from the rest of the world. Imagine how tools like this could be used in Egypt, Tunisia or Libya.

One could even argue that passing western laws like this enables misuse in other countries, simply because we create the tools that enable it. Western companies develop most of the software and hardware that power the global communications infrastructure, and everything developed in the western world eventually finds its way to rogue states – if not via official exports, then through the black market. Creating a data retention solution like Narus Insight or the aptly named HP Dragon is well beyond the reach of most countries if they had to do it themselves.

From a global perspective, it puts us on a very slippery slope. Maybe the West needs to accept responsibility for the misuse of the tools we unleash upon the world?

 
Technologies · 19:18, October 24, 2010

As Wikileaks first released 70,000 documents from the war in Afghanistan – and then a staggering 400,000 incident reports from Operation Iraqi Freedom – it has become clear that a great source today means something quite different from waiting for Deep Throat to call. The question is whether old media is up to the challenge.

The Swedish national television network, SVT, was one of the few news organizations invited by Wikileaks to get advance access to the data. Apparently, SVT employs a dedicated database editor! This is not news for larger, international outfits like The Guardian, but to my knowledge it is a first in Sweden. It is a move that must be applauded, and hopefully it will inspire others to follow.

Understanding data is crucial for journalists and needs to be a prioritized part of the job – and it is really nothing new. Scientists, even in social and political science, have always looked at the world through datasets. For them, data mining is a methodology for finding patterns, discrepancies and that particularly interesting angle that is worth a closer look.
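As a toy illustration of that kind of pattern-hunting – with made-up numbers, not the actual leaked data – flagging the days in a dataset that stand out from the typical level is a one-screen job in Python:

    from statistics import median

    # Made-up daily incident counts, standing in for a real dataset
    daily_counts = {
        "2007-01-01": 112,
        "2007-01-02": 98,
        "2007-01-03": 105,
        "2007-01-04": 310,
        "2007-01-05": 101,
    }

    typical = median(daily_counts.values())

    # Flag any day with more than twice the typical number of incidents
    for day, count in daily_counts.items():
        if count > 2 * typical:
            print(f"{day}: {count} incidents (typical day: {typical:.0f}) - worth a closer look")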

The data is the story, stupid!
In the Wikileaks case it is, of course, quite obvious that a journalist who prints nearly half a million pages and, after a quick visit to the coffee machine, starts reading from page one is going to fail – and fail miserably. But the material does not have to be this massive for a data-based process to be worthwhile.

In a transparent world, all data – in its raw, true, unedited state – would be made available to the public. The good thing is that the world is potentially filled with people qualified to do such analyses, and where journalists fail or cannot find that needle in the haystack, maybe someone else will. Because the data is there, for everyone!

The data is the insight, stupid!
There is a tendency to treat whatever knowledge or intelligence is built up within an organization as a closely guarded secret. The result is a world full of silos of insight, with no way of connecting the dots.

As I see it, there are two ways of looking at this. Firstly, from the point of view of transparency. This is what scares people, because what if someone finds something to complain about? There is a tendency to treat data like business secrets and to reserve the exclusive right to interpret it. Transparency naturally matters most for anything governmental, but in that sector we can at least legislate for openness. Doing so for private enterprise is much more difficult.

Secondly – and this is what I believe people are missing – data mining and auditing by a third party can actually help organizations' internal processes. It will make them smarter; it will help them get better at whatever it is they are doing. It will help companies churn out better products and better services.

There are, for example, a lot of NGOs that should open up their projects and make their statistics, knowledge and insights available for general use by external parties. Not because they have to, by law, but because they would gain from it.

Charity organizations are also dependent on donations, and it is reasonable to think that donors have the right to know exactly how their money is spent. Understandably it is scary, but the question is whether they can afford not to in the long run. What if donors required it before making a donation? It is the route the UN is taking, for example.

The data is the giant, stupid!
Once data is set free, it can be combined and cross-referenced with other data. It can be plotted in a geographical information system or provide answers to questions its creators never imagined. Suddenly, a seemingly dull dataset gets new life.

Hans Rosling, professor of public health and a fantastic public speaker at that, combines UN data, fights for more open data, and gains valuable insights that would have been impossible to attain in any other way.

Data owners must realize that they are the giants on whose shoulders people want to be standing.

 
Digital Culture and Technologies · 08:22, July 17, 2006

China is attempting to gain momentum on the western-dominated Internet by encouraging a rapid transition to IPv6. It is not only a matter of prestige: the IPv4 addresses – the phone numbers of the internet – are running out, and they are not exactly evenly distributed among the world's nations:

"When 26 Chinese share one Internet protocol address, while each American possesses six IP addresses – this is the quandary facing China in the IPv4 era", Zhao Houlin, director of the International Telecommunication Union, said in 2005.

Full article here.

 
Technologies · 16:40, May 31, 2006

The Swedish organization Gapminder aims to visualize world development by making statistics more understandable. I must say they are very, very good at what they do.

First, watch the video presentation – and then play with the software here.

Via mymarkup.net

 
Technologies · 18:29, March 13, 2006

Andy Carvin has posted a video where Martin Varsavsky presents the idea and rationale behind FON – an initiative to provide open wireless internet for the masses. I have been trying to get my head around this FON idea for quite some time now, but I still don't get it.

Considering the overwhelmingly positive reception the project has gotten, especially amongst high-profile bloggers and venture capitalists, I feel like I must be missing something. So, I’ve made an attempt to summarize my thoughts, and I urge you to explain to me why you think I’m wrong.

Complex
Without attempting to explain the whole model, since most readers probably already know about it, it is important to understand that FON uses three user levels, conveniently named after computer industry caricatures: you can be a Bill, a Linus or an Alien. This describes your sharing status, and also dictates how you can access the internet yourself.

A Linus is any user who shares his/her WiFi in exchange for free access throughout the Community wherever there is coverage. A Bill is a user who, instead of roaming for free, prefers to receive 50% of the fees that FON charges to Aliens. And Aliens are those users who do not share their WiFi access and therefore must pay FON a modest fee every time they connect through a Fonero access point.

Seems fair? Well, yeah, maybe. Oh, I almost forgot: before you can join and share anything, you have to buy a specific router, replace the firmware and void any warranties. It's way too difficult for most, and too expensive for some.

Not Free
FON is not free. FON is actually pretty darn expensive. For an Alien, a user who does not share their connection, the fee is €5 for 24 hours of access, or €40 per month. That, my friends, is a pretty hefty lump of money for a service that is getting cheaper by the minute. Bandwidth, in the western world, is dropping closer to zero cost every day. By comparison, I pay about €35/month for my 100/10 Mbps connection. "Modest fee"? Think again.
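Put the numbers side by side and the point makes itself. A quick back-of-the-envelope calculation, using only the figures quoted above:

    # Figures from the post, in euros
    alien_day_pass = 5        # 24 hours of FON access for an Alien
    alien_monthly = 40        # monthly Alien fee
    flat_rate = 35            # what I pay for a 100/10 Mbps connection

    day_passes_per_flat_rate = flat_rate / alien_day_pass
    print(f"{day_passes_per_flat_rate:.0f} day passes cost as much as a whole month of 100/10 Mbps")
    print(f"The monthly Alien fee is {alien_monthly - flat_rate} euro more than my flat rate")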

A Movement?
FON is trying to establish itself as a movement, complete with pictures of marching people with flags and banners on its website. This is just bull. FON is not a movement. It’s a corporation whose game is making money. Nothing wrong with that, but it ain’t no movement. Say it with me: FON is not a movement.

Truly free internet
Believe me, I'm not against open WiFi access. On the contrary, I'm pretty much having wet dreams about it. But if FON is not more than I think it is, then I do not believe it is the way to go. FON's thinking behind the user levels is that people who share their internet connection need some incentive to do so. Otherwise, they say, only a few enthusiasts will go through the trouble.

In a way, I think they're right – only a few will, but it has been proven again and again that a few enthusiasts are all it takes. Consider the open-source projects. There are still people in my parents' generation who claim that voluntary programming efforts can never produce something as advanced as an operating system. People, they say, have no incentive. I show them Linux, their gaze becomes distant, and they repeat like zombies that it can't be done. I boot Linux up, and they repeat like zombies that it can't be done.

Why can’t collaborative efforts bring free internet? Because old-school economics say so?

Just looking at myself: I have a 100/10 Mbps fiber connection to the internet, and in all honesty, that is more than I can consume – even though I'm sure many would describe me as an extremely demanding internet user. I download a fair deal of stuff, host this website and use VoIP for all my fixed-line telephony. Today, I spend much of my spare bandwidth routing Tor traffic – another collaborative effort that actually works.

The incentive for me to share my internet connection with my whole street, through a deliberately open setup of a plain WiFi router, is very simple: I hope somebody else will do the same when I need it. Just like I hope that someone is sharing the file I need on BitTorrent, and in return I share some with them. It does not take a lot of people to cover a great part of a city center. I do it because I'm a Good Guy, and I like that.

I would not, however, share my internet connection with everyone (be a Linus) and let FON receive €40/month for my kindness.

The Alternative
At the other extreme of this debate are the people who want the city, state or government to provide free wireless internet access. Varsavsky does not like this at all. In the video, he is not very subtle when insinuating that this is a communist way of doing things. Then again – why should he like it? It would ruin his business idea of having users build the infrastructure for FON, pay for the bandwidth and share it – while FON receives at least 50% of all the income.

Something just does not add up, in my book. It’s like a pyramid scheme.

I'm not a big fan of big governmental projects either, but history shows that when it comes to huge infrastructure projects, private businesses are pretty darn worthless. Roads, railways – and the internet itself – would not have existed without state funding.

Instead, I propose a model with state-owned infrastructure and privately owned networks: operators compete on price and service over rented base stations, repeaters and access points. Well, that's another post – but still.

What I do not understand is how FON thinks it will ever reach critical mass by hoping that people will finance, build and operate the network for them, while the profits go to the company. Internet access should be – and will be – as ubiquitous as electricity, and no bar owner would ever ask you to pay extra for the privilege of having a beer under his light bulb. The system is too complex and too expensive, and the extra benefits over simply promoting free networks are non-existent.

 
Technologies · 12:45, November 23, 2005

FON is a Spanish initiative, a project with the "goal to create a unified WiFi Network that will let members of the community share not just bandwidth but also experiences and values".

Why are they so focused on voice telephony, when it's just one of the available services if they manage to set up, operate and maintain a grassroots access network? In the western world, charges for telephony are hardly a problem or a limitation for anyone: calls – even international ones – are very inexpensive in relation to income levels. They claim it's a revolution, but really, they're just offering something for free that is already cheap. Or am I missing something?

 
Technologies · 05:08, May 11, 2005

Podcasting, the practice of distributing recorded audio over the internet, almost like home-made radio, is quickly gaining popularity. It is being hyped as the Next Big Thing by bloggers and grassroots journalists all over the (industrialized) world. However, yours truly is not impressed.
(more…)

 
Technologies · 13:18, May 3, 2005

Today it was announced that Bloggforum Stockholm 2.0 will be held on Saturday the 28th of May 2005. Bloggforum 2.0 is a follow-up to the first conference, held last November. The organizers are – just like last time – two of Sweden's arch-bloggers: Erik Stattin and Stefan Geens. This time around there will be more seminars, more people and more fun.

I have been invited to participate as one of the panellists in the Media and Blogs seminar, moderated by Jonas Söderström.

Registration is free. Hope to see y'all there!

 
Technologies · 04:03, January 30, 2005

In November 2004 there was a blog forum (Sweden’s first and last?) arranged by IDG. I was there, along with a good part of Sweden’s bloggers, I’m sure.

I had just bought an MP3-player at the time, and recorded the whole thing. Well, not quite actually – I came ten minutes late, so there are a few minutes missing at the start of the conference.

I realize it’s maybe a bit late, but since I have not yet seen the audio posted somewhere else, and I stumbled over the file on my computer just now – I decided to post it. Maybe it’ll make somebody happy.

(The sound kind of sucks, but what to expect from a sub-SEK1000 player with a built-in microphone?)

And yes, I’m sorry all of you English speaking readers – the whole show is in Swedish.

Bloggforum Stockholm 2004 (MP3-audio, 80:38 min, 37 Mb)
Bloggforum Stockholm 2004 (Streaming version of the same file)

 
Internet Governance and Technologies · 02:15, January 4, 2005

In a very interesting article, the Swedish newspaper Dagens Nyheter (DN) reports something often forgotten by national crisis management centers: help organizations tend to focus exclusively on the grieving process of adults, while neglecting the needs of the younger population.

The older generation finds information through traditional channels: television, radio and newspapers. Kids, however, receive a lot of information through the Internet, and they retreat to their regular hangouts even during tough times. And, considering how hard Sweden has been struck by the tsunami disaster, these are definitely tough times.

One place they go is the Swedish online community Lunarstorm, with an incredible reach of 80% of Swedes between the ages of 18 and 25. (The figure is probably about the same, if not higher, for kids between 13 and 17.)

Lunarstorm, realizing the crucial need for help, has brought in around 20 priests and other volunteer personnel who are working around the clock to meet young people on their own turf: online.

The issues range from comforting someone who has lost a friend to more concrete and practical things, like starting charities and how to volunteer to help.

Save the Children, Unicef and the Red Cross have understood that this is an important channel, says Johan Forsberg, Lunarstorm's information officer. He only wishes that the Swedish government, particularly prime minister Göran Persson and foreign minister Laila Freivalds, would wake up and realize it too. They're needed here, he says.

When you think about it, this is not a very unexpected development. The Internet has become integrated into every other aspect of people's lives, so why would this be different? And, as usual, it's the kids who are forgotten by the authorities: they're busy speaking adultish about adult things with other adults.

I believe that using the Internet for these kinds of things can be very helpful on a personal level. Sharing thoughts, fears and sorrow with others who feel the same way can be liberating and helpful. However, to avoid getting stuck in vicious thoughts, it's also important that there are adults, preferably professionals, present in such discussions.

It seems to me that Lunarstorm is doing a great job with whatever limited means it can muster. So, Göran Persson – what are you waiting for? Wake up, smell the coffee and go answer some real questions from your future constituency.

 
Digital Culture and Technologies · 12:47, December 23, 2004

I’ve previously discussed, with Mark for example, that internet connectivity is so important that it should be treated as a public utility, rather than just a matter for private interests. Apparently, the businesses disagree with me. (Why am I not surprised?)

Besides the recent loss of an important battle in Philadelphia, where Verizon Communications made sure the state would not subsidize internet access, the model legislation has now been turned into a fill-in-the-blanks DIY kit, complete with dotted lines for the state name.

The legislation, dreamed up by the American Legislative Exchange Council (ALEC), is called the Municipal Telecommunications Private Industry Safeguards Act (Word file).

Read more, and be sure to check out the list of representatives on ALEC's Private Enterprise Board – it might give you a hint as to where this is coming from.

 
Internet Governance and Technologies · 03:09, December 7, 2004

In several articles I have briefly commented on the fact that there is a problem – from a democratic point of view – with how Internet access is sold to consumers today. The technology used does not encourage what I believe is the fundamental uniqueness of computer-mediated distribution. This article is an attempt to elaborate on the subject – hopefully so that even non-geeks can grasp at least some of it.

Is There a Problem With Technology?
A truism of network building is that the most difficult part of offering high-speed internet is not building the backbone, but transporting the signals the last two kilometers from the provider into the homes of consumers.

Today, broadband Internet connectivity is sold to consumers in a few basic flavors: ADSL, cable and fiber. Fiber is rare, and globally the real competition is between ADSL and cable – which method is most common depends on which country you look at. In the US, for example, cable access is more common than ADSL, while in Sweden it's the other way around.

ADSL, as the most common variant of the xDSL technologies is called, is in theory a superb way of hooking people up to the Internet. It allows telcos to connect their telephone exchanges to the backbone and then use the existing copper wire to reach into people's homes. This means no modifications to the building, nor to each and every apartment – it's simply plug-and-play from the consumer's point of view. (Today, ADSL also comes in two modified, and thus faster, versions: ADSL2 and ADSL2+. These are treated synonymously with ADSL in this article.)

Cable is also cheap because the infrastructure is (mostly) already there. The medium used is the coax cables that ordinarily carry television broadcasts.

Fiber, on the other hand, is extremely fast and future-proof, but seldom a realistic possibility. Each apartment building needs to be hooked up directly with dedicated fiber cables, routers have to be installed in the basement, and ethernet cables have to be run to every apartment. In effect, very few households have fiber access compared to ADSL and cable.

Now to the design flaw: ADSL and cable are asymmetrical, a fancy word for saying that the speed at which you can send data and the speed at which you can receive data are not the same. I, for example, have a connection called 8/1 – in other words, I can receive at 8 megabits per second (Mbps) and transmit at 1 Mbps.
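To make the difference tangible, here is a tiny illustration of what that ratio means for the same file travelling in each direction (the file size is just an example):

    # The same file, sent and received over my 8/1 connection
    file_size_megabits = 800            # roughly a 100 MB file
    down_mbps, up_mbps = 8, 1

    download_minutes = file_size_megabits / down_mbps / 60
    upload_minutes = file_size_megabits / up_mbps / 60

    print(f"Receiving it: {download_minutes:.1f} min, sending it: {upload_minutes:.1f} min")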

In Sweden, the country I am for obvious reasons most acquainted with, the ISPs are frantically competing to offer high speeds at very low cost using ADSL technology. Last I checked, 24 Mbps cost SEK 399/month (about US$60/month) from Bredbandsbolaget. So, what's bad about that? Well, nothing, if you don't consider that the connection they're talking about is a 24/1 Mbps service – offering only 1 Mbps of upload.

Why Is Up-Speed Important?
What separates the Internet from other media is that the audiences actively participate as senders – they are not just passive recipients. Radio, television and print media have all been ways for consumers to receive, but there was no way to transmit or respond to the messages. Networking changed all that, and I dare say this alone is the reason why the Internet is so extremely popular. All the "killer apps" we've seen exist because of a feedback channel: e-mail, Internet banking – even simple web browsing is interactive. But what also changed was that everyone could become a sender in a more revolutionary way – setting up homepages and transmitting material in a way that resembled newspapers more than anything else. (Remember the homepage craze of 1995? Everyone had one, though nobody admits it today.)

Without the decentralized structure of the Internet – the fact that anybody can publish content anywhere – there would be no Internet. Not by definition, not in practice. The closed computer networks that attracted consumers 15 years ago (most notably America Online, CompuServe and Prodigy) died or were forced to become open systems within about five seconds of the Internet gaining momentum. There was simply no way a single company could envision or create the diversity of services offered on the Internet. (Some wise guy will comment that it had to do with the availability of pornography – but even so it does not contradict my theory. It still had to do with people being able to broadcast material, whether it's pictures of last year's Christmas celebration with the family or pictures of people with absolutely no costumes on.)

So, I argue that the reason for the Internet's success is that it allowed people to express themselves, and for that an uplink channel is needed. A concrete example of what I mean is this very blog you are reading now. It is hosted on a crappy second-hand computer I bought for SEK 1000 (about US$140), standing in a closet in my apartment. Using the Internet and its inherent channel for transmitting information, I have transformed a closet into a broadcasting device with a (potentially) global audience.

My fear, however, is that the possibility of transmitting is being effectively removed by current standards, endorsed by the owners of the networks. Why, you ask? Because of a flawed technology, coupled with the ISPs' greed to make money on content services.

Is There a Problem With the Companies?
For a capitalist, the ultimate goal is consumer addiction. This is a wet dream for them, because nothing is as good as a customer who returns again and again – without money having to be spent on persuading him to do so. Short of getting people addicted of their own free will, they can at least take every precaution to ensure people don't have a choice.

Thus, companies strive – in one way or another – towards creating virtual monopolies. They want an exclusive channel into your brain without the hassle of competition. For this reason, many media companies have merged in recent decades to create huge conglomerates that, for example, control the whole chain of production, distribution and advertising of a movie. It makes sense to them. Very much sense, actually. Not only do they produce the movie, but they can also make absolutely sure you see it, whether you want to or not.

So what on earth has this got to do with broadband Internet? Well, Time Warner owns Road Runner, one of the biggest providers in the US, for example. All Internet service providers, trying to maximize income, see huge possibilities in controlling content – not setting it free.

The asymmetric model being built into the technology fits the companies perfectly, since they want passive consumers who are in need of content. Even though there are alternative technologies, such as VDSL, capable of offering symmetric bandwidth, ISPs have often chosen not to pursue them, and where they existed they have been discontinued – at least in Sweden.

A recent Swedish example illustrates my point. As the bandwidth available to consumers increases, so do the opportunities to send services their way. IP telephony (or VoIP) has been around for a while, and for consumers it is a fantastic way to lower the cost of national and international telephony: since they already have the bandwidth, they can just sign up with any service and make really cheap phone calls. Well, for customers of Bredbandsbolaget it's not quite that easy. Since Bredbandsbolaget offers its own VoIP service and does not want competition, it has simply blocked the use of other services, thus limiting which bits are allowed to pass freely through the line the consumer is paying for.

The same pattern is repeated with broadband-broadcast television, launched by Bredbandsbolaget in cooperation with Viasat – Sweden's largest satellite-TV operator. At the cost of 5 Mbps of your bandwidth and a mere SEK 299/month, you get 27 TV channels blasted right into your TV. (What they fail to mention is that it's just as expensive as their non-broadband package, with which you get to keep your 5 Mbps – so I can't really see the point just yet.)

Instead of the infrastructure being separated from the content, the two are increasingly intertwined. The promise of broadband distribution is that you should be able to buy television, in the form of zeroes and ones, from just about anybody – not just have an old service repackaged digitally and sold to you again. What this leaves us with is the kind of monopolies the cable companies created in the 80s, effectively controlling what was broadcast to the homes: they owned the cables and operated the service. Where is the competition, and where did my free choice go?

The Future of the Network
The problem with this approach is that it is counter-productive for the prosperity of the Internet as a growing community. Again, consider the lessons the early 90s taught us: closed networks perish because they are not really networks of independent nodes but controlled and centralized environments. I repeat what I said above: without the decentralized structure of the Internet – the fact that anybody can publish content anywhere – there would be no Internet. Not by definition, not in practice.

I believe that in order to satisfy the increasing demand for bandwidth, as services become more complex and more mission-critical, P2P (peer-to-peer) technology will become a vital part of the distribution chain. And I'm not talking about Napster or eDonkey, but about a transparent system that distributes content by letting the computers organize themselves to minimize strain on the network and enable lightning-fast transfers. The business models of copyright holders and content providers will have to adjust to such a system in order for the companies to survive (…but that's another article).

Already today there are P2P-based systems that allow people to download media content released under less restrictive licenses. Torrentocracy, which I mentioned in an article just the other day, is one of them.

Even The Economist, usually a pretty business-conservative paper, admits (behind a subscription wall) that utilizing P2P technology is a natural development that would enable faster, better and more fault-tolerant services for consumers. Creating a hub-and-spoke network, as is largely done today, is not ideal, they say.

The problem is that P2P only works if the nodes in the network are symmetrically hooked up. If users consume (receive) faster than they produce (send), the scheme will fail, since P2P is mathematically a zero-sum game: by definition, every bit you download has to be uploaded by someone else.
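A quick sanity check of that zero-sum argument, using the 24/1 service discussed earlier and a made-up swarm size:

    # In a swarm, total download can never exceed total upload (plus whatever the
    # original seed contributes), so a crowd of 24/1 subscribers caps itself at
    # its own aggregate upload speed.
    peers = 100
    down_mbps, up_mbps = 24, 1

    aggregate_upload = peers * up_mbps           # all the capacity the swarm can hand out
    sustainable_per_peer = aggregate_upload / peers

    print(f"Advertised download per peer: {down_mbps} Mbps")
    print(f"Sustainable download per peer: {sustainable_per_peer:.0f} Mbps")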

Effects for Democracy
To speak bluntly: most companies that sell Internet capacity to you today do not want you to distribute content – they would much rather sell it to you themselves. They want you to be passive, because that gives them the opportunity to sell you things without changing their tried-and-tested business model. Content owners need to thoroughly revise their business models for a digital age, where scarcity no longer determines the value of commodities. Bits are not atoms.

I fear that continuing on this path of turning the Internet into yet another place where you passively consume (televised) messages will hurt the Internet's inherently democratic effect of allowing its participants to have a voice. P2P could enable ordinary citizens to distribute their own content, much like bloggers all over the world are doing with ideas right now. Participation is the cornerstone of a healthy democracy, and the mainstream media have failed to be inclusive enough – by any standard.

The question, of course, is whether governments should regulate Internet connectivity and treat it like a public commodity. Critics of such an idea usually claim that the Internet's success is largely the story of entrepreneurs, and that it would therefore be wrong to regulate. However, one should also keep in mind that the Internet is the result of millions of dollars spent on research funded by the US government. So maybe we should cut governments some slack? Besides, nobody in their right mind is talking about expropriating the Internet, just some guidelines for what is acceptable behavior – like what the FCC in the US (and PTS in Sweden) does for telecommunications.

Somebody said that we often overestimate the progress that will be made in two years, but underestimate what will happen in ten. To me, that is both good and bad news in this scenario. For one, the technologies we see today are not the ones that will still be here in ten years. Bandwidth demand will continue to increase, and I'd expect it to level out at about 1 Gbps per household (I'm not kidding). However, one should still be mindful of the things I have explored in this article. Structures and patterns of behavior are determined at an early stage; even if asymmetric technologies get replaced, by then we might have gotten used to just downloading the last 40 seasons of Friends from our old buddy the cable company.

(Thanks to Mark for a great discussion on the matter.)

 
Digital Culture and Technologies · 14:27, December 1, 2004

The guy behind Torrentocracy, Gary Lerhaupt (blog), had an idea: what if you combined RSS, P2P and your television set? It would, he says, be a system that lets you, for the first time, enjoy the true power of television.

While space in traditional media is (by comparison) scarce and controlled by a few, the promise of the new technology is that information and ideas will spread freely and that citizens will gain access to a plethora of opinions and voices. Besides being a technical platform, combining some hardware and Linux software, the project is also negotiating deals with content owners to release material to the public sphere under more permissive licences, such as Creative Commons.

The system lets you select, download and watch material with your remote control. Of course you need a geek in the house to set it up, but it sounds simple enough. To quote Gary himself: “If you ever wondered how and when your computer, the internet and your television would merge into one seemless device with access to anything and everything, then at this very moment […] Also Sprach Zarathustra should be resounding through your head.”
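The core plumbing is simple enough to sketch. Below is a minimal, hypothetical illustration of the RSS-to-torrent idea – not Torrentocracy's actual code, and the feed URL is a placeholder – that polls a feed, picks out the torrent enclosures, and would hand them to a BitTorrent client:

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "http://example.org/videos.rss"   # placeholder feed address

    def torrent_links(feed_url):
        """Yield the URL of every .torrent enclosure found in an RSS feed."""
        with urllib.request.urlopen(feed_url) as response:
            tree = ET.parse(response)
        for item in tree.iter("item"):
            enclosure = item.find("enclosure")
            if enclosure is not None and enclosure.get("url", "").endswith(".torrent"):
                yield enclosure.get("url")

    for url in torrent_links(FEED_URL):
        print("Would queue for download:", url)   # a real set-top box would start the transfer here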

Even though the suggested system available today is clearly too complicated for the average user, I believe Gary has a point. The P2P distribution technologies are a great way of getting content to the masses. (Provided our upload speed is not, in effect, taken away from us. More on that matter some other time. Watch this space.) Some material has already been released, most notably sequences from the documentary Outfoxed. Torrentocracy also carried the presidential debates, hopefully enabling a few more people to listen to the most powerful man on earth and his “opponent” battling it out.

The point is that even though the MPAA is whining about copyright infringement and reports that BitTorrent alone uses 35% of all available bandwidth on the planet, one should remember that there are legitimate uses for the technology.

So would content owners really be interested in this? Well, what is clearly missing is a stream of money back to the studios or TV channels; without it, the bigger studios won't be interested. However, remember that Michael Moore actually encouraged people to download and spread his documentary Fahrenheit 9/11 online, to maximize the number of people who saw it before the American election. So, while it may not be the preferred system for distributing the latest Star Wars flick, it might well prove interesting for idealistic New Documentary directors, for whom money is not necessarily the greatest reward.