Re: [ox-en] "open source model permits elitism"
- From: Stefan Merten <smerten oekonux.de>
- Date: Wed, 07 Jun 2006 08:33:39 +0200
Hi Nadav and all!
2 months (70 days) ago Nadav Har'El wrote:
It has been nice to see how in the last few years, free software (aka Open
Source software) has become a fad, with everyone thinking they understand
this phenomenon. But I think that what some of these people have been doing
is to explain how free software fits in their *old view* of the software
world, rather than really trying to understand the new view.
I absolutely agree with your posting :-) .
For example, in that article you quoted from the Economist:
On Sat, Mar 25, 2006, Geert Lovink wrote about "[ox-en] "open source model permits elitism"":
Open, But Not as Usual
Economist (03/18/06) Vol. 378, No. 8489, P. 73 *
The limitations of the open-source approach are becoming evident as
the methodology branches out of the software sector and into other
...
through continuous self-policing. Indeed, only a few hundred of the
approximately 130,000 open-source projects on SourceForge.net are
active because the others are unable to accommodate open source's
My understanding (based on 15 years of actually using free software, and
about 6 years of helping write it) is different.
What actually happens is that free software is a process of evolution, much
like Darwin's familiar theory: New free software projects are created all the
time; successful projects grow, gaining more users and/or more developers, while
projects that are not well-adapted (do not fit the needs of users and
developers well) die, i.e., get abandoned on SourceForge. Like in biological
evolution, improvements can be done on any successful free software project
to create even more successful software, either in the form of gradual
changes, or of an abrupt "fork".
This is also very much like in a market. There you also have producers
which produce without knowing in advance whether their product will be
useful to others. The difference, however, is that a failure in the
market can mean a big reduction in life opportunities, while an
unsuccessful Free Software project is at worst a waste of time.
But even this is probably rarely true because the original
contributors usually use that software anyway so it's at least useful
for them.
So the fact that there are many dead, cul-de-sac, open-source projects is
not a "limitation" that is only now "becoming evident", but rather a
direct consequence of how free software works, and actually is a benefit of
this model: good projects do not die, but rather evolve, while "bad"
(mal-adapted) projects die off.
Maybe usefulness is a more useful term here than well-/mal-adapted.
The proprietary software world is also
filled with dead projects (software that was never successful, or was once
successful and now dead), but with proprietary software, the death of
software often doesn't happen because of its intrinsic qualities, but
often because of other reasons, say, because a company goes bankrupt and
nobody else can take over its proprietary code and continue working on it.
Indeed. There are assessments that (still) 50-70% of (proprietary)
software projects fail. Well, for some this means they are not
completed with the expected resources - which is usually not a
criterion for (Doubly) Free Software projects. However, many of these
failing proprietary software projects are actually never used at all.
Someone might reply that what I've just said only applies to small meaningless
software (such as a clock) and not to large, important, software (such as a
word processor or browser or whatever). I beg to differ. Huge, monolithic
software projects are not a "fact of life", but rather a choice made by some
projects (e.g., OpenOffice and Mozilla), often projects that started their
life as proprietary software (in this case StarOffice and Netscape). Other
projects choose to remain very componentized and distributed, and continue
to work in small groups. A good example is the X Window System, in which
individual components (the server, window manager, basic libraries, widget
libraries, etc.) have always been developed by different groups with only a
small amount of interaction. Another good example is the Apache web server,
which, while it (now) has some centralized infrastructure, tries to keep the
separation between the different components, so that the small group who are
crazy about (say) SSL and HTTPS can work in their own "close-knit" group
while people interested in (say) caching can work on that. A similar
separation happens in the development of the Linux kernel, where a very
small close-knit group can be involved in writing a single piece of the
entire kernel (say, one driver or one feature).
I think this is an important observation: Free Software projects tend
to modularize instead of clumping. For a corporation it makes sense to
clump more and more functionality into its proprietary software - for
instance because of additional features which can be used only by
using this software.
If you have usefulness in mind then it makes more sense to modularize
things so they are useful in many contexts. Let me give one example.
In Linux there is `ispell` - a seasoned spell checker which started
out decades ago. There are lots of dictionaries for this spell checker
around, and due to its age alone the software embodies a lot of know-how.
`ispell` can be used with many applications - I'm using it with Emacs
for instance. This is useful for these other applications because they
get spell checking for Free if they just implement the interface
required to use `ispell`. It is also useful for the users of those
applications because they can build up a single personal dictionary
that is then used with every application - instead of maintaining a
separate dictionary for each application offering spell checking.
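To make the interface idea concrete: applications like Emacs talk to `ispell`
through its "-a" pipe mode, sending one word per line and reading back result
lines that start with "*" (correct), "&" (misspelled, with suggestions) or "#"
(misspelled, no suggestions). Here is a minimal Python sketch of a client-side
parser for such result lines; the function name and the example input are my
own illustration, not part of `ispell` itself:

```python
def parse_ispell_line(line: str):
    """Parse one result line of ispell's -a pipe protocol.

    Returns None for a correctly spelled word, or a
    (word, suggestions) tuple for a misspelled one.
    """
    if line.startswith("*") or line.startswith("+"):
        return None  # word was correct ("+" means correct via affix root)
    if line.startswith("&"):
        # Format: "& <word> <count> <offset>: sug1, sug2, ..."
        head, _, tail = line.partition(":")
        word = head.split()[1]
        suggestions = [s.strip() for s in tail.split(",")]
        return (word, suggestions)
    if line.startswith("#"):
        # Format: "# <word> <offset>" -- misspelled, no suggestions
        return (line.split()[1], [])
    return None

# Example result line as ispell might emit it for the misspelled "helo":
print(parse_ispell_line("& helo 2 0: hello, halo"))
```

Any editor or mail client that implements this small protocol gets spell
checking, all the dictionaries, and the user's personal word list for free.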
Again, just like in biological evolution, "innovation" can often happen
as a fork or a branch into a new habitat, not just as slow improvements on
an existing project inside its old habitat. Just as a random example:
I just installed Fedora Core 5, and I noticed that after more than a decade
in which the "locate" command (used to quickly search for files on your
system) hadn't changed, two new things appeared:
* "mlocate", which does the same thing as the old locate but uses much
less time to update its database, by using an innovative algorithm.
* "beagle", a new full desktop search (similar to Google Desktop) application
Neither of these applications depended on people sending patches to the
original author of the "locate" command. The original "locate"'s source code
was available to build on, but when innovations came along, the innovators
did not have to confine themselves to the development process of the original
"locate" if they didn't want to (and in this case, they didn't want to).
Agreed. This also reminds me of how things are in markets. It is no
problem to "offer" your own idea of how to do things. However, unlike in
a market, you can even use the brightest ideas of your "competitor" by
using their source.
It's funny, but more and more I see how Free Software in many regards
works similarly to products in a market - but without the alienation
:-) . I think this is another piece in the puzzle to understand how a
synthesis of thesis and anti-thesis overcomes old structures.
With Free greetings
Stefan
--
Please note this message is written on an offline laptop
and sent out in the evening of the day it is written. It
does not take any information into account which may have
reached my mailbox since yesterday evening.
_________________________________
Web-Site: http://www.oekonux.org/
Organization: http://www.oekonux.de/projekt/
Contact: projekt oekonux.de