Wednesday, November 22, 2006

Why Open Source Does Not Work (For Me)

When freedom is compulsory, can it still be called freedom?
(anonymous ftp)

While I can't help applauding some practical consequences of the Open Source movement, I find the ideology behind it rather scatological.

As it stands, there is balance in the force. But not necessarily due to the quality of say, ahem, Linux, but rather thanks to the suckiness of some of the non-Open Source players. Apple stands as proof that brilliant products are indeed possible in the world of proprietary designs.

I enjoy writing software and I take pride in my work. I am most happy when my code helps people solve practical problems. But I also enjoy getting paid for my work, at the fair market rate.

Yet the collaborative model of Open Source is effective. When people work together, they tend to become a distributed system, tolerant of faults and full of opportunities for major fun. What's lacking is the opportunity (for the worker bees) to make a major buck. Once the Source goes Open, it starts attracting more dung flies than worker bees.

What I would love to see is a system where people have fun building a cathedral right next to the bazaar.

The two-digit IQ-ers (merchants, thieves, and bums that frequent the bazaar) have to pay to enter the cathedral, and the cathedral builders share the profits (which may be monetary or otherwise, as some may choose to contribute for good karma or posthumous glory, and have their source code shine in its eternal simplicity and elegance).

How can we achieve a development model that accommodates a pragmatic mix of open and closed source, thus satisfying the individual needs of the people involved?

There are technical and non-technical options. The possible non-technical options include:
  1. have all contributors adhere to an honor system (weak, because we all know how human nature works);
  2. have everybody sign NDAs and contracts (may quickly degenerate into "my lawyers against their lawyers" situations);
  3. have someone of good morals and notorious character appointed (elected?) leader, arbiter, gatekeeper and distributor of profits, after carefully weighing everybody's contribution (yeah right, look at our elected leaders).
You can easily see that a technical solution that enforces and self-polices the model might be a better alternative. How about, for example, having everybody design and code to interfaces? Then each module owner is free to open up her implementation, or to keep it proprietary. The approach only works as long as each contributor owns a meaningful component of the system that can be put behind an interface. That's really not too much to ask, and it would benefit the designs: team members have to think in terms of the intent behind the abstraction rather than the actual algorithms that implement it, and crossing the interface barriers for the purpose of tweaking and hacking becomes naturally discouraged.
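To make the idea concrete, here is a minimal sketch with invented names (SymbolReader is hypothetical, not part of any real project): the interface is the published contract, and each module owner decides whether the implementation behind it is open or closed.

```cpp
#include <cassert>
#include <string>

// The published contract: this is all that other contributors see.
struct SymbolReader
{
    virtual ~SymbolReader() { }
    virtual std::string lookup(unsigned long addr) const = 0;
};

// One owner's implementation; it could ship as open source...
class OpenSymbolReader : public SymbolReader
{
public:
    std::string lookup(unsigned long) const { return "open_symbol"; }
};

// ...or stay closed, hidden behind a binary-only factory function.
SymbolReader* make_reader()
{
    return new OpenSymbolReader;
}
```

Clients program against SymbolReader only, so swapping an open implementation for a proprietary one is a matter of relinking, not rewriting.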

The major challenge for this approach of mixing and matching open and closed modules is the lack of binary standards in environments such as C++ (the Itanium ABI is still young) and Linux. This fluidity is purposely perpetuated by Linux and GCC, and even enforced by the LGPL, which prevents static linkage as a workaround for runtime "platforms" that are as stable as quicksand.

In a future installment, I will go over my attempts to create a platform that allows a mix of open source and closed IP within the same project, when I started the Zero Debugger project.

Tuesday, November 14, 2006

I have recently discovered Linux Sabayon 3.1, and so far it's the only distro that works with my new Turion x2 DV6000z laptop. I still have some difficulties with the wireless, but I am hopeful that the 3.2 release scheduled for the 27th will fix the issues. Give it a try, it simply rocks.

Saturday, November 04, 2006

Clever Design or Fancy Hack? You Decide.

When I started designing the Zero Debugger Engine a couple of years ago, I also slapped together a Gtk-based user interface plugin. The engine has an API for extending the debugger with plugins.

The purpose of my UI plugin was to drive the design of the API that the engine should expose. It was intended to be a throwaway. There wasn't much design behind the UI; I thought of it more as a test bed than even a prototype.

Yet, as happens so many times in software (and in life), at some point I got stuck with it. Once I was OK with the engine API, I should've started working on the "real" UI plugin. Clean room design, right?

In the process of developing the engine and its C++ API, which I dubbed the Zero Development Kit, it became evident that for some simple customizations C++ was overkill, and a scripting-language API would be easier and more intuitive for most users. With this idea in mind, and with all the lessons learned from the "mock" UI, I started drafting a design for a Python-written UI.

But because Zero is my witching hours project, and most of the time I get distracted by annoying things such as having a day job, the Python UI had not seen the light of day until the last few months (see my previous post).

Meanwhile, it was essential for the debugger to have a UI, to appeal to users as an interesting proposition... So the "throwaway" UI project outlived its initial scope. But there was a problem:

The C++ UI was based on gtk-- 1.2 (the ancestor of Gtkmm, a C++ wrapper for the Gtk library). I should've started with the newer technology, Gtkmm 2.x, but I was hoping to get someone interested in a business proposition, and their company was running on RedHat 7. But I also needed to show something that worked on the more modern Linux distros.

To cut to the chase, I decided to write some adaptation layers so that my initial code would compile with both gtk-- and Gtkmm. For classes such as Gtk::CTree, which have been completely replaced with Gtk::TreeView, Gtk::TreeModel and such, I just created a minimal implementation of CTree.
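The flavor of such an adaptation layer can be sketched with invented, Gtk-free names: an "old" CTree-style class reimplemented as a thin façade over a "new" model-based API (a toy illustration under my own naming, not the actual Zero code):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in for the "new" API (think Gtk::TreeView + Gtk::TreeModel):
// rows live in a model object, separate from the view.
class NewListModel
{
    std::vector<std::string> rows_;

public:
    void add_row(const std::string& text) { rows_.push_back(text); }
    size_t row_count() const { return rows_.size(); }
};

// Minimal re-implementation of the "old" API (think Gtk::CTree) in
// terms of the new one, so that legacy call sites compile unchanged.
class CTreeCompat
{
    NewListModel model_;

public:
    void insert_node(const std::string& text) { model_.add_row(text); }
    size_t node_count() const { return model_.row_count(); }
};
```

Old call sites keep their shape; only the class behind them changes.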

But there are also classes that have retained most of their interface between the 1.2 and 2.x versions of Gtkmm, give or take a couple of deprecated methods and some new ones. For example, the Gtk::Notebook class has a new family of overloaded methods, append_page. Concise and self-documenting, they allow one to write code such as:

void TabLayoutStrategy::add_registers_view(Widget& w)
{
    rightBook_->append_page(w, "Registers");
}

To achieve the same effect in the old code I had to say:

void TabLayoutStrategy::add_registers_view(Widget& w)
{
    using namespace Gtk::Notebook_Helpers;

    Gtk::Label* label = Gtk::manage(new Gtk::Label("Registers"));
    rightBook_->pages().push_back(TabElem(w, *label));
}

Very verbose, and the simple intent to append a page with a given tab label text kind of gets lost...

The new append_page method can be implemented in terms of the older API, like this:

struct Notebook_Adapt : public Gtk::Notebook
{
    void append_page(Gtk::Widget& w, const Gtk::nstring& text)
    {
        // ...
    }
};
Now, if only I could replace Gtk::Notebook with Notebook_Adapt in the gtkmm 1.2 compilation... and here's where the hack comes in: Notebook_Adapt does not add any member data to Gtk::Notebook, just non-virtual methods. So a Notebook_Adapt instance could be overlaid on an existing Notebook instance, hmmm.
Enter the adapting pointer:

template<typename S, typename T>
class APtr
{
    S* p_;

public:
    explicit APtr(S* p) : p_(p) { }

    T* operator->() const
    {
        BOOST_STATIC_ASSERT(sizeof(S) == sizeof(T));
        return static_cast<T*>(p_);
    }

    S& operator*() { assert(p_); return *p_; }
    const S& operator*() const { assert(p_); return *p_; }

    operator const void*() const { return p_; }
};

Armed with this template and, yuck, yeah, some preprocessor help, I can now say:

#ifdef GTKMM_2
    typedef Gtk::Notebook* NotebookPtr;
#else
    typedef APtr<Gtk::Notebook, Notebook_Adapt> NotebookPtr;
#endif
// ...

class TabLayoutStrategy : public LayoutStrategy
{
    // ...

    // old:
    // Gtk::Notebook* rightBook_;
    // replaced with:
    NotebookPtr rightBook_;
};

Inside the class above, calls to:

rightBook_->append_page(widget, labelText);

now compile (i.e. are source-level compatible) with either gtk-- 1.2 or Gtkmm 2.x.
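The whole overlay trick can be reduced to a self-contained toy, with invented Widget/WidgetAdapt names standing in for Gtk::Notebook and Notebook_Adapt (strictly speaking, the downcast onto an object that is really a Widget is outside the C++ standard's guarantees, which is exactly what earns it the "hack" label):

```cpp
#include <cassert>

// Stand-in for Gtk::Notebook: has state, and only the "old" primitive.
struct Widget
{
    int pages;
    Widget() : pages(0) { }
    void add_one() { ++pages; }
};

// Stand-in for Notebook_Adapt: no data members, only a non-virtual
// convenience method written in terms of the old primitive.
struct WidgetAdapt : public Widget
{
    void add_two() { add_one(); add_one(); }
};

// The adapting pointer: stores an S*, hands out a T* for member access.
template<typename S, typename T>
class APtr
{
    S* p_;

public:
    explicit APtr(S* p) : p_(p) { }

    T* operator->() const
    {
        // identical layout is what makes the overlay workable in practice
        static_assert(sizeof(S) == sizeof(T), "T must add no data members");
        return static_cast<T*>(p_);
    }
};
```

Calls through the APtr resolve to WidgetAdapt's convenience methods while the pointee remains a plain Widget, which is the effect the Notebook code relies on.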