Sunday, December 30, 2007

Yet Another New Year's Resolution

Because this is the time of year when people make resolutions about... ahem... losing weight, and because I recently read a blog post arguing that various programming languages are inherently bloated, I decided I should write down my opinions on fat code.

In a post back in June, I reported that the ZeroBUGS project had 135,601 lines of code, of which 120,815 (89.10%) were C++. Tonight, having just finished a big round of changes, the project totals 142,992 lines of code, 126,779 (88.66%) of them C++.

So one could argue that my project got inflated by roughly 6k lines of (C++) code. And what do I have to show for six months of fattening up?

Bug fixes notwithstanding: I have ported the code to the PowerPC platform, added support for visualizing wide strings and Qt strings, added a feature that allows debug events to be ignored per thread, and (hot from the oven and about to be released) added support for the D programming language's associative arrays. All in only six thousand lines of code. Not too bad, but that's just my opinion.

What, in general, are the factors that cause source code to bloat?

System Refactoring
Back when I was working for Amazon.com, we had to rewrite the ordering system (one of the many Amazon software components, the ordering system was in charge of all the magic that happens from the moment you click Proceed To Checkout until your items ship). When I joined the team tasked with the rewrite, we had a subsystem consisting mainly of a few tens (or maybe hundreds?) of C functions.

This thing was not very flexible, and was already bursting at the seams whenever the business people requested new functionality. It also had static dependencies on almost everything else in the system. The decision was to replace it with a middle-tier service with clean APIs.

My boss at the time (an ex-Bell Labs guy) had a good plan:
  • design a set of object-oriented, abstract interfaces;
  • implement them in terms of the legacy C code;
  • then rewrite all client code to use this new C++ API;
  • and finally, once there is no more coupling to the C implementation, change the implementation, one small piece at a time.
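
The first steps of that plan can be sketched in a few lines (a hypothetical example; all the names below are made up, not the actual Amazon code):

```cpp
// Hypothetical stand-in for the old ordering code.
extern "C" int legacy_order_total_cents(int item_cents, int qty) {
    return item_cents * qty; // imagine 800 lines of battle-scarred C here
}

// Step 1: an abstract, object-oriented interface.
class OrderService {
public:
    virtual ~OrderService() = default;
    virtual int totalCents(int itemCents, int qty) const = 0;
};

// Step 2: the first implementation simply forwards to the legacy code,
// so clients can migrate to the new API before any logic changes.
class LegacyOrderService : public OrderService {
public:
    int totalCents(int itemCents, int qty) const override {
        return legacy_order_total_cents(itemCents, qty);
    }
};

// Step 4: once no client talks to the C code directly, the implementation
// can be swapped out one piece at a time behind the same interface.
class NewOrderService : public OrderService {
public:
    int totalCents(int itemCents, int qty) const override {
        return itemCents * qty; // rewritten logic goes here
    }
};
```

Clients hold only an OrderService reference, so step 3 (rewriting client code) becomes a mechanical change, and step 4 never touches the clients again.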
And this is what we did, and I think it was a successful project, albeit with a few wrinkles:
  • the migration effort took a couple of years to complete; meanwhile, the old system co-existed and was being actively changed and maintained;
  • do not forget the people factor: some of the middle managers (I hear these days they got promoted, in accordance with the Peter Principle) had personal political agendas that caused the project to take longer than necessary

The overall effect was that in fact we had two parallel systems in existence: a legacy one, and the "new" one.

The problem is that by the time you are ready to throw away the legacy system, the new system is already old enough to be called "legacy" itself.

Work on another system designed to replace the "new legacy" may start before the "old legacy" is completely retired. So the company may end up with three or more systems being maintained in parallel (sure, the plan is to eventually phase out the legacy, but that may not happen as soon as we wish). And here is one place where bloat, and its first cousin needless redundancy, thrive: when code bases are unnaturally kept alive. Two systems in parallel, an old one and its replacement, are fine. Three or more systems trying to solve the same problem are not a Good Thing. And just in case you did not catch it: needless redundancy is in itself a needlessly redundant association of words.

Supporting multiple platforms
Another reason for code to grow in size is portability. In order to make your code portable, you need abstractions and indirections. I started ZeroBUGS in late 2003, because I wanted to best GDB.

My first debugger prototype was less than three thousand lines of code. I was unreasonably enthused: it did not work with multiple threads; it did not load core dumps. The support for STAB was sketchy, and there was no DWARF support at all. There was no expression interpreter. The code was stable as a rock, and that's pretty much all I can say about it. And GUI? What GUI?

It took another four years or so to add all these features. Maybe a third to a half of this time was spent making said features portable. And I do not even mean across OSes or CPU architectures. You see, when I started writing the GUI I went with Gtk-1.2 and the corresponding C++ wrapper, Gtk--. By the time I was done, the world had already moved on to Gtk-2.x, and the Gtkmm C++ wrapper was a standard package in most distributions. I had to write an adaptation layer, not unlike the ordering system adaptation API back at Amazon. And that bloated my source code. But today I can compile against either Gtkmm or Gtk-1.2 (and 95% of the details are transparent to my client GUI code).

But Gtk-1.2 may no longer be relevant, some people might say. And I think they are right. But let's look at Professor Tanenbaum's MINIX for a second, shall we? Not only because it is a lean and robust system (easy when you do not have features; multi-threading, anyone?) but because if you read the source code you notice a strange thing: there are a lot of macros dedicated to ensuring compatibility with Kernighan & Ritchie C compilers. In the 2006 third edition of the book. What the heck?

I guess the lesson here is that writing for portability is fine, but keep an eye on things that may become obsolete sooner than you think. The code that deals with one particular OS, compiler, etc. will otherwise turn into dead weight.

So my 2008 resolution is to get ZeroBUGS on the treadmill.
Happy New Year!

Sunday, November 25, 2007

ZeroBUGS on PowerPC


Happy Thanksgiving! (Yes, we do celebrate Thanksgiving in Seattle in spite of what the Liberal Turkeys want you to believe).

And what a great long weekend this was. We took advantage of the great weather and hiked in the Lincoln and Discovery Parks and walked around Alki Beach, with our son in the backpack.

I have been looking for a solid block of free time from my daytime job in order to look into porting ZeroBUGS to the PowerPC.

I have an old PowerMac that I bought used from a company in Oregon this past summer, for this specific purpose (and promptly installed Ubuntu on it).

But I did not have the time to look into it seriously until now. The main motivation was to see how far I could push the envelope with the existing design holding up. In other words, I wanted to see how portable my overall design of ZeroBUGS was. And porting prompts one to revisit, re-test and refactor old code.

Of course, I did not expect the outcome of three or four nights of hacking to be perfect.

But the result, while very rough, is quite usable (at least usable enough for me to bootstrap the debugging of the debugger with itself) and I am quite happy with it.

So what's so hard about porting the debugger to the Power architecture, you may ask? Shouldn't it be a straightforward task, given that it is written in portable C++?

A debugger interacts with the target program (and the OS) at a low level, and hence parts of it depend on low-level architecture details. Take stack unwinding, for example. The layout of stack frames (or links, in PPC parlance) is different from that on Intel chips. And so is the mapping of DWARF register numbers onto the CPU's general-purpose registers.

Linux runs on the PowerPC in big-endian mode (as opposed to Intel chips, which are little-endian), and thus the binary representation of C and C++ bit fields differs.
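
For the curious, here is a quick way to probe the byte order at run time (a generic sketch, not ZeroBUGS code; byte_swap32 is the kind of helper a debugger needs when the host and the target disagree):

```cpp
#include <arpa/inet.h> // htonl (POSIX)
#include <cstdint>

// On a big-endian machine the most significant byte of a
// multi-byte value comes first in memory.
bool is_big_endian() {
    const unsigned int probe = 1;
    return *reinterpret_cast<const unsigned char*>(&probe) == 0;
}

// Reverse the byte order of a 32-bit value read from a target
// whose endianness differs from the host's.
uint32_t byte_swap32(uint32_t v) {
    return (v >> 24)
         | ((v >> 8) & 0x0000ff00u)
         | ((v << 8) & 0x00ff0000u)
         | (v << 24);
}
```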

The hardware support for debugging (special debug CPU registers) also varies across processors.

Well-thought-out abstractions should isolate these idiosyncrasies. I found out that I had done a good job designing the stack unwinding mechanism (a "driver" template method that delegates to platform-specific bits) but totally blew the model for hardware debug registers.
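
The shape of that "driver" is roughly the following (a simplified sketch, not the actual ZeroBUGS classes; the FakeWalker below just walks frames fabricated in the current process, standing in for reads from the debuggee's memory):

```cpp
#include <cstdint>
#include <vector>

// Template-method "driver": the walk is generic, the frame layout is not.
class FrameWalker {
public:
    virtual ~FrameWalker() = default;

    // Collect return addresses, starting from the innermost frame pointer.
    std::vector<uintptr_t> unwind(uintptr_t fp) const {
        std::vector<uintptr_t> trace;
        while (fp) {
            trace.push_back(return_address(fp)); // platform-specific
            fp = next_frame(fp);                 // platform-specific
        }
        return trace;
    }

protected:
    virtual uintptr_t return_address(uintptr_t fp) const = 0;
    virtual uintptr_t next_frame(uintptr_t fp) const = 0;
};

// A toy "target" whose frames live in this very process, for illustration.
struct FakeFrame { FakeFrame* caller; uintptr_t ret; };

class FakeWalker : public FrameWalker {
protected:
    uintptr_t return_address(uintptr_t fp) const override {
        return reinterpret_cast<const FakeFrame*>(fp)->ret;
    }
    uintptr_t next_frame(uintptr_t fp) const override {
        return reinterpret_cast<uintptr_t>(
            reinterpret_cast<const FakeFrame*>(fp)->caller);
    }
};
```

A PowerPC port then only has to supply the two protected overrides; the driver loop never changes.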

Oh well. Back to the drawing board. Luckily it is not a heavily-used feature...

Sunday, November 18, 2007

The Pros and Cons of Hitch Hacking... Ubuntu 7.10


After fubar-ing one of my systems last week with Fedora Core 8, I switched to Ubuntu 7.10 Ballsy Baboon as my main development system and I just love it.

I could not care less about the over-hyped desktop effects; I am interested in software development, and in that context the eye candy is irrelevant.

Ubuntu 7.10 allows you to install all major GCC/G++ versions from 2.95 to 4.2, which is so cool! I can test with older GCC versions without having to boot up an older distro! Excellent.

So that is the Good. Now for The Bad and The Ugly: the PowerPC is not officially supported.

This is hardly news; 7.04 is not supported either, but it worked great on my second-hand G4. Upgrading from 7.04 to 7.10, on the other hand, is a terrible mistake. I repeat: if running on PowerPC, do not upgrade from Ubuntu 7.04 to 7.10!

If you do, your machine may not reboot properly. It is very likely that you will find yourself stuck at BusyBox, and then you will have to type:

modprobe ide_core
exit

to bring the system up.

Then you may also see an error box popping up, about HAL not starting. You will have to start it manually:

sudo /etc/init.d/hal start

The network interface may not start either, and there is no sound. But then again, I am only using the G4 for porting ZeroBUGS to PowerPC (to which I devote very, very little time, so not a biggie).

As a bonus point: on the AMD64 (my main system) nspluginwrapper and flash are installed auto-magically.

So overall this Manly Monkey distro is great! Oh wait. I got the name wrong again... I think they call it Goofy Gorilla. Or Ghastly Gnome.

Sunday, November 11, 2007

Fedora 8 Headaches

I should have known better: if it ain't broke, don't fix it. My main system is a 64-bit machine which, until yesterday, ran Fedora Core 5. On top of that I ran several 32-bit distros in VMWare.

The system was just fine, yet I decided to get adventurous and upgrade to the new Fedora 8. Big friggin' mistake. First of all, the installation DVD consistently got stuck at about 30% of Checking dependencies in packages selected for installation...

So I grabbed a Fedora 7 DVD which I had burnt a while back and never got around to playing with, and tried it out. The upgrade worked (I could not afford a fresh install, since I needed to keep my data). And I should have stopped there. But no, I got greedy and retried the FC8 DVD. No more hanging at checking dependencies... hurray! But wait.

After rebooting the system, pirut (the front end to YUM) does not work anymore (it crashes every time), and the Gnome desktop forgets to repaint after a while, looking like a Windows 3.1 desktop back when an application leaking GDI resources could mess up the entire machine.

Oh, and because of the new 2.6.23 kernel, I had to recompile VMWare's kernel module, which would have been OK, had it worked. But in the good old tradition of kernel developers who don't give a fudge about backwards compatibility, the kernel source is now incompatible with VMWare Server 1.0.4.

I always thought that it would be easier to test my software if I limited myself to fewer distributions. So thank you RedHat, you are out.

Friday, November 09, 2007

Detecting Motor Stalls in NBC/NXC

So I stayed up all night again playing with the Mindstorms kit. I re-built my car model with a shorter, sturdier frame and a more precise steering. I kept the brick in the back and the differential transmission.

Since that went well into the wee hours, I had little time left to program the thing before the sunrise (yeah, I had to stretch that Transylvanian connection again).

Anyway, the detection of obstacles and the steering around them are quite imperfect, and the vehicle bumps into things more often than I would like. So I decided to compensate with a hack, and searched the Internet for ways to detect motor stalls. The search turned up a couple of solutions, one in RobotC and another in the graphical NXT-G language.

I could not find any NXC code snippet, but after some experimentation I came up with the code below. It checks whether the rotation count stays under a specified threshold three times in a row (to avoid spurious false positives).


#define OUT_DRIVE OUT_C
#define STALL_THRESHOLD 6 // may depend on battery charge level?


bool is_stalled()
{
    for (int i = 0; i != 3; ++i)
    {
        ResetRotationCount(OUT_DRIVE);
        Wait(10);

        const int count = MotorRotationCount(OUT_DRIVE);
        NumOut(0, LCD_LINE1, count, true);

        if (abs(count) >= STALL_THRESHOLD)
        {
            return false;
        }
    }
    return true;
}


Here is another variation, which resets the rotation count only once and scales the threshold instead (stallThreshold is a global, set by the input routine further down):

bool is_stalled()
{
    ResetRotationCount(OUT_DRIVE);

    for (int i = 0; i != 3; ++i)
    {
        Wait(10);

        const int count = MotorRotationCount(OUT_DRIVE);
        NumOut(0, LCD_LINE1, count, true);

        if (abs(count) >= stallThreshold * (i + 1))
        {
            return false;
        }
    }
    TextOut(0, LCD_LINE1, "stalled", true);
    return true;
}

I have yet to build a robot with the motors attached directly to the wheels, and I wonder if the false positives that I get are related to the fact that my wheels are powered through a differential transmission.

And the stall threshold depends on the surface the robot runs on, so I added the following code to read it from the user:

int stallThreshold; // shared with is_stalled() above

#define INIT_STALL_THRESHOLD STALL_THRESHOLD

void input_stall_threshold()
{
    bool done = false;
    int count = 0;

    stallThreshold = INIT_STALL_THRESHOLD;
    until (done)
    {
        Wait(300);
        ReadButtonEx(BTNCENTER, true, done, count);

        bool pressed = false;
        int n = 0;

        // left button decrements the threshold
        ReadButtonEx(BTNLEFT, true, pressed, n);
        if (pressed)
        {
            if (stallThreshold > 0)
                --stallThreshold;
        }
        // right button increments it
        pressed = false;
        ReadButtonEx(BTNRIGHT, true, pressed, n);
        if (pressed)
        {
            ++stallThreshold;
        }
        TextOut(0, LCD_LINE1, "stall thresh=", true);
        NumOut(78, LCD_LINE1, stallThreshold);
    }
}

Saturday, November 03, 2007

Mindstorms Roadster Version 1.1

Boy is this Mindstorms thing addictive... I spent last night fixing a bunch of design bugs (I gave up the worm gear steering mechanism shown in the previous post, and adopted a simpler solution).

The chassis is long because my very first intention was to have the brick in the middle of the car (hence the X design, to avoid sagging). But it is more practical to have the brick in the back. And it looks cool, too.





With the mechanical issues figured out, I went and downloaded the Bricx Command Center 3.3, and after some tinkering I came up with this code (the car has the touch sensor attached in the back, not shown in the photos).

Although the code is just work in progress, it is already lots of fun to watch the car on the kitchen floor, backing out and steering away to avoid bumping into the furniture!


#include "NXCDefs.h"

#define NEAR 25
#define MIN_BACKOUT 20
#define MAX_BACKOUT 100
#define STEER_ANGLE 20

int angle = STEER_ANGLE;
int backout = MIN_BACKOUT; // backout time

task main()
{
    int i;

    SetSensorTouch(IN_1);
    SetSensorLowspeed(IN_4);

    while (true)
    {
        OnRev(OUT_C, 100); // motor is installed backwards,
                           // we are going forward

        while (SensorUS(IN_4) > NEAR)
            ;

        OnFwd(OUT_C, 100); // back out at full speed
        Wait(50); // wait 50 ms

        // steer away from obstacle
        RotateMotor(OUT_A, 50, angle);

        for (i = 0; i < backout; i++)
        {
            // bumped into something while backing?
            if (SENSOR_1)
            {
                // play sound, then stop
                PlayFileEx("! Startup.rso", 1, FALSE);
                backout = MIN_BACKOUT;
                break;
            }
            Wait(100);
        }
        Off(OUT_C); // stop

        angle = -angle; // steer the other way

        RotateMotor(OUT_A, 50, angle);

        backout += 10; // back out increasingly longer
        if (backout >= MAX_BACKOUT)
        {
            backout = MIN_BACKOUT;
        }
    }
}

Friday, November 02, 2007

The Roadster


Here is my first Mindstorms NXT project: The Roadster. One servo motor controls the steering (the mechanism is still hacked up, but I have a couple of ideas for making it more solid).


Another motor powers the rear axle through a differential from nxtasy.org. Designing mechanisms is not a strong suit of mine, but I recognize a cool design pattern when I see one. And reuse is an ingredient of successful engineering.

A very simple program receives notifications from the sonar when objects are detected closer than 35 centimeters, and commands the front wheels to turn in order to avoid a collision.

I look forward to downloading the NBC/NXC tools (maybe this weekend?) and building a more complex program.

My inner child is so happy.

Wednesday, October 31, 2007

The Perfect Poker Player


A friend sent me this funny commercial, advertising... wait a minute. Could the guy in the video possibly be bluffing? (The dialogue is a bonus for Romanian speakers.)

Tuesday, October 30, 2007

Handling Invalid UTF-8 in ZeroBUGS


Based on suggestions from users, I have changed the way ZeroBUGS deals with invalid UTF-8 when loading source files. The GtkSourceView widget uses ustrings internally, and when passed single-byte strings it will assume UTF-8 encoding and convert them.

In order to prevent accidents such as a user loading a binary file instead of a source file, I had my UI reject outright any invalid UTF-8 text.

But it turns out that sometimes this restriction is too strong, so I have added a dialog that asks before rejecting the file.
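
For the record, GLib ships g_utf8_validate for exactly this kind of check; a standalone sketch of the structural part looks like this (simplified: it checks the byte patterns only, and deliberately lets overlong forms and surrogates pass):

```cpp
#include <string>

// Returns true if every byte sequence matches the UTF-8 shapes:
// 0xxxxxxx, 110xxxxx 10xxxxxx, 1110xxxx 10xxxxxx 10xxxxxx, or
// 11110xxx followed by three continuation bytes.
bool looks_like_utf8(const std::string& s) {
    for (size_t i = 0; i < s.size(); ) {
        const unsigned char b = s[i];
        size_t extra;
        if      (b < 0x80)           extra = 0; // plain ASCII
        else if ((b & 0xE0) == 0xC0) extra = 1; // 2-byte sequence
        else if ((b & 0xF0) == 0xE0) extra = 2; // 3-byte sequence
        else if ((b & 0xF8) == 0xF0) extra = 3; // 4-byte sequence
        else return false; // stray continuation byte, 0xFE/0xFF, etc.

        if (i + extra >= s.size())
            return false;  // sequence truncated at end of buffer

        for (size_t k = 1; k <= extra; ++k)
            if ((static_cast<unsigned char>(s[i + k]) & 0xC0) != 0x80)
                return false; // continuation byte must be 10xxxxxx

        i += extra + 1;
    }
    return true;
}
```

A binary file almost always trips one of these checks within the first few bytes, which is what makes the reject-early policy practical.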

Saturday, October 27, 2007

Who Needs Geography



"Who Needs Facebook" asks Forbes' Evelyn M. Rusli. And who needs geography? I should add.

Dear Evelyn, I know that Google is omnipotent and they can move Redmond to California should they decide to, but for the time being it has not been moved.

I think you got owned.

It Was a Very Good Year

When I was twenty-eight,
It was a very good year
It was a very good year for Internet hacks
Who were full of air
With all those flowers in their hair
And it came undone
In nineteen ninety eight.

On June 25 Microsoft released Windows 98 First Edition.

On November 24 America Online announced its intent to acquire Netscape Communications in a stock-for-stock transaction worth US$4.2 billion.

The Browser Wars thus approached conclusion. We all know how much market share Netscape initially had, and how little share Mozilla with all of its derivatives has today.

So whenever I hear all that buzz about how great and awesome and strong and cool and supercalifragilistic Google is, I like to point out that (for whatever strange reasons) history tends to repeat itself.

Wednesday, October 24, 2007

Linurix: Linux Limericks

I like to improve my English as a second language every chance I get. For example, when I am stuck in traffic I practice making limericks.

Today I came up with something new: linurix (plural linurix). Limericks about Linux. They go something like this:

There is a man from Stellenbosch
Who's never seen a Macintosh,
Because Winnie Mandela
Once told the poor fella
That Ubuntu Linux was posh.

There is a guy in Tennessee
Who owns two Macs but no PC.
He thinks RedHat from Raleigh
And the Pope are unholy.
And this is religion, you see.

Charles Simony circled the Earth.
On his way back he came forth
And called his attorney
"Hey man, this journey
Confirms: Ubuntu is shuttle-worth!"


Tuesday, October 23, 2007

Wide Strings Debugger Support



This past weekend I worked on adding support for visualizing wide standard strings and Qt strings in the ZeroBUGS debugger. For convenience, they can be displayed as if they were regular C strings.

This feature can be turned off and on from the Edit / Options menu. Under the Language tab, if the checkbox for Qt strings is unchecked, the inner structure of the object is shown. Otherwise Qt strings are displayed as regular strings. std::string and std::wstring instances are handled in a similar fashion.

Before I implemented this feature, the values of the variables displayed by the debugger were represented as C strings.

The solution for wide strings keeps the existing design: I read the Unicode strings from the target process' memory, then call wcsrtombs to encode them as UTF-8.
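
The conversion step boils down to something like this (a simplified sketch; the real code reads the wide characters out of the debuggee's memory first, and a UTF-8 locale must be in effect for non-ASCII text to come out as UTF-8):

```cpp
#include <cwchar>  // wcsrtombs, mbstate_t
#include <string>

// Narrow a wide string using the current locale's multibyte encoding.
// With a UTF-8 locale (e.g. after setlocale(LC_ALL, "en_US.UTF-8")),
// the result is UTF-8. Returns an empty string on conversion failure.
std::string narrow(const wchar_t* wide) {
    std::mbstate_t state = std::mbstate_t();
    const wchar_t* src = wide;

    // First pass with a null buffer just computes the required length.
    const size_t len = std::wcsrtombs(nullptr, &src, 0, &state);
    if (len == static_cast<size_t>(-1))
        return std::string();

    std::string out(len, '\0');
    src = wide;                   // wcsrtombs advances src; reset it
    state = std::mbstate_t();
    std::wcsrtombs(&out[0], &src, len, &state);
    return out;
}
```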

And Glib::ustring takes it from there:
GTK+ and GNOME chose to implement Unicode using UTF-8, and that's what is wrapped by Glib::ustring. It provides almost exactly the same interface as std::string, along with automatic conversions to and from std::string.

It is still a new feature and it needs more testing and possibly, ahem, some debugging, but overall I am pleased with the results.

And if you have any feedback regarding which features I should work on next, please take the poll in the right-hand sidebar, or leave a comment. Or both. Merci.

Sunday, October 14, 2007

AMD's CodeAnalyst

Now that I have stabilized the feature set of ZeroBUGS, I am looking into improving the overall performance and, implicitly, the user experience.

This weekend I Live-searched for profiling tools and found AMD's CodeAnalyst.

On Linux, CodeAnalyst uses oprofile.

I downloaded the source code for version 2.6.22 of CodeAnalyst and proceeded to build it on a Fedora 5, 64-bit system equipped with an Athlon processor. It is not clear to me whether the 2.6.22 version number is a coincidence. Does the kernel version have to be an exact match for best results?

I am running 2.6.20 on that box, and the program ran but collected no samples, as per this message:
Oprofile engine has encounter error genrating system view

I grepped for the message and found it here:



void opdata_handler091::read_op_module_data()
{
    string session = m_session;
    vector<string> non_options;

    reset_data();

    for (size_t i = 0; i < m_op_events.size(); i++) {
        non_options.clear();
        non_options.insert(non_options.begin(), m_op_events[i]);
        non_options.insert(non_options.begin(), session.data());

        try {
            generate_apps_summary(non_options, i);
        } catch (op_fatal_error const& e) {
            string msg =
                "Oprofile engine has encounter error genrating system view:\n\n";
            msg += e.what();
            m_ca_display->update_display(msg.c_str());
            reset_data();
            break;
        }
    }
}


(File /src/ca/libs/libopdata/op091/opdata_handler091.cpp.) The reason I am showing the code snippet here (broken English notwithstanding) is that I noticed a bad coding idiom: the catch block is doing quite a lot of stuff, and I see several things in it that could themselves throw.

Nit-picking aside, I rebooted the machine into a different configuration (Ubuntu 6.06, 32-bit, running kernel 2.6.15) and this time I got luckier. I even found a few interesting bottlenecks in ZeroBUGS.

I concluded that the oprofiled / 2.6.20 64-bit kernel combination must have been the culprit behind my initial lack of success.

It definitely looks like CodeAnalyst is a useful tool, once one gets past the initial small road blocks.

Wednesday, October 10, 2007

Gnomes

Following my disgruntled post on Gnome and GtkSourceView last night, protests broke out in Europe (and even in France).

Seriously, some people may think that I am talking nonsense and they are entitled to their opinion.

But I am considering rewriting the UI for ZeroBUGS using KDE. I am sick of this game of ever-shifting APIs. Say what you want about Windows, but the Win32 API has stayed backwards compatible for as long as I can remember.

Those folks do not get it: when you break the API, you generate more work for developers like me. I have to carefully track the changes, decide whether to stay with the old library or port my code to the new API, and figure out whether the end user will have all the dependencies easily resolved or whether I need to ship additional shared libraries. Merde.

Ed: Max wrote a very intriguing comment about http://www.fltk.org/. Worth considering, thank you kindly, sir!

Who Developed The Linux Hart?

Oh, a stud named Linus Torvalds. According to this source, he was studding (sic) in 1991, when "Linus felt he could do better than the Minix".

And the next page reads:
How to get Linux?

Linux available for download over the net, this is useful if your internet connection is fast. Another way is order the CD-ROMs which saves time,

CD-ROM that saves time? Is he talking about the Caldera distro, perchance?

And it continues (emphasis mine):

What Kernel Is?
Kernel is hart of Linux Os.

It manages resource of Linux Os. Resources means facilities available in Linux

And ppplease, no PC lecture. English is not my native tongue either.

Tuesday, October 09, 2007

Gnome My Foot

I had not been paying much attention to the (old) Gnome versus KDE debate until yesterday (I just assumed that Linus was nit-picking and brushed it off).

But after installing OpenSUSE 10.3, I realized that the API for the gtksourceview component has changed. In the process, a part that is important to me (source markers) was dropped. It is not a show-stopper, nor reason enough for me to part with Gnome, because for the time being I have options, and the GtkSourceView team will reintroduce the dropped features by version 2.22.

But I understand now how people get upset over dropped features and interface changes.

As always, Linus was right.

All Mac Users


How many Mac users can you fit in one auditorium? All of them, apparently, as it may be seen in this picture from http://www.pizdaus.com/

Monday, October 08, 2007

OpenSUSE 10.3 First Impressions

This past weekend I installed OpenSUSE 10.3.

My first impressions: it looks very polished, albeit a bit slow (but then again, I installed it on a virtual machine, on an external USB drive). It installed some services that I do not care about, such as SMB and power management. The Beagle thing seems to have an initial crawl phase (I guess it indexes a whole bunch of stuff), and I had to kill and remove it.

And I was annoyed that there is no gtkhtml3-devel package (but I found what I needed here, so no biggie).

On the nice side, OpenSUSE 10.3 comes with GCC 4.2 out of the box, unlike Ubuntu 7.10, which ships with 4.1.3 (at least in the tribe pre-releases; we have yet to see what the official release contains).

Overall, a nice system, packed with apps and looking slick and crisp.

Speaking of Ubuntu, it is not a bad distribution either, but it is getting credit and hype for work that the upstream distro (Debian) is doing, and everybody knows that the heavy lifting in kernel development happens at RedHat, Novell, and IBM.

Thursday, October 04, 2007

RMS is Titanic

It turns out that reports of RMS GPL-ing himself (several times a day) are not new. I found this one (allegedly faked by a humorist) dated 2001:
On the other hand, Richard M. Stallman is always available for comment at press time. He issued this statement: "Unlike Linus Torvalds, I have made my signature available to the public under the GNU General Public License. It may be freely copied, distributed, and modified. However, I do request that users continue to write my signature as 'GNU/Richard M. Stallman'."

Sounds very much like something that someone who charges for autographs, as dug up here, would say.

And this RMS sank down below a long time ago.

RIP, RMS

Tuesday, October 02, 2007

The Strange Tale of Two Exits

Since my early days as a C programmer, I have always been confused by standard library functions that come in two flavors, just one leading underscore apart (as in open and _open). A simple rule of thumb, "choose either one for the job at hand", makes life easier (or almost).

There are exceptions to this rule. For example, what is the difference between _exit and exit?

If you answered "unlike exit, _exit doesn't clean up global objects" then congratulations, you may skip this article; go do something fun. But if you thought that _exit is the ISO standard synonym for exit, then I beg you to read on.

Both functions cause the application to… well, exit, and the operating system then reclaims all resources held by the process (memory, file handles, and so on).

The underscore variant does just that, without further ado. The non-underscore form, however, does a little bit more: before yielding control back to the operating system, it loops through all the functions that have been previously registered with atexit, and calls them in the reverse order of their registration.

Let’s take a look at the following C program:

1 #include <stdio.h>
2 #include <stdlib.h>
3 #include <unistd.h>
4 void finish(void)
5 {
6     const char msg[] = "Good Bye.\n";
7     write(1, msg, sizeof msg - 1);
8 }
9
10 int main()
11 {
12     atexit(finish);
13     fprintf(stdout, "returning from main...\n");
14     exit(0);
15 }


Line 14 is equivalent to saying “return 0”; when main() returns, control is transferred to the C runtime, which invokes exit(), passing it the return code.

The output of this program is:

returning from main...
Good Bye.


If the exit call in line 14 is replaced with _exit, the finish function is not called, and the “Good Bye” message does not show up in the output.

But how does this old C stuff affect us, C++ programmers? The short answer is that global objects (which live outside function main()) are cleaned up by functions registered with atexit().

Let's consider the following code:

1 #include <stdio.h>
2 #include <unistd.h> // use <stdlib.h> on Windows
3 #include <stdlib.h>
4 struct Fred
5 {
6     ~Fred()
7     {
8         fprintf(stderr, "Good Bye\n");
9     }
10 };
11
12 Fred fred;
13
14 int main()
15 {
16     fprintf(stderr, "Returning from main...\n");
17     exit(0);
18 }


Again, line 17 is equivalent to saying “return 0”; this program behaves the same as our first example. If we change line 17 to read _exit(0), we may observe that the string “Good Bye” is never printed.

“But we are not registering anything with atexit!” you may say. True, we are not, but the compiler is, on our behalf. Because an instance of Fred is being constructed outside of main(), the compiler generates some cleanup code, arranging for the destructor to be invoked as if it were registered with atexit.

And you might not even be aware that you are calling exit; maybe all you do is invoke a third-party library, or a library function developed by some other person or group in your company; that function may call exit when it encounters an error.

This may become a serious issue in a multithreaded program that has global objects: if any thread calls exit(), it may lead to the untimely demise of a global object that another thread depends upon (such as a global mutex wrapper instance).

My Two Cents (and Two Buck Chuck)


While visiting a couple of wineries in Oregon last week, I read this fun article on Fred Franzia. It got me thinking that either I have a thick palate (and thus cannot tell things apart once the bottle goes above $35), or all that "fruity flavors with accents of oak and Mediterranean spices" on the label is just chickenshit.

Is it all about pretense, and feeling smug, kind of like this piece of coprolite video in which Steve Jobs asserts that the Borg have no taste?

Who knows? Cheers!

Friday, September 28, 2007

What Would Fake Steve Do

Microsoft has been going hard after the search and advertising business for a while now, and they are getting more and more serious. Ed: see here.

I have not tried the features of Live Spaces yet, but it surely looks nice. Ed: Importing contacts from gmail into live spaces requires some gymnastics (maybe I will post about it later).

Google is firing back with their free applications, trying to get the Borg to duck. As Joel Spolsky explains,
"in infantry battles [...], there is only one strategy: Fire and Motion. You move towards the enemy while firing your weapon. The firing forces him to keep his head down so he can't fire at you"

I cannot tell why, but for the last couple of months I have been having this strange fantasy: I imagine that I am Steve Ballmer, and I scheme to bury big G.

Here are some ideas that I came up with:

  • give away infinite storage space in Hotmail, ad-free

  • ad blockers built right into Internet Explorer

  • use other apps (that Microsoft controls) as advertising channels (how about 3D ads in Halo?)

If somehow the boys in Redmond managed to do a better job at blocking spam, and gave Hotmail to everyone, ad-free, then you could kiss gmail good-bye. Of course, anything that is ad-free (such as all content displayed by IE, should it block all ads, period) would also cut into Microsoft's own advertising revenue. But they do not have to do it forever; they have other revenue streams, so they only need to block ads long enough to put G to rest. Sure, it would be really sweet to block just the competitors' ads, but something in my gut tells me that would not fly well with the other big G (ze Government).

And what about Firefox, then? Well, if 80% or so of the market sets a no-ads standard, Mozilla will have to follow suit. But who cares about a browser that is being embraced by freetards, who are not likely to click on ads and buy stuff anyway (as pointed out by this blog, free software is for poor people).

I think 3D games are excellent contenders to web browsers for showing advertising; it would not be much different from real life. Picture yourself playing a shoot-'em-up match in a decor resembling Times Square. The only difference is that the content of the billboards (in the game) would be controlled by the mother ship. Ed: I was told today that it is already being done. That shows how much I know about video games.

The main problem may be getting businesses that are used to advertising in the browser space to transition to other media (game consoles, mobile devices, etc.). But in the long run, it will force the googlers to develop more client apps (so that they can show ads in them, rather than inside the browser).

So it is all about reframing the game. If Google wants to distract Microsoft from the software business, fine: Redmond should create diversions that force Mountain View away from search.

It is an arms race, and I hope the Soviets will lose.

Ubuntu: Sex Sells

I cannot believe this. Shocking! How low is Ubuntu going to stoop? Edit: More tragically, I cannot believe some people flamed me for using this picture that "objectifies women"... Gee, wow!

Convinced that these stories of shameless sexploitation are but a dream of FSJ, I went and conducted my own independent investigation.

What I have dug out is unbelievable. (Get more offended by other outrageous stuff here and here).

Solid products should not rely on sex to sell... Oh wait. Ubuntu is free, there's nothing to sell to begin with. So what are they saying then? That Linux is for boobs, Apple is for asses?

As Monty Python would put it, my nipples explode with delight!

Sunday, September 23, 2007

Boost Your Python

One of the many Good Things that came of my going to college was that I personally met some of the most brilliant individuals of my generation, and was influenced by them.

In the early 90's Sorin Surdu Bob pushed me to learn C and, as I started humming Let it Be, assembly language hacking was out and Hello World was in.

By a symmetrical twist of fate, in the late 90's I got to work on a couple of projects with Andrei Alexandrescu, who kicked me out of my C preprocessor habits and taught me the noble art of C++ templates.

I have been using C and C++ for quite some time now, taking on jobs where performance was so critical that no other language would fit the bill. From code for portable mp3 players to handling thousands of e-commerce transactions per second, none of my projects could have been done in an interpreted language such as, say, Python.

But in the last couple of years I found myself needing to develop a quick prototype (of a graphical user interface). In the past I made the mistake of developing a complete user interface in C++, using gtkmm. It took a long time, and it was boring. Besides, the speed of C++ was not required; something to speed up the development would have been appreciated instead. Of course, there's Glade, but I set out to see if I could do even better.

So I went shopping for a language to allow me to build a prototype fast, and hopefully have some fun while at it.

The main application was already written, about 60,000 lines of C++ at the time; all I needed was to redesign the UI. Whatever language I was going to settle on, it had to play well with C++.

Enter Python: a fun, no-nonsense object-oriented programming language with good support libraries for Gtk and Glade. And there is a library in Boost that makes integration with C++ a breeze.

I would like to share with you the wonderful experience I had hybrid-programming in C++ and Python.

Imagine that you have an e-commerce application, written in C++ (why in C++ is out of scope here; imagine that someone else developed it, and that it was their best decision at the time they wrote it). One of the subsystems of this application deals with users, and you would like to quickly add some scripting capabilities to it.
The scripting feature would allow users to quickly extend the functionality of the main application. For example: someone may want to write a script that prints a report of all the newly added users; or write a graphical interface for administering the users in the system; and so on.

Say the central artifact in the subsystem is the User class:

#ifndef USER_CLASS_DEFINED
#define USER_CLASS_DEFINED

#include <string>

class User
{
public:
    explicit User(const std::string& email);
    virtual ~User();

    const char* email() const { return email_.c_str(); }
    void set_email(const std::string& email) { email_ = email; }

    //
    // etc
    //
private:
    std::string email_; // use email account to login
    std::string encryptedPassword_;
};
#endif // USER_CLASS_DEFINED


Here's all the C++ code that you need in order to export the User class to Python:


// file ecommerce.cpp
#include <boost/python.hpp>
#include "user.h"

using namespace std;
using namespace boost::python;

// ecommerce is the name of the Python module that interfaces
// to the e-commerce system -- you can name it whatever makes
// most sense to you
BOOST_PYTHON_MODULE(ecommerce)
{
    // init<string> simply means that the constructor
    // takes a string as its parameter

    class_<User>("User", init<string>())
        .def("email", &User::email)
        .def("set_email", &User::set_email)
        ;
}

Done! Now, before you go off and try this at home, there is one detail that needs to be fleshed out.


Embedding Versus Extending

There are two ways that C/C++ functionality can be made visible to Python. Which one you choose depends on the legacy C++ code that you have in place. If the C++ code is structured as a set of dynamic libraries (aka shared objects), then you can extend Python by building the above sample into a module, ecommerce.so (the file has to be named after the module for Python to import it).

But if the system is a big monolithic blob, that needs to be run as a standalone application, then you need to embed the Python interpreter in it.

The first approach, extending, should be preferred, because you can then combine your module with other Python modules and attain a higher level of versatility.

All you need to do is build your module with a command line like this (substitute your Python version for X.Y):

g++ -shared -fPIC ecommerce.cpp -I/usr/include/pythonX.Y -lboost_python -o ecommerce.so

As you probably figured out, you need the boost_python library. The good news is that you may not even have to build it yourself: the boost-devel package is readily available for most mainstream Linux distributions. At any rate, detailed instructions on how to download and install boost are available at http://www.boost.org.

Once the module is built, you may import it into Python the usual way:

>>> import ecommerce

and access user objects like this:

>>> user = ecommerce.User('nobody@nowhere.org')
>>> user.email()
'nobody@nowhere.org'

If you need to embed rather than extend Python, you need to add this code somewhere in your existing C++ program:

#include <stdexcept>
#include <string>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <boost/python.hpp>

using namespace std;
using namespace boost::python;

// generated by BOOST_PYTHON_MODULE(ecommerce) in ecommerce.cpp:
extern "C" void initecommerce();

bool run_python_script(const string& filename, int argc, char* argv[])
{
    FILE* fp = NULL;
    bool success = true;

    // the module must be registered before the interpreter
    // is initialized
    if (PyImport_AppendInittab("ecommerce", initecommerce) == -1)
    {
        fprintf(stderr, "could not register module ecommerce\n");
        return false;
    }
    Py_Initialize();

    try
    {
        object mainModule = object(handle<>(borrowed(PyImport_AddModule("__main__"))));
        object mainNamespace = mainModule.attr("__dict__");

        PySys_SetArgv(argc, argv);

        fp = fopen(filename.c_str(), "r");
        if (!fp)
        {
            throw runtime_error(filename + ": " + strerror(errno));
        }
        PyRun_File(fp, filename.c_str(), Py_file_input,
                   mainNamespace.ptr(), mainNamespace.ptr());
    }
    catch (const exception& e)
    {
        fprintf(stderr, "Exception caught: %s\n", e.what());
        success = false;
    }
    if (fp)
        fclose(fp);
    Py_Finalize();
    return success;
}

Inheritance

Say the User class presented above has a derived class, for example GroupAdmin:

class GroupAdmin : public User
{
private:
    int groupID_;

public:
    GroupAdmin(const string& email, int groupID)
        : User(email), groupID_(groupID)
    { }

    int get_group_id() const { return groupID_; }
    //
    // etc
    //
};

A GroupAdmin is also a User, and it would be nice to preserve that relationship in Python as well.

BOOST_PYTHON_MODULE(ecommerce)
{
    class_<User>("User", init<string>())
        .def("email", &User::email)
        .def("set_email", &User::set_email)
        ;

    class_<GroupAdmin, bases<User> >("GroupAdmin", init<string, int>())
        .def("get_group_id", &GroupAdmin::get_group_id)
        ;
}

Voila. GroupAdmin auto-magically inherits the methods of User when exposed to Python. You can now write extensions to your e-commerce system, manipulating User and GroupAdmin objects in Python.

More Advanced Features

The Boost Python library has built-in support for exposing STL containers to Python, and for handling smart pointers.

Let's imagine that there is a static method inside the User class that queries user objects by the domain part of their email, like this:

class User {
    // ...
    static std::vector<boost::shared_ptr<User> >
        load_users(const std::string& domain);
};

As you can see, load_users returns a vector of smart pointers to User objects, rather than a vector of Users. This design minimizes the overhead of copying User objects around.

It takes three steps to expose this method to Python scripts:

First, register the smart pointer to User objects:

BOOST_PYTHON_MODULE(ecommerce)
{
    // ...
    register_ptr_to_python<shared_ptr<User> >();

Second, expose the vector:

    // you need to include:
    // #include <boost/python/suite/indexing/vector_indexing_suite.hpp>
    class_<vector<shared_ptr<User> > >("UserVec")
        .def(vector_indexing_suite<vector<shared_ptr<User> >, true>())
        ;

Finally, expose the method itself as a standalone function:

    def("load_users",
        &User::load_users,
        "load users by email domain" // documentation string
        );

You can see the techniques described above used to hide the complexity of a C++ debugger here.

C++ is not always the best tool for the job. Thinking hybrid may save you many hours of tedious coding.

Monday, September 17, 2007

Debugger Tip #1: Leaner Binaries

Suppose that you are building a C or C++ Linux program that is going to be installed on tens or hundreds of your production machines. Since this software is not shipped to customers, you may as well leave the debug information in, to help you later with troubleshooting.

For complex programs the size of the debug information (especially for C++ programs) may be considerable, and it may impact your deployment time.

With luck, you will not need the debug symbols that often. What if you could store the debug information on just one server, instead of N machines?

Turns out you can pull this trick easily with the following bash script (which you can include in your Makefile as a post-build step):

#! /bin/bash
DBGFILE="DebugInfoServerNetworkMountedPath/$1.dbg"
if objcopy --only-keep-debug "$1" "$DBGFILE"; then
    #strip -d "$1" # strip only the debug info, or strip everything:
    strip "$1"
    objcopy --add-gnu-debuglink="$DBGFILE" "$1"
fi

That's it.

"But how is the debugger going to know how to locate the debug information, since we stripped it out?" one may ask.

Simple. The objcopy --add-gnu-debuglink step creates a special section inside the ELF executable, which will point to the (network) location of the debug information. Both GDB and ZeroBUGS know how to handle it transparently.

Wednesday, September 12, 2007

Has RMS Gee-Pee-L-ed Himself?

It just struck me today: the Internet seems awash in Fake Secret Diaries and Blogs.

There is a Fake Steve Jobs, and look: a fake Billy G! We even have a (grin) fake Ballmer (throws fake chairs, and occasionally, some stool).

But guess what. As of today, there is no Fake Diary of Richard Stallman! Wow. Is this because the guy is irrelevant and/or not funny? Shame on you if that's what you think.

My guess is that there is no fake RMS out there because he has placed his rotund image under the GPL (v3).

Someone must have told him "Hey Dick, go GPL yourself".

Saturday, September 08, 2007

It's a Sin!

I am spending some of my spare time these days revisiting algorithms: the computer scientists' bread and butter. (Which we tend not to use explicitly in our daily work, since most useful algorithms are part of one standard library or another).

But I am secretly hoping to lure my son (when the time comes) into sciences by showing how rather abstract stuff such as "the shortest path in a weighted, directed graph" applies to computer games, artificial intelligence, and whatnot. So I sat down last night to play with the Dijkstra algorithm, and I wrote this short Python program.

It was most frustrating that I spent one hour working on the bulk of the program, and three hours solving the trigonometry problem of drawing those little arrow bitches at the end of the graph edges. The F-word came up so many times, I was glad my son was asleep.

But then again, being five months of age he does not know what that means anyway.

"Let Daddy show you how easy trigonometry is. It's all about f***ing sin and cos".

Tuesday, September 04, 2007

Studying D Programming Language...

I have recently decided to re-hash (sic) my algorithms, and to study the D Programming Language at the same time. Below you can see a cool heapsort implementation.

As I used the ZeroBUGS debugger to step through the code, I have noticed that the DMD compiler does not generate debugging info for the template parameters (a and b in swap, a in siftdown, and so on). I hope this glitch (and other related bugs) will be fixed before the next D Programming Language conference.


import std.stdio;

void swap(T)(inout T a, inout T b)
{
scope tmp = a;
a = b;
b = tmp;
}

void siftdown(T)(inout T a, int begin, uint end)
{
uint root = begin;
while (root * 2 + 1 <= end)
{
scope child = root * 2 + 1;
if (child < end && a[child + 1] > a[child])
{
++child;
}
if (a[root] < a[child])
{
swap(a[root], a[child]);
root = child;
}
else
{
break;
}
}
}

void heapify(T)(inout T a, uint length)
{
int start = length / 2 + 1;

while (start >= 0)
{
siftdown(a, start, length - 1);
--start;
}
}

void heapsort(T)(inout T a)
{
uint count = a.length;
heapify(a, count);

--count;
while (count > 0)
{
swap(a[0], a[count]);
--count;
siftdown(a, 0, count);
}
}

void main()
{
long[] a = [ 5, 4, 1, 2, 100, 10, 42, 5, 10 ];
writefln(a);

writefln("----- heapsort -----");
heapsort(a);
writefln(a);
}

Friday, August 31, 2007

Fundamentally Broken

This week I had an interesting (and heated, of course) conversation on the Northwest C++ Group mailing list, following an Agile/Design Patterns webinar announcement by Alan Shalloway.

I responded to the announcement in a sarcastic tone that some people found funny. Some consultants felt offended by the "unfriendly remarks". Boo hoo!

Several people gave examples of how Agile had been misused in real-life projects.

Read the above line again, because that's where the crux of the problem with Agile lies: it is more often misunderstood and misused than not (hm, where did I hear that before? "Communism is not bad, it was just incorrectly applied by the Soviet Bloc").

Defending Agile on such grounds is equivalent to saying that there's nothing wrong with using raw pointers in C and C++, people just have to learn to do it right; hire expensive consultants and trainers to tell them how to avoid dangling pointers and such. What about using smart pointers or a garbage-collected language, then?

Has it not been said of yore that programming languages and APIs should be designed so that they encourage the correct use rather than the other way around?

If Agile encourages incorrect usage, then it is not a well-designed methodology. It needs some serious patching.

I showed the above paragraphs to a friend and he quickly rebutted me: "You criticize Agile yet you do not even know what Agile is. Please define Agile for me".

Awesome point! It greatly explains the misuse of the methodology: people hear the word and go ahead and apply whatever they think Agile is. Like some manager dude who does not allow his developers to fix memory leaks, because such activity is not covered by a "user story" on his cork board.

XP, Agile, Test Driven Development have become anti-brands. These buzzwords have been so badly abused that they are now diluted.

Even worse, Agile is now associated (in many people's minds anyway) with weaselly consultants and incompetent managers.

Wednesday, August 29, 2007

Debugging is A Crappy Job

My best metaphor to date for the relationship between programmers and debugging tools is inspired by a recent trip to McLendon Hardware (a local, smaller and more characterful version of the ubiquitous Home Depot).

No matter that my only purpose in the store is to buy bulbs, conduit, paint, or whatever supplies are needed for my weekend home maintenance project; I always, always, always end up wandering into the power tools section. Sound familiar? If you are a normal male, it should. Pickup trucks. V8 Engines. The Niagara processor. Power tools. Got to love them.

For those of us with a geeky side, the enumeration may also include Power Books, cool programming languages, and Turbo Compilers (some girly men may also like Emacs, floppy discs, and Windows Vista, but let that not disturb you for now).

The point is, nobody in their right mind ever goes to the local hardware store to check the new selection of plungers. Because this is the very definition of a debugger: a tool to get the nasty job done (then swiftly hidden under the sink so visitors don't see it).

So what if one day your hardware store starts selling power plungers? Maybe even reversible ones? You may end up spending more time debugging.

Release early, release often!

Sunday, August 26, 2007

D Programming Language Conference: A Blast

I have not had much time for blogging lately. Terribly busy with getting the ZeroBUGS code up to snuff, fixing bugs, preparing my speech for the first D Developers' Conference, and Real Life (tm) in general.

The Conference (sponsored by Amazon.com, and organized by amazonian extraordinaire Brad Roberts) was a great success. See here and here.

The third day of the conference was a hands-on session of language design. Most of the stuff that Andrei Alexandrescu and Walter Bright drew on the whiteboard went way over my head. Thomas Kuhne sat next to me and quickly hacked his demangler for the D Language, to better integrate with my debugger.

Cool ideas for the future of the language and its support libraries filled the air, so I would not be surprised if by next year's conference the D Programming Language makes it into mainstream.

Wednesday, June 13, 2007

I Will Break Your Unit Testes (tm)

How much does it cost to develop a debugger? I ran SLOCCount on the source tree of my ZeroBUGS debugger for Linux, which I have been developing single-handedly since late fall of 2003.

I removed some of the third-party components, such as libdwarf and Google's sparsehash, but I kept all experimental and support code (you can subtract the ANSI C lines though; they represent some old tests).


SLOC Directory SLOC-by-Language (Sorted)
38470 plugin cpp=35578,asm=1485,python=1258,sh=149
26576 engine cpp=21091,sh=5409,ansic=76
12115 interp cpp=11475,yacc=450,lex=190
9300 zdk cpp=9300
9101 unmangle cpp=5402,ansic=3692,sh=7
8147 dwarfz cpp=8147
5592 typez cpp=5592
4648 stabz cpp=4648
4353 dharma cpp=4353
4289 zero_python cpp=4289
2967 elfz cpp=2967
2813 misc cpp=2093,python=629,ansic=91
2357 symbolz cpp=2357
2151 minielf cpp=2151
949 readline cpp=949
798 top_dir python=477,sh=237,ansic=84
376 tools python=296,sh=80
362 bench cpp=362
176 make sh=176
61 zrt cpp=61
0 docs (none)
0 help (none)


Totals grouped by language (dominant language first):
cpp: 120815 (89.10%)
sh: 6058 (4.47%)
ansic: 3943 (2.91%)
python: 2660 (1.96%)
asm: 1485 (1.10%)
yacc: 450 (0.33%)
lex: 190 (0.14%)


Total Physical Source Lines of Code (SLOC) = 135,601
Development Effort Estimate, Person-Years (Person-Months) = 34.67 (415.99)
(Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months) = 2.06 (24.73)
(Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule) = 16.82
Total Estimated Cost to Develop = $ 4,682,930
(average salary = $56,286/year, overhead = 2.40).
SLOCCount, Copyright (C) 2001-2004 David A. Wheeler
SLOCCount is Open Source Software/Free Software, licensed under the GNU GPL.
SLOCCount comes with ABSOLUTELY NO WARRANTY, and you are welcome to
redistribute it under certain conditions as specified by the GNU GPL license;
see the documentation for details.

I don't know what else to read into this, other than I am a badass.

You think that cowboy programming is bad?

I believe that agile is well suited for developers with small unit testes.

Friday, June 08, 2007

Worse is worse

Last week I presented my work on the ZeroBUGS debugger at Amazon.com. I started my talk by saying that the debugging support in Linux is in line with the worse is better principle: the building blocks are rudimentary for the sake of keeping the implementation simple.

For example, there is no native BreakpointEvent notification. Rather, the debugger implementer needs to keep track of all breakpoints; if a SIGTRAP occurs at the address where an active breakpoint exists, then it is most likely because said breakpoint was hit.

Another solution is to use PTRACE_GETSIGINFO (available since kernel 2.3.99) and inspect the siginfo_t structure:

siginfo_t {
    int      si_signo;  /* Signal number */
    int      si_errno;  /* An errno value */
    int      si_code;   /* Signal code */
    pid_t    si_pid;    /* Sending process ID */
    uid_t    si_uid;    /* Real user ID of sending process */
    int      si_status; /* Exit value or signal */
    clock_t  si_utime;  /* User time consumed */
    clock_t  si_stime;  /* System time consumed */
    sigval_t si_value;  /* Signal value */
    int      si_int;    /* POSIX.1b signal */
    void    *si_ptr;    /* POSIX.1b signal */
    void    *si_addr;   /* Memory location which caused fault */
    int      si_band;   /* Band event */
    int      si_fd;     /* File descriptor */
}


For a SIGTRAP signal, the si_code field may be TRAP_BRKPT, in which case we know that the program hit a breakpoint.

At any rate, it is up to the debugger application to create higher-level abstractions, based on signals and ptrace notifications.

Linux is neither the easiest nor the most pleasant system to program on, but the implementation is so simple a child could understand it. Ahem.


Thursday, May 24, 2007

Oops. I Did It Again

Yesterday I spoke at the Northwest C++ Users Group, presenting my work on the ZeroBUGS Debugger for Linux. This in itself isn't that remarkable (I basically reused the presentation I did at Google's Kirkland offices back in March and spiced it up with some humor and C++ code).

I projected my slides off a Powerbook. Not so remarkable either, even considering that the topic was a Linux-based project.

The Northwest C++ Users Group meetings are currently hosted by Microsoft. So I did a Linux gig in the Lion's lair, and I lived to talk about it. Pleasantly remarkable.

Sunday, April 15, 2007

Mini Me

I turned one year older since my last blog entry. I usually get depressed at this time of the year (a trend that began the day I turned thirty).

But this time around it is different: my birthday came with a fresh feeling of renewal, brought by my newborn son, Nick.

There's always hope when version 2.0 comes out.

Tuesday, April 03, 2007

Limericks

There was a lady from Bombay
Her breasts looked as if made of clay.
She was short and hairy
And her brow was scary.
But otherwise she was okay.


There was a beast on Noah's Ark
Whose testes glowed red in the dark.
The species is now long extinct:
Such trait (although distinct)
Makes for a great hunter's mark.


There was a guy from Romania,
Who had a limerick mania...

Darn!

Thursday, March 22, 2007

How Do I Bake a Cake Shaped Like the Internet?


When I first saw this xkcd cartoon I did not get it at all. I fancy myself an independent thinker more than a corporate pawn.

But today I gave a tech talk on debugger architecture at Google's Kirkland office. The experience was... how should I describe it? I arrived there, and had my ten-year-old Camry valet parked. Then my host took me to lunch (at the Google cafeteria). I avoided filling my plate because I imagined that, between him and me, someone was going to pay for it. Darn, Jeez Gosh! Nobody warned me that they do get free lunches over there! And pretty yummy ones, too.

And people arrived on time, were very polite, the logistics worked flawlessly, and nobody made me feel like I was surrounded by Super Humans. I even had the fine pleasure of seeing familiar faces from my old Amazon days!

XKCD dude, I see your point. Now: how do you bake that Internet-shaped cake?

Sunday, March 11, 2007

Keep it Cool

Last week I fixed some breakpoint management logic in my debugger for Linux, and also reduced the memory footprint for debug symbols and stuff.

Walter Bright is fixing his D compiler back end for Linux, so now the DWARF source line info appears to be almost correct. Once complete, these fixes will mark a huge milestone for debugging D on Linux. And then I will have no excuses left for not adding more D support in ZeroBUGS... oh wait: Our first baby is due on April 5th.

At any rate, I believe future historians will divide the dawn of the computer era into BC ("before C"), BC++, and AD ("after D").

But speaking of babies. My wife and I are taking all kinds of classes these days, related to car seat safety, childbirth, feeding, etc. Aside from learning more about female anatomy than I ever cared to know, I also decided that:

  • When I am done with computers I will seek a career as a lactation consultant, and

  • those drawings of aliens with huge heads on very thin, wiry bodies are bull, unless alien women have no pelvic bone

  • I have also concluded that all that is holding back the human race is that darned pelvic bone. That's why we can't get bigger brains, and get smarter about war and peace, and the meaning of life.

A friend of mine argues that size is irrelevant, since the existing neurons could become more efficient by developing more connections. I believe that there's a problem with developing more synapses: there would be lots of interference in our heads, resulting in loss of focus and did you see the latest xkcd comics? Oops, my neuron misfired.

So we need bigger heads, but the hole is not getting any bigger. There's hope, though: soon, with the help of brain-computer interface technologies, we will overcome our limitations with more RAM from Radio Shack.

And that will change how we feel. The purpose of emotions, from an evolutionary point of view, is the same as for database indexing. Can you imagine our ancestors out there in Africa, with a very slow and small memory, performing a linear search: I wonder... I have seen this big cat 'round here before, if only I could 'member what did I do the last time? ... oh wait! of course! I ran for my life!

So we have developed emotions, which act as a big red tab in the thick book of memories: adrenaline kicks in, and the monkey is thumbing its nose at the lion (from a safe distance).

But with a brain connected to fast memory, and maybe with some additional CPU power and decision-making software thrown in the mix, I could weasel my way out of almost any difficult situation, while keeping my cool.

Tuesday, February 20, 2007

Debugger Breakpoints

An overview of breakpoints, as implemented in the ZeroBUGS debugger for Linux.

Breakpoints are central to the ZeroBUGS debugger engine layer. They can be set by the user, or by the debugger for internal purposes (such as detecting the creation of new threads).

Physical vs. Logical

Breakpoints can be classified in several ways. One categorization distinguishes between "logical breakpoints" and "physical breakpoints". Not all the breakpoints that you have inserted in the program are physically there, but the debugger maintains the illusion that they are: reality is the realm of physical breakpoints, and logic is derived from perception. If you perceive a breakpoint as inserted in the debugged process, it is logically there, even though, physically, the debuggee has not been touched.

Let's consider a couple of examples, to help bring the discussion out of the philosophical realm:

1. The user inserts a breakpoint at the beginning of a function that is not loaded into memory yet, because it lives in a shared library that has not been mapped into the debuggee's memory space (just yet). The debugger shields the user from such details, and may say: "OK, I don't know the address in memory of function `abc'; but I know that it is implemented inside the dynamic library libabc.so; I will keep this in mind, so that if I later detect that libabc.so is loaded, I will insert the breakpoint."

2. Another case may be that the debugger has inserted a breakpoint at the beginning of the pthread_create() function, to internally keep track of newly created threads. The user wants to insert a breakpoint at the same address, and does not need to know that a physical breakpoint is already there. The debugger associates two logical actions with the same physical breakpoint: one that internally updates the list of debugged threads, and another that initiates an interaction with the user.

Logical breakpoints are implemented as actions associated with physical breakpoints. Each physical breakpoint maintains a list of actions. An action may be temporary (or once-only), which means it gets discarded after being executed once. Once-only actions are similar to UNIX System V signal handlers; non-temporary actions are executed each time the physical breakpoint is hit -- similar to BSD signal handlers.

Algorithm for executing breakpoint actions




// Execute actions on given thread
void BreakPoint::execute_actions(Thread* thread)
{
    // The list of actions associated with this breakpoint may
    // change during the execution of actions, and thus the
    // iterators may be invalidated: make a copy of the actions
    // and cycle through the copy, to be safe.
    ActionList tmp(this->actions_);

    size_t d = 0; // index of *i within the master list
    for (ActionList::iterator i = tmp.begin(); i != tmp.end(); )
    {
        if (is_disabled(*i))
        {
            ++i, ++d;
            continue;
        }
        // a temporary (once-only) action returns false
        if ((*i)->execute(thread, this))
        {
            ++i, ++d;
        }
        else
        {
            // remove it from the master list
            ActionList::iterator j = actions_.begin();
            advance(j, d);
            actions_.erase(j);
            // remove it from tmp as well, so that destruction is
            // not delayed; do not advance d, because the next
            // action now lives at index d
            i = tmp.erase(i);
        }
    }
}



Software vs. Hardware

The Intel 386/486/586/686 family of chips offers support for debugging, including breakpoints. The CPU has 6 usable debug registers: 4 for addresses, one for control, and one for status. Each of the first 4 can hold a memory address that triggers a debug exception when accessed.

In Intel's lingo, a "fault" is a hardware notification, or event, that happens when the CPU is about to access a memory address -- that is, before the access happens. A "trap" is a similar notification, only that it happens after the access has occurred.

Remember: hardware execution breakpoints are faults; data breakpoints and software breakpoints are traps.

The control register holds flags that specify the type of access (read, read-write, execute) and some other bits; the status register helps determine which breakpoint was hit when handling the resulting exception.

    Thanks to Operating System magic, the hardware breakpoints are multiplexed: the debug registers are saved and restored per thread on context switches, so we can have as many as N times 4 hardware breakpoints per program, where N is the number of threads in the program.

    Hardware breakpoints have the advantage of being non-intrusive -- the debugged program is not modified. Another advantage is that they can be set to monitor data as well as code. A debugger may use a hardware breakpoint to detect that a memory location is being accessed.

    Software breakpoints are implemented as a special opcode (INT 3, encoded as 0xCC) that is inserted in the code at the location to be monitored.

    Nicely enough, Intel has a dedicated opcode for breakpoints. Other CPUs (PowerPC, for example) do not have a special opcode; on those platforms software breakpoints are implemented by inserting an invalid instruction at the desired location.

    Software breakpoints have two main drawbacks: they are slow and intrusive. The debugged program has to be modified, and the debugger needs to remember the original opcode at the modified location, so that the debuggee's code can be restored when the breakpoint is removed. When a software breakpoint is hit, the instruction pointer needs to be decremented and the original opcode restored; the debugged program is then single-stepped out of the breakpoint, and after the breakpoint is handled, the breakpoint opcode is re-inserted.
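    The byte-patching part of this dance can be sketched as a pair of pure functions, assuming a little-endian x86 target where the debugger reads and writes whole words with PTRACE_PEEKTEXT/PTRACE_POKETEXT (the function names are illustrative):

```cpp
#include <cstdint>

// Plant an INT 3 (0xCC) in the first byte of a machine word that was
// read from the location to be monitored; the original opcode byte is
// saved so the code can be restored later.
uint64_t plant_breakpoint(uint64_t word, uint8_t* saved_opcode)
{
    *saved_opcode = uint8_t(word & 0xff);   // remember original byte
    return (word & ~uint64_t(0xff)) | 0xcc; // splice in INT 3
}

// The mirror image: put the saved opcode back when the breakpoint is
// removed, or temporarily while stepping over it.
uint64_t restore_opcode(uint64_t word, uint8_t saved_opcode)
{
    return (word & ~uint64_t(0xff)) | saved_opcode;
}
```

    The debugger would write the patched word back with PTRACE_POKETEXT; each such round trip through ptrace costs a user-to-kernel context switch, which is where the slowness comes from.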

    On UNIX derivatives (such as Linux), a debugger does not manipulate the debugged program directly; rather, it uses the operating system as a middle man (via the ptrace or /proc API). This implies that every time the debugger reads or writes into the debuggee's memory space, a context switch from user mode to kernel mode happens.

    Another disadvantage of soft breakpoints is that they can only monitor code. Software breakpoints cannot be used for watching data accesses.

    What makes software breakpoints indispensable is that there's no limit to how many can be inserted. Hardware breakpoints are a very scarce resource (you can run out of the 4 of them quite fast); software breakpoints are intrusive and slower, but can be used abundantly.

    The design decision in my debugger is to use software breakpoints for user-specified breakpoints, and prefer hardware breakpoints for internal purposes. Watchpoints (breakpoints that monitor data access) are implemented as hardware breakpoints.

    An example of a breakpoint maintained internally by the debugger occurs when stepping over function calls. A breakpoint is inserted at the location where the function returns, and control is given to the debuggee to run at full speed until the breakpoint is hit. The breakpoint is removed once it is hit, and the hardware resource can then be reused.

    As a rule of thumb, the debugger employs the hardware support in cases where breakpoints are expected to be released after a relatively short time. If no hardware registers are available, the debugger falls back to using a software breakpoint.

    Global vs. Per-Thread


    Another way to categorize breakpoints is by what threads they affect in a multi-threaded program. A global breakpoint causes the program to stop regardless of which thread has hit it. A per-thread breakpoint stops the program only when reached by a given thread. Because all threads share the same code segment, a software breakpoint is inherently global: any thread that reaches the break opcode will stop.

    The operating system creates the illusion of each thread running on its own CPU, therefore a hardware breakpoint may be private to a given thread.

    A bit in the debug control register of the 386 chip can be used to control the global/per-task behavior of hardware breakpoints.

    A thread ID can be added to the data structure or class that represents a software breakpoint. When the breakpoint is hit, the thread ID in the structure may be compared against the ID of the current thread. The behavior of a per-thread breakpoint can be emulated this way.
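    A minimal sketch of such an emulated per-thread breakpoint (the structure and names are illustrative, not ZeroBUGS' actual classes):

```cpp
#include <sys/types.h>

// An emulated per-thread breakpoint: the underlying break opcode is
// global (any thread that executes it stops in the debugger), but the
// hit is reported to the user only for the owning thread; for all
// other threads the debugger silently resumes the program.
struct PerThreadBreakPoint
{
    pid_t owner; // ID of the thread this breakpoint belongs to

    bool should_stop(pid_t current_thread) const
    {
        return current_thread == owner;
    }
};
```

    The cost of the emulation is that every thread still takes the trap and pays for a round trip into the debugger, even when the hit is not reported.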

    The debugger uses emulated per-thread breakpoints when it needs a hardware breakpoint and none of the 4 debug registers is available.

    Consider the case where the debugger uses a breakpoint for quickly stepping over function calls. The debugged program must stop only if the breakpoint at the function's return address is hit by the same thread that was current when the user gave the "step over" command.

    Sunday, February 11, 2007

    pszGulasz: PULONG_LONG, PULARGE_INTEGER

    I am a big fan of Mac computers and even use the word Mac as a synonym for coolness (or lack thereof), as in "There is no mac in emacs".

    The other day I made a typo while attempting to e-mail a link to this blog to someone: oh my! I used a slash instead of a dot: the-free-meme/blogspot.com

    Firefox somehow read through the clutter in my head and took me to the right place (in spite of the error). By contrast, Internet Explorer just sat there impotently, summoning to mind this quote from The Hitchhiker's Guide to the Galaxy: [...] the incredibly stupid but equally dangerous Ravenous Bugblatter Beast of Traal. This incredibly stupid monster thinks that if you can't see it, it can't see you!

    I dream of a day when humans will be able to talk to computers in fuzzy terms, and the machines pick up the correct message. Think about it: why should we adjust to the demands of the machine, and appease it by addressing it in very precise, unequivocal ways? The human brain is able to understand the meaning of an article in spite of typos; more so, it can detect subtle changes of context, satirical undertones, and so on. Machines? Not so much.

    As the song goes, you may think I'm a dreamer, but I am not the only one. I read that the genius who came up with Hungarian notation seems to be harboring similar thoughts. Why do I think this is funny?

    Regardless of what camp you are in, both strongly-typed languages and dynamically-typed languages strive towards a similar goal: the programmer should not be burdened with the intellectual overhead of remembering the types of the variables. In strongly-typed languages the compiler enforces the correct types at... compile-time, and in most dynamically-typed languages the runtime takes care of the needed conversions under the hood.

    By contrast, HN is designed to help the poor programmer remember the types of the data -- something you should not have to care much about in the first place. Instead of going after the root of the problem (i.e. some fuzziness in the C language with respect to types), Microsoft went for the quick-and-dirty solution of making developers adjust to the idiosyncrasies of their machines, and scarred them for life: to this day there are some darned souls in Redmond who are still using HN.

    And the hidden joke is that HN and Romanian Language don't mix well.