Wednesday, June 13, 2007

I Will Break Your Unit Testes (tm)

How much does it cost to develop a debugger? I ran SLOCCount on the source tree of my ZeroBUGS debugger for Linux, which I have been developing single-handedly since late Fall 2003.

I removed some of the third-party components, such as libdwarf and Google's sparsehash, but kept all experimental and support code (you can subtract the ANSI C lines, though; they represent some old tests).

 SLOC  Directory    SLOC-by-Language (Sorted)
38470  plugin       cpp=35578,asm=1485,python=1258,sh=149
26576  engine       cpp=21091,sh=5409,ansic=76
12115  interp       cpp=11475,yacc=450,lex=190
 9300  zdk          cpp=9300
 9101  unmangle     cpp=5402,ansic=3692,sh=7
 8147  dwarfz       cpp=8147
 5592  typez        cpp=5592
 4648  stabz        cpp=4648
 4353  dharma       cpp=4353
 4289  zero_python  cpp=4289
 2967  elfz         cpp=2967
 2813  misc         cpp=2093,python=629,ansic=91
 2357  symbolz      cpp=2357
 2151  minielf      cpp=2151
  949  readline     cpp=949
  798  top_dir      python=477,sh=237,ansic=84
  376  tools        python=296,sh=80
  362  bench        cpp=362
  176  make         sh=176
   61  zrt          cpp=61
    0  docs         (none)
    0  help         (none)

Totals grouped by language (dominant language first):
cpp: 120815 (89.10%)
sh: 6058 (4.47%)
ansic: 3943 (2.91%)
python: 2660 (1.96%)
asm: 1485 (1.10%)
yacc: 450 (0.33%)
lex: 190 (0.14%)

Total Physical Source Lines of Code (SLOC) = 135,601
Development Effort Estimate, Person-Years (Person-Months) = 34.67 (415.99)
(Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months) = 2.06 (24.73)
(Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule) = 16.82
Total Estimated Cost to Develop = $ 4,682,930
(average salary = $56,286/year, overhead = 2.40).
SLOCCount, Copyright (C) 2001-2004 David A. Wheeler
SLOCCount is Open Source Software/Free Software, licensed under the GNU GPL.
SLOCCount comes with ABSOLUTELY NO WARRANTY, and you are welcome to
redistribute it under certain conditions as specified by the GNU GPL license;
see the documentation for details.

I don't know what else to read into this, other than I am a badass.

You think that cowboy programming is bad?

I believe that agile is well suited for developers with small unit testes.

Friday, June 08, 2007

Worse is worse

Last week I presented my work on the ZeroBUGS debugger. I started my talk by saying that the debugging support in Linux is in line with the worse-is-better principle: the building blocks are rudimentary for the sake of keeping the implementation simple.

For example, there is no native BreakpointEvent notification. Rather, the debugger implementer needs to keep track of all breakpoints; if a SIGTRAP occurs at the address of an active breakpoint, then it is most likely because said breakpoint was hit.

Another solution is to use PTRACE_GETSIGINFO (available since kernel 2.3.99) and inspect the siginfo_t structure:

siginfo_t {
    int      si_signo;   /* Signal number */
    int      si_errno;   /* An errno value */
    int      si_code;    /* Signal code */
    pid_t    si_pid;     /* Sending process ID */
    uid_t    si_uid;     /* Real user ID of sending process */
    int      si_status;  /* Exit value or signal */
    clock_t  si_utime;   /* User time consumed */
    clock_t  si_stime;   /* System time consumed */
    sigval_t si_value;   /* Signal value */
    int      si_int;     /* POSIX.1b signal */
    void    *si_ptr;     /* POSIX.1b signal */
    void    *si_addr;    /* Memory location which caused fault */
    int      si_band;    /* Band event */
    int      si_fd;      /* File descriptor */
};

For a SIGTRAP signal, the si_code field may be TRAP_BRKPT, in which case we know that the program hit a breakpoint.

At any rate, it is up to the debugger application to create higher-level abstractions, based on signals and ptrace notifications.

Linux is neither the easiest nor the most pleasant system to program on, but the implementation is so simple a child could understand it. Ahem.