Sean’s Obsessions

Sean Walberg’s blog

Linux Unwired

Linux Unwired
Weeks, Dumbill, Jepson
O’Reilly, 2004
$24.95 USD / $36.95 CDN

While Windows users have no problem using all the wireless gear out there, not everything is supported in Linux. Add to this the command-line tools for dealing with wireless, and you have a recipe for confusion. “Linux Unwired” demystifies the Linux wireless system and provides guidance for purchasers to make sure they buy the best-supported equipment possible.

It should be noted early on that 802.11b is not the only thing covered in the book. The other variants (a and g) are there, in addition to IrDA (infrared), Bluetooth, and data over cellular. On the latter point, much of the content deals with US-based providers, but it still provides a good grounding in the subject for those of us outside the country.

802.11b is the main focus, taking up around half the book. It starts with a discussion of the chipsets behind the cards and how they map to Linux support. Here is where the reader gets advice on which card to buy, or at least what to look out for when buying a card. One thing I found interesting was the WLAN driver loader, an inexpensive product that lets Linux load binary WLAN modules. Some cards are not supported well enough in Linux to do things like WEP security, which is where this product comes in. Again, the book leads the reader through the situations where this is necessary and where it isn’t.

Other authentication methods are covered besides WEP, such as 802.1x and 802.11i, the successor to WEP. This coverage is also a good example of the broad scope of the book and its focus on interoperability with existing systems, rather than assuming the reader is building everything from the ground up.

Access points take up two chapters, the first looking at how to use them with Linux. Before reading this book, I was under the impression that access points were all configured through a web browser or telnet, but apparently some need Windows software. In some of those cases people have developed software to emulate this functionality, and pointers are provided. The second of the two chapters is on building your own access point, which is a fascinating look at using micro-Linux distributions and mini-x86 hardware to build access points with rich functionality (for those less adventurous or well funded, the same can be done with any old hardware). There is also a look at soldering a serial port onto a popular Linksys router to allow command-line access to the underlying Linux OS.

Bluetooth and IrDA are less common uses of wireless that let computers speak to phones and PDAs. I was completely unaware of the level of support that existed in Linux until I read these chapters. While the Bluetooth coverage was comprehensive, it went to a deeper level of detail than I thought necessary, such as a detailed breakdown of the Bluetooth protocol stack. By the end, though, it is possible to use Bluetooth and IrDA to pull data from devices and to connect to their resources (i.e., modems and databases) over the air.

I should also mention the chapter on GPS. It is fairly thin on its own, but as an addition to the 802.11b section (i.e., wardriving), it does well.

A couple of things stood out about this book. The first is that the target audience isn’t necessarily Linux geeks, but Linux users. You don’t have to be a Linux guru to get this stuff running; the level of detail is sufficient to get anyone who isn’t scared of a command line up and running. The second is that the authors spent a lot of time testing various hardware. Many wireless cards and Linux distributions were tested in the early chapters. Where several options for software existed, they were all looked at (such as the source vs. binary drivers mentioned above). This all adds to the book’s value not only as a how-to manual for wireless, but also as a guide for navigating product and software selection.

“Linux Unwired” is perfect for anyone who wants to use wireless on Linux, be it connecting to an 802.11b network, or trying to use a cell phone to send a fax. Those looking to purchase equipment will want to go through the book first to make use of the product advice and compatibility testing.

More information is available from the O’Reilly website at http://www.oreilly.com/catalog/lnxunwired/index.html, which includes a sample of chapter 3, “Getting on the network”.

Tweaking Knobs, Verifying Results

A common task in systems administration is “fiddling with knobs”: some value has to be changed, and the change affects the results. One approach is to try values at random; the other is to be systematic.

I use the excellent picprog software to burn PIC microcontrollers on occasion. One day, after not using it for a while, I was dismayed to find that it didn’t work with one piece of hardware. After lots of testing, I tracked the problem to picprog. The author suggested tweaking some delays in the code.

I used the value the author suggested, which caused it to work once in a while, but not reliably. What I wanted to do was find a value that worked consistently, or at least 90% of the time.

There were four instances of the following code:

  delay (1000);

I wanted to try different values and see which ones worked.

There are two approaches to testing the values. One is to find a lower limit and an upper limit and do a binary search between them; the other is to step through a range of values linearly. The lower limit was easy: I knew 1000 didn’t work. Since I didn’t know the upper limit, I elected to start at 1000 and step up by 500.

So, I changed all occurrences of the static value to

    delay (TEST_DELAY);

and added

#define TEST_DELAY 1000

at the top.

The next step was to write a script that changed this value, recompiled, and ran 50 tests.

#!/bin/bash
val=$1
good=0
bad=0
# Change the define
perl -p -i -e "s/^#define TEST_DELAY.*/#define TEST_DELAY $val/" picport.cc
# Rebuild
make > /dev/null 2>&1

# Run the test 50 times
for i in `seq 1 50`; do
./picprog --erase --burn -i rs232-add.hex > /dev/null 2>&1
if [ $? -eq 0 ]; then # return code of 0 is good
let good=good+1;
else
let bad=bad+1;
fi
done
# store value:successes:failures in a flat file
echo $val:$good:$bad >> resultfile

The code should be self-explanatory; I’m just running the same thing 50 times after doing an in-place substitution in Perl (perl -p -i -e 's/foo/bar/' file).

From the command line, I can then run the test over different values:

for i in `seq 1000 500 10000`; do ./runtest $i; done

The seq command is a handy one to have in your toolkit.

seq start last

will count from start to last. Wrap that in a for loop, and you can run the same test a certain number of times (like I do above).

seq start increment last

will count from start to last in increments of increment.
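
For example, here are the two forms, with the output shown in the comments:

seq 3 6             # prints 3 4 5 6, one number per line
seq 1000 500 2500   # prints 1000 1500 2000 2500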

So, in the end, I get:

1000:0:50
1500:0:50
2000:0:50
2500:22:28
3000:16:34
3500:29:21
4000:32:18
4500:41:9
5000:37:13
5500:48:2
6000:46:4
6500:49:1
7000:49:1
7500:46:4
8000:46:4
8500:50:0
9000:50:0
9500:50:0
10000:50:0

Interestingly enough, the results aren’t completely linear; the success rate dips in a few places on the way up. If I set it to 6500, I should be good, though.
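
Since the results live in a flat file, finding the smallest delay that meets the 90% target can be scripted too. A minimal sketch (assuming resultfile holds the value:good:bad lines above, in increasing order):

awk -F: '$2/($2+$3) >= 0.9 {print $1; exit}' resultfile

For the results above, it prints 5500, the first value with at least 45 good runs out of 50.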

Most importantly, I have a repeatable set of tests. If a newer version of picprog comes out, I can retest the delays to see if the value has to change.

Yet Another Reason WWW::Mechanize Rocks

I found this article about a Perl script that downloads TrueType fonts from http://grsites.com/fonts/. Looking at the script, it’s about 100 lines of fairly dense, comment-free code that’s tightly tied to the page and URL structure it’s scraping. I rewrote it using WWW::Mechanize in 33 lines including comments, and it runs under strict. It took about 15 minutes, and I never had to view the source of the page I was scraping.

Code below.

#!/usr/bin/perl
use strict;
use WWW::Mechanize;
# Hit the first page
my $mech = WWW::Mechanize->new();
$mech->agent_alias("Linux Mozilla");
$mech->get("http://grsites.com/fonts/");
die $mech->response->status_line unless $mech->success;
# Pull out all the page links
my @links = $mech->find_all_links(text_regex => qr/^Page/);
foreach my $page (@links) {
        print "Getting " . $page->url_abs . "\n";
        $mech->get($page->url_abs);
        next unless $mech->success;
        my @fonts = $mech->find_all_links(url_regex => qr/fontview.cgi/);
        foreach my $font (@fonts) {
                print "Getting " . $font->url_abs ."\n";
                $mech->get($font->url_abs);
                unless ($mech->success) {
                        print $mech->response->status_line;
                        next;
                }
                my $fname = $font->url_abs;
                # pull out the font name from the fn= parameter in the URL
                $fname =~ s/.*fn=([^&]*).*$/$1/;
                my $fontlink = $mech->find_link(url_regex => qr/fontdownload/);
                $mech->get($fontlink->url_abs, ":content_file"=>"$fname.ttf");
        }
}

Speeding Up Movable Type Without Mod_perl

Fedora uses Apache 2 and mod_perl2, which is completely incompatible with Movable Type. Even under Apache::compat, there are a lot of problems, which I got tired of debugging.

Since I’m using it for SmokePing, I already had SpeedyCGI installed, and figured I’d try it with Movable Type. The only thing to change is the shebang line in mt.cgi (and the others, I suppose), from

#!/usr/bin/perl -w

to

#!/usr/bin/speedy -w

Instant performance increase. I didn’t take any measurements, but it’s noticeable.

BTW, SpeedyCGI can be installed from CPAN:

# perl -MCPAN -e 'install CGI::SpeedyCGI'

I’ve done this to mt-comments.cgi and mt-tb.cgi as well, and all tests appear to work. I’ve also changed the shebang line to

#!/usr/bin/speedy -w -- -M5 -g

to limit speedy to using 5 worker tasks, and to have all the CGIs share one set of worker tasks. My machine doesn’t have a lot of memory, and traffic is low, so YMMV.
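
If you have several CGIs to convert, the in-place Perl substitution trick from earlier works here too. A minimal sketch (assuming the CGIs are in the current directory and all start with the stock perl shebang):

perl -p -i -e 's{^#!/usr/bin/perl}{#!/usr/bin/speedy}' mt*.cgi

Spot-check the files afterward; anything with a different shebang line will be left untouched.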

In Toronto Next Week..

I’m in downtown Toronto next week for some training. I’m not sure of the schedule yet, but let me know if you want to hook up for a beer.

Either You’ve Got a SIP Address or You’re Not Worth Talking To

Well, not quite, but we’ve been there with both Fax numbers and email addresses.

Bill Reid gave another presentation at the EPIC technology day today on VoIP and the Asterisk PBX. Between the presentation and an earlier discussion with him, he brought up the notion of Fax as a change in how we communicate, and why it was successful…

A big reason fax caught on quickly was that it made use of an infrastructure that was already in place. Phone lines were easy to get, and all you needed was a fax machine. The advantages faxing gave you (rapid transfer of any paper document across the world) made it spread quickly.

Eventually the Internet came around, and with it, email. However, with few people on the Internet, and with connections as slow as they were (i.e., modems), it didn’t replace the fax machine. Fast forward a couple of years: everyone has access to the Internet, and it’s rare to ask someone for their fax number. Whenever a vendor calls me and wants to send me information, it’s always “What’s your email address?”. Today, almost everyone has email, which is what led Bill to say (somewhat tongue in cheek), “If you don’t have email, you’re not worth talking to.”

This led to a discussion of SIP addresses and legacy phone numbers. SIP addresses look more like email addresses than phone numbers; for example, I’m sip:269423@fwd.pulver.com. When the idea becomes popular enough, I’ll be able to use my ertw.com email address to point to my SIP server. Given how easily people have adapted to email addresses, how much longer will we need phone numbers? It’s far easier to put the address of a SIP server in DNS, and to transform user@host into an IP endpoint, than it is to connect to the global SS7 network and get phone numbers to the right place.
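
SIP already has a DNS mechanism for exactly this: a client looks up an SRV record to find the server responsible for a domain. A hypothetical sketch (the ertw.com records shown here are made up):

dig +short SRV _sip._udp.ertw.com
# hypothetical answer: 0 5 5060 sip.ertw.com.
# i.e., send SIP traffic for user@ertw.com to sip.ertw.com, port 5060

Compare that with what it takes to route a new block of numbers through the SS7 network.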

So, the prediction then became “Either you’ve got a SIP address or you’re not worth talking to.” While that’s a while off, I do believe it will come true.

From the Feeds…

Some interesting stuff from the feeds this morning:

Solaris 10 to drop root
Brief tutorial on using gdb for developing exploits
Cisco IOS to go modular (finally)
Cisco to release datacentre deployment strategy blueprint (couldn’t find anything about it on CCO, though)

There also seems to be something going around on blogs (1, 2, 3)

1. Grab the nearest book.
2. Open the book to page 23.
3. Find the fifth sentence.
4. Post the text of the sentence in your journal along with these instructions.

“As a rule, all routers really care about is the location of each network.”
(Routing TCP/IP, Volume I, Jeff Doyle)

Make bzImage vs Make zImage

What’s the difference between the bzImage and zImage targets when compiling a kernel?

If you said that the former used bzip2 compression while the latter used gzip, then read on for the real reason.

I ran into this one a few years ago when I was digging into RPM and some custom kernels. A couple of years later I noticed a column that repeated the compression explanation. (Note that this was otherwise a superb column, and was probably the largest influence on my writing over the past few years… I was deeply saddened when the magazine went under.) I’m also currently looking through some LPI material that gives the same reasoning, and I have heard that it may even appear on the exam the wrong way!

I then asked a coworker who has a brilliant Linux mind, and he got it wrong, too.

So, let’s stop the bzImage == bzip2 compression myth! :)

Take a look at /usr/src/linux-2.4/arch/i386/boot/setup.S:

# Now we move the system to its rightful place ... but we check if we have a
# big-kernel. In that case we *must* not move it ...
	testb	$LOADED_HIGH, %cs:loadflags
	jz	do_move0	# .. then we have a normal low
				# loaded zImage
				# .. or else we have a high
				# loaded bzImage
	jmp	end_move	# ... and we skip moving

Basically, a “b” kernel is loaded into high memory, and a regular kernel into low memory. All compressed kernels are compressed with gzip -9.
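
You can check this yourself: both images carry a gzip-compressed payload, so the gzip magic number (1f 8b, followed by 08 for deflate) shows up inside either one. A quick sketch, assuming a 2.4-era tree with both targets built:

od -An -t x1 arch/i386/boot/zImage | tr -d '\n' | grep -c '1f 8b 08'
od -An -t x1 arch/i386/boot/bzImage | tr -d '\n' | grep -c '1f 8b 08'

Both print 1 (a match). If bzImage were bzip2-compressed, you’d find 42 5a 68 (“BZh”) in it instead, and you won’t.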

This document provides some eye-watering detail on the whole bootup process. Or just take it from me: a bz kernel is a “big” kernel, and gets loaded into high memory.

Attention, People Who Write Assembly Instructions

On a step that requires screws, have a life-size picture of the screw on the same step. With several pages of instructions and a wind outside, I can’t be flipping between page 3 and page 1 to determine if “part E” is the 3” screw or the 4” screw.

Also let me know how many of each screw I’ll be using in each step. You’ve got several diagrams for the same step, but nothing lines up, so I don’t know if I should be putting in 4 screws or 6.

Label the parts themselves so I can figure out what the heck they are.

Don’t arbitrarily switch perspectives on me in the diagrams, especially if the thing we’re building is vaguely squarish. It just leads to people putting parts in the wrong way.

Speaking of which, make sure your diagrams clearly show the orientation of the product. If your designers had any clue, they’d design the product so that if I tried to put a piece in the wrong way, it wouldn’t fit, instead of letting me find out two steps later.

If you’re going to make a “helpful hints” section, make sure they’re helpful. For instance, saying “in step 7, you may want to try it this way” isn’t helpful. Tell me on step 7.

A quick search of the ’net shows I’m not alone in thinking this particular product has some problems. (The second comment is really funny; it doesn’t make me feel so bad about the time I spent on it.)

Read the full article to see some pics.


CRTC Issues Preliminary View on VoIP

Jeff Pulver mentioned the CRTC’s notice about Voice over IP. While I don’t have much background in telecommunications law, the CRTC is saying that it feels VoIP carriers should be subject to the same regulations that the Bellheads follow.

My layman’s view is that this is a bad idea. Many of these laws came into force to compensate for the years of monopoly the ILECs enjoyed, and are meant to increase competition. If competition, and more specifically the consumer, are on the CRTC’s mind, it would seem to me the best approach would be to leave the VoIP carriers free from the bonds of regulation.

I’ll be researching this more closely, and hopefully I’ll understand enough to make a submission to the CRTC by the April 28th deadline. I encourage any Canadians reading this to do the same.