Sean’s Obsessions

Sean Walberg’s blog

If You Run Your Mail on a Dynamic IP…

In case you didn’t know, I’m running this website and email domain on my cable modem. A few weeks ago I upgraded the box, which also resulted in a new address.

Now, dealing with a dynamic address is easy enough: I use ZoneEdit for free DNS hosting, and ddclient to automagically update the records whenever the address changes.
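
If you haven’t set ddclient up before, the configuration is only a few lines. A minimal sketch, assuming your ddclient build supports the zoneedit1 protocol (the login and password are placeholders for your own ZoneEdit details):

# /etc/ddclient.conf
daemon=300                     # recheck the address every five minutes
use=web                        # learn the external address from a web check
protocol=zoneedit1             # ZoneEdit's dynamic DNS update protocol
login=my-zoneedit-login        # placeholder
password=my-zoneedit-password  # placeholder
ertw.com                       # the hostname to keep updated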

However, something I’ve since learned: when you get a new address, check whether you are blacklisted.

I’ve dealt with being blacklisted before on non-static IPs, when a mail server was found to be an open relay. It’s always been a pain to get off a list: once you find out which site is blacklisting you, you visit their page, request a retest, and wait.

However, these days there are dozens of blacklists, each with varying criteria for getting onto their list. This time around, the IP I picked up appeared to have run an open proxy and open SOCKS servers at some point. I didn’t notice until someone I’d given an alias to let me know he was having problems.

After checking into it, I found out that in early February there were problems reported from the address I inherited, and I was on seven blacklists. After visiting each page and requesting a retest, I’m still waiting for six of them to perform the test. The one that did retest me responded quickly, but the procedure involved sending the output of one web page to an email address, responding to that email with another key, and then visiting a web page. This had to be done per service, and I had three services flagged as problematic.

This is a big reason I don’t like blacklists. People who manage to get themselves on one have a hell of a time getting off. Reading through the FAQs on the various pages, you’d think I’d committed a horrendous crime. There are a lot of admins out there who may not know they are on a list, or who have fixed the problems but don’t know how to get off. There don’t seem to be periodic retests, either; the request has to be put in manually.

To check if you are on blacklists, you can go to

http://www.senderbase.org/
http://openrbl.org

and put in your address.

To test whether you are an open mail relay, telnet to relay-test.mail-abuse.org from your mail server; it will connect back to you and run a series of relay tests.
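
You can also query most DNS-based lists by hand: reverse the octets of your address, tack on the list’s zone, and look it up. An answer in 127.0.0.0/8 means you’re listed. A sketch using my 24.76.10.54 against the Spamhaus SBL (substitute whichever list’s zone you care about):

dig +short 54.10.76.24.sbl.spamhaus.org
# no answer means you're not listed; an answer such as 127.0.0.2 means you are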

Also make sure you have working postmaster and abuse aliases on your site, as some of the blacklist operators are kind enough to send a message. Remember, though, that they will be sending to your dynamic hostname rather than your regular domain, so make sure you accept mail for that host. For example, my current IP means I have to accept mail for the h24-76-10-54.wp.shawcable.net domain. Sendmail users can put this in /etc/mail/local-host-names, or whatever file the Fw line in sendmail.cf points at.
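
In my case, /etc/mail/local-host-names ends up looking something like this:

ertw.com
h24-76-10-54.wp.shawcable.net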

Improve Fedora’s Performance

Turn off Nautilus. I’m not kidding. It’s that much of a bloated pig. Sure, you won’t have icons on your desktop or the file manager, but to be honest, I never use them. I’ll probably lose some drag-and-drop functionality, but I’ve never used that anyway. The performance increase is noticeable.

Run gnome-session-properties, pick the Current Session tab, highlight “nautilus”, and click “Remove”. Then click “Apply”. If you decide you don’t like the change, you can just run nautilus from the command line to get it back.

I also tried Openbox and Sawfish as window managers. The latter is the better of the two, IMHO, but I couldn’t get some things to work the way I wanted, so I reverted to the default, Metacity.

Finally, Someone Who Monitors RAID!

It is surprising how many people implement some sort of redundancy with no means of detecting when one of the components has failed: building a disk mirror, for example, but never knowing when one of the disks has died.

That’s one thing that’s kept me away from using Linux software RAID for anything serious: I never saw a good way to determine that something had failed, short of stumbling upon an odd log entry that turns out to mean a dead disk.
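
As it turns out, the kernel does expose the array state, and mdadm can watch it for you. A sketch, assuming your arrays are listed in /etc/mdadm.conf:

cat /proc/mdstat   # an (F) beside a device marks a failed member

# or let mdadm watch continuously and mail root when something fails
mdadm --monitor --scan --mail=root --delay=300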

Sys Admin magazine has a good article on software RAID that actually tells you how to monitor for a failed disk and how to rebuild it.

I should also note that Fedora includes smartmontools, which monitors SMART-capable drives (and most are) for both failures and pre-failure warnings. That somewhat moots the article above, but since you’re building redundant disks, you may as well have redundant ways of checking for failures.
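
Turning it on is a one-liner per drive in /etc/smartd.conf; a sketch for a typical IDE disk:

# monitor /dev/hda with the default set of checks, and mail root on trouble
/dev/hda -a -m root

Start the smartd service and it will log, and mail root, as problems show up.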

Blogs as a Study Tool

I’ve recently embarked on my CCNP recertification. It isn’t due until the end of the year, but I wanted to give myself time.

Normally when I study this type of material I make notes and drawings. However, these get lost once I’m done with the task. I thought this time around I’d try using a blog, and open up my study notes to others.

I don’t want the blog to simply hold notes; I need some degree of summarization. In particular, I’d like a place to track all the references (i.e., other web sites) and key words. I found the Collect plugin, which lets you do a whole bunch of fun stuff.

The first thing I wanted was to put a list of all the links used at the bottom of each page. In the Individual Entry Archive template, I added the following:



<MTCollect tags="a">
<MTCollectThis>
<$MTEntryBody$>
<$MTEntryMore$>
</MTCollectThis>
<MTCollected>
<a href="<$MTCollectedAttr attr="href"$>" target="_blank">[<$MTCollectedIndex$>] <$MTCollectedAttr attr="href"$></a><br />
</MTCollected>
</MTCollect>



Basically, the MTCollect tag specifies that we’ll only be looking at “a” elements (i.e., links). MTCollectThis encapsulates the content to be evaluated, and then MTCollected iterates through the collection. You pull the data items out with the MTCollectedAttr tag.

The next thing I’d like is a page containing all the articles with a list of the references:


<MTEntries>
<a href="<$MTEntryPermalink$>"><$MTEntryDate format="%x"$>: <$MTEntryTitle$></a>
<MTCollect tags="a">
<MTCollectThis>
<$MTEntryBody$>
<$MTEntryMore$>
</MTCollectThis>
<MTCollected sort_by="content">
<a href="<$MTCollectedAttr attr="href"$>"><$MTCollectedContent$></a><br />
</MTCollected>
</MTCollect>
</MTEntries>

Here, the MTEntries tag is used to iterate through all the Movable Type articles, like a for loop. After that, it’s the same thing as before, except that I print out the name and date of each article, with a link to it. Some other interesting points are the sort_by attribute on MTCollected, where “content” refers to the text wrapped by the A tag (i.e., the visible link text), and MTCollectedContent, which pulls out the name of the link rather than the URL as before.

The final goal is to pull keywords out of all the articles and make a clickable list that lets me search for the term on my site. For example, if “OSPF” were a term used in various articles, I’d like OSPF to appear on the page with a link that lets me search for it on the site (and on Google, perhaps, as an enhancement).




<MTCollect tags="b">
<MTEntries>
<MTCollectThis>
<$MTEntryBody$>
<$MTEntryMore$>
</MTCollectThis>
</MTEntries>
<MTCollected sort_by="content" unique="1">
<a href="<$MTCGIPath$>mt-search.cgi?search=<$MTCollectedContent$>"><$MTCollectedContent$></a><br />
</MTCollected>
</MTCollect>



When I’m writing the text, I put key terms in boldface the first time I use them. With this, I can pull out the B tags with the MTCollect element. This time, the MTEntries loop through all the articles happens inside the MTCollect, and MTCollectThis wraps the text of each article.

After that, I iterate through the collected content, sorting by the term itself and skipping duplicates. The search URL is formed by combining the various tags.
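
For the OSPF example above, the generated HTML ends up looking something like this (assuming a stock Movable Type search CGI location; adjust the path to match your install):

<a href="http://ertw.com/cgi-bin/mt/mt-search.cgi?search=OSPF">OSPF</a>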

One item to note is that in order for Collect to find the information, attribute values have to be in quotes. That is, something like <a href=http://www.example.com/> doesn’t work; you need <a href="http://www.example.com/">.

If you want to see the site in action, it’s at CCNPRecertification.com.

Time will tell if it works.

It’s National “Ask for Your Credit Report” Day

It’s not an official holiday, but I’m starting it. Since I’m allowed to ask for a free copy of my credit report once per year (or any time I’ve been denied credit), I’m going to do it every March 15th.

I’m not worried about anything in particular, but identity theft has become more commonplace. I even know someone this happened to, and the results weren’t pretty. Furthermore, I just want to make sure the information is current, as in the past couple of years I’ve bought and sold a house and paid off loans.

So, I encourage everyone to request a copy of their credit report on March 15th (or thereabouts), and make sure everything is on the level.

In Canada, there are two places to send a request to:

TransUnion - Send a letter with your information
Equifax - Fill out the form

This report from the Canadian privacy commissioner details a case that shows some of the rights that Canadians have about access to the information that banks use to make decisions about them.

Apologies to the non-Canadians in the crowd; if someone has information on the policies in other countries, please let me know or post it and send a trackback.

Testing

On new server… testing. Nothing to see here.

If you were looking for ertw.com in the past couple of hours, you might have had stale DNS info. Rather than a graceful switchover, I cut over DNS and shut down the old web server; there is some dynamic content that I’d prefer not to have go to the wrong server and then have to merge in by hand.

Pay Me to Do Something I Like, and I’ll Never Work Another Day in My Life

I’ve seen a bit being written about job satisfaction lately, so I figured I’d toss my two cents in.

“Pay me to do something I like, and I’ll never work another day in my life” is something I’ve always thought, meaning that if I’m doing stuff that interests me, it’s not really work.

Job satisfaction starts with being in a position that challenges and interests you. Nothing else matters until you have that.

My first job out of university was as a software developer at a small company of about 20 people. I was paid fairly well for a freshly minted graduate, had excellent benefits, and the company took care of its employees. To celebrate winning a large contract, the company took the whole staff and their spouses to Banff for a week: airfare, hotel, and a fair whack of spending money. A dream job by all other accounts.

But I didn’t have job satisfaction. I didn’t like being a programmer (even if it was a Unix programmer).

So I moved to a hospital. Due to some people quitting, I found myself moved into networking and given tremendous responsibility. I spent three years learning about networking and being constantly challenged with interesting work. I loved my job there. Eventually I got into a rut, largely because the work stopped being challenging and the unions (spit, spit) were starting to get to me. Even though there were many things about the job that pissed me off, for most of the time I didn’t feel like I was at work.

I moved on to a large payroll company, where I am now. My immediate co-workers are brilliant, and the company takes fairly good care of us. Yes, we had some severe layoffs a few weeks ago, but I emerged even happier the next day. I’m paid well for the market I’m in (demand for high-end network folk isn’t excessive in Winnipeg), and I feel secure in my job. Most importantly, I’m constantly challenged.

Sure, there are a lot of things I don’t like about it, but I walk in every Monday looking forward to the week ahead. And, as such, I’ll continue to like my job.

A measure of job satisfaction is to simply ask yourself, “Would I recommend a position in my company to a friend?” I would (and do). People like Jeremy Zawodny publicly display their love of their jobs by calling for co-workers (or maybe he’s just desperate for the cash).

I don’t mean to say I have all the answers for job satisfaction, but I know it starts with simply liking what you do.

Review: Apache Cookbook

Apache Cookbook
Ken Coar & Rich Bowen
O’Reilly, 2004
234 pp., $29.95 USD / $46.95 CDN

I finished going through O’Reilly’s Apache Cookbook a little while back, but it came in handy so often at work that I never brought it home to complete the review!

Like the other entries in the Cookbook series, the Apache Cookbook focuses on common problems, their solutions, and an explanation of the thought process behind them. For an application such as Apache, this is the perfect way to help people out.

Each recipe poses a common problem, such as how to install the web server or a module, gives a concise solution, and discusses how the solution works. Even though some solutions amount to “there is no solution” (such as how to log the IP address of a proxied client), the fact that this is stated as such, along with an explanation of why (either technically impossible, or no such software written), is still helpful.

The breadth of topics is good, starting with basic installation from source, RPM, or helper scripts, moving on to logging, virtual hosts, and security, and covering more advanced topics such as proxying and URL rewriting.

I found the book’s treatment of logging, normally a mundane topic, particularly good. Many of the recipes may not have immediate practical value, such as logging cookies, but they all show off how versatile Apache is. The procedure for logging a cookie turns out to be fairly simple, but in working through it the reader is shown the many ways the CustomLog directive can be used. For logging proxied requests, something this author has unsuccessfully tried to do in the past, the answer shows off some Apache features that let the administrator set environment variables for the request that get picked up later in the process. Along with logging specific things, alternate methods of logging, such as SQL and syslog, are also shown. Surprisingly, I saw no mention of what to do with the logs once they’ve hit disk, even if only a few links to packages such as Webalizer or AWStats.
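
To give a flavour of it (this is the general shape, not the book’s recipe verbatim), logging the Cookie request header is just a matter of a custom log format:

# append the Cookie header to the usual common log format fields
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Cookie}i\"" cookielog
CustomLog logs/cookie_log cookielog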

Chapter 5, “Aliases, Redirecting, and Rewriting”, shows some of the more powerful aspects of Apache, namely its ability to manipulate any aspect of the query. There are several practical recipes here, such as moving parts of your site to another URL, mapping several URLs onto one file, and so forth. This chapter shows off many of the regular expression features, not only the obvious sledgehammer of mod_rewrite but also the Match variants of other directives, such as RedirectMatch and ScriptAliasMatch.
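
As an example of the sort of thing the chapter covers (my own illustration, with placeholder names): sending an entire subtree of the old site to a new server is a single RedirectMatch:

# anything under /docs/ goes to the new server, path preserved
RedirectMatch ^/docs/(.*)$ http://newserver.example.com/docs/$1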

The chapter on SSL is also very helpful, guiding the user through many scenarios such as generating keys, requiring SSL for certain sections of the site, and even using client certificates.

Likewise, the chapters on proxies and performance are excellent if the topic is of interest to you, or you find yourself in need.

The book covers both Apache 1.3 and 2.0, being careful to make notes where the configurations differ.

I brought this book in to work when I first got it, which coincidentally was around the time some of us were doing Apache work. The book proved indispensable, answering everything from “Why does the site work with a trailing slash, but not without?” (hint: check your ServerName directive) to setting up SSL and some site redirections. This book will be close at hand the next time I have an Apache question.
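
The trailing-slash hint, for the curious: Apache builds the redirect from /dir to /dir/ using ServerName, so it has to be a name your clients can actually resolve. A sketch, with a placeholder name:

ServerName www.example.com
UseCanonicalName on   # build self-referencing URLs from the ServerName above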

Apache Cookbook combines an easy-to-follow writing style with a format conducive to solving problems. Anyone who works with Apache will want this book handy.

Playing With VoIP

After attending a great presentation on Linux and VoIP at MUUG last night, I gave Free World Dialup a try. It’s a public SIP registrar that anyone can use, and it includes voice mail and other goodies.

If you’re on the system, I’m at 269423, or 269423@fwd.pulver.com for other SIP clients.

While toll bypass has become less and less of a selling point for VoIP, the PBX-type features of services like this, and of projects like Asterisk, really show off the advantages of VoIP over traditional TDM gear.

There Are Two Types of IT Shops Out There

There are those that are going to embrace Voice-Data convergence, and those that are wasting time and money.

I’ve always thought along these lines, but my recent trip to see a large call centre in action reinforced it.

It’s difficult to gather my thoughts on this issue and explain, so start here:

The End of The Middle - a condensed version of the famous Rise of the Stupid Network

Tim Denton wrote some papers on the same subject.

Traditionally, the voice world has been dominated by big iron running on leased lines, with the network as the intelligent part. Think of how dumb your phone is: you pick up the handset, and the other end sends you a dialtone. You press buttons that play tones, and the other end deciphers them into a number. The other end figures out where to send the call. The other end rings the other person’s phone.

The old world is only good at building channels from person A to person B. Anything else (call waiting, voice mail, call hold) is done at the PBX end.

Compare this to the Internet. Your web browser speaks HTTP. The web server speaks HTTP. You talk. The network doesn’t know what you’re doing, nor does (or should) it care. It’s just a packet.

Bring this into the voice world.

An organization needs flexibility and cost effectiveness. TDM lines take weeks to install. PBX programming is complex. Getting into SS7 integration is expensive and even more complex.

Data connections are built and rebuilt thousands of times a second. All you need are two endpoints, and it’s there. An organization that needs to connect two voice endpoints over its WAN just needs IP addresses.

Moving the PBX functionality into the network, onto open hardware, allows tighter integration of existing software into the system. Suddenly you have access to everything you need. You have the flexibility to monitor the system in real time, move agents between queues, or change queues in a flash.

While I am rather PBX-naive, I know that a lot of these things can still be done on big iron. But I know enough to say that the stupid network, the one that just flips packets, is a better bet than one with all the smarts in the centre.