Sean’s Obsessions

Sean Walberg’s blog

About Writing a Book

Whenever people find out I’m writing a book, I’m always asked how it all works. Now that I’m almost done, I’ll take some time to write it out.

The first step is to get a proposal accepted. The first time I tried this I worked with an agent from Studio B. He had me write out my ideas, and we gradually shaped them into a proposal document. The agent then tried to shop it around to various publishers, but I got no bites.

For the book I ended up writing (Check Point CCSA Exam Cram 2), I was originally supposed to be the technical editor. I had previously been the technical editor for a Linux+ book because the author was a friend, and since I hadn’t screwed that up and had relevant Check Point experience, I was brought on to the Check Point book. There were some problems with the progress of the project, and the publisher was looking for a new author, so I was given the opportunity to put in a proposal.

I had been dealing with the acquisitions editor (AE), who oversees the project and whose prime role is to approve/disapprove book ideas. She had a standard proposal template, which I worked through with her, after which the book was accepted.

I then put my agent in contact with the AE, and they negotiated a contract. The way contracts in the publishing industry seem to work is that the author gets royalties on the wholesale cost of the book (wholesale is about 60% of the cover price, I’m told). Typical royalty rates are between 7% and 12%, often on a sliding scale (i.e. 7% on the first 5,000 copies, 9% on the next 5,000, and so forth). The author is paid an advance against the royalties; royalties earned are credited against the advance until it has been paid back, after which the author starts getting cheques.
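As a toy example of the arithmetic (every number below is made up, not from my contract), here is a minimal Python sketch:

# Sliding-scale royalty math -- all numbers are invented for illustration.
cover_price = 50.00
wholesale = cover_price * 0.60   # royalties are paid on wholesale, ~60% of cover
advance = 5000.00

def royalties(copies):
    """7% on the first 5,000 copies, 9% after that."""
    first = min(copies, 5000) * wholesale * 0.07
    rest = max(copies - 5000, 0) * wholesale * 0.09
    return first + rest

earned = royalties(4000)   # 4,000 copies * $30 * 7% = $8,400
print(f"Earned ${earned:,.2f} against a ${advance:,.2f} advance")

On those invented numbers, the advance is earned out somewhere around the 2,400-copy mark, and cheques start after that.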

There is far more to a contract than just the money. Many clauses deal with future editions, schedule, translations, re-use, and cross-accounting. The last one is interesting, because it means that if you write several books for the same publisher, they have the right to deduct unearned advances on one book from the royalties of another. The agent negotiates these terms with the AE based on their knowledge of the market and other contracts the publisher has signed.

After that, the writing begins. The payments depend on the contract – the advance may be paid out all at once, or in installments tied to predefined milestones. My project was to run from Nov 15, 2004 to Mar 1, 2005.

I was given a Word template to do the writing in, which sets out the styles for headings and such. I submitted my work to my development editor (DE), who managed the project for the most part.

My work was given to the technical editors (TEs), who checked for accuracy and completeness and made comments where needed. The copy editor (CE) checked for grammar, making changes and asking questions where needed. The DE also made comments on the structure of the document and on conformance to the series guidelines (this book was part of the Exam Cram 2 series, so the publisher wants a consistent look and feel). All these changes were incorporated back into the original document by the project editor (PE), who sent them to me.

I then went through all the queries made by the various people (a stage called author review) and made changes where I saw the need. Sometimes I disagreed with them (keep in mind that only the TEs are technical people), and sometimes their comments prompted me to add a new section. Their combined effort really made a difference in the book. This stage ran in parallel with the writing, though in my case I didn’t start getting the edits back until I was about 75% done with the writing.

Where it sits now: by the end of the weekend, I will have submitted all the content for the book and completed the author review stage for about 75% of the material. Once that’s all done, I will have one more opportunity to review the entire book, and then it’s off to production. I’m told it takes 3-4 months from there until it hits the shelves. In the meantime, I have to build a small website for the book.

As for what’s next? I don’t plan on writing another book in the near future. I’d like to write some articles for magazines, and perhaps be a technical editor on another project.

What’s the Big Deal With AutoLink?

I still don’t see why AutoLink is evil. I have yet to see an argument against it that isn’t at least partly rooted in ignorance of how it works.

Geek News is the latest site to flip its lid. Parts of the author’s comments are below, because I think they point to why people don’t get it.

(You may also want to read StopScum, where I originally began my thoughts on the issue.)

1st, I want a way to prevent your Autolink feature from creating your advertising links on my website.

AutoLink allows the user to get more information regarding items on a page. For instance, if an address appears on a page, the user can click the AutoLink button and the address becomes a link to Google Maps, MapQuest, or Yahoo! Maps, whichever the user chooses. A FedEx tracking code becomes a link to FedEx’s tracking page. An ISBN becomes an Amazon link.

So, how do you prevent these links from happening?

First, provide the information the user is looking for in the first place. If the user is looking for a description of the book, they won’t use AutoLink if the information is already there. Provide a link to a map of your building if you post an address. If the user has to use AutoLink to find this information, then you have failed the user and don’t deserve the traffic.

Second, if you don’t want users relying on the AutoLink feature to get the information you are obviously failing to provide, make the link yourself. I’ve tested books and addresses – AutoLink will not override a link that’s already specified in the HTML.
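For example, markup like the following is left alone by AutoLink, because the ISBN is already a link (the ISBN and URL here are placeholders, not a real book):

<!-- An ISBN that is already linked; AutoLink won't touch it. -->
<p>My book (ISBN <a href="http://www.example.com/my-book">0-00-000000-0</a>) is out now.</p>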

2nd, if you are not going to give us a way to block Autolink I want you to pay me every time you cause traffic to leave my website by a reader clicking on one of your links not mine. Along with that I want independent auditing of those click aways.

Again, the user has made a choice. You failed to provide the information. The user clicked the AutoLink button and left.

3rd, I suspect that you may be violating my copyright and creative commons license. That will be up to a copyright lawyer to determine.

Do you make the same threats to people who run ad blockers? Blind people who have your page read to them through a text-to-speech converter? Users who override stylesheets? It’s all the same: they all change the way the page is displayed.

As far as I’m concerned, the writer owns the content of the site, not the layout. HTML is simply markup. If you want absolute control of the web site then publish in PDF or print. If I, as a reader, choose to do things after the fact, then you have no cause to complain. Especially since in this case the page is only changed after the user clicks a button with the express purpose of getting more information.

I’m Still Here

Been spending all my free time working on the book. Turned in the last content chapter earlier in the week, which leaves me with two practice tests and the appendices to write.

The appendices won’t be hard; I’m almost done with them. But practice questions annoy me, especially when I’ve got 120 to write.

After that, I have all the edits to go through, and then the final review. The schedule calls for it all to be done at the end of the month. I’m sure my submissions will be on time, but I’m not sure the edits will be. It’s somewhat spooky: I have yet to receive any feedback from the technical editors, or anything significant from the other editors.

I’ll be happy to be done with this project. I’ve learned a lot about Check Point in the process, but it’s been taxing with a full-time job and two small kids.

IOS 12.3(1) Has a Cron Feature

IOS Command Scheduler

Cool!

The Command Scheduler feature provides the ability to schedule some EXEC command-line interface (CLI) commands to run at specific times or at specified intervals.

Restrictions for Command Scheduler
The EXEC CLI specified in a Command Scheduler policy list must not generate a prompt or have the ability to be terminated using keystrokes. Command Scheduler is designed as a fully automated facility and no manual intervention is permitted.

Information About Command Scheduler
To configure Command Scheduler, you need to understand the following concept:

Command Scheduler Overview
Command Scheduler allows customers to schedule fully-qualified EXEC mode CLI commands to run once, at specified intervals, or at specified calendar dates and times. Originally designed to work with CNS commands, Command Scheduler has a broader application. Using the CNS image agent feature, remote routers residing outside a firewall or using Network Address Translation (NAT) addresses can use Command Scheduler to launch CLI at intervals to update the image running in the router.

Command Scheduler has two basic processes. A policy list is configured containing lines of fully-qualified EXEC CLI to be run at the same time or interval. One or more policy lists are then scheduled to run after a specified interval of time or at a specified calendar date and time. Each scheduled occurrence can be set to run once only or on a recurring basis.
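Based on that description, I’d expect the configuration to look something like this (the list and occurrence names are mine, and I haven’t tried this on a router yet):

Router# configure terminal
Router(config)# kron policy-list nightly-check
Router(config-kron-policy)# cli show version
Router(config-kron-policy)# exit
Router(config)# kron occurrence nightly at 1:00 recurring
Router(config-kron-occurrence)# policy-list nightly-check
Router(config-kron-occurrence)# end

The policy list holds the commands to run; the occurrence ties one or more lists to a schedule.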

What Do People Have Against the Word Scalable?

I’m not sure why so many people are piling on the word “scalable”. I read another article the other day that said it simply means “it can grow”. That is far from the truth.

Scalable has a well-understood meaning: something that is scalable has an architecture that allows it to grow to any given size without inordinate amounts of upgrades. If you had to triple your web servers in order to handle 1,000 more hits a day, that’s not scalable.

Computer scientists have always had a way to describe the performance of an algorithm, which is essentially its scalability. It’s called “Big-O notation”.

Something that is O(1) takes the same amount of work no matter how big the problem gets. O(N) grows linearly with the problem: double the number of hits, and you double the web servers.

Once you get beyond O(N), the problem can grow faster than you can scale your resources. For example, doubling the input to an O(N^2) (that’s N-squared) problem requires quadruple the resources.

The reason that encryption works is that decryption without the key is not a scalable problem. In a symmetric key system such as DES or AES, adding a single bit to the key doubles the attacker’s work. By the time you get to 128 bits of keyspace (about 3.4 × 10^38 possible keys), checking every key would take more energy and time than exist in the universe.
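A toy Python sketch of how these growth rates compare, including the exponential case that makes brute-force decryption hopeless (the numbers are illustrative, not benchmarks):

# Relative work required for a problem of size n under different growth rates.
def work(n, complexity):
    if complexity == "O(1)":
        return 1          # constant: same work at any size
    if complexity == "O(N)":
        return n          # linear: double the hits, double the servers
    if complexity == "O(N^2)":
        return n * n      # quadratic: double the input, quadruple the work
    if complexity == "O(2^N)":
        return 2 ** n     # exponential: one more key bit doubles the search
    raise ValueError(complexity)

for c in ("O(1)", "O(N)", "O(N^2)", "O(2^N)"):
    print(c, [work(n, c) for n in (1, 2, 4, 8)])
# O(1)   [1, 1, 1, 1]
# O(N)   [1, 2, 4, 8]
# O(N^2) [1, 4, 16, 64]
# O(2^N) [2, 4, 16, 256]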

Of course, a lot of people don’t like to talk about scalability because it’s easier to slap together a system, collect the bonus, and leave the maintenance to someone else. But a good, scalable architecture that solves the IT problem is vital.

Nofollow Plugin Testing

Google has announced that they’ll respect rel=“nofollow” on links, meaning they won’t follow them or assign PageRank to any link carrying the attribute. I’ve added the nofollow plugin to Movable Type; hopefully it works!
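For reference, all the change amounts to is an extra attribute on the anchor tags of commenter-supplied links, something like this (the URL is a placeholder):

<a href="http://example.com/" rel="nofollow">commenter’s site</a>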

This will hopefully curb comment spam, since the spammers’ primary goal is to get backlinks. I’m sure they’ll keep at it, but you have to start somewhere.

Google Buying Dark Fibre, a Contrarian Opinion

Many are referring to the ZDNet article entitled “Google Wants Dark Fibre” as a sign that the big G is going to compete with the telcos. I’m going to take the contrarian opinion and say this is probably nothing.

First, for those of you who don’t know: you buy fibre data services either lit or dark.

With a lit service, you get a specified protocol and speed on either end, usually Ethernet or some flavour of ATM/SONET. It’s lit because there is literally light from the carrier’s lasers on the line to provide the service. The carrier takes care of all the interconnects, and depending on the type of service you may know very little about what goes on in the middle. You can even have the carrier merge several links, because they have active gear in the middle speaking the same protocol as you.

Dark fibre, by contrast, is a piece of unlit glass starting at one location and terminating at the other, with nothing in the middle. You can send Morse code over it, or 10 Gigabit Ethernet if you want. The neat thing about dark fibre is that through Wave Division Multiplexing (WDM) you can run multiple channels on the same fibre: you could have two OC-48s (2.5 Gbit/s each) going at the same time as your Fibre Channel SAN, and even your Morse code. By sending the different signals at different wavelengths and pulling them apart at the other end, you can multiplex many channels. I don’t follow the technology closely, but using Dense WDM (DWDM) you can get in excess of 64 channels on a single pair.

This is great if you’re running many pipes between the same locations, since you could potentially save money. And after the dot-com bust, there are silly amounts of unlit fibre in the ground.

I posted this to Webmasterworld in response to the topic.

A few things make me think this might be blown out of proportion.

- The job also talks about them being responsible for negotiations for all the data centre and power stuff, their Internet traffic, and HVAC. So this isn’t a “take over the world with dark fibre” position

- In the part where they ask for experience getting dark fibre, they also ask for the same thing with managed MAN and lambdas. Basically, all the pipes G would use for their network.

- It’s not unreasonable to assume that G makes use of dark fibre, or would look to try and lower the $/Mb/s costs by trying alternative means. If you look at the “Peering Manager” description above, they have a guy that does this solely for their Internet peering.

- It doesn’t specify “bundles” of dark fibre. It could easily be a single pair.

The stuff about global backbone networks, while true, sounds more like it’s there to get the right person excited than it is to announce they’re competing with the telcos ;)

Sean

Add to this that Google deals in managing information, not bits on a wire. I think it makes far more sense that dark fibre is simply one tool in the huge toolbox at Google, and that people are jumping to conclusions.

So, IMHO, while the geniuses at Google could conceivably get into the bandwidth market, I don’t think it’s where they want to be.

Removing Spyware

I’m usually pretty careful when using IE, but I don’t know what happened this time. My home page was changed to “Home Search”, and links were starting to be changed around on me. My Trend Antivirus started chirping like mad.

Even Spybot couldn’t get rid of this one. I finally found these instructions on removing Home Search. It was painful, but I’m clean.

I only use IE at work, or when I have my work laptop at home. I use Mozilla on my Linux box at home. After this, I’m seriously thinking of making the switch at work.

Amanda Didn’t Index the Tape, Had to Recover

I’ve been using AMANDA to back up my systems for ages. After accidentally blowing away a file today, I had to recover. However, for some reason, amrecover told me that I didn’t have an index for that volume, meaning that it had no way of knowing what tape the file would be on.

Checking myself: indeed, there were no indices for that host, and after looking at the configuration, it was my mistake (why is that the default, I ask?). The files on tape are either dump or tar images with an AMANDA header, but I had no way of knowing which tape my files were on, or how to get them out.

I finally found the AMANDA FAQ which explains the header and the way to get data off.

Basically, I go

# mt -f /dev/nst1 fsf 1            # skip the AMANDA tape label to the first dump image
# dd if=/dev/nst1 bs=32k count=1   # print that image's 32 KB AMANDA header

looking for the right dump level, host, and volume. I knew I needed a full (level 0) backup, since the file had been there for a while.

When I found the right image, I ran mt -f /dev/nst1 status to figure out which file number it was. Finally, I rewound the tape, skipped forward to that file, and passed the output through restore:

# mt -f /dev/nst1 rewind   # back to the start of the tape
# mt -f /dev/nst1 fsf 9    # forward to the dump image I identified
# dd if=/dev/nst1 bs=32k skip=1 | gunzip | restore -if -   # skip the header, uncompress, restore

This puts me into an interactive restore, where I can view the virtual filesystem contained within the dump, then pick the files I want.

While I use the built-in amverify to test the backups periodically, I had never actually tried a restore from that host in particular. Hopefully, now that I have indices, I won’t have to do this again.

Recently Added to My List of Feeds

I’ve recently added a bunch of new sites to my RSS reader, some of which I think are worth sharing.

SANS Internet Storm Center - Every day there is a diary entry about the analysis going on, i.e. what trends in attacks are being seen.

Ask Dave Taylor - A well-known author answers a question or two every day. Topics range from shell scripting to desktop issues. RSS makes this site great: the feed contains the question summary and a more detailed version. If I’m interested in the answer, I click. If not, I move on.

Lost Below the 49th - a transplanted Canadian who’s interested in web performance. Interesting statistical stuff. And he’s got a master’s degree in history.

ProBlogger - An Aussie who blogs for a living. Lots of fascinating stuff on the site, and he’s an interesting guy to talk to.

Lycos top 50 blog - commentary on the state of search, from the Lycos perspective.