No Starch Press sent me Practical Packet Analysis, 2nd Edition a little while back. At about 250 pages, it's a lot smaller than Chappell's "Wireshark Network Analysis", and more appropriate for someone who wants to get up and running quickly rather than going for a certification.
The book assumes no knowledge of Wireshark and only a basic understanding of networking. More than half of it is devoted to teaching the Wireshark interface and how the popular protocols work, covering everything from filtering inside Wireshark to the details of different protocols. So if you don't know anything about DNS recursion, for example, you'll get a taste of it here along with what it looks like in Wireshark.
The second half of the book works through fairly typical examples, such as decoding HTTP streams and troubleshooting the causes of network congestion. Of special interest is Chapter 10, which is about using Wireshark for security analysis. This chapter is merely an introduction to a huge topic, but the author has chosen some interesting examples, such as an ARP poisoning attack and the analysis of a Trojan horse.
One theme the author continually comes back to is appropriate placement of the analysis tool. The early chapters discuss the matter in theory, and every example in the second half has some text that analyzes the options for where to use Wireshark and where the best spot is.
Some of the highlights of the book:
A great discussion of TCP congestion and analysis of a congestion scenario
A good tradeoff between depth and breadth. This is a "getting started" guide.
Uses many of the features of Wireshark in a practical context
A good, though basic, chapter about wireless sniffing
Some of the downsides:
No IPv6 (other than a brief mention of a host filter)
Would have liked to see more use of IO graphs and TCP stream graphs, especially when talking about congestion.
On the whole, a great book for the IT administrator who wants to quickly get started using Wireshark. Cover price is $49.95 US; Amazon.com is showing it for $30, which is a bargain.
I use Nagios to monitor the health of a few servers, and would like to be paged if something goes wrong.
When I set it up a couple of years ago, I used SMS Gateway, which was $10 for 100 SMSes. I was able to page with a simple curl command. However, I'd get the odd page that wouldn't go through, and despite being very responsive, the support wasn't very reassuring.
Now that I’ve depleted my 100 pages, I figured I’d move over to Twilio because they’re pretty slick, and the reliability has to be better.
Some Nagios code, first:
define contactgroup{
        contactgroup_name       important
        alias                   Sean Buzz
        members                 sean, page
}
define contact{
        contact_name                    page
        alias                           page
        service_notification_commands   notify-by-page
        host_notification_commands      host-notify-by-page
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r
        host_notification_options       d,u,r
        pager                           nobody@localhost
}
define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost
        service_description     smallpayroll.ca
        contact_groups          important
        check_command           check_http_string!smallpayroll.ca!Easy to use
}
The first stanza creates a contact group called "important" containing two contacts: sean, who gets emailed, and page, who gets paged. The second stanza defines the page contact, whose notifications go through the "notify-by-page" and "host-notify-by-page" commands that do the actual paging (sketched below). The final stanza is an example of a service that would get paged on: if the check_http_string check fails, the "important" contact group is notified.
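check_http_string is a custom check command, not shown above; presumably it is defined along these lines, using check_http's -s option to look for a string in the response:

define command{
        command_name    check_http_string
        command_line    $USER1$/check_http -H $ARG1$ -s "$ARG2$"
}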
To get the SID and TOKEN (note there are two instances of the SID; the second is in the URL right after Accounts), go to your dashboard and look at the top of the page.
To get a number, click on Numbers, then Buy a Number.
Then search for a number. It should be in the USA, as it looks like Canadian numbers don't support SMS. You can verify this by clicking on "Buy".
Buy the number for $1/month. You don’t have to set up any URLs with it if you’re doing outbound only.
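The paging commands themselves are just curl calls against Twilio's REST API. Here's a sketch, assuming the 2010-04-01 SMS resource (SID, TOKEN, YOURNUMBER, and YOURCELL are placeholders; note the SID appears both as the curl user and in the URL):

define command{
        command_name    notify-by-page
        command_line    /usr/bin/curl -s -u SID:TOKEN -d 'From=YOURNUMBER' -d 'To=YOURCELL' -d 'Body=$NOTIFICATIONTYPE$ $SERVICEDESC$ on $HOSTALIAS$ is $SERVICESTATE$' https://api.twilio.com/2010-04-01/Accounts/SID/SMS/Messages
}
define command{
        command_name    host-notify-by-page
        command_line    /usr/bin/curl -s -u SID:TOKEN -d 'From=YOURNUMBER' -d 'To=YOURCELL' -d 'Body=Host $HOSTALIAS$ is $HOSTSTATE$' https://api.twilio.com/2010-04-01/Accounts/SID/SMS/Messages
}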
YOURCELL should be obvious :) It could also be templated within Nagios.
SmallPayroll.ca was my first big Rails project, and looking back at some of the code, it shows. One of the first things I built was the timesheet. The form has 21 input fields per employee, and then it has to go through the database and figure out which days have changed or been deleted. So it's doing a lot, and since I was still figuring out how both Ruby and Rails worked at the time, the code ended up being a mess.
But I was OK with that. If I was going to get anywhere with SmallPayroll, people had to be able to submit a timesheet, and they didn't care whether the server-side code was efficient as long as it worked. As I was to find out, they didn't even seem to care if it was slow. In order to build the rest of the app I had to have a timesheet, so I left my ugly, inefficient code in, along with the tests that exercised it, and got on with building the rest of the application.
Between Rails Analyzer and New Relic I kept an eye on things, and the timesheet did get worse as time passed. Now that SmallPayroll has become more successful and I can spend more time on it, I've come back to fix this. But before I can know whether I'm doing a better job, I have to know how I'm doing now. The first step was a small monkey-patch to the Rails logger so my tests could track the queries generated inside a block:
module ActiveSupport
  class BufferedLogger
    attr_reader :tracked_queries

    def tracking=(val)
      @tracked_queries = []
      @tracking = val
    end

    def debug_with_tracking(message)
      @tracked_queries << message if @tracking
      debug_without_tracking(message)
    end
    alias_method_chain :debug, :tracking
  end
end

class ActiveSupport::TestCase
  def track_queries(&block)
    RAILS_DEFAULT_LOGGER.tracking = true
    # Had to add this to get queries to be logged
    Rails.logger.level = ActiveSupport::BufferedLogger::DEBUG
    yield
    result = RAILS_DEFAULT_LOGGER.tracked_queries
    RAILS_DEFAULT_LOGGER.tracking = false
    Rails.logger.level = ActiveSupport::BufferedLogger::INFO
    result
  end
end
After that it was a matter of making a performance test: copying over some of my functional tests that represented a case I was trying to optimize, then doing some before/after comparisons.
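Such a test might look something like this (the controller, fixture, parameters, and the 50-query budget are all made up for illustration):

require 'test_helper'

class TimesheetPerformanceTest < ActionController::TestCase
  tests TimesheetsController

  def test_timesheet_save_query_count
    queries = track_queries do
      # A representative save, copied from a functional test
      post :update, :id => timesheets(:basic).id,
                    :timesheet => { "day_1_hours" => "8" }
    end
    assert queries.size < 50,
           "Expected fewer than 50 queries, got #{queries.size}"
  end
end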
So it would seem I’ve been able to knock off some time and memory consumption, along with lots of queries, by optimizing my code. Since I had already written test cases I was able to show that it worked the same as before.
But I think the better point to make here is that I could have spent a lot of time trying to build these optimizations in from day 1, detracting from building a good product. Instead, I deferred the hard work until it mattered. And now that I have more Ruby and Rails experience, the optimization was much easier. Something that might have taken several evenings over the course of a couple of weeks was done in less than a day. And while I don't follow TDD, having existing tests to start from made a huge difference.
This site is hosted on a Linode 768 VPS, and has been for a couple of years now, along with some other domains. In the past I hosted it at home and also on a GoDaddy VPS, which didn't end up being all that good, but I'm now very happy with Linode. I host a combination of PHP (mostly WordPress) and Ruby on Rails applications.
Over the years Linode has kept the price the same ($30/month for my plan) but has increased the disk and memory of its plans every year. When I started out, my plan had 18GB of disk space and around 512MB of RAM; now it has 30GB of disk and 768MB of RAM. So the value for money keeps getting better.
I've also set up Linode VPSes for a few people, including TwiTip.com and TopMMANews.com, and continue to assist in their management. Both are fairly heavy sites and also run on a Linode 768. TwiTip hit 11 Mbps of traffic when it was tweeted by Ashton Kutcher, and TopMMANews has a fairly active readership.
I’ve found the service to be very reliable. At one point one of their data centres was having problems but they were fixed reasonably quickly and the company kept the customers updated.
You get a control panel that lets you see your CPU/disk/IO status, along with how much disk and network you've used. The screenshot below shows my system (you can see that I haven't yet taken advantage of the 6GB of disk space they added to my account).
One feature I really like about the service is that you get free DNS hosting, and the interface is very simple (I mean “simple” as “does not get in your way”, not as in “stripped of features”). You can do AAAA, TXT, and SRV records, or control the whole thing through an API.
I can’t speak highly enough about Linode VPSes. If you’re looking for a VPS service they offer great value for money and a high service level. If you’re wondering about which size to buy, I’ve found the 768 to be a real workhorse. You can also upgrade/downgrade your plan with minimal downtime and no loss of data, so there’s little risk in picking the wrong plan.
I have been playing with the Freshbooks API and the Twilio API as part of a contest that Twilio is running. It’s a great excuse to try something I’ve been meaning to do for a while.
I ran into a few problems.
The freshbooks gem doesn't work under Ruby 1.9.2, which I found out after deploying to Heroku and then trying it locally under RVM. The error was:
NoMethodError in OutstandingController#index
undefined method `gsub!' for :invoice_id:Symbol
Someone made a compatible gem on GitHub that you can use in your Gemfile instead of gem "freshbooks".
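The fork is bcurren's (mentioned below), so the Gemfile line would be something like:

gem "freshbooks.rb", :git => "git://github.com/bcurren/freshbooks.rb.git"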
Instead of using FreshBooks.setup to enter your credentials, use FreshBooks::Base.establish_connection (sketched below).
The original gem let you do FreshBooks::Invoice.send_by_email(id); the bcurren one makes it an instance method: FreshBooks::Invoice.get(id).send_by_email
Those were the only two changes I had to make.
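For reference, the connection setup looks something like this (the signature is from memory; the subdomain and token are placeholders):

FreshBooks::Base.establish_connection('yourcompany.freshbooks.com', 'your_api_token')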
The second problem was with the Twilio gem. I got SSL errors whenever I made outbound calls to Twilio (as opposed to handling incoming requests from the Twilio API).
The reason for this problem is that the OpenSSL library is making a call to an HTTPS resource, but it has no way to verify the certificate. There are two ways to fix this problem:
Tell OpenSSL not to verify the certificate
Give OpenSSL the proper Certification Authority (CA) certificates to verify the certificate
I'll confess that my normal approach here is #1, but this time I felt like doing it properly. Since the Twilio module includes HTTParty, you can call methods right on the Twilio module. So add an initializer inside config/initializers, such as twilio.rb.
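Something like this should do it (ssl_ca_file is HTTParty's class-level option for pointing at a CA bundle; the config/cacert.pem path is my assumption):

# config/initializers/twilio.rb
# Tell HTTParty (and therefore the Twilio module) where the CA bundle lives
Twilio.ssl_ca_file File.join(Rails.root, "config", "cacert.pem")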
Then, all you have to do is grab cacert.pem from somewhere else. Many other gems include it, so if you look for the file on your hard drive you should find it. FWIW, the NewRelic one doesn't work; it only includes the certificates they need. I ended up finding mine in ActiveMerchant.
*edit* I forked the twilio gem to try and bundle the cacert.pem file, and as I was going through the code I saw that they look for the certs in /etc/ssl/certs unless you use the method above.
I have an administration panel for my Rails application that shows various information, and I've found it helpful to show the last few commits along with a link to the repository.
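Here's a sketch of the helper (the gitweb URL, log format, and commit count are illustrative):

# app/helpers/admin_helper.rb -- assumes the app runs from a git checkout
def recent_commits(count = 5)
  items = `git log --pretty=format:"%h|%s|%ar" -n #{count}`.split("\n").map do |line|
    sha, subject, ago = line.split("|", 3)
    url = "http://git.example.com/?p=myapp.git;a=commit;h=#{sha}"  # hypothetical gitweb link
    content_tag(:li, link_to(h(subject), url) + " (#{ago})")
  end
  content_tag(:ul, items.join.html_safe)
end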
It's pretty simple: it just parses the output of git log and spits it out as a list, showing the description and how long ago it was checked in. If you have gitweb or something similar installed, you get a link to the repo.
It's helped me find production bugs, and it also catches the times I deploy without pushing my code to the git repo and end up forgetting some changes!
I was doing some work that involved moving between several directories. Remembering about pushd and popd, I googled around to try and find out how to use them properly. I found this article which was helpful, but what was even better was one of the comments talking about “cd -“.
[root@host tmp]# pwd
/tmp
[root@host tmp]# cd -
/auto/tmp/sean
[root@host sean]# pwd
/auto/tmp/sean
[root@host sean]# cd -
/tmp
[root@host tmp]# pwd
/tmp
What it does is cycle you between your last two directories. It also operates outside of the pushd/popd stack.
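For example (a made-up session), cd - keeps flipping to the previous directory even while the pushd/popd stack holds others:

[root@host tmp]# pushd /etc
/etc /tmp
[root@host etc]# cd /var/log
[root@host log]# cd -
/etc
[root@host etc]# popd
/tmp
[root@host tmp]#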
I am doing some work with WordPress, where we have a development server and a production server. The development side is set up as a git repo, and the production side pulls from the dev repo when we want to bring in changes:
git pull origin master
I move between the two environments by editing my hosts file, which sometimes means I'm not sure which environment I'm in. I put the following in functions.php to help me out:
function mysite_i_am_in_dev() {
    // Red border if we're in dev mode
    echo '<!-- dev mode --><style type="text/css"> body { border: 2px solid #FF0000; } </style>';
}

if ($_SERVER["SERVER_ADDR"] == "x.x.x.x") { // x.x.x.x is the IP address of your dev server
    add_action('wp_head', 'mysite_i_am_in_dev');
    add_action('admin_head', 'mysite_i_am_in_dev');
}
So now, anyone viewing the development server will have a small red border around the screen.
I recently changed SmallPayroll to use Beanstalkd instead of delayed_job for background processing. delayed_job is an awesome tool and makes asynchronous processing very simple. However, I wanted multiple queues so that different workers could process different queues, and I have some upcoming needs to process jobs faster than delayed_job's 5-second polling interval.
After watching the Railscasts episode on Beanstalkd and Stalker, I decided to use that combination. Beanstalkd is a lightweight job queue, and Stalker makes it very simple to use from the client end.
delayed_job was nice in that the job would just run against the model, but now I have to process the job in config/worker.rb:
require 'stalker'
include Stalker
require File.expand_path("../environment", __FILE__)

job 'default' do |args|
  puts "I don't support the default queue"
end

job 'email.new_user' do |args|
  user = User.find(args["user_id"])
  UserMailer.deliver_welcome_email(user)
  UserMailer.deliver_notify_admin(user)
end
One thing about Stalker is that it wants you to pass simple objects instead of ActiveRecord objects, so I queue the user_id instead of the user model.
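The enqueueing side is then just (Stalker's enqueue takes the job name plus a hash of simple arguments; the controller context is hypothetical):

# Wherever the user gets created -- queue the id, not the record
Stalker.enqueue('email.new_user', :user_id => @user.id)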
The script above also watches the default tube, which I don't use, because the Nagios plugin for beanstalkd expects someone to be monitoring it (after setting it all up, I guess I could have configured it to ignore that tube). I'm also using the munin plugin for beanstalkd to graph the activity in the daemon.
Then, script/worker uses the daemons gem to start the worker and restart it if it crashes.
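A minimal sketch (the pid-file directory is an assumption; :monitor tells daemons to watch the process and restart it if it dies):

#!/usr/bin/env ruby
# script/worker
require 'rubygems'
require 'daemons'

worker = File.expand_path('../../config/worker.rb', __FILE__)

Daemons.run_proc('worker', :dir_mode => :normal, :dir => '/tmp',
                 :monitor => true) do
  exec "ruby #{worker}"
end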
The only problem I’ve run into so far is that my HTML email seems to go out without the text/html content-type. Fixing that was a simple matter of putting content_type 'text/html' inside my mailer, which wasn’t needed when I was using delayed_job.
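In Rails 2 mailer terms, that looks something like this (the rest of the method body is illustrative):

class UserMailer < ActionMailer::Base
  def welcome_email(user)
    recipients   user.email
    from         'support@example.com'     # placeholder address
    subject      'Welcome!'
    content_type 'text/html'               # the fix: set the content type explicitly
    body         :user => user
  end
end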
class Application
  # .....
  # somewhere in this block put the following:
  config.middleware.use "::ExceptionNotifier",
    :email_prefix => "[MyApp Error] ",
    :sender_address => %{"notifier" <notifier@example.com>},
    :exception_recipients => %w{youraddress@example.com}
3. Verify:
$ rake middleware | grep ExceptionNotifier
use ExceptionNotifier
Now you’ll get any application errors emailed to the addresses in the exception_recipients array.