Sean’s Obsessions

Sean Walberg’s blog

Managing Secrets in Chef With Hashicorp Vault

The Problem

It’s pretty common to need Chef to store secrets such as database passwords or API keys. The easiest thing to do is to store them in a data bag, but that’s open for the world to see. Encrypted data bags are nice, but key management is a gigantic pain. A better solution is Chef Vault, which encrypts the data bag’s secret once for each client (a client being a Chef node or administrative user).

At the same time your organization likely has a need to keep secret data for applications, too. One could store these secrets in the same place as the Chef secrets but if you don’t like having Chef manage application configuration files then you’re out of luck. HashiCorp Vault is one solution here that we’ve used successfully.

With HashiCorp Vault, each client (or groups of clients) has a token that gives them access to certain secrets, dictated by a policy. So you can say that a certain token can only read user accounts and give that to your application or Chef. But how do you keep that token a secret?

I’ll also add that the management side of HashiCorp Vault is nicer than that of Chef-Vault: there’s a well-defined API that manipulates the secrets directly, rather than the Chef-Vault workflow of fetching the encrypted data bag’s key and editing JSON. Furthermore, this lets us store some of our secrets in the same place that applications are looking, which can be beneficial.

In this example, I have a Chef recipe with a custom resource to create users in a proprietary application. I want to store the user information in HashiCorp vault because the management of the users will be easier for the operations team, and it will also allow other applications to access the same secrets. The basic premise here is that the data will go in HashiCorp Vault and the token to access the HashiCorp Vault will be stored in Chef’s Vault.

The Code

The first thing to do is set up your secrets in HashiCorp Vault. We’ll want to create a policy that only allows read access to the part of the Vault that Chef will read from. Add this to myapp.hcl:

path "secret/myapp/*" {
  policy = "read"
}

Create the policy:

[root@vault ~]# vault policy-write myapp myapp.hcl
Policy 'myapp' written.

Create a token that uses that policy. Note that the token must be renewable, as we’re going to have Chef renew it each time. Otherwise it’ll stop working after a month.

[root@vault ~]# vault token-create -policy=myapp -renewable=true
Key               Value
---               -----
token             ba85411e-ab76-0c0f-c0b8-e26ce294ae0d
token_accessor
token_duration    720h0m0s
token_renewable   true
token_policies    [myapp]

That value beginning with ba85 is the token that Chef will use to talk to the Vault. With your root token you can add your first secret:

vault write secret/myapp/testuser password=abc123 path=/tmp

At this point we have a user in the HashiCorp Vault and a token that will let Chef read it. Test for yourself with vault auth and vault read!
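
Something like the following should confirm the token works (the exact commands depend on your Vault version; newer releases use vault login instead of vault auth):

# Authenticate with the new token, then read the secret back
vault auth ba85411e-ab76-0c0f-c0b8-e26ce294ae0d
vault read secret/myapp/testuser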

Now it’s time to get Chef to store and read that token. Store it as JSON in a file such as secret.json:

{ "token": "ba85411e-ab76-0c0f-c0b8-e26ce294ae0d"}

And create a secret that’s accessible to the servers and any people needed:

knife vault create myapp_credentials vault_token -A sean,server1.example.com -M client -J ./secret.json

This creates a secret in a data bag called myapp_credentials in an item called vault_token. The secret itself is a piece of JSON with a key of token and a value of the token itself. The secret is only accessible by sean (me) and server1.example.com. If you later want to add a new server or user to manage it, you need to run

knife vault update myapp_credentials vault_token -A server2.example.com

This encrypts the data bag’s shared secret so that server2.example.com can also decrypt it.
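
You can also sanity-check the item at any point; knife vault show will only decrypt it if your key is on the access list:

# Decrypt and display the item (fails if you're not on the -A list)
knife vault show myapp_credentials vault_token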

I won’t get into all the details of Chef Vault other than to refer you to this helpful article.

Now, let’s get Chef to read that secret within the recipe! Most things in these examples are static strings to make it easier to read. In a real recipe you’d likely move them to attributes.

First, get chef-vault into your recipe. Begin by referencing it in metadata.rb

depends 'chef-vault'

And in the recipe, include the recipe and use the chef_vault_item helper to read the secret:

include_recipe 'chef-vault'

# The key to unlock the HashiCorp vault is in Chef
bag = chef_vault_item('myapp_credentials', 'vault_token')
vault_token = bag['token']

Now that we have the token for the HashiCorp Vault, we can access the secrets using the vault Rubygem. (The gem isn’t part of the stock Chef install, so you’ll need to make it available to the chef-client, for example with the chef_gem resource.)

vault = Vault::Client.new(address: 'https://vault.example.com:8200')
vault.token = vault_token
# Renew the lease while we're here otherwise it'll eventually expire and be useless.
vault.auth_token.renew_self 3600 * 24 * 30

# Take a listing of the secrets in the path we chose earlier
vault.logical.list('/secret/myapp/').each do |name|
  # Extract each individual secret into a hash
  user_data = vault.logical.read("/secret/myapp/#{name}")
  # Apply the custom resource using parts of the secret
  myapp_user name.downcase do
    unix_user name.downcase
    password user_data.data[:password] # This is the password in the vault
    path user_data.data[:path] # This is the path from the vault
  end
end

The testing story is fairly straightforward. If you use the chef_vault_item helper rather than ChefVault::Item directly, it will automatically fall back to unencrypted data bags, which are easy to mock. Similarly, HashiCorp Vault can be mocked or pointed at a test instance.

This seems to give a good balance of security and convenience. We manage the Chef specific secrets in the Chef Vault, and use the HashiCorp vault for things that are more general. And the pattern is simple enough to be used in other places.

Getting Started With Chef

I’ve been a proponent of configuration management with Chef for a while. It’s done amazing things for me and my workplace and I think everyone could benefit from it. But when I talk to people the question always comes up: “How do I get started? Things move slowly here.” I’m going to share the plan that worked for me. YMMV.

Note - While I talk about Chef, this also goes for Ansible, Salt, Puppet, cfengine, etc.

The plan

First, establish a beachhead. Get the Chef agent on all the servers you can, including your snowflakes. Then, start automating your “new box” workflow so that it’s as hands-off as possible and results in a fairly standardized build with Chef on it. Finally, commit to using those new boxes for everything you do.

Once you have this done, you’ll immediately be able to prove the value of configuration management. You’ll be able to query Chef for things that normally took a while to get (who is running that old kernel version) and be able to automate many ad-hoc tasks (delete that account on all the servers). Over time you can improve to deploy your servers using cookbooks.
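
As a rough sketch of what that looks like once nodes are checking in (the attribute and account names here are only illustrations):

# Which nodes are still running an old kernel?
knife search node 'kernel_release:2.6.32*' -a kernel.release

# Run an ad-hoc command across the fleet, such as removing an old account
knife ssh 'name:*' 'sudo userdel olduser'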

Step 1: Chefify all the current infrastructure

Install Chef Server on a separate box. The server is not necessary to get use out of Chef but it makes things easier. More importantly, once you finish this step you’ll immediately be able to store configuration management data and run queries on your infrastructure (once they run the Chef client).

Next, create your first cookbook that will be a very sparse configuration that applies to all of your current infrastructure. When I did it I called it role-minimal and went down the role cookbook path. The TL;DR of that is that you create a role that only includes the role cookbook so that you get the benefits of versioning.
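
As a sketch, the role ends up being little more than a pointer to the cookbook, and you upload it like any other role (the names here are placeholders):

# A role that does nothing but pull in the role cookbook
cat > roles/minimal.json <<'EOF'
{
  "name": "minimal",
  "run_list": [ "recipe[role-minimal]" ]
}
EOF
knife role from file roles/minimal.json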

What do you put in the minimal role? It can be nothing to start, if you want. Or maybe something that’s so benign that no-one could complain, like setting the banner or motd:

cookbook_file '/etc/motd' do
  source 'motd'
  mode '0644'
end

and then put your motd in files/default/motd. This will manage the file and ensure that all nodes have the same MOTD. Who can argue with that? You probably already have an item in your backlog to do that anyway.

The other thing to add to this cookbook is to make the Chef client run on a schedule with the chef-client cookbook:

default['chef_client']['init_style'] = 'none'
default['chef_client']['cron'] = {
  minute: '*/30',
  hour: '*',
  path: nil,
  environment_variables: nil,
  log_file: '/dev/null',
  use_cron_d: false,
  mailto: nil
  }

That can go in your attributes for the recipe to run it every half hour, or whatever you want. Don’t forget to include_recipe 'chef-client::cron' to have Chef manipulate your crontab to add the job.

You may want to create environments if that’s the way you do things.

After this, start bootstrapping your machines with knife bootstrap and a run list containing your new role. Maybe start with the non-production servers. Don’t worry too much if people resist; you can leave those servers alone for now.
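
A bootstrap run might look something like this (the SSH user, node name, and role name are placeholders for your environment):

# Install the Chef client on the node and apply the minimal role in one shot
knife bootstrap server1.example.com -x deploy --sudo -N server1.example.com -r 'role[minimal]'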

Now you have Chef running in a low risk fashion. But what’s going to happen? Someone will eventually need something:

  • We need to move to LDAP authentication
  • We need to ensure that we’re never running an old version of openssl
  • We need to know which servers are running which kernels

And then you can put up your hand and say it’s easy, because you happen to have Chef agents on each server, so the easy way would be to leverage that. Except that server that they were concerned about earlier – did they want that one done by hand or should we use our new repeatable automated process? Great, I’ll just go bootstrap that node.

Step 2: Fix your provisioning

This one really depends on how you build new machines. The general idea is that you want to come up with a base machine configuration that everything you do from now on is built from. We use the knife vsphere plugin to create new images with Chef already bootstrapped, but depending on what you use, you may need to search the plugin directory.

Create a second role for all the new stuff. We call ours role-base. This can be identical to role-minimal, but you may want to add some stuff to make your life easier. For example, I feel it should be a crime to run a server without sar, so I have our base role make sure that the sysstat package is up to date, plus we throw in some other goodies like screen, strace, lsof, and htop.

After this, commit to using this base image wherever humanly possible. Your boxes will be more consistent.

Step 3: Write cookbooks for new stuff

Sooner or later you’ll have a project that needs some servers. Do what you can to leverage community cookbooks or your own cookbooks to save yourself time and enforce consistency.

The benefit here is that all your MySQL servers will be consistent. You may not be able to fix your old servers, but at least everything new will be consistent. You’ll also be able to create new machines or environments much more easily because the base image and the apps will be in code and not some checklist in the wiki. You’ll spend less time worrying about small details and more time thinking about bigger picture items.

I don’t have much advice here other than to do it. You’re definitely going to learn as you go and make mistakes. But it’s code, you can correct it and move on.

The other part of this step is to promote this new tool with your co-workers. Show them how knife tools can make your life easier. Learn how to write cookbooks as a group. Get a culture of reviewing each other’s code so you learn new ways of doing things and share information.

Step 4: while (true) { get_better }

There’s not a whole lot to say here. Your first few weeks with Chef are going to be hard, but you’ll find that it gives you so many benefits in consistency and speed that it’s worth it. Mastering devops practices is an ongoing thing.

I recommend contributing patches back to community cookbooks as a way to get better. You will find problems with other people’s cookbooks eventually, and you can submit fixes and have them help other people.

At some point, not doing things in Chef will just seem strange.

LPI Certification Book Is Out

Ross Brunson and I spent a lot of time working on a study guide for the Linux+ and LPIC-1 certification, and I’m happy to say it’s finally out. Here’s an Amazon link.

I’m particularly proud of this one. It started out as an update to Ross’ book from 10 years ago, but in the end we rewrote most of it, expanded other sections, and added a ton of new content to cover the new objectives. Ross is a former employee of LPI so you know this book will cover the right material.

Canadian Equity Crowdfunding Rules

Be forewarned, I’m not a lawyer or a securities dealer. I’m just an interested person writing down my opinion.

Back in May the Canadian Securities Administrators released the Start-up Crowdfunding Registration and Prospectus Exemptions (PDF). The core idea is that in certain situations a startup can sell equity in the company without filing a detailed prospectus or registering as a dealer. This is often called crowdfunding, though here it’s more properly equity crowdfunding.

What’s the difference? In crowdfunding sites, such as Kickstarter, you give a project money in return for a prize or product. The project is assured a certain amount of sales to cover any capital costs, which should put them on a good footing for later sales. If the project is a success or a bust you don’t have any long term interest in the company – you’re basically pre-purchasing a product. In the equity crowdfunding scenario you’re buying a piece of the company much like if you bought it on the stock market. If the company does well then your investment may be worth a multiple of the purchase price. If the company does poorly then it’s worth nothing.

Normally these types of equity transactions are heavily regulated to prevent fraud and people from preying on novice investors. These new guidelines are an attempt to reduce the regulations while still preserving the investor protection. It should be noted that these only apply to British Columbia, Saskatchewan, Manitoba, Québec, New Brunswick and Nova Scotia. It is interesting to note that Ontario is developing their own rules.

While there are many conditions in these new equity crowdfunding guidelines the most important are:

  • The issuer (startup company) must write up an offering document containing basic information about the company and the money being raised. No financial statements are required; in fact, they aren’t even supposed to be attached to the offering document.
  • Each distribution (fundraising round) can be up to $250,000 and an issuer can only raise twice per calendar year.
  • A distribution declares a minimum amount it will raise, and it must raise that amount within 90 days or the money is returned to investors.

Additionally the investors have some rights and limitations:

  • Can only contribute $1,500 per distribution
  • Has 48 hours to back out of a decision (either after the initial decision or after any updates have been made to the offering document)
  • May not be charged a fee for investing (i.e. the issuer must pay any fees)
  • Can not resell their shares unless
    • They are sold as part of another distribution
    • They are sold under a formal prospectus (e.g. what this system is trying to avoid)
    • The company becomes publicly listed and then only after a 4 month hold period expires

These distributions are expected to be sold through online portals that are either run by registered dealers or privately.

My take on all this is that it’s a good start. My main problem is the low limit on personal contributions. If you raise $100k you need at least 67 people to buy in. I realize there must be a limit to protect the downside, but it seems it could have been a lot higher, maybe as much as $5,000. You now have 67 investors to manage, though you can make sure that the shares issued to this class of shareholders have very few rights. If you go for institutional funding after a crowdfunded round then this may complicate things.

Thinking About Cyber Security

There have been a lot of high profile security breaches lately. If people like Sony can get hacked, what chance do you have?

The people at Sony are humans. They get together at the water cooler and complain about the state of affairs and all the legacy applications they have to support. Even new companies like Dropbox are going to have dark corners in their applications. Money isn’t going to solve all these problems - Apple has billions of dollars and still gave up information about users. The bigger the company, the bigger the attack surface and the more the team has to defend.

How do you prevent your company or product from being hacked given your resources are finite and you may not be able to change everything you want to?

I’ve been thinking of how Agile methods such as rapid iteration and sprints could be applied to security. With that in mind, some high level principles:

  • Solutions to problems should be ranked in terms of business value
  • If a solution takes more than a week or two to implement, it should be broken down into individual phases with their own business value scores
  • It’s not necessary to completely solve the problem as long as you’re better off than you were before. You can make it better the next iteration
  • Instead of “how can we eliminate all risk?” the better question is “how can we make an attacker’s work more difficult?”
  • Detection is just as important as prevention. Look at safes – they protect valuables against a determined adversary for a given period of time, it’s still up to you to make sure you can react in that timeframe

The list above is trying to get away from the traditional security project where you spend lots of time producing documentation, shelve it, and then provide a half-assed solution to meet the deadline. Instead you break the solution into parts and try and continually produce business value. Time for a diagram:

agile vs waterfall

Even in the best case where you deliver full value, why not try to deliver parts of it sooner?

Look at it this way – at any point in time you know less about your problems than you ever will again. It’s folly to think you can solve them all with some mammoth project. Pick something, fix it, move on. You have an end goal for sure, but the path may change as you progress.

With that in mind, how do you figure out what to do?

One organization technique I’ve found helpful is the attack tree. Here you define the goal of the attacker: Steal some money, take down the site, and so forth. Then you start coming up with some high level tasks the attacker would have to do in order to accomplish the goal. The leaf nodes are the more actionable things. For example, consider what it would take to deface a website:

attack tree to deface website

While not totally complete, this attack tree shows where the attack points are. Given that, some low effort and high value activities that could be done:

  • Audit who has access to CDN, DNS, and registrar accounts
  • Audit CMS accounts

Some higher effort activities would then be:

  • Code fix to enforce strong passwords
  • Code fix to lock out accounts after a certain period
  • Code fix to centralize the authentication to the corporate directory
  • Investigate two factor or SAML login with hosting providers
  • Network fix to ban IPs after a certain number of attempts
  • Monitor failed login attempts and investigate manually

Some of those options may be a lot of work. But if you first start with a simple password policy, build on that in the next iteration to lock out accounts, and finally tie in to another directory, you’re able to incrementally improve by making small fixes and learning as you go.

What if a group like Anonymous threatens to attack your website on the weekend? Look at the attack tree, what kind of confusion can you throw at the attacker? Change the URL of the login page? Put up a static password on the web server to view the login screen itself? Security through obscurity is not a long term fix, but as a tactical approach it can be enough to get you past a hurdle.

Too often security projects are treated in a waterfall manner. You must figure everything out front and then implement the solution, with the business value delivered at the end. Instead, treat this all as a continual learning exercise and strive to add value at each iteration. If the situation changes in the middle of the project, like an imminent threat, you’re in a better position to change course and respond.

Test Driven Infrastructure

In software, test driven development happens when you write an automated test that proves what you are about to write is correct, you write the code to make the test pass, then you move on to the next thing. Even if you don’t follow that strict order (e.g. write your code, then write a test), the fact that there’s a durable test that you can run later to prove the system still works is very valuable. All the tests together give you a suite of tools to help prove that you have done the right thing and that regressions haven’t happened.

What about the infrastructure world? We’ve always had some variant of “can you ping it now?”, or some high level Nagios tests. But there’s still some value to knowing that your test was good – if you make a change and then test, how can you be sure your test is good? If you ran the same test first you’d know it failed, then you could make your change. And then there’s the regression suite. A suite of tests that may be too expensive to run every 5 minutes through Nagios but are great to run to verify your change didn’t break anything.

Enter the Bash Automated Testing System (BATS), a Bash-based test suite. It’s a thin wrapper around the commands that you’d normally run in a script, but if you follow the conventions you get some easy-to-use helpers and easy-to-interpret output.

As an example, I needed to configure an nginx web server to perform a complicated series of redirects based on the user agent and link. I had a list of “if this then that” type instructions from the developer but had to translate them into a set of cURL commands. Once I had that it was simple to translate them into a BATS test that I could use to prove the system was working as requested and ideally share with my team so they could verify correctness if they made changes.

share_link tests
#!/usr/bin/env bats

@test "root" {
  run curl http://example.com
  [[ $output =~ "doctype html" ]]
}

@test "mobile redirects to share" {
  run curl -H "User-Agent: this is an iphone" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://app.example.com/share/65ac7f12-ac2e-43f4-8b09-b3359137f36c" ]]
}

@test "mobile redirects to share and keeps query string" {
  run curl -H "User-Agent: this is an iphone" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://app.example.com/share/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b" ]]
}

@test "desktop redirects to play" {
  run curl -H "User-Agent: dunno bob" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://app.example.com/play/65ac7f12-ac2e-43f4-8b09-b3359137f36c" ]]
}

@test "desktop redirects to play and keeps query string" {
  run curl -H "User-Agent: dunno bob" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://app.example.com/play/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b" ]]
}

@test "bots redirect to main site" {
  run curl -H "User-Agent: facebookexternalhit" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://www.example.com/app/social?id=65ac7f12-ac2e-43f4-8b09-b3359137f36c" ]]
}

@test "bots redirect to main site and keeps query string" {
  run curl -H "User-Agent: facebookexternalhit" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://www.example.com/app/social?id=65ac7f12-ac2e-43f4-8b09-b3359137f36c&a=b" ]]
}

And running the tests with one mistake in the configuration:

$ bats ~/Downloads/share_link.bats
 ✓ root
 ✓ mobile redirects to share
 ✗ mobile redirects to share and keeps query string
   (in test file /Users/sean/Downloads/share_link.bats, line 17)
     `[[ $output =~ "301 Found" ]]' failed
 ✓ desktop redirects to play
 ✓ desktop redirects to play and keeps query string
 ✓ bots redirect to ndc
 ✓ bots redirect to ndc and keeps query string

 7 tests, 1 failure

With the tests in place it’s more clear when the configurations are correct.

As a bonus, if you use Test Kitchen for your Chef recipes, you can include BATS-style tests that will be run. So if this configuration is in Chef (which it was), I can have my CI system run these tests whenever the cookbook changes (which I don’t do yet).
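
For what it’s worth, busser picks up BATS files from the cookbook’s test directory, so wiring this into Test Kitchen is roughly the following (the suite name here is just the default):

# Put the tests where busser-bats expects them, then run them on the test instance
mkdir -p test/integration/default/bats
cp share_link.bats test/integration/default/bats/
kitchen verify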

Using Google Universal Analytics With NationBuilder

We spent a lot of time trying to understand our visitors at Bowman for Winnipeg and part of that was using Google Analytics. The site was built with NationBuilder but they only support the async version of Analytics and it’s difficult to customize. In particular, we used the demographic and remarketing extensions and there was no easy way to alter the generated javascript to get it to work.

Normally you’d just turn off your platform’s analytics plugins and do it yourself, but NationBuilder has a great feature that fires virtual page views when people fill out forms, and we wanted to use that for goal tracking.

The solution was to turn off NationBuilder’s analytics and do it ourselves but write some hooks to translate any async calls into universal calls. Even with analytics turned off in our NationBuilder account, they’d fire the conversion events so this worked out well.

In the beginning of our template:

Header code
<script type="text/javascript">
  var _gaq = _gaq || []; // leave for legacy compatibility
   var engagement = {% if request.logged_in? %}"member"{% else %}"public"{% endif %};
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
  ga('create', 'UA-XXXXXXXX-1', 'www.bowmanforwinnipeg.ca');
  ga('require', 'displayfeatures');
  ga('require', 'linkid', 'linkid.js');
  ga('send', 'pageview', { "dimension1": engagement});
</script>

It’s pretty vanilla Google Analytics code with a couple of additions – the display features for layering on demographics and retargeting, and enhanced link attribution for better tracking of clicks within the site. We also added a custom dimension so we could separate out people who took the time to create an account in our nation versus those who were public.

Then, at the bottom:

Footer code
 $(function() {
    // Send anything NB tried to inject
    for(i in _gaq) { var c = _gaq[i]; if(c[0] == "_trackPageview"){ ga('send', 'pageview', c[1]) } }
  });

That’s a simple loop that iterates through the _gaq array (that the async version of the tracker uses as an event queue) and pushes out any page views using the universal API. We didn’t have to worry about the initial page view because turning off NationBuilder’s analytics feature removes all of that.

Review: Agile Web Development With Rails 4

I had the good fortune to receive Agile Web Development with Rails 4 from the publisher to review.

The book is really two books in one. The first is a walkthrough of building a simple Rails 4 application for an online bookstore. The second is a component by component look at Rails 4. If you want to be generous there’s a quick introduction to Ruby and Rails concepts at the beginning of the book.

The online bookstore walkthrough is interesting especially if you are new to Rails and the ideas behind Agile development. You take the role of a developer who is building an online bookstore for a client. You start off with a general idea of what needs to be done, but you build it incrementally, showing it to your customer at each step. Based on feedback you decide what to tackle next, such as showing pictures, getting details, or adding a shopping cart. Along the way there are some discussions of deployment, internationalization, authentication, and sending email.

Through the examples you learn the basics of creating Rails models, views, and controllers. Though the examples lean heavily on scaffolding to automatically generate the CRUD actions, you do extend it somewhat to handle XML and JSON responses. You also do some AJAX and automated testing. The book does stick pretty closely to the default Rails toolset including the test framework, though at the very end of the book there are some quick examples on using HAML for views.

At the end of each chapter are some exercises. Unlike many other books I found them to be of appropriate difficulty, with answers available online.

The second half of the book is a detailed look at the components of Rails. This is the more technical part of the book as it gets into specifics of how a request is processed and how a response is generated. There’s no bookstore application anymore, it’s just discussion of what’s available, code samples, and diagrams.

Along the way there are some interesting sidebars that explain some of the Rails philosophies or some real world scenarios. For example, one of the sidebars talks about when you want to use a database lookup that raises an exception on failure versus one that returns a nil or an empty set.

I didn’t read any of the previous editions so I can’t say with authority how much has changed. The book is up to date on the many changes that came in Rails 4, so it is current in that respect. However, there are times when some older terminology, like fragment or page caching, creeps in. This is more a matter of editing than of being out of date, as the associated examples are correct. The index is fairly weak – many of the terms I tried to look up, including those featured on the back cover, were not found.

If you’re an experienced Rails developer this book will not help you much. But if you’re looking to get into Ruby on Rails, either from another language or even with a weak programming background, this book is an excellent way to get started. At a little over 400 pages it’ll take you a few weekends to get through depending on your skill level.

Using an IP Phone With Twilio

This article is now outdated.

Twilio has added more features to their SIP calling since this article was written, and there’s a much easier way to connect a SIP phone to Twilio that you should use.

Twilio has supported SIP termination for a while but recently announced SIP origination. This means that previously you could receive calls with SIP but now you can also make calls from a hard phone using SIP instead of using the browser client or going through the PSTN.

It was this second announcement that got my interest. I have an IP phone that I use in my office; currently it’s through Les.net, but I like the pricing and interface of Twilio and would rather use them.

For some reason everything I read about SIP and Twilio uses a separate SIP proxy even if they have a compliant SIP device. Even their own blog takes a working SIP ATA and puts it behind Asterisk. I knew I could do better.

What you’ll need

  • An IP phone. I used a Cisco 7960 converted to SIP
  • A publicly available web server running PHP (feel free to use another language, we have to do some basic processing of the request so static won’t work)
  • A Twilio account

When thinking about VoIP, always think in two directions. Inbound and outbound. Sending and receiving. Talking and listening.

Receiving calls

Get a number and point the Voice Request URL to your web server. Please don’t use mine.

Phone Number | Dashboard | Twilio 2013-10-21 08-04-09

Your inbound.php script will send some TwiML to dial your phone:

<?xml version="1.0" encoding="UTF-8"?>
<Response>
<Dial>
    <Sip>
        sip:line_number@address_of_phone
    </Sip>
</Dial>
</Response>

Note: this part was a lot of trouble. After some packet tracing and some brilliant detective work by Brian from Twilio support, it turns out that the address of the phone in the SIP invite had to be an IP address, not a hostname. With a hostname the phone received the INVITE but always returned 404.

Your phone will need to be on the Internet, either with a public address or with UDP port 5060 port forwarded to it. The “line_number” has to match the name of the line on your phone. In my case, I named my line after my phone number:

proxy2_address: "ertw.sip.twilio.com"
line2_name: "204xxxxxxx"
line2_shortname: "204xxxxxxx"
line2_displayname: "204xxxxxxx"
line2_authname: "204xxxxxxx"

One thing to note is that you don’t register your phone to Twilio. I left the proxy address there so that the requests will keep the NAT translation alive. After detailed packet tracing it looks like the Twilio SIP messages originate from different IP addresses so this might not be helping as much as I thought.

At this point you should be able to dial your Twilio number from a regular phone. Twilio will request inbound.php and then do a SIP INVITE to the phone. The phone will accept it and then you have voice!

Making calls

The first step is to set up a SIP domain in Twilio:

Twilio User - Account Sip Domains 2013-10-21 08-15-14

Call it whatever you want, but you’ll need to set the Voice URL.

Twilio User - Account Sip Domains 2013-10-21 08-15-57

The script you point it at has to parse the data coming in from Twilio to find the phone number and then issue a Dial instruction to get Twilio to dial the phone and connect the two ends.

<?php
  $called = preg_replace('/sip:1?(.*?)@.*/', '$1', $_POST['Called']);
  header("content-type: text/xml");
  echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
  ob_start();
  var_dump($called);
  var_dump($_POST);
  $contents = ob_get_contents();
  ob_end_clean();
  error_log($contents, 3, "/tmp/twilio.txt");
?>
<Response>
  <Dial timeout="10" record="false" callerId="YOURTWILIONUMBER"><?= $called ?></Dial>
</Response>

All we’re doing here is extracting the phone number from the Called header that Twilio sends us, stripping any leading 1’s, and then sending a TwiML response to dial that number. The ob_start through to the error_log is just logging the parameters if you’re interested.

Don’t forget to change the caller ID to your phone number, otherwise you get errors in the Twilio console.

So now when you place a call on your phone, Twilio will send the digits to the application which will return a Dial verb and the proper 10 digit number. Twilio links the two calls.

Conclusions

It took a bit of playing around to get this going but now I’ve shown that you don’t need an Asterisk server to integrate a SIP phone with Twilio. If you are setting up a phone bank or something with hard phones you can just configure them to hit Twilio, and for Twilio to hit them.

Of course, if you are expecting significant inbound traffic the benefit of a SIP proxy is that it can direct calls to multiple numbers without needing Twilio to be able to reach the phone directly. I’m hoping that Twilio can improve on that in the future!

Find Method Definitions in Vim With Ctags

Ever asked yourself one of these questions?

  • Where is the foo method defined in my code?
  • Where is the foo method defined in the Rails source?

Then you spend a couple of minutes either grepping your source tree or looking on GitHub and then going back to your editor.

This weekend I went through VIM for Rails developers. There’s a lot that’s out of date, but there’s also some timeless stuff in there too. One thing I saw in there was the use of ctags which is a way of indexing code to help you find out where methods are defined.

Install the ctags package with brew/yum/apt/whatever. Then generate the tags with

ctags -R --exclude=.git --exclude=log *

You may want to add tags to your ~/.gitignore because you don’t want to check this file in.

Also add set tags=./tags; to your .vimrc which tells vim to look for the tags in the current directory. If you have it in a parent directory, use set tags=./tags;/ which tells vim to work backward until it’s found.

Then, put your cursor on a method and type control-] and you’ll jump to the definition. control-T or control-O will take you back to your code. control-W control-] opens it up in a horizontal split. Stack Overflow has some mappings you can use to reduce the number of keystrokes or use vertical splits.

If you use bundler and have it put the gems in your vendor directory, ctags will index those too. So you can look up Rails (or any gem) methods.
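
One possible workflow, assuming Bundler 1.x style options, is to vendor the gems and then rebuild the tags file:

# Install gems into the project, then index everything including vendor/
bundle install --path vendor/bundle
ctags -R --exclude=.git --exclude=log *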