<![CDATA[Sean's Obsessions]]> 2019-03-10T09:31:46-04:00 https://ertw.com/blog/ Octopress <![CDATA[Managing Secrets in Chef With Hashicorp Vault]]> 2016-11-08T09:51:49-05:00 https://ertw.com/blog/2016/11/08/managing-secrets-in-chef-with-hashicorp-vault The Problem

It’s pretty common to need Chef to be able to store secrets such as database passwords or API keys. The easiest thing to do is to store them in a data bag, but that’s open for the world to see. Encrypted data bags are nice but key management is a gigantic pain. A better solution is Chef Vault, which encrypts the data bag’s secret once for each client (a client being a Chef node or administrative user).

At the same time your organization likely has a need to keep secret data for applications, too. One could store these secrets in the same place as the Chef secrets but if you don’t like having Chef manage application configuration files then you’re out of luck. HashiCorp Vault is one solution here that we’ve used successfully.

With HashiCorp Vault, each client (or groups of clients) has a token that gives them access to certain secrets, dictated by a policy. So you can say that a certain token can only read user accounts and give that to your application or Chef. But how do you keep that token a secret?

I’ll also add that the management of HashiCorp Vault is nicer than that of Chef-Vault: there’s a well defined API that directly manipulates the secrets, rather than having to go through the Chef-Vault layer of getting the private key for the encrypted data bag and editing JSON. Furthermore, this lets us store some of our secrets in the same place that applications are looking, which can be beneficial.

In this example, I have a Chef recipe with a custom resource to create users in a proprietary application. I want to store the user information in HashiCorp Vault because managing the users will be easier for the operations team, and it will also allow other applications to access the same secrets. The basic premise here is that the data will go in HashiCorp Vault and the token to access the HashiCorp Vault will be stored in Chef’s Vault.

The Code

The first thing to do is set up your secrets in HashiCorp Vault. We’ll want to create a policy that only allows read access to the part of the Vault that Chef will read from. Add this to myapp.hcl:

path "secret/myapp/*" {
  policy = "read"
}

Create the policy:

[root@vault ~]# vault policy-write myapp myapp.hcl
Policy 'myapp' written.

Create a token that uses that policy. Note that the token must be renewable, as we’re going to have Chef renew it each time. Otherwise it’ll stop working after a month.

[root@vault ~]# vault token-create -policy=myapp -renewable=true
Key               Value
---               -----
token             ba85411e-ab76-0c0f-c0b8-e26ce294ae0d
token_accessor
token_duration    720h0m0s
token_renewable   true
token_policies    [myapp]

That value beginning with ba85 is the token that Chef will use to talk to the Vault. With your root token you can add your first secret:

vault write secret/myapp/testuser password=abc123 path=/tmp

At this point we have a user in the HashiCorp Vault and a token that will let Chef read it. Test for yourself with vault auth and vault read!

Now it’s time to get Chef to store and read that key. Store the token in some JSON such as secret.json.

{ "token": "ba85411e-ab76-0c0f-c0b8-e26ce294ae0d"}

And create a secret that’s accessible to the servers and any people needed:

knife vault create myapp_credentials vault_token -A sean,server1.example.com -M client -J ./secret.json

This creates a secret in a data bag called myapp_credentials in an item called vault_token. The secret itself is a piece of JSON with a key of token and a value of the token itself. The secret is only accessible by sean (me) and server1.example.com. If you later want to add a new server or user to manage it, you need to run

knife vault update myapp_credentials vault_token -A server2.example.com

This re-encrypts the data bag secret so that server2.example.com can decrypt it as well.

I won’t get into all the details of Chef Vault other than to refer you to this helpful article.

Now, let’s get Chef to read that secret within the recipe! Most things in these examples are static strings to make it easier to read. In a real recipe you’d likely move them to attributes.

First, get chef-vault into your recipe. Begin by referencing it in metadata.rb

depends 'chef-vault'

And in the recipe, include the recipe and use the chef_vault_item helper to read the secret:

include_recipe 'chef-vault'

# The key to unlock the HashiCorp vault is in Chef
bag = chef_vault_item('myapp_credentials', 'vault_token')
vault_token = bag['token']

Now that we have the token to the HashiCorp Vault, we can access the secrets using the Vault Rubygem.

vault = Vault::Client.new(address: 'https://vault.example.com:8200')
vault.token = vault_token
# Renew the lease while we're here otherwise it'll eventually expire and be useless.
vault.auth_token.renew_self 3600 * 24 * 30

# Take a listing of the secrets in the path we chose earlier
vault.logical.list('/secret/myapp/').each do |name|
  # Extract each individual secret into a hash
  user_data = vault.logical.read("/secret/myapp/#{name}")
  # Apply the custom resource using parts of the secret
  myapp_user name.downcase do
    unix_user name.downcase
    password user_data.data[:password] # This is the password in the vault
    path user_data.data[:path] # This is the path from the vault
  end
end
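
One detail the snippet above glosses over: the vault Rubygem has to be available inside the Chef run before Vault::Client can be used. One way to handle that (a sketch; there are other approaches) is to install and require the gem at compile time:

chef_gem 'vault' do
  compile_time true if respond_to?(:compile_time) # Chef 12.1+ property
end
require 'vault'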

The testing story is fairly straightforward. If you’re using the chef_vault_item helper, as opposed to going through ChefVault::Item directly, it’ll automatically fall back to plain unencrypted data bags, which are easily mockable. Similarly, HashiCorp Vault can be mocked or pointed at a test instance.
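
As a rough sketch of what that looks like with ChefSpec (the spec path, the myapp::default recipe name, and the stubbing of the Vault client are all assumptions on my part), you can seed a plain data bag item and never touch a real vault:

# spec/unit/recipes/default_spec.rb (hypothetical)
require 'chefspec'
require 'chefspec/server_runner'
require 'vault'

describe 'myapp::default' do
  let(:chef_run) do
    ChefSpec::ServerRunner.new do |node, server|
      # chef_vault_item falls back to this plain item because it isn't encrypted
      server.create_data_bag('myapp_credentials',
                             'vault_token' => { 'token' => 'test-token' })
    end.converge(described_recipe)
  end

  before do
    # Keep the converge offline by stubbing the HashiCorp Vault client too
    fake_vault = double('vault',
                        logical: double(list: []),
                        auth_token: double(renew_self: nil))
    allow(fake_vault).to receive(:token=)
    allow(Vault::Client).to receive(:new).and_return(fake_vault)
  end

  it 'converges using only the stubbed secrets' do
    expect { chef_run }.to_not raise_error
  end
end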

This seems to give a good balance of security and convenience. We manage the Chef-specific secrets in the Chef Vault, and use the HashiCorp Vault for things that are more general. And the pattern is simple enough to be used in other places.

]]>
<![CDATA[Getting Started With Chef]]> 2016-06-03T09:41:45-04:00 https://ertw.com/blog/2016/06/03/getting-started-with-chef I’ve been a proponent of configuration management with Chef for a while. It’s done amazing things for me and my workplace and I think everyone could benefit from it. But when I talk to people the question always comes up: “How do I get started? Things move slowly here.” I’m going to share the plan that worked for me. YMMV.

Note - While I talk about Chef, this also goes for Ansible, Salt, Puppet, cfengine, etc.

The plan

First, establish a beachhead. Get the Chef agent on all the servers you can, including your snowflakes. Then, start automating your “new box” workflow so that it’s as hands-off as possible and results in a fairly standardized build with Chef on it. Finally, commit to using those new boxes for everything you do.

Once you have this done, you’ll immediately be able to prove the value of configuration management. You’ll be able to query Chef for things that normally took a while to get (who is running that old kernel version?) and be able to automate many ad-hoc tasks (delete that account on all the servers). Over time you can work up to deploying your servers entirely from cookbooks.

Step 1: Chefify all the current infrastructure

Install Chef Server on a separate box. The server is not necessary to get use out of Chef but it makes things easier. More importantly, once you finish this step you’ll immediately be able to store configuration management data and run queries on your infrastructure (once they run the Chef client).

Next, create your first cookbook that will be a very sparse configuration that applies to all of your current infrastructure. When I did it I called it role-minimal and went down the role cookbook path. The TL;DR of that is that you create a role that only includes the role cookbook so that you get the benefits of versioning.
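
In case the pattern isn’t familiar, a minimal sketch (names match the ones above): the role itself never changes, and all the churn happens in the versioned role-minimal cookbook.

# roles/minimal.rb
name 'minimal'
description 'Baseline applied to every existing node'
run_list 'recipe[role-minimal]'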

What do you put in the minimal role? It can be nothing to start, if you want. Or maybe something that’s so benign that no-one could complain, like setting the banner or motd:

cookbook_file '/etc/motd' do
  source 'motd'
  mode '0644'
end

and then put your motd in files/default/motd. This will manage the file and ensure that all nodes have the same MOTD. Who can argue with that? You probably already have an item in your backlog to do that anyway.

The other thing to add to this cookbook is to make the Chef client run on a schedule with the chef-client cookbook

default['chef_client']['init_style'] = 'none'
default['chef_client']['cron'] = {
  minute: '*/30',
  hour: '*',
  path: nil,
  environment_variables: nil,
  log_file: '/dev/null',
  use_cron_d: false,
  mailto: nil
  }

That can go in your attributes for the recipe to run it every half hour, or whatever you want. Don’t forget to include_recipe 'chef-client::cron' to have Chef manipulate your crontab to add the job.
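
Wiring that up in the role cookbook is only a couple of lines (a sketch, using the role-minimal name from above):

# cookbooks/role-minimal/metadata.rb
depends 'chef-client'

# cookbooks/role-minimal/recipes/default.rb
include_recipe 'chef-client::cron'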

You may want to create environments if that’s the way you do things.

After this, start bootstrapping your machines with knife bootstrap and a run list containing your new role. Maybe start with the non-production servers. Don’t worry too much if people resist; you can leave those servers alone for now.

Now you have Chef running in a low risk fashion. But what’s going to happen? Someone will eventually need something:

  • We need to move to LDAP authentication
  • We need to ensure that we’re never running an old version of openssl
  • We need to know which servers are running which kernels

And then you can put up your hand and say it’s easy, because you happen to have Chef agents on each server, so the easy way would be to leverage that. Except that server that they were concerned about earlier – did they want that one done by hand or should we use our new repeatable automated process? Great, I’ll just go bootstrap that node.

Step 2: Fix your provisioning

This one really depends on how you build new machines. The general idea is that you want to come up with a base machine configuration that everything you do from now on is built from. We use the knife vsphere plugin to create new images with Chef already bootstrapped, but depending on what you use, you may need to search the plugin directory.

Create a second role for all the new stuff. We call ours role-base. This can be identical to role-minimal, but you may want to add some stuff to make your life easier. For example, I feel it should be a crime to run a server without sar, so I have our base role make sure that the sysstat package is up to date, plus we throw in some other goodies like screen, strace, lsof, and htop.
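
In recipe form that’s about as simple as it gets (a sketch; adjust package names for your distribution):

# role-base extras: keep sysstat current and add the usual debugging tools
package 'sysstat' do
  action :upgrade
end

%w(screen strace lsof htop).each do |pkg|
  package pkg
end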

After this, commit to using this base image wherever humanly possible. Your boxes will be more consistent.

Step 3: Write cookbooks for new stuff

Sooner or later you’ll have a project that needs some servers. Do what you can to leverage community cookbooks or your own cookbooks to save yourself time and enforce consistency.

The benefit here is that all your MySQL servers will be consistent. You may not be able to fix your old servers, but at least everything new will be consistent. You’ll also be able to create new machines or environments much more easily because the base image and the apps will be in code and not some checklist in the wiki. You’ll spend less time worrying about small details and more time thinking about bigger picture items.

I don’t have much advice here other than to do it. You’re definitely going to learn as you go and make mistakes. But it’s code, you can correct it and move on.

The other part of this step is to promote this new tool with your co-workers. Show them how knife tools can make your life easier. Learn how to write cookbooks as a group. Get a culture of reviewing each other’s code so you learn new ways of doing things and share information.

Step 4: while (true) { get_better }

There’s not a whole lot to say here. Your first few weeks with Chef are going to be hard, but you’ll find that it gives you so many benefits in consistency and speed that it’s worth it. Mastering devops practices is an ongoing thing.

I recommend contributing patches back to community cookbooks as a way to get better. You will find problems with other people’s cookbooks eventually, and you can submit fixes and have them help other people.

At some point, not doing things in Chef will just seem strange.

]]>
<![CDATA[LPI Certification Book Is Out]]> 2015-12-26T10:27:58-05:00 https://ertw.com/blog/2015/12/26/lpi-certification-book-is-out Ross Brunson and I spent a lot of time working on a study guide for the Linux+ and LPIC-1 certification, and I’m happy to say it’s finally out. Here’s an Amazon link.

I’m particularly proud of this one. It started out as an update to Ross’ book from 10 years ago, but in the end we rewrote most of it, expanded other sections, and added a ton of new content to cover the new objectives. Ross is a former employee of LPI so you know this book will cover the right material.

]]>
<![CDATA[Canadian Equity Crowdfunding Rules]]> 2015-09-07T09:58:42-04:00 https://ertw.com/blog/2015/09/07/canadian-equity-crowdfunding-rules Be forewarned, I’m not a lawyer or a securities dealer. I’m just an interested person writing down my opinion.

Back in May the Canadian Securities Administrators released the Start-up Crowdfunding Registration and Prospectus Exemptions (PDF). The core idea is that in certain situations a startup company can sell equity in their company without filing a detailed prospectus or registering as a dealer, often called crowdfunding, though here more properly called equity crowdfunding.

What’s the difference? In crowdfunding sites, such as Kickstarter, you give a project money in return for a prize or product. The project is assured a certain amount of sales to cover any capital costs, which should put them on a good footing for later sales. If the project is a success or a bust you don’t have any long term interest in the company – you’re basically pre-purchasing a product. In the equity crowdfunding scenario you’re buying a piece of the company much like if you bought it on the stock market. If the company does well then your investment may be worth a multiple of the purchase price. If the company does poorly then it’s worth nothing.

Normally these types of equity transactions are heavily regulated to prevent fraud and people from preying on novice investors. These new guidelines are an attempt to reduce the regulations while still preserving the investor protection. It should be noted that these only apply to British Columbia, Saskatchewan, Manitoba, Québec, New Brunswick and Nova Scotia. It is interesting to note that Ontario is developing their own rules.

While there are many conditions in these new equity crowdfunding guidelines the most important are:

  • The issuer (startup company) must write up an offering document containing basic information about the company and the money being raised. No financial statements are required; in fact, they’re not even supposed to be attached to the offering document.
  • Each distribution (fundraising round) can be up to $250,000 and an issuer can only raise twice per calendar year.
  • A distribution declares a minimum amount it will raise, and it must raise that within 90 days or the money is returned to investors

Additionally the investors have some rights and limitations:

  • Can only contribute $1,500 per distribution
  • Has 48 hours to back out of a decision (either after the initial decision or after any updates have been made to the offering document)
  • May not be charged a fee for investing (i.e. the issuer must pay any fees)
  • Can not resell their shares unless
    • They are sold as part of another distribution
    • They are sold under a formal prospectus (e.g. what this system is trying to avoid)
    • The company becomes publicly listed and then only after a 4 month hold period expires

These distributions are expected to be sold through online portals that are either run by registered dealers or privately.

My take on all this is that it’s a good start. My main problem is the low limit on the personal contributions. If you raise $100k you need at least 67 people to buy in. I realize there must be a limitation to protect the downside but it seems it could have been a lot higher, maybe as much as $5,000. You now have 67 investors to manage, though you can make sure that the shares being issued to this class of shareholders have very few rights. If you go for institutional funding after a crowdfunded round then this may complicate things.

]]>
<![CDATA[Thinking About Cyber Security]]> 2015-01-06T11:47:32-06:00 https://ertw.com/blog/2015/01/06/thinking-about-cyber-security There have been a lot of high profile security breaches lately. If people like Sony can get hacked, what chance do you have?

The people at Sony are humans. They get together at the water cooler and complain about the state of affairs and all the legacy applications they have to support. Even new companies like Dropbox are going to have dark corners in their applications. Money isn’t going to solve all these problems - Apple has billions of dollars and still gave up information about users. The bigger the company, the bigger the attack surface and the more the team has to defend.

How do you prevent your company or product from being hacked given your resources are finite and you may not be able to change everything you want to?

I’ve been thinking of how Agile methods such as rapid iteration and sprints could be applied to security. With that in mind, some high level principles:

  • Solutions to problems should be ranked in terms of business value
  • If a solution takes more than a week or two to implement, it should be broken down into individual phases with their own business value scores
  • It’s not necessary to completely solve the problem as long as you’re better off than you were before. You can make it better the next iteration
  • Instead of “how can we eliminate all risk?” the better question is “how can we make an attacker’s work more difficult?”
  • Detection is just as important as prevention. Look at safes – they protect valuables against a determined adversary for a given period of time; it’s still up to you to make sure you can react in that timeframe

The list above is trying to get away from the traditional security project where you spend lots of time producing documentation, shelve it, and then provide a half-assed solution to meet the deadline. Instead you break the solution into parts and try and continually produce business value. Time for a diagram:

agile vs waterfall

Even in the best case where you deliver full value, why not try to deliver parts of it sooner?

Look at it this way – at any point in time you know the least you ever will about your problems. It’s folly to think you can solve them all with some mammoth project. Pick something, fix it, move on. You have an end goal for sure, but the path may change as you progress.

With that in mind, how do you figure out what to do?

One organization technique I’ve found helpful is the attack tree. Here you define the goal of the attacker: Steal some money, take down the site, and so forth. Then you start coming up with some high level tasks the attacker would have to do in order to accomplish the goal. The leaf nodes are the more actionable things. For example, consider what it would take to deface a website:

attack tree to deface website

While not totally complete, this attack tree shows where the attack points are. Given that, some low effort and high value activities that could be done:

  • Audit who has access to CDN, DNS, and registrar accounts
  • Audit CMS accounts

Some higher effort activities would then be:

  • Code fix to enforce strong passwords
  • Code fix to lock out accounts after a certain period
  • Code fix to centralize the authentication to the corporate directory
  • Investigate two factor or SAML login with hosting providers
  • Network fix to ban IPs after a certain number of attempts
  • Monitor failed login attempts and investigate manually

Some of those options may be a lot of work. But if you first start with a simple password policy, build on that in the next iteration to lock out accounts, and finally tie in to another directory, you’re able to incrementally improve by making small fixes and learning as you go.

What if a group like Anonymous threatens to attack your website on the weekend? Look at the attack tree, what kind of confusion can you throw at the attacker? Change the URL of the login page? Put up a static password on the web server to view the login screen itself? Security through obscurity is not a long term fix, but as a tactical approach it can be enough to get you past a hurdle.

Too often security projects are treated in a waterfall manner. You must figure everything out front and then implement the solution, with the business value delivered at the end. Instead, treat this all as a continual learning exercise and strive to add value at each iteration. If the situation changes in the middle of the project, like an imminent threat, you’re in a better position to change course and respond.

]]>
<![CDATA[Test Driven Infrastructure]]> 2014-12-29T19:02:57-06:00 https://ertw.com/blog/2014/12/29/test-driven-infrastructure In software, test driven development happens when you write an automated test that proves what you are about to write is correct, you write the code to make the test pass, then you move on to the next thing. Even if you don’t follow that strict order (e.g. write your code, then write a test), the fact that there’s a durable test that you can run later to prove the system still works is very valuable. All the tests together give you a suite of tools to help prove that you have done the right thing and that regressions haven’t happened.

What about the infrastructure world? We’ve always had some variant of “can you ping it now?”, or some high level Nagios tests. But there’s still some value to knowing that your test was good – if you make a change and then test, how can you be sure your test is good? If you ran the same test first you’d know it failed, then you could make your change. And then there’s the regression suite. A suite of tests that may be too expensive to run every 5 minutes through Nagios but are great to run to verify your change didn’t break anything.

Enter the Bash Automated Testing System (BATS), a Bash-based test framework. It’s a thin wrapper around the commands that you’d normally run in a script, but if you follow the conventions you get some easy-to-use helpers and easy-to-interpret output.

As an example, I needed to configure an nginx web server to perform a complicated series of redirects based on the user agent and link. I had a list of “if this then that” type instructions from the developer but had to translate them into a set of cURL commands. Once I had that it was simple to translate them into a BATS test that I could use to prove the system was working as requested and ideally share with my team so they could verify correctness if they made changes.

share_link tests
#!/usr/bin/env bats

@test "root" {
  run curl http://example.com
  [[ $output =~ "doctype html" ]]
}

@test "mobile redirects to share" {
  run curl -H "User-Agent: this is an iphone" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://app.example.com/share/65ac7f12-ac2e-43f4-8b09-b3359137f36c" ]]
}

@test "mobile redirects to share and keeps query string" {
  run curl -H "User-Agent: this is an iphone" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://app.example.com/share/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b" ]]
}

@test "desktop redirects to play" {
  run curl -H "User-Agent: dunno bob" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://app.example.com/play/65ac7f12-ac2e-43f4-8b09-b3359137f36c" ]]
}

@test "desktop redirects to play and keeps query string" {
  run curl -H "User-Agent: dunno bob" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://app.example.com/play/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b" ]]
}

@test "bots redirect to main site" {
  run curl -H "User-Agent: facebookexternalhit" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://www.example.com/app/social?id=65ac7f12-ac2e-43f4-8b09-b3359137f36c" ]]
}

@test "bots redirect to main site and keeps query string" {
  run curl -H "User-Agent: facebookexternalhit" -i -k http://app.example.com/shareapp/65ac7f12-ac2e-43f4-8b09-b3359137f36c?a=b
  [[ $output =~ "302 Found" ]]
  [[ $output =~ "Location: http://www.example.com/app/social?id=65ac7f12-ac2e-43f4-8b09-b3359137f36c&a=b" ]]
}

And running the tests with one mistake in the configuration:

$ bats ~/Downloads/share_link.bats
 ✓ root
 ✓ mobile redirects to share
 ✗ mobile redirects to share and keeps query string
   (in test file /Users/sean/Downloads/share_link.bats, line 17)
     `[[ $output =~ "301 Found" ]]' failed
 ✓ desktop redirects to play
 ✓ desktop redirects to play and keeps query string
 ✓ bots redirect to ndc
 ✓ bots redirect to ndc and keeps query string

 7 tests, 1 failure

With the tests in place it’s more clear when the configurations are correct.

As a bonus, if you use Test Kitchen for your Chef recipes, you can include BATS style tests that will be run. So if this configuration is in Chef (which it was) I can have my CI system run these tests if the cookbook changes (which I don’t yet).

]]>
<![CDATA[Using Google Universal Analytics With NationBuilder]]> 2014-10-25T12:46:09-05:00 https://ertw.com/blog/2014/10/25/using-google-universal-analytics-with-nationbuilder We spent a lot of time trying to understand our visitors at Bowman for Winnipeg and part of that was using Google Analytics. The site was built with NationBuilder but they only support the async version of Analytics and it’s difficult to customize. In particular, we used the demographic and remarketing extensions and there was no easy way to alter the generated javascript to get it to work.

Normally you’d just turn off your platform’s analytics plugins and do it yourself, but NationBuilder has a great feature that fires virtual page views when people fill out forms, and we wanted to use that for goal tracking.

The solution was to turn off NationBuilder’s analytics and do it ourselves but write some hooks to translate any async calls into universal calls. Even with analytics turned off in our NationBuilder account, they’d fire the conversion events so this worked out well.

In the beginning of our template:

Header code
<script type="text/javascript">
  var _gaq = _gaq || []; // leave for legacy compatibility
   var engagement = {% if request.logged_in? %}"member"{% else %}"public"{% endif %};
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
  ga('create', 'UA-XXXXXXXX-1', 'www.bowmanforwinnipeg.ca');
  ga('require', 'displayfeatures');
  ga('require', 'linkid', 'linkid.js');
  ga('send', 'pageview', { "dimension1": engagement});
</script>

It’s pretty vanilla Google Analytics code with a couple of extras: display features for layering on demographics and retargeting integration, and enhanced link attribution for better tracking of clicks within the site. We also added a custom dimension so we could separate out people who took the time to create an account in our nation vs those that were public.

Then, at the bottom:

Footer code
 $(function() {
    // Send anything NB tried to inject
    for(i in _gaq) { var c = _gaq[i]; if(c[0] == "_trackPageview"){ ga('send', 'pageview', c[1]) } }
  });

That’s a simple loop that iterates through the _gaq array (that the async version of the tracker uses as an event queue) and pushes out any page views using the universal API. We didn’t have to worry about the initial page view because turning off NationBuilder’s analytics feature removes all of that.

]]>
<![CDATA[Review: Agile Web Development With Rails 4]]> 2013-11-21T22:20:27-06:00 https://ertw.com/blog/2013/11/21/review-agile-web-development-with-rails-4 I had the good fortune to receive Agile Web Development with Rails 4 from the publisher to review.

The book is really two books in one. The first is a walkthrough of building a simple Rails 4 application for an online bookstore. The second is a component by component look at Rails 4. If you want to be generous there’s a quick introduction to Ruby and Rails concepts at the beginning of the book.

The online bookstore walkthrough is interesting especially if you are new to Rails and the ideas behind Agile development. You take the role of a developer who is building an online bookstore for a client. You start off with a general idea of what needs to be done, but you build it incrementally, showing it to your customer at each step. Based on feedback you decide what to tackle next, such as showing pictures, getting details, or adding a shopping cart. Along the way there are some discussions of deployment, internationalization, authentication, and sending email.

Through the examples you learn the basics of creating Rails models, views, and controllers. Though the examples lean heavily on scaffolding to automatically generate the CRUD actions, you do extend it somewhat to handle XML and JSON responses. You also do some AJAX and automated testing. The book does stick pretty closely to the default Rails toolset including the test framework, though at the very end of the book there are some quick examples on using HAML for views.

At the end of each chapter are some exercises. Unlike many other books I found them to be of appropriate difficulty, with answers available online.

The second half of the book is a detailed look at the components of Rails. This is the more technical part of the book as it gets into specifics of how a request is processed and how a response is generated. There’s no bookstore application anymore, it’s just discussion of what’s available, code samples, and diagrams.

Along the way there are some interesting sidebars that explain some of the Rails philosophies or some real world scenarios. For example, one of the sidebars talks about when you want to use a database lookup that raises an exception on failure versus one that returns a nil or an empty set.

I didn’t read any of the previous editions so I can’t say with authority how much has changed. The book is up to date on the many changes that came in Rails 4 so it is current in that respect. However there are times when you read it and some older terminology, like fragment or page caching, creeps in. This is more a matter of editing than it is about it being out of date, as the associated examples are correct. The index is fairly weak – many of the terms I tried to look up, including those featured on the back cover, were not found.

If you’re an experienced Rails developer this book will not help you much. But if you’re looking to get into Ruby on Rails, either from another language or even with a weak programming background, this book is an excellent way to get started. At a little over 400 pages it’ll take you a few weekends to get through depending on your skill level.

]]>
<![CDATA[Using an IP Phone With Twilio]]> 2013-11-05T14:40:20-06:00 https://ertw.com/blog/2013/11/05/using-an-ip-phone-with-twilio This article is now outdated.

Twilio has added more features to their SIP calling since this article was written, and there’s a much easier way to connect a SIP phone to Twilio that you should use.

Twilio has supported SIP termination for a while but recently announced SIP origination. This means that previously you could receive calls with SIP but now you can also make calls from a hard phone using SIP instead of using the browser client or going through the PSTN.

It was this second announcement that got my interest. I have an IP phone that I use in my office; currently it’s through Les.net, but I like the pricing and interface of Twilio and would rather use them.

For some reason everything I read about SIP and Twilio uses a separate SIP proxy even if they have a compliant SIP device. Even their own blog takes a working SIP ATA and puts it behind Asterisk. I knew I could do better.

What you’ll need

  • An IP phone. I used a Cisco 7960 converted to SIP
  • A publicly available web server running PHP (feel free to use another language, we have to do some basic processing of the request so static won’t work)
  • A Twilio account

When thinking about VoIP, always think in two directions. Inbound and outbound. Sending and receiving. Talking and listening.

Receiving calls

Get a number and point the Voice Request URL to your web server. Please don’t use mine.

Phone Number | Dashboard | Twilio 2013-10-21 08-04-09

Your inbound.php script will send some TwiML to dial your phone:

<?xml version="1.0" encoding="UTF-8"?>
<Response>
<Dial>
    <Sip>
        sip:line_number@address_of_phone
    </Sip>
</Dial>
</Response>

Note: this part was a lot of trouble. After some packet tracing and some brilliant detective work by Brian from Twilio support, it turns out that the address of the phone in the SIP invite had to be an IP address, not a hostname. With a hostname the phone received the INVITE but always returned 404.

Your phone will need to be on the Internet, either with a public address or with UDP port 5060 port forwarded to it. The “line_number” has to match the name of the line on your phone. In my case, I named my line after my phone number:

proxy2_address: "ertw.sip.twilio.com"
line2_name: "204xxxxxxx"
line2_shortname: "204xxxxxxx"
line2_displayname: "204xxxxxxx"
line2_authname: "204xxxxxxx"

One thing to note is that you don’t register your phone to Twilio. I left the proxy address there so that the requests will keep the NAT translation alive. After detailed packet tracing it looks like the Twilio SIP messages originate from different IP addresses so this might not be helping as much as I thought.

At this point you should be able to dial your Twilio number from a regular phone. Twilio will request inbound.php and then do a SIP INVITE to the phone. The phone will accept it and then you have voice!

Making calls

The first step is to set up a SIP domain in Twilio:

Twilio User - Account Sip Domains 2013-10-21 08-15-14

Call it whatever you want, but you’ll need to set the Voice URL.

Twilio User - Account Sip Domains 2013-10-21 08-15-57

The script you point it at has to parse the data coming in from Twilio to find the phone number and then issue a Dial instruction to get Twilio to dial the phone and connect the two ends.

<?php
  $called = preg_replace('/sip:1?(.*?)@.*/', '${1}', $_POST['Called']);
  header("content-type: text/xml");
  echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
  ob_start();
  var_dump($called);
  var_dump($_POST);
  $contents = ob_get_contents();
  ob_end_clean();
  error_log($contents, 3, "/tmp/twilio.txt");
?>
<Response>
  <Dial timeout="10" record="false" callerId="YOURTWILIONUMBER"><?= $called ?></Dial>
</Response>

All we’re doing here is extracting the phone number from the Called header that Twilio sends us, stripping any leading 1’s, and then sending a TwiML response to dial that number. The ob_start through to the error_log is just logging the parameters if you’re interested.

Don’t forget to change the caller ID to your phone number, otherwise you get errors in the Twilio console.

So now when you place a call on your phone, Twilio will send the digits to the application which will return a Dial verb and the proper 10 digit number. Twilio links the two calls.

Conclusions

It took a bit of playing around to get this going but now I’ve shown that you don’t need an Asterisk server to integrate a SIP phone with Twilio. If you are setting up a phone bank or something with hard phones you can just configure them to hit Twilio, and for Twilio to hit them.

Of course, if you are expecting significant inbound traffic the benefit of a SIP proxy is that it can direct calls to multiple numbers without needing Twilio to be able to reach the phone directly. I’m hoping that Twilio can improve on that in the future!

]]>
<![CDATA[Find Method Definitions in Vim With Ctags]]> 2013-06-24T07:39:28-05:00 https://ertw.com/blog/2013/06/24/find-method-definitions-in-vim-with-ctags Ever asked yourself one of these questions?

  • Where is the foo method defined in my code?
  • Where is the foo method defined in the Rails source?

Then you spend a couple of minutes either grepping your source tree or looking on GitHub and then going back to your editor.

This weekend I went through VIM for Rails developers. There’s a lot that’s out of date, but there’s also some timeless stuff in there too. One thing I saw in there was the use of ctags which is a way of indexing code to help you find out where methods are defined.

Install the ctags package with brew/yum/apt/whatever. Then generate the tags with

ctags -R --exclude=.git --exclude=log *

You may want to add tags to your ~/.gitignore because you don’t want to check this file in.

Also add set tags=./tags; to your .vimrc which tells vim to look for the tags in the current directory. If you have it in a parent directory, use set tags=./tags;/ which tells vim to work backward until it’s found.

Then, put your cursor on a method and type control-] and you’ll jump to the definition. control-T or control-O will take you back to your code. control-W control-] opens it up in a horizontal split. Stack Overflow has some mappings you can use to reduce the number of keystrokes or use vertical splits.

If you use bundler and have it put the gems in your vendor directory, ctags will index those too. So you can look up Rails (or any gem) methods.

]]>
<![CDATA[Is It Hacked?]]> 2013-05-13T20:33:22-05:00 https://ertw.com/blog/2013/05/13/is-it-hacked After coming across a few sites that were serving pharma spam to Google Bot but not regular browsers, I thought it would be fun to give Sinatra a try and come up with a quick web app that checks for cloaked links. That led to a few more checks, then some improvements, and Is it hacked? was launched. I’ve got some ideas for where I want to go with the project, but in the meantime it’s catching stuff that other sites in the space are missing.

There’s also a bookmarklet at the bottom of the page. You can drag it to your button bar and check the site you’re on for any infection.

Update: I’ve sold the site to someone else and am no longer involved.

]]>
<![CDATA[Understanding Capistrano Hooks and Deploy vs Deploy:migrations]]> 2013-03-13T07:26:01-05:00 https://ertw.com/blog/2013/03/13/understanding-capistrano-hooks-and-deploy-vs-deploy-migrations Capistrano is a deployment framework for Rails, though it can be extended to do pretty much anything. The system comes with a bunch of built in tasks, and each task can be made up of other tasks and code. Plugins can add their own tasks and hook into default tasks, and the user can do the same through the configuration files.

Like any popular open source project capistrano has gone through changes. Documentation on the Internet is plentiful but often refers to old ways of doing things, so copy/pasting recipes can often result in stuff that doesn’t work quite right.

One common thing people need to do is to run some command after the deploy or at some time during the deploy. Capistrano has a very easy way of doing this.

after 'deploy:update_code', 'deploy:symlink_shared'

This says “after running deploy:update_code, run deploy:symlink_shared”. The latter is custom task that’s defined elsewhere.

The problem comes in when you look at the way the “deploy” and “deploy:migrations” tasks differ. I’ve seen a common problem where something works when deploying without migrations but not when migrations are used. Usually this is because the hook used is not the right one, either because of bad information or because the user figured out where to hook in by looking at the output of a deploy.

If you look at Capistrano’s default deploy.rb you can piece together the tasks that are run in both cases.

deploy:migrations
  update_code
   strategy.deploy!
   finalize_update
  migrate
  create_symlink
  restart
deploy
  update
    update_code
      strategy.deploy!
      finalize_update
    create_symlink
  restart

From this, you can see that the sequence is somewhat different. The “update” task isn’t used in the migrations case. Instead, the components are replicated.

If you want to hook in, use

  • update_code to run before/after the code is put into the release directory, such as if you want to make more symlinks or do something before migrations are potentially run.
  • create_symlink to run before/after the symbolic link pointing to the current release is made. Note that symlink is deprecated. You can run it from the command line, but if you try and hook in to it, you won’t be happy.
  • restart to run before/after the application server is restarted, e.g. to restart background workers or do deployment notifications
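
Putting that together, a sketch of the relevant part of config/deploy.rb (Capistrano 2 style; deploy:symlink_shared is the custom task from earlier, and the worker-restart task and service name are invented for illustration):

# These hooks fire on both `cap deploy` and `cap deploy:migrations`
after 'deploy:update_code', 'deploy:symlink_shared'   # extra symlinks before any migrations run
after 'deploy:restart',     'deploy:restart_workers'  # bounce workers once the app restarts

namespace :deploy do
  task :restart_workers, :roles => :app do
    run "#{try_sudo} service myapp-workers restart"    # hypothetical worker service
  end
end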

]]>
<![CDATA[BASH History Expansion]]> 2013-01-24T14:22:28-06:00 https://ertw.com/blog/2013/01/24/bash-history-expansion The Unix shell has a whole bunch of features to make your life easier. One has to do with the history. Some I have managed to ingrain into muscle memory, others I have to remember which often means I do it the long way. I hope these examples help you out.

# Start off with some files
$ touch one two three four
# !* expands to all the arguments from the last command
$ ls !*
ls one two three four
four  one three   two
# !foo runs the last command that starts with foo
$ !touch
touch one two three four
# Carat (^) does a search/replace in the previous command
$ ^touch^ls
ls one two three four
four  one three   two
# !$ is the last item in the previous command, !^ is the first _argument_
$ head !$
head four
$ ls four three two one
four  one three   two
$ cat !^
cat four
# !:n is the n'th item on the previous command line, 0 indexed
$ ls four three two one
four  one three   two
$ cat !:3
cat two
$ ls four three two one
four  one three   two
$ which !:0
which ls
/bin/ls

There are a lot more; run man bash and look for the section called HISTORY EXPANSION.

]]>
<![CDATA[Mixpanel Track_links and Track_forms]]> 2013-01-05T13:39:08-06:00 https://ertw.com/blog/2013/01/05/mixpanel-track_links-and-track_forms I’ve long been a fan of MixPanel and was happy that I got to use them again over at Wave Payroll. Last time I used them you could only track a page view, but now they have added track_links and track_forms, which fire the event asynchronously after the link is clicked or the form is submitted.

I started off by using the event_tracker gem which only handles the page view events, but it does make it easier to fire events in the view based on controller actions so it is well worth using. I talked to the author about adding support for track_links and track_forms, but after a good discussion he convinced me that the gem was not the right place for this and that I should pursue something more elegant such as tagging the links.

Ideally, what we wanted to arrive at was something like

<a href="blah" class="track-with-mixpanel" data-event="clicked on login">

or with Rails helpers:

=link_to login_path, :class => "track-with-mixpanel", :data => { :event => "clicked on login" }

which would magically call

mixpanel.track_links("#id_of_element", "clicked on login")

One problem is that not all the elements had IDs and I didn’t want the added friction of having to add that in.

What I came up with was:

$(".track-with-mixpanel").each( function(i) {
  var obj = $(this);
  // Make up an ID if there is none
  if (!obj.attr('id')) {
    obj.attr('id', "mixpanel" + Math.floor(Math.random() * 4294967296));
  }
  if (obj.is("form")) {
    mixpanel.track_forms("#" + obj.attr("id"), obj.data('event'), obj.data('properties'));
  } else {
    mixpanel.track_links("#" + obj.attr("id"), obj.data('event'), obj.data('properties'));
  }
});

So this simply finds all the links with the track-with-mixpanel class, assigns them a random ID if they don’t have one, and then calls the appropriate mixpanel function on them.

A couple of notes, though…

The first is that the data-properties doesn’t quite work right. Pulling the data into a hash requires some finesse that I haven’t been able to figure out.

The second is that track_links and track_forms are horribly broken and will mess up any onClick handlers you have. So if you have links that use :method => :post, or some third party JavaScript like validations or Stripe, you’re better off with the old track method because Mixpanel won’t play nicely with them. But for regular links and forms, this is great.

]]>
<![CDATA[JIRA Shortcuts With Quicksilver]]> 2012-11-16T09:12:31-06:00 https://ertw.com/blog/2012/11/16/jira-shortcuts-with-quicksilver One of my favourite applications on my Mac is Quicksilver. It’s an application launcher that can be extended to do almost anything. I’ve been able to set up hot keys to rate songs in iTunes, and also set shortcuts to URLs.

These screenshots explain how to make a “jira” shortcut that will take you to a particular issue in JIRA. The instructions will work equally well for anything where the URL is well known. For example, ticket PR-139 is at https://mycompany.atlassian.net/browse/pr-139. To go there, you’ll hit your Quicksilver hot key (e.g. Control-space), then type “jira”, enter, then the ticket number, and enter.

First, make sure the Web Search Module plugin is loaded by checking the box in the plugins menu.

Then, add the shortcut to your catalog by adding it as a custom source under Web Search List.

Looking more closely at what you add:

The name is whatever you want to type when first opening Quicksilver. The url is the URL you want to go to, with the dynamic part replaced by ***.

You may have to rescan the catalog from the main Quicksilver menu.

]]>
<![CDATA[Podcast - Linux Admin Show]]> 2012-09-23T07:40:49-05:00 https://ertw.com/blog/2012/09/23/podcast-linux-admin-show I just wanted to mention that I’ve started a podcast over at LinuxAdminShow.com. It’s me talking to a guest every week about issues that matter to the Linux administrator. The first episode was about systems management and admins-who-code, the second was about SSL.

Always looking for feedback and guest/topic suggestions. Have a listen and let me know what you think!

]]>
<![CDATA[Deploying Rails Review]]> 2012-08-14T09:09:40-05:00 https://ertw.com/blog/2012/08/14/deploying-rails-review Deploying Rails
Automate, Deploy, Scale, Maintain, and Sleep at Night
Anthony Burns and Tom Copeland
Deploying Rails

“Deploying Rails” is more than just about deploying a Rails application; it’s about that and everything that goes into managing servers, from provisioning to monitoring. This book explains how to do these tasks with the help of some popular Open Source tools and a focus on automation.

The flow of the book is fairly logical. Start by building virtual machines with Vagrant. Learn how to automate configuration with Puppet. Nail down deployment and remote tasks with Capistrano. Monitor the server and application with Nagios and Ganglia. Delve into some side topics like systems administration. Even though there is ample free documentation on all of these topics, this book sets itself apart in two ways.

First is that the tools from previous chapters are used to augment later chapters. You’ll learn how to use Vagrant to set up a virtual machine in the first chapter, and from then on when you need a server you’ll configure a Vagrant box. You’ll learn how to automate configuration management with Puppet in the second chapter, and all successive chapters build on that. You don’t simply install Nagios, you write Puppet scripts that install Nagios for you. By the end of the book you have a collection of tools that you can start using in your own real world environment.

Secondly, you’re doing everything on an environment you can build yourself without needing to know how to install Linux, owning spare servers, or knowing how to manage servers. You just install Vagrant and follow the book. The book is heavy on code samples and screen captures – it is the exception to open up to a random page and not see some code or example. You can have a replicated MySQL setup and work on your database recovery practices, destroy it, and know you can rebuild it with a few keystrokes.

Puppet is a large part of this book. Almost every task is done in a Puppet manifest, from installing the web server to setting up monitoring. The authors walk you through creating a couple of simple manifests and then refactoring the code to be more reusable. The basics of Puppet are covered, such as installing packages, starting services, and copying files. Later on Puppet is used to interact with the existing system by managing cron jobs and using templates to edit existing configuration files.

The popular deployment suite, Capistrano, is the topic of two chapters. The first looks at a simple deployment, then goes on to examine roles and adding hooks that automate tasks at points during the deployment. The advanced chapter delves into remote command invocation and parsing, multistage deployments (such as a separate staging and production deployment) and further automation of the deployment. People who have used Capistrano before will not be surprised by much in the basic chapter, but are almost certain to find something helpful in the advanced chapter. It opened my eyes to what Capistrano can do outside of the deployment – it can automate maintenance and support tasks, too.

The last three chapters discuss various topics, from managing multiple Ruby interpreters with RVM to backing up your database and how to manage a master-slave setup. Some of these topics can be books in themselves, though Deploying Rails does a good job at getting you started. Even though the examples throughout the book use Apache and Phusion Passenger, the appendixes have a chapter on using Nginx and Unicorn.

Despite all the remarkable content, I did feel there were some areas that could have been covered. Given the extensive use of Vagrant throughout the book I found it surprising there was no discussion about using it for its intended purpose of managing developers’ environments. There’s a brief mention that Vagrant can run the Puppet scripts and you can save the step of running it manually, but other than that I found little that would tell the reader that they could reuse the work they had been doing so that all the developers would have a production-like environment in which to work. Similarly, since the environment is well defined the authors were able to make several assumptions in their configuration that would not necessarily work in a typical production environment. Some of these are simple, such as IP addresses and SSH keys being hard coded, but some are more involved, such as how to distribute the Puppet manifests when that’s not taken care of by Vagrant. Books, like software, have to draw the line somewhere though.

As a systems administrator turned developer I was encouraged to see this book being released. It shows the ideal marriage of the systems administration mindset, with its relentless focus on automation and monitoring, with the tools available to the modern programmer. In some circles this practice is called DevOps, but even in shops that keep these two separate, this book will benefit both teams.

]]>
<![CDATA[Controlling HipChat Status Through AppleScript]]> 2012-05-02T14:47:25-05:00 https://ertw.com/blog/2012/05/02/controlling-hipchat-status-through-applescript At my awesome job we use HipChat for team collaboration. I also use the Pomodoro app to try and manage my time. One problem is that I often get interrupted while working.

Long story short, the Pomodoro app lets you run AppleScript when various events happen, so I wrote some stuff to change my HipChat status to DND when I’m in the middle of a work cycle. Here’s the code:

tell application "System Events" to tell UI element "HipChat" of list 1 of process "Dock"
    perform action "AXShowMenu"
    delay 0.5
    click menu item "Status" of menu 1
    click menu item "DND" of menu 1 of menu item "Status" of menu 1
 end tell

All that remains is to insert that into the Pomodoro app through Preferences -> Scripts:

Just note that you have to change “DND” to “Available” for some of the events.

This was my first foray into AppleScript, so it’s possible I’m sending my banking details off to Nigeria, but it seems to work so far.

Edit: you need to enable access for assistive devices from System Preferences -> Universal Access:

]]>
<![CDATA[Making New Relic Awesome]]> 2012-01-21T11:39:41-06:00 https://ertw.com/blog/2012/01/21/making-new-relic-awesome Update - if you sign up through this link, New Relic will send you $25 gift card for trying it out.

Like many Rails people, I use New Relic to monitor my Rails app. At Wave Accounting we even pay for it. It’s well worth the money, as you get a lot of extra visibility into your app.

At the standard level, New Relic is pretty good, but sometimes it seems like I’m missing out on something. RPM will show me that a controller is running slowly but most of the time is spent in the controller itself, not really saying what’s happening. I’ve recently discovered a few tweaks that have made a huge difference.

Turn on garbage collection

It’s pretty embarrassing, but this isn’t on by default. It’s so simple to figure out how much of your time is spent in GC, the only caveat is that you have to be running REE or 1.9.x. This doc explains how, but all you have to do is turn on the stats and New Relic does the rest.

# For REE
GC.enable_stats
# For 1.9.2
# GC::Profiler.enable

Put that in an initializer, and you get GC activity in almost all your graphs:

Local development mode

If you go to http://localhost:3000/newrelic you will get some in depth information about what’s going on in your app when in dev mode. If you’re using pow then add the following to ~/.powconfig:

export NEWRELIC_DISPATCHER=pow
export POW_WORKERS=1

There’s a wealth of information here.

Trace specific sections of your code

Your controller does a bunch of stuff but New Relic reports it as one big block. Block tracing to the rescue!

respond_to do |format|
  format.html do
    self.class.trace_execution_scoped(['Custom/MyController#show/render_html']) do
      do_it_in_html
    end
  end
  format.pdf do
    self.class.trace_execution_scoped(['Custom/MyController#show/render_pdf']) do
      do_it_in_pdf
    end
  end
end

Then, these blocks will be counted separately.

Trace methods in your code, or other people’s code

Want to do the same thing as before on someone else’s code, or at a method level? Add a method tracer in your initializer:

require 'new_relic/agent/method_tracer'
CalculationEngine.class_eval do
    include NewRelic::Agent::MethodTracer
    add_method_tracer :calculate_taxes
    add_method_tracer :calculate_totals
    add_method_tracer :send_invoice
end

Poof, all those are now traced and broken out separately:

Other things

You can also trace custom metrics, such as user signups or report views. I’m still working on that, along with monitoring background jobs.
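
For what it’s worth, the agent’s public API already has calls for this sort of thing; a sketch (the metric names are made up, and custom metric names should start with Custom/):

# Wherever a signup completes
::NewRelic::Agent.increment_metric('Custom/Users/Signups')

# Or record a numeric value, e.g. one report view
::NewRelic::Agent.record_metric('Custom/Reports/Views', 1)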

]]>
<![CDATA[Twilio Client Quickstart - in Ruby]]> 2011-10-25T21:38:47-05:00 https://ertw.com/blog/2011/10/25/twilio-client-quickstart-in-ruby I’ve wanted to play with the Twilio client for a while. They have this great quick start but it’s written in PHP. Now I don’t mind PHP, but I prefer Ruby. If I’m going to write anything using the client, it’s going to be in Ruby, so I don’t see the point in learning it on PHP.

So, here is the meat of the quickstart done up as a Rails 3.1 application.

First, generate the application.

$ rails new twilio
      create
      create  README
      create  Rakefile
      create  config.ru
      create  .gitignore
      create  Gemfile
      create  app
      ...

This creates a new Rails 3.1 app in a directory called twilio. Change to that directory, and add a line to your Gemfile:

Gemfile
gem 'twilio-ruby'

Run bundle install to add the official Twilio gem to your bundle.

Next, head on over to your Twilio account and get your SID and auth token. Those can go in an initializer, config/initializers/twilio.rb:

config/initializers/twilio.rb
TwilioAccountSID = "AC........"
TwilioAuthToken = "......."

Those are the magic tokens that let you authenticate yourself to the Twilio API, and more importantly for them, let them bill you.
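If you’d rather not hardcode the credentials, the same initializer could read them from the environment instead; the variable names here are just my own convention:

# config/initializers/twilio.rb
TwilioAccountSID = ENV["TWILIO_ACCOUNT_SID"]
TwilioAuthToken = ENV["TWILIO_AUTH_TOKEN"]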

Next, head on over to app/helpers/application_helper.rb:

app/helpers/application_helper.rb
module ApplicationHelper
  def twilio_javascript_include_tag
    javascript_include_tag "http://static.twilio.com/libs/twiliojs/1.0/twilio.min.js"
  end
end

Then in app/views/layouts/application.html.erb add that helper in the head:

app/views/layouts/application.html.erb
<%= stylesheet_link_tag    "application" %>
<%= javascript_include_tag "application" %>
<%= twilio_javascript_include_tag %>
<%= csrf_meta_tags %>

Yeah, you could have put the code right in the layout, but I like sparse layout files.

Next up, create a controller:

$ rails generate controller client index
      create  app/controllers/client_controller.rb
       route  get "client/index"
      invoke  erb
      create    app/views/client
      create    app/views/client/index.html.erb
      invoke  test_unit
      create    test/functional/client_controller_test.rb
      invoke  helper
      create    app/helpers/client_helper.rb
      invoke    test_unit
      create      test/unit/helpers/client_helper_test.rb
      invoke  assets
      invoke    coffee
      create      app/assets/javascripts/client.js.coffee
      invoke    scss
      create      app/assets/stylesheets/client.css.scss

Then add root :to => 'client#index' to config/routes.rb so that your new view is displayed at the root URL.

Run rails server or whatever you do to start your dev instance and browse to it. You should get the usual “find me in app/views/client/index.html.erb” message. Check the page’s head to make sure the Twilio library is being included. The rest of the examples then deal with app/views/client/index.html.erb and app/helpers/client_helper.rb.

For the first example you want:


Helper
module ClientHelper
  def twilio_token
    capability = Twilio::Util::Capability.new TwilioAccountSID, TwilioAuthToken
    capability.allow_client_outgoing "APabe7650f654fc34655fc81ae71caa3ff"
    capability.generate
  end
end
View
<%= javascript_tag do %>
function call() {
  Twilio.Device.connect();
}
function hangup() {
  Twilio.Device.disconnectAll();
}
$(function() {
  Twilio.Device.setup("<%= twilio_token %>");
  Twilio.Device.ready(function (device) {
    $("#log").text("Ready");
  });
  Twilio.Device.error(function (error) {
    $("#log").text("Error: " + error.message);
  });
  Twilio.Device.connect(function (conn) {
    $("#log").text("Successfully established call");
  });
});
<% end %>
<button class="call" onclick="call();">
  Call
</button>

For the second example, you just change the view.

<%= javascript_tag do %>
function call() {
  Twilio.Device.connect();
}
function hangup() {
  Twilio.Device.disconnectAll();
}
$(function() {
  Twilio.Device.setup("<%= twilio_token %>");
  Twilio.Device.ready(function (device) {
    $("#log").text("Ready");
  });
  Twilio.Device.error(function (error) {
    $("#log").text("Error: " + error.message);
  });
  Twilio.Device.disconnect(function (conn) {
    $("#log").text("Call ended");
  });
  Twilio.Device.connect(function (conn) {
    $("#log").text("Successfully established call");
  });
});
<% end %>
<button class="call" onclick="call();">
  Call
</button>
<button class="hangup" onclick="hangup();">
  Hangup
</button>
<div id="log">Loading pigeons...</div>

For the third example we’ll have to change the helper and the view accordingly:

app/helpers/client_helper.rb (note I’m using my own sandbox ID; you get your own inside the Twilio account page!)

module ClientHelper
  def twilio_token
    capability = Twilio::Util::Capability.new TwilioAccountSID, TwilioAuthToken
    capability.allow_client_outgoing "AP..."
    capability.allow_client_incoming 'jenny'
    capability.generate
  end
end
app/views/client/index.html.erb
<%= javascript_tag do %>
function call() {
  Twilio.Device.connect();
}
function hangup() {
  Twilio.Device.disconnectAll();
}
$(function() {
  Twilio.Device.setup("<%= twilio_token %>");
  Twilio.Device.ready(function (device) {
    $("#log").text("Ready");
  });
  Twilio.Device.error(function (error) {
    $("#log").text("Error: " + error.message);
  });
  Twilio.Device.disconnect(function (conn) {
    $("#log").text("Call ended");
  });
  Twilio.Device.connect(function (conn) {
    $("#log").text("Successfully established call");
  });
  Twilio.Device.incoming(function (conn) {
    $("#log").text("Incoming connection from " + conn.parameters.From);
    // accept the incoming connection and start two-way audio
    conn.accept();
  });
});
<% end %>
<button class="call" onclick="call();">
  Call
</button>
<button class="hangup" onclick="hangup();">
  Hangup
</button>
<div id="log">Loading pigeons...</div>

Now, hook up a new action in the client controller to redirect the call from Twilio to the app, inside app/controllers/client_controller.rb:

def incoming
  response = Twilio::TwiML::Response.new do |r|
    r.Dial do |d|
      d.Client 'jenny'
    end
  end
  render :text => response.text
end

Don’t forget to add post "client/incoming" to config/routes.rb. Then point your sandbox URL to your dev box, such as http://yourhome.com:3000/client/incoming.xml.
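For reference, the finished config/routes.rb might look roughly like this (the generator already added the get route):

# config/routes.rb
Twilio::Application.routes.draw do
  get "client/index"
  post "client/incoming"
  root :to => 'client#index'
end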

As a bonus, here’s a rake task to log in to a remote host and set up an SSH tunnel from remote port 3000 to local port 3000:

Rakefile
namespace :tunnel do
  desc "Start a ssh tunnel"
  task :start => :environment do
    public_host_username = "sean"
    public_host = "home.ertw.com"
    public_port = 3000
    local_port = 3000
    puts "Starting tunnel #{public_host}:#{public_port} \
          to 127.0.0.1:#{local_port}"
    exec "ssh -nNT -g -R *:#{public_port}:127.0.0.1:#{local_port} \
                           #{public_host_username}@#{public_host}"
  end
end
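With that in the Rakefile, rake tunnel:start should bring the tunnel up (it execs ssh, so Ctrl-C tears it down).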

There are two more examples in the quickstart, but as they are more of the same, I’ll leave them for another post. I’d also like to try rewriting the JavaScript in CoffeeScript.

Update - Code is at https://github.com/swalberg/twilio-client-ruby

]]>