It's pretty common to need Chef to store secrets such as database passwords or API keys. The easiest thing to do is to put them in a data bag, but that's open for the world to see. Encrypted data bags are nice, but key management is a gigantic pain. A better solution is Chef Vault, which encrypts the data bag's secret once for each client (a client being a Chef node or administrative user).
At the same time your organization likely has a need to keep secret data for applications, too. One could store these secrets in the same place as the Chef secrets but if you don’t like having Chef manage application configuration files then you’re out of luck. HashiCorp Vault is one solution here that we’ve used successfully.
With HashiCorp Vault, each client (or groups of clients) has a token that gives them access to certain secrets, dictated by a policy. So you can say that a certain token can only read user accounts and give that to your application or Chef. But how do you keep that token a secret?
I'll also add that the management of HashiCorp Vault is nicer than that of Chef-Vault: making changes to the secrets is easier because there's a well-defined API that directly manipulates the secrets, rather than having to go through the Chef-Vault layer of fetching the private key for the encrypted data bag and editing JSON. Furthermore, this lets us store some of our secrets in the same place the applications are looking, which can be beneficial.
In this example, I have a Chef recipe with a custom resource to create users in a proprietary application. I want to store the user information in HashiCorp Vault because managing the users will be easier for the operations team, and it will also allow other applications to access the same secrets. The basic premise is that the data goes in HashiCorp Vault and the token to access HashiCorp Vault is stored in Chef Vault.
The first thing to do is set up your secrets in HashiCorp Vault. We'll want to create a policy that only allows read access into the part of the Vault that Chef will read from. Add this to myapp.hcl:
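A minimal read-only policy is enough; the secret/myapp path is an assumption about where the application's secrets will live:

path "secret/myapp/*" {
  policy = "read"
}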
Create the policy:
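Using the old-style Vault CLI, something along the lines of:

vault policy-write myapp myapp.hcl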
Create a token that uses that policy. Note that the token must be renewable, as we’re going to have Chef renew it each time. Otherwise it’ll stop working after a month.
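A sketch with the old-style CLI; the output (not shown) lists the new token, its accessor, its duration, and whether it is renewable:

vault token-create -policy="myapp"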
That value beginning with ba85 is the token that Chef will use to talk to the Vault. With your root token you can add your first secret:
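For example, a password for a user of the application (the path and field are assumptions that match the policy above):

vault write secret/myapp/users/admin password=supersecret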
At this point we have a user in the HashiCorp Vault and a token that will let Chef read it. Test for yourself with vault auth and vault read!
Now it's time to get Chef to store and read that key. Store the token in some JSON such as secret.json:
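The file just wraps the token in a single key (the value is truncated here):

{ "token": "ba85..." }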
And create a secret that’s accessible to the servers and any people needed:
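A knife vault command along these lines does it; the search clause and admin list match the access described below:

knife vault create myapp_credentials vault_token --json secret.json \
  --search 'name:server1.example.com' --admins sean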
This creates a secret in a data bag called myapp_credentials in an item called vault_token. The secret itself is a piece of JSON with a key of token and a value of the token itself. The secret is only accessible by sean (me) and server1.example.com. If you later want to add a new server or user to manage it, you need to run:
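A sketch, with the search matching the new node:

knife vault update myapp_credentials vault_token --search 'name:server2.example.com'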
which will encrypt the data bag secret with something that only server2.example.com can decrypt.
I won’t get into all the details of Chef Vault other than to refer you to this helpful article.
Now, let’s get Chef to read that secret within the recipe! Most things in these examples are static strings to make it easier to read. In a real recipe you’d likely move them to attributes.
First, get chef-vault into your recipe. Begin by referencing it in metadata.rb:
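One line does it:

depends 'chef-vault'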
And in the recipe, include the recipe and use the chef_vault_item helper to read the secret:
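Roughly, using the bag and item names from the knife vault command above:

include_recipe 'chef-vault'

vault_creds = chef_vault_item('myapp_credentials', 'vault_token')
token = vault_creds['token']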
Now that we have the token to the HashiCorp Vault, we can access the secrets using the Vault Rubygem.
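A sketch of that part of the recipe; the Vault address and secret path are assumptions, and renew_self is one way to keep the renewable token alive on every run:

chef_gem 'vault'
require 'vault'

Vault.address = 'https://vault.example.com:8200' # assumption: your Vault endpoint
Vault.token   = token

# Renew the token each run so it never reaches its TTL
Vault.auth_token.renew_self

# Read the user created earlier and hand the password to the custom resource
admin = Vault.logical.read('secret/myapp/users/admin')
admin_password = admin.data[:password]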
The testing story is fairly straightforward. If you're using the chef_vault_item helper as opposed to going directly through ChefVault::Item, then it'll automatically fall back to using unencrypted data bags, which are easily mockable. Similarly, HashiCorp Vault can be mocked or pointed at a test instance.
This seems to give a good balance of security and convenience. We manage the Chef-specific secrets in Chef Vault and use HashiCorp Vault for things that are more general. And the pattern is simple enough to be used in other places.
Note: while I talk about Chef, this also goes for Ansible, Salt, Puppet, CFEngine, etc.
First, establish a beachhead. Get the Chef agent on all the servers you can, including your snowflakes. Then, start automating your “new box” workflow so that it’s as hands-off as possible and results in a fairly standardized build with Chef on it. Finally, commit to using those new boxes for everything you do.
Once you have this done, you’ll immediately be able to prove the value of configuration management. You’ll be able to query Chef for things that normally took a while to get (who is running that old kernel version) and be able to automate many ad-hoc tasks (delete that account on all the servers). Over time you can improve to deploy your servers using cookbooks.
Install Chef Server on a separate box. The server is not necessary to get use out of Chef but it makes things easier. More importantly, once you finish this step you’ll immediately be able to store configuration management data and run queries on your infrastructure (once they run the Chef client).
Next, create your first cookbook that will be a very sparse configuration that applies to all of your current infrastructure. When I did it I called it role-minimal and went down the role cookbook path. The TL;DR of that is that you create a role that only includes the role cookbook so that you get the benefits of versioning.
What do you put in the minimal role? It can be nothing to start, if you want. Or maybe something that’s so benign that no-one could complain, like setting the banner or motd:
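A sketch, assuming the file lives in the cookbook's files/default directory:

cookbook_file '/etc/motd' do
  source 'motd'
  owner 'root'
  group 'root'
  mode '0644'
end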
and then put your motd in files/default/motd. This will manage the file and ensure that all nodes have the same MOTD. Who can argue with that? You probably already have an item in your backlog to do that anyway.
The other thing to add to this cookbook is to make the Chef client run on a schedule with the chef-client cookbook:
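Attributes along these lines do it; the keys come from the chef-client cookbook, and the schedule matches the half-hour example below:

default['chef_client']['cron']['minute'] = '0,30' # run at :00 and :30
default['chef_client']['cron']['hour']   = '*'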
That can go in your attributes for the recipe to run it every half hour, or whatever you want. Don't forget to include_recipe 'chef-client::cron' to have Chef manipulate your crontab to add the job.
You may want to create environments if that’s the way you do things.
After this, start bootstrapping your machines with knife bootstrap and a run list containing your new role. Maybe start with the non production servers. Don’t worry too much if people resist, you can leave those servers alone for now.
Now you have Chef running in a low risk fashion. But what’s going to happen? Someone will eventually need something:
And then you can put up your hand and say it’s easy, because you happen to have Chef agents on each server, so the easy way would be to leverage that. Except that server that they were concerned about earlier – did they want that one done by hand or should we use our new repeatable automated process? Great, I’ll just go bootstrap that node.
This one really depends on how you build new machines. The general idea is that you want to come up with a base machine configuration that everything you do from now on is built from. We use the knife vsphere plugin to create new images with Chef already bootstrapped, but depending on what you use, you may need to search the plugin directory.
Create a second role for all the new stuff. We call ours role-base. This can be identical to role-minimal, but you may want to add some stuff to make your life easier. For example, I feel it should be a crime to run a server without sar, so I have our base role make sure that the sysstat package is up to date, plus we throw in some other goodies like screen, strace, lsof, and htop.
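A sketch of what that part of the role-base recipe can look like:

# Keep sysstat current and install the usual troubleshooting goodies
package 'sysstat' do
  action :upgrade
end

%w(screen strace lsof htop).each do |pkg|
  package pkg
end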
After this, commit to using this base image wherever humanly possible. Your boxes will be more consistent.
Sooner or later you’ll have a project that needs some servers. Do what you can to leverage community cookbooks or your own cookbooks to save yourself time and enforce consistency.
The benefit here is that all your MySQL servers will be consistent. You may not be able to fix your old servers, but at least everything new will be consistent. You’ll also be able to create new machines or environments much easier because the base image and the apps will be in code and not some checklist in the wiki. You’ll spend less time worrying about small details and spend more time thinking about bigger picture items.
I don’t have much advice here other than to do it. You’re definitely going to learn as you go and make mistakes. But it’s code, you can correct it and move on.
The other part of this step is to promote this new tool with your co-workers. Show them how knife tools can make your life easier. Learn how to write cookbooks as a group. Get a culture of reviewing each other's code so you learn new ways of doing things and share information.
There's not a whole lot to say here. Your first few weeks with Chef are going to be hard, but you'll find that it gives you so many benefits in consistency and speed that it's worth it. Mastering devops practices is an ongoing thing.
I recommend contributing patches back to community cookbooks as a way to get better. You will find problems with other people’s cookbooks eventually, and you can submit fixes and have them help other people.
At some point, not doing things in Chef will just seem strange.
I'm particularly proud of this one. It started out as an update to Ross' book from 10 years ago, but in the end we rewrote most of it, expanded other sections, and added a ton of new content to cover the new objectives. Ross is a former employee of LPI so you know this book will cover the right material.
Back in May the Canadian Securities Administrators released the Start-up Crowdfunding Registration and Prospectus Exemptions (PDF). The core idea is that in certain situations a startup company can sell equity in their company without filing a detailed prospectus or registering as a dealer, often called crowdfunding, though here more properly called equity crowdfunding.
What’s the difference? In crowdfunding sites, such as Kickstarter, you give a project money in return for a prize or product. The project is assured a certain amount of sales to cover any capital costs, which should put them on a good footing for later sales. If the project is a success or a bust you don’t have any long term interest in the company – you’re basically pre-purchasing a product. In the equity crowdfunding scenario you’re buying a piece of the company much like if you bought it on the stock market. If the company does well then your investment may be worth a multiple of the purchase price. If the company does poorly then it’s worth nothing.
Normally these types of equity transactions are heavily regulated to prevent fraud and to keep people from preying on novice investors. These new guidelines are an attempt to reduce the regulation while still preserving investor protection. It should be noted that they only apply to British Columbia, Saskatchewan, Manitoba, Québec, New Brunswick, and Nova Scotia; interestingly, Ontario is developing its own rules.
While there are many conditions in these new equity crowdfunding guidelines the most important are:
Additionally the investors have some rights and limitations:
These distributions are expected to be sold through online portals that are either run by registered dealers or privately.
My take on all this is that it’s a good start. My main problem is the low limit on the personal contributions. If you raise $100k you need at least 67 people to buy in. I realize there must be a limitation to protect the downside but it seems it could have been a lot higher, maybe as much as $5,000. You now have 67 investors to manage, though you can make sure that the shares being issued to this class of shareholders has very few rights. If you go for institutional funding after a crowdfunded round then this may complicate things.
The people at Sony are humans. They get together at the water cooler and complain about the state of affairs and all the legacy applications they have to support. Even new companies like Dropbox are going to have dark corners in their applications. Money isn't going to solve all these problems: Apple has billions of dollars and still gave up information about users. The bigger the company, the bigger the attack surface and the more the team has to defend.
How do you prevent your company or product from being hacked given your resources are finite and you may not be able to change everything you want to?
I’ve been thinking of how Agile methods such as rapid iteration and sprints could be applied to security. With that in mind, some high level principles:
The list above is trying to get away from the traditional security project where you spend lots of time producing documentation, shelve it, and then provide a half-assed solution to meet the deadline. Instead you break the solution into parts and try and continually produce business value. Time for a diagram:
Even in the best case where you deliver full value, why not try to deliver parts of it sooner?
Look at it this way – at any point in time you know the least amount about your problem as you ever will. It’s folly to think you can solve them all with some mammoth project. Pick something, fix it, move on. You have an end goal for sure, but the path may change as you progress.
With that in mind, how do you figure out what to do?
One organization technique I’ve found helpful is the attack tree. Here you define the goal of the attacker: Steal some money, take down the site, and so forth. Then you start coming up with some high level tasks the attacker would have to do in order to accomplish the goal. The leaf nodes are the more actionable things. For example, consider what it would take to deface a website:
While not totally complete, this attack tree shows where the attack points are. Given that, some low effort and high value activities that could be done:
Some higher effort activities would then be:
Some of those options may be a lot of work. But if you first start with a simple password policy, build on that in the next iteration to lock out accounts, and finally tie in to another directory, you’re able to incrementally improve by making small fixes and learning as you go.
What if a group like Anonymous threatens to attack your website on the weekend? Look at the attack tree, what kind of confusion can you throw at the attacker? Change the URL of the login page? Put up a static password on the web server to view the login screen itself? Security through obscurity is not a long term fix, but as a tactical approach it can be enough to get you past a hurdle.
Too often security projects are treated in a waterfall manner. You must figure everything out front and then implement the solution, with the business value delivered at the end. Instead, treat this all as a continual learning exercise and strive to add value at each iteration. If the situation changes in the middle of the project, like an imminent threat, you’re in a better position to change course and respond.
What about the infrastructure world? We've always had some variant of "can you ping it now?", or some high-level Nagios tests. But there's still value in knowing that your test was good: if you make a change and then test, how can you be sure the test is good? If you ran the same test first, you'd know it failed, and then you could make your change. And then there's the regression suite: a suite of tests that may be too expensive to run every 5 minutes through Nagios but is great to run to verify your change didn't break anything.
Enter the Bash Automated Testing System (BATS), a Bash-based test suite. It's a thin wrapper around the commands that you'd normally run in a script, but if you follow the conventions you get some easy-to-use helpers and easy-to-interpret output.
As an example, I needed to configure an nginx web server to perform a complicated series of redirects based on the user agent and link. I had a list of “if this then that” type instructions from the developer but had to translate them into a set of cURL commands. Once I had that it was simple to translate them into a BATS test that I could use to prove the system was working as requested and ideally share with my team so they could verify correctness if they made changes.
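The tests themselves look something like the sketch below; the URLs and the user agent are placeholders rather than the real configuration:

#!/usr/bin/env bats

MOBILE_UA="Mozilla/5.0 (iPhone; CPU iPhone OS 9_0 like Mac OS X)"

@test "root" {
  run curl -s -o /dev/null -w "%{http_code}" http://www.example.com/
  [ "$status" -eq 0 ]
  [ "$output" = "200" ]
}

@test "mobile redirects to share" {
  run curl -s -i -A "$MOBILE_UA" http://www.example.com/l/abc123
  [[ $output =~ "301" ]]
  [[ $output =~ "Location: http://www.example.com/share/abc123" ]]
}

@test "mobile redirects to share and keeps query string" {
  run curl -s -i -A "$MOBILE_UA" "http://www.example.com/l/abc123?foo=bar"
  [[ $output =~ "301" ]]
  [[ $output =~ "foo=bar" ]]
}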
And running the tests with one mistake in the configuration:
$ bats ~/Downloads/share_link.bats
✓ root
✓ mobile redirects to share
✗ mobile redirects to share and keeps query string
(in test file /Users/sean/Downloads/share_link.bats, line 17)
`[[ $output =~ "301 Found" ]]' failed
✓ desktop redirects to play
✓ desktop redirects to play and keeps query string
✓ bots redirect to ndc
✓ bots redirect to ndc and keeps query string
7 tests, 1 failure
With the tests in place it’s more clear when the configurations are correct.
As a bonus, if you use Test Kitchen for your Chef recipes, you can include BATS-style tests that will be run. So if this configuration is in Chef (which it was) I can have my CI system run these tests whenever the cookbook changes (which I don't do yet).
Normally you'd just turn off your platform's analytics plugins and do it yourself, but NationBuilder has a great feature that fires virtual page views when people fill out forms, and we wanted to use that for goal tracking.
The solution was to turn off NationBuilder’s analytics and do it ourselves but write some hooks to translate any async calls into universal calls. Even with analytics turned off in our NationBuilder account, they’d fire the conversion events so this worked out well.
In the beginning of our template:
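The block is essentially the stock analytics.js loader plus the options described below; the property ID and dimension index are placeholders:

<script>
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

  ga('create', 'UA-XXXXXXXX-1', 'auto');
  ga('require', 'displayfeatures');     // demographics and retargeting
  ga('require', 'linkid', 'linkid.js'); // enhanced link attribution
  ga('set', 'dimension1', 'public');    // placeholder: signed-in supporter vs public visitor
  ga('send', 'pageview');
</script>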
It's pretty vanilla Google Analytics code with a couple of features: display features for layering on demographic data and retargeting integration, and enhanced link attribution for better tracking of clicks within the site. We also added a custom dimension so we could separate out people who took the time to create an account in our nation from those who were public.
Then, at the bottom:
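A sketch of the shim; it drains the _gaq queue that NationBuilder's virtual page views are pushed onto and replays them through analytics.js:

<script>
  window._gaq = window._gaq || [];
  for (var i = 0; i < _gaq.length; i++) {
    if (_gaq[i][0] === '_trackPageview') {
      ga('send', 'pageview', _gaq[i][1]);
    }
  }
</script>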
That’s a simple loop that iterates through the _gaq array (that the async version of the tracker uses as an event queue) and pushes out any page views using the universal API. We didn’t have to worry about the initial page view because turning off NationBuilder’s analytics feature removes all of that.
The book is really two books in one. The first is a walkthrough of building a simple Rails 4 application for an online bookstore. The second is a component by component look at Rails 4. If you want to be generous there's a quick introduction to Ruby and Rails concepts at the beginning of the book.
The online bookstore walkthrough is interesting especially if you are new to Rails and the ideas behind Agile development. You take the role of a developer who is building an online bookstore for a client. You start off with a general idea of what needs to be done, but you build it incrementally, showing it to your customer at each step. Based on feedback you decide what to tackle next, such as showing pictures, getting details, or adding a shopping cart. Along the way there are some discussions of deployment, internationalization, authentication, and sending email.
Through the examples you learn the basics of creating Rails models, views, and controllers. Though the examples lean heavily on scaffolding to automatically generate the CRUD actions, you do extend it somewhat to handle XML and JSON responses. You also do some AJAX and automated testing. The book does stick pretty closely to the default Rails toolset including the test framework, though at the very end of the book there are some quick examples on using HAML for views.
At the end of each chapter are some exercises. Unlike many other books I found them to be of appropriate difficulty, with answers available online.
The second half of the book is a detailed look at the components of Rails. This is the more technical part of the book as it gets into specifics of how a request is processed and how a response is generated. There’s no bookstore application anymore, it’s just discussion of what’s available, code samples, and diagrams.
Along the way there are some interesting sidebars that explain some of the Rails philosophies or some real world scenarios. For example, one of the sidebars talks about when you want to use a database lookup that raises an exception on failure versus one that returns a nil or an empty set.
I didn't read any of the previous editions so I can't say with authority how much has changed. The book is up to date on the many changes that came in Rails 4, so it is current in that respect. However, there are times when some older terminology, like fragment or page caching, creeps in. This is more a matter of editing than of the book being out of date, as the associated examples are correct. The index is fairly weak: many of the terms I tried to look up, including those featured on the back cover, were not found.
If you’re an experienced Rails developer this book will not help you much. But if you’re looking to get into Ruby on Rails, either from another language or even with a weak programming background, this book is an excellent way to get started. At a little over 400 pages it’ll take you a few weekends to get through depending on your skill level.
Twilio has added more features to their SIP calling since this article was written, and there's a much easier way to connect a SIP phone to Twilio that you should use.
Twilio has supported SIP termination for a while but recently announced SIP origination. This means that previously you could receive calls with SIP but now you can also make calls from a hard phone using SIP instead of using the browser client or going through the PSTN.
It was this second announcement that got my interest. I have an IP phone that I use in my office; currently it's through Les.net, but I like the pricing and interface of Twilio and would rather use them.
For some reason everything I read about SIP and Twilio uses a separate SIP proxy even if they have a compliant SIP device. Even their own blog takes a working SIP ATA and puts it behind Asterisk. I knew I could do better.
When thinking about VoIP, always think in two directions. Inbound and outbound. Sending and receiving. Talking and listening.
Get a number and point the Voice Request URL to your web server. Please don’t use mine.
Your outbound.php script will send some TwiML to dial your phone:
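The output is just TwiML that dials the phone's SIP address; the IP here is a placeholder, and as noted below it must be an IP rather than a hostname:

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Dial>
    <Sip>sip:line_number@203.0.113.10:5060</Sip>
  </Dial>
</Response>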
Note: this part was a lot of trouble. After some packet tracing and some brilliant detective work by Brian from Twilio support, it turns out that the address of the phone in the SIP invite had to be an IP address, not a hostname. With a hostname the phone received the INVITE but always returned 404.
Your phone will need to be on the Internet, either with a public address or with UDP port 5060 port forwarded to it. The “line_number” has to match the name of the line on your phone. In my case, I named my line after my phone number:
One thing to note is that you don’t register your phone to Twilio. I left the proxy address there so that the requests will keep the NAT translation alive. After detailed packet tracing it looks like the Twilio SIP messages originate from different IP addresses so this might not be helping as much as I thought.
At this point you should be able to dial your Twilio number from a regular phone. Twilio will request inbound.php and then do a SIP INVITE to the phone. The phone will accept it and then you have voice!
The first step is to set up a SIP domain in Twilio:
Call it whatever you want, but you’ll need to set the Voice URL.
The script you point it at has to parse the data coming in from Twilio to find the phone number and then issue a Dial instruction to get Twilio to dial the phone and connect the two ends.
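A sketch of the script; the regular expression and the caller ID are assumptions based on the description that follows:

<?php
// Log the raw request parameters (the ob_start ... error_log block mentioned below)
ob_start();
var_dump($_POST);
error_log(ob_get_clean());

// Twilio puts the dialed digits in the Called header, e.g. sip:14165551234@myapp.sip.twilio.com;
// keep the last ten digits, dropping any leading 1
preg_match('/sip:\+?1?(\d{10})@/', $_POST['Called'], $matches);
$number = $matches[1];

header('Content-Type: text/xml');
echo '<?xml version="1.0" encoding="UTF-8"?>';
echo '<Response><Dial callerId="+15555550123">' . $number . '</Dial></Response>'; // use your Twilio number as the callerId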
All we’re doing here is extracting the phone number from the Called header that Twilio sends us, stripping any leading 1’s, and then sending a TwiML response to dial that number. The ob_start through to the error_log is just logging the parameters if you’re interested.
Don’t forget to change the caller ID to your phone number, otherwise you get errors in the Twilio console.
So now when you place a call on your phone, Twilio will send the digits to the application which will return a Dial verb and the proper 10 digit number. Twilio links the two calls.
It took a bit of playing around to get this going but now I’ve shown that you don’t need an Asterisk server to integrate a SIP phone with Twilio. If you are setting up a phone bank or something with hard phones you can just configure them to hit Twilio, and for Twilio to hit them.
Of course, if you are expecting significant inbound traffic the benefit of a SIP proxy is that it can direct calls to multiple numbers without needing Twilio to be able to reach the phone directly. I’m hoping that Twilio can improve on that in the future!
Then you spend a couple of minutes either grepping your source tree or looking on GitHub and then going back to your editor.
This weekend I went through VIM for Rails developers. There’s a lot that’s out of date, but there’s also some timeless stuff in there too. One thing I saw in there was the use of ctags which is a way of indexing code to help you find out where methods are defined.
Install the ctags package with brew/yum/apt/whatever. Then generate the tags with:
ctags -R --exclude=.git --exclude=log *
You may want to add tags to your ~/.gitignore because you don’t want to check this file in.
Also add set tags=./tags; to your .vimrc which tells vim to look for the tags in the current directory. If you have it in a parent directory, use set tags=./tags;/ which tells vim to work backward until it’s found.
Then, put your cursor on a method and type control-] and you’ll jump to the definition. control-T or control-O will take you back to your code. control-W control-] opens it up in a horizontal split. Stack Overflow has some mappings you can use to reduce the number of keystrokes or use vertical splits.
If you use bundler and have it put the gems in your vendor directory, ctags will index those too. So you can look up Rails (or any gem) methods.
There's also a bookmarklet at the bottom of the page. You can drag it to your button bar and check the site you're on for any infection.
Update: I’ve sold the site to someone else and am no longer involved.
Like any popular open source project, Capistrano has gone through changes. Documentation on the Internet is plentiful but often refers to old ways of doing things, so copy/pasting recipes can often result in stuff that doesn't work quite right.
One common thing people need to do is to run some command after the deploy or at some time during the deploy. Capistrano has a very easy way of doing this.
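For example:

after 'deploy:update_code', 'deploy:symlink_shared'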
This says "after running deploy:update_code, run deploy:symlink_shared". The latter is a custom task that's defined elsewhere.
The problem comes in when you look at the way the “deploy” and “deploy:migrations” tasks differ. I’ve seen a common problem where something works when deploying without migrations but not when migrations are used. Usually this is because the hook used is not the right one, either because of bad information or the user figured out where to hook into by looking at the output of a deploy.
If you look at Capistrano’s default deploy.rb you can piece together the tasks that are run in both cases.
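Paraphrased, the two paths look roughly like this:

namespace :deploy do
  # cap deploy
  task :default do
    update   # runs update_code then symlink, inside a transaction
    restart
  end

  # cap deploy:migrations -- note that it does not call update
  task :migrations do
    set :migrate_target, :latest
    update_code
    migrate
    symlink
    restart
  end
end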
From this, you can see that the sequence is somewhat different. The “update” task isn’t used in the migrations case. Instead, the components are replicated.
If you want to hook in, use
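A safe choice is to hang your tasks off deploy:update_code (or deploy:symlink), since those run in both cap deploy and cap deploy:migrations. A sketch:

after 'deploy:update_code', 'deploy:symlink_shared'

namespace :deploy do
  desc 'Symlink shared configuration into the new release'
  task :symlink_shared do
    run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
  end
end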
There are a lot more; run man bash and look for the section called HISTORY EXPANSION.
I started off by using the event_tracker gem which only handles the page view events, but it does make it easier to fire events in the view based on controller actions so it is well worth using. I talked to the author about adding support for track_links and track_forms, but after a good discussion he convinced me that the gem was not the right place for this and that I should pursue something more elegant such as tagging the links.
Ideally, what we wanted to arrive at was something like:
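That is, a link tagged with a class and data attributes (the names here are illustrative):

<a href="/upgrade" class="track-with-mixpanel" data-event="clicked-upgrade"
   data-properties='{"plan": "pro"}'>Upgrade</a>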
or with Rails helpers:
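or, equivalently, something like:

<%= link_to "Upgrade", upgrade_path, :class => "track-with-mixpanel",
      :data => { :event => "clicked-upgrade" } %>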
which would magically call
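that is, the matching Mixpanel call for that element, along the lines of:

mixpanel.track_links("#upgrade-link", "clicked-upgrade");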
One problem is that not all the elements had IDs and I didn’t want the added friction of having to add that in.
What I came up with was:
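A sketch of the JavaScript (jQuery assumed, since it's already on the page):

$(function() {
  $("a.track-with-mixpanel").each(function(i, el) {
    var $el = $(el);
    // track_links needs a selector, so make sure the element has an id
    if (!$el.attr("id")) {
      $el.attr("id", "mixpanel-" + Math.floor(Math.random() * 1000000));
    }
    mixpanel.track_links("#" + $el.attr("id"), $el.data("event"));
  });
});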
So this simply finds all the links with the track-with-mixpanel class, assigns them a random ID if they don’t have one, and then calls the appropriate mixpanel function on them.
A couple of notes, though…
The first is that the data-properties doesn’t quite work right. Pulling the data into a hash requires some finesse that I haven’t been able to figure out.
The second is that track_links and track_forms are horribly broken and will mess up any onClick handlers you have. So if you have links that use :method => :post, or some third-party JavaScript like validations or Stripe, you're better off with the old track method because Mixpanel won't play nicely with them. But for regular links and forms, this is great.
These screenshots explain how to make a "jira" shortcut that will take you to a particular issue in JIRA. The instructions will work equally well for anything where the URL is well known. For example, ticket PR-139 is at https://mycompany.atlassian.net/browse/pr-139. To go there, you'll hit your Quicksilver hot key (e.g. Control-space), then type "jira", enter, then the ticket number, and enter.
First, make sure the Web Search Module plugin is loaded by checking the box in the plugins menu.
Then, add the shortcut to your catalog by adding it as a custom source under Web Search List.
Looking more closely at what you add:
The name is whatever you want to type when first opening Quicksilver. The url is the URL you want to go to, with the dynamic part replaced by ***.
You may have to rescan the catalog from the main Quicksilver menu.
Always looking for feedback and guest/topic suggestions. Have a listen and let me know what you think!
"Deploying Rails" is about more than just deploying a Rails application; it's about that and everything that goes into managing servers, from provisioning to monitoring. This book explains how to do these tasks with the help of some popular Open Source tools and a focus on automation.
The flow of the book is fairly logical. Start by building virtual machines with Vagrant. Learn how to automate configuration with Puppet. Nail down deployment and remote tasks with Capistrano. Monitor the server and application with Nagios and Ganglia. Delve into some side topics like systems administration. Even though there is ample free documentation on all of these topics, this book sets itself apart in two ways.
First is that the tools from previous chapters are used to augment later chapters. You'll learn how to use Vagrant to set up a virtual machine in the first chapter, and from then on when you need a server you'll configure a Vagrant box. You'll learn how to automate configuration management with Puppet in the second chapter, and all successive chapters build on that. You don't simply install Nagios, you write Puppet scripts that install Nagios for you. By the end of the book you have a collection of tools that you can start using in your own real world environment.
Secondly, you’re doing everything on an environment you can build yourself without needing to know how to install Linux, owning spare servers, or knowing how to manage servers. You just install Vagrant and follow the book. The book is heavy on code samples and screen captures – it is the exception to open up to a random page and not see some code or example. You can have a replicated MySQL setup and work on your database recovery practices, destroy it, and know you can rebuild it with a few keystrokes.
Puppet is a large part of this book. Almost every task is done in a Puppet manifest, from installing the web server to setting up monitoring. The authors walk you through creating a couple of simple manifests and then refactoring the code to be more reusable. The basics of Puppet are covered, such as installing packages, starting services, and copying files. Later on Puppet is used to interact with the existing system by managing cron jobs and using templates to edit existing configuration files.
The popular deployment suite, Capistrano, is the topic of two chapters. The first looks at a simple deployment, then goes on to examine roles and adding hooks that automate tasks at points during the deployment. The advanced chapter delves into remote command invocation and parsing, multistage deployments (such as a separate staging and production deployment) and further automation of the deployment. People who have used Capistrano before will not be surprised by much in the basic chapter, but are almost certain to find something helpful in the advanced chapter. It opened my eyes to what Capistrano can do outside of the deployment – it can automate maintenance and support tasks, too.
The last three chapters discuss various topics, from managing multiple Ruby interpreters with RVM to backing up your database and how to manage a master-slave setup. Some of these topics can be books in themselves, though Deploying Rails does a good job at getting you started. Even though the examples throughout the book use Apache and Phusion Passenger, the appendixes have a chapter on using Nginx and Unicorn.
Despite all the remarkable content, I did feel there were some areas that could have been covered. Given the extensive use of Vagrant throughout the book I found it surprising there was no discussion about using it for its intended purpose of managing developers' environments. There's a brief mention that Vagrant can run the Puppet scripts and you can save the step of running it manually, but other than that I found little that would tell the reader that they could reuse the work they had been doing so that all the developers would have a production-like environment in which to work. Similarly, since the environment is well defined the authors were able to make several assumptions in their configuration that would not necessarily work in a typical production environment. Some of these are simple, such as IP addresses and SSH keys being hard coded, but some are more involved, such as how to distribute the Puppet manifests when that's not taken care of by Vagrant. Books, like software, have to draw the line somewhere though.
As a systems administrator turned developer I was encouraged to see this book being released. It shows the ideal marriage of the systems administration mindset, with its relentless focus on automation and monitoring, with the tools available to the modern programmer. In some circles this practice is called DevOps, but even in shops that keep these two separate, this book will benefit both teams.
Long story short, the Pomodoro app lets you run AppleScript when various events happen, so I wrote some stuff to change my HipChat status to DND when I'm in the middle of a work cycle. Here's the code:
tell application "System Events" to tell UI element "HipChat" of list 1 of process "Dock"
perform action "AXShowMenu"
delay 0.5
click menu item "Status" of menu 1
click menu item "DND" of menu 1 of menu item "Status" of menu 1
end tell
All that remains is to insert that into the Pomodoro app through Preferences -> Scripts:
Just note that you have to change “DND” to “Available” for some of the events.
This was my first foray into AppleScript, so it’s possible I’m sending my banking details off to Nigeria, but it seems to work so far.
Edit: you need to enable access for assistive devices from System Preferences -> Universal Access:
Like many Rails people, I use New Relic to monitor my Rails app. At Wave Accounting we even pay for it. It's well worth the money, as you get a lot of extra visibility into your app.
At the standard level, New Relic is pretty good, but sometimes it seems like I’m missing out on something. RPM will show me that a controller is running slowly but most of the time is spent in the controller itself, not really saying what’s happening. I’ve recently discovered a few tweaks that have made a huge difference.
It's pretty embarrassing, but this isn't on by default. It's simple to figure out how much of your time is spent in GC; the only caveat is that you have to be running REE or 1.9.x. This doc explains how, but all you have to do is turn on the stats and New Relic does the rest.
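A sketch of the initializer; the 1.9 and REE variants differ slightly:

# config/initializers/gc_stats.rb
if defined?(GC::Profiler)
  GC::Profiler.enable # Ruby 1.9.x
elsif GC.respond_to?(:enable_stats)
  GC.enable_stats     # Ruby Enterprise Edition
end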
Put that in an initializer, and you get GC activity in almost all your graphs:
If you go to http://localhost:3000/newrelic you will get some in depth information about what’s going on in your app when in dev mode. If you’re using pow then add the following to ~/.powconfig:
export NEWRELIC_DISPATCHER=pow
export POW_WORKERS=1
There’s a wealth of information here.
Your controller does a bunch of stuff, but New Relic reports it as one big block. Block tracing to the rescue!
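A sketch of what that looks like inside a controller; the controller, helpers, and metric names are made up:

class ReportsController < ApplicationController
  include ::NewRelic::Agent::MethodTracer

  def show
    trace_execution_scoped(['Custom/ReportsController/load_data']) do
      @data = load_report_data # hypothetical helper
    end

    trace_execution_scoped(['Custom/ReportsController/build_pdf']) do
      @pdf = build_pdf(@data)  # hypothetical helper
    end
  end
end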
Then, these blocks will be counted separately.
Want to do the same thing as before on someone else’s code, or at a method level? Add a method tracer in your initializer:
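Something along these lines, where the class and methods are placeholders for whichever third-party code you want broken out:

# config/initializers/newrelic_tracers.rb
require 'new_relic/agent/method_tracer'

SomeGem::Client.class_eval do
  include ::NewRelic::Agent::MethodTracer
  add_method_tracer :fetch,  'Custom/SomeGem/fetch'
  add_method_tracer :submit, 'Custom/SomeGem/submit'
end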
Poof, all those are now traced and broken out separately:
You can also trace custom metrics, such as user signups or report views. I’m still working on that, along with monitoring background jobs.
So, here is the meat of the quickstart done up as a Rails 3.1 application.
First, generate the application.
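Assuming Rails 3.1 is already installed:

rails new twilio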
This creates a new Rails 3.1 app in the current directory called twilio. Change to this directory, and add a line to your Gemfile:
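The line is just the official gem:

gem 'twilio-ruby'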
Run bundle install to add the official Twilio gem to your bundle.
Next, head on over to your Twilio account and get your SID and auth token. Those can go in an initializer, config/initializers/twilio.rb:
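A sketch of the initializer; the constant names are my own convention and the values are placeholders:

TWILIO_ACCOUNT_SID = 'ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
TWILIO_AUTH_TOKEN  = 'your-auth-token'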
Those are the magic tokens that let you authenticate yourself to the Twilio API, and more importantly for them, let them bill you.
Next, head on over to app/helpers/application_helper.rb:
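A helper along these lines pulls in twilio.js; the helper name is an assumption reused in the layout below, and the library URL may differ from Twilio's current one:

module ApplicationHelper
  def include_twilio_js
    javascript_include_tag 'http://static.twilio.com/libs/twiliojs/1.0/twilio.min.js'
  end
end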
Then in app/views/layouts/application.html.erb add that helper in the head:
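so the head ends up looking something like:

<head>
  <title>Twilio Client</title>
  <%= stylesheet_link_tag "application" %>
  <%= javascript_include_tag "application" %>
  <%= include_twilio_js %>
  <%= csrf_meta_tags %>
</head>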
Yea, you could have put the code right in the layout, but I like sparse layout files.
Next up, create a controller:
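The controller can start out nearly empty; in this sketch the capability token is generated by the helper shown a little further down:

class ClientController < ApplicationController
  def index
  end
end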
Then add root :to => ‘client#index’ to config/routes.rb so that your new view is displayed in the root url.
Run rails server or whatever you do to start your dev instance and browse to it. You should get the usual “find me in app/views/client/index.html.erb” message. Check the headers to make sure the library is being installed. The rest of the examples then deal with app/views/client/index.html.erb and app/helpers/client_helper.rb.
For the first example you want:
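The helper's job is to mint a capability token; a sketch, with a placeholder application SID:

module ClientHelper
  def twilio_token
    capability = Twilio::Util::Capability.new TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN
    capability.allow_client_outgoing 'APxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
    capability.generate
  end
end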
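And the view hands that token to Twilio.Device; roughly:

<script type="text/javascript">
  Twilio.Device.setup("<%= twilio_token %>");

  Twilio.Device.ready(function (device) {
    $("#log").text("Ready");
  });

  Twilio.Device.error(function (error) {
    $("#log").text("Error: " + error.message);
  });
</script>

<button onclick="Twilio.Device.connect();">Call</button>
<button onclick="Twilio.Device.disconnectAll();">Hang up</button>
<div id="log">Loading...</div>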
For the second example, you just change the view.
For the third example we’ll have to change the helper and the view accordingly:
app/helpers/client_helper.rb (note I’m using my own sandbox id. You get your own inside the Twilio account page!)
Now, hook up a new action in the client controller to redirect the call from Twilio to the app inside app/controllers/client_controller.rb
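One way to write that action; the PhoneNumber parameter and the caller ID are assumptions about how the browser places the call:

def incoming
  response = Twilio::TwiML::Response.new do |r|
    if params[:PhoneNumber]
      r.Dial :callerId => '+15555550123' do |d| # use your Twilio number
        d.Number params[:PhoneNumber]
      end
    else
      r.Say 'Thanks for calling the Twilio Client quickstart.'
    end
  end
  render :xml => response.text
end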
Don’t forget to add post “client/incoming” to config/routes.rb. Then point your sandbox URL to your dev box, such as http://yourhome.com:3000/client/incoming.xml.
As a bonus, here’s a rake task to log in to a remote host and set up an ssh tunnel on remote port 3000 to local port 3000:
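A sketch; the host and user are placeholders:

# lib/tasks/tunnel.rake
namespace :tunnel do
  desc 'Forward port 3000 on the remote host back to port 3000 here'
  task :start do
    sh 'ssh -nNT -R 3000:localhost:3000 user@yourhome.com'
  end
end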
There are two more examples in the quickstart, but as they are more of the same, I'll leave them for another post. I'd also like to try rewriting the JavaScript in CoffeeScript.
Update - Code is at https://github.com/swalberg/twilio-client-ruby