I’d like to share five tips that have really helped me to be more productive and get the most out of my work day. None of these are new or revolutionary but they have made a real difference in how I get things done.
Here are the five tips in brief:

1. Eliminate distractions
2. Master your tools
3. Never stop learning
4. Think first, code second
5. Exercise
I’d like to dive deeper into each of these and share some examples of how I’ve put them into practice. This list has worked for me but it is fairly personal and so I don’t expect it to fit your life exactly. Rather, I hope it can give you a few ideas of things to try as you go ahead with your work day.
Let me know if you have other strategies and tips, and let's learn from each other.
I think the most important thing I have done recently to improve my productivity is to make a concerted effort to eliminate distractions. I’m sure this will look different for everyone but in my case I took some specific actions just over a year ago and so far I’ve not regretted any of them:
Of all the items on this list I put this at the top because I believe it has made the most significant positive difference in my life. And not just in my work either: it has helped me to be more present in moments outside work and ultimately to be a better husband, father and friend.
So my challenge to you is to identify those things that distract you throughout the day and make an actionable plan on how you will reduce or eliminate them.
As a programmer my main tool is my text editor and I probably enter thousands of keystrokes into it each day. If I can become more productive with my editor then the impact on my day-to-day productivity is potentially massive. I personally use a combination of Vim and Tmux and while I am quite proficient at both, I continue to look for ways to improve my workflow and cut down on unnecessary keystrokes.
Another group of tools that I use daily are the programming languages and frameworks that I develop in. I’m always seeking to get a deeper knowledge of how they work and look for ways to write code that is more elegant and more efficient.
My suggestion is to identify the main tools you use everyday and make it a priority to invest time into getting to know them better and becoming more proficient.
I have found that continual learning is just a part of being a responsible software developer. Unless you are only working on legacy systems there is always something new to learn. In my experience the rate of change seems to be increasing each year. Finding time to stay on top of the change has been critical for my career. One practice that I have found helpful is to set aside a specific time every day or every couple days which is devoted to learning. For me this usually happens in the morning and I have even made it a practice to wake up a bit earlier and use that time first thing in the morning to learn.
In the field of programming and software development if you’re not actively seeking to learn and grow you’re actually moving backwards. So think about some of the things you want to learn and block out some time each week devoted to study. One of my favourite resources for learning is The Pragmatic Bookshelf, I highly recommend their books.
When it comes to coding there are those times when it is helpful to just spike something quickly in code and get that immediate feedback. However, I don’t want that to be my default practice. Rather, I find it rarely hurts to take a bit of time before starting to code to sketch out a rough design and/or plan for what I want to accomplish. I can’t begin to count the amount of time that I’ve ultimately saved by just taking a few minutes to think through what I am trying to do before blindly jumping in.
Mindful coding isn't just for writing new code either. Sometimes when I am stuck on a problem I find it helpful to step away from the computer for a few minutes and go for a walk. I find this helps me gain some fresh perspective, and many times when I come back to the computer the solution is already apparent. It's not a magic bullet but it has helped me time and time again.
So to sum up this tip: think first and code second.
In recent years I’ve been reading more and more research into the connection between our minds and our bodies. For me it has been so important to take a holistic view of myself and realize that in order to be at my best mentally I can’t ignore my physical health. As such exercise has been a big part of my work/life balance.
My preferred exercise is long distance running. I find that nothing improves my creativity, helps me think through difficult problems, and de-stresses me better than going for a nice long run. I know running isn't for everyone, so find what works for you. Not only does exercise help you to feel better, but I really do think it helps you to think better too.
I’m certainly no productivity guru but I hope that something here has been helpful or encouraging. Whatever you do, never stop being curious.
Making Tmux and Vim work well together is critical for my workflow. One of my goals is to keep my fingers on the home row as much as possible, and as part of that I wanted to use two conflicting mappings.
I wanted to map CAPS LOCK to ESC in Vim so that I could just hit CAPS LOCK instead of reaching for ESC all the time.

Then I also wanted to map CAPS LOCK to CTRL so that in Tmux I could use CAPS LOCK as part of the Tmux prefix.

These are two potentially conflicting key mappings, as I want to use the CAPS LOCK key for two different operations.
Here is how I did this on macOS Sierra.
First, open up your system keyboard preferences and open the "Modifier Keys" pane.
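To sketch the Tmux half of this setup: once Caps Lock is remapped to send Control system-wide, the Tmux prefix can be bound to a Control chord. The snippet below is a minimal illustration, not the exact configuration from this post; the choice of C-a as the prefix is an assumption, so substitute whatever chord you prefer.

```
# ~/.tmux.conf — minimal sketch, assuming Caps Lock now sends Control.
# Use C-a (i.e. Caps Lock + a) as the Tmux prefix instead of the default C-b.
unbind C-b
set -g prefix C-a
bind C-a send-prefix
```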
When reading a lot of books, I have found it's important to have a practice of underlining and note-taking while reading in order to really make the important ideas in the book stick. In addition, I have found it helpful to go back and review my notes a couple of weeks after finishing the book to aid retention and help me apply what I've learned.
I thought it might be fun and helpful for me to take a look back at the books that I read in 2016 and make a list of the top 5 that I really enjoyed or that had the biggest impact on me.
The Checklist Manifesto, by Atul Gawande, is a beautifully written and thoroughly convincing case for the use of checklists in the daily life and work of professionals.
On the surface this may appear obvious, just basic common sense, but in fact many consider the use of checklists a trivial exercise that is beneath them. Highly trained individuals, especially, can tend to look down on checklists as something for beginners, not for experts like themselves.
The book counters those notions with incredible stories of how checklists are making a real difference in many industries, including health care, aviation and finance. They are not only helping to save time and money but even saving lives.
The book was a joy to read; Atul is a masterful storyteller and an incredibly gifted and engaging writer. The reason I put it on this list is that in addition to simply being a great book, it really impacted my life and got me to be more disciplined in my use of daily checklists in my work as a software developer. Not only do they help me avoid errors, but primarily they get me to work thoughtfully and with intention. Before I start, I think through the work and make a checklist outlining what I need to do and how I will accomplish it. I have found this increases both my focus and my productivity, making much better use of the time I have.
A Walk in the Woods is an account of Bill Bryson’s attempt to walk the Appalachian Trail with his friend Stephen Katz. The Appalachian Trail (aka the A.T.) is an incredible hiking trail in the eastern United States that stretches from Georgia to Maine, going through a diversity of terrain over 3,500 kilometres.
I personally love hiking and so I was drawn to this book as our hiking season here in British Columbia was coming to a close. The book did not disappoint. I’m not sure if I’ll ever hike the A.T. myself but I came away from this book with the desire to just do more hiking and spend more time out in nature.
I'm certainly not the first to say that Gilead is a masterpiece of modern fiction. It is a novel of incredible beauty and deep spiritual insight. It draws you into its world slowly and wraps you up in the story and the characters.
Suffice it to say, I loved this book. It's almost like reading poetry; the narrative is so beautiful and the characters are so nuanced and complex that you almost feel like they are real people. It's a story from a different time and a very different place to my own. I love books that can transport you into a different world and bring you into new experiences that you would never have otherwise.
I had to include at least one technical book in this list. I read quite a few programming and software design books in 2016, but Avdi's Confident Ruby really stood out from the pack. It's a fairly easy read but incredibly deep. As I work primarily with Ruby in my day-to-day job, there were take-aways and principles I could begin to apply immediately as I went from chapter to chapter. It made me a better programmer and I had fun in the process; you can't ask for more than that from a programming book.
If you are a Ruby programmer and have not yet read it, I would highly recommend it.
The subtitle for this book is "discovering the secrets of the fastest people on Earth", and the book takes you into the relatively unknown world of some incredible Kenyan runners. Although, spoiler alert: through the course of the book Finn discovers that there really is no one secret to the success of the Kenyan runners. Life is never that simple. That said, there were some incredible insights into how the Kenyans have come to dominate the world of long distance running.
I am a runner and so I love to read books that tell stories about other runners and inspire me to go further and faster and be more consistent and disciplined in my running practice. Running with the Kenyans hit me on all those levels and I found myself soaking in the narrative which then translated into my actual running.
In the book Finn moves with his family to a small village in Kenya called Iten where so many of the world class long distance Kenyan runners train. Before reading this book I didn’t really understand how dominant the Kenyans are in the field of long distance running and so that was eye-opening. As I read Finn’s heartfelt accounts of his time in Kenya I couldn’t help but feel a longing to be able to take the same journey he did and run alongside the Kenyans.
Regardless of whether you are a runner or not I would highly recommend the book.
I didn't really think about why I was using social media or the potential drawbacks of doing so. I pretty much just embraced it and in some ways got a bit addicted to it.
As a software developer I work primarily in front of a computer screen and over the years I noticed that I would have to fight the urge to check my Twitter feed during the work day. When my mind would want to take a break from the current problem I was working on, the temptation was to just open up Facebook or Twitter and take a quick look at what was happening there on my timeline.
These were warning signs, indicators that my use of social media wasn’t exactly healthy. However I think the real alarm came as I realized that I would be checking Twitter on my phone during times that I would be hanging out with my kids. I’d be sitting with my son as he practiced piano and instead of focusing on his playing I would unlock my phone and glance at my Twitter feed.
It was like I couldn’t just live in the moment, I was constantly being pulled away from the present and chasing the endorphin fix of the social stream.
Thankfully I wasn't completely oblivious to these problems; in fact, I have been aware of this struggle for years now. I would try to make changes to combat it. There were times I would be more disciplined and not give in to these pulls towards distraction. But then I would get lazy and fall back into the same old patterns.
The one thing I didn't think I could do was actually delete my social media accounts. I needed to stay connected; I just needed to be more self-controlled. At least that is what I thought, until I watched a TED Talk called "Quit social media" by Dr. Cal Newport.
In this video, Cal Newport doesn't pull any punches: he tells it like it is and makes a very strong case to simply quit social media.
It was a wake up call…
I deleted my Instagram and Facebook accounts
These were my two biggest distractions and held the least value; two weeks later I have no regrets.
I decided to keep my Twitter account for now but drastically change how I use it.
I deleted the Twitter app from my phone and I am trying the practice of only checking Twitter a maximum of twice a day - once in the morning and once in the evening. (I also unfollowed about 100 users which minimizes the noise). I’m planning to see how this works and make further changes if necessary but so far it’s been working well and I find many days I don’t even check Twitter at all.
It’s only been about two weeks since I did this but already I am sensing a world of difference. Here are a few that come to mind:
I also have the sense that these changes are just the beginning. I want to live a life with more clarity, focus and mindfulness. Quitting social media is only a part of the puzzle in this quest. But for me, it was a key decision that I think will ripple out into other areas of my life.
While I find value in keeping the shortness of life in perspective, I also try to not dwell on the passing moments but to focus on what I can do in the time that I have ahead of me. As such, one thing that I do appreciate about a new year is that it always gives a chance to reflect on the past year lived and plan ahead for the coming one.
One thing these reflections can lead to is New Year's resolutions.
Did you make any this year?
I know for me while I have made many in the past, I have always struggled to actually maintain them and have them impact my life for the long term.
My typical pattern has been a few weeks of consistency, followed by another couple of weeks of inconsistency, ultimately ending in the abandonment of the resolution.
I don't think mine is an atypical experience. New Year's resolutions tend to start off with the best intentions, but they just don't last.
In my own life I have wrestled with this desire for self improvement and yet dissatisfaction with New Year's resolutions as a way to achieve positive, long-term change. Through this process I have gradually come to some helpful realizations which have made all the difference for me.
These realizations have caused me to re-evaluate how I approach self improvement and personal development. While they may not be a universal remedy for the problem, I hope the following observations will at least be helpful for you as well.
From my own personal experience, I think the problem hasn’t been resolutions as such, but rather the fact that my resolutions tended to be vague and broad sweeping in their scope.
In other words, I didn't think through and establish a specific plan for how I would implement the resolution. I would simply declare some bold and decisive change, such as "cut out all sugar."
January 1st would come and I would start cutting out all sugar and it would go okay for a while, but it wouldn’t take long before such a drastic and difficult to maintain change would unravel and I would end up back at square one.
Now, it’s probably somewhat an issue of semantics, but what has helped me is to stop thinking in terms of resolutions but to rather think in terms of goal setting. So instead of a vague resolution such as “lose 10 pounds”, I would rather set a goal such as “run a 10K”.
And here is the key…
I wouldn’t just set the goal and leave it but I would make a specific plan as to how to accomplish it.
In this example I would find a local 10K race for the spring, register in the race and then build a training plan that would get me ready for the race. It just so happens that by training for a 10K, I’d probably lose a few pounds along the way, maybe even more than 10!
The difference is that now I have something tangible and attainable, a specific goal that is doable and I can not only work towards it but know when I’ve completed it.
I basically quit making New Year's resolutions several years ago; instead, I simply do the following throughout the year:
I think the key to this approach is knowing how to set good goals. One set of criteria that I have found helpful when making and reviewing goals is the SMARTER criteria, a handy mnemonic from the world of project management that stands for Specific, Measurable, Achievable, Relevant, Time-bound, Evaluated and Reviewed.
As much as I like the SMARTER criteria you can keep it simpler than that. Do what works for you, that’s the most important thing. I think the key is to start small and win little victories, building confidence before taking on larger goals.
This is the approach I’ve taken over the past several years and I can’t tell you the incredible difference it has made in my life.
Another helpful tip is to not wait for January 1st to set new goals. The new year is a good time to review all the goals from the previous year and either revise them or set new ones, but this is a practice I like to do much more regularly, typically on a weekly basis.
I like to use Sunday evenings as a time to review my past week and plan for the week ahead and I find it also is a good time to briefly review my longer term goals. Basically I determine if I’m falling behind and if I need to change anything to keep them on track.
Maybe new goals will come to mind as well and so I’ll add them to my list and start to work on a plan for accomplishing them. If you don’t want to commit to a weekly review then at least monthly is a good idea.
Thanks to this approach, in reflecting back on this past year of 2016 I can point to specific goals that I was able to accomplish.
One of my primary goals in 2016 was to run in a 10K race. I’ve always been a casual runner but I wanted to take it up a notch this year and set some specific targets in my running.
And so back in January I registered for a 10K that was to take place in June, plenty of time to train up and be in good shape for the race.
Despite dealing with some minor injuries due to running too much too soon, I got in the training and the race was a wonderful and rewarding event.
After the race I enjoyed it so much that I ended up registering for a second 10K in October and continued my training throughout the year. The second run was even better and I improved my time by several minutes.
Now with 2017 on the horizon I'm looking to increase my running goals further, and so I have registered for a 10K in February and then a half marathon in April. I've never run a half marathon before and so it's a bit intimidating, but with a specific target and a plan, it now becomes attainable.
The key for each of these races is that I already have a date on the calendar and I've paid money to register for each one. Paying for something ensures I have skin in the game; it greatly increases my level of commitment. In addition, for each of the races I've got a training plan, and I'm working through those on a daily basis, ensuring I stay on track.
This process of setting attainable goals and then crafting a plan to make them a reality has been life-changing for me. If you've never done it and yet you've got a list of failed resolutions, it is something you might want to consider. If you do end up trying out the exercise, be sure to let me know and we can encourage each other along.
Thankfully we don't have to build this functionality from scratch as there is a great Ruby gem called Whenever that allows us to set up cron jobs from within our Rails apps using Ruby code. In this blog post I'll cover how you can schedule your background jobs in Rails using Whenever to set up your schedule, along with Sidekiq to run the actual background jobs.
To get started simply add the whenever, sidekiq and sidekiq-client-cli gems to your Gemfile:
```ruby
gem 'whenever', require: false
gem 'sidekiq'
gem 'sidekiq-client-cli'
```
After updating your Gemfile, run bundle install to install the gems. I should also point out that Redis is a required dependency of Sidekiq, so you'll need to get it installed in both your development and server environments. For further details on using Redis with Sidekiq, please check out the Using Redis page on the Sidekiq wiki.
Also, if you were wondering why we need the sidekiq-client-cli gem in addition to sidekiq: it is actually the key piece of the puzzle. The sidekiq-client-cli gem is a command line client for Sidekiq that allows our cron jobs to interact with Sidekiq; without it our cron jobs could never execute a Sidekiq worker.
Now that we have all the required gems installed we can get started with creating our background workers.
To do that I create a folder called app/workers in my Rails application and create a simple class there for each of my background jobs. In order to run these jobs in the background with Sidekiq you just need to include the Sidekiq::Worker mixin and create a perform method which contains the actual work to be performed.
Here is an example of what your class might look like:
```ruby
class CronJob
  include Sidekiq::Worker

  def perform
    # the actual work to be performed goes here
  end
end
```
Once your background job is created and ready to be executed we can now turn to whenever and set it up to run the job on a predetermined schedule.
To get started with whenever, you can run the wheneverize command in your app's root folder to set up an initial configuration:
```shell
$ wheneverize .
```
The wheneverize command will create an initial config/schedule.rb file for you. I recommend opening it up and taking a look at some of the default configuration options to get an idea of how whenever works.
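To give a feel for what whenever is doing under the hood: its Ruby DSL ultimately compiles schedule declarations down to standard cron expressions. The helper below is purely illustrative (mins_to_cron is our own name, not part of the whenever API, and whenever's actual generated output may differ in form), but it shows the kind of translation involved for simple minute intervals:

```ruby
# Illustrative only: map an "every N minutes" schedule to a standard
# cron expression, roughly the translation whenever performs for
# sub-hourly intervals.
def mins_to_cron(minutes)
  # cron's */N step syntax only lines up cleanly when N divides 60
  raise ArgumentError, "interval must divide an hour" unless (60 % minutes).zero?
  "*/#{minutes} * * * *"
end

puts mins_to_cron(30)   # => */30 * * * *
```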
Below is an example of a config/schedule.rb that executes our CronJob worker's perform method once every thirty minutes. However, before we look at the configuration, here are a couple of things to note:

- We define a custom job_type called :sidekiq, which makes sure the job is sent to the sidekiq-client binary.
- We never call the perform method ourselves; Sidekiq expects that method to exist on the worker, and so it's assumed to be there.

In the earlier example we created a class called CronJob, and so that is what we pass along to the sidekiq job type:
```ruby
# Hand the named worker class to the sidekiq-client binary
# (the exact command string may vary with your setup)
job_type :sidekiq, "cd :path && bundle exec sidekiq-client push :task"

every 30.minutes do
  sidekiq "CronJob"
end
```
As you can see whenever makes it incredibly easy to schedule your background jobs. Of course this is just a very basic configuration and you can get much more complex as needed. However, one additional configuration that you will need is for your deployment pipeline to update your server’s crontab with your whenever schedule.
I personally use Capistrano to deploy my Rails apps, and thankfully it makes it very easy to set up your deployment process to update your server's cron jobs based upon your whenever configuration. Here is an example from my config/deploy.rb which sets up a Capistrano task called update_cron to update my crontab with whenever.
```ruby
# config/deploy.rb (Capistrano 3 style; the task body is reconstructed)
namespace :deploy do
  desc "Update the server crontab from the whenever schedule"
  task :update_cron do
    on roles(:app) do
      within current_path do
        execute :bundle, :exec, :whenever, "--update-crontab"
      end
    end
  end
end

after "deploy:finished", "deploy:update_cron"
```
The update_cron task is set to run after Capistrano completes the deployment, so that it uses the latest version of config/schedule.rb for its configuration.
After setting up that capistrano task you can run it directly and then check on your server to verify the updates to your crontab.
```shell
$ cap production deploy:update_cron
```
After the task runs successfully you can log in to your remote server and view your crontab as follows:
```shell
$ crontab -l
```
Assuming everything is hooked up correctly you should now be seeing that your server’s crontab mirrors your whenever configuration.
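For reference, a whenever-managed crontab entry typically looks something like the following; the app name and paths here are made up for illustration, and the exact schedule string whenever emits may differ:

```
# Begin Whenever generated tasks for: myapp
0,30 * * * * /bin/bash -l -c 'cd /var/www/myapp/current && bundle exec sidekiq-client push CronJob'
# End Whenever generated tasks for: myapp
```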
There is much more that could be explored here, however I hope this gives a good overview on how to get started. Let me know in the comments if you have any further questions or any feedback on the article.
Thankfully all my concerns were unfounded and it ended up being much easier than I could have imagined. The kids were all excited to learn coding and they latched on to the material quickly. What was clear to me is that they loved it and they wanted more. What was also obvious is that coding is currently missing from their elementary school curriculum. While I was glad for the opportunity to share with the kids, it seemed unfortunate that they weren't already being taught these skills.
Coming away from this experience, not only was it obvious that the kids wanted more, but I also wanted more. It was a learning experience for me and a very rewarding one at that. If you are a developer, I would encourage you to look for opportunities in your local community where you can share your knowledge and expertise with kids.
One of the difficult parts of teaching kids how to code is figuring out exactly what to teach and how to teach it. When I first learned how to program, there were generally two programming languages in use: BASIC and Pascal. I learned both of those in my high school computer science classes, and looking back I don't think either was a very good option. Thankfully there has been a lot of research and development done in this area, and today there are much better options.
Let’s take a look at some of the options that are available today.
Scratch is a web-based visual programming language. It was specifically designed by educators at MIT for the purpose of teaching kids to code. They even have Scratch for Educators, a section of their website dedicated to instructors who want to use Scratch in both formal and informal learning environments. It is the language that I ended up using for this class of Grade 5 & 6 students and it worked out great.
One of the big advantages of Scratch is that everything can be done in a web browser, so you don't need to install anything to get started. This is important when you come into a situation like mine, where the school has a locked-down computer lab and it would be a massive undertaking to get the IT department to install any custom software on the computers.
A great place to start finding out more about Scratch is this TED talk given by Mitch Resnick on “Let’s Teach Kids to Code”. Mitch is the head of the Lifelong Kindergarten group at MIT Media Lab which has been developing Scratch.
After watching the talk I would encourage you to head on over to the Scratch website and try it out yourself. It's completely free.
The Hour of Code is a great resource for teaching kids to code. They have done all the hard work of preparing resources and lessons for anyone who wants to get started doing so. They have put together a variety of hour-long tutorials for students of all ages, so all you have to do is pick one and go for it. They even have a how-to guide for teachers, handouts to help spread the word and all sorts of other learning and promotional resources.
Several of the tutorials at the Hour of Code, such as the Angry Birds tutorial, use a simplified version of Scratch called Blockly. In my limited experience so far, this seems to be the best place to start introducing kids to Scratch. Once they go through a couple of these tutorials, they can easily graduate to the full Scratch environment.
CoderDojo is a global network of free computer programming clubs for young people. The basic idea is to get a bunch of young people together with some volunteer technical mentors and see what they can create. They do everything from Scratch to web development to programming the Raspberry Pi. When I was first introduced to CoderDojo I was inspired by how just a couple of people started this amazing global movement. Each club is volunteer-led and community-based, and they have taken off at an amazing rate in the U.K., Europe and the United States.
The incredible thing about CoderDojo is that they are filling the gap that currently exists between the education system and computer literacy. It is a gap I have recognized here in the Canadian education system, but CoderDojo shows us it is clearly a world-wide one. Instead of becoming overwhelmed by the scope of the problem, CoderDojo helps us along the path forward, and for that I am very grateful.
Unfortunately, for those of us in Canada there were only 5 clubs the last time I checked, and there aren't any near me in Vancouver. However, this is a solvable problem, and so I have started to look into what it might take to start one here. For me it would be a great way to continue what was begun at my local elementary school a couple of weeks ago.
This resource was sent in by Katie (thank you Katie!), who is a reader of this site and is currently learning to code. It’s a great one! It is called Computer Science Fun for Kids and it is a fantastic list of educational games and activities which teach you how to code while having fun.
This is a big and diverse list, so I invite you to check it out and explore the recommended resources. I think this is a wonderful starting point for anyone who wants to learn to code but feels intimidated by the prospect and is unsure how to begin. I also think that, combined with the other resources out there, it is another way to keep learning and growing. Thanks to Katie, I'm definitely going to be sharing these games and activities with my kids, and I think several of them would work great in a group or classroom setting. So if you want to teach kids to code, check out this list and I'm sure you'll find something you can use for your next lesson.
–
Those are just a few of the available resources that are out there if you want to find out more about teaching kids to code. If you are a developer getting involved in educating young people about programming is not only an opportunity to pay it forward but also a wonderful way to grow and develop in our craft. I know nothing that helps me grow and learn faster than trying to teach others. I hope you will take up the challenge!
I've been a regular listener of podcasts ever since.
Lately I've had a daily routine of going for walks and listening to podcasts. It's a great way to get some exercise and learn new things at the same time. I specifically use the time to listen to podcasts related to my profession as a software developer. The time spent doing so is invaluable; it has helped me stay on top of new technology while getting inspired and motivated to keep improving my craft. It keeps me learning and helps guard against burnout and learning fatigue. As someone who works from home, it's easy to feel isolated and get lost in your own echo chamber. Podcasts help me feel connected, and I feed off the energy and excitement of the hosts and their guests.
I think most importantly podcasts have made me a better developer.
Listening to world class developers regularly discuss their craft has been truly inspiring. If you've never added podcasts to your developer toolkit I would recommend you give them a try. If you're not sure where to start, I'm going to share a bunch of my current favourites below. I always appreciate it when others share some of their favourite podcasts, and so I hope this list is helpful for someone.
The Changelog is more than just a podcast. It's also a blog and a set of top-notch newsletters (Weekly & Nightly), and they have a membership which gives you some great benefits, including a members-only Slack room. The content delivered by The Changelog is consistently high quality and their guests are some of the best in the open source world. One of my favourite recent episodes was a conversation with DHH on 10+ Years of Rails; that one is a must-listen. If you are only planning on subscribing to one programming podcast, I would recommend you start here.
As a developer I find it essential to be always expanding my perspective on programming paradigms and toolsets. Most of my daily work is in Ruby, which is an object oriented language, but over the past several years functional programming has increasingly come onto my radar. This podcast has been a fantastic way to learn about different functional programming languages and hear from those who are using them at their workplace and in various projects. If you've wanted to learn more about functional programming, Functional Geekery is a great way to start down that path.
If you are a Ruby developer, this podcast should be an essential part of your listening habits. The panel format, led by Charles Max Wood, is excellent, and the diversity of the panel always leads to great insights and new ways of thinking. One of the highlights of the show for me is “the picks”, in which the panelists and guests share interesting and helpful things they have come across recently. It could be a product, a service, an article, a conference talk, or pretty much anything worth sharing. I’ve discovered some amazing stuff thanks to the picks. All in all, it’s a fantastic podcast, and it’s also very friendly to new Rubyists, so if Ruby is something you’ve wanted to check out, the Ruby Rogues is a great way to do so.
The Ruby on Rails Podcast is a long-running show which ended up on quite a long hiatus, but it was resurrected in 2014 by Sean Devine. Sean is a fantastic host and always has an interesting guest on. I’ve really appreciated the fact that Sean doesn’t just stick to Rails but ventures into topics such as Ember and how to integrate a Rails API server with an Ember front-end. If you are a Rails developer, then this podcast is an excellent way to hear from other Rails developers and keep on top of what is happening in the Rails world.
This podcast is a fairly recent discovery for me, but a very welcome addition to the queue. The distinctive element of Developer Tea is that the episodes are relatively short, typically anywhere from 5 to 20 minutes in length; basically, a length that can be listened to during a tea (or coffee) break. The host, Jonathan, manages to pack a lot of great content into the shorter format, and so I always come away from an episode with some helpful insights or information on new tools and techniques.
Last but not least is the excellent Giant Robots Smashing into Other Giant Robots podcast. This is just one of the excellent podcasts created by ThoughtBot (they have others and I would recommend those as well). What I love about Giant Robots is that it’s a technical podcast that regularly ventures into a wide variety of topics, including design and business, areas that are outside my expertise. Many times I find myself learning about new things, or new ways of thinking and solving problems. Ben Orenstein is the usual host, and I really appreciate how he interacts with the guests.
Well, that’s my list. If anyone has any other podcasts they would like to share please do so in the comments below.
]]>My interest in Conway’s Game of Life was recently piqued when I read Corey Haines’ excellent little book Understanding the Four Rules of Simple Design. I highly recommend the book to any software developer who is seeking to improve their skills in the craft of software design. The book details the lessons learned from applying the Four Rules of Simple Design (which were first articulated by Kent Beck) to Conway’s Game of Life.
In reading the book and going through all the examples, which are based around the Game of Life, I realized that I had never written an implementation of the Game of Life myself. I thought it was high time to go ahead and do that. Since my current programming language of choice is Ruby, and since the examples in Corey’s book use Ruby, I decided to implement my version in Ruby as well.
I used a test-driven approach to writing the code with RSpec, and also used the Gosu 2D game development library to generate the nice looking output. It was a very worthwhile exercise and I found it got me thinking about different programming problems than I normally do in my day to day work. Plus watching the end result as cells are born and die is a lot of fun.
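The core rules are small enough to sketch in a few lines of Ruby. This is not my actual implementation (which uses RSpec and Gosu), just an illustrative version of a single generation step; names like `next_generation` are my own:

```ruby
require 'set'

# The eight offsets surrounding a cell
NEIGHBOUR_OFFSETS = [-1, 0, 1].product([-1, 0, 1]) - [[0, 0]]

def neighbours(cell)
  x, y = cell
  NEIGHBOUR_OFFSETS.map { |dx, dy| [x + dx, y + dy] }
end

# Compute one generation: the world is just a Set of live [x, y] cells
def next_generation(live_cells)
  # Count live neighbours for every cell adjacent to at least one live cell
  counts = Hash.new(0)
  live_cells.each { |cell| neighbours(cell).each { |n| counts[n] += 1 } }

  counts.each_with_object(Set.new) do |(cell, count), next_gen|
    # Birth on exactly 3 live neighbours; survival on 2 or 3
    next_gen << cell if count == 3 || (count == 2 && live_cells.include?(cell))
  end
end

# A "blinker" oscillates between a horizontal and a vertical bar of 3 cells
blinker = Set[[0, 1], [1, 1], [2, 1]]
puts next_generation(blinker).inspect
```

Representing the world as a sparse set of live cells, rather than a fixed grid, keeps the step function short and lets patterns grow without worrying about board edges.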
You can find the resulting implementation on my GitHub account. As I note in the Readme, the code is indebted to another Ruby implementation of the Game of Life, which I leaned on heavily for inspiration. Eventually I would like to try re-writing the implementation from scratch and work in some of the exercises from Corey Haines’ book at the same time.
Please send me any feedback or ideas on how I could improve the code (or just open a pull request).
If you’re a programmer and haven’t yet implemented the Game of Life in your language of choice, I would highly recommend giving it a try.
]]>As a software developer I feel a continual pull towards perfectionism in my craft. It comes down to wanting to produce elegant software designs that are executed with clean code, resulting in quality software that is easy to maintain and adapt to future requirements. Essentially I like to write code that I wouldn’t be ashamed to share with other programmers whose work I look up to and admire.
This desire to write good code doesn’t just appear naturally. I think it is something that has been developed and encouraged by having great mentors, reading good books on software development and listening to master programmers, those who have honed the art of software development. What I love about listening to great programmers is that I always feel the push to do better, to improve myself and the code I write. This is the source of the pull toward perfectionism for me, and I’m very glad for it.
Overall this is a great thing, because I want to grow and improve; I want to learn from my mistakes and know I can do better in the future. However, there is also a dark side, which can be debilitating.
This dark side causes me to doubt that I’m doing things right and makes me unable to start coding until I’ve figured out the perfect design. Because I’ve so often seen how others do things “right”, my own code can sometimes seem to fall far short in comparison.
The dark side means that the drive to perfectionism can also be a drive to inaction, and can lead to an inability to execute.
It can lead to a kind of procrastination, in which I feel like I am being busy, searching for the best design or current “best practice” on how to implement some detail. But in reality I’m wasting time, not producing anything of value. It can lead to a vicious cycle where having more options makes it harder to make any decisions on how exactly to get started.
When I find myself caught in this trap of inaction, I find the key to getting over the hump is to think small. Instead of trying to get the perfect big-picture design, I just start working on something small yet valuable. I think about the smallest piece of functionality that would move the requirements forward and then implement that. Generally, in my experience, it’s impossible to get the perfect design without trial and error, without building something which can then be improved iteratively.
I have found this is one of the keys to agility in software development. Never get bogged down by trying to build some perfect work of art on the first try, but rather move forward in small steps, always iterating on and improving the design. Don’t get overwhelmed by trying to understand the big picture; take a deep breath and break your procrastination by doing something.
It’s amazing how getting something done, no matter how small, can really help to move you forward.
]]>The key dependency which pdf-forms utilizes for working with PDF files is PDFtk, a command line tool for interacting with PDFs. As such, in order to use pdf-forms you’ll need a binary of PDFtk which pdf-forms can access. On my local dev environment this was no problem, just a matter of downloading and installing the binaries for my platform. However, it was a different story when it came to deploying the app on Heroku and getting it working there.
It took a bit of digging but I eventually got it working and thought it would be worth sharing here in case someone runs into the same issue.
The following code worked for me, but you might need to make some changes to get it working on your system. If you have any questions, don’t hesitate to comment below and I’ll do my best to help.
The first step is to download a binary of pdftk that will work on Heroku and add it to your Rails app:
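Roughly speaking, you want to vendor a Linux build of pdftk inside the app itself, along these lines (the download URL below is a placeholder, and the vendor path is just my convention):

```shell
# Create a place in the app for the binary
mkdir -p vendor/pdftk/bin

# Download a pdftk build compiled for a 64-bit Linux environment like
# Heroku's (this URL is a placeholder -- substitute a real binary source)
curl -L -o vendor/pdftk/bin/pdftk https://example.com/pdftk-linux-x86_64
chmod +x vendor/pdftk/bin/pdftk

# Commit the binary so it deploys along with the rest of the app
git add vendor/pdftk
git commit -m "Vendor pdftk binary for Heroku"
```

The key point is that the binary must be built for the platform your dynos run on, not for your local machine.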
Once that is done, you can push your changes up to Heroku and then set the necessary environment variables so that Heroku knows where to find this new binary:
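The trick is to get the vendored directory onto the PATH your dynos use. Something like this should work (the paths assume the binary was placed under vendor/pdftk/bin as above; on Heroku your app is mounted at /app):

```shell
# Prepend the vendored directory to the PATH the dynos use, keeping the
# standard system directories after it
heroku config:set PATH="/app/vendor/pdftk/bin:/usr/local/bin:/usr/bin:/bin"
```

If your particular pdftk build depends on shared libraries, you may also need to vendor those and point LD_LIBRARY_PATH at them in the same way.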
Finally, you can confirm that pdftk is now working on your Heroku instance by running bash and trying it out:
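For example:

```shell
# Start a one-off dyno with an interactive shell
heroku run bash

# ...then, inside the dyno, check that the binary resolves and runs:
which pdftk
pdftk --help
```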
If pdftk is working, you should see a bunch of help output from pdftk, which means you are now good to go. Your deployed app should be able to work with PDFs and generate beautifully filled-out PDF forms.
]]>To some (and I would fit into this category), this is an attractive element of the job, as it keeps things challenging and the work rarely gets boring. To others it is a wearying trend, knowing that what you learn this year will most likely be obsolete by the next, or at the very least will have gone through significant evolutionary development. If that’s you, then you probably want to find another career, because it’s not slowing down anytime soon.
My interest in the field of education is not limited to my own self improvement, but rather it extends out into a passion of mine, which is to teach others about technology, and specifically programming. As such, I take a keen interest in various websites, courses and tools which have been created to teach programming. Throughout the years of using and interacting with various educational materials, I’ve found that I’ve always been left wanting, but I never felt I could express what exactly was missing. Thanks to Bret Victor’s incredible article Learnable Programming, that has now changed.
Bret’s article begins by asking a question:
How do we get people to understand programming?
He then uses the Khan Academy’s programming courses as an example of programming education gone wrong. I found it helpful that Bret chose a popular example like the Khan Academy, since their approach is a very common one. Thus the critique isn’t aimed specifically at the Khan Academy, but rather at the popular style of teaching programming that they epitomize. From there the article critiques various aspects of this teaching approach while showing how it could be done better, much better.
In Bret’s words:
- Programming is a way of thinking, not a rote skill. Learning about “for” loops is not learning to program, any more than learning about pencils is learning to draw.
- People understand what they can see. If a programmer cannot see what a program is doing, she can’t understand it.
Thus, the goals of a programming system should be:
- to support and encourage powerful ways of thinking
- to enable programmers to see and understand the execution of their programs
This is truly a brilliant article, one of the best I’ve ever read on how to teach programming. It should be mandatory reading for any educator in the field.
So please do yourself a favour, stop whatever you are doing and read Learnable Programming today.
]]>About a year ago I took a class entitled Advanced Object Oriented Software Development, a significant portion of which was spent drawing UML (Unified Modeling Language) diagrams for the software we were designing. I needed to get up to speed with UML in short order and the materials given by our professor were painfully obtuse. To the rescue came this fantastic book UML Distilled, the contents of which are indeed true to the title. It takes a very large and dense subject like UML and distils it down to the essential aspects. If you are a working software developer and want to start utilizing UML in the design of your projects, then this is the book you should get.
Now, anyone who has some serious experience in software development is probably familiar with the name Martin Fowler. He is probably best known for his contributions to the classic book Refactoring: Improving the Design of Existing Code. In UML Distilled, Fowler is true to form and not only gives the lowdown on UML itself, but keeps the material interesting and compelling thanks to the invaluable opinions and experience he shares throughout. Whether you agree with all of Fowler’s opinions is beside the point; I really appreciate such anecdotes, as they get me to think about my own opinions. They also help make the book an enjoyable and entertaining read, which is quite a feat considering that it is a book about UML, typically a pretty dry subject.
So, regardless of your experience, whether you are already using UML regularly or simply want to learn more, this is a book you will want on your shelf. However, maybe you’re wondering why you would even want to know anything about UML, let alone use it day-to-day. I’ll admit UML is not for everyone; however, speaking from my experience, it has proven to be a very useful tool in my toolbox.
As a programmer, I do find it very helpful to draw diagrams when I’m thinking through and planning my software designs. This is especially the case when I’m documenting a design in order to communicate it to others. This is really where the strength of UML comes in, because instead of working with some ad hoc design notation, you work from an industry-wide standard.
This in turn has two far-reaching benefits:
Now of course, the benefits don’t stop there, but for me those two alone made it worth the investment of learning UML. With UML Distilled coming in at a concise 160 or so pages, it really won’t take long to be able to start using UML and realizing its benefits.
]]>I can’t remember a time of life when I have been more busy than the last year or two. It certainly has been a self-inflicted form of busyness, but it is busyness nonetheless. This is what you get when you work full time, go to university part time to finish off a degree, have a growing family that you want to spend time with, and yet still have interests, hobbies and somewhat of a life.
Thankfully the university portion of the schedule will be ceasing in about 2 weeks. Yes, after two years of running myself into the ground, a light is now visible at the end of this tunnel. I’m not sure exactly what will come of this degree apart from the satisfaction of finishing it. But to be honest, I’m quite happy with that outcome alone and don’t really need anything more than that. Once it is done, and the busyness ebbs down a bit, a bit of rest will certainly feel well-deserved.
Although, as my wife loves to remind me, I can’t help but stay busy, and so when I finish this degree there will inevitably be something else to take its place, and life as I know it will remain just as busy as before. However, when I think about it, would I really want it any other way? Probably not. Still, times of rest are critical, and that is what I am hoping to enjoy soon.
Maybe then I’ll be able to write a bit more regularly on this blog.
]]>Today my eldest son Jonathan turned 5, and one of the first things I did after he woke up was to sit him on my lap and tell him the story of when he was born. It’s one of his favourite stories, and he loves to hear how that day transpired, from the time my wife went into labour at about 5am up until we held our precious newborn in our arms around 10pm. It was a long and wonderful day. One of the best days of my life.
Time is strange. In some ways that day feels like yesterday; I distinctly remember key details of those moments and can still feel the excitement and thrill of that experience. Yet at the same time, I almost can’t remember what life was like before Jonathan was born. It feels like a void. I know that I lived those 30 years before Jonathan entered the world, but they feel so fuzzy and unreal today. They feel like a shadow.
Yes, time is strange but so are our memories. The transience of past experiences is very real to me today as I think back on these last 5 years. Yes, we can try and capture these moments in photographs and movies and yet these are also simply shadows of the actual experience. I am freshly awakened to the importance of making the most of each moment, spending time with my wife and kids, and enjoying their company and cherishing our time together. Life doesn’t get much better than that.
]]>Currently one of my largest projects is maintaining and improving a very large and complex PHP / Zend Framework web application. Porting this app to Rails is off the table, but despite that limitation I want to make the process of working on it as comfortable as possible. To do so, I will attempt to use the tools I’m familiar with whenever I can. Enter Zend Framework deployment with Capistrano.
I’m sure they’re out there, but I’m not aware of any PHP or Zend Framework based deployment tools that even come close to the functionality of Capistrano. The great thing about Capistrano is that you’re not limited to using it with only Ruby and Rails based projects, but it actually works beautifully for any kind of project. In my case, it was an ideal deployment solution for this large PHP web application that I manage.
In this post I will detail exactly what you need to do to get your Zend Framework apps deploying with Capistrano. To play along you will need to have Ruby and Git installed on your system.
If you are not currently using a version control system with your code, then you’ll need to start doing so before ever contemplating an automated deployment solution. Capistrano requires a version control system, since that is how it knows which files to deploy to your server for specific builds. It also allows Capistrano to roll back builds when needed. With great tools like Git and Mercurial freely available, along with a deluge of resources to help you get up and running, it’s never been easier to get started with version control.
If you are not yet using Git, I would recommend a couple resources to help you get started: Git Immersion, Learn.GitHub “Introduction to Git”, and Git Ready.
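To give a sense of how little it takes to get started, here is a minimal sketch of putting a project under Git (the folder name and commit details are placeholders):

```shell
# Create a scratch project folder and put it under version control
mkdir myapp && cd myapp
echo "My ZF application" > README

git init
git config user.email "you@example.com"  # only needed if Git isn't configured yet
git config user.name "Your Name"

git add README
git commit -m "Initial commit"
git log --oneline  # shows the single commit just created
```

From there, the resources above will take you through branching, remotes and the rest of the workflow.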
If you don’t want to be typing in the password every time you deploy your application, you’ll need to set up Public-Key Authentication. I’ve covered how to do this in two previous posts: here and here.
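In short, it boils down to generating a key pair and getting the public key onto the server (the user and host below are placeholders for your own):

```shell
# Generate a key pair if you don't already have one (accept the defaults)
ssh-keygen -t rsa -b 4096

# Install the public key on the deployment server
ssh-copy-id deploy@dev.myapp.org

# This should now log in without prompting for a password
ssh deploy@dev.myapp.org
```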
Here comes the actual Capistrano setup. First, assuming you have Ruby on your system, start by installing both the capistrano and railsless-deploy gems:
$ gem install capistrano
$ gem install railsless-deploy
Now create a file called Capfile in the root directory of your ZF application. When working with Rails you can perform this task automatically using the “capify” command; however, since the ZF project structure differs from a Rails project, we’ll need to create the file manually. The contents of this file should be as follows:
require 'rubygems'
require 'railsless-deploy'
load 'application/configs/deploy.rb'
The next file you will need to create is deploy.rb (which is referred to in Capfile) inside the “application/configs” folder of your ZF project. This file is where the bulk of your Capistrano configuration will be specified. Here is a generalized copy of the file from one of my projects:
# Add colours to the Capistrano output
require 'capistrano_colors'
# What is the name of the local application?
set :application, "MyApp"
# Is sudo required to manipulate files on the remote server?
set :use_sudo, false
# How are the project files being transferred to the remote server?
set :deploy_via, :copy
# Maintain a local repository cache. Speeds up the copy process.
set :copy_cache, false
# Ignore any local files?
set :copy_exclude, %w(.git)
######################################################
# Git
######################################################
# What version control solution does the project use?
set :scm, :git
# Where is the local repository?
set :repository, "file:///Users/derekbarber/vhosts/myapp"
#############################################################
# Stages
#############################################################
set :stages, %w(production development)
set :stage_dir, "application/configs/deploy"
set :default_stage, "development" # if we only run "cap deploy" this is the stage that will be used
require 'capistrano/ext/multistage' # yes, first we set and then we require
#############################################################
# Tasks
#############################################################
# Remove older releases. By default, it will remove all older than the 5th.
after :deploy, 'deploy:cleanup'
Hopefully the comments in the above lines make the contents of the file fairly self-explanatory. I will, however, make a couple of points about my specific configuration, as each of the options takes multiple possible values and my values may not be the best for your specific situation. I do recommend that you get familiar with the various options available, and you can do so by taking a look at the Capistrano wiki, specifically the Significant Configuration Variables article.
In my example, you will notice that for the “:copy_cache” option I chose “false”, which is not the recommended choice; I simply was unable to get copy_cache working properly with my setup. I would recommend setting this to “true”, since it will greatly speed up your deployments, but if you run into any problems you can do what I did and turn the option off to see if it helps.
A key aspect of my configuration is that I have specified multiple “:stages” for deployment. This allows me to deploy to both a development site and a production site. I use different branches in my Git repository to track the development and production versions of the application. This type of configuration requires an additional file for each stage, stored in the “application/configs/deploy” folder. You can read more about this in the Capistrano wiki article Multistage Extension. To help further clarify, here is an example of the configuration file for my development stage:
#############################################################
# Servers
#############################################################
# What is the user that will connect to your Development server?
set :user, "development"
# What is the domain name of your development server?
role :web, "dev.myapp.org"
# What is the directory path used to store your project on the remote server?
set :deploy_to, "/home/vhosts/dev.myapp.org/htdocs"
# What is the branch in your Git repository that will be deployed to the development server?
set :branch, 'unstable'
# Specify the development configuration file
task :create_symlinks, :roles => :web do
run "rm #{current_release}/application/configs/application.ini"
run "ln -s #{current_release}/application/configs/production/development.ini #{current_release}/application/configs/application.ini"
end
# After the deployment, we call the created task (create_symlinks):
after "deploy:finalize_update", :create_symlinks
With most of the settings in the above file you can simply replace them directly with the settings for your specific application and server configuration. However, please pay close attention to the task block as it is a critical piece of the puzzle. What is going on here is that I am ensuring that the correct configuration file is in place for the correct stage. In my case, each stage requires some slightly different configuration options, such as which database server to connect to and what the base URL for the application happens to be. What I do is store separate configuration files for each stage and then symlink it into the proper location based upon the stage that is being deployed.
Once you have created your own configuration files and customized the values for your specific situation, you can then give it a try. To prepare your project for deployment, first run this command which will create the necessary Capistrano folders on your remote server:
cap production deploy:setup
Assuming that task executes successfully, you are now ready for the first full deployment of your application. To proceed, enter the following command which is only used the first time that you deploy a new application:
cap production deploy:cold
For all subsequent deployments, you can run the slightly simpler command:
cap production deploy
And of course, if you ever deploy a bad build that breaks your application, you can roll it back with a single command:
cap production deploy:rollback
In the above commands you will notice that I specified the production stage as the target; keep in mind that if you have a multi-stage deployment you will need to re-run the above commands for each stage. For example, when deploying to my development stage, the command will look like this:
cap development deploy
Additionally, if you only have a single stage application, you can simply run any capistrano command without specifying the stage. To get a full list of all the capistrano tasks that are available for your project, type in the following command:
cap -T
It might be a little scary the first time you deploy, but soon you should get used to the incredible ease with which you can deploy new builds of your web application. I can’t imagine ever going back to a manual deployment process, and I shudder slightly when thinking about how much time I’ve wasted in the past doing manual deployments.
Well, that was quite the whirlwind tour of using Capistrano for deploying a Zend Framework application. I hope it was helpful. Please don’t hesitate to leave a comment below or get in touch directly if you have any questions about this article.
]]>There were many times throughout the past year and a half that I really felt that what I was doing was basically insane. Here I was, a 33 (now 34) year old man with a wife, two kids, a great full-time job and over 12 years of experience as a professional software developer. What benefit could an undergrad degree ever serve at this point in my life?
The journey to return back to school began several years ago during a period of dissatisfaction with my career as a software developer. I found myself starting to ask the question: can I really see myself doing this for the rest of my life? Sure it had been a fantastic ride so far, and I really did enjoy writing code, but I had a hard time seeing myself still doing this in my 40s and beyond. It was this time of personal re-evaluation that I began to think about some other options for myself in the future.
In my early 20s I had completed close to three years of a bachelor’s degree, but then one summer I got a job working in downtown Vancouver as a software developer and had way too much fun to return to classes the following fall semester. For a time, finishing school was always at the back of my mind, but as the years went on and life progressed, I basically gave up on ever going back. I got better as a software developer, met my wife, got married and had kids, and all was well with the world. That is, until my early 30s, when I entered into this period of rethinking my future and desiring to change my course in life.
As events unfolded, it took me a while to actually make the jump to go back and there were several key incidents (which I won’t go into here) that were part of moving me to that point. But jump I did, and so in the summer of 2010 I applied to re-enter the Bachelor of Technology degree program at Kwantlen Polytechnic University. I was accepted and I could have started that fall but I held back at the last moment. Some difficult questions still loomed large in my mind: was this really the best path forward for me? I knew that starting this would entail a difficult season of working full-time then going to classes at night and studying on any free evenings and weekends. I basically had to give up my life for about 2 years and of course this would not only affect me but also my wife and kids. There was much to consider.
When I think of all the reasons to finish the degree versus all the reasons not to, what ultimately tipped the scales in the direction of going back was a deep understanding of the value of finishing what you begin. If there were no other reason and no other benefit to be gained from this experience, finishing what I began those 14+ years ago would be cause enough to do it. And so, that fall I registered in 2 classes for the January 2011 semester, with fear and trepidation. To say that the last year and a half has been a completely exhausting blur would be an understatement, but somehow I’ve made it through some very difficult and trying months and have emerged the better for it.
At this point where I now find myself, I have completed the majority of the degree requirements. After this current summer semester I only have 2 final courses to complete in the fall and then I will be done in time for Christmas. From this vantage point I can say with confidence that I am very thankful for the journey but also very thankful it will be coming to an end.
Not only is the benefit of finishing what you began becoming very real and tangible for me, but there have been a number of other benefits throughout this experience. For one, it has reinvigorated my love for software development and technology in general, and I think this has percolated back into my job as I have sought to hone my craft and do ever better work with ever better programming practices. Additionally, it has helped me to appreciate the value in being stretched and pushed outside my own comfort zone. I admit that prior to this, my life was pretty comfortable and safe. This has shaken me up, and I think I will now be more ready to take other risks in the future.
That said, I think possibly the most important benefit to be gained from finishing what you begin, is the practice of actually finishing. It’s so easy to start new things but to actually finish something significant, that takes real hard work and dedication. I hope to take this lesson with me throughout the rest of my life.
]]>It is important to note that Test-driven development (TDD) is not solely a testing technique, but rather part of a holistic design, development and testing process. The basic idea of TDD is that instead of writing your code first and then writing tests after to ensure the code works, you write your tests first and then write the code that will get all the tests to pass. This is known as a Test-First approach.
There are two generally accepted views on how and why you should practice TDD in your software development. The first view sees TDD as a technique for specifying your requirements and design before writing the actual program code. The second view takes a more pragmatic approach and sees TDD as a technique that helps programmers write better code. Regardless of the view one takes, what TDD practitioners all agree on is that TDD will not only improve your code, but it will also improve the overall design and implementation of your software system.
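To make the test-first idea concrete, here is a tiny example of the cycle in plain Ruby (no test framework, just assertions; the `leap_year?` method is purely illustrative). The test is written first and fails, and then just enough code is written to make it pass:

```ruby
# Step 1 (red): write the test first. Running it before leap_year? exists
# fails immediately, which is the point -- the test drives the code.
def test_leap_year
  raise "2000 should be a leap year"     unless leap_year?(2000)
  raise "1900 should not be a leap year" if leap_year?(1900)
  raise "2012 should be a leap year"     unless leap_year?(2012)
  raise "2013 should not be a leap year" if leap_year?(2013)
  puts "all assertions passed"
end

# Step 2 (green): write just enough code to make the test pass
def leap_year?(year)
  (year % 4).zero? && (!(year % 100).zero? || (year % 400).zero?)
end

# Step 3: run the test, see it pass, then refactor with confidence
test_leap_year
```

An xUnit-style framework such as RSpec or minitest automates exactly this loop, but the red-green rhythm is the same.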
Generally, the modern “rediscovery” of TDD is attributed to Kent Beck, who is also known as the creator of Extreme Programming. It was through both the Extreme Programming and Agile software development movements that TDD came to be widely accepted in the software development community.
It is said that Kent Beck “rediscovered” TDD since a prototype of TDD dates back to the early days of computing in the 1960s. During the mainframe era when program code would be entered onto punch cards, programmers had limited time with the machine and thus would need to maximize the time they had. One documented practice was to write the expected output of whatever operation you were doing before entering the punch cards into the computer. Then when the mainframe would output the results of your program, you could immediately see whether the results you got were correct by comparing the actual output with the expected output that had been documented earlier.
The big difference between modern TDD and those early days is that it used to be a completely manual testing process, whereas the re-birth of TDD in recent years was facilitated due to automated testing. Today TDD refers solely to automated test-driven development.
Modern TDD was first practiced in the Smalltalk community and they used the Smalltalk SUnit suite for their automated testing. However, the Smalltalk community was always quite a small group and while influential, it took years for much of their brilliance to reach the wider industry. Additionally, since Smalltalk never really took off, that also hampered the initial spread of TDD.
It was in the Java community that TDD really started to take off, thanks to the JUnit tool. JUnit was a port of SUnit written by Kent Beck and a couple of others, and it brought automated testing to Java. At this time, many Java developers who were practicing either Agile or Extreme Programming methodologies began to embrace TDD thanks to JUnit. Since then, a class of tools known as xUnit has been created for almost every programming language, from PHP, Ruby and Python to JavaScript. Today, regardless of the programming language you are using, you should be able to find the tools and resources to implement TDD as part of your software development process.
The Agile Manifesto is considered to be a very important milestone in modern software development and it was from this document that the term Agile Software Development was born. Many cutting-edge programming techniques such as Extreme Programming, Scrum, pair programming and refactoring have come from the agile software community.
The Agile Manifesto is also an important milestone in the development of TDD, as several of the key principles in the manifesto are the cornerstones of TDD. Some of these ideas are captured in the following quotes from the article Principles behind the Agile Manifesto:
It is these and other principles of the agile software movement that are truly realized through test-driven development. TDD helps software developers to manage and even welcome changing requirements, allowing them to quickly adapt their code as needed. TDD helps to deliver working software frequently as developers write their tests and then quickly write code that gets the tests to pass. TDD helps with a continuous attention to technical excellence and good design, as it forces developers to carefully plan and think through their design before writing any code. Finally, TDD helps to deliver working software, which is really the primary measure of progress on a software development project. TDD produces software that works.
It is helpful to now take a careful look at exactly how the test-driven process works. Exactly how does a developer begin to embrace and implement a TDD process in their workflow?
A TDD practitioner begins by extracting a specific function from the system requirements. Once a specific function has been determined, the developer then writes an automated test that will test that specific function. The developer writes the test and then will write the minimal amount of code required in order for the test to simply run. Of course since the actual functionality has not been developed, the test will initially fail. This initial failure of the test is a key aspect of TDD and it is how each test begins.
Once a failing automated test is completed, the developer can now write the code to implement the function. As the developer works on the code they can execute the automated test at any time to see if their code is working. This gives the developer immediate feedback and is part of the reason why TDD improves programmer productivity. The developer continues to work on their code until their automated test is passing. Once the test is passing the programmer can know with confidence that his code works as it should.
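The red-green cycle described above can be sketched with Python’s built-in unittest module. The `slugify` function and its expected behaviour here are hypothetical examples invented for illustration; they do not come from the article.

```python
import unittest

# Step 1: write the test first. At this point slugify() does not exist,
# so running the suite fails -- that initial failure is the "red"
# starting point of every TDD cycle.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Test Driven  "), "test-driven")

# Step 2: write just enough code to make the tests pass ("green").
def slugify(title):
    return "-".join(title.lower().split())
```

Running the suite with `python -m unittest` after each change gives the immediate feedback the article describes: red before the implementation exists, green once it does.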
Refactoring is covered in detail below; however, it is mentioned here because once a test is passing, the code is ready to be refactored as needed. Now, this is not always a required step at this point, but some programmers will try to quickly get a test working and may in the process write some sloppy code. If that is the case, once the test is passing, the developer should go back and refactor their code so that it is implemented properly with good design. Throughout the refactoring process, the developer can re-run the automated tests to make sure that they have not broken the required functionality.
Once the programmer has a test that is passing and they have clean and refactored code, they can then move on to the next function and proceed in the same manner. As the programmer progresses through all the required functions of the application, they will ultimately develop the actual working application. A real benefit of TDD comes with this, as they will simultaneously be completing a full test suite for the application with complete test coverage. It is very rare that a programmer will go back to their code and write tests for everything after having a fully implemented application. Complete test suite code coverage is simply a natural by-product of the TDD process and it is one that should not be underestimated.
As mentioned above, refactoring is a part of the TDD process and is in fact a key advantage to TDD. This is because refactoring requires code that has complete test coverage and so when a programmer is following TDD, all their code will always meet this requirement.
Code refactoring is essentially a restructuring of an existing body of code in order to improve the underlying structure and design of the code. When you refactor, you don’t change or add any functionality, rather you want the code to produce the exact same behaviour as it did before refactoring. This is why complete test coverage is required before refactoring, since the tests will ensure that the functionality does not change and still works exactly as required.
A developer can refactor a method or function and throughout the process, they can re-run the automated tests to ensure that the behaviour of the code is still working as expected and that the tests are all passing. This gives a developer great confidence to go in and refactor code that they might otherwise avoid since without the tests, you never will know if the changes you are making have broken anything.
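A minimal sketch of this safety net, using a hypothetical `total_price` function invented for illustration: the first version is written quickly just to get a test green, the second is the refactored form, and the same assertions pass against both because the behaviour is unchanged.

```python
# First pass: written quickly just to get the test passing.
def total_price_v1(items):
    total = 0
    for item in items:
        price = item["price"]
        qty = item["qty"]
        subtotal = price * qty
        total = total + subtotal
    return total

# Refactored: same behaviour, clearer structure.
def total_price(items):
    return sum(item["price"] * item["qty"] for item in items)

# The test suite is the safety net: identical assertions hold
# before and after the refactoring.
cart = [{"price": 2.50, "qty": 4}, {"price": 1.00, "qty": 3}]
assert total_price_v1(cart) == total_price(cart) == 13.0
```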
The following flowcharts show the primary differences between traditional and test-driven development processes. Both of these charts are largely based upon diagrams that are found in the Impact of Using Test-Driven Development: A Case Study and Test driven development: empirical body of evidence papers, which are shown under Resources.
As the above diagrams show, the key difference between the two approaches is simply the test first nature of TDD. In the traditional process, you don’t write your tests until after completing the full implementation. Thus, testing is primarily a verification that the code works as intended; it doesn’t help the developer in their work. Since TDD starts with testing and incorporates testing as a part of the development process, the tests actually help the developer and will generally improve the quality and design of the code.
The foundation of test-driven development is automated testing; without it, TDD would not be possible. In order to understand TDD, it is critical to also have a good understanding of automated testing. There are three main types of automated testing that are generally performed, and these tests are written and executed by the developers themselves. There may also be other types of automated testing, but these three are the primary ones that developers will be working with.
With TDD, the primary focus of the developer will be on first writing Unit Tests and then writing Integration Tests. This is because both of these tests are built around the system requirements and ensure that each function works as intended both in isolation and then when integrated into the whole system. Performance testing is generally not included as a part of TDD but depending upon the application, it may also be included. Performance testing would be applicable when the performance of the system is itself a feature of the application.
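To make the unit/integration distinction concrete, here is a sketch using two hypothetical functions (`parse_amount` and `apply_discount`, invented for this example): the unit tests exercise each function in isolation, while the integration test checks that they work correctly together.

```python
import unittest

# Hypothetical units under test.
def parse_amount(text):
    """Parse a string like '19.99' into an integer number of cents."""
    return round(float(text) * 100)

def apply_discount(cents, percent):
    """Apply a percentage discount to an amount in cents."""
    return cents - round(cents * percent / 100)

class UnitTests(unittest.TestCase):
    # A unit test exercises one function in isolation.
    def test_parse_amount(self):
        self.assertEqual(parse_amount("19.99"), 1999)

    def test_apply_discount(self):
        self.assertEqual(apply_discount(1000, 25), 750)

class IntegrationTests(unittest.TestCase):
    # An integration test checks that the units behave together.
    def test_parse_then_discount(self):
        self.assertEqual(apply_discount(parse_amount("19.99"), 10), 1799)
```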
The benefits of using TDD are wide reaching and very compelling. In the paper Test driven development: empirical body of evidence, the following is stated, “TDD may help to improve software quality significantly, in terms of decreased fault rates, when employed in an industrial context.” As this paper gathers and summarizes 13 empirical studies from the industry, this is quite a compelling conclusion that is backed by hard facts.
In an article entitled Does Test-Driven Development Really Improve Software Design Quality?, the authors document how TDD contributes to better code. The article shows that code produced through TDD is generally better organized into smaller, well-designed and better-tested units. Such code is not only better at its initial release, but also easier and less costly to maintain and adapt in the future.
Overall, you can find many articles and reports that list the many purported benefits of using a TDD process in your software development. Some of these benefits are as follows:
Overall these are quite substantial benefits that should at the very least pique the interest of developers. Embracing a TDD process does require quite a radical shift in the thinking of developers but these benefits show that the effort required to do so is likely worth it.
In the case study entitled Impact of Using Test-Driven Development: A Case Study, we have the results of an experimental study that was done using undergraduate students at the University of Southern Mississippi. The study involved two groups of 9 students each, and the whole study ran for 3 months.
In this study, two groups of students were created: one group developed software using a traditional approach, writing unit tests after developing the application; the other group took a TDD approach, writing their tests first and then implementing the functionality of the application. Both groups developed the exact same software and both used an incremental and iterative approach. So, this study sought to determine whether the practice of TDD in a real software project resulted in better code and more productive programmers.
An additional SQA team was also utilized in the study and they were assigned the role of measuring the results of the study. This SQA team did subsequent additional unit testing, integration testing and acceptance testing of the software from each of the two development groups. Their work was done to measure the quality of the code produced by the two groups.
Some specific metrics were calculated for each of these two groups, which were used to help determine which development process was superior. These included the following:
The following table of results can be found in the case study:
| Metric | TDD Approach | Traditional Approach |
| --- | --- | --- |
| Number of test cases written | 629 | 211 |
| Number of faults detected by the SQA group during unit testing | 74 | 109 |
| Number of faults detected by the SQA group during integration testing | 13 | 15 |
| Number of faults detected by the SQA group during acceptance testing | 14 | 31 |
| Total person-hours spent | 928 | 1245 |
The results of this case study are indeed compelling. While this is only one small case study, the results are consistent with other academic and industry studies. Clearly the TDD approach resulted in code that was much better tested, had fewer faults and ultimately took less time to write.
In conclusion, I hope that this article has shown the very real benefits that can be gained from a test-driven development process. The industry results and case studies are quite conclusive; TDD has a very real and measurable positive benefit for software development projects.
I’m learning as I go here and so I would love to hear any feedback. Please get in touch if you have any questions or to let me know if I’ve gotten any details wrong.
I set everything up as expected, copying my public key into the ~/.ssh/authorized_keys file on the remote server. However, it just didn’t work. Whenever I tried to log into the server, instead of immediately logging me in, it still gave me the password prompt. In order to dig a little deeper and find out exactly what was going on, I ran the ssh command in verbose mode (adding -v). This showed me that my public key was offered but then rejected:
$ ssh -v derek@xyz.com
.....
debug1: Authentications that can continue: publickey,password,keyboard-interactive
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/derek/.ssh/id_rsa
debug1: Next authentication method: keyboard-interactive
debug1: Authentications that can continue: publickey,password,keyboard-interactive
So, that confirmed my suspicion that the server was rejecting my public key. In order to find out why, I needed to get the SSH server to give me additional details. To do so, I turned on AUTH logging in the /etc/ssh/sshd_config file on the server. The relevant portion of my sshd_config file looked like this:
SyslogFacility AUTH
SyslogFacility AUTHPRIV
LogLevel INFO
After restarting sshd (/etc/init.d/sshd restart), I tried again to authenticate using my public key and, as expected, the key was rejected. However, thanks to turning on AUTH logging, the details of my login attempt were captured in the server logs (/var/log/messages to be exact):
Authentication refused: bad ownership or modes for directory /home/derek
It was from the above log message that I was able to figure out the problem: it was simply one of permissions. SSH is very particular about the permissions on your home folder, the .ssh folder and the authorized_keys file. The solution thus involved adjusting permissions to make sure they conformed to what SSH expects. SSH wants only your user account to have write access to your home directory (the group can have read access), and only your user account may have access to your .ssh folder and your authorized_keys file (the group cannot even have read access).
I was then able to fix the permissions by using the following commands:
chmod go-w ~/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
After doing that I tried again and voilà - it worked!
In addition to being more convenient, public key authentication is also more secure, since your password is never transmitted over the network. It’s also relatively quick and easy to set up, and I hope the following information will be helpful in that regard.
With SSH public key authentication, instead of using your password to authenticate, it will instead use your public key. Of course, the public key is only half of the solution; the other half is your private key. These two keys work together, so that a message encrypted using your public key can only be decrypted with your private key. It is important to remember that you can freely distribute your public key, but you must never give your private key to anyone.
Public key authentication with SSH works by having your public key reside on the server, inside your account’s home directory. When you connect to the server using SSH, the server will encrypt a message using your public key and send it to you. SSH will then use your locally stored private key to decrypt that message and thus prove to the server that you are the bearer of the private key. Once your identity is authenticated through this exchange, you will be given access to your account on the server.
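The public/private relationship behind this exchange can be illustrated with a toy RSA example in pure Python. This is only a sketch of the underlying mathematics: the prime numbers here are tiny, the key pair is hand-picked for the example, and real SSH uses far larger keys and a more involved protocol.

```python
# Toy RSA key pair with tiny primes -- illustration only.
p, q = 61, 53
n = p * q              # 3233, the modulus, part of both keys
e = 17                 # public exponent
d = 2753               # private exponent (17 * 2753 % 3120 == 1)

challenge = 42                      # the "server" picks a challenge
encrypted = pow(challenge, e, n)    # encrypted with the PUBLIC key
decrypted = pow(encrypted, d, n)    # decrypted with the PRIVATE key

# Recovering the challenge proves possession of the private key.
assert decrypted == challenge
```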
The following commands should work on any Unix-based system such as Mac OS X or Linux. If you are on Windows, please check out PuTTY for information on using SSH on Windows.
If you don’t yet have a key pair, you’ll need to generate that first. The following command will generate a new RSA key pair:
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/derek/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/derek/.ssh/id_rsa.
Your public key has been saved in /home/derek/.ssh/id_rsa.pub
Unless you have a good reason not to, you should accept the default location for the keys. You will also be asked for a passphrase; you can leave this blank for convenience, but it’s more secure to have one. In the above example my private key is stored in the id_rsa file, and my public key is stored in the id_rsa.pub file. If you’re curious, feel free to open those files in a text editor and take a look at their contents.
If you do enter a passphrase you will want to use a tool called “ssh-agent” to cache your private key so that you won’t need to type in the passphrase every time. Use the following command to do so:
$ ssh-add ~/.ssh/id_rsa
Need passphrase for /home/derek/.ssh/id_rsa
Enter passphrase:
Once you have your key-pair, you’ll need to use scp to securely copy your public key up to the remote server:
$ scp ~/.ssh/id_rsa.pub username@remote:publickey.txt
Then you should log into the server so that you can move your public key into the correct location. As you will note in the following commands, it is also essential to have the permissions set correctly on both the .ssh directory and your public key, so that only your user account has access to those items.
$ ssh username@remote
...
$ mkdir ~/.ssh
$ chmod 700 .ssh
$ cat publickey.txt >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ logout
Once that is all done you should be all set to log in to your remote server. Simply type in the ssh command as you normally would and within moments you should be securely logged into your remote account.
I hope this post has been helpful, please get in touch if you have any questions. I am also planning a follow-up which will cover some SSH troubleshooting and should hopefully give solutions to some common issues you might experience.