When Firefox on my laptop was updated to version 19 a few days ago, my Cucumber tests using Selenium and Firefox suddenly stopped working with this error:

unable to obtain stable firefox connection in 60 seconds (127.0.0.1:7055) (Selenium::WebDriver::Error::WebDriverError)

When I investigated, I found that Capybara / Selenium didn’t support this version yet.

The first thing I did was to downgrade Firefox to a previous version… but I don’t like this idea. In my opinion, the test suite should never depend on the same browser the user browses with: it feels fragile and may break again any time Firefox is upgraded.

So, the solution is relatively simple:

1) Download a previous Firefox version (I chose version 17, 64-bit) here:
http://releases.mozilla.org/pub/mozilla.org/firefox/releases/17.0/linux-x86_64/en-GB/

2) Extract the archive; you should get a “firefox” folder containing the program and its binaries

3) Move the entire folder to your desired location.
For example, on Ubuntu:

sudo mv firefox /opt/firefox17

4) Now, in your env.rb file, add this:

file: features/support/env.rb
Capybara.register_driver :selenium do |app|
  require 'selenium/webdriver'
  Selenium::WebDriver::Firefox::Binary.path = "/opt/firefox17/firefox"
  Capybara::Selenium::Driver.new(app, :browser => :firefox)
end

I made a slightly improved version, so I can define the path to the binary with an environment variable. If the variable is not present, it simply falls back to the system’s default Firefox:

file: features/support/env.rb
Capybara.register_driver :selenium do |app|
  require 'selenium/webdriver'
  Selenium::WebDriver::Firefox::Binary.path = ENV['FIREFOX_BINARY_PATH'] || Selenium::WebDriver::Firefox::Binary.path
  Capybara::Selenium::Driver.new(app, :browser => :firefox)
end

But don’t forget to add this line to your .bashrc file for the above to work as expected:

file:.bashrc
export FIREFOX_BINARY_PATH="/opt/firefox17/firefox"

Thanks for reading.
Happy testing!

Typical scenario

Consider this HTML fragment:

<section id="recipes">
  <h1>The tasty recipes of uncle Alfred <button class="close">Close</button></h1>
  <div id="toad-recipe" class="recipe">
    <h2>Toad lasagna <button class="add">Add to favorites</button></h2>
    <p>First, you have to catch a big, juicy toad.</p>
  </div>
</section>

And this css:

section#recipes button.close{
  /* some funky styles */
}

And javascript (jQuery):

$("#recipes > h1 > button.close").click(function(){
  /* some funky behavior */
});

And this example step definition in your tests:

find("div#toad-recipe > h2 > button.add").click
# some funky testing

When it goes bad

Now, suppose that you want to change the design of your recipes page: you’ll very likely break some javascript functionality or your tests. Can you spot the problems in the example above?

Using the same attribute(s) for different concerns (here, using the class for styles, javascript and testing) makes your code rigid: you’ll lose time and energy each time you want to change one of them.

In addition to that, all of these concerns depend way too much on the HTML structure. Change the h2 to an h3, and you’ll have to fix everything everywhere.

The solution: a clear separation of concerns

Here are my guidelines for manageable html, css and javascript in a well-tested project. I’m still experimenting with this and I’m open to any suggestion or criticism.

  • For css: use class attributes only
  • For javascript: use IDs for unique elements or data-attributes for generic elements
  • For testing: use data-attributes only, except when you’re targeting a specific object from the domain, in which case you’re allowed to use the ID.

The refactored HTML fragment:

<section id="recipes" class="recipes" data-purpose="recipes-list">
  <h1>
    The tasty recipes of uncle Alfred
    <button class="close" data-purpose="close-button">Close</button>
  </h1>
  <div id="toad-recipe" class="recipe" data-purpose="entry">
    <h2>
      Toad lasagna
      <button class="add" data-purpose="add-button">Add to favorites</button>
    </h2>
    <p>First, you have to catch a big, juicy toad.</p>
  </div>
</section>

And css:

.recipe .add {
  /* some funky styles */
}

And javascript:

$("[data-purpose='add-button']").click(function(){
  // some funky javascript
});

And step definition:

within("[data-purpose='recipes-list']") do
  find("[data-purpose='add-button']").click
end
# some funky testing
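To keep step definitions readable, a tiny helper can wrap the data-purpose convention. This is my own sketch (the module and method names are hypothetical, not a Capybara API):

```ruby
# Hypothetical helper: builds the CSS selector for a data-purpose value,
# so steps read find(purpose('add-button')).click instead of raw strings.
module PurposeHelpers
  def purpose(name)
    "[data-purpose='#{name}']"
  end
end
```

Mix it into your Cucumber World (`World(PurposeHelpers)`) or your RSpec configuration, and every step definition can use it.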

Benefit

With this approach, the different concerns (js, css, testing) are decoupled from each other.

By reserving css classes for styling only, ids for javascript and data-attributes for javascript & testing, you can completely redesign/refactor/rework one of these concerns without affecting the others.

On a large project, it’s a life saver.

Some more tips

For css:

  • You should really never use ids to style elements; find a way to tag the element with a meaningful class instead.

For javascript:

  • Instead of writing js code for specific cases, write generic code driven by data-attributes, like Twitter Bootstrap does.
  • Use ids only to target a unique element corresponding to a real object in the domain logic, e.g. #post51.

For tests:

  • Use data attributes instead of classes to describe the purpose of elements
  • With links: use the rel attribute to describe their purpose, as described by Steve Klabnik on his blog:
  <a href="/articles/1/edit" rel="edit-article">Edit this article</a>
  When /^I choose to edit the article$/ do
    find(:xpath, "//a[@rel='edit-article']").click
  end

I love writing feature specs with RSpec and Capybara. In fact, I prefer this way over using Cucumber. The process is cleaner and faster for me, and it works well on the project I’m currently working on.

Most of the features of the Rails app I’m working on require authentication, which normally implies that, for each feature spec, I have to make Capybara walk through all the authentication steps.

But that’s too much repetition in my eyes: it slows down the test suite while authentication is already covered by its own integration tests.

So, is there a faster, lighter way to make this authentication happen?

Upon investigation, I discovered that Devise doesn’t provide authentication helpers for integration/feature specs, which is justified as it contradicts the idea of full-stack testing.

But Warden, on which Devise is based, provides this functionality.

Here we go with an example

Beware: when you use this technique, you don’t write full integration tests anymore; you’re taking a shortcut. A useful one, IMHO, as long as you know what you’re doing.

This means you already should have authentication well tested, with its complete set of integration tests.

A barebones example

file:spec/features/admin_users_datatable.rb
require 'spec_helper'
include Warden::Test::Helpers             ## including some warden magic
Warden.test_mode!                         ## telling warden we are testing stuff

feature "(...)" do
  context "(...)" do

    before(:each) do
      admin = FactoryGirl.create(:admin)
      login_as(admin, :scope => :user)  ## our instant magic authentication
    end

    scenario "(...)", js: true do
      visit admin_users_path
      # (...)
    end
  end
end

A working example, with some more flesh around bones

file:spec/features/admin_users_datatable.rb
require 'spec_helper'
include Warden::Test::Helpers
Warden.test_mode!

feature "admin searching for a specific user" do
  context "when logged in as admin" do

    before(:each) do
      admin = FactoryGirl.create(:admin)
      login_as(admin, :scope => :user)

      user1 = FactoryGirl.create(:user, email: 'foo@foo.com')
      user2 = FactoryGirl.create(:user, email: 'bar@bar.com')
    end

    scenario "admin searches for a specific user", js: true do
      visit manage_users_path
      page.body.should     have_content 'foo@foo.com'
      page.body.should     have_content 'bar@bar.com'
      fill_in 'Search', with: 'foo'
      page.body.should     have_content 'foo@foo.com'
      page.body.should_not have_content 'bar@bar.com'
    end
  end
end
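One caveat: Warden’s test mode keeps its login state around, so it’s worth resetting it between examples. `Warden.test_reset!` is part of Warden’s test helpers; wiring it into an RSpec hook as below is simply my own habit, not something from Warden’s docs:

```ruby
# file: spec/spec_helper.rb (excerpt)
RSpec.configure do |config|
  config.after(:each) do
    Warden.test_reset!   # forget any login_as state between examples
  end
end
```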


I’m currently investigating ways of organising and separating code in an already large and complex Rails application.

Besides refactoring into modules, engines or gems, there is also a surprisingly useful and lightweight way.

It’s not for all situations of course, but it may be enough in some cases, or serve as a good start for larger refactorings.

Here we go with an example

Before

file:config/routes.rb
resources :plans
file:app/controllers/plans_controller.rb
class PlansController < ApplicationController
  def index
    # (...)
  end
end
file:app/views/plans/index.html.haml
Hello from view!

After

Putting your route declaration in a scope block allows you to move your controller and view files into subfolders without breaking anything in your application: your existing named routes remain intact (only the URLs gain a 'planning' prefix).

file:config/routes.rb
scope 'planning', module: 'planning' do
  resources :plans
end
file:app/controllers/planning/plans_controller.rb
# moved to subfolder 'planning'
module Planning
  class PlansController < ApplicationController
    def index
      # (...)
    end
  end
end
file:app/views/planning/plans/index.html.haml
Hello from view! (moved to subfolder 'planning')
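To make the “named routes remain intact” claim concrete, here is what the scoped declaration produces according to Rails’ documented `scope` semantics (these helper values follow from the routing rules, not output copied from this app):

```ruby
# Helpers generated by: scope 'planning', module: 'planning' do resources :plans end
plans_path          # => "/planning/plans"
new_plan_path       # => "/planning/plans/new"
edit_plan_path(1)   # => "/planning/plans/1/edit"
# By contrast, `namespace :planning` would also rename the helpers
# (planning_plans_path, new_planning_plan_path, ...).
```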

When it comes to the usage of my computer, I have a precise set of requirements, combining the needs of web/ruby programming with graphic/web design and digital painting.

I’ve used Gnome 2, then fought Unity (and lost the fight), and experimented with Gnome 3, but finally moved to KDE after reading this article by David Revoy.

KDE proved to be the only desktop environment I’ve tried so far that was able to meet all my requirements out of the box, or with relatively simple customization. Its flexibility and no-nonsense approach make it the friendliest and most productive environment I’ve used so far. I have the feeling that KDE’s evolution happened without forgetting the priorities, and while I’m not convinced by some of its newest features (Activities, Nepomuk, …), they are easy to disable or ignore, and don’t interfere with my workflow at all.

A convenient way to manage images for a blog is to host them on Flickr.

Using the Flickr API, it’s possible to do lots of useful things: use Flickr to serve thumbnails, fetch the metadata associated with the image, organise your galleries by tags then show them on your blog, …

Here is a very simple example.
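A minimal sketch of such a call, assuming you only want to search photos by tag: it builds the request URL for Flickr’s documented `flickr.photos.search` REST method (the API key and tag below are placeholders, not values from this blog):

```ruby
require 'uri'

# Endpoint and parameter names come from Flickr's public REST API.
FLICKR_ENDPOINT = "https://api.flickr.com/services/rest/"

def flickr_search_url(api_key, tags)
  params = {
    method:         "flickr.photos.search",
    api_key:        api_key,   # your key from Flickr's app garden
    tags:           tags,      # e.g. the tag of a blog gallery
    format:         "json",
    nojsoncallback: 1
  }
  "#{FLICKR_ENDPOINT}?#{URI.encode_www_form(params)}"
end

url = flickr_search_url("YOUR_API_KEY", "blog-gallery")
# Fetch it with Net::HTTP.get(URI(url)) and parse the JSON to get
# photo ids, from which thumbnail URLs can be assembled.
```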

I have a big hard disk with all my digital documents.

Backing up is essential, but keeping the backups in the same physical place as the original documents only protects you against a hard disk failure.

What if there is a fire and all your data is reduced to ashes and melted plastic?

Better store these backups in a remote place because, you know, lots of copies keep stuff safe (aka the LOCKSS principle).

Warning: the multihand tool in Krita is highly addictive!
(If you haven’t tried Krita yet, don’t wait any longer, you’ll love it.)

Krita developers and contributors, I tip my hat to you!

Here are some of my creations using this tool
( Krita + Linux Mint + Wacom tablet ).

This snippet progressively shrinks the navbar and fades out its brand element as the user scrolls down the page. Source: Stack Overflow

  var fadeStart = 0;    // at or below this scroll offset => 1 opacity
  var fadeUntil = 200;  // 200px scroll or more => 0 opacity
  var fading    = $('.navbar');

  $(window).bind('scroll', function(){
    var offset  = $(document).scrollTop();
    var opacity = 0;
    if( offset<=fadeStart ){
      opacity=1;
    }else if( offset<=fadeUntil ){
      opacity=1-offset/fadeUntil;
    }
    fading.height(80 * opacity + 40)
          .css('overflow','hidden')
          .find('.brand')
          .css('opacity',opacity);
  });
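One note on this snippet: scroll events fire very frequently, so anything heavier than the above benefits from throttling. A minimal throttle sketch in plain JavaScript (my own helper, not a jQuery API):

```javascript
// Run fn at most once per `wait` milliseconds; extra calls are dropped.
function throttle(fn, wait) {
  var last = 0;
  return function () {
    var now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn.apply(this, arguments);
    }
  };
}

// Usage with the fading navbar: $(window).bind('scroll', throttle(update, 50));
var calls = 0;
var bump = throttle(function () { calls += 1; }, 1000);
bump(); bump(); bump();   // only the first call gets through within 1s
```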