A couple of weeks ago we had the pleasure of attending RailsCamp NZ 2013 at the beautiful Camp Kaitoke. We knew that it was RailsCamp tradition to have a project to work on over the course of the weekend, and although we have plenty of Rails-related projects we decided that we wanted to work on our own language. We’ve had the idea for a little language, much like CoffeeScript, sloshing around in the back of our brains for a while, and we thought it was about time we got it out. Thus Rubby was born.

Rubby consists of a transpiler that converts Rubby code into idiomatic Ruby, for example:

‘Example Rubby’ (example.rbb):
class Dog
  attr_accessor :breed, :name

  initialize -> (@breed, @name)

  bark ->
    <- if breed == 'Basenji'
    puts('Ruff! Ruff!')

  display ->
    puts "I am of #{breed} breed and my name is #{name}"

('Tinsley': 'Great Dane', 'Rufus': 'Basenji', 'Ayla': 'Malamute').each &> (name,breed)
  d = Dog.new(breed,name)
  d.bark
  d.display

Which transpiles into the following Ruby:

‘Example Rubby transpiled output’ (example.rb):
class Dog
  attr_accessor(:breed, :name)
  def initialize(breed, name)
    @breed = breed
    @name = name
  end
  def bark
    return if breed == 'Basenji'
    puts('Ruff! Ruff!')
  end
  def display
    puts("I am of #{breed} breed and my name is #{name}")
  end
end
({ 'Tinsley' => 'Great Dane', 'Rufus' => 'Basenji', 'Ayla' => 'Malamute' }).each do |name, breed|
  d = Dog.new(breed, name)
  d.bark
  d.display
end

And while the output obviously still needs a little tweaking (specifically, adding whitespace between methods and deciding when to put parens around method arguments), it’s mostly complete feature-wise.

We worked almost non-stop on Rubby at RailsCamp, but would probably have given it up if it wasn’t for the quiet enthusiasm of Bardoe and Brett. Over the weekend, Rubby went from a barely passable lexer and parser to having a basically functional transpiler and REPL. We were hugely proud to be able to stand up on Sunday night and demonstrate our achievement to the other campers.

How it works

Rubby is based around Chris Wailes’ RLTK library, albeit with a number of Rubby-specific patches. Rubby code is run through the lexer, which processes the input into a stream of tokens, each with an optional value (e.g. <- emits just a RETURN token, while 'foo' emits a STRING token with the value 'foo'). The only real magic in the lexer is in Rubby::Lexer::Environment#indent_token_for, where it measures the whitespace after a newline and emits the correct number of INDENT, OUTDENT or NEWLINE tokens.
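To make that indentation bookkeeping concrete, here is a simplified sketch of the technique (an illustration only, not the actual Rubby::Lexer::Environment code):

```ruby
# Track the open indentation widths on a stack; each new line is compared
# against the top of the stack to decide which tokens to emit. A real
# lexer would also raise on a dedent that matches no open level.
def indent_tokens_for(line, levels = [0])
  width = line[/\A */].size
  if width > levels.last
    levels.push(width)
    [:INDENT]
  elsif width < levels.last
    tokens = []
    while levels.last > width
      levels.pop
      tokens << :OUTDENT
    end
    tokens
  else
    [:NEWLINE]
  end
end
```

Feeding the Dog example through this line by line yields one INDENT entering the class body, another entering each method body, and matching OUTDENTs when the indentation falls back.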

The token stream is then passed into the parser, which is in essence a massive state machine: given a particular token it builds a list of possible next tokens, and if there are multiple possible actions it tries each one until it either succeeds in consuming the entire token stream or runs out of actions (a syntax error). The parser emits an abstract syntax tree, the classes for which are defined in lib/rubby/nodes.

Next, the transpiler walks through every node in the syntax tree calling #modify_ast on each, which allows nodes to make modifications to other nodes in the AST (for example, the InstanceArgument node modifies its parent method definition to contain instance variable assignments). This can’t be done at the same time as collapsing the AST into Ruby, because a node may need to modify an already collapsed peer to implement a language feature.
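A toy illustration of that two-pass design (the class names here are invented for the sketch, not Rubby’s actual node classes): during the first pass each node may rewrite its parent, and only afterwards is anything turned into Ruby source.

```ruby
class ToyNode
  attr_accessor :children, :statements

  def initialize(children = [])
    @children = children
    @statements = []
  end

  # First pass: give every node in the tree a chance to modify the AST.
  def modify_ast(parent = nil)
    children.each { |child| child.modify_ast(self) }
  end
end

# Stands in for Rubby's InstanceArgument: on the modify pass it injects
# an instance variable assignment into its parent method definition.
class ToyInstanceArgument < ToyNode
  def initialize(name)
    super()
    @name = name
  end

  def modify_ast(parent)
    parent.statements.unshift("@#{@name} = #{@name}")
    super
  end
end
```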

Once all this is done, the transpiler calls #to_ruby on the root node of the AST, which in turn calls #to_ruby on its children (if required) and returns a large nested array of Ruby statements, where an increase in nesting corresponds to an increase in indentation. This array is then passed into the RubyFormatter, which joins these arrays with the correct indentation and returns the final Ruby representation of the program.
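The nesting-equals-indentation idea can be sketched in a few lines (this is an illustration, not RubyFormatter’s actual interface):

```ruby
# Strings are statements; arrays add one level of indentation per level
# of nesting. Recursion bottoms out at the strings.
def format_ruby(node, depth = -1)
  return ('  ' * depth) + node if node.is_a?(String)
  node.map { |child| format_ruby(child, depth + 1) }.join("\n")
end
```

For example, `['def bark', ["puts('Ruff!')"], 'end']` formats to a three-line method with the body indented one level.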

Next steps

Rubby still needs a bunch of features before we can contemplate a 1.0 release; most pressingly, support for interpolated regular expressions and a convincing rescue/ensure syntax. We also want to submit a pull request for ActiveSupport::Dependencies to become language agnostic, something that the existing Polyglot hook gets us near to, but not all the way. If you’d like to help with that, or with Rubby in general (there are a bunch of Cucumber features tagged as @todo), we could really use you.

I hope people enjoy programming in Rubby as much as we enjoyed writing it and I’m really keen for any feedback whatsoever.

Back at the end of January, when we first agreed to build oshpark.com for Laen, we had no idea that we would spend the middle week of May cramming to squash the last of the user-facing bugs before a big launch at Maker Faire.

So what is OSH Park?

OSH Park is your go-to site when it comes to fabrication of prototype circuit boards. Over the last couple of years Laen has built a reputation for quality and reliability, and a large customer base spanning every continent on earth. What started as a group PCB order for his local hackerspace, DorkbotPDX, grew and grew until handling orders and sending panels to the fabricator every other day became an almost full-time job.

As the creators of axe.io, which was prototyped using Laen’s beautiful purple boards, we were familiar with the problem space as well as his workflow, and having reviewed some of the competing online services in his space we knew that it would be relatively straightforward to build a website which radically raised the bar on customer experience.

How does it work?

Customers upload their design files in an industry standard format and the site immediately processes these into a high-quality rendering of what the finished board should look like, as well as each of the individual layers which make up the board (silk screen, solder mask, copper layers, etc). The user is able to review that the system has processed them correctly and then continue on to ordering.

[Screenshots: the initial upload screen, and a snippet of the approval screen]

The interactive front-end of the site is built using the Ember.js JavaScript framework, which allows us to quickly create the interactivity of the site without having to worry about the heavy lifting of keeping the UI in sync with data from the user and server.

The back-end of the site is built using Ruby on Rails, with a job queuing system based on EventMachine which allows us to asynchronously handle the processing of design files whilst still serving web requests from the same process. The web app is deployed on Heroku, with Rackspace Cloud Files providing lightning-fast serving of the static assets (such as the JavaScript front-end) via the Akamai CDN.

We’re really proud of the work we were able to achieve with the launch of OSH Park, and if you’re an electronics maker we hope to see your boards in the site’s Flickr group in the near future.

So, thanks to Peter Cooper’s excellent JavaScript Weekly we read about Meteor, a pretty interesting new JavaScript framework which seriously blurs the lines between server and client.

Meteor is built on top of node.js with some interesting ideas:

  • hot code push: the server seamlessly pushes code updates to all active browser clients when the developer changes anything in the project. This, in combination with continuous deployment, makes for some really interesting user experiences.
  • synchronisation and latency compensation: whenever a client or the server changes anything all the clients who care about that data are automatically updated. Included with this functionality is the idea of “latency compensation”; when a client changes state it updates the local application state (so that the user sees the change instantly), pushes the update to the server and the server either broadcasts this change to any interested clients or pushes back corrected state (if perhaps the data failed validation).
  • fully modular stack: every part of the stack can be added and removed by the developer, meaning that if you’re not using something that Meteor provides you can remove it, or add extra functionality should you need it. Meteor calls these “smart packages”.
This all sounds great (and it is!) but there are some concerns which instantly pop into our heads:
  • security: the provided examples and our reading of the documentation indicate that the browser client can execute arbitrary queries against the back-end database. This is somewhat worrisome.
  • separation of concerns: Meteor (at this stage anyway) looks like it will be great for smaller applications, but the idea of writing a large project with it seems a little daunting. At this stage the documentation for structuring your app is pretty light, although we imagine that as the project matures some conventions will evolve. Meteor currently makes no prescriptions about whether it prefers MVC, MVVM or any other pattern for structuring apps. One thing is for sure; we don’t want to go back to the spaghetti-style intermingling of concerns of the old days of PHP and RXML.

So, it’s early days for Meteor with their current release being versioned “PREVIEW 0.3.2” but given what they’ve achieved so far and the team they have behind them we’re expecting a lot of our concerns to be addressed before an eventual 1.0 release.  All we can say is that this is one to watch!


We threw together a quick video demonstrating how to get instant spec feedback on your code changes right in your editor - brilliant for those of us who test-drive our code.

Used in this video:

  • Vim 7.3 (actually, MacVim’s binary in a terminal).
  • conque 2.3 - allows running interactive console apps in a vim buffer.
  • cmdalias.vim - nicely manages vim command aliases so that they don’t unexpectedly expand.
  • guard - watches project files for changes and runs appropriate actions.
  • guard-rspec - Guard plugin for running specs based on regex matches.

I have customised my .vimrc somewhat so that ConqueTerm’s rather long commands are shortened as much as possible:

Last week we spent some time modifying Kisko Labs’ sproutcore-rails gem to use the latest build of Ember.js. emberjs-rails is a fairly simple wrapper that adds support for serving Handlebars templates from the asset pipeline using an hjs file extension. It also serves Ember’s HTML5 boilerplate template as a layout called ember. Here’s a quick run-through:

First, start with a fresh Rails 3.1 installation:

Next, generate an empty Rails application:

Add emberjs-rails to your application’s Gemfile and run bundler:

Generate a controller to serve your Ember.js application from:

Set up basic routing for your new controller: config/routes.rb
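Something like the following would do it (the application constant is a placeholder for whatever `rails new` generated; the controller and action names match the later steps):

```ruby
# config/routes.rb
YourApp::Application.routes.draw do
  root :to => 'hello#index'
end
```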

Modify your new controller to use the ember layout included in emberjs-rails, this is a slight modification of the html5 boilerplate file included with Ember.js’ starter kit: app/controllers/hello_controller.rb
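A minimal sketch of that change; the `ember` layout name comes from emberjs-rails as described above, everything else is standard Rails:

```ruby
# app/controllers/hello_controller.rb
class HelloController < ApplicationController
  layout 'ember'

  def index
  end
end
```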

Create a handlebars template to use within your Ember.js application: app/assets/javascripts/views/hello.js.hjs

Create a view for your hello_controller to include your new Handlebars view, which the asset pipeline will compile for you automatically. You can also include additional information for the HTML meta tags and page title if you want to: app/views/hello/index.html.erb

Lastly, create your Ember.js application and create a view from the template above. Rails 3.1 ships with CoffeeScript support by default, so we’ve created the app using CoffeeScript to show how it’s done:

Now if you start your server and browse to it (usually localhost:3000) you should see a large “Hello World!”, which is rendered using the template and the property on the view we created. You can also open up your browser’s JavaScript console, change the value of the text property on the hello_view object, and watch it dynamically change on screen.

We hope that gives you enough information to go forth and JavaScript. Enjoy!

There seems to be a lot of chit-chat in the Rails world about correctly validating email addresses. The main problem is the compromise between speed and correctness, observe:

The examples above illustrate the possible methods of email validation, ranging from pathetic to extreme. Given the average response speed of DNS, we think it’s legitimate to attempt a DNS lookup of the address’s domain name. This ensures that at least the user isn’t just entering ‘asdf@asdf.asdf’ to circumvent validation. Some applications will require a higher level of certainty, however, and the only way to get that is to connect to the remote MTA and start message delivery for the address. Note that you don’t have to complete delivery; just verify that the SMTP server accepts the RCPT TO command without throwing an error.
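The first two levels can be sketched with Ruby’s standard library (the method names and the deliberately loose regex are illustrative, not from any particular gem; the SMTP RCPT TO level is omitted, though Net::SMTP’s mailfrom/rcptto can perform it):

```ruby
require 'resolv'

# Level one: a cheap format sanity check.
EMAIL_FORMAT = /\A[^@\s]+@[^@\s]+\.[^@\s]+\z/

def plausible_format?(address)
  address.match?(EMAIL_FORMAT)
end

# Level two: does the domain actually have an MX (or at least an A) record?
def domain_resolves?(address)
  domain = address.split('@').last
  Resolv::DNS.open do |dns|
    dns.getresources(domain, Resolv::DNS::Resource::IN::MX).any? ||
      dns.getresources(domain, Resolv::DNS::Resource::IN::A).any?
  end
rescue Resolv::ResolvError, Resolv::ResolvTimeout
  false
end
```

Note that ‘asdf@asdf.asdf’ sails through the format check but fails the DNS lookup, which is exactly the gap described above.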

We hope you find this somehow useful.

You know what we’re talking about. Businesses all over the world send out emails, whether they’re some sort of notification, invoice or marketing communication, and they disallow direct replies, either by explicitly asking you not to reply or by using a dead-end email address such as “no-reply@idontvaluemycustomers.com”.

This totally disrespects your customers and devalues their time. We, as your customers, are supposed to go to your website, find out how to contact you, and then effectively communicate the context of our suggestion or problem, when just hitting the reply button in our email client would have provided all of this for us.

Here’s our suggestion:

"we value your opinion."

From now on we will be adding a “please reply” option to emails sent from our projects to ensure our customers know that we’re here to help them. See this gist for source.

Yeah, we know. The Arduino development environment is quite limiting - especially if you are using an AVR chip that is not supported or you don’t have room for the bootloader in your project.

Something you might not have realised is that the Arduino environment comes with all the development tools (avr-gcc, etc) needed to build and burn firmwares for a large number of AVR-based MCUs. Rather than waste your time building and installing a development environment, you can hijack Arduino’s for your purposes with a single line of shell code. We’ve added the following to our .bashrc:

Using this we are able to compile axe.io’s USB firmware without issue.

Kimono is using Carrierwave to handle image attachments and save them into GridFS - MongoDB’s built-in filesystem. GridFS is great because you get the benefits of MongoDB’s replication and sharding. Putting files into GridFS is pretty straightforward, and we won’t waste time getting into that here; Jeremy Weiland has written an excellent post on using Rails 3, Carrierwave and GridFS which does a great job of covering the specifics of getting the three pieces of software to talk nicely together. One thing that immediately jumped out at us was the fact that Jeremy’s Rails Metal controller reads the entire file contents into RAM before sending it back to the client:

Rather than go that route, we decided to monkey patch Mongo::GridIO to respond to the each method, which is what the Rack API requires of a response body:
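The shape of such a patch is roughly the following (we assume the driver exposes #read(bytes) and #chunk_size, as the mongo gem of that era did; check your driver version). A fake GridIO stands in for the real class so the sketch is self-contained:

```ruby
require 'stringio'

# Read and yield one chunk at a time until the underlying read returns nil,
# so only a single chunk is ever held in memory.
module GridChunkedEach
  def each
    while (chunk = read(chunk_size))
      yield chunk
    end
  end
end
# In the app this would be mixed into the real class, e.g.:
#   Mongo::GridIO.send(:include, GridChunkedEach)

# A stand-in for Mongo::GridIO, used here only to demonstrate the behaviour.
class FakeGridIO
  include GridChunkedEach

  attr_reader :chunk_size

  def initialize(data, chunk_size)
    @io = StringIO.new(data)
    @chunk_size = chunk_size
  end

  def read(bytes)
    @io.read(bytes)
  end
end
```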

Next, we can use the Rails 3 router’s ability to take a proc directly as a Rack handler, responding to the request while bypassing as much of the Rails stack as possible:
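A Rails-free illustration of why a bare proc works as an endpoint: Rack only requires an object responding to #call(env) that returns [status, headers, body], and the body merely has to respond to #each - which is exactly what the patched GridFS file provides. The chunk array here is a stand-in for a real GridFS file:

```ruby
# Any #call-able returning the [status, headers, body] triple is a valid
# Rack endpoint; the body is streamed via #each, chunk by chunk.
streaming_endpoint = proc do |env|
  chunks = ['first chunk ', 'second chunk']  # stand-in for a GridFS file
  [200, { 'Content-Type' => 'application/octet-stream' }, chunks]
end

status, headers, body = streaming_endpoint.call('PATH_INFO' => '/files/example')
```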

So now the GridFS file is streamed chunk-by-chunk to the client without the contents of the file being held in RAM along the way. Also, with a small modification this file can be moved directly into a rackup file and used by any Rack-enabled web server, such as Passenger:
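As a sketch of what that rackup file might look like (the grid_file_for helper is hypothetical shorthand for the GridFS lookup-and-stream logic described above):

```ruby
# config.ru -- run the streaming endpoint under any Rack server.
run proc { |env|
  file = grid_file_for(env['PATH_INFO'])  # hypothetical GridFS lookup
  [200, { 'Content-Type' => file.content_type }, file]
}
```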

We hope we’ve given you a good overview of how insanely great Rack is, and how easy it is to use GridFS from within Rack. Yay for us.