A thought on code readability

If you are asked the question …

What are our coding standards?

… how would you react?

  1. Roll your eyes and say nothing
  2. Roll your eyes and mumble something
  3. Respond with something that doesn’t answer the question
  4. Provide an appropriate answer
  5. Shut up

Over the last couple of years I’ve been in environments where a lot of developers would react according to options one or two, a handful according to option three, and most would go with five.

Why is that? Simple: global conventions. Nowadays devs simply “know” what the conventions are: how to name variables, how to name files, how to write tests. It’s just there in your mind, because you’ve probably seen it so many times in other projects that you get used to the way things are.

Say you’re writing Javascript: you usually use 2 spaces instead of 4. Writing HTML you would use 4 spaces, same as CSS.

Naming variables in Ruby, for instance, you would use underscores, whereas in Java you would use camel case.
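
As a throwaway illustration – the variable name and value here are made up – the same name under each convention looks like this:

```javascript
// Ruby-style snake case, as you would see in a Ruby codebase:
var project_owner_name = 'Ada';

// Java-style camel case – which most Javascript codebases follow too:
var projectOwnerName = 'Ada';
```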

What’s interesting is that although those conventions are used broadly, every developer has their own style, and that’s what causes discussion. And that’s what I’m challenging, because it affects readability.

Challenging the convention

Variable and file names are important, but they’re not as important as your code’s readability. How you structure your file and make it pleasant to read makes a hell of a lot of difference in understanding code written by others – as well as your code when read by others.

We all know that when writing classes in Java or Ruby – or even Javascript files – we want such classes to be small enough to fit in one screen. But how small is that screen? Is that your screen or the screen of your team mate? What if you code in font size 14 and your team mate uses font size 8?

Sure, these are variables that can be amended by personal preference, but reading code is no different from reading a book: the better it’s written, the more enjoyable you will find it and the better the understanding you will derive from it.

I’m not talking about writing code that is beautifully engineered. I’m talking about code that is beautiful to look at and makes you want to read it to understand the brilliance behind it. Even if there’s no brilliance, at least you will look at it and understand that statement A is followed by statements B and C and that they, together, create a condition that determines whether statement D executes.
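
To make that concrete, here is a deliberately trivial sketch – the function names and values are invented purely for illustration:

```javascript
// Purely illustrative stand-ins so the sketch runs on its own.
function loadProject() { return true; }
function loadOwner()   { return true; }
function checkActive() { return false; }

var published = false;
function publish() { published = true; }

var hasProject = loadProject();   // statement A
var hasOwner   = loadOwner();     // statement B
var isActive   = checkActive();   // statement C

// Together, A, B and C form the condition that decides whether D runs.
if (hasProject && hasOwner && isActive) {
  publish();                      // statement D
}
```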

Take this simple piece of Javascript code:

var fetchProject = function(id) {
  return ProjectDao.find({ where: { id: id }, include: [TaskDao, PersonDao, OrganisationDao] });
};

The main line in this function is 93 characters long, discarding spaces and the semicolon. It falls just within my acceptable limit of 100 characters per line, still reads OK, and I understand what is going on.

However I would modify this line to read like this:

var fetchProject = function (projectId) {
  return ProjectDao.find({
    where: { id: projectId },
    include: [ TaskDao, PersonDao, OrganisationDao ]
  });
};

Or even better, like this:

function fetchProject (projectId) {
  var opts = {
    where: { id: projectId },
    include: [ TaskDao, PersonDao, OrganisationDao ]
  };

  return ProjectDao.find(opts);
}

Why? A couple of things:

  • Having spaces between delimiters makes it easier on the eye
  • There’s a clear separation of arguments and function call
  • Variables are named appropriately
  • Blank lines, again, make it easier to comprehend

Although the very first example was an inline example – and yes, there are places for them – overusing them is simply not good. It’s like trying to understand Lorem ipsum. Take this sample:

ProjectSubTypeDao.create({ subType: subType }).then(function(listingSubType) { iterationDone(); }).done();

What is going on here? If you find yourself having to go back to the beginning of the line to understand the statement, then it is a fail – I had to go back. More than once.
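
One way to tame that statement is to give each step its own line. In this sketch the DAO is stubbed out with a plain promise so it stands on its own, and the library-specific .done() call is dropped:

```javascript
// Stand-in for the real ProjectSubTypeDao so the example is runnable;
// the actual DAO comes from the project's persistence layer.
var ProjectSubTypeDao = {
  create: function (attrs) {
    return Promise.resolve({ subType: attrs.subType });
  }
};

function iterationDone() {
  // signal the surrounding loop that this iteration has finished
}

// The same calls as the one-liner, one step per line.
var creation = ProjectSubTypeDao
  .create({ subType: 'residential' })
  .then(function (listingSubType) {
    iterationDone();
    return listingSubType;
  });
```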

I wonder if sometimes we are sacrificing readability for the sake of writing “concise” code or – worse – smart ass code. I remember being in a Clojure training once where we had to write a couple of exercises. I managed one or two given the limited time we had and was feeling pretty proud of myself. I showed the instructor the solution to my first problem and got this in return:

Good but my solution to this is a one liner.

OK. Brilliant. You’re a genius. Can anyone else comprehend what you’ve written?

Truth is, a lot of people can, but that doesn’t mean they should or would. And yes, there’s space for this sort of code, and if everyone in the team is on the same page I would even dare say go for it. But where is the line of readability? Is it so thin that it can be ignored? I don’t think so.

I was having a discussion recently with a coworker about programmer laziness and how it affects our code writing. The main point was that programmers choose shortcuts or “smart ass code” that is kinda complex to understand in favour of “just add this other string to the array and the code will do the rest”.

Now I love that in frameworks, I really do. The more boilerplate code frameworks can absorb the better. I’m just not sure such practice is to be encouraged when writing business applications using those frameworks just so we can be more efficient. It will save time for sure, no doubt about that, but even if it’s tested properly and all possible permutations are covered, no one but the original authors will deeply understand the purpose of such complicated code.

It goes back to my point of “conciseness over readability”. What are we sacrificing there? As a team we have a responsibility to tell each other that such an approach is or is not ideal, that such a code structure can be improved in such and such ways, and that coding standards are an important thing.

Without coding standards a team, however tightly bound it is, can fall into disarray where each member starts using what they think is the best approach at the time. I find it quite weird that people don’t talk much about this sort of thing – it’s just accepted. And I’m guilty as well, having accepted things too easily to say otherwise.

What happened? Did you kill yourself because you didn’t say anything?

Well, of course not, but I was rather frustrated with myself for not speaking my mind. A lot of people I know grow used to ways of doing things in one language and, when they switch languages or environments, the conventions don’t switch. They stick around.

I find that interesting because some conventions that are carried over are good in a new environment but some others simply don’t have a place. They feel alien. And most of the time team members are OK with a mixed bag of conventions.

Take CSS for example. Do you use camel case or dashes when naming classes? Why do we see dashes everywhere? Hint: it has to do with the SHIFT key.
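
To make the hint concrete – the class names and rule are invented for illustration – compare what your fingers do while typing each style:

```css
/* Dashes: every character is typed without touching SHIFT */
.project-list-item { margin: 0; }

/* Camel case: SHIFT on every word boundary */
.projectListItem { margin: 0; }
```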

Is that efficiency or laziness? I guess we would have answers both ways; however, we are not sacrificing readability here. We are using what our hands feel more comfortable with while our brains are “wired into the problem”.

Same thing applies to Java or Ruby or Clojure. Camel case, Hungarian notation, using or not using underscores (language allowing)… things differ everywhere. That doesn’t mean we should not use them; it means we should talk about them and make decisions about code readability consciously, as a team.

So the next time someone asks you the question “What are your coding standards?”, don’t push them aside. Stop and think whether you really have any. Talk to your team and make sure everyone understands where you’re coming from.

That’s me for the day. Cheerio!

To grid or not to grid?

The most common argument I hear about when someone is starting a new web project is:

Just use Twitter Bootstrap!

It seems to be a very reasonable argument: after all Bootstrap provides a very good grid system, good typography, helper classes, responsive elements… really, the question should be:

Why are you not using Bootstrap?

That question got me thinking about when it actually is reasonable to use it – we know that we can use it unreasonably everywhere. Does it really suit every website?

The TL;DR answer to that is a resounding no.

Recently I got involved in a new web project: green field, full flexibility. The first thought was really to slap Bootstrap in, make use of its grid system and responsive utilities, and move along.

Boy, oh boy, was I wrong or what?

Spike Work

We start every project with some spike work and this was no different: setting up Bootstrap, styling the internal grid elements, making sure they respond correctly and…

Hold on, what’s this weird responsive breakdown from Bootstrap? I don’t want my iPhone 3GS to look like an iPad in portrait mode.

What about those grid classes for device categories? I understand the purpose but there’s too much “fluff” going on in there.

The spike work proved a success because we could challenge the original idea and set some boundaries as to what benefit Bootstrap was actually giving us. We didn’t have a complex structure, and a large CSS file coming from Bootstrap would just complicate maintenance – plus the overrides we would all inevitably write.

Going back a few years, I remembered a project I did at Deakin University – the new Deakin Wordly website – when Bootstrap was just becoming popular. We decided to look into it and try to adopt as much as we could. As it turns out we used a customised version of Bootstrap’s typography and rolled our own grid.

Having that sort of experience is very valuable because you understand that not everything can rely on a single framework – it may be great for some things but, for others, the level of customisation is so high that we might as well write our own thing.

Defining the project

With the spike out of the way and a design draft we could work with, we started to break the page down into sections. What we really needed was:

  • A header
  • A sidebar
  • The main content (which is a map)

The design would also cater for three responsive breaks: mobile phones up to 640 pixels, tablets from 641 up to 768 pixels, and desktops – anything above 768 pixels.
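
Under those numbers, the media queries for a hand-rolled grid stay tiny – the selector and widths below are illustrative, not the project’s actual styles:

```css
/* Mobile first: base styles cover phones up to 640px */
.sidebar { width: 100%; }

/* Tablets: 641px to 768px */
@media (min-width: 641px) and (max-width: 768px) {
  .sidebar { width: 33%; }
}

/* Desktops: anything above that */
@media (min-width: 769px) {
  .sidebar { width: 25%; }
}
```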

So I hear you asking…

Could we have used the grid from Bootstrap?

Yes, we could.


Could we have used the responsive abilities from Bootstrap?

Customising them, yes, we could.


So why the heck did you decide to roll your own thing?

Simple: Bootstrap is overwhelmingly complex for this sort of project.

Recently I wrote about using the right tool for the right job and, in this case, Bootstrap wasn’t it. We would end up with a framework that is so overly baked that it is completely distasteful – plus it would be hard to work with.

Note: I don’t know about you but I hate noise in the codebase. If something is there it should be there for a reason and not because it’s part of a framework – it only causes confusion and makes things harder to find.

As you read above, the website is simple – the biggest piece of work would be drawing on the map. Working with other members of the team we wrote a very simple framework that was not only semantically correct but also easy to maintain and – most importantly – extendable.

The Javascript piece is so small that we decided not to use Angular and opted for jQuery + Hogan.js for some UI manipulation. This way we ensured there wouldn’t be any problems with mobile phones and the bandwidth consumed would be minimal.

For styling, SASS was the choice: simple and efficient, it allows for organised code and is easy enough to get going with – even if you don’t have experience with CSS preprocessors, they are a breeze to pick up.

On top of all that, for a (very) quick feedback loop, Gulp finalises the picture, providing us the necessary preprocessor tooling, live reload capabilities and great flexibility for packaging.
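
A sketch of what that gulpfile can look like – assuming the gulp-sass and gulp-livereload plugins are installed, with illustrative paths:

```javascript
var gulp = require('gulp');
var sass = require('gulp-sass');
var livereload = require('gulp-livereload');

// Compile the SASS sources into plain CSS.
gulp.task('styles', function () {
  return gulp.src('src/styles/**/*.scss')
    .pipe(sass())
    .pipe(gulp.dest('dist/css'))
    .pipe(livereload());
});

// Recompile on every change for the quick feedback loop.
gulp.task('watch', ['styles'], function () {
  livereload.listen();
  gulp.watch('src/styles/**/*.scss', ['styles']);
});
```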

Final words

We live in a very fast paced world where technology is dictating how we live. But it’s not because of this dictatorship that we should succumb to its power and use whatever the industry throws at us.

Bootstrap is a great tool that provides quick prototyping and solution validation, adding great value through quick feedback. But care must be taken in assuming that it can graduate from a POC tool to a production-ready framework.

Every project is different, and it is only by analysing what you’re trying to solve and questioning your decisions that you will really succeed in delivering what you aim for – sometimes what you need is a box cutter instead of scissors.

Till next time!

Something about tools

One thing that always fascinated me is how people invent things. More precisely things to help us make other things. Yes, I’m talking about tools.

Humans have been using tools as far back as 2.5 million years ago, and since then we have been evolving them to our needs, adapting them and creating new ones – each with a different purpose or a different level of detail to give us more control over our tasks.

But one thing that every tool provides, independent of its original purpose or refinement, is productivity: every tool allows us to execute the tasks we need to execute faster – maybe not better, but that depends on the skill of the handler and the quality of the tool itself.

Being productive in the software industry is kind of a big deal: deadlines are never as long as we need, and sometimes we have to go out of our way to do tasks that will assist the business somehow but are not part of our core skill set, so we have to find the appropriate tool for the job – you don’t want to use Excel for everything, for god’s sake!

I’ll never forget what my Project Management tutor said to the class:

Know your tools.

And that sounded very right to me: I knew that to really do my job well, apart from knowing what I needed from a conceptual perspective, I needed to know the tool I was going to use to execute the conceptual task in an effective way.

Nowadays my tooling needs have evolved quite substantially: I don’t use one IDE anymore and I try to understand what the tool is best for instead of trying to use one thing for every little piece of development I do.

But the IDE is just one part of it – like Batman, we all need a utility belt.

I expanded my set of tools to browser extensions and, more recently, OS extensions. In the quest for efficiency I know my machine like I know my house: I can easily navigate it without a mouse and still be very productive.

Learn your most used tool first

Before jumping in and installing every second piece of software you see around, take time to understand the tools you currently have. The most important one is not your IDE. It is your operating system. Be it Linux, Windows or Mac OS, be sure you can navigate it without a mouse.

Challenge yourself to learn how to open applications, minimise them, maximise them and swap between them. But most importantly, learn how to transfer data from one application to another – yes, copy and paste. Do you know the shortcuts? Everybody knows.

From using your OS well, needs will arise. For instance, I use Alfred for productivity – Spotlight is just not enough, although the new version in Yosemite is promising – and I extend its functionality by installing several workflow plugins that allow me to be even more productive.

I hardly use the calculator app anymore, it’s all in Alfred.

If I need to know my external IP address I can simply issue a few keystrokes and it queries a website that provides me this information.

Launching VMs has become a piece of cake too – I don’t even have to open VirtualBox anymore.

Dive deep into your second most used tool

As most software engineers do, I also use an IDE to be more productive. When I was first introduced to IntelliJ I thought….

OK, new tool. Whatever, sticking to Eclipse. I know it better.

And I really did. That was 4 years ago. Now I can’t think of a better IDE than IntelliJ. I guess it became even better after I learned my way around it and started customising the keyboard shortcuts to my own needs.

Once you get that comfortable with a tool it’s hard to go back, but it takes time and, most importantly, dedication to learn and improve.

For instance, one of the things I did was create code snippets for the things I use the most. There are plugins that come pre-packed with useful snippets but, on a day-to-day basis, you want to make sure you stay agile and actually remember that stuff – i.e. the shortcuts and snippets make sense to you.

But I don’t stick to only one IDE. I also use Sublime Text for Javascript and Ruby development, which gives me the agility I need in those environments – opening up WebStorm just to develop in a Javascript environment is overkill.

And that’s where things get interesting: plugins! They can improve your agility quite considerably and some of them are just too handy. For instance…

I’m currently working on a project where I can’t launch the app from the IDE, for many reasons. TDD is also difficult, as dependencies are wired using new Object() most of the time.

Recently we had to write a regex to replace contents in a file, and the tools we usually use follow the Javascript implementation of regex.

We installed the Java REPL plugin in IntelliJ and it was of great benefit, allowing us to test the regex in a Java-like environment.

The important thing to remember about plugins: don’t overdo them. The fact that a plugin does something really cool that is not particularly useful to you at the time is NOT a good reason to install it.

Link your apps

Another powerful feature that a good tool should have is linkage with other apps: the ability to transfer data to, or use data from, another app as arguments to the app you want to use is a key factor in efficiency.

I mentioned Alfred above but can’t finish this post without mentioning Dash.

Dash is an “API documentation browser” and can integrate with pretty much anything. Finding the appropriate documentation for an API takes a few seconds and, if that’s not enough, it also gives you “cheat sheets” for things like VI, Git, Capybara among others.

I have linked IntelliJ, Sublime and Alfred to it and I’m a happy dev: I can just hit CMD + D and Dash presents the exact match for the API I have highlighted in my IDE – almost magical.

Use the web

There are tons of tools available online, from CSS generators to regex debuggers, that will allow you to be more productive without spending too much time trying to find a solution to a problem all by yourself.

Also check out browser extensions: Chrome and Firefox are full of them. But the same advice applies – don’t overdo them: you will overload your browser, it will eventually slow down, and you won’t know what to do with so much stuff installed.

Final words

I believe, as professionals, that we should not reinvent the wheel but use the power of the community to our benefit, to learn and apply. It’s truly a great feeling when we get that regex right or implement that complicated RFC all by ourselves – yup, we do get very proud – but sometimes somebody else has already implemented it, and all we need to do is make sure we understand it so that, when a change comes, we know what to do.

Don’t be discouraged from trying. Learning is a process that takes time, and getting used to new things takes even longer, but the rewards are the greatest: you feel empowered, in control, and always open to the new.

Until next time!

iOS Pipeline: how to successfully automate your building process

OK, before anything, I must say: this post came about as a response to the frustration of not finding an appropriate solution out there. I know there are a couple of ways the same result can be achieved, but I decided to go about it using a simple process that worked for me.

A word of warning: I’m not a full-time Objective-C developer. I enjoy setting up build pipelines and making sure it all aligns at the end so, if you find any mistakes in this setup, please feel free to leave a comment!

What we are doing

Our goal is to publish an artefact to TestFlight. In order to do that we want to make sure that the app is properly tested and the artefact is built for the appropriate provisioning profile.

To do that we don’t want to combine all these steps into one single job: it would take too long and we would not be able to publish artefacts with different configurations – more on that later.

Instead we want to separate all that into 3 jobs – Build, Test and Publish – with the last job sending the artefact off to TestFlight with a changelog.


About the “Test” phase

We are not talking about unit tests here – or the XCTest framework that comes bundled with every new app you create.

This test phase is about automated UI testing. Be it Calabash, UIAutomation or any other tool you feel comfortable with, the purpose of this phase is to provide a level of confidence by automatically navigating the UI and executing the operations a user would execute.

Why do it this way

We don’t want multiple targets generating different artefacts with different configurations. What we want is an xcarchive that can generate multiple artefacts with the appropriate configuration for different environments.

Plus, we want to generate the xcarchive only once. That’s one of the main reasons for this approach: building an xcarchive produces a file from which we can build IPAs with different configurations applied.

If we generated different artefacts for different targets, we would have to test every target, and build time would escalate considerably.

Finally, the flexibility of having multiple publish jobs is also a plus: leverage one xcarchive and publish it to multiple places.

What you need

First and foremost we need Jenkins installed. If you don’t know Jenkins, it is a Continuous Integration (CI) server that allows different projects to be built at certain intervals – triggered by polling the SCM, on a pre-defined schedule, or manually. Each project to be built is called a Job in Jenkins, and jobs can depend on each other.

Jenkins supports the installation of multiple plugins, and the bare-bones installation does not come with Git. Nor TestFlight. So we also need some plugins installed in Jenkins for this whole thing to work:

  • Git plugin
  • Git client plugin
  • Github plugin
  • Testflight plugin
    • Make sure you configure that properly under Manage Jenkins > Configure System
  • Build pipeline plugin
  • Delivery pipeline plugin
  • Shared workspace plugin
  • Green balls plugin (just so we don’t get blue balls)

And finally you will need two environment variables configured in Jenkins – you can configure them via Manage Jenkins > Configure System: one for the developer certificate you will use and another one for the provisioning profile.

The environment variables are simple key-value pairs and can be accessed on any job using the syntax ${VAR_NAME}.
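
For example – the variable names below are assumptions, use whatever names you configured – an “Execute shell” build step can reference them like this:

```shell
#!/bin/sh
# These values normally come from Manage Jenkins > Configure System;
# they are hard-coded here only so the example stands on its own.
CODE_SIGN_IDENTITY="iPhone Distribution: Example Pty Ltd"
PROVISIONING_PROFILE_UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"

# Inside a job, the ${VAR_NAME} syntax expands to those values:
echo "Signing with: ${CODE_SIGN_IDENTITY}"
echo "Embedding profile: ${PROVISIONING_PROFILE_UUID}"
```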

Prepping the project

For Jenkins to build your project successfully there are two things that need to be done: share your target’s scheme and create a shell script that will do the heavy lifting for you.

To share your app’s target scheme all you have to do is click on your target next to the Stop button, then on Manage Schemes…


… and, on the screen that comes up, tick the Shared box on the right. If you don’t do that then you can only build your app from XCode and that’s not what you want.


The script is a bit more complex: you want to pass in the certificate you will be using and the provisioning profile that will be embedded in the artefact – that’s why we created those environment variables beforehand, so we can reference them in the build script.

A sample script that I have used with some degree of success is below. It takes two arguments – the provisioning profile UUID and the code sign identity – which is where the environment variables we configured earlier come in. The script looks like this – thanks to this blog post, with some modifications:


#!/bin/bash

pp="$1"
csi="$2"

if [ ! "$pp" ]; then
  echo "Please provide a provisioning profile UUID."
  exit 1
fi

if [ ! "$csi" ]; then
  echo "Please provide a code sign identity."
  exit 1
fi

# Create the working directories if they don't exist yet.
if [ ! -d './JenkinsBuild' ]; then
  mkdir './JenkinsBuild'
fi

if [ ! -d './JenkinsArchive' ]; then
  mkdir './JenkinsArchive'
fi

if [ ! -d './JenkinsIPAExport' ]; then
  mkdir './JenkinsIPAExport'
fi

xcodebuild -alltargets clean

rm -rf JenkinsBuild/*

# Build for the simulator so the Test job can run UI tests against it.
xcodebuild -target "TARGET_NAME" PROVISIONING_PROFILE="$pp" CONFIGURATION_BUILD_DIR=JenkinsBuild -arch i386 -sdk iphonesimulator7.1

if [ ! $? -eq 0 ]; then
  echo 'Build failed. Could not compile app.'
  exit 1
fi

rm -rf JenkinsArchive/*

# Archive the app; the xcarchive is what the Publish job exports from.
xcodebuild -scheme "TARGET_NAME" archive PROVISIONING_PROFILE="$pp" CODE_SIGN_IDENTITY="$csi" -archivePath "./JenkinsArchive/blog.xcarchive"

if [ ! $? -eq 0 ]; then
  echo 'Build failed. Could not generate xcarchive.'
  exit 1
fi

rm -rf JenkinsIPAExport/*

# Export an IPA from the xcarchive with the chosen provisioning profile.
xcodebuild -exportArchive -exportFormat IPA -exportProvisioningProfile "PROVISIONING PROFILE NAME" -archivePath "./JenkinsArchive/blog.xcarchive" -exportPath "./JenkinsIPAExport/blog.ipa"

if [ ! $? -eq 0 ]; then
  echo 'Build failed. Could not generate IPA.'
  exit 1
fi
You might have noticed we have some paths being specified in the script, most prefixed by Jenkins. That is because we want to generate artefacts independent of where XCode puts them – every machine is different so we might as well manage those ourselves.

So JenkinsBuild, JenkinsArchive and JenkinsIPAExport are directories created at build time where artefacts will live and be referenced by other jobs.

OK, now you’re good to go. Let’s go set up Jenkins.

Jenkins Setup

The Build Job

With all those plugins installed and script ready it’s now time to create our pipeline: let’s create our first job, the Build Job – click on the New Item link on the left, give the job a name and click on Build a free-style software project. A good standard I usually follow for naming jobs is "<APP_NAME> - <STAGE>".

You will then be presented with the configuration screen for the job. Since Jenkins doesn’t provide a way to export job configurations, have a look at the screenshot below – paying attention to the yellow boxes, which mark what you will have to change.

Note: configuration may differ from project to project. In this case I’m not using a provisioning profile or certificate to sign my artefacts, but you could, if you wanted to, use the script we wrote above, passing in the arguments as explained before.

Job configuration for the “Build” stage

Good. That’s it for the build phase.

The Test Job

Let’s now build the test phase: go back to the Jenkins home page and click on New Item again, give it a name, click on Copy existing Item and provide the name of your Build job – Jenkins has auto-completion so it should pick it up really easily.

The configuration will be largely the same. We will have to execute a few things after the build step is successful, such as archiving the artefact for publishing and publishing the test results, but the most important change is to make sure that the Git SHA used in the Build job is the same Git SHA used here.

To achieve that we use the special environment variable $GIT_COMMIT in the “Branches to build” field of the job’s SCM configuration.

Finally we want to run a different shell script, for testing. This should automatically open the simulator and execute a suite of UI tests using the build that was created:

./runTests.sh ci.js "${BUILD_WORKSPACE}/JenkinsBuild/blog.app"

Have a look below for the screenshot configuration:

Job configuration for the “Test” stage

Great, with that in hand, let’s link the two jobs! To achieve that, go back to your Build job and add the post-build action Trigger parameterized build on other projects. Then type in the name of the Test job you have just created – auto-completion works here too – and click on Add Parameters to select the option Pass-through the Git Commit that was built.

That will enable a dependency between the jobs and, if you go back to either job’s home page, you will notice that you now have Downstream or Upstream Projects listed. Quite handy!

The Publish Job

But we are not quite done yet: we still need to create our last job, which will publish the artefact to TestFlight. Again, just like the Test job, create a new item by copying the Build job.

A couple of things to notice here:

  • We will be using the artefact that was created in previous jobs and not creating a new one, otherwise we would be compiling the whole app all over again
  • At the end of a successful publish we will tag the build so we can go back to it if needed in the future

Again, pay attention to the highlighted boxes, as they dictate what has to be done.

Job configuration for the “Publish” stage

And just like the Build job we will have to change the Test job to allow triggering this job – but there’s a catch!

The catch is that we don’t want to publish every successful build: we want to be able to run specific builds as we wish while still leveraging the same Git SHA used in a specific pipeline.

Thus, going back to the Test job, add the post-build action Build other projects (manual step). Then type in the name of the Publish job you have just created – there’s no auto-completion here – and click on Add Parameters to select the option Pass-through the Git Commit that was built.

Great, now you have linked jobs. But how can you see it?

Adding a pipeline view

Go to Jenkins home and on the list of Jobs click on the little plus sign next to the tab All. This will prompt you to create a new view.

Give it a name and select the option Build Pipeline View. Then you will be presented the configuration page for the view… all you really have to do is select the initial Job (which in our case is the Build job) and the rest is done automatically for you.

Have a look at mine, that’s how I like it – the important bits are highlighted.

View configuration for the pipeline

Final words

As you can see, I’m not using build configurations – mostly because I didn’t bother looking at them in detail to see how to set it all up. But you definitely should: I used them previously on another project (already set up by someone else) and they were very handy for defining build names and specific configurations for different environments.

Another thing about this whole process is the automated testing. It uses UIAutomation as the driver and all tests are written in Javascript – you can read more about setting it all up by following this blog post from my buddy Shaun Irvine.

Finally it’s very likely that you will run into a permission issue. Follow this thread on how to solve it.

Good luck with it all!

Hacking for the community – RHoK Melbourne

Two days. Six teams. A bit over thirty people. Loads of fun.

Loads of fun. If there’s a combination of words that best describes what the last Random Hacks of Kindness (RHoK) in Melbourne was, those are the ones.

But it’s not only about the fun of working on something different and helping a community project or a startup get off the ground or make the world a better place. It’s also about the people: meeting new people, understanding their passions and aspirations, sharing, communicating, bouncing ideas around and making things happen.

To me the spirit of RHoK was that: make things happen while getting to know awesome people. Create something small or create something big, but carry with you the thought that you’re not there for yourself: you’re there for them.

It was with that thought in mind that I joined this RHoK weekend. I went to the information session a month before the event and visited the website for more information on the problems, but couldn’t really decide what to do. I knew that I wanted to accomplish something and try some more nodejs.

But before I talk about how the days went by, let me take you through the format of the event.

RHoK Weekend in Melbourne

The event runs for the whole weekend. It starts Saturday at 9 AM and goes until the end of the next day, finishing with drinks at the pub.

As you’d expect it’s a very social event, with people sharing ideas and tech solutions across teams so everyone can get up and running quickly. Basically, the schedule looks like this:

  • Day 1
    • 09.00 – Get set up
    • 09.30 – Welcome and administrivia
    • 09.45 – Choosing problem areas to work on, and teams to work with
    • 11.00 – Start your hacking
    • 19.00 – Dinner (provided)
    • 20.00 – More hacking
  • Day 2
    • 08.00 – Hacking continues
    • 14.00 – Time to down tools
    • 14.30 – All projects get 10 minutes to present their work to all
    • 15.30 – Judges deliberate
    • 16.00 – Awards and prizes
    • 16.30 – Drinks

It’s full on – as you expect – but very rewarding.

The problem I picked

OK, so I wanted something small as I had some time constraints on Saturday and wanted to get something going quickly. The problem I picked was one that people were already working on, but some of the team members had moved on and couldn’t contribute to the project anymore.

The problem, already an existing project, is called Witness King Tides, from Green Cross Australia. Here’s the original problem description. The snippet from the RHoK website is this:

Witness King Tides is an existing website that asks communities around Australia to take photos of the coastline when king tides hit. These photos capture what our coastal communities may look like in the future, as global sea levels rise. Together, the images build a picture of the threat posed by sea level rise across Australia and help track the future impact of climate change.

They needed a backend to handle file uploads and capture some metadata about the photos. Previously a web client and native iOS and Android apps had been developed to great success, but the lack of a proper backend was holding them back.

The list of tasks

The solution

So the new guys who joined them – myself included – were very keen on nodejs. We knew it was simple and we knew it would be fun, so it was really just a matter of doing some planning and getting on with it!

Things looked pretty gloomy at start

We picked expressjs and included some extra libraries to make our lives easier. The original solution relies on Flickr to store the photos and have them organised in albums – manually organised, I must say – and we couldn’t change that.

From there we knew that some interaction with Flickr was needed, and it proved, one more time, that OAuth 1.0 sucks. Luckily there were libraries already built that eased the pain of dealing with OAuth, and we just needed to pull them in.

Note: Flickr itself provides a library, which seems like the way to go at first. But it doesn’t provide an upload feature – which was what we wanted – and it downloads a lot of data when you start using it. To be honest, the only thing that adds value is that it makes your app a proxy to Flickr, so your clients invoke only one endpoint.

Coming back to the solution, there were basically two endpoints that were absolutely necessary: /tides and /upload. Here’s a quick description:

GET /tides
Returns a list of tides already mapped by Green Cross Australia

POST /upload
Uploads a photo, streaming the content to Flickr and saving the metadata submitted on a Mongo DB

A small bump appears

So we managed to get the two endpoints up and running on Saturday before I left – which was around 5:30 PM! That was great and I was happy. Andy was sorting out the hosting and another Julián was investigating the Flickr API for querying. Everyone was tuned in!

Next day comes and I get a question from Andy, who developed the iOS app, more or less around these lines:

So I tried to upload a photo using the iOS app and I got an error. Do I have to send a multipart request?

Damn! We had implemented only the web side of things and I’d completely forgotten about iOS. So off we went investigating how to get it done and, as it turns out, it was a walk in the park.

Since we had created a small file that handles uploads – uploader.js – all we had to do was check the content type of the request and invoke the right function on the uploader.

router.post('/upload', function (req, res) {
  var uploader = new Uploader();
  if (req.get('Content-Type').indexOf('json') >= 0) {
    // handles JSON payload
    uploader.handleJson(req, res);
  } else {
    // handles multipart
    uploader.handleMultipart(req, res);
  }
});

And to make life easier we’re not even streaming it: the client just sends a base64 representation of the image in the JSON payload, so all I had to do was convert that into bytes and send it off to Flickr.
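That conversion is essentially a one-liner in Node. A small sketch – the payload field name `photo` is a hypothetical, not the field the app actually used:

```javascript
// Turn the base64 string from the JSON payload into raw bytes
// ready to be sent on to Flickr. `photo` is a hypothetical field name.
function decodePhoto(payload) {
  return Buffer.from(payload.photo, 'base64');
}
```

From there the resulting Buffer can be handed to whatever performs the Flickr upload.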

That bump felt like a decent-sized one at first, but after about 30 minutes the whole thing was done and everyone was happy.

Heck yeah!!


Just a quick note about deployment. Amazon Web Services (AWS) provided some free credit for RHoK projects to host their solutions on.

My experience with AWS is almost zero, but Andy was experienced with it and decided to give Elastic Beanstalk a go – a service very similar to what Heroku provides in terms of “ready made boxes for specific platforms”. Since we were using Node, and both Heroku and Elastic Beanstalk support it, we might as well use the free credits.

I must say it was the best decision. Beanstalk is not only flexible, it’s fast, and it doesn’t “go to sleep” like Heroku does with free apps. Sure, we had free credit, but hey, I’m sure the costs will be minimal compared to Heroku.

And setting up the command line tools for automagic deployment was brilliant.

Highly recommended.

So, what did we build?

Our test site is here: http://witnesskingtides.azurewebsites.net/. You can upload photos, check the tides and so on. The whole code is open source, so you can just grab it and run your own if you want.

King Tides API

King Tides Web

King Tides iOS

Jackie did a fantastic job on the UI: the site is now responsive and leverages the new endpoints we created.

On using Nodejs

If you haven’t yet, just do it. Learn and apply it. It’s way too much fun to be pushed aside.

Thanks to all!

The weekend was great: the vibe was way up there, people were engaged, and even our problem owner was connected to us from sunny Brisbane.

The RHoK committee did a fantastic job and I’m totally looking forward to the next one already. New problem, existing problem, bring it on!


I feel lost in this whole modern world. Not all the time. But more often than I like.

It’s tough to see the values you were taught at a younger age being ripped apart right in front of you and all you get to do is sit down and watch – and hopefully learn that things are not exactly this way anymore.

I feel like an embarrassment, a burden even. Someone that doesn’t add value, that “doesn’t get it”.

It’s tough. I don’t know what to do. Even when I open my doors so all kinds of winds flow through and portraits fall and things get out of order, it feels like the wind never disordered things enough for me to fix anything.

It’s frustrating.

I feel lost, dumb. Out of place and out of touch.

I can’t say anymore that I’m not stressed. I can’t say anymore that “It’s alright, no worries”.

Putting on a brave face every day and facing the consequences of your choices is not easy. Nor is smiling when you’re burning inside.

I guess there are times in our lives when we get tested for all sorts of things. My time is now, and I’m not entirely sure I’m doing a good job.

I don’t want your compliments. I don’t want you to pat me on the head.

I want an ear. Maybe a shoulder.

Can you lend one to me?