NodeJS: The good, the bad and the Javascript – Part 3

The Javascript

On my first post I talked about the good things about developing with Node: responsiveness, flexibility, testing and the feel good feeling that developers love when having such a quick feedback loop.

My second post was about the downsides: memory leaks – and how to find them – and the infinite number of modules available, dependency management and module dependency. Some things that make me cry whenever I’m developing with it.

For this last part, a rather short post, I will just touch on Javascript.

When I joined the project I’m currently on I was fairly confident in my skill level: I knew how to create objects in Javascript, how to test using Mocha, and knew a few extra frameworks to make my life easier. It’s amazing how much you don’t know what you don’t know.

Javascript has been around for so many years, and been through so many changes, that a lot of very smart people have thought of different ways of doing the same thing. I love that. It simply shows how much flexibility the language has and how much it can evolve – take, for example, the latest trends in functional programming and how we now have JS libraries just for that.

One thing that caught my attention during this project was Promises, more specifically the A+ standard. Over the last two years I’ve played with them and experimented on a few pet projects, but never really paid much attention to the difference they can actually make to your writing – not only in cleaning up the famous Pyramid of Doom but also in structuring your code and architecting your solution.

Throughout my journey I’ve learned how to test them, use them properly and enhance my flow with them. A great post written by Nolan Lawson helped me understand them better and not abuse them – as with anything, there’s a place for everything.
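To illustrate the difference, here is a minimal sketch of the kind of refactor I mean. The helper names (getUser, getOrders) are hypothetical, standing in for any chain of async steps:

```javascript
// Hypothetical async helpers – each returns a promise.
function getUser(id) {
  return Promise.resolve({ id: id, name: 'Ada' });
}

function getOrders(user) {
  return Promise.resolve([user.name + "'s first order"]);
}

// Instead of nesting callbacks (the Pyramid of Doom), each step
// becomes one flat link in a promise chain:
function fetchOrderCount(id) {
  return getUser(id)
    .then(getOrders)
    .then(function (orders) { return orders.length; });
}
```

Each `.then` sits at the same indentation level, which is exactly what makes the flow easy to restructure later.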

I also learned that, in Node specifically, you want to use a properly written library – like Q – instead of Node’s own Promise. The implementation in Node is different and you can really use some of the extra features present in Q.

After four months it’s safe to say: I would not go back to callback hell unless there was no alternative.

But Promises are not the only thing. When developing with Javascript you have to make space for the quirkiness of the language. Things like:

if (a === b) { ... }

Or even…

(function() { /* ... */ })();

Or even more…

(function(num) { /* ... */ }).apply(window, [1]);
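That second pattern – the immediately-invoked function expression (IIFE) – is usually about scope: variables declared inside stay private instead of leaking into the global object. A small sketch:

```javascript
// An IIFE creates a private scope; `count` is invisible outside it.
var counter = (function () {
  var count = 0;
  return function () {
    count += 1;
    return count;
  };
})();
```

Calling `counter()` repeatedly increments a value that nothing else can touch – quirky syntax, genuinely useful behaviour.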

Some people hate it, some people don’t care – I think it’s a bit too much to say that some people love it?

Regardless, that’s what makes Javascript… well, Javascript. And those are language features that may or may not have helped move Javascript into the top 10 most used languages these days. It can run pretty much anywhere right now and is being used for everything from communicating with low level hardware to web development. It must be doing something right.

And with ES6 just around the corner – ES6 was approved by ECMA International last week – things can only get better. There’s a bunch of good stuff being introduced in the language that will divide opinions everywhere but it’s the next logical step to make a good language great.

To learn more about ES6 – and practice a bit – check out these ES6 Katas and this free e-book.

Thanks for reading, until next time!

NodeJS: The good, the bad and the Javascript – Part 2

Following up on my previous post, NodeJS: The good, the bad and the Javascript – Part 1, let’s now talk a bit about the bad things that may happen when developing NodeJS applications.

The bad

Memory Leaks

Memory leaks – at least to me – are the blackest of black holes. They can be so deeply nested in the application code that, even with a flashlight and night vision goggles, finding them can become a chore.

We had a memory leak issue that haunted us for a couple of weeks. We eventually found the cause, so I’m quite happy to share the story here.

First of all, let me get this out of the way:

Most memory leaks in Javascript can be avoided by properly declaring your variables. Be it within your module file or within your function scope, declaring them at the proper scope level will ensure they are discarded when not needed anymore.

Best of all, you can enable this detection in Mocha every time you run your tests by enabling the --check-leaks option, as I mentioned in my previous post.

However there will be situations where you’re not entirely sure what’s going on and the memory leak is not related to your code or dependencies at all. We had a case that was causing our process to halt at a certain point: the memory would spike up, the CPU would increase dramatically and eventually the process would be killed.

We are using Sequelize to manage our domain model, database connectivity, migrations and the like. It seemed like a good tool: it is quite mature, supports popular database engines and seems to be the go-to solution for people starting with ORMs in NodeJS.

Our model contained multiple joins and nested relationships, which made Sequelize generate a massive join query, which in turn would yield a result set so large that it was completely beyond what the process could handle.

To properly identify the line where the issue was happening, our tech lead used the tool node-heapdump, which writes a snapshot file that can later be inspected in Chrome. The cool thing about this tool is that you can compare dump differences if you have more than one file.
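A sketch of the kind of usage involved (assumes `npm install heapdump`; the require is guarded here so the sketch degrades gracefully where the module is not installed – the filename scheme is my own):

```javascript
// node-heapdump writes V8 heap snapshots that Chrome DevTools can load.
var heapdump;
try {
  heapdump = require('heapdump');
} catch (e) {
  heapdump = null; // module not installed – sketch becomes a no-op
}

function writeSnapshot() {
  if (!heapdump) { return null; }
  var file = '/tmp/heap-' + Date.now() + '.heapsnapshot';
  heapdump.writeSnapshot(file); // open this file in Chrome DevTools
  return file;
}
```

Take one snapshot before and one after the suspect operation, then use the Comparison view in Chrome to see which objects grew.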

Eventually we wrote some code around the issue to get the queries executed the way we wanted but, after some research, we felt that Sequelize was not really the right choice.

Although it provides strict modelling, the way the queries are constructed doesn’t really scream performance. BookshelfJS, on the other hand, executes the query the way we expect it to be executed: one at a time for each model object, using the result of the previous query as input for the next one. Unfortunately it lacks modelling, forcing us to rely on other tools to validate the JSON object being inserted.

To find out more about memory leaks I would strongly suggest reading the excellent 4-part article on profiling Node.JS applications by Will Villanueva.

Dependency Injection

Angular brought the wonderful world of dependency injection into Javascript. It was quite easy to follow the framework rules and write tests against your controllers. With NodeJS things don’t go so smoothly, but you still have some good ways around it.

Every module you write can be required by any other module. The main issue with this is that the path of the module you are requiring is relative to the module you’re writing, thus leading to situations like require(“../../../../../myModule”), which is not only ugly but pretty confusing.

In my googling around I found that Bran van der Meer has already solved this issue in a multitude of ways. My favourite from the list is the last one: using a global wrapper, which goes like this (credit to @a-ignatov-parc):

global.rootRequire = function(name) {
    return require(__dirname + '/' + name);
};
Then use it like this:

var dependency = rootRequire('app/models/dependency');

When it comes to testing, things actually get worse because you don’t want to test the required modules. For that you can use a tool named rewire, which allows you to redefine a required module’s functions. It’s quite handy and will enable you to do your unit testing but, depending on the number of dependencies your module has, you may find yourself rewriting so many functions that your unit test will be as long as the module you are testing.

At that point you should ask yourself: should I really continue down that path?

Sometimes you do, but sometimes all you need is a little refactoring. Pull the logic out and test it individually: you will end up with smaller, more maintainable modules and will be able to mock things at the appropriate level without ending up with a test that is pure mock.

Another approach to dependencies is to bend the rules a bit and do things differently: instead of requiring something at the module level, you can pass that dependency in on the method call. Tests may not change much, but at least you won’t need to rewire anything and you’ll have a better test for your module and clearer stubs.
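A small sketch of what I mean – `userStore` here is a hypothetical collaborator that would otherwise be require()d at the top of the module:

```javascript
// Accept the collaborator as an argument instead of require()-ing it.
function greetUser(userStore, id) {
  var user = userStore.findById(id);
  return 'Hello, ' + user.name;
}

// In a test there is nothing to rewire – just hand in a stub:
var stubStore = {
  findById: function () {
    return { name: 'Ada' };
  }
};
```

Now `greetUser(stubStore, 42)` exercises your logic against the stub, and the production caller passes the real store.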

Dependency Management

package.json is where your dependencies live. You have dev dependencies, peer dependencies, optional dependencies… the lot. It works pretty well until you decide to blow away your node_modules folder and install everything again, and suddenly things don’t work so well.

The issue is related to locking your dependencies down. Things move quickly in the Node world and, in the space of 4 months, Sequelize went from version 2 to version 3 and multiple minor revisions. That broke a few things and we had to find a way to lock stuff down.

npm-shrinkwrap is what you want for that. It generates a new file that contains the versions installed in your node_modules folder and, when you execute npm install again, it will install from the shrinkwrap file and not from package.json.

It’s got a few gotchas though: installing and removing dependencies becomes a 3-step process, as you have to install the dependency – saving to package.json – then generate the shrinkwrap file and commit the new file.
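The 3-step dance looks something like this (package name hypothetical):

```shell
npm install --save some-package    # 1. install, saving to package.json
npm shrinkwrap                     # 2. regenerate npm-shrinkwrap.json
git add package.json npm-shrinkwrap.json && git commit   # 3. commit both
```

Skip step 2 and the next clean install silently ignores your change.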

If you have anything extra in your node_modules folder, shrinkwrap will complain and you will have to start from the top again.

If there are dependencies missing from your node_modules folder, shrinkwrap will complain and you will have to start from the top again.

With shrinkwrap you don’t have to commit your node_modules folder but if you ever have to do that for whatever reason, a good way to keep your node_modules folder free from junk is ModClean. It strips out all useless files like readmes, tests, examples, build files, etc. Worth a look.

And since we are on the topic of dependencies, native dependencies are quite a nuisance. What runs beautifully on your Mac won’t on Linux distributions, because you’re missing development libraries. And if you want to run on Windows, be prepared for a world of pain as you will have to install .NET and a bunch of other libraries that may be needed by your project.

A good piece of advice is to be very careful about the dependencies you pick: native dependencies will run faster but they need extra libraries. Pure Javascript solutions will (very likely) run slower, but you don’t have to concern yourself with operating systems.

Too much out there

Finally, the last downside: there’s just too much out there. It’s not that hard to pick, as you just need to make sure that whatever you choose is constantly maintained and has good backing but, sometimes, small libraries written by “unknowns” are very good and do a better job.

One trick is to have clear criteria: recently maintained dependencies have done the trick for us most of the time, but they also broke some things when they were not backwards compatible. Well used libraries like Express and Restify were also very good, with great community backing.

Ultimately it will take some installing and uninstalling to clearly identify what works and what doesn’t, but don’t be discouraged by that: sometimes hidden gems are worth the effort.

NodeJS: The good, the bad and the Javascript – Part 1

I like Javascript. Quirks and all.

It’s a frustrating language that allows for a quick feedback loop and is so flexible that you can bend it 270 degrees and still achieve the same result.

I call it frustrating because, no matter how much effort you put into making your code look nice and readable, you will have to make way for the quirkiness of the language so it can do what you want it to do.

But I like Javascript. And I like NodeJS. Not because it’s the new kid on the block but because it is, indeed, very cool. With its quick feedback loop and thousands of packages available, you can find a solution for pretty much anything you need.

I’ve been in a team where we’ve been “Noding” for the last four months, doing a multitude of different things in order to deliver a few micro services. During this time we had to map domain models, manage memory leak issues, pipe data from third party services and ensure that everything is deployed in the cloud.

I thought it would be useful to collate some information from these last four months into a blog post. Or a small series about it. That’s why I’m going to split this into 3 parts: The Good, The Bad and The Javascript.

As it turns out there’s so much stuff out there that this post became some sort of “tools to keep on the back of your mind when developing in NodeJS” kinda blog post. Hope you find it useful.

The good


Blazing fast: as developers we want our unit tests to run fast, our build to take less than a minute and our coffees to stay hot the whole morning.

The feedback loop from NodeJS is a developer’s dream: we have a couple of projects, each using different frameworks, and our average build time is about a minute on our build box:

  • Checkout from repo
  • Run code quality checks
  • Run unit tests
  • Run code coverage
  • Run UI tests
  • Run acceptance tests
  • Zip it up

In terms of processing speed, being a non-blocking, single-threaded, async process makes the difference. There are tons of benchmarks of NodeJS against J2EE, Rails and even io.js. Suffice it to say that an Express app can easily handle 500+ requests per second with an average response time of 150 milliseconds. Hell, even Walmart is running on NodeJS, serving millions of requests.

What is interesting is the fact that io.js, after being integrated into the Node Foundation, is now about to merge all of its performance goodness back into Node. The whole journey seemed a bit dubious at first but now it feels like it was necessary: without the split things probably wouldn’t have moved as fast and all the goodies introduced into io.js would have taken a lot longer to make it into Node itself.


Within our team we had an initial discussion about what would be most beneficial: using npm scripts or writing our own bash scripts to execute the tasks we needed. As it turns out you can do both – and we ended up using both. You can even write a JS file, put #!/usr/bin/env node at the top and run it like it’s a bash file.

What I like about the Node environment is this flexibility. Npm in particular allows you to hook any kind of command into your npm scripts section. It is so flexible that a strategy was outlined by Keith Cirkel late last year to replace build tools in favour of npm scripts. I tend to agree with many things he wrote and I like the approach he proposed. A lot has been written and discussed around Gulp vs Grunt vs npm vs fruit salad… and, ultimately, projects and their needs evolve, so it suffices to say that no single approach is the right one in the Node world.
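A sketch of what mixing the two approaches looks like in package.json (script names and the build script path are illustrative, not ours):

```json
{
  "scripts": {
    "lint": "jshint lib test",
    "test": "mocha",
    "coverage": "istanbul cover _mocha",
    "build": "./scripts/build.sh"
  }
}
```

Each entry runs with `npm run <name>`, and `npm test` is the shorthand for the `test` script – plain CLI tools and your own bash scripts sit side by side.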

But the ultimate flexibility is the ability to have dependencies that rely on native libraries. You can add them to your package.json and npm will automatically compile them for you. Cool, huh?

Recently a new dependency showed up for building native libraries: cmake-js. It relies on the very popular CMake tool instead of the outdated gyp and provides a much better build environment – gyp relies on old software and Google is moving away from it.

There’s a catch, though: native development libraries have to be installed, and they will cause your build to fail if they are not available. Plus they do take some time to compile, so you probably want to cache them.

CPU & Memory consumption

Well, it really depends on what you do. As far as CPU goes, running a clustered web server (such as Express) actually takes very little. As I mentioned earlier, we have different frameworks in our architecture and we are deploying all of them on micro instances in AWS, as what they provide is more than enough for the Express apps to run in. As a matter of fact they are so snappy that upgrading them right now would be a waste of resources – and money.

Sure some of them connect to databases. Sure some of them need a bit more grunt. But I’m not talking about Tomcat here nor even Jetty. Running a micro service in Node is actually very inexpensive as far as CPU goes and as long as your service continues to run in a stateless fashion you will be fine.

Memory-wise it’s about the same. We had to build a data slurper that connects to a third party service to retrieve and transform their records into our model. We are not talking about much – about 10 thousand records – but how you build it does make a difference. Using promises proved very effective, as we could process the incoming records in a pipe fashion, making it easy to hook in maps, transformers, injectors and everything else while keeping memory consumption low and CPU usage below 20%.

There are tools to help you manage all this, though. PM2 is a great example: it provides a bunch of goodies, but one of the best is the ability to monitor your cluster – how much memory and CPU it is consuming – while keeping the terminal free for you. The way it manages the Node processes is truly great. Recently they even released a new tool named pm2-webshell. Just do yourself a favour and check it out.

As with anything just be careful on which tools you pick to run your job as you may end up with CPU spikes as I’ll explain later on.


Mocha is the king of NodeJS testing. Very similar to rspec in terms of structure (with before all and after all hooks, as well as context) and powered by a multitude of testing extensions, you can’t go wrong with it.

Make sure you explore Mocha: you can check for memory leaks, use different reporters and set up your test environment before loading any test by loading plugins. A sample options file would be something like this (mocha --help will get you the answers for the parameters below):

-r test/support/testPrepare

Tools like Chai and Sinon are a must have in terms of testing:

  • Chai, a collection of assertion libraries, provides so many goodies that you hardly have to write your own. In fact it’s so complete that even writing unit tests becomes fun again.
  • Sinon provides stubbing, mocking, spying and fake timers. It’s a powerful library which I tend to use very often, but there’s always some confusion as to how we should use it: mocks and stubs are often confused, leading to misuse. Verification of mocks is also a bit daunting – very Mockito-like – in that we set everything up first, then verify our mock when executing our it() block.
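A hand-rolled sketch of the distinction that Sinon formalises – a stub just supplies canned answers, while a mock also records an expectation that you verify afterwards (these helpers are illustrative, not Sinon’s API):

```javascript
// Stub: returns a canned value, remembers nothing.
function makeStub(value) {
  return function () { return value; };
}

// Mock: returns a canned value AND carries an expectation to verify.
function makeMock(value) {
  var calls = 0;
  var fn = function () {
    calls += 1;
    return value;
  };
  fn.verifyCalledOnce = function () {
    if (calls !== 1) {
      throw new Error('expected exactly 1 call, got ' + calls);
    }
  };
  return fn;
}
```

If your test only needs a canned answer, reach for the stub; only use a mock when the interaction itself is what you are asserting.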

Note on Sinon: they are removing mock when version 2 comes along, so update your tests, people!

As opposed to Jasmine, where you only have spies and pretty much everything comes out of the box, I believe Mocha is a lot more flexible: integrating third party extensions such as Sinon and Chai is easy, and you also get extra hooks to better structure your tests.

As far as testing micro services goes, we also had to write some acceptance tests and, these being micro services, there are integration points everywhere. For this, a little tool called nock will help you a lot: it allows you to mock the requests being executed, and you can manipulate them as well. The only drawback I found is that you can only mock one endpoint: if you have two integration points you will have to filter the scope and allow requests through with a regular expression. Kinda fiddly, but it works.

Also with testing, you probably want to run code quality tools covering coverage and style. For code coverage you can rely on Istanbul, which is very flexible and – as you’d probably expect – allows you to annotate your code in areas where you don’t want coverage measured, exclude files or directories, etc.

JSHint and JSCS are a given to keep your code consistent, and you can configure them as you see fit. Hint: specify your rules from the start; don’t wait. When you lint your code it will give you the defaults, but the defaults seem too permissive – at least to me. A good JSHint file will include a cyclomatic complexity limit and possibly enforce some good practices such as triple equals and mandatory curly braces.
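A .jshintrc along those lines might look like this – the option names are real JSHint options, the values are just a starting point:

```json
{
  "eqeqeq": true,
  "curly": true,
  "maxcomplexity": 6,
  "undef": true,
  "unused": true
}
```

`eqeqeq` enforces triple equals, `curly` the mandatory braces, and `maxcomplexity` caps cyclomatic complexity per function.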

Run, kill, reload, debug … automagically

Yes, how cool is it that you can just change a file and your changes are automagically live? And if your process dies, what about bringing it back up without the hassle of having to restart it yourself? Cool, huh?

Node provides a few ways to do that and they are all easy, installable dependencies.

A lot of people prefer forever, but we found that forever doesn’t actually do a good job in terms of memory management. If you have the auto reload function enabled and you hit a compilation error, it will eat your CPU. In fact everything will become so slow that even switching between windows will be a pain.

After some research we found that pm2 is a much better alternative. It provides better memory management and better admin tools – which helps a lot when running in cluster mode. Plus it’s in active development whereas forever’s last commit was about 4 months ago.

Lastly, for debugging simply use node-debug. It allows you to use the dev tools in your browser to inspect variables, fiddle with the console, etc. I’ve had some experiences where it was not so accurate as far as line numbers go, but after updating the version (installed globally) things stabilised.

That’s it for the good parts! Stay tuned for the bad parts. Until then, cheerio!


Part Two is here! Let’s talk about bad things ;-)

A thought on code readability

If you are asked the question …

What are our coding standards?

… how would you react?

  1. Roll your eyes and say nothing
  2. Roll your eyes and mumble something
  3. Respond with something that doesn’t answer the question
  4. Provide an appropriate answer
  5. Shut up

Over the last couple of years I’ve been in environments where a lot of developers would react according to options one or two, a handful according to option three, and most of them would go with five.

Why is that? Simple: global conventions. Nowadays devs simply “know” what the conventions are: how to name variables, how to name files, how to write tests. It’s just there in your mind because you’ve probably seen it so many times in other projects that you get used to the way things are.

Say you’re writing Javascript: you usually use 2 spaces instead of 4. Writing HTML you would use 4 spaces, same as CSS.

Naming variables in Ruby, for instance, you would use underscores, whereas in Java we would use camel case.

What’s interesting is that although those conventions are used broadly, every developer has their own style, and that’s what causes discussion. And that’s what I’m challenging, because it affects readability.

Challenging the convention

Variable and file names are important, but not as important as your code’s readability. How you structure your file and make it pleasant to read makes a hell of a lot of difference in understanding code written by others – as well as your code when read by others.

We all know that when writing classes in Java or Ruby – or even Javascript files – we want such classes to be small enough to fit in one screen. But how small is that screen? Is that your screen or the screen of your team mate? What if you code in font size 14 and your team mate uses font size 8?

Sure, these are variables that can be amended by personal preference, but reading code is no different from reading a book: the better it’s written, the more enjoyable you will find it, and a better understanding will derive from that.

I’m not talking about writing code that is beautifully engineered. I’m talking about code that is beautiful to look at and makes you want to read it to understand the brilliance behind it. Even if there’s no brilliance, at least you will look at it and understand that statement A is followed by statements B and C, and that together they create a condition for whether or not to execute statement D.

Take this simple piece of Javascript code:

var fetchProject = function(id) {
  return ProjectDao.find({ where: { id: id }, include: [TaskDao, PersonDao, OrganisationDao] });
};

The main line in this function is 93 characters long, discarding spaces and the semicolon. It falls just inside my acceptable limit of 100 characters per line, still reads OK and I understand what is going on.

However I would modify this line to read like this:

var fetchProject = function (projectId) {
  return ProjectDao.find({
    where: { id: projectId },
    include: [ TaskDao, PersonDao, OrganisationDao ]
  });
};
Or even better, like this:

function fetchProject (projectId) {
  var opts = {
    where: { id: projectId },
    include: [ TaskDao, PersonDao, OrganisationDao ]
  };

  return ProjectDao.find(opts);
}

Why? A couple of things:

  • Having spaces between delimiters makes it easier on the eye
  • There’s a clear separation of arguments and function call
  • Variables are named appropriately
  • Blank lines, again, make it easier to comprehend

Although the very first example was an inline example – and yes, there are places for them – overuse is simply not good. It’s like trying to read Lorem ipsum. Take this sample:

ProjectSubTypeDao.create({ subType: subType }).then(function(listingSubType) { iterationDone(); }).done();

What is going on here? If you find yourself having to go back to the beginning of the line to understand the statement, then it’s a fail – I had to go back. More than once.
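For contrast, here is the same chain reflowed, one step per line. The DAO is stubbed here so the sketch stands alone (the real one comes from the ORM, and I’ve dropped the Q-specific .done() since the stub returns a native promise):

```javascript
// Stand-in for the real ORM object, just for this sketch:
var ProjectSubTypeDao = {
  create: function (attrs) { return Promise.resolve(attrs); }
};
var iterationDone = function () {};

// One step per line – the eye can follow the flow top to bottom:
ProjectSubTypeDao
  .create({ subType: 'demo' })
  .then(function () {
    iterationDone();
  });
```

Same behaviour, but now each link in the chain is visible at a glance.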

I wonder if sometimes we are sacrificing readability for the sake of writing “concise” code or – worse – smart ass code. I remember being in a Clojure training once where we had to write a couple of exercises. I managed one or two given the limited time we had and was feeling pretty proud of myself. I showed the instructor the solution to my first problem and I get this in return:

Good but my solution to this is a one liner.

OK. Brilliant. You’re a genius. Can anyone else comprehend what you’ve written?

Truth is, a lot of people can, but that doesn’t mean they should or would. And yes, there’s a place for this sort of code and, if everyone in the team is on the same page, I would even dare say go for it. But where is the line of readability? Is it so thin that it can be ignored? I don’t think so.

I was having a discussion recently with a coworker about programmer laziness and how it affects our code writing. The main point was that programmers choose shortcuts or “smart ass code” that is kinda complex to understand in favour of “just add this other string to the array and the code will do the rest”.

Now I love that in frameworks, I really do. The more boilerplate code frameworks can absorb, the better. I’m just not sure such practice is to be encouraged when writing business applications using those frameworks just so we can be more efficient. It will save time for sure, no doubt about that, but even if it’s tested properly and all possible permutations are covered, no one but the original authors will deeply understand the purpose of such complicated code.

It goes back to my point of “conciseness over readability”. What are we sacrificing there? As a team we have a responsibility to tell each other that an approach is or is not ideal, that a code structure can be improved in such and such ways, and that coding standards are an important thing.

Without coding standards a team, however tightly bound, can fall into disarray where each member starts using what they think is the best approach at the time. I find it quite weird that people don’t talk much about this sort of thing – it’s just accepted. And I’m guilty as well, as I’ve accepted things too easily to say otherwise.

What happened? Did you kill yourself because you didn’t say anything?

Well, of course not, but I was rather frustrated with myself for not speaking my mind. A lot of people I know grow used to ways of doing things in one language and, when they switch languages or environments, the conventions don’t switch. They stick around.

I find that interesting because some conventions that are carried over are good in a new environment, but others simply don’t have a place. They feel alien. And most of the time team members are OK with a mixed bag of conventions.

Take CSS for example. Do you use camel case or dashes when naming classes? Why do we see dashes everywhere? Hint: it has to do with the SHIFT key.

Is that efficiency or laziness? I guess we would have answers both ways however we are not sacrificing readability here. We are using what our hands feel more comfortable with when our brains are “wired into the problem”.

The same thing applies to Java or Ruby or Clojure. Camel case, Hungarian notation, using or not using underscores (language allowing)… things differ everywhere. That doesn’t mean we should not use them; it means we should talk about them and make decisions about code readability consciously, as a team.

So the next time someone asks you the question “What are your coding standards?” don’t push them aside. Stop and think if you really have one. Talk to your team, make sure everyone understands where you’re coming from.

That’s me for the day. Cheerio!

To grid or not to grid?

The most common argument I hear when someone is starting a new web project is:

Just use Twitter Bootstrap!

It seems to be a very reasonable argument: after all Bootstrap provides a very good grid system, good typography, helper classes, responsive elements… really, the question should be:

Why are you not using Bootstrap?

That question got me thinking about when it actually is reasonable to use it – we know that we can use it unreasonably everywhere. Does it really suit every website?

The TL;DR answer to that is a resounding no.

Recently I got involved in a new web project: green field, full flexibility. The first thought was really to slap Bootstrap in, make use of its grid system and responsive utilities and move along.

Boy, oh boy, was I wrong or what?

Spike Work

We start every project with some spike work and this was no different: setting up Bootstrap, styling the internal grid elements, making sure they respond correctly and…

Hold on, what’s this weird responsive breakdown from Bootstrap? I don’t want my iPhone 3GS to look like an iPad in portrait mode.

What about those grid classes for device categories? I understand the purpose but there’s too much “fluff” going on in there.

The spike work proved a success because we could challenge the original idea and set some boundaries as to what benefit Bootstrap was actually giving us. We didn’t have a complex structure, and having a large CSS file coming from Bootstrap would just complicate maintenance – plus the overrides, as we all do.

Going back a few years, I remembered a project I did at Deakin University – the new Deakin Wordly website – when Bootstrap was just becoming popular. We decided to look into it and try to adopt as much as we could. As it turned out, we used a customised version of Bootstrap’s typography and rolled our own grid.

Having that sort of experience is very valuable because you understand that not everything can rely on a single framework – it may be great for some things but, for others, the level of customisation is so high that we might as well write our own thing.

Defining the project

With the spike out of the way and with a design draft we could work with we started to break down the page in sections. What we really needed was:

  • A header
  • A sidebar
  • The main content (which is a map)

The design would also cater for three responsive breaks: mobile phones up to 640 pixels, tablets up to 768 pixels and desktops – anything above 769 pixels.
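Rolling your own version of those three breaks is genuinely small. A sketch in SCSS (the selector and widths are illustrative only; the breakpoints are the ones from the design):

```scss
// Breakpoints from the design:
$phone-max:  640px;
$tablet-max: 768px;

.sidebar {
  width: 30%;  // desktop default, anything above 769px

  @media (max-width: $tablet-max) { width: 40%; }
  @media (max-width: $phone-max)  { width: 100%; }
}
```

A handful of variables and media queries covers what we actually needed, with no framework overrides to maintain.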

So I hear you asking…

Could we have used the grid from Bootstrap?

Yes, we could.


Could we have used the responsive abilities from Bootstrap?

Customising them, yes, we could.


So why the heck did you decide to roll your own thing?

Simple: Bootstrap is overwhelmingly complex for this sort of project.

Recently I wrote about using the right tool for the right job and, in this case, Bootstrap wasn’t it. We would end up with a framework so overly baked that it’s completely distasteful – plus it would be hard to work with.

Note: I don’t know about you but I hate noise in the codebase. If something is there it should be there for a reason and not because it’s part of a framework – it only causes confusion and makes things harder to find.

As you read above, the website is simple – the biggest piece of work would be drawing on the map. Working with other members of the team, we wrote a very simple framework that was not only semantically correct but also easy to maintain and – most importantly – extensible.

The Javascript piece is so small that we decided not to use Angular and opted for jQuery + Hogan.js for some UI manipulation. This way we ensured there wouldn’t be any problems on mobile phones and that the bandwidth consumed would be minimal.
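To give an idea of how little machinery that takes, here is a standalone, Hogan-style sketch. The real flow used Hogan.compile(template).render(data) and jQuery’s .html() to inject the result; this minimal render function is mine, purely to show the idea:

```javascript
// Minimal mustache-style interpolation, in the spirit of Hogan.js.
// Illustrative only – the real app used Hogan.compile/render + jQuery.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return key in data ? data[key] : '';
  });
}

var html = render('<li class="poi">{{name}} ({{distance}} km)</li>',
                  { name: 'Town Hall', distance: 1.2 });
// → '<li class="poi">Town Hall (1.2 km)</li>'
```

With jQuery in the page, the result would then simply be dropped in with something like `$('#sidebar').html(html)`.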

For styling, SASS was the choice: simple and efficient, it allows for organised code and is easy enough to pick up – even if you have no experience with CSS preprocessors, they are a breeze to get going with.
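“Organised code” here mostly means variables and partials. A hedged sketch – the file names are illustrative, not the project’s actual layout:

```scss
// main.scss – each section of the page gets its own partial
@import 'variables';   // colours, breakpoints
@import 'header';
@import 'sidebar';
@import 'map';

// _variables.scss
$brand-colour: #2a7ae2;

// _sidebar.scss – nesting keeps related rules together
.sidebar {
  background: $brand-colour;
  a { text-decoration: none; }
}
```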

On top of all that, for a (very) quick feedback loop, Gulp completes the picture, providing us the necessary preprocessing tools, live-reload capabilities and great flexibility for packaging.
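For reference, a minimal gulpfile along those lines might look like the following. The plugins gulp-sass and gulp-livereload are the common choices for this setup; treat this as a sketch of the approach, not our actual build file:

```javascript
// gulpfile.js – illustrative sketch using common plugins
var gulp = require('gulp');
var sass = require('gulp-sass');
var livereload = require('gulp-livereload');

gulp.task('styles', function () {
  return gulp.src('src/scss/**/*.scss')
    .pipe(sass())                 // compile SASS to CSS
    .pipe(gulp.dest('dist/css'))
    .pipe(livereload());          // push the change to the browser
});

gulp.task('watch', ['styles'], function () {
  livereload.listen();
  gulp.watch('src/scss/**/*.scss', ['styles']);
});
```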

Final words

We live in a very fast-paced world where technology dictates how we live. But that is no reason to succumb to its power and use whatever the industry throws at us.

Bootstrap is a great tool that provides quick prototyping and solution validation, thus adding great value through quick feedback. But care must be taken before assuming it can graduate from a POC tool to a production-ready framework.

Every project is different, and it is only by analysing what you’re trying to solve and questioning your decisions that you will really succeed in delivering what you aim for – sometimes what you need is a box cutter instead of scissors.

Till next time!

Something about tools

One thing that always fascinated me is how people invent things. More precisely things to help us make other things. Yes, I’m talking about tools.

Humans have been using tools for at least 2.5 million years, and since then we have been evolving them to our needs, adapting them and creating new ones – each with a different purpose or level of refinement to give us more control over our tasks.

But one thing that every tool provides, independent of its original purpose or refinement, is productivity: every tool allows us to execute our tasks faster – maybe not better, but that depends on the skill of the handler and the quality of the tool itself.

Being productive in the software industry is kind of a big deal: deadlines are never as long as we need, and sometimes we have to go out of our way to do tasks that will assist the business somehow but are not part of our core skill set. For those we have to find the appropriate tool for the job – you don’t want to use Excel for everything, for god’s sake!

I will never forget what my Project Management tutor said to the class:

Know your tools.

And that sounded very right to me: I knew that to really do my job well, apart from knowing what I needed from a conceptual perspective, I needed to know the tool I was going to use to execute that conceptual task in an effective way.

Nowadays my tooling needs have evolved quite substantially: I don’t use one IDE anymore and I try to understand what the tool is best for instead of trying to use one thing for every little piece of development I do.

But the IDE is just one part of it – like Batman, we all need a utility belt.

I expanded my set of tools to browser extensions and, more recently, OS extensions. In the quest for efficiency, I know my machine like I know my house and can navigate it easily without a mouse while staying very productive.

Learn your most used tool first

Before jumping in and installing every second piece of software you see around, take time to understand the tools you currently have. The most important one is not your IDE. It is your operating system. Be it Linux, Windows or Mac OS, be sure you can navigate it without a mouse.

Challenge yourself to learn how to open applications, minimise them, maximise them and swap between them. But most importantly, how to transfer data from one application to the other – yes, copy and paste. Do you know the shortcuts? Everybody knows.

From using your OS well, new needs will arise. For instance, I use Alfred for productivity – Spotlight is just not enough, although the new version in Yosemite is promising – and I extend its functionality by installing several workflow plugins that allow me to be even more productive.

I hardly use the calculator app anymore – it’s all in Alfred.

If I need to know my external IP address I can simply issue a few keystrokes and it queries a website that provides me this information.

Launching VMs has become a piece of cake too – I don’t even have to open VirtualBox anymore.

Dive deep into your second most used tool

As most software engineers do, I also use an IDE to be more productive. When I was first introduced to IntelliJ I thought…

OK, new tool. Whatever, sticking to Eclipse. I know it better.

And I really did. That was 4 years ago. Now I can’t think of a better IDE than IntelliJ. I guess it became even better after I learned my way around it and started customising the keyboard shortcuts to my own needs.

Once you get that comfortable with a tool, it’s hard to go back – but it takes time and, most importantly, dedication to learn and improve.

For instance, one of the things I did was create code snippets for the things I use the most. There are plugins that can be installed which come pre-packed with some useful snippets but, on a day-to-day basis, you want to make sure you stay agile and remember that stuff properly – i.e. that the shortcuts and snippets make sense to you.

But I don’t stick to only one IDE. I also use Sublime Text for Javascript and Ruby development, which gives me the agility I need in those environments – opening up WebStorm just to develop in Javascript is too much of a muchness.

And that’s where things get interesting. Plugins! They can improve your agility quite considerably and some of them are just too handy! For instance…

I’m currently working on a project where I can’t launch the app from the IDE, for many reasons. TDD is also difficult, as dependencies are wired using new Object() most of the time.


Recently we had to write a Regex to replace contents in a file, and the tools we usually use rely on the Javascript implementation of Regex.


We installed the Java REPL plugin in IntelliJ, and that was a great benefit, allowing us to test the regex in a Java-like environment.

The important thing to remember about plugins: don’t overdo them. The fact that a plugin does something really cool but not particularly useful for you at the time is NOT a good reason to install it.

Link your apps

Another powerful feature that a good tool should have is linkage with other apps: the ability to transfer data or use data from another app as arguments to the app you want to use is a key factor in efficiency.

I mentioned Alfred above but can’t finish this post without mentioning Dash.

Dash is an “API documentation browser” and can integrate with pretty much anything. Finding the appropriate documentation for an API takes a few seconds and, if that’s not enough, it also gives you “cheat sheets” for VI, Git and Capybara, among others.

I have linked IntelliJ, Sublime and Alfred to it and I’m a happy dev: I can just hit CMD + D and Dash is presented with the exact match for the API I have highlighted in my IDE – almost magical.

Use the web

There are tons of tools available online, from CSS generators to Regex debuggers that will allow you to be more productive without spending too much time trying to find a solution for a problem just by yourself.

Also check for browser extensions: Chrome and Firefox are full of them. But the same advice applies – don’t overdo them: you will overload your browser, it will eventually slow down, and you won’t know what to do with so much stuff installed.

Final words

I believe, as professionals, we should not reinvent the wheel but use the power of the community to our benefit, to learn and to apply. It’s truly a great feeling when we get that Regex right or implement that complicated RFC all by ourselves – yup, we do get very proud – but sometimes somebody else has already implemented it, and all we need to do is make sure we understand it, so that if a change comes we know what to do.

Don’t be discouraged from trying. Learning is a process that takes time, and getting used to new things takes even more, but the rewards are the greatest: you feel empowered, in control and always welcoming of the new.

Until next time!