To grid or not to grid?

The most common argument I hear when someone is starting a new web project is:

Just use Twitter Bootstrap!

It seems to be a very reasonable argument: after all, Bootstrap provides a very good grid system, good typography, helper classes, responsive elements… really, the question should be:

Why are you not using Bootstrap?

That question got me thinking about when it actually is reasonable to use it – we know we can use it unreasonably everywhere. Does it really suit every website?

The TL;DR answer to that is a resounding no.

Recently I got involved in a new web project: green field, full flexibility. The first thought was to slap Bootstrap in, make use of its grid system and responsive utilities and move along.

Boy, oh boy, was I wrong or what?

Spike Work

We start every project with some spike work and this one was no different: setting up Bootstrap, styling the internal grid elements, making sure they responded correctly and…

Hold on, what’s this weird responsive breakdown from Bootstrap? I don’t want my iPhone 3GS to look like an iPad in portrait mode.

What about those grid classes for device categories? I understand the purpose but there’s too much “fluff” going on in there.

The spike work proved a success because we could challenge the original idea and set some boundaries around what benefit Bootstrap was actually giving us. We didn’t have a complex structure, and a large CSS file coming from Bootstrap would just complicate maintenance – plus the overrides we would inevitably pile on top, as we all do.

Going back a few years, I remembered a project I did at Deakin University – the new Deakin Wordly website – when Bootstrap was just becoming popular. We decided to look into it and try to adopt as much of it as we could. As it turned out, we used a customised version of Bootstrap’s typography and rolled our own grid.

Having that sort of experience is very valuable because you understand that not everything can rely on a single framework – it may be great for some things but, for others, the level of customisation is so high that we might as well write our own thing.

Defining the project

With the spike out of the way and a design draft we could work with, we started to break the page down into sections. What we really needed was:

  • A header
  • A sidebar
  • The main content (which is a map)

The design would also cater for three responsive breakpoints: mobile phones up to 640 pixels, tablets up to 768 pixels and desktops – anything above 768 pixels.

So I hear you asking…

Could we have used the grid from Bootstrap?

Yes, we could.


Could we have used the responsive abilities from Bootstrap?

Customising them, yes, we could.


So why the heck did you decide to roll your own thing?

Simple: Bootstrap is overwhelmingly complex for this sort of project.

Recently I wrote about using the right tool for the right job and, in this case, Bootstrap wasn’t it. We would end up with a framework so over-baked that it becomes distasteful – and hard to work with.

Note: I don’t know about you, but I hate noise in the codebase. If something is there, it should be there for a reason and not because it’s part of a framework – it only causes confusion and makes things harder to find.

As you read above, the website is simple – the biggest piece of work would be drawing on the map. Working with other members of the team, we wrote a very simple framework that was not only semantically correct but also easy to maintain and – most importantly – extendable.

The JavaScript piece is so small that we decided not to use Angular and opted for jQuery + Hogan.js for some UI manipulation. This way we ensured there wouldn’t be any problems with mobile phones and the bandwidth consumed would be minimal.
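
To give an idea of how little code that takes, here is a minimal sketch of the jQuery + Hogan.js approach – the template and the element names are illustrative, not the project’s actual markup:

var template = Hogan.compile('<li class="place">{{name}}</li>');

function renderSidebar(places) {
  // Render one list item per place and drop them into the sidebar
  var html = places.map(function (place) {
    return template.render(place);
  }).join('');
  $('#sidebar ul').html(html);
}

renderSidebar([{ name: 'Melbourne' }, { name: 'Geelong' }]);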

For styling, SASS was the choice: simple and efficient, it allows for organised code and is easy to pick up – even if you have no experience with CSS preprocessors, it’s a breeze to get going.

On top of all that, for a (very) quick feedback loop, Gulp finalises the picture, providing the necessary preprocessor tools, live reload capabilities and great flexibility for packaging.
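
As a rough idea of how small that setup is, here is a gulpfile sketch assuming the gulp-sass and gulp-livereload plugins – the task names and paths are illustrative:

var gulp = require('gulp');
var sass = require('gulp-sass');
var livereload = require('gulp-livereload');

// Compile SASS into CSS and notify the browser of the change
gulp.task('styles', function () {
  return gulp.src('src/styles/**/*.scss')
    .pipe(sass())
    .pipe(gulp.dest('dist/css'))
    .pipe(livereload());
});

// Recompile whenever a stylesheet changes
gulp.task('watch', ['styles'], function () {
  livereload.listen();
  gulp.watch('src/styles/**/*.scss', ['styles']);
});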

Final words

We live in a very fast-paced world where technology dictates how we live. But that’s no reason to succumb to its power and use whatever the industry throws at us.

Bootstrap is a great tool that provides quick prototyping and solution validation, adding great value through quick feedback. But care must be taken in assuming it can graduate from a proof-of-concept tool to a production-ready framework.

Every project is different, and only by analysing what you’re trying to solve and questioning your decisions will you really succeed in delivering what you aim for – sometimes what you need is a box cutter instead of scissors.

Till next time!

Something about tools

One thing that always fascinated me is how people invent things. More precisely things to help us make other things. Yes, I’m talking about tools.

Humans have been using tools as far back as 2.5 million years ago and since then we have been evolving them to our needs, adapting them and creating new ones – each with a different purpose or level of refinement to give us more control over our tasks.

But one thing every tool provides, independent of its original purpose or refinement, is productivity: every tool allows us to execute the tasks we need to execute faster – maybe not better, but that depends on the skill of the handler and the quality of the tool itself.

Being productive in the software industry is kind of a big deal: deadlines are never as long as we need, and sometimes we have to go out of our way to do tasks that assist the business but are not part of our core skill set, so we have to find the appropriate tool for the job – you don’t want to use Excel for everything, for god’s sake!

I’ll never forget what my Project Management tutor said to the class:

Know your tools.

And that sounded very right to me: I knew that to really do my job well, apart from knowing what I needed from a conceptual perspective, I needed to know the tool I was going to use to execute the conceptual task in an effective way.

Nowadays my tooling needs have evolved quite substantially: I don’t use just one IDE anymore, and I try to understand what each tool is best for instead of using one thing for every little piece of development I do.

But the IDE is just one part of it – like Batman, we all need a utility belt.

I have expanded my set of tools to browser extensions and, more recently, OS extensions. In the quest for efficiency I know my machine like I know my house, and I can navigate it easily without a mouse and still be very productive.

Learn your most used tool first

Before jumping in and installing every second piece of software you see around, take time to understand the tools you currently have. The most important one is not your IDE. It is your operating system. Be it Linux, Windows or Mac OS, be sure you can navigate it without a mouse.

Challenge yourself to learn how to open applications, minimise them, maximise them and swap between them. But most importantly, how to transfer data from one application to another – yes, copy and paste. Do you know the shortcuts? Everybody should.

From using your OS well, needs will arise. For instance, I use Alfred for productivity – Spotlight is just not enough, although the new version in Yosemite is promising – and I extend its functionality by installing several workflow plugins that allow me to be even more productive.

I hardly use the calculator app anymore; it’s all in Alfred.

If I need to know my external IP address, I can simply issue a few keystrokes and it queries a website that provides that information.

Launching VMs has become a piece of cake too – I don’t even have to open VirtualBox anymore.

Dive deep into your second most used tool

As most software engineers do, I also use an IDE to be more productive. When I was first introduced to IntelliJ I thought…

OK, new tool. Whatever, sticking to Eclipse. I know it better.

And I really did. That was 4 years ago. Now I can’t think of a better IDE than IntelliJ. I guess it became even better after I learned my way around it and started customising the keyboard shortcuts to my own needs.

Once you get that comfortable with a tool, it’s hard to go back – but it takes time and, most importantly, dedication to learn and improve.

For instance, one of the things I did was create code snippets for the things I use the most. There are plugins that come pre-packed with useful snippets but, on a day-to-day basis, you want to make sure you are agile and remember that stuff properly – i.e. the shortcuts and snippets make sense to you.

But I don’t stick to only one IDE. I also use Sublime Text for JavaScript and Ruby development, which gives me the agility I need for those environments – opening up WebStorm just to develop in JavaScript is overkill.

And that’s where things get interesting. Plugins! They can improve your agility quite considerably and some of them are just too handy! For instance…

I’m currently working on a project where I can’t launch the app from the IDE, for many reasons. TDD is also difficult as dependencies are wired using new Object() most of the time.


Recently we had to write a regex to replace contents in a file, and the tools we usually use are based on the JavaScript implementation of regular expressions.


We installed the Java REPL plugin in IntelliJ and it was of great benefit, allowing us to test the regex in a Java-like environment.

The important thing to remember about plugins: don’t overdo them. The fact that a plugin does something really cool that isn’t particularly useful to you at the time is NOT a good reason to install it.

Link your apps

Another powerful feature a good tool should have is linkage with other apps: the ability to transfer data, or use data from one app as arguments to another, is a key factor in efficiency.

I mentioned Alfred above but can’t finish this post without mentioning Dash.

Dash is an “API documentation browser” that can integrate with pretty much anything. Finding the appropriate documentation for an API takes a few seconds and, if that’s not enough, it also gives you “cheat sheets” for things like VI, Git and Capybara, among others.

I have linked IntelliJ, Sublime and Alfred to it and I’m a happy dev: I can just hit CMD + D and Dash presents the exact match for the API I have highlighted in my IDE – almost magical.

Use the web

There are tons of tools available online, from CSS generators to regex debuggers, that will allow you to be more productive without spending too much time trying to find a solution to a problem all by yourself.

Also check out browser extensions: Chrome and Firefox are full of them. But the same advice applies – don’t overdo it: you will overload your browser, it will eventually slow down and you won’t know what to do with so much stuff installed.

Final words

I believe, as professionals, that we should not reinvent the wheel but use the power of the community to our benefit, to learn and apply. It’s truly a great feeling when we get that regex right or implement that complicated RFC all by ourselves – yup, we do get very proud – but sometimes somebody else has already implemented it, and all we need to do is make sure we understand it so that, if a change comes, we know what to do.

Don’t be discouraged from trying. Learning is a process that takes time, and getting used to new things takes even longer, but the rewards are the greatest: you feel empowered, in control and always inviting the new.

Until next time!

iOS Pipeline: how to successfully automate your building process

OK, before anything I must say: this post came about out of frustration at not finding an appropriate solution out there. I know there are a couple of ways the same process can be achieved, but I decided to go about it using a simple process that worked for me.

A word of warning: I’m not a full-time Objective-C developer. I enjoy setting up build pipelines and making sure it all aligns at the end so, if you find any mistakes in this setup, please feel free to leave a comment!

What we are doing

Our goal is to publish an artefact to TestFlight. In order to do that we want to make sure that the app is properly tested and the artefact is built for the appropriate provisioning profile.

In order to do that we don’t want to combine all these steps in one single job: it would take too long and we would not be able to publish artefacts with different configurations – more on that later.

Instead we want to separate all that into three jobs – Build, Test and Publish – with the last job sending the artefact off to TestFlight with a changelog.


About the “Test” phase

We are not talking about unit tests here – or the XCTest framework that comes bundled with every new app you create.

This test phase is about automated UI testing. Be it Calabash, UIAutomation or any other tool you feel comfortable with, the purpose of this phase is to provide a level of confidence by automatically navigating the UI and executing the operations a user would execute.
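
For a flavour of what such a test looks like, here is a tiny UIAutomation sketch – the element names are made up, not from a real app:

// UIAutomation scripts are plain JavaScript run by Instruments
var target = UIATarget.localTarget();
var app = target.frontMostApp();
var window = app.mainWindow();

// Navigate the UI the way a user would
window.buttons()["Login"].tap();
target.delay(1);

// Verify we ended up on the expected screen
if (app.navigationBar().name() === "Welcome") {
  UIALogger.logPass("Login flow works");
} else {
  UIALogger.logFail("Login did not reach the Welcome screen");
}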

Why do it this way

We don’t want multiple targets generating different artefacts with different configurations. What we want is a single xcarchive that can generate multiple artefacts, each with the appropriate configuration for its environment.

Plus we want to generate the xcarchive only once. That’s one of the main reasons for this approach: building an xcarchive generates a file from which we can build IPAs with different configurations applied.

If we generated different artefacts for different targets, we would have to test every target and the build time would escalate considerably.

Finally, the flexibility of having multiple publish jobs is also a plus: leverage one xcarchive and publish it to multiple places.

What you need

First and foremost we need Jenkins installed. If you don’t know it, Jenkins is a Continuous Integration (CI) server that allows different projects to be built – triggered by polling the SCM, on a schedule or manually. Each project to be built is called a Job in Jenkins, and jobs can depend on each other.

Jenkins allows the installation of multiple plugins and the bare bones installation does not come with Git. Nor TestFlight. So we also need some plugins installed in Jenkins for this whole thing to work:

  • Git plugin
  • Git client plugin
  • Github plugin
  • Testflight plugin
    • Make sure you configure that properly under Manage Jenkins > Configure System
  • Build pipeline plugin
  • Delivery pipeline plugin
  • Shared workspace plugin
  • Green balls plugin (just so we don’t get blue balls)

And finally you will need two environment variables configured in Jenkins – you can configure them via Manage Jenkins > Configure System: one for the developer certificate you will use and another one for the provisioning profile.

The environment variables are simple key-value pairs and can be accessed on any job using the syntax ${VAR_NAME}.

Prepping the project

For Jenkins to build your project successfully there are two things that need to be done: share the scheme of your target and create a shell script that will do the heavy lifting for you.

To share your app’s target scheme all you have to do is click on your target next to the Stop button, then on Manage Schemes…


… and, on the screen that comes up, tick the Shared box on the right. If you don’t do that, you can only build your app from Xcode – and that’s not what you want.


The script is a bit more complex: you want to pass in the certificate you will be using and the provisioning profile that will be embedded in the artefact – that’s why we had to create those environment variables beforehand, so we can reference them in the build script.

A sample script that I have used with some degree of success is below. You execute it by issuing the following command in your terminal:

./build.sh <PROVISIONING_PROFILE_UUID> <CERTIFICATE_NAME>

So the script looks like this – thanks to this blog post, with some modifications:

#!/bin/bash
# Usage: ./build.sh <PROVISIONING_PROFILE_UUID> <CERTIFICATE_NAME>

if [ ! "$1" ]; then
  echo "Please provide a provisioning profile UUID."
  exit 1
fi

if [ ! "$2" ]; then
  echo "Please provide a code sign identity."
  exit 1
fi

pp="$1"
csi="$2"

# Create the working directories if they don't exist yet
mkdir -p './JenkinsBuild' './JenkinsArchive' './JenkinsIPAExport'

# Clean every target before building
xcodebuild -alltargets clean

rm -rf JenkinsBuild/*

# Build for the simulator so the Test job can run UI tests against the .app
xcodebuild -target "TARGET_NAME" PROVISIONING_PROFILE="$pp" CONFIGURATION_BUILD_DIR=JenkinsBuild -arch i386 -sdk iphonesimulator7.1

if [ $? -ne 0 ]; then
  echo 'Build failed. Could not compile app.'
  exit 1
fi

rm -rf JenkinsArchive/*

# Archive the app – this xcarchive is the single artefact the Publish job reuses
xcodebuild -scheme "TARGET_NAME" archive PROVISIONING_PROFILE="$pp" CODE_SIGN_IDENTITY="$csi" -archivePath "./JenkinsArchive/blog.xcarchive"

if [ $? -ne 0 ]; then
  echo 'Build failed. Could not generate xcarchive.'
  exit 1
fi

rm -rf JenkinsIPAExport/*

# Export an IPA from the xcarchive, embedding the given provisioning profile
xcodebuild -exportArchive -exportFormat IPA -exportProvisioningProfile "PROVISIONING PROFILE NAME" -archivePath "./JenkinsArchive/blog.xcarchive" -exportPath "./JenkinsIPAExport/blog.ipa"

if [ $? -ne 0 ]; then
  echo 'Build failed. Could not generate IPA.'
  exit 1
fi

You might have noticed some paths being specified in the script, most prefixed with Jenkins. That is because we want to generate artefacts independently of where Xcode puts them – every machine is different, so we might as well manage those paths ourselves.

So JenkinsBuild, JenkinsArchive and JenkinsIPAExport are directories created at build time where the artefacts will live and be referenced by other jobs.

OK, now you’re good to go. Let’s go set up Jenkins.

Jenkins Setup

The Build Job

With all those plugins installed and script ready it’s now time to create our pipeline: let’s create our first job, the Build Job – click on the New Item link on the left, give the job a name and click on Build a free-style software project. A good standard I usually follow for naming jobs is "<APP_NAME> - <STAGE>".

You will then be presented with the configuration screen for the job. Since Jenkins doesn’t provide a way to export job configurations, have a look at the screenshot below – paying attention to the yellow boxes, which is where you will have to make changes.

Note: configuration may differ from project to project. In this case I’m not using a provisioning profile or certificate to sign my artefacts but you could, if you wanted to, use the script we wrote above and pass in the same command as explained before.

Job configuration for the “Build” stage

Good. That’s it for the build phase.

The Test Job

Let’s now build a test phase: go back to the Jenkins home and click on New Item again, give it a name, click on Copy existing Item and provide the name of your Build job – Jenkins has auto-completion so it should pick it up easily.

The configuration will be much the same; we have to execute a few extra things after the build step succeeds, such as archiving the artefact for publishing and publishing the test results, but the most important change is making sure that the Git SHA used in the Build job is the same Git SHA used here.

In order to achieve that, we use the special environment variable $GIT_COMMIT in the Branches to build SCM configuration of the job.

Finally we want to run a different shell script, for testing. This should automatically open the simulator and execute a suite of UI tests using the build that was created:

./runTests.sh ci.js "${BUILD_WORKSPACE}/JenkinsBuild/blog.app"

Have a look at the configuration screenshot below:

Job configuration for the “Test” stage

Great, with that in hand, let’s link the two jobs! To achieve that, go back to your Build job and add the post-build action Trigger parameterized build on other projects. Then type in the name of the Test job you have just created – auto-completion works here too – and click on Add Parameters to select the option Pass-through the Git Commit that was built.

That will create a dependency between the jobs and, if you go back to either job’s home page, you will notice Downstream or Upstream Projects listed. Quite handy!

The Publish Job

But we are not quite done yet: we still need to create our last job, which will publish the artefact to TestFlight. Again, just like the Test job, create a new item by copying the Build job.

A couple of things to notice here:

  • We will be using the artefact created in previous jobs rather than creating a new one; otherwise we would be compiling the whole app all over again
  • At the end of a successful publish we will tag the build so we can go back to it if needed in the future

Again, pay attention to the highlighted boxes as they dictate what has to be done.

Job configuration for the “Publish” stage

And just like with the Build job, we will have to change the Test job to allow it to trigger this one – but there’s a catch!

The catch is that we don’t want to publish every successful build: we want to be able to publish specific builds as we wish, while still leveraging the same Git SHA used in a specific pipeline.

Thus, going back to the Test job, add the post-build action Build other projects (manual step). Then type in the name of the Publish job you have just created – there’s no auto-completion here – and click on Add Parameters to select the option Pass-through the Git Commit that was built.

Great, now you have linked the jobs. But how can you see the pipeline?

Adding a pipeline view

Go to the Jenkins home page and, on the list of jobs, click on the little plus sign next to the All tab. This will prompt you to create a new view.

Give it a name and select the option Build Pipeline View. You will then be presented with the configuration page for the view… all you really have to do is select the initial job (which in our case is the Build job) and the rest is done automatically for you.

Have a look at mine, that’s how I like it – the important bits are highlighted.

View configuration for the pipeline

Final words

As you can see, I’m not using build configurations – mostly because I didn’t bother looking into them in enough detail to see how to set it all up. But you definitely should: I used them previously on another project (already set up by someone else) and they were very handy for defining build names and specific configuration for different environments.

Another thing about this whole process is the automated testing. It uses UIAutomation as the driver and all tests are written in JavaScript – you can read more about setting it all up in this blog post from my buddy Shaun Irvine.

Finally, it’s very likely that you will run into a permission issue. Follow this thread to see how to solve it.

Good luck with it all!

Hacking for the community – RHoK Melbourne

Two days. Six teams. A bit over thirty people. Loads of fun.

Loads of fun. If there’s a combination of words that best describes what the last Random Hacks of Kindness (RHoK) in Melbourne was, that’s it.

But it’s not only about the fun of working on something different and helping a community project or a startup get off the ground or make the world a better place. It’s also about the people: meeting new people, understanding their passions and aspirations, sharing, communicating, bouncing ideas around and making things happen.

To me, that was the spirit of RHoK: make things happen while getting to know awesome people. Create something small or create something big, but carry with you the knowledge that you’re not there for yourself: you’re there for them.

It was with that thought in mind that I joined this RHoK weekend. I went to the information session a month before the event and visited the website for more information on the problems, but couldn’t really decide what to do. I knew I wanted to accomplish something and try some more nodejs.

But before I talk about how the days went by, let me take you through the format of the event.

RHoK Weekend in Melbourne

The whole event runs for an entire weekend: it starts Saturday at 9 AM and goes until the end of the next day, with drinks at the pub.

As you’d expect, it’s a very social event, with people sharing ideas and tech solutions across teams so we can help each other get up and running quickly. Basically, the schedule is like this:

  • Day 1
    • 09.00 – Get set up
    • 09.30 – Welcome and administrivia
    • 09.45 – Choosing problem areas to work on, and teams to work with
    • 11.00 – Start your hacking
    • 19.00 – Dinner (provided)
    • 20.00 – More hacking
  • Day 2
    • 08.00 – Hacking continues
    • 14.00 – Time to down tools
    • 14.30 – All projects get 10 minutes to present their work to all
    • 15.30 – Judges deliberate
    • 16.00 – Awards and prizes
    • 16.30 – Drinks

It’s full on – as you expect – but very rewarding.

The problem I picked

OK, so I wanted something small as I had some time constraints on Saturday and wanted to get something going quickly. The problem I picked was something people were already working on, but some of the team members had moved on and couldn’t contribute to the project anymore.

The problem, already a project, is called Witness King Tides, from Green Cross Australia. Here’s the original problem description. The snippet from the RHoK website is this:

Witness King Tides is an existing website that asks communities around Australia to take photos of the coastline when king tides hit. These photos capture what our coastal communities may look like in the future, as global sea levels rise. Together, the images build a picture of the threat posed by sea level rise across Australia and help track the future impact of climate change.

They needed a backend to handle file uploads and capture some metadata about the photos. Previously, a web client and native iOS and Android apps had been developed to great success, but the lack of a proper backend was causing them harm.

The list of tasks

The solution

So the new guys who joined them – myself included – were very keen on nodejs. We knew it was simple and we knew it would be fun, so it was really just a matter of doing some planning and getting on with it!

Things looked pretty gloomy at the start

We picked expressjs and included some extra libraries to make our lives easier. The original solution relies on Flickr to store the photos and have them organised into albums – manually organised, I must say – and we couldn’t change that.

From there we knew some interaction with Flickr was needed, and it proved, one more time, that OAuth 1.0 sucks. Luckily, there were libraries already built that eased the pain of dealing with OAuth, and we just needed to pull them in.

Note: Flickr itself provides a library which seems to be the way to go at first. But it doesn’t provide an upload feature – which was what we wanted – and it downloads a lot of data when you start using it. To be honest, the only value it adds is that it makes your app a proxy to Flickr, so your clients invoke only one endpoint.

Coming back to the solution: basically, two endpoints were absolutely necessary, /tides and /upload. Here’s a quick description:

GET /tides
Returns the list of tides already mapped by Green Cross Australia

POST /upload
Uploads a photo, streaming the content to Flickr and saving the submitted metadata in MongoDB
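
For illustration, the read side can be as small as the sketch below – an assumed Express route over the official MongoDB driver, with the collection name made up:

var express = require('express');
var router = express.Router();

// `db` is assumed to be a MongoDB connection opened at application start
module.exports = function (db) {
  router.get('/tides', function (req, res) {
    db.collection('tides').find().toArray(function (err, tides) {
      if (err) {
        return res.status(500).json({ error: 'Could not fetch tides' });
      }
      res.json(tides);
    });
  });
  return router;
};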

A small bump appears

So we managed to get the two endpoints up and running on Saturday before I left – which was around 5:30 PM! That was great and I was happy. Andy was sorting out the hosting and another Julián was investigating the Flickr API for querying. Everyone was tuned in!

The next day comes and I get a question from Andy, who developed the iOS app, more or less along these lines:

So I tried to upload a photo using the iOS app and I got an error. Do I have to send a multipart request?

Damn! We had implemented only the web side of things and completely forgotten about iOS. So off we went investigating how to get it done and, as it turns out, it was a walk in the park.

Since we had created a small file that handles uploads – uploader.js – all we had to do was check the content type of the request and invoke the right function in the uploader.

var express = require('express');
var router = express.Router();
var Uploader = require('./uploader');

router.post('/upload', function (req, res) {
  var uploader = new Uploader();
  if (req.get('Content-Type').indexOf('json') >= 0) {
    // handles the JSON payload sent by the web client
    uploader.handleJson(req, res);
  } else {
    // handles the multipart request sent by the iOS app
    uploader.handleMultipart(req, res);
  }
});

And to make life easier we are not even streaming that: we just send a base64 representation of the image in the JSON payload, so all I had to do was convert that into bytes and send it off to Flickr.
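
The conversion itself is a one-liner in node – here is a sketch, with the payload field name being an assumption:

// `body.photo` holds the base64-encoded image sent by the web client
function photoBytesFromJson(body) {
  return new Buffer(body.photo, 'base64');
}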

That bump felt decent-sized at the start, but after about 30 minutes the whole thing was done and everyone was happy.

Heck yeah!!

Deployment

Just a quick note about deployment. Amazon Web Services (AWS) provided some free credit for RHoK projects to host their solutions on.

My experience with AWS is almost zero, but Andy was experienced with it and decided to give Elastic Beanstalk a go – a service very similar to what Heroku provides in terms of “ready-made boxes” for specific platforms. Since we were using Node, and both Heroku and Elastic Beanstalk support it, we might as well use the free credits.

I must say it was the best decision. Beanstalk is not only flexible but also fast, and it doesn’t “go to sleep” like Heroku does with free apps. Sure, we had free credit, but hey, I’m sure the costs will be minimal compared to Heroku.

And setting up the command line tools for automagic deployment was brilliant.

Highly recommended.

So, what did we build?

Our test site is here: http://witnesskingtides.azurewebsites.net/. You can upload photos, check the tides and so on. The whole code is open source, so you can just grab it and run your own if you want.

King Tides API

King Tides Web

King Tides iOS

Jackie did a fantastic job on the UI, which is now responsive and leverages the new endpoints we created.

On using Nodejs

If you haven’t yet, just do it. Learn it and apply it. It’s way too much fun to be pushed aside.

Thanks to all!

The weekend was great: the vibe was way up there, people were engaged and even our problem owner was connected to us from sunny Brisbane.

The RHoK committee did a fantastic job and I’m totally looking forward to the next one already. New problem, existing problem, bring it on!

Frustration

I feel lost in this whole modern world. Not all the time. But more often than I like.

It’s tough to see the values you were taught at a younger age being ripped apart right in front of you, and all you get to do is sit and watch – and hopefully learn that things are not exactly that way anymore.

I feel like an embarrassment, a burden even. Someone that doesn’t add value, that “doesn’t get it”.

It’s tough. I don’t know what to do. Even when I open my doors so all kinds of winds flow through and portraits fall and things get out of order, it feels like the wind never disordered things enough for me to fix anything.

It’s frustrating.

I feel lost, dumb. Out of place and out of touch.

I can’t say anymore that I’m not stressed. I can’t say anymore that “It’s alright, no worries”.

Putting on a brave face every day and facing the consequences of your choices is not easy. Nor is smiling when you’re burning inside.

I guess there are times in our lives when we get tested for all sorts of things; my time is now, and I’m not entirely sure I’m doing a good job.

I don’t want your compliments. I don’t want you to pat me on the head.

I want an ear. Maybe a shoulder.

Can you lend one to me?

About a talk at a university

Recently I volunteered to give a talk at RMIT, and I think it’s worth sharing the experience here since I enjoyed it so much.

I had never done such a thing before and didn’t really know what to expect. I knew the subject and believed I was prepared enough, but couldn’t avoid that “butterflies in your stomach” kind of feeling.

My talk was about Code Refactoring and the usage of Design Patterns to refactor code and make it more maintainable.

As I walked into the university for the first time since arriving in Melbourne, I was anxious to find the room, settle down and write a bit about what I was going to say – mind you, I had the slides ready and the content in my head, but not writing it down would make for a disaster of a lecture.

I arrived early, which gave me time to elaborate the talk up to about half of the slides before the actual lecturer walked in. He greeted me and asked if I wanted to go first as he was very interested in what I had to say – I was supposed to start my lecture after his, and I was interested to hear what he had to say.

Well, guess you can’t plan everything.

Anyway, as I was getting ready I asked him if he had brought what I said I needed – we had emailed each other during the week to clarify things – to which the answer was no.

I had slides ready. And I have a Mac. And, stupidly, I decided to use Keynote to write my slides – why, oh why. And the university, as you might expect, still uses VGA connectors.

Why dear lord? … fades into oblivion in despair …

So, the lecturer told me:

Wait here, talk to the students while I’m away. I’ll be right back.

Funny experience. I didn’t have my slides to support me, and I knew I had to introduce myself and the company I work for and yadda yadda. Actually, it was kind of terrifying being left with an audience that is expecting you to do something… so I had to improvise!

And so I started introducing myself and slowly easing into it. Talking about myself, what I do, what the company I work for does and all that was a very nice way to engage with the students. But not only that: asking them what they knew and what they did was also a way to break the ice and understand where they were at technically.

Some of them actually worked in the industry while others were brand new to all that stuff, which made for a great mix as I could read their faces and tell when I was going too deep into the subject.

When the lecturer came back with what I needed, I felt like I had already developed something with them and was at ease to continue.

My talk went for over an hour. I touched on multiple points of code refactoring, how we did this and that, and what design patterns are good – and not good – for. The students were very receptive to all I had to say and some were really engaged.

To me, the highlight of the talk was when I showed them some code I had worked on previously. The code was so bad that there were giggles and chatter and questions and comments… it was a really cool moment and the atmosphere lifted considerably.

The most rewarding part was seeing the students taking notes and feeling that I had helped them understand that things are not quite as they study them: what universities teach is not really what’s out there and, for as much value as it adds – being the building blocks of their technical expertise – one should always challenge it.

At the end there were some questions, most not really worth mentioning. One, however, was particularly interesting, and it came from the lecturer:

Here at RMIT we teach J2EE using Glassfish and NetBeans. Do you think we should change?

What was interesting about this question is that, five years after I graduated, they are still using the same tools. It was not my place to say yes, but I sort of hinted that in the industry we don’t use such tools: they can be simple, but they don’t provide the flexibility the industry needs.

He gave me a gift and I was away. I had taken up almost his entire class, leaving him 30 minutes for his lecture – and possibly happy students, since I had shortened a lecture and no student likes to be in lectures.

Gift from RMIT

Finally…

I would like to encourage you to do the same. Standing in front of students sharing what you know is a feeling so great it’s hard to explain. You feel good about doing something for someone else, helping people’s knowledge grow and possibly making new friends.

My experience was truly great; I would do it again for sure.

Here is the presentation and below is the video. Have fun!

The fall of TDD?

Over the last couple of days there was a conversation on Twitter involving Uncle Bob Martin, Martin Fowler and David Heinemeier Hansson about Test Driven Development (TDD), the value of feedback and code design, among other things.

I didn’t follow the whole conversation, but some points made me think about how much value TDD actually adds nowadays and how much effort people put into writing tests just for the sake of writing tests, increasing feedback time and possibly over-engineering a system.

Take this argument from Martin Fowler:

Instead of focusing on his argument, let’s focus on the question that his argument answers:

Write good and maintainable software without TDD? How?

TDD is a process, a way of doing things in a certain order to achieve an outcome that will make you feel comfortable at the end. This comfort, as you probably know, comes from having the confidence that every time the tests run, your code is verified and any change that breaks the contract is picked up, prompting whoever ran them to fix the code.
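
As a quick illustration of that process, here is a minimal red-green example in Mocha-style JavaScript – the function and the business rule are made up:

var assert = require('assert');

// Step 1: write the failing test first – it defines the contract
describe('discount', function () {
  it('gives 10% off orders over $100', function () {
    assert.equal(discount(200), 180);
  });
});

// Step 2: write just enough code to make the test pass
function discount(total) {
  return total > 100 ? total * 0.9 : total;
}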

But TDD is not only a process, is it? It’s also a methodology that can be applied or not, depending on how well someone – this someone being us, developers – adapts to it. And, being a methodology, it’s prone to modification.

I believe the argument above is great because TDD is not the whole world. I believe quality code can be written without TDD, but that doesn’t mean you should not test your code. Testing is essential in any application, as it gives us confidence, tells us about business requirements (to a degree) and allows for enhancements.

I was working on a large Java project – as Java projects go, everything is big – with a number of Maven modules and countless tests. Actually, I counted them: 2759 test files. 14K+ tests. That’s right. The whole suite took around 25 minutes to run. Feedback is slow. Enhancements are slow. And that doesn’t even include the behaviour tests which, separated into 10+ modules, took an average of 12 minutes each.

My point is that excessive testing is harmful: you should not test every if branch. You should not test every class. My rule of thumb is usually: if there’s important business logic in there, test it; otherwise, leave it be. And on a system so large, how can you ensure your code has no side effects? You can trust the tests, but what if the tests themselves are not really testing much? What if, on closer inspection, all the tests are really doing is expecting mocks to be called?
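
To make that last point concrete, here is a hypothetical example (using Sinon and Chai) of a test that expects a mock to be called and verifies nothing else – OrderService and its methods are made up:

var sinon = require('sinon');
var expect = require('chai').expect;

it('saves the order', function () {
  var repository = { save: sinon.spy() };
  var service = new OrderService(repository);

  service.placeOrder({ id: 42 });

  // The only assertion is that the mock was invoked – this would still pass
  // if placeOrder mangled the order before saving it
  expect(repository.save.called).to.be.true;
});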

TDD feels a bit harmful to me simply because everyone wants to do it. Just like pair programming, and just like the argument above from Mr Fowler, often ≠ always, and common sense and practicality come before doing anything.

But how can you ensure your design is correct?

Can you with TDD? I don’t think so, not all the time. I can still write testable code without writing a test for it first, and even make sure I’m still following the SOLID principles. Testable code. What is it, really? What makes code testable? Are we talking unit tests only? Testable code, IMHO, means:

  1. Code that is simple to understand
  2. Code that clearly states the business language
  3. Code that can be reused

That does not mean unit tests only: as I mentioned earlier, we only test what is worth testing. We could ensure our code is testable by running behaviour tests. We could say that a record being inserted into the database is a unit test. We could even say that ATDD (Acceptance TDD) is the only way we can move ahead with the project.

Allowing TDD to be the only way you can expand your software design is a trap: you will be stuck in a circle where it provides all the methodology and process to follow, and you will eventually forget to let your imagination run free. David actually tweeted about this:

I believe it’s correct to have a driver for design, but having TDD as the only driver doesn’t sound right. Sometimes even gut feel is the right approach: you can’t measure it, but you feel that things go a certain way and exploring them is the right way to go – follow your instincts and see where they lead you.

And yes, all hail TDD 2.0!

Resources