iOS Pipeline: how to successfully automate your building process

OK, before anything, I must say: this post came about as a response to the frustration of not finding an appropriate solution out there. I know there are a couple of ways the same result can be achieved, but I decided to go with a simple process that worked for me.

A word of warning: I'm not a full-time Objective-C developer. I enjoy setting up build pipelines and making sure it all aligns at the end, so if you find any mistakes in this setup, please feel free to leave a comment!

What we are doing

Our goal is to publish an artefact to TestFlight. In order to do that we want to make sure that the app is properly tested and the artefact is built for the appropriate provisioning profile.

We don't want to combine all these steps into one single job: it would take too long and we would not be able to publish artefacts with different configurations – more on that later.

Instead we want to separate all that into 3 jobs, like below, with the last build sending the artefact off to TestFlight with a changelog.


About the “Test” phase

We are not talking about unit tests here – or the XCTest framework that comes bundled with every new app you create.

This test phase is related to automated UI testing. Be it Calabash, UIAutomation or any other tool you feel comfortable with, the purpose of this phase is to provide a level of confidence by automatically navigating the UI and executing operations the user would execute.

Why do it this way

We don't want multiple targets generating different artefacts with different configurations. What we want is an xcarchive that can generate multiple artefacts with the appropriate configuration for different environments.

Plus we want to generate the xcarchive only once. That's one of the main reasons for this approach: building an xcarchive produces a file from which we can build IPAs with different configurations applied.

If we generated different artefacts for different targets, we would have to test every target and the build time would escalate considerably.

Finally, the flexibility of having multiple publish jobs is also a plus: leverage one xcarchive and publish it to multiple places.

What you need

First and foremost we need Jenkins installed. If you don't know Jenkins, it is a Continuous Integration (CI) server that allows different projects to be built on certain triggers – by polling the SCM, on a pre-defined schedule or manually. Each project to be built is called a Job in Jenkins, and jobs can depend on each other.

Jenkins allows the installation of multiple plugins, and the bare-bones installation supports neither Git nor TestFlight. So we also need some plugins installed in Jenkins for this whole thing to work:

  • Git plugin
  • Git client plugin
  • Github plugin
  • Testflight plugin
    • Make sure you configure that properly under Manage Jenkins > Configure System
  • Build pipeline plugin
  • Delivery pipeline plugin
  • Shared workspace plugin
  • Green balls plugin (just so we don’t get blue balls)

And finally you will need two environment variables configured in Jenkins – you can configure them via Manage Jenkins > Configure System: one for the developer certificate you will use and another one for the provisioning profile.

The environment variables are simple key-value pairs and can be accessed on any job using the syntax ${VAR_NAME}.
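As a quick sketch of what that looks like (the variable names below are hypothetical, not ones you must use – in Jenkins they would come from Manage Jenkins > Configure System rather than export lines):

```shell
# Hypothetical names - in Jenkins these would be configured under
# Manage Jenkins > Configure System, not exported in the job itself
export CODE_SIGN_IDENTITY="iPhone Distribution: ACME Pty Ltd"
export PROVISIONING_PROFILE_UUID="00000000-0000-0000-0000-000000000000"

# Any job's shell step can then reference them with the ${VAR_NAME} syntax:
echo "Signing with: ${CODE_SIGN_IDENTITY}"
echo "Embedding profile: ${PROVISIONING_PROFILE_UUID}"
```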

Prepping the project

For Jenkins to build your project successfully there are two things that need to be done: share the scheme of your target and create a shell script that will do the heavy lifting for you.

To share your app's target scheme all you have to do is click on the scheme next to the Stop button, then on Manage Schemes…

Screen Shot 2014-07-24 at 1.33.31 pm

… and, on the screen that comes up, tick the Shared box on the right. If you don't do that you can only build your app from Xcode, and that's not what you want.

Screen Shot 2014-07-24 at 1.33.50 pm

The script is a bit more complex: you want to pass in the certificate you will be using and the provisioning profile that will be embedded in the artefact – that's why we had to create those environment variables beforehand, so we can reference them in the build script.

A sample script that I have used with some degree of success is below. You execute it from your terminal, passing in the provisioning profile UUID and the code sign identity as arguments.
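Assuming the script is saved as build.sh in the project root (the file name is my choice here, not something the post mandates), the invocation would look something like:

```shell
# Arguments: provisioning profile UUID, then code sign identity -
# here pulled from the Jenkins environment variables set up earlier
./build.sh "${PROVISIONING_PROFILE_UUID}" "${CODE_SIGN_IDENTITY}"
```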


So the script looks like this – thanks to this blog post, with some modifications:


#!/bin/bash

# $1 = provisioning profile UUID, $2 = code sign identity
if [ ! "$1" ]; then
  echo "Please provide a provisioning profile UUID."
  exit 1
fi

if [ ! "$2" ]; then
  echo "Please provide a code sign identity."
  exit 1
fi

pp="$1"
csi="$2"

# Create the output directories on first run
if [ ! -d './JenkinsBuild' ]; then
  mkdir './JenkinsBuild'
fi

if [ ! -d './JenkinsArchive' ]; then
  mkdir './JenkinsArchive'
fi

if [ ! -d './JenkinsIPAExport' ]; then
  mkdir './JenkinsIPAExport'
fi

xcodebuild -alltargets clean

rm -rf JenkinsBuild/*

# Simulator build, used later by the UI test job
xcodebuild -target "TARGET_NAME" PROVISIONING_PROFILE="$pp" CONFIGURATION_BUILD_DIR=JenkinsBuild -arch i386 -sdk iphonesimulator7.1

if [ $? -ne 0 ]; then
  echo 'Build failed. Could not compile app.'
  exit 1
fi

rm -rf JenkinsArchive/*

# Archive once; differently configured IPAs come out of this archive
xcodebuild -scheme "TARGET_NAME" archive PROVISIONING_PROFILE="$pp" CODE_SIGN_IDENTITY="$csi" -archivePath "./JenkinsArchive/blog.xcarchive"

if [ $? -ne 0 ]; then
  echo 'Build failed. Could not generate xcarchive.'
  exit 1
fi

rm -rf JenkinsIPAExport/*

# Export an IPA from the archive, embedding the given provisioning profile
xcodebuild -exportArchive -exportFormat IPA -exportProvisioningProfile "PROVISIONING PROFILE NAME" -archivePath "./JenkinsArchive/blog.xcarchive" -exportPath "./JenkinsIPAExport/blog.ipa"

if [ $? -ne 0 ]; then
  echo 'Build failed. Could not generate IPA.'
  exit 1
fi

You might have noticed we have some paths specified in the script, most prefixed with Jenkins. That is because we want to generate artefacts independently of where Xcode puts them – every machine is different, so we might as well manage those paths ourselves.

So JenkinsBuild, JenkinsArchive and JenkinsIPAExport are directories created at build time where artefacts will live and be referenced by other jobs.

OK, now you're good to go. Let's go set up Jenkins.

Jenkins Setup

The Build Job

With all those plugins installed and the script ready, it's now time to create our pipeline. Let's create our first job, the Build job: click on the New Item link on the left, give the job a name and select Build a free-style software project. A good standard I usually follow for naming jobs is "<APP_NAME> - <STAGE>".

You will then be presented with the configuration screen for the job. Since Jenkins doesn't provide a way to export job configurations, have a look at the screenshot below – paying attention to the yellow boxes, which highlight what you will have to change.

Note: configuration may differ from project to project. In this case I’m not using a provisioning profile or certificate to sign my artefacts but you could, if you wanted to, use the script we wrote above and pass in the same command as explained before.

Job configuration for "Build" stage


Good. That’s it for the build phase.

The Test Job

Let's now build the test phase: go back to the Jenkins home page and click on New Item again, give it a name, click on Copy existing Item and provide the name of your Build job – Jenkins has auto-completion so it should pick it up easily.

The configuration will be largely the same. We will have to execute a few extra things after the build step succeeds, such as archiving the artefact for publishing and publishing the test results, but the most important change is making sure that the Git SHA used in the Build job is the same Git SHA used here.

To achieve that we use the special environment variable $GIT_COMMIT in the branches to build field of the job's SCM configuration.

Finally we want to run a different shell script, for testing. This should automatically open the simulator and execute a suite of UI tests using the build that was created:

./ci.js "${BUILD_WORKSPACE}/JenkinsBuild/"

Have a look below for the screenshot configuration:

Job configuration for the "Test" stage


Great, with that in hand, let’s link the two jobs! To achieve that, go back to your Build job and add the post-build action Trigger parameterized build on other projects. Then type in the name of the Test job you have just created – auto-completion works here too – and click on Add Parameters to select the option Pass-through the Git Commit that was built.

That will create a dependency between the jobs and, if you go back to either job's home page, you will notice Downstream or Upstream Projects listed. Quite handy!

The Publish Job

But we are not quite done yet: we still need to create the last job, which will publish the artefact to TestFlight. Again, just like the Test job, create a new item by copying the Build job.

A couple of things to notice here:

  • We will be using the artefact that was created in previous jobs and not creating a new one, otherwise we would be compiling the whole app all over again
  • At the end of a successful publish we will be tagging the build so we can go back to it if needed in the future
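The tagging step itself only needs a couple of Git commands in the Publish job's shell step. A sketch – the tag format is my own invention, and the demo below runs in a throwaway repo:

```shell
# Demo in a throwaway repo; in a real job this runs in the Jenkins
# workspace and BUILD_NUMBER is provided by Jenkins itself.
cd "$(mktemp -d)"
git init -q .
git config user.email "ci@example.com" && git config user.name "ci"
git commit -q --allow-empty -m "initial"

BUILD_NUMBER=42   # supplied by Jenkins in a real job

# Tag the published commit so we can come back to it later...
git tag -a "published-${BUILD_NUMBER}" -m "Published to TestFlight"
git tag   # → published-42

# ...and push the tag (needs a remote named origin, so commented out here):
# git push origin "published-${BUILD_NUMBER}"
```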

Again, pay attention to the highlighted boxes as they dictate what has to be done.

Job configuration for the “Publish” stage

And just like the Build job we will have to change the Test job to allow triggering this job – but there’s a catch!

The catch is that we don't want to publish every successful build: we want to be able to run specific builds as we wish, while still leveraging the same Git SHA used in a specific pipeline.

Thus, going back to the Test job, add the post-build action Build other projects (manual step). Then type in the name of the Publish job you have just created – there's no auto-completion here – and click on Add Parameters to select the option Pass-through the Git Commit that was built.

Great, now you have linked jobs. But how can you see it?

Adding a pipeline view

Go to Jenkins home and on the list of Jobs click on the little plus sign next to the tab All. This will prompt you to create a new view.

Give it a name and select the option Build Pipeline View. Then you will be presented the configuration page for the view… all you really have to do is select the initial Job (which in our case is the Build job) and the rest is done automatically for you.

Have a look at mine, that’s how I like it – the important bits are highlighted.

View configuration for the pipeline


Final words

As you can see, I'm not using build configurations – mostly because I didn't bother looking into them in detail to see how to set it all up. But you definitely should: I used them previously on another project (already set up by someone else) and they were very handy for defining build names and specific configurations for different environments.

Another thing about this whole process is the automated testing. It uses UIAutomation as the driver and all tests are written in JavaScript – you can read more about setting it all up by following this blog post from my buddy Shaun Irvine.

Finally it’s very likely that you will run into a permission issue. Follow this thread on how to solve it.

Good luck with it all!

Hacking for the community – RHoK Melbourne

Two days. Six teams. A bit over thirty people. Loads of fun.

Loads of fun. If there’s a combination of words that best describes what the last Random Hacks of Kindness (RHoK) in Melbourne was, those are them.

But it's not only the fun of working on something different and helping a community project or a startup get off the ground or make the world a better place. It's also about the people: meeting new people, understanding their passions and aspirations, sharing, communicating, bouncing ideas and making things happen.

To me the spirit of RHoK was that: make things happen while getting to know awesome people. Create something small or create something big, but carry with you the fact that you're not there for yourself: you're there for them.

It was with that thought in mind that I joined this RHoK weekend. I went to the information session a month before the event and visited the website for more information on problems, but couldn't really decide what to do. I knew that I wanted to accomplish something and try some more nodejs.

But before I talk about how the days went by, let me take you through the format of the event.

RHoK Weekend in Melbourne

The whole event runs for an entire weekend. It started Saturday at 9 AM and went until the end of the next day, finishing with drinks at the pub.

As you'd expect it's a very social event, with people sharing ideas and tech solutions across teams so we can help each other get up and running quickly. Basically, the schedule is like this:

  • Day 1
    • 09.00 – Get set up
    • 09.30 – Welcome and administrivia
    • 09.45 – Choosing problem areas to work on, and teams to work with
    • 11.00 – Start your hacking
    • 19.00 – Dinner (provided)
    • 20.00 – More hacking
  • Day 2
    • 08.00 – Hacking continues
    • 14.00 – Time to down tools
    • 14.30 – All projects get 10 minutes to present their work to all
    • 15.30 – Judges deliberate
    • 16.00 – Awards and prizes
    • 16.30 – Drinks

It’s full on – as you expect – but very rewarding.

The problem I picked

OK, so I wanted something small as I had some time constraints on Saturday and wanted to get something off the ground quickly. The problem I picked was something people were already working on, but some of the team members had moved on and couldn't contribute to the project anymore.

The problem, already a project, is called Witness King Tides, from Green Cross Australia. Here's the original problem description. The snippet from the RHoK website is this:

Witness King Tides is an existing website that asks communities around Australia to take photos of the coastline when king tides hit. These photos capture what our coastal communities may look like in the future, as global sea levels rise. Together, the images build a picture of the threat posed by sea level rise across Australia and help track the future impact of climate change.

They needed a backend to handle some file uploads and capture some metadata about the photos that are uploaded. Previously a web client and native iOS and Android apps had been developed to great success, but the lack of a proper backend was causing them harm.

The lists of tasks


The solution

So the new guys that joined them – myself included – were very keen on nodejs. We knew it was simple and we knew it would be fun, so it was really just a matter of doing some planning and getting on with it!

Things looked pretty gloomy at start


We picked expressjs and included some extra libraries to make our lives easier. The original solution relies on Flickr to store the photos and have them organised in albums – manually organised, I must say – and we couldn't change that.

From there we knew that some interaction with Flickr was needed and it proved, one more time, that OAuth 1.0 sucks. Luckily, there were libraries already built that eased our pain of dealing with OAuth and we just needed to pull them in.

Note: Flickr itself provides a library which seems to be the way to go at first. But it doesn't provide an upload feature – which was what we wanted – and it downloads a lot of data when you start using it. To be honest, the only thing that adds value is that it makes your app a proxy to Flickr, making your clients invoke only one endpoint.

Coming back to the solution, there were basically two endpoints that were absolutely necessary: /tides and /upload. Here's a quick description:

GET /tides
Returns a list of tides already mapped by Green Cross Australia

POST /upload
Uploads a photo, streaming the content to Flickr and saving the metadata submitted on a Mongo DB
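The real service was built with expressjs; as a dependency-free sketch of the routing shape of those two endpoints (the data, field names and handler bodies below are made up for illustration, not the project's actual code):

```javascript
// Placeholder data standing in for the tides mapped by Green Cross Australia
var tides = [{ location: 'Melbourne', date: '2014-01-02' }];

// A plain routing function mirroring the two endpoints described above
function route(method, url) {
  if (method === 'GET' && url === '/tides') {
    // GET /tides - returns the list of mapped tides
    return { status: 200, body: JSON.stringify(tides) };
  }
  if (method === 'POST' && url === '/upload') {
    // POST /upload - would stream the photo to Flickr and
    // save the submitted metadata to MongoDB
    return { status: 201, body: 'created' };
  }
  return { status: 404, body: '' };
}
```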

A small bump appears

So we managed to get the two endpoints up and running on Saturday before I left – which was around 5:30 PM! That was great and I was happy. Andy was sorting out the hosting and another Julián was investigating the Flickr API for querying. Everyone was tuned in!

The next day comes and I get a question from Andy, who developed the iOS app, more or less along these lines:

So I tried to upload a photo using the iOS app and I got an error. Do I have to send a multipart request?

Damn! We had implemented only the web side of things and I had completely forgotten about iOS. So off we went investigating how to get it done and, as it turns out, it was a walk in the park.

Since we had created a small file that handles uploads – uploader.js – all we had to do was query the content type of the request and invoke the appropriate function in the uploader:

app.post('/upload', function (req, res) {
  var uploader = new Uploader();
  if (req.get('Content-Type').indexOf('json') >= 0) {
    // handles JSON payload
    uploader.handleJson(req, res);
  } else {
    // handles multipart
    uploader.handleMultipart(req, res);
  }
});
And to make life easier we are not even streaming that: we just send a base64 representation of the image in the JSON payload, so all I had to do was convert that into bytes and send it off to Flickr.
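Decoding the base64 payload into bytes is a one-liner with Node's Buffer. The field name photo below is a hypothetical example, not necessarily what the API used (and in 2014 this was spelled new Buffer(...); Buffer.from is the modern form):

```javascript
// Hypothetical JSON payload with a base64-encoded "image"
var payload = { photo: Buffer.from('not really an image').toString('base64') };

// Decode the base64 string back into raw bytes - this Buffer is
// what would be handed off to the Flickr upload call
var bytes = Buffer.from(payload.photo, 'base64');
console.log(bytes.toString('utf8')); // → not really an image
```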

That bump felt decent-sized at first, but after about 30 minutes the whole thing was done and everyone was happy.

Heck yeah!!



Just a quick note to talk about deployment. Amazon Web Services (AWS) provided some free credit for RHoK projects to host their solutions on.

My experience with AWS is almost zero, but Andy was experienced with it and decided to give Elastic Beanstalk a go – a service very similar to what Heroku provides in terms of “ready-made boxes for specific platforms”. Since we were using Node and both Heroku and Elastic Beanstalk support Node, we might as well use the free credits.

I must say it was the best decision. Beanstalk is not only flexible but it’s fast and doesn’t “go to sleep” like Heroku does with free apps. Sure we had free credit, but hey, I’m sure the costs will be minimal compared to Heroku.

And the whole setting up of command line tools for automagic deployment was brilliant.

Highly recommended.

So, what did we build?

Our test site is here: you can upload photos, check the tides and so on. The whole code is open source so you can just grab it and run your own if you want.

King Tides API

King Tides Web

King Tides iOS

Jackie did a fantastic job on the UI, which is now responsive and leverages the new endpoints we created.

On using Nodejs

If you haven’t yet, just do it. Learn and apply it. It’s way too much fun to be pushed aside.

Thanks to all!

The weekend was great: vibe was way up there, people were engaged and even our problem owner was connected to us from sunny Brisbane.

The RHoK committee did a fantastic job and I’m totally looking forward to the next one already. New problem, existing problem, bring it on!


I feel lost in this whole modern world. Not all the time. But more often than I like.

It’s tough to see the values you were taught at a younger age being ripped apart right in front of you and all you get to do is sit down and watch – and hopefully learn that things are not exactly this way anymore.

I feel like an embarrassment, a burden even. Someone that doesn’t add value, that “doesn’t get it”.

It’s tough. I don’t know what to do. Even when I open my doors so all kinds of winds flow through and portraits fall and things get out of order, it feels like the wind never disordered things enough for me to fix anything.

It’s frustrating.

I feel lost, dumb. Out of place and out of touch.

I can’t say anymore that I’m not stressed. I can’t say anymore that “It’s alright, no worries”.

Putting on a brave face every day and facing the consequences of your choices is not easy. Nor is smiling when you're burning inside.

I guess there are times in our lives where we get tested for all sorts of things and my time is now and I’m not entirely sure I’m doing a good job.

I don't want your compliments. I don't want you to pat me on the head.

I want an ear. Maybe a shoulder.

Can you lend one to me?

About a talk at a university

Recently I volunteered to give a talk at RMIT and I think it would be great to share my experience here, since I enjoyed it so much.

I have never done such a thing before and didn’t really know what to expect. I knew the subject and believed I was prepared enough but couldn’t avoid having that “butterflies in your stomach” kind of feeling.

My talk was about Code Refactoring and the usage of Design Patterns to refactor code and make it more maintainable.

As I walked into the university – for the first time since I've been in Melbourne – I was anxious to find the room, settle down and write a bit about what I was going to talk about. Mind you, I had the slides ready and the content in my head, but not writing it down would have made for a disaster of a lecture.

I arrived early, which gave me time to elaborate the talk up to about half of the slides before the actual lecturer walked in. He greeted me and asked if I wanted to go first, as he was very interested in what I had to say – I was supposed to start my lecture after his, and I was interested to hear what he had to say.

Well, guess you can’t plan everything.

Anyway, as I was getting ready I asked him if he had brought what I said I needed – we had emailed each other during the week to clarify things – to which the answer was no.

I had slides ready. And I have a Mac. And, stupidly, I had decided to use Keynote to write my slides – why oh why. And the university, as you might expect, still uses VGA connectors.

Why dear lord? … fades into oblivion in despair …

So, the lecturer told me:

Wait here, talk to the students while I’m away. I’ll be right back.

Funny experience. I didn’t have my slides to support me and I knew I had to introduce myself and the company I work for and yadda yadda. Actually it was kinda terrifying being left with an audience that is expecting you to do something… so I had to improvise!

And so I started introducing myself and slowly easing into it. Talking about myself, what I do, what the company I work for does and all that was a very nice way to engage with the students. But not only that: asking them back what they knew and what they did was also a way to break the ice and understand where they were at technically.

Some of them actually worked in the industry while some others were brand new to all that stuff which was a great mix as I could read their faces and understand when I was going too deep into the subject.

When the lecturer came back with what I needed I felt like I had developed something with them already and was at ease to continue.

My talk went over an hour. I touched on multiple points of code refactoring, how we did this and that and what design patterns are good – and not good – for. The students were very receptive of all I had to say and some were really engaged into all that.

To me the highlight of the talk was when I showed them some code that I had worked on previously. The code was so bad that there were giggles and talks and questions and comments… it was a really cool point and the atmosphere was lifted considerably.

Actually, the most rewarding part was seeing the students taking notes and feeling that I had helped them understand that things are not quite as they study them: what they teach at university is not really what's out there and, for as much value as it can add – being the building blocks of their technical expertise – one should always challenge it.

At the end there were some questions, though mostly nothing worth mentioning. One question, however, was particularly interesting, and it came from the lecturer:

Here at RMIT we teach J2EE using Glassfish and NetBeans. Do you think we should change?

What was interesting about this question is that, five years after I graduated, they are still using the same tools. It was not my place to say “Yes”, but I sort of hinted at it: in the industry we don't use such tools. They can be simple, but they don't provide the flexibility that the industry needs.

He gave me a gift and I was away. I had taken almost his entire class and left him with 30 minutes for his lecture – and possibly happy students, since I shortened a lecture and no student likes being in lectures.

Gift from RMIT



I would like to encourage you to do the same. Standing in front of students and sharing what you know is such a great feeling that it's hard to explain. You feel good about doing something for someone else, helping with people's knowledge and possibly making new friends.

My experience was truly great, I would do it again for sure.

Here is the presentation and below is the video. Have fun!

The fall of TDD?

Over the last couple of days there was a conversation on Twitter involving Uncle Bob Martin, Martin Fowler and David Heinemeier Hansson about Test Driven Development (TDD), the value of feedback, code design, among other things.

I didn't follow the whole conversation, but some points made me think about how much value TDD actually adds nowadays and how much effort people put into writing tests just for the sake of writing tests, increasing feedback time and possibly over-engineering a system.

Take this argument from Martin Fowler:

Instead of focusing on his argument, let’s focus on the question that his argument answers:

Write good and maintainable software without TDD? How?

TDD is a process, a way of doing things in a certain manner to achieve an outcome that will make you feel comfortable at the end. This comfort, as you probably know, comes from having the confidence that every time such tests run, your code is verified and any change that breaks the contract is picked up, prompting whoever ran the tests to fix the code.

But TDD is not only a process, is it? It’s also a methodology that can be applied or not, depending on how well someone – this someone being us, developers – adapts to it or not. And also, being a methodology, it’s prone to modification.

I believe the argument above is great because TDD is not the whole world. I believe quality code can be written without TDD, but that doesn't mean you should not test your code. Testing is essential in any application as it gives us confidence, tells us about business requirements (to a degree) and allows for enhancements.

I was working on a large Java project – as Java projects go, everything is big – with a number of Maven modules and countless tests. Actually, I counted them: 2,759 test files. 14K+ tests. That's right. The whole suite took around 25 minutes to run. Feedback is slow. Enhancements are slow. And that doesn't even include the behaviour tests which, separated into 10+ modules, took an average of 12 minutes each.

My point is that excessive testing is harmful: you should not test every if branch. You should not test every class. My rule of thumb usually is: if there's important business logic in there, let's test it. Otherwise, leave it be. And on a system so large, how can you ensure your code will not have side effects? You can trust the tests, but what if the tests themselves are not really testing much? What if, after a closer look, all the tests really do is expect mocks to be called?

TDD feels a bit harmful to me simply because everyone wants to do it. Just like pair programming – and just like the argument above from Mr Fowler – often ≠ always, and common sense and practicality come before doing anything.

But how can you ensure your design is correct?

Can you with TDD? I don't think so, not all the time. I can still write testable code without having to write a test for it first, and even make sure I'm still following the SOLID principles. Testable code. What is it, really? What makes code testable? Are we talking unit tests? Testable code, IMHO, means:

  1. Code that is simple to understand
  2. Code clearly states business language
  3. Code can be reused

That does not mean unit tests only: as I mentioned earlier, we only test what is worth testing. We could ensure our code is testable by running behaviour tests. We could say that a record being inserted in the database is a unit test. We could even say that ATDD (Acceptance TDD) is the only way we can move ahead with the project.

Allowing TDD to be the only way you can expand your software design is a trap: you will be stuck in a circle where it provides all the methodology and process to follow, and you will eventually forget to let your imagination run free. David actually tweeted about this:

I believe it's correct to have a driver for design, but having TDD as the only driver doesn't sound right. Sometimes even gut feel is the right approach: you can't measure it, but you feel that things should go a certain way and exploring them is the right way to go. Follow your instincts and see where they lead you.

And yes, all hail TDD 2.0!


Git basics usage – or how to not destroy your tree from the start

Git is very easy – once you get used to it.

The learning curve in Git is usually around the workflow, which involves staging changes before committing them and pushing them to the remote repository. But it's not only that: merging has a special meaning in Git, and Git's branching model can also be very challenging.

I've been working actively with Git for the last two years and, over the last five months, I've had my ass handed to me countless times simply because I didn't really know what was going on.

So I decided to share my learnings here so you don't feel useless like I did countless times.

The basics


Starting a new repository means some setting up. You are very likely to do three things here:

  • Add a .gitignore file
  • Add a remote repository
  • Do your initial commit

To init a folder as a Git repository, all you need to do is issue the command below:

git init

That will create a hidden folder named .git inside your directory and it will now track changes. You will receive a message like this:

Initialized empty Git repository in /Users/tarcio/dev/projects/git-test/.git/

So, let's change the repository by adding a new empty file to it.

touch .gitignore

That will create a new file inside your directory but, to Git, that file is untracked which means we can change it at will and Git will not track its changes. Issuing the command git status will tell you what has changed in the repo.

$ git status
On branch master

Initial commit

Untracked files:
 (use "git add <file>..." to include in what will be committed)

 .gitignore

nothing added to commit but untracked files present (use "git add" to track)

As you can see Git recognizes there’s a new file in the repository but it’s not tracking any changes on that file. It even gives you a hint on what to do at the last line, so let’s follow that recommendation!


Staging a new file means that you possibly want to commit this file to the repository in the near future and would like Git to keep track of changes on that file for you.

The staging area is a temporary space in Git where you can prepare your changes for the next commit. When you stage a file you are telling Git:

Hey Git, keep that for me while I work on something else. I will catch up with you shortly.

So, let’s add our .gitignore file by issuing the following command:

git add .gitignore

Done, that file is now staged. If you issue git status again you will see a different message from Git:

$ git status
On branch master

Initial commit

Changes to be committed:
 (use "git rm --cached <file>..." to unstage)

new file: .gitignore

You can see that the file has now been staged and can be unstaged. That means Git is now tracking every single change we make to the file. Let's test that by adding a line to the .gitignore file and issuing git status again.

$ echo "*.log" >> .gitignore && git status
On branch master

Initial commit

Changes to be committed:
 (use "git rm --cached <file>..." to unstage)

 new file: .gitignore

Changes not staged for commit:
 (use "git add <file>..." to update what will be committed)
 (use "git checkout -- <file>..." to discard changes in working directory)

 modified: .gitignore

OK that’s pretty cool. Git identified that the first version of the file – an empty file – is ready to be committed…

Changes to be committed:
 (use "git rm --cached <file>..." to unstage)

 new file: .gitignore

… but the change that was just made is not ready yet, and Git is telling you that the same file was modified. You can either stage the change or check out the file to revert it.

Changes not staged for commit:
 (use "git add <file>..." to update what will be committed)
 (use "git checkout -- <file>..." to discard changes in working directory)

 modified: .gitignore

So let’s go ahead and stage that change and commit the file.


Committing

Committing is stamping the change with a big, green OK tick. It doesn’t mean, however, that you will push this change to the repository. What you’re doing is committing to your local repository, not to the remote repository – if you want to know more about Git’s distributed model, please read about it in this free book.

So let’s commit those changes. I’m using a combination of add and commit commands that will add the unstaged changes and apply the commit with a message at the same time.

$ git commit -am "Ignoring all log files"
[master (root-commit) cd3e3aa] Ignoring all log files
 1 file changed, 1 insertion(+)
 create mode 100644 .gitignore

Issuing a git status after that you will notice the following:

$ git status
On branch master
nothing to commit, working directory clean

OK, that’s nice. Your working directory is clean, which means there are no changes in your project. You can issue the git log command and see the history of changes in your repository if you want.

commit cd3e3aafa900a7243653cb83458b78f8f46af9aa
Author: Tarcio Saraiva <>
Date: Thu Apr 10 11:03:16 2014 +1000

 Ignoring all log files
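If the default log output feels verbose, the --oneline flag condenses each commit to one line: an abbreviated hash plus the commit subject. Here’s a minimal sketch in a throwaway repository (the paths and identity below are made up for illustration):

```shell
# Set up a throwaway repository with a single commit (hypothetical identity).
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name "You"
echo "*.log" > .gitignore
git add .gitignore
git commit -qm "Ignoring all log files"

# One line per commit: abbreviated hash plus commit message subject.
git log --oneline
```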

However, we are forgetting something: we are only working locally. I want to make those changes available so everyone in my team can also ignore log files. I have to add a remote so I can push my changes to it.

Adding a remote

Adding a remote is adding a place where your code can be stored. You will be able to push your commits to that location and pull changes from that location. It’s common to name such remote repository origin as it will be the source of truth for all developers in your team.

Execute the following command to add a remote:

git remote add origin <repository-url>

Done. Git will not give you any feedback at this stage so it’s natural to execute git status again to see if anything changed.

$ git status
On branch master
nothing to commit, working directory clean

Hmm, still looks the same. What we are missing is actually pushing that change to the remote branch, so that we can track our changes against the remote repository as well.
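If you want to confirm that the remote was actually registered, git remote -v lists each remote with its fetch and push URLs. A quick sketch (the URL below is just a hypothetical example):

```shell
# Throwaway repository; the remote URL is a made-up example.
cd "$(mktemp -d)"
git init -q demo && cd demo
git remote add origin https://example.com/git-test.git

# -v lists every remote together with its fetch and push URLs.
git remote -v
```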


Pushing

Pushing to the remote branch means you want your commit to be available to everyone else. Since this is your first push after adding the remote, you also want to tell your local branch – master – to track the changes in the remote branch – origin/master.

The way you do that is by simply issuing the command below:

git push -u origin master

Which will provide the following output:

Counting objects: 3, done.
Writing objects: 100% (3/3), 220 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
 * [new branch] master -> master
Branch master set up to track remote branch master from origin.

Note: the -u flag on the command above is telling Git that you want to track your local branch master with changes from origin’s remote branch master. You can omit the flag on future pushes.
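If you want to verify the tracking relationship that -u set up, git branch -vv shows the upstream next to each local branch. Here’s a sketch using a local bare repository as a stand-in for the real server (all names here are hypothetical):

```shell
# Throwaway setup: a local bare repository plays the role of the server.
cd "$(mktemp -d)"
git init -q --bare remote.git
git init -q work && cd work
git config user.email "you@example.com"
git config user.name "You"
echo "*.log" > .gitignore
git add .gitignore
git commit -qm "Ignoring all log files"
git branch -M master              # make sure the branch is called master
git remote add origin ../remote.git
git push -q -u origin master      # -u records origin/master as the upstream

# -vv prints each local branch with the remote branch it tracks.
git branch -vv
```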

Your change is now online! Woohoo! That means other devs can add and remove stuff. I have just modified the remote repository by adding a README file to it, but my local repository doesn’t have the change yet.

I need to pull such changes so I can keep my repository up to date.


Pulling

The pull command is actually split in two. You can either just see what’s coming down from the remote repository without affecting your local repository, or you can accept everything.

To do the first you can issue the git fetch command, which will update your index without modifying your content. You will know what’s coming; here’s what the output looks like.

$ git fetch
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
 cd3e3aa..7a718b0 master -> origin/master

The output above is telling you that you now have 4 objects as opposed to 3 when you pushed. Something has definitely changed, and to confirm it we can execute the command below to see what’s coming from the remote.

$ git log --all --source

commit 7a718b0e0e255c91640bd364704b0ea6ae8c9ab8 refs/remotes/origin/master
Author: Tarcio Saraiva <>
Date: Thu Apr 10 11:27:44 2014 +1000

 Adding a README

commit cd3e3aafa900a7243653cb83458b78f8f46af9aa refs/heads/master
Author: Tarcio <>
Date: Thu Apr 10 11:03:16 2014 +1000

 Ignoring all log files

You’ll notice that we now have two commits. The top one is the remote commit, which can be confirmed by the ref next to the commit hash: refs/remotes/origin/master.
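Another handy way to preview exactly what a pull would bring is the two-dot range syntax, which lists only the commits that origin/master has and your local master does not. The sketch below simulates a fetched remote entirely locally (via update-ref), so no server is needed; identity and file names are made up:

```shell
# Throwaway repository with two commits (hypothetical identity).
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name "You"
echo "*.log" > .gitignore
git add .gitignore && git commit -qm "Ignoring all log files"
git branch -M master
echo "hello" > README
git add README && git commit -qm "Adding a README"

# Simulate a fetch: pretend origin/master is one commit ahead of us.
git update-ref refs/remotes/origin/master HEAD
git reset -q --hard HEAD~1

# List only the commits we don't have yet.
git log --oneline master..origin/master
```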

To incorporate that commit into your tree you issue the git pull command, which not only updates your index but also updates your working copy by pulling the changes from the remote repository.

$ git pull
Updating cd3e3aa..7a718b0
Fast-forward
 README | 3 +++
 1 file changed, 3 insertions(+)
 create mode 100644 README

It’s quite clear what the output is saying. From the last line you read that a new file has been created and it has 3 insertions – each insertion is a line change.

That’s it for basics. Let’s now check a real life scenario where people commit code and the same file is affected both locally and remotely.

A real life scenario

You are working on a feature and decided to change the .gitignore file by ignoring the entire logs folder. In the meantime your colleague decided to modify the same .gitignore by adding a line to ignore the entire tmp folder.

You both commit your change locally but you get interrupted. Your colleague however pushes his change to the remote repository. When the interruption is over you remember that you haven’t pushed and decide to issue the git push command.

You then are presented the following output:

$ git push
 ! [rejected] master -> master (fetch first)
error: failed to push some refs to ''
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

That means that you can’t push. Somebody else has already modified the file you want to change, and if your push were accepted, your colleague’s commit could be lost and the tree would be messed up.

Note: when I’m talking about the tree I’m talking about the sequence of commits and branches that make up a tree-like structure to ease visualisation of the repository. The tree below is not ideal – an ideal tree shows a straight line – but gives you an idea how Git displays branches.

Git tree with branches

Coming back to the scenario above, what do we do? You can’t push, but you can pull your colleague’s change and apply your change on top of his.

Hold on, what?


Here’s a picture of what the tree looks like at this stage.

Tree one commit ahead and one commit behind

If you just issued a git pull without applying your change on top of your colleague’s, the tree would look broken: Git will attempt a merge automatically and, if there are no conflicts, it will add your colleague’s commit alongside your own and create a new “merge” commit. Here’s how it looks.

Ugly pull and merge

It’s just plain ugly. What we want is a straight tree, because it’s easier to map the changes and the history line is not broken up like that.

So what do we do? We issue the git pull --rebase command. This command will:

  1. Pull your colleague’s changes
  2. Apply them to your local repository
  3. Apply your commit on top of it

If there are any conflicting changes then we will have to merge. In this case there are, since both changes touched the same file at the same line, so you get this output after issuing the command:

$ git pull --rebase
First, rewinding head to replay your work on top of it...
Applying: Ignoring logs folder.
Using index info to reconstruct a base tree...
M .gitignore
Falling back to patching base and 3-way merge...
Auto-merging .gitignore
CONFLICT (content): Merge conflict in .gitignore
Failed to merge in the changes.
Patch failed at 0001 Ignoring logs folder.
The copy of the patch that failed is found in:

When you have resolved this problem, run "git rebase --continue".
If you prefer to skip this patch, run "git rebase --skip" instead.
To check out the original branch and stop rebasing, run "git rebase --abort".

That means there’s a conflict that Git alone can’t solve and you will have to merge the file manually. The rebase has now stopped until the conflict is fixed. Git also provides you with options: you can either skip this rebase…

If you prefer to skip this patch, run "git rebase --skip" instead.

… or completely abort it.

To check out the original branch and stop rebasing, run "git rebase --abort".

We are not going to do either. Let’s fix that file. It now looks like this:

$ cat .gitignore
*.log
<<<<<<< HEAD
tmp/
=======
logs/
>>>>>>> Ignoring logs folder.

Because both changes landed on the same line, Git is unsure what to do with it. But we know what to do: keep both ignore rules and remove the conflict marker lines (the ones starting with <<<<<<<, ======= and >>>>>>>). Your file should look like this:

$ cat .gitignore
*.log
tmp/
logs/
Once that is done it is time to continue the rebase. Remember that the rebase stopped because of the merge conflict and we have to resume it. The way we do that is by staging the file and then continuing the rebase.

$ git add . && git rebase --continue
Applying: Ignoring logs folder.

Done. The output of the command – Applying: Ignoring logs folder. – means that Git has successfully accepted your merge and applied your commit on top of your colleague’s commit. The tree now looks nice and straight and the history is correct: your colleague’s commit comes before yours.

A nice, straight history tree

The benefit of such an approach is that you keep your history intact: no diversions, no strange branches that loop back to the same origin. You can easily identify what was done, in a linear manner.


Branching

Branching is very easy. If we want to create a new feature branch for our project, all we need to do is:

$ git checkout -b my-feature

That will do two things:

  1. Create a new branch
  2. Switch your working directory to the new branch

Every change is now applied on the branch and you can commit and push your work separately from the other team members. The same commands we issued above in the basics section are still valid; the only difference is that you are working on another branch.

But then the time comes when you want to merge your branch changes into the master branch. So how do you do that? Easy.

$ git checkout master
$ git pull --rebase
$ git rebase my-feature

If there are no conflicts then it will do the same thing as pulling a change from a remote: it will apply your feature branch on top of master.

Be mindful that the master branch has to have the latest changes from the remote, otherwise you will end up in merge hell. That’s why I added the git pull --rebase command in the middle: to avoid any surprises before actually rebasing master onto your feature branch.
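Putting the branching commands together, here’s a minimal end-to-end sketch in a throwaway repository (no remote is involved, so the git pull --rebase step is skipped here; names and identity are made up):

```shell
# Throwaway repository (hypothetical identity).
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name "You"
echo "*.log" > .gitignore
git add .gitignore && git commit -qm "Ignoring all log files"
git branch -M master

# Create a feature branch and commit on it.
git checkout -qb my-feature
echo "logs/" >> .gitignore
git add .gitignore && git commit -qm "Ignoring logs folder"

# Bring the feature commit onto master, keeping a straight history.
git checkout -q master
git rebase -q my-feature

git log --oneline
```

Because master had no commits of its own since the branch point, the rebase is effectively a fast-forward and the history stays a single straight line.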


Resetting

Did something go terribly wrong? It’s easy to fix and start over. A reset is essentially that: getting your local repository back to a state you are comfortable with.

There are five kinds of reset but the most used ones are HARD and SOFT:

  • A hard reset resets the index and the working tree, discarding everything that was modified in the working tree
  • A soft reset does not touch the index nor the working tree but resets the head to the specified commit

The basic syntax is:

$ git reset [--hard|--soft] <commit>

The commit at the end is the hash of the commit you want to reset to.
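Instead of a hash you can also use relative notation such as HEAD~1, “the commit before the current one”. Here’s a small sketch of both flavours in a throwaway repository (identity and file names are made up):

```shell
# Throwaway repository with two commits (hypothetical identity).
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name "You"
echo "*.log" > .gitignore
git add .gitignore && git commit -qm "Ignoring all log files"
echo "tmp/" >> .gitignore
git add .gitignore && git commit -qm "Ignoring tmp folder"

# Soft reset: move HEAD back one commit, keep the change staged.
git reset -q --soft HEAD~1
git status --short        # .gitignore shows as a staged modification

# Hard reset: throw the staged change away entirely.
git reset -q --hard HEAD
git status --short        # clean working tree again
```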


Tools

There are several GUI tools around that alleviate heavy command line usage. I would recommend only using such tools once you have a better understanding of how Git works, when you can replicate the commands issued by the tools in the command line and understand exactly what is happening.

More tools can be found in the Git book chapter about tools.


Conclusion

As you can see there is a bit of a learning curve, but what it really takes to learn Git is getting used to the workflow. There are plenty of resources around and people may choose different strategies, but the essentials are above.

Following the commands above I’m sure you will not end up in merge hell, and you will have most of the bases covered to get started.

Good luck!

Disposable software

The other day I was talking to a friend at work about software development, frameworks and the like. One of the things we touched on was how software is becoming disposable, much like today’s modern mentality:

This is outdated, let’s trash it and get a new one.

I come from Brazil and I was in a very good place over there where I could do that. Technology in Brazil is quite expensive: a MacBook Pro 13″ with Retina display and 8 GB RAM costs almost 4 times the price we pay here in Australia. It is not a disposable item by Brazilian standards and, for many, is considered a luxury item that is supposed to last a lifetime.

Even here in Australia a MacBook Pro is a considerable investment, but you know that the investment will pay off over the next 2 years, so you buy it. Because in 2 years’ time you will probably have accrued enough money to buy the newer version with better graphics, better memory, better software, and the one that will possibly make coffee for you.

And that’s where I believe software is heading. I don’t really have a formed opinion on this yet but hopefully, by the end of this piece, I will at least know where I stand, and maybe you would like to discuss it with me.

The issue

Let’s start by raising the question: is software supposed to be disposable?

One of the definitions of soft is something “not strict or sufficiently strict”. Something “easy to mold, cut, compress or fold”. Being that malleable I believe that software should be something that can be changed, possibly into something better.

Well-crafted software is built so that previous releases of it support the new and improved version. Software development is a form of art, a craft hard to master, and I believe that’s one reason why software is becoming disposable.

Let me stop here and make a distinction first: I’m talking about software that we, corporate developers, use on our daily routines. Frameworks and the like, created to help developers like us make our jobs easier, may not fit into this category at present.

So, software is becoming disposable. Take micro frameworks for instance: I’m sure a lot of work has been put into them to make them work properly and reduce the whole “new project setup” to almost zero or have a prototype up and running in no time.

However, after that initial setup, how easily can you deconstruct said prototype and start from the ground up? All you need is some business rules and you’re done: show me the money, I’m out the door!

OK it’s not that simple but the ability to destroy something and rewrite without much pain is indeed a great benefit for corporate development. Many applications are still monolithic and massive but the trend is leading towards software that can be easily replaced by a new and improved version, possibly written in a different language, that will still deliver the goods.

The reality

We all would love to live in a world where CIOs would just take our word for it and back our decision to change working software just to keep up with current trends, wouldn’t we? New tech all the time, a constant flow of new information and ideas generating creativity everywhere.

Ah, if only the world were such a dreamy place.

Truth is, everyone is scared of change. And I do mean everyone. Even we corporate developers, who would love to adopt change and influence our leaders. Disposable software is only a reality in environments where change is accepted but not really controlled.

Hold on, what?

Control is an illusion. You can tell someone to do a job a certain way, but you can’t expect the job to be done the same way you would do it, because that’s a different person. The result may be the same but the process can be different, and that’s OK.

Same principle applies to software. How many open source libraries have you pulled into your software? Do you know how every single one of them works? You know you got the result you desired, but what are the side effects? Your only controlled decision was to bring that piece of software in, and maybe pull it out once your CPU starts to smell like it’s burning.

An environment with disposable software accepts change as a good thing. An environment afraid of change will apply change at a slower rate being very careful to not throw away anything.

Change is inevitable and micro frameworks provide that in such a beautiful way that they take away the fear of change by applying a “change first” kind of mentality to everyone involved, making the software you write disposable but not of lesser quality.

Remember that beyond whatever software or framework you choose, you still have to deal with people’s mentality. Software disposability has a lot to do with forward-thinking, change-driven personnel who want to make the place they work better and offer better products.

Or just want to make a difference to the big boss by suggesting anything that they read on LifeHacker.

Final thoughts

My point is that the adoption of a “pro change mindset” is a slow process that dictates a lot how software is written, deployed and maintained.

I guess I’m pro software disposability. We should certainly not forget the lessons from past experiences, nor push so hard that we dismantle an entire way of working, but we should adopt change as a good thing, embracing new ways of doing things and discarding the old bits that don’t really fit the new reality.

In a world where hardware is disposable, making software disposable doesn’t generate physical waste and doesn’t create clutter. Discard really means SHIFT + DELETE (if you are on Windows).