Computational Thinking – Decomposition

Decomposition describes the process of breaking down a problem or task into smaller, manageable tasks or problems. It is a part of Computational Thinking.

Let’s say we attend a class to learn something new. The class has a goal, for instance to show us how to become a programmer. Becoming a programmer is a big task that takes a while. How do you approach this? To make the class worth our while, it’s broken down into individual lessons so that the knowledge is easier for the brain to digest.

This results in many lessons, spread out over days and weeks. Multiple lessons help us process everything, apply it bit by bit, and give our long-term memory time to store it all properly.

Now, with computers and computer programs we apply a similar strategy, but for different reasons. If the course were taught without breaks, our brains would fatigue quickly and eventually stop absorbing information. The computer, luckily, does not suffer from fatigue.

Today’s computers accomplish so many things that were unheard of 20 or 30 years ago. They can recognize speech and make smart suggestions to us; it appears as if they could think. The truth is: they don’t. They only do what their program tells them to do, and that program was written by humans.

When you look at the basic skills of a computer, it can only work with numbers (very long ones) and perform basic arithmetic operations on them: add, subtract, multiply and divide. Over time, we’ve built abstractions that make interaction easier. They allow us to work with text and images, draw graphical user interfaces, and so forth.
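A quick illustration of one such abstraction: even text is numbers underneath. In Python (chosen here just as an example language), every character has a numeric code, and the "text" we see is built on top of those numbers.

```python
# Text is an abstraction over numbers: every character
# has a numeric code point the computer actually stores.
word = "Hi"

codes = [ord(c) for c in word]          # the numbers behind the text
text = "".join(chr(n) for n in codes)   # rebuild the text from the numbers

print(codes)  # [72, 105]
print(text)   # Hi
```

The computer only ever sees the numbers; the abstraction lets us think in letters instead.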

When we write a program, we break our problem (“I want to write a program that calculates my monthly budget”) down into smaller problems. These are then solvable with the tools the computer provides us, such as asking the user for input, storing that input, performing calculations and printing the results.

This would be the procedure for a budget calculator on the command line. The same principle applies for applications with Graphical User Interfaces as well.
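As a sketch, the decomposition above could look like this in Python: one small function per sub-problem (input, calculation, output). The function names and the simple income-minus-expenses model are just one possible way to slice it, not the only correct decomposition.

```python
def read_amount(prompt):
    """Sub-problem 1: ask the user for a number and store it."""
    return float(input(prompt))

def calculate_balance(income, expenses):
    """Sub-problem 2: perform the calculation."""
    return income - sum(expenses)

def print_report(balance):
    """Sub-problem 3: print the result."""
    print(f"Money left this month: {balance:.2f}")

def main():
    # The big problem is now just the small solutions, combined.
    income = read_amount("Monthly income: ")
    rent = read_amount("Rent: ")
    groceries = read_amount("Groceries: ")
    print_report(calculate_balance(income, [rent, groceries]))

if __name__ == "__main__":
    main()
```

Notice that each function is small enough to understand at a glance; that is the payoff of the decomposition.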

It takes a bit of practice to break problems down into pieces of the right size so we can write efficient code for them. The more small programs you write, the better you will become at this.