Iskandar Setiadi

Flag #11 - 1 Year as Software Engineer vs 4 Years as Undergraduate

It has been half a year since my last post, so welcome back to my personal blog :) The number of visitors has also gone up by a frigging 300%, wow, thanks everyone! I have been planning to write this post for two months, but the handsome monkey (for reference, see the following TED video) kept taking control of my mind, and I have finally managed to escape from the monkey for a while.

I hope you enjoy reading this article!


During my university years, I had a belief that a university is basically founded to prepare its students for facing "The Great World". That is correct to a certain extent; however, "The Great World" has more secret weapons up its sleeve, and it invalidated half of the things that I usually did at my university. Personally, I feel that my first year as a software engineer was as life-changing as the entire four years of learning at the university. FYI for first-time readers, I graduated from Institut Teknologi Bandung in August 2015 and joined a cloud security company in October 2015.

As a caveat, these aspects are based on my own experience, so they are probably very subjective and you might think about them differently. "Software engineer" in this context refers to software development in an enterprise, so it might be different if you compare it to being a data scientist, lecturer, analyst, and so on.

1. Error Handling

In one university term, I had to take around 18 to 24 credits, which is equal to 5 to 7 different classes. University life was mostly filled with peaceful days, except for 4 weeks in every 6-month cycle: 2 weeks during the mid-term period and 2 weeks during the end-term period. Those 2-week periods can be summarized as:

  • Feeling happy that a lot of final scores are taken from programming assignments
  • (2 weeks later)
  • Regretting that my past self did not prefer written exams over programming assignments
  • Repeating the cycle in the next term

Those two weeks were usually brutal and hellish; as a side effect, some of my friends could even ride a motorcycle while half-asleep after working on those programming assignments. Miraculously, the failure rate of riding a motorcycle while half-asleep was less than 1% (probably they should write it on their CVs :p). For me, it was also customary to grab a can of coffee from Indomaret/Alfamart (equivalent to 7-Eleven or Lawson) at 3 a.m. during those 2-week periods.

Probably it's not like this one :p

A programming assignment at ITB is usually tied to one laboratory (Artificial Intelligence, Distributed Systems, Programming, etc.), and each laboratory has its own assistants, who are usually 1 to 2 years above the course level. The thing is, your assignment will be graded by those assistants' scoring system. The score distribution usually focuses on functionality, so as long as your code works according to the given specification, 80% of the total score is guaranteed. In this manner, I rarely thought outside the given specification and simply wrote the code directly. For example, if the given task is to implement a simple payment gateway, you only need to show the assistant that your application can handle transactions between A and B.

def transaction(sender: User, receiver: User, amount: int):
    # Happy path only: enough to pass the assistant's functional check
    decrease_money(sender, amount)   # helper names are illustrative
    increase_money(receiver, amount)

In my current work, I usually spend more time thinking about the opposite case: error handling. Making a feature work is actually quite easy, because the steps are usually clear. In the payment gateway example (I am not working on a payment gateway, but it is the easiest example), there should not be any corner case where a problem interrupts the transaction between sender and receiver, creating 2 different DB records of increase_money instead of 1. If the payment gateway receives 0.1% of the money per transaction, 1 failed transaction might nullify the profit from 1000 other successful transactions. If the probability of a connection problem is greater than 0.1%, then the payment gateway will file for bankruptcy pretty soon :D The damage is not limited to material loss, either, as customer trust will decrease significantly compared to the trust that we gain during error-free daily operations.

def transaction(sender: User, receiver: User, amount: int):
    try:
        ...  # main transaction logic
        logging.info("...", extra={...})  # log is important for traceability
    except DefinedError:  # We know what we should do
        logging.error("...", extra={...})
        rollback_state_defined(sender, receiver)
    except UndefinedError:  # We never think about this case
        logging.exception("...", extra={...})
        rollback_state_undefined(sender, receiver)
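
The arithmetic behind the 0.1% claim above can be made concrete in a few lines (the transaction amount is a hypothetical assumption; only the 0.1% fee comes from the text):

```python
# Back-of-the-envelope: break-even failure rate for a gateway taking a 0.1% fee.
# A failed transaction that double-credits the receiver loses the full amount,
# so one failure wipes out the profit of ~1000 successful transactions.
fee_rate = 0.001                   # 0.1% fee per transaction (from the text)
amount = 100.0                     # hypothetical transaction amount
profit_per_success = amount * fee_rate       # 0.1 per transaction
loss_per_failure = amount                    # full amount lost on a double credit

break_even_failure_rate = profit_per_success / loss_per_failure
successes_nullified = loss_per_failure / profit_per_success

print(break_even_failure_rate)   # 0.001, i.e. 0.1%
print(successes_nullified)       # 1000.0 successful transactions per failure
```

Note that the break-even rate equals the fee rate regardless of the amount, which is why a gateway whose connection-problem rate exceeds its fee rate cannot be profitable.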

There is a classic proverb: "make it work, then make it right". The former part is enough if our goal is only to survive university; however, the latter is absolutely required to develop reliable software.

2. Length of Code

As a used-to-be newbie competitive programming participant, I used to write my code as short as possible. I had always thought that shorter code is better, because you can type it faster and it consumes less space. Then the problem came: I could not read my old code and understand what the hell was going on. Our lecturers taught us to write well-commented code, so I finally decided to add a lot of comments to my code with the expectation that I could read it later in the future.

A beautiful and gorgeous piece of code that I wrote 3 years ago

Then the next problem came: well-commented code is often tightly coupled with the code itself. Whenever I try to update my function, I often forget to update the comment section, and it suddenly becomes irrelevant. It also creates confusion for the future maintainer (or your future self), since the actual code differs from the written commentary on how the code should actually work.

void f(User& a, User& b, int c) {
    /*
     * This function accepts 3 parameters: a (User), b (User), and c (integer)
     * User "a" (sender) transfers "c" amount of money to User "b" (receiver)
     */
    ...
}

Compare the function above to:

void transferMoney(User& sender, User& receiver, int amount) {
    /*
     * transferMoney: sender transfers the specified amount of money to receiver
     */
    ...
}

Assuming you have not read the comment sections of the code above, which one do you think is easier to comprehend? In the first function, some people might get confused about whether "a" is the sender or the receiver. In addition, the function name is not self-explanatory; a good function name should be self-explanatory and carry the signature of how it should behave. My point here is that comments in code are actually good, but we must not get too dependent on them. Instead, we should try to write self-explanatory code and improve its readability for other maintainers.
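
In the Python used elsewhere in this post, the same contrast might look like the sketch below (the `User` class and function bodies are hypothetical; the point is that the second signature documents itself):

```python
from dataclasses import dataclass


@dataclass
class User:
    balance: int


# Hard to read: terse names force a comment that can rot out of date
def f(a: User, b: User, c: int) -> None:
    a.balance -= c
    b.balance += c


# Self-explanatory: the signature itself says who sends what to whom
def transfer_money(sender: User, receiver: User, amount: int) -> None:
    sender.balance -= amount
    receiver.balance += amount
```

Both functions do exactly the same thing; only the second can be understood at the call site without reading a comment.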

A classical art about code review

I remember my first project that required a code review, about 2 years ago. The project implementation itself finished within 5 weeks, and at the end of the project, 2 other reviewers gave a frigging 113 comments on my pull request. Those comments rarely talked about code logic; instead, they warned me about things like improper variable naming, too deeply nested "if" statements, methods that could be simplified, native libraries that should be used instead of my own implementations, etc. In the end, I feel grateful for those reviews, and I can still read that old code more easily compared to other non-reviewed projects that I did in the past.

3. Hype Driven vs Old Technology

How often do you choose a technology stack based on the current hype? When I studied at the university, I always fell into this particular "trap". There is a good article that coined a new term: "Hype Driven Development". It is indeed arguable that new technologies are better, because they are designed to fix the problems of existing technologies. Of course, I am not saying it is bad to use hyped technologies, but there comes a time when you should choose stable ones instead of the newest ones.

When you want to create a long-term product with continuous development, you need to choose stable technologies that have survived for at least 3 years or so. In addition to their trustworthiness, you need to take into account that other people might join your team and you might also quit developing the product. If you keep insisting on using hype-driven technology, then you are gambling: if the technology survives the battle, you will have a more polished and shiny technology stack; however, if the technology does not survive, has a hidden bottleneck, or releases a new version without backwards compatibility, someone (or even you) will throw your code away and start from scratch, again. How many times have you decided to rewrite a piece of code that you have just seen for the first time onto a new technology stack? :p

A software engineer's main job: rewriting the mess written by previous engineers or by his/her own past self

If your product needs to be shipped as a standalone product, it is also preferable to use a mature technology stack. It is incredibly hard to force your users to use the newest version of your application. If the technology that you used accidentally breaks or has a compatibility problem with a client's environment, it will become a pain in the ass, because they do not care about the hard-won updates that you have prepared for weeks. They will simply say your product is bad, and you will have trust issues if the product is actually part of a big company's offering.

For enterprise-scale applications, the hidden risks of using hype-driven technologies are often bigger than the polished functionalities they offer. A lot of avionics and banking systems still stick to old technologies (10 years or so), and it is not because they do not have good engineers on their side (they have some of the greatest software engineers out there). Even space rockets still use C/C++ nowadays. To put it simply, they want to ensure high availability and reliability by using battle-tested technologies.

4. Think Complex vs Think Simple

This one is actually not a matter of preference but of experience in developing software. I believe all software engineers start from "Think Complex" before trying their best to "Think Simple". Simplicity is far harder than complexity, especially when you need to develop and maintain a big architecture. Of course, I am still trying my best to move from the complex to the simple mindset, so don't worry too much about it :)

For example, you are given a requirement that your application will be used by all ITB students, which is around 50k users, and it should be scalable. So you decide to use a distributed cache layer for read operations, and so on. At another point, you write complex database queries which are actually not required and could be simplified even further. After finishing the implementation, you start thinking about how to invalidate that distributed cache, and the system becomes extremely complicated to maintain. The system indeed looks great, but then you realize it is overkill: a single MySQL server can handle 10k read transactions per second, and there are never 10k ITB students accessing your system at the same time. Then you learn that the worst concurrent case is less than 2% (1k users at a time). You might brag that the system is highly scalable, but no one actually needs it.

From my own experience, I was assigned a task to migrate an entire database of 1 million users, since the old and new systems use different technology stacks. The system had to migrate the data in real time, so that the main system could be migrated a small percentage at a time. The problem was that I had not researched the actual write throughput of the old system, and I designed a slightly too scalable component for migrating the data. After deployment, I realized that the write throughput is far smaller than the read throughput (1 write : 20 reads) and the peak concurrent scenario is around 100 write operations per second. A single server by itself can handle up to 500 write operations per second, and as a result, there are only 2 machines (1 master, 1 slave with several ECS tasks) running in production now. The lesson learned here is that we must not blindly trust the total number of users; instead, we should understand their behavior. An online shopping website should learn how its users behave and increase its servers during commuting time or after-work time, an online ticketing website should increase its servers near holiday periods, and so on.
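
The capacity math in the story above can be checked in a few lines (all figures come from the paragraph; the ceiling-based sizing rule is just a sketch, and the second machine in production is a replica rather than extra capacity):

```python
import math

# Figures from the migration story above
peak_writes_per_sec = 100      # observed peak write load
writes_per_server = 500        # measured single-server write capacity
read_write_ratio = 20          # 1 write : 20 reads

# Naive sizing: how many servers does the write load actually require?
servers_needed = math.ceil(peak_writes_per_sec / writes_per_server)
headroom = writes_per_server / peak_writes_per_sec

print(servers_needed)  # 1 -> a single master suffices for writes
print(headroom)        # 5.0 -> 5x headroom even at peak
```

Had the sizing started from this measurement instead of the total user count, the over-scalable design would have been caught before deployment.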

A good software engineer should think simple and write code which is flexible (modular, with parts independent of each other). They try to meet basic functionality as fast as possible, which reduces the needed development time and, therefore, cost. If you are working for a company, you should realize that time-to-market is extremely important and that the budget is limited, as the company has not received any income from the product yet. If you only want to enrich your portfolio with all the hype-driven technology stacks, then you can take the complex way. However, if you really care about long-term, continuous development, you need to start learning how to think simple.

5. Myth of Code Tests

During my university years, code testing was a myth. In daily classes, the term appeared only once in a blue moon. Even in some startup companies, code tests are probably neglected and there are no unit / integration tests for their products. Of course, those companies might still have Q.A. and testers, but the problem is that they serve a different purpose from unit tests. It is quite understandable, because writing unit tests takes time and startup companies have strict deadlines.

True engineer does not test their code

Have you ever heard about "technical debt"? At some point, software engineers are forced to rewrite some part of the codebase (read: refactor) because of those debts. After a number of commits, some basic functionalities are tested manually and the engineers give a list of changes to the Q.A. However, there is one function which calls the method that has just been refactored, and it is not on the change list. The drawbacks are real: either the Q.A. tests all available scenarios for every small change, or the bug lands in the production environment. Do you know what is worse than that? The engineers cannot trace which particular commit introduced the new annoying bug. Now, imagine if you had code tests which were executed for every pushed commit. You could catch a bug without reading thousands of lines of code, your Q.A. would not need to test all functionalities except at the end of the release cycle, and your product would be more reliable. It would also help other maintainers, since they would know something is wrong if a test fails.
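
A minimal sketch of such a per-commit safety net, using Python's built-in unittest (the transfer function and its behavior are hypothetical stand-ins for the refactored method):

```python
import unittest


def transfer(balance_from: int, balance_to: int, amount: int) -> tuple:
    """Hypothetical function under test: moves amount between two balances."""
    if amount < 0 or amount > balance_from:
        raise ValueError("invalid amount")
    return balance_from - amount, balance_to + amount


class TransferTest(unittest.TestCase):
    def test_moves_money(self):
        self.assertEqual(transfer(100, 0, 30), (70, 30))

    def test_rejects_overdraft(self):
        # A refactor that drops this check fails here, on the very commit
        with self.assertRaises(ValueError):
            transfer(10, 0, 30)


if __name__ == "__main__":
    unittest.main()
```

Wired into CI so that it runs on every pushed commit, a failing test pins the regression to the exact commit that introduced it, instead of leaving the Q.A. to re-test everything by hand.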

My rule of thumb right now is to start with at least 70% coverage for each long-term project that I have / will have. Of course, a higher percentage of coverage is better, but it is often too time-consuming to test some parts of the code. As the Pareto principle suggests, it usually takes 20% of the time to write the first 80% of coverage, while it takes another 80% of the time to reach 100% code coverage. The hardest part is testing the behavior between concurrent threads / processes, so you need to know the trade-off between time, cost, and code coverage.

There are a lot of tools that are free for open source projects out there: CircleCI, TravisCI, Jenkins, etc. Those tools are usually used for continuous integration and deployment in a broader sense, but of course you can use them for running simple tests.


To sum it up, some aspects of software engineering life are completely different from university life. However, there is one thing that I learned the most during my university life: adaptability. In terms of real experience and skill, I believe the order of importance if you want to work as a software engineer is:

  • Internship experience as a software engineer in an enterprise company / writing code for real OSS
  • Internship experience in a startup company / freelance projects
  • Competition experience (it only shows that you have a good foundational knowledge of IT)
  • GPA (it is the last factor, as some universities never do actual software development)

There are a lot of things which were obviously new to me: continuous deployment, log handling, monitoring & alerting systems, etc. The five points above were picked as the aspects which felt completely different compared to my university life. Finally, each person has a different experience and perspective, so if you want to share your thoughts, let's exchange ideas here / via private message :D Thank you for reading this long post!


Iskandar Setiadi
Software Engineer at HDE, Inc.
Freedomofkeima's Github

Author: Iskandar Setiadi - Type: Experience - Date: 9th Dec, 2016


  • Kevin Yudi Utama

    No comments at all is bad too. At least you need to document your high-level model. I forget where I heard it: if you give the same problem and the same resources to different people, the solutions they come up with will most probably be really different. With OOP, programming becomes modeling your solution into objects and the interactions between them. I sometimes find it really hard in codebases where there are lots of classes and even a simple functionality is spread across all these so-called "SRP" classes. The worst part is when there is no document about how they modeled their solution into code.
  • Iskandar Setiadi

    Hi there, thanks for sharing your thoughts! Yes, I totally agree with you that having no comments, and moreover no documentation, is totally bad. The problem is that some people simply write too many comments, and it is either very hard to keep their comments up to date or you need to waste more time reading the entire comment section. If you are interested, there is a good discussion about comments in code here:
  • Kevin Yudi Utama

    Thank you for your response. I have read that article before. I agree that comments must be written sparingly. It is just that many people take advice as a hard rule. So I try to balance that in the comment section. By the way, nice article; I hope you share more of your story soon.
  • Iskandar Setiadi

    Hi, thanks for reading through my long article :) Ah yeah, I usually have a problem writing unit tests for long functions, especially if they have external dependencies. Right now, I always try to write self-explanatory and easy-to-test functions, which usually results in a lot of functions. I agree with several parts of that article, so I believe that I still need to learn when it is better to separate a function and when it is actually better to inline it.
  • Alvin N.

    Thanks for the article! It gives a nice primer on testing, especially for fresh grads / those who are going on an internship. Have you considered using mocking for 'faking' the dependencies in unit testing? From what I know, unit tests should run in a breeze, so that they can quickly catch logical problems with the tested components. This article describes briefly the different types of testing (it's for JS, but the principle applies to any language)
  • Iskandar Setiadi

    Thanks for sharing your thoughts, Alvin! Yes, if we can write a test with real third-party dependencies, we should use them without mocking. I had an experience where I needed to use DynamoDB Local for a unit test, and at one point its conditional-check behavior differed from the real one, which resulted in a passing test but a failure in the staging environment. 'Mock' is a last resort for testing, which is better than nothing. Also, writing a (functional) test from the user's point of view is the hardest part. There are often cases where something works on Firefox but not in IE, or works in IE but not from an embedded web view (e.g.: a popup window in Chrome). Don't get too obsessed with writing all kinds of tests while forgetting the real purpose of the code itself. We need to find the proper balance ourselves.