Saturday, March 12, 2022

Why I Switched Back to Samsung from OnePlus

Every person has their own preferences and needs. Each of us uses our phone in a slightly different manner. I think there is no perfect phone; each one will have some small thing that annoys one person or another.

That being said, at the end of 2019 I switched from Samsung to OnePlus. I was a long-time Samsung Note user, but the OnePlus 7T McLaren promised an interesting opportunity. Now, in spring 2022, I am switching back, to the Samsung S22 Ultra.

The pros and cons, from my personal perspective, are the following.


OnePlus 7T McLaren:

+ Costs only slightly more than half of the Samsung phone.
+ Snapdragon CPU that allows installing Google Camera.
+ No punch holes or cutouts on the screen.
- Very ... very, very, very ... slippery phone.
- No single camera app I can use for both photo and video. Heck, the built-in camera cannot record video without stuttering since the Android 11 update, while Google Camera cannot record slow motion and its stabilization is not very good.
- The proximity sensor is rubbish when talking on the phone. The screen activates all the time and I end up pressing random buttons with my ear.
- The screen has very poor visibility in direct sunlight.
- No S-Pen, and using simple stylus pens with a rubbery tip is OK-ish, but not very precise.
- Battery life is average at best, but it gets through the day.
- No waterproofing.


Samsung S22 Ultra:

+ Amazing screen, good visibility in all conditions.
+ S-Pen ... I've grown used to it over the years.
+ Not slippery, low risk of dropping it when your hands are sweaty.
+ Waterproof.
+ No issues with the proximity sensor.
+ Keeps the screen on while you are looking at it ... I hate that my OnePlus shuts off the screen if I am not touching it while slowly reading through or analyzing something that takes longer than the screen timeout.
+ Good camera with the built-in Samsung camera app.
- Can't set Google Photos as the default gallery for quick previews from the camera app.
- There is a hole in the screen.
- The phone's edges are not rounded, making it less comfortable to hold with a corner in your palm.
- Very expensive.

Conclusion: Pluses here, minuses there. In the end, the Samsung wins for me. Others may prefer other brands, but I am going back to Samsung for the foreseeable future.

Thursday, September 17, 2020

F1 2020 Force Feedback for Logitech G29 on Linux through Steam and Proton 5.0.9

 Hi there,

If you are a Linux user and bought/tried F1 2020 through Steam with Proton 5.0.9 and a Logitech G29 wheel, you probably hit an issue with force feedback. More specifically, there is no force feedback at all. Nada ...

There are vibrations for kerbs and off-track, but no force feedback.

I used a tool called "Oversteer" and the "new-lg4ff" driver to set up my wheel to my liking. However, I observed that when F1 2020 starts, it just resets all FFB settings to ... well ... nothing. And then there is no FFB in the game.

So I opened a terminal and decided to continuously run oversteer, reapplying the parameters I want every 5 seconds. This seems to have enabled some of the effects.

So, try this:

1. open a console and run the command below. Adjust the parameters to your liking.

while true; do oversteer --autocenter 50 --spring-level 40 --damper-level 30 --friction-level 40 /dev/input/event22; sleep 5; done

2. start the game. You may feel that the game resets the settings and the wheel becomes non-resistant when turned. Just wait a few seconds ... and voila ... oversteer will reset the settings in the background.
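One gotcha: the `/dev/input/event22` path in the command above is not stable; the number can change between reboots or when replugging the wheel. A small sketch for finding the wheel's current event device from `/proc/bus/input/devices` (assuming the kernel reports a device name containing "G29", which may differ on your setup):

```shell
# find_wheel_event NAME: read /proc/bus/input/devices-style text on stdin
# and print the first eventN handler listed near a device matching NAME.
find_wheel_event() {
  grep -A 4 "$1" | grep -o 'event[0-9]*' | head -n 1
}

# On a real system you would run:
#   find_wheel_event 'G29' < /proc/bus/input/devices
# and then pass /dev/input/<result> to oversteer.
```

This is just a convenience; you can equally well eyeball the `Handlers=` line in `/proc/bus/input/devices` yourself.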

What I felt working:

1. auto center
2. on-track effects - maybe - when I was going really fast in a straight line I had to keep the wheel from moving left and right

What I didn't feel:

1. sliding effect
2. understeer effect

So, it is far from perfect, but it makes the game significantly more enjoyable.

And now the links to the resources:

- oversteer page:

- new-lg4ff driver page:

- if you are on OpenSuse, try the new-lg4ff RPM instead from here:

Please note that the RPM won't be able to delete your existing, manually installed driver. So just delete that one manually first. Follow the error messages for more details.

- more details with issues and solution about F1 2020 on Linux through Steam and Proton here:


Saturday, May 23, 2020

Garmin Venu - An In-Depth Review and Personal Opinion

Disclaimer: This post tries to be as objective as possible, but it contains personal opinions. You may or may not agree with these opinions and views, and it is up to you to decide what importance they have for you.

[UPDATE 2020, August 23rd]: I have had my watch for four months now, and I have settled into a nice daily usage pattern. This also highlighted the excellent battery life the Venu has. So, here is my latest battery usage, from 100% to 1%. Total time: 3 days, 10 hours.
And here are the features I used in these three and a half days. This is quite typical for me nowadays.
- Outdoor activity tracking ~5 hours: biking, running, gardening - all these use GPS
- Swimming tracking ~40 minutes: I've put this separately because swim tracking is especially battery hungry. These 40 minutes of tracking ate up 10% of the battery. The GPS and HR sensors work overtime in the water.
- Indoor activity tracking ~5 hours: cooking, cleaning, vacuuming, house repairs - all these do not use GPS but track all other stuff, including heart rate, steps, calories, fluids lost (perspiration), etc.
- SpO2 (blood oxygen saturation) tracking active through the night, from 22:00 to 06:30 hours.
- Heart rate monitor active 24/7
- Automatic activity detection active

I am very pleased that with all these features running, and after tracking so many activities, the watch still lasts a solid 3 days. And if you wake up in the morning with only 15% battery and have no time to charge the watch before going to work, you can still rest assured that it will last the next 8-10 hours ... unless, of course, your work involves swimming ;)

[UPDATE 2020, July 18]: After the 4.90 software update, the loading time of most IQ Store 3rd-party watch faces is very fast. Some are as fast as built-in watch faces, while more complex ones load in about 800-1000 ms. In any case, the 2-5 second wait is gone even for the most complex watch faces.

Original post below:

After being an avid Samsung fan for years, I got disappointed with my Galaxy Watch because of its hardware quality and the lack of software improvements I was interested in. In less than a year and a half it had all its internal components serviced under warranty because it simply broke. It had its display replaced, then its motherboard and battery. To be honest, I never had another Samsung device that I had to send back for such extensive repairs after such a short period of use.

Let's clarify some things before going any further.

  • Garmin Venu is an activity and fitness tracker with smartwatch features. It is not a smartwatch.
  • Samsung Galaxy Watch is a smartwatch with fitness and activity tracking features. It is not a fitness tracker regardless of what Samsung is claiming.

Now, you may say OK, OK ... but what's the difference?

Well, the difference is that the Galaxy Watch has a powerful CPU, smooth animations, a microphone, speakers, a virtual keyboard, and other gadget-like features. It also has a very user-friendly, intuitive UI, a rotating bezel, and so on. And in this area, it is a great product.

However, when it comes to activity and fitness tracking, the Galaxy Watch is light years behind the Garmin Venu. Samsung is probably the best among smartwatches when it comes to activity tracking (I don't know about Apple, sorry), but it is still a smartwatch that tries to do fitness tracking, nothing more.

The Garmin Venu has a slower CPU, laggy animations, and a basic user interface. Garmin has barely taken its first steps in the smartwatch world. The OLED display is visually great. Watch faces are plentiful in their market; however, all watch faces are very slow, except the preloaded ones. Those are optimized for the Venu and work really well. You can even set which extra data fields to show. However, these preloaded watch faces are fairly basic. If you are into data galore and want a fancy watch face, you will find one. Even Garmin has some cool ones. But be prepared to wait 2-5 seconds from the moment you raise your wrist until the watch face appears.

Now, let's talk about activity tracking and fitness... This is where the Garmin Venu really shines.

  1. Garmin made great strides to offer you all the statistics and graphics you can imagine. And they made sure that you can access them from all your devices: watch, phone, and PC. Maybe I am an old school guy, but I prefer to analyze the data and check statistics and graphs on a large computer display. Garmin Connect web interface is amazing. Samsung didn't even bother to do something similar, and Google Fit retired their web interface. They are smartphone companies and they want you to buy both their phones and watches. Garmin doesn't care. They don't have phones. They want to offer you the maximum they can using any phone or PC.

    Not only does the web interface let you see all your stats in numbers or graphs, but it also lets you create custom dashboards with cards of your choosing. And believe me, you have an incredible number of cards available. I have two dashboards, each with about 24 different cards.
  2. Regardless of how active you are during a day, I consider hydration an important aspect of my life. The Garmin Venu offers hydration targets and notifications. And not just a static target: you can select a base value that will increase in accordance with your activities. For example, my base target is 2.5 litres of water per day. When I do an activity, the Garmin Venu will also estimate how much liquid I lose due to perspiration (sweating) and adjust my target for the current day. One very important thing is that the watch notifies you to drink some water when you are falling behind. It is an automatic algorithm. I don't know how it works, but I observed that if I drink more frequently, it notifies me less. So, it does some adjustments. And adding hydration to your profile is very easy from the watch: there is a dedicated widget for it. Because it is probably the most frequently used one for me, I've set it to be the first widget when I swipe down. This way adding to my hydration is very easy and fast: swipe down, touch the "+" sign. Done in half a second. Unfortunately, the Samsung Galaxy Watch and its predecessors never adjusted your daily goal. I requested this feature about 5 years ago, but never got an answer or an implementation. So, a big plus to Garmin on this feature.
  3. Training plans by actual athletes. This is really cool. I tried to form a running habit with my Galaxy Watch, but it was not a pleasant experience. There are some programs in Samsung Health, but they are very basic and general. I never could find one that fit me: they were either too easy or too hard. With Garmin Connect and the Venu watch, I could actually select a trainer who is an Olympic athlete. And the workouts are tailored to each individual. I am not saying that the person tailors them to everyone; rather, at the beginning you do some baseline runs and an algorithm at Garmin adjusts the workouts defined by the athlete to your capabilities. After about 3 weeks, I find that each workout is somewhere between medium and hard difficulty. Every week, the last workout is one you are encouraged to do at your own pace, and I suspect it serves as a baseline for the next week's workout targets. This special workout usually has a distance target, but no pace or time target. In case of bad weather, you can reschedule the workouts in your calendar. Or if you get sick, you can pause the whole training plan and resume after you get well again. Ah, and I forgot to mention: these training plans are free, no extra payment needed.
  4. And with Garmin watches in general, and the Venu in particular, you can track just about any activity you are doing. Some activity trackers are made by 3rd-party developers and may cost money. I personally prefer the activity tracking apps by a developer called fbbbrown. For about a $5 yearly subscription you get access to all his apps. And then you can track almost any activity, from gardening to shopping, through kick-scootering, snow shoveling, or vacuum cleaning.

    Of course the Garmin Venu comes preloaded with about 20-30 activity trackers by Garmin. So you are covered for most of the basic stuff like biking, running, walking and indoor fitness exercises.
  5. OK. So, once in a while you just need to figure out a direction by compass. I usually need a compass once a year or so. If you do a lot of hiking, you may need it more frequently. It is not something deal-breaking or extraordinary, but it is comforting to know that there is a compass on your wrist all the time.
  6. Now, let's take a rest. Sleep tracking is something very cool to have. While the Samsung Galaxy Watch can automatically determine when you go to sleep, the tracking is not very precise. Sometimes it said I was asleep when I was just stationary, reading a book or something similar. And when I was asleep, the data was OK, but not extraordinary. The Venu monitors much more, for example blood oxygen saturation and body battery recharge. These are great pluses. On the minus side, the Venu does not automatically detect sleep outside of a specified time interval. For example, I have my sleep interval set from 22:00 to 07:00. If I take a nap at 14:00, it will not register it. But still, it will sense that you are stationary with a low heartbeat, and the body battery will stay flat or even increase. If I go to sleep at 22:30, that is OK, it will detect it correctly and set the sleep start to 22:30. If I sleep until 8:00, one hour outside of the target interval, that is OK as well; it will detect it correctly. However, I think the blood oxygen saturation monitoring stops at 7 in the morning.
  7. So, I mentioned the body battery. What is it? It is an estimation of how much energy you have left in your body. And to my surprise, it is quite accurate. When I feel energized and rested and I check the body battery, it shows a value above 50%. When I feel tired and I check it, it is usually below 30%. It is also very helpful to follow during the day, to see when you need to rest. Sometimes I don't realize I did an activity that consumed quite a lot of energy and I need a few minutes of rest. There are also suggestions after each day regarding your body battery. For example, it may say you didn't rest enough, or that you rested very well, or that you had a good rhythm of charge/discharge. The energy consumption takes into account a lot of factors like heart rate, stress level, activities, and so on. For example, I can be stationary for half a day but very stressed out, and my body battery will go down a lot.
  8. Of course the watch has some automatic activity detection. However, it is limited to walking and running. Of these two, running detection is useless because you most probably want to start a running workout or training plan manually anyway. Samsung has biking detection, which works fairly well, but if you are pushing anything else, like a kick scooter or a lawn mower, it will misfire and record it as cycling. Still, cycling auto-detection would have been nice on the Garmin Venu, even though I prefer to start it manually and have my cycling workouts planned and predefined.
  9. And if you are into biking or any other sport for which Garmin has extra sensors, these can be added directly to the watch. For example, a cadence sensor for biking, or speed sensors, or a heart rate sensor (if you want something more precise than the wrist-based one), and so on. Of course there are other accessories as well. I am really tempted to buy a Garmin Edge 830 for my bike. Sensors can then be paired either with the watch or with the Edge. I am not yet sure how workouts are registered when, for example, speed and cadence go to the Edge and pulse to the watch. I guess Garmin's ANT+ or broadcast features can be used to send this kind of data between your devices.
  10. And finally, let's talk about software. So far, in one month, I have had about 3-4 software updates. The watch came with an older software version, and the first update took a very long time to transfer to the watch. So be patient: it can take hours for Garmin Connect (the Android companion app) to send the update to your watch. The more frequent, smaller updates pop up quickly on the watch. I don't know how long they take to transfer; I just update when I am notified. These frequent updates give me a feeling of security. Garmin seems to be a company that actively develops and invests a lot in software. The watch firmware, the Garmin Connect Android app, the web interface: they all look great and work well. Of course, some people have issues. Of course it doesn't work perfectly for everybody. Of course there is always room for improvement. But compare this with other vendors, like Samsung, or Apple, or Huawei, or any smartwatch vendor, where you will get maybe a couple of updates per year for a couple of years. I remember I was amazed and happy that Samsung pushed an update to my Galaxy Watch after a year with features backported from the Galaxy Active. The same was true with my Samsung S2 watch ... but man ... they pushed these updates after years, and labeled them as "value packs". Garmin seems to have a different approach, and I applaud it. And there is also an app store, called the IQ Store, which is full of useful apps. Most of them are free, and the few really great ones are paid. Prices, however, are very low. I mean, the extra tracking apps from the developer I mentioned above cost $5/year. C'mon ... there are watch faces more expensive than that on the Samsung Galaxy Store. Ah ... and you DO NOT have to install the IQ Store Android app on your phone. From Garmin Connect you can go to the IQ Store webpage and install apps from there.
Heck, they even have a web version of the IQ Store you can access on a PC and install apps and widgets from your PC directly onto your watch (similar to how you install Android apps from the web Google Play Store onto your phone).

Conclusion: I personally consider the Garmin Venu a superior product overall compared to the Samsung Galaxy Watch and other smartwatches. I don't say one is better than the other, but the Garmin Venu certainly fits my needs way better than any smartwatch currently available on the market. I especially appreciate the activity tracking, sensor precision, long battery life, and frequent updates. Others may appreciate different aspects of a watch, and they will choose Samsung, Apple, Huawei, or others. That is perfectly fine.

All I wanted to do with this article is to highlight the pluses and minuses of the Garmin Venu compared to Smartwatches in general, and Samsung in particular.

Neither is better or worse than the other. They are different products, with different goals, targeting different people.

I hope this article will help you choose the one that best fits you.

Monday, April 27, 2020

How Samsung Lost Me as a Customer

I think I had the most Samsung devices in my home about 3-4 years ago: a Samsung TV, Samsung PC display, Samsung smartphone, Samsung smartwatch, Samsung vacuum cleaner, and maybe some other products I can't recall right now.

However, slowly, I realized that Samsung is not the company I liked for years and years, and that their products, though of fairly good quality, are too expensive.

For household items (not gadgets), when I had to buy something, I ended up buying Philips products without realizing it: vacuum cleaner, electric toothbrush, hair clipper, kitchen appliances. It was not a conscious decision. I didn't even notice it until a week or so ago.

Then, in March 2020, I realized that my perfectly fine Samsung Note 8 wouldn't get any more updates. No Android 10. I looked at the S20 and realized that it had become insanely expensive. So, I ditched Samsung and bought a OnePlus 7T McLaren at half the price of the S20 Ultra.

Then, in April 2020, my Galaxy Watch broke, for the second time in a year and a half. I decided that it is just bad. Period. I ended up buying a Garmin Venu, and it serves my needs much better. Hopefully the Galaxy Watch will be fixed under warranty and I can recover some of its value.

My Samsung TV started acting up lately as well. And it is only 3-4 years old and used at most 2 hours a day. I was never really pleased with it, so when it dies, a Philips will probably replace it.

So far, my 3-4 year old 32" Samsung PC display works very well. But I have a feeling that the next one will be something else.

All in all ... high prices for products of average quality or performance made me lose my trust in Samsung. Samsung became the next Apple in my eyes. That's sad.

Friday, July 22, 2016

Review: Effective Communication Skills

Effective Communication Skills Effective Communication Skills by Dalton Kehoe
My rating: 5 of 5 stars

Impressive course, by all means.
1. The first part is more academic. It contains conclusions and presentations of a lot of scientific studies. If you are familiar with psychology and in general with the way our mind works, this part may be somewhat boring. For me, it was a nice reminder of all the psycho-stuff.
2. Then there is a great deal about personal communication: about how we communicate, and about how we see ourselves and the other person while talking. It contains a lot of good tips to apply in personal and family communication.
3. Finally, the last part of the course concentrates on workplace communication. There are quite a lot of valuable recommendations for managers and leaders, as well as for communicating with other team members who work at the same organizational level as you.

Thank you, Dalton Kehoe.

View all my reviews

Thursday, May 12, 2016

The Future of Continuous Integration

I am wondering if I will look back at this post in five years with a smile or a frown. Foreseeing the future in IT is very difficult. Other industries change at a rate of one significant change every 50 years. When was the last revolution in excavator technology? When was the last revolution in steel processing? When was the last revolution in road building? We are more or less using the same materials and techniques as 50 years ago. Yes, we can do all the things mentioned above faster, at higher quality, and at lower cost. But we mostly improved some really solid and tested processes.

Computers didn't even exist 50 years ago. Well ... there were some around, but let's say they were a toy for scientists rather than machines of mass production. Still, they existed. The first concepts of software development were put in place. The first paradigms of software development were defined.

In the late 1950s, Lisp was developed at MIT as the first functional programming language. It was the only programming paradigm that could be used. All computers, few as they were, were programmed using functional programming.

Twenty years later, structured programming started to gain traction with support from IBM. Languages like B, C, and Pascal started to emerge. Let's consider this the first revolution in software development. We started with functional programming, and then we got structured programming, something totally different. It was groundbreaking, and it took about 20 years to emerge. While this seems a long time now, it was less than half the 50-year rate of industrial revolutions.

The fast pace of evolution in software continued exponentially. It was about, or even less than, ten years later when Smalltalk was made public for a wide audience, in August 1981. Developed by Xerox PARC, it brought the next big thing in computer science: object-oriented programming.

While some other paradigms came along in the upcoming years, these three remained the only ones with wide adoption.

But what about hardware? How far did we come on hardware?

How many of you can remember the very moment when you interacted with a computer for the first time? Let your memory bring back that moment. Remember what you did, who you were with... A friend? Maybe your parents? Maybe a salesman trying to convince your parents to buy a computer? It doesn't matter. Remember that very moment. Remember that computer. Remember the screen. How many colors did it have? Was it a green-on-black text console, or a high-resolution CRT, or a FullHD widescreen? What about the keyboard? The mouse ... if invented at that time. What about the smell of the place? What about the sound of the machine?

Was it magical? Was it stressful? Was it joyful?

I remember... It was about 30 years ago. My father took me to the local computer center, his workplace. Yes, he is a software developer, one of the first generations in my country (Romania). We played. It was a kind of Pong game. On a black background, two green lines lit up at each side.

It looked similar to this image, though this seems to be highly detailed graphics compared to the image of my memories. And it was running on something like this.

Well, it wasn't this particular computer. Not even an IBM. It was a copy of capitalist technology developed as a proud product of a communist regime. It was a Romanian computer, a Felix.
The Felix was a very small computer compared to its predecessors. It could easily fit into a single large room, maybe 30-40 square meters. And it even had a terminal where you could see your code. Why was this such a big revolution? It's just a screen and a keyboard, after all. Yes, but your code went directly onto magnetic tape, and then, in just a couple of hours, you could run your program. That is, if you made no typos.

Before the magnetic tape and console revolution, there were punch cards and printers. Programmers wrote their code on millimetric paper, usually in Fortran or other early languages.

Then someone else, at a punch card station, typed in all the code. Please note, the person transcribing your handwriting into computer language had little computer or software knowledge. It was a totally different job. Software developers used paper and pencil, not keyboard and mouse. They were not even allowed to approach the computer.
The result was a big stack of punch cards like this.

Then these cards were loaded into the mainframe, by a computer technician.

Overnight, the mainframe, the size of a whole floor and requiring several dedicated power connections directly from the high-power grid, processed all the information and printed the result on paper.
The next day, the programmer read the output and interpreted the result. If there was an error, a bug, a typo, the whole stack had to be retyped, because punch cards were sequential. If you were lucky, you could find a fix that affected only a small number of cards: a fix that required the exact same amount of characters and worked with the exact same region of memory.

In other words, it took a day or more to integrate the written software with the rest of the pieces and compile something useful. Magnetic tape reduced that to a few hours. Hard disks and more powerful processors in the '90s reduced that further to tens of minutes.

I remember when I installed my first Linux operating system. I had an Intel Celeron 2 processor. It was Slackware Linux, and I had to compile its kernel at install time. It took the computer a few hours to finish. A whole operating system kernel. That was amazing. I could let it work in the evening and have it compiled by the morning. Of course I broke the whole process a few times, and it took me about 2 weeks to set it up. It seemed so fast back then.

I work at Syneto. Our software product is an operating system for enterprise storage devices. That means a kernel, a set of user space tools, several programming languages, and our management software running on top of all these. We not only have to integrate the pieces of the kernel to work together, but we have to integrate the C compiler, PHP, Python, a package manager, an installer, about two dozen CLI tools, about 100 system services, and all the management software into a single entity that works as a whole and is more than the sum of its parts.

We can go from zero to hero in about an hour. That means to compile everything from source code. From kernel to Midnight Commander, from Python to PHP. We even compile the C compiler we use.

But most of the time we don't have to do this. This is an absolute overkill and waste of computing resources. We usually have most of the system already compiled, and we recompile only the bits and pieces we recently changed.

When a software developer changes the code, it is saved on a server. Another server periodically checks the source code. When it detects that something has changed, it recompiles that little piece of the application or module. Then it saves the result to another computer, which publishes the update. Then yet another computer does an update so that the developer can see the result.
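Reduced to its essence, that chain is just a polling loop. A toy sketch of the "periodically check, rebuild on change" step (my assumptions, not our actual setup: a git remote named `origin`, a `main` branch, and a `make`-based build; real CI servers are of course far more elaborate):

```shell
# needs_rebuild LAST HEAD: succeed (exit 0) when the remote head is known
# and differs from the last commit we built.
needs_rebuild() {
  [ -n "$2" ] && [ "$1" != "$2" ]
}

# Toy CI poller built on top of it (sketch only, do not run as-is):
#   last=""
#   while true; do
#     # git ls-remote prints "<sha>\t<refname>"; keep just the sha.
#     head=$(git ls-remote origin refs/heads/main | cut -f1)
#     if needs_rebuild "$last" "$head"; then
#       git pull --quiet origin main && make && last="$head"
#     fi
#     sleep 60
#   done
```

A real CI server adds queuing, isolated build environments, test reporting, and artifact publishing on top of exactly this loop.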
What is amazing in this scheme is how little software development changed, and how much everything else around software developers has changed. We eliminated the technicians typing in the handwritten code ... we are now allowed to use a keyboard. We eliminated the technician loading the punch cards into the server ... we just send the code over the network. We eliminated the delivery guy going to the customer with the disc ... we use the Internet. We eliminated the support guy installing the software ... we do automatic updates.

All these tools, networks, servers, and computers eliminated a lot of jobs except one: the software developer. Will we become obsolete in the future? Maybe, but I wouldn't start looking for another career just yet. In fact, we will need to write even more software. Nowadays everything uses software. Your car may very well have over 100 million lines of code in it. Software controls the world, and the number of programmers doubles every 5 years. We are so many, producing so much code, that reliance on automated and ever more complex systems will only grow.
Five years ago, Continuous Delivery (or Continuous Deployment) was a myth, a dream. Fifteen years ago, Continuous Integration was a joke! We were doing Waterfall. Management was controlling the process. Why would you integrate continuously? You do that only once, at the end of the development cycle!

Agile Software Development changed our industry considerably. It communicated in a way that business could understand. And most businesses embraced it, at least partially. What remained lagging behind were the tools and technical practices. And in many ways, they are still light years behind in maturity compared to organizational practices like Scrum, Lean, Sprints, etc.

TDD, refactoring, etc., are barely getting noticed, far from mainstream. And they are even older than Agile! Continuous Integration and Continuous Delivery systems are, however, getting noticed. Their big advantage over other software technologies is that business can relate to them. We, the programmers, can say: "Hey, you wanted us doing Scrum. You want us to deliver. You will need an automated system to do that. We need the tools to deliver you the business value you require from us at the end of each iteration."

Technical practices are hard to quantify economically, at least immediately or tangibly. Yeah, yeah ... we can argue about code quality, and legacy code, and technical debt. But they are just too abstract for most businesses to relate to in any sensible manner.

But CI and CD? Oh man! They are gold! How many companies deliver software over the web as webpages? How many deliver software to mobile phones? The smartphone boom virtually opened the road ... the highway ... for continuous delivery!

Trends for "Smartphone"
Trends for "Continuous delivery"
Trends for "Continuous deployment"

It is fascinating to observe how the smartphone and CD trends tipped together in 2011. The smartphone business embraced these technologies almost instantaneously. CI, however, was unaffected by the rise of smartphones.
[Google Trends chart for "Continuous Integration"]

So what tipped CI? There is no Google Trends data earlier than 2004. In my opinion, the gradual adoption of Agile practices is what tipped CI.
[Google Trends chart for "Agile software development"]

The trends show the same growth. They go hand in hand.

Continuous deployment and delivery will soon overtake CI. They are getting mature and they will continue to grow. Will CI have to catch up with them? Probably.

Continuous integration is about taking the pieces of a larger piece of software, putting them together, and making sure nothing breaks. In a sense, CI wraps your technical practices in a business value. You need tests for the CI server to run. You might as well write them first. You can do TDD, and the business will understand it. The same goes for other techniques.

Continuous deployment means that after your software is compiled, an update becomes available on your servers. Then the client's operating system (e.g. Windows) shows a small pop-up saying there are updates.

Continuous delivery means that after the previous two processes are done, the solution is delivered directly to the client. An example would be the Gmail web page. Do you remember it sometimes saying that Gmail was updated and you should refresh? Or the applications on your mobile phone: they update automatically by default. One day you may have one version, the next day a new one, and so on, without any user intervention.

Agile is rising. It is starting to become mainstream. It is getting out of the early adopters category.

Follow the blue line in the Law of Diffusion graph above. Agile is in the early adopters stage, but it will soon rise into the majority sections. When that happens, we will write even more software, faster and better. We will need more performant CI servers, tools, and architectures. There are hard times ahead of us.

So where to go with CI from now on?

Integration times went down dramatically in the past 30 years: from 3 days, to 3 hours, to 30 minutes, to 3 minutes. Five years ago I worked on a project whose result was a 100 MB ISO image. From source to update took about 30 minutes. Today we have a 700 MB ISO, and it takes 3 minutes. That is roughly a 70x throughput increase in the past 5 years alone (seven times the output in a tenth of the time). I expect this trend to continue exponentially.

In the next five years build times will shrink. Smaller projects will achieve true continuity in integration. You will be able to see the changes you make to a project almost instantaneously. The whole cycle described above will be in the order of 3-15 seconds.

At the same time the complexity of the projects will rise. We will write more and more complex software. We will compile more and more source code. We will need to find ways to integrate these complex systems. I expect a hard time for the CI tools. They will need to find a balance between high configurability and ease of use. They must be simple to be used by everyone, seamless, and require interaction only when something goes wrong.

What about hardware? Processing power is starting to hit its limits. Parallel processing is rising and seems to be the only way to go. We can't make processors faster, but we can throw a bunch of them into a single server.

Another issue with hardware is how fast you can write all that data to disk. Fortunately for us, SSDs are starting to take over from HDDs for everyday data storage. Archiving seems to be staying on rotating disks for the next 5 years, but we are hitting the limits of the physical material there as well. And yes... humanity's digital data grows at an alarming rate. In 2013, the digital universe was 4.4 zettabytes. That is 4.4 billion terabytes! By 2020 it is estimated to be 10 times more: 44 zettabytes. And each person on the planet will generate on average 1.5 MB of data every second. Let's say we are 7 billion people; that is 10.5 billion MB of new data every second, roughly 630 billion MB every minute, and almost 38 billion GB every hour. In other words, on the order of 0.9 zettabytes each day, if that per-person rate held around the clock.

It is estimated that in 2020 alone we will produce another 40 zettabytes of data, effectively doubling the enormous quantities we already produced. The trick with the growth of the digital universe is that it grows exponentially, not linearly. It is like an epidemic. It doubles at ever faster rates.

And all that data will have to be managed by software you and I write. Software that will have to be so good, so performant, so reliable, that all that data will be in perfect safety. And to produce software like that we will need tools like CI and CD architectures that are capable of managing enormous quantities of source code.

What about AI? There have been some great strides in artificial intelligence lately. We went from basically nothing to a great Go player. But that is still far from real intelligence. However, the first signs of AI applied to CI were prototyped recently. MIT released a prototype software analysis and repair AI in mid 2015. It actually found and fixed bugs in some pretty complex open source projects. So there is a chance that by 2020 we will get at least some smart code analysis AIs able to find bugs in our software.

If you are curious about more on this topic, or simply want to share your view, I invite you to my keynote speech at DevTalks Bucharest, Romania, on June 9th 2016. As always, I will be open to discussing this and other IT, software, and hardware topics throughout the event. Just ping me on Twitter if you are around.
 DevTalks 2016 Bucharest Romania

Friday, April 29, 2016

Review: Steal the Show: From Speeches to Job Interviews to Deal-Closing Pitches, How to Guarantee a Standing Ovation for All the Performances in Your Life

Steal the Show: From Speeches to Job Interviews to Deal-Closing Pitches, How to Guarantee a Standing Ovation for All the Performances in Your Life Steal the Show: From Speeches to Job Interviews to Deal-Closing Pitches, How to Guarantee a Standing Ovation for All the Performances in Your Life by Michael Port
My rating: 5 of 5 stars

I have some speaking experience and I wanted to improve. I needed new ideas and some help with issues I found in my talks. The first part of the book was somewhat boring for me, but the rest was amazing. It is a really good book, with ideas that apply in a lot of circumstances. From speaking on stage to thousands of people to speaking to your wife in private, there will be something for you in this book.
I listened to the audio version of the book, but the written one would probably be a better choice as there are a lot of things you will want to revisit from time to time, and searching audio books is just too difficult for me.

View all my reviews

Wednesday, April 20, 2016

Your Career - Five Years in The Making

About two years ago I read a statement from Brian Tracy that seemed extremely bold at the time. He said that you can go from novice to worldwide recognition in five years.

Of course this won't happen magically. You have to work for it. You have to learn and invest your time and effort into it.

I started my professional career as a software developer in mid 2009. Before that, I was a systems and network administrator and did only occasional software development. By any standard I was a novice software developer. I knew the very basics. I wrote a few irrelevant applications. I always programmed alone. I never worked in a team. I never even bought or read a programming book. All I knew was what I learned during my university studies and whatever tutorials I read on the Internet.

It just happened that I got a software developer job at Syneto. They needed someone with strong networking skills. I was open to diving deeper into software development. I was the perfect match for their requirements at the time. I had no idea how much my life would change in the upcoming years.

Without going into too many details, I have to mention that Syneto went through a huge agile transformation in the two years after I arrived. We learned a lot, both as a company and as a team. Throughout this period I read about ten programming books and applied most of that knowledge to our storage project.

But what good is experience if you don't share it with others? We gradually got involved in the local agile community in my town, Timisoara. I delivered my first speech to the local community about two and a half years after I started my software development career.

Brian Tracy says you need two years to get local recognition, three-four years to get national recognition, five years to get global recognition.

By the time I had four years of experience in software development, I held my first speech at a national software conference. In fact, the conference was international, but held in my country, Romania. I remember how proud I was to speak at a conference alongside legendary software developers like Michael Feathers.

The very next year, however, I made the huge leap to speaking at the world's largest agile conference, Agile2015, in Washington DC, US. At the time I spoke there, I had been at Syneto for 5 years, one month, and 3 days. It was only a 30 minute speech, but nonetheless it was at the highest level, at the greatest conference.

Today, I am preparing my second speech for a conference organized by the Agile Alliance. I will speak at Agile2016, in Atlanta, US. This time, however, it is a full 75 minute talk for a larger audience.

Check out my session and reserve a seat for July the 27th, Wednesday, at 2PM, in Atlanta, US.

Tuesday, February 3, 2015

2nd of 3 Books That Changed My Life: @ericevans0's Domain Driven Design

I was thinking lately that of all the books I've read related to my professional life and career, there are three that stand out. I cannot decide which one had a bigger impact, because each affected a different part of my life. So no one of them is better than the others. I will write a blog post about each book, presented in the chronological order I read them.


One of the most difficult books to read, yet one of the most enlightening, Domain Driven Design by Eric Evans is second on my list of three books that had a major impact on my professional life.

This is a book that takes software development to a totally different level. It seemingly leaves most technicalities behind and views the whole software from a much higher vantage point.

Imagine your source code as a balloon filled with air. It sits between two major actors of our industry: the software developers on one side, and the business people on the other. Or, if you take the people out of the picture, software production versus business domain.

In such a setting, Domain Driven Design pulls a part of the balloon toward the business people, toward the domain, while anchoring its other side in the software production department. It tries to fuse business with software, both by pulling simple software concepts like modules, classes, dependencies, and functionalities into the business, and by pulling business concepts into the source code.

As a software developer, I was more concerned and intrigued by the introduction of business concepts into the source code. At Syneto we work on Storage OS, an operating system for storage devices. We are both the software developers and the domain experts. So we could not pull software concepts into our domain; we already knew all the programming related concepts. But we could start working toward representing domain concepts in our code.

This had a major impact on our architecture and module structure. We started by implementing the Repository design pattern learned from Domain Driven Design. This opened up some interesting possibilities. It forced us to have each of our modules represent a domain concept. As we mostly work in PHP, our modules are simple directories. Each module represents a domain concept and has a repository. The repository can provide and save objects. It's not a generic ORM though; it is more like a domain specific query language. And what kind of objects should such a repository provide? Domain objects. These objects represent a more specific part of the domain.

For example, we can have a Network module. In this module we can have several repositories, like NetworkAddresses or HardwareLinks. A NetworkAddresses repository can provide NetworkAddress objects. A NetworkAddress object represents a unique combination of IPv4 address, IPv6 address, subnet mask, and name. The HardwareLinks repository may provide Link objects. These represent the state of a network link: type - ethernet or fibre channel -, cable plugged or unplugged, link speed, frame sizes, etc. These are value objects, representing state. But we also have entities representing functionality, like applying a NetworkAddress to a specific HardwareLink. This results in a setting on the operating system, which assigns the IP address and subnet mask to a network link on a physical network card.
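To make the idea concrete, here is a rough sketch in PHP of a value object and a repository with a domain-specific query. The class and method names below are illustrative only, not taken from Syneto's actual Storage OS code base:

```php
<?php

// Hypothetical sketch: names are illustrative, not Syneto's real code.

// A value object: an immutable combination of address data.
class NetworkAddress
{
    private $ipv4;
    private $netmask;
    private $name;

    public function __construct($ipv4, $netmask, $name)
    {
        $this->ipv4 = $ipv4;
        $this->netmask = $netmask;
        $this->name = $name;
    }

    public function ipv4()    { return $this->ipv4; }
    public function netmask() { return $this->netmask; }
    public function name()    { return $this->name; }
}

// A repository exposing a domain-specific query, not a generic ORM call.
class NetworkAddresses
{
    private $addresses = array();

    public function add(NetworkAddress $address)
    {
        $this->addresses[] = $address;
    }

    public function findByName($name)
    {
        foreach ($this->addresses as $address) {
            if ($address->name() === $name) {
                return $address;
            }
        }
        return null;
    }
}
```

The point is that `findByName()` speaks the language of the domain, while how the addresses are stored stays a hidden detail of the repository.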

I will stop now and let you read and discover the mysteries of Domain Driven Design.


Read also: 1st of 3 Books That Changed My Life: @unclebobmartin's Agile Principles, Patterns, and Practices in C#

Tuesday, January 20, 2015

Galaxy Note 4 - After the First Eight Weeks

I got my Galaxy Note 4 about eight weeks ago, on December the 2nd 2014, and I waited some time for the placebo effect to subside before writing about my experience with it. My previous phone was a Galaxy Note 2, so most of my comparisons and impressions will relate to that device.

The Exterior

I love how the Note 4 looks. I ordered the bronze gold version. The color you perceive is actually very much influenced by the lighting conditions. You will see it brown under a 1600W white light bulb. You will see it gold, and quite yellowish, under a 60-100W light bulb. You will see it pinkish magenta under natural light with a clear sky but in the shadows. And it actually looks bronze gold under direct sunlight.

The Note 2's rounded form never really attracted me. I bought it for the big screen, not for the rounded corners. The Note 2 was inspired by natural objects: a stone, an egg, a leaf. The Note 4 is a totally different story. The much less rounded corners and 45 degree angled flat edges give the Note 4 a futuristic look. It looks like a modern electronic device, not something resembling nature.

But the sharp angles give the Note 4 a big handicap compared to the Note 2: it is much harder to fit into your pocket. The Note 2, with its rounded form, slid into any pocket with ease. I used to keep it in the front pocket of my blue jeans, and while sitting, the Note 2 felt comfortable pressing against my leg. The Note 4 is much harder to push into the pocket, and its right angles produce discomfort after some time. Having it in my pocket for 10-15 minutes is OK, but I wouldn't think of keeping it there much longer.

When you hold the two phones, they provide totally different experiences. The Note 2's grippy, glossy cover and rounded edges encourage you to keep it laid in your palm. As it won't slide out of your hand, you can do this comfortably even at angles greater than 45 degrees. The Note 4 feels very different. Its sharp edges encourage you to grab it by the sides, and it stays in your grip with little effort. The faux leather back slips on the skin more easily than the Note 2's, so you won't let this phone just sit in your palm. Which one is better? I don't know. They are two different experiences, and I like how each one feels in its own way.

The Hardware

Regardless of the version you choose, the Note 4 is a beast. The Qualcomm CPU is slower, but it has faster 4G. The Exynos CPU is faster, but it has slower 4G. As I mostly use the Internet over Wi-Fi, I chose the Exynos variant and I am completely amazed. Any game runs very smoothly and loads blazing fast. I didn't play Asphalt 8 on the Note 2, but I play with a friend of mine who has an LG G3. On the Note 4 the game loads about twice as fast and runs somewhat smoother. Both phones can run the game with amazing graphics at the most demanding settings. I am very pleased with the speed of the CPU and GPU.

Now let's talk about the screen. Some may think it's too big, but I've never met anyone who bought a large screen phone (phablet) and then reverted to a smaller screen on their next phone. The colors of the Super AMOLED 2K display are very good and much more natural than the Note 2's screen. But beauty comes at a price. The screen is the largest battery consumer on the Note 4.

The new pen... well, it's shorter than on the Note 2 and it feels a little awkward to write with. I am sure, however, that I will get used to it quickly. As a small design element, I liked how the pen hid in the Note 2 better. On the Note 4's case, the pen's tail is a very visible element.

The 32GB built-in storage and the 128GB SD card slot should be enough for everyone; I can't complain about the space. On the Note 2 I felt the need for some extra space. I had the 16GB variant, and you know 5-6 GB are always reserved by Android. I never had more than 8-9 GB of usable space for programs and multimedia.

Battery Consumption

I didn't do any particular test, but as with any new phone, I used it quite a lot at the beginning. I installed a lot of applications, personalized it, played visually amazing games, read news and mail, chatted with friends, and of course called other people.

The battery did not discharge in less than 24 hours, regardless of how I used the phone. As I get used to the phone, and with some automation in place that conserves power by turning off Wi-Fi and 4G over night, I am getting more and more hours out of it. My next charge should come around 35-40 hours after the previous one, with the following daily usage: 60 minutes reading stuff, 30-45 minutes playing Asphalt 8, 70-85 minutes of 4G and GPS navigation, 16 hours of Wi-Fi, 10 minutes of talking, a couple of SMS, a few Hangouts messages. I am so far very pleased with the battery life, and I am surprised that 4G doesn't really matter that much, though I did no extensive testing.


The Software

Well, I will let you discover the details yourself. I will just say that I love "S Finder", air commands, the ScrapBook app, and selective screenshots that can be easily stacked and then combined in various apps.

Handwriting recognition got much better. It almost always guesses what I write, very little correction is needed.

The camera and its software are amazing. The downloadable camera modes are a nice touch by Samsung; I love them! Colors are very realistic, the image stabilization works pretty well, and pictures in low light are so much better than on the Note 2 that I can't even compare them.

My Final Verdict

I love it. Very good phone. A little pricey though.

Tuesday, October 7, 2014

Programmer's Diary: Setting Up a PPTP from CLI on Linux

From time to time I have to set up a PPTP connection to my office, and the KDE GUI fails. So here is a reminder to myself and anyone curious about how to connect to a PPTP VPN.

# pptpsetup --create syneto --server VPN_SERVER --username csaba --password ******* --encrypt
# pon syneto
# ip route add REMOTE_NET_1/24 dev ppp0
# ip route add REMOTE_NET_2/24 dev ppp0

Here VPN_SERVER and the two REMOTE_NET networks stand for your VPN server's address and the remote subnets you need to reach.

Add the DNS from the VPN network and a search domain.
# mcedit /etc/resolv.conf

# cat /etc/resolv.conf
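For reference, after editing, /etc/resolv.conf might contain something like the lines below. The domain and nameserver address are made-up examples, not the real values from my VPN:

```
search office.example.com
nameserver 10.0.0.1
```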

Have fun :)

Thursday, October 2, 2014

1st of 3 Books That Changed My Life: @unclebobmartin's Agile Principles, Patterns, and Practices in C#

I was thinking lately that of all the books I've read related to my professional life and career, there are three that stand out. I cannot decide which one had a bigger impact, because each affected a different part of my life. So no one of them is better than the others. I will write a blog post about each book, presented in the chronological order I read them.


I started reading Robert C. Martin's Agile Principles, Patterns, and Practices in C# about one year after I started working for Syneto. At that point, I had more than five years of programming experience and I was quite familiar with many concepts.

However, for almost all of my career I had worked alone. There was no chance for me to interact with other programmers, to find out about new and cool stuff from others directly. I had heard about Agile and Extreme Programming, but when you work alone you see things differently.

I could not find any satisfying online documentation back then, and there was nobody to recommend the right books to me.

This lone programmer figure had to undergo a major rework after I got to Syneto. Suddenly I was surrounded by programmers with whom I had to collaborate. Fortunately, I had worked with people for a long time, so the social side of the integration went well. And with social development came teachings, recommendations, and a huge flood of information exchange. One of the books recommended by both colleagues and managers was Robert C. Martin's Agile Principles, Patterns, and Practices in C#.

This was not the first book I read at Syneto. Not even the second. It was just "the next book to read" on a long list, after a year or so of intense personal and professional development. All the previous books were important and had a great impact, but none of them changed the way I write code more than this one.

Because Robert C. Martin's Agile Principles, Patterns, and Practices in C# had a profound impact on how I write code, I nominate it as one of my three life changers.

Before this book, I thought about the structure of my code in a naive way. I had my personal experience, I had heard about and knew a couple of design patterns, and I even knew the basics of code structure and form.

So how did this book change the code that I commit to the version control system every day?

  1. My methods are less than 4 lines long; on average they are 2 lines long. Some methods are still huge, with 10-15 lines of code, but they are so rare that they don't affect the statistics very much.
  2. My architecture is decoupled.
  3. My dependencies are inverted.
  4. My classes have high cohesion. I once actually managed to create, together with +Vadim Comanescu, what we considered a perfect class: 6 public methods and 6 private variables, with every method using all the private variables.
  5. I made naming things one of my top priorities. I rarely write a method name that isn't changed at least 3 times before the code is committed.
  6. I use design patterns in a much better informed fashion. The book helped me understand them better, and especially to understand possible use cases and scenarios.
  7. ... I could continue with other reasons, but I will stop now. I think these alone are enough; no need to write up another ten or so of them.
That is why I consider this book "The Programmer's Bible". Every software developer, regardless of the programming language or paradigm he or she uses, must read this book. It is quite long, about 600 pages, but it is not a difficult read. Robert C. Martin has a great talent for keeping you hooked. I remember that some design patterns were such exciting stories that I just could not stop reading.

So, what are you waiting for? Find a copy of this extraordinary book and read it.

Sunday, September 28, 2014

Programmer's Diary: Constructing your Tests Line by Line

It is a different thought process for everyone, but when I write the tests that represent the functionality I am about to implement, I always start with the Exercise or Act part.

A unit test is usually composed of three or four parts, hence the rule of the 4 As:

1. Setup or Arrange
2. Exercise or Act
3. Verify or Assert
4. Tear down or Annihilate (this may be missing, automatic garbage collection, anyone?)

I have observed that people who know about these parts have a natural tendency to write a test in that exact order. They start by asking themselves "What do I need?" and only then "What do I do?". This frequently leads to dilemmas that cannot be answered, and they just give up writing the test and start writing the production code.

In my opinion, this way of thinking has a fundamental flaw. You cannot know what you need before you first figure out what you want to do. That is why I always start with 2. Exercise or Act. And my second step is always 3. Verify or Assert. This way I can lay down the basis of the test by clearly defining what I want to do and what results I expect.

I build the 1. Setup or Arrange part iteratively, adding all the required dependencies for the lines already defined. Finally, I do 4. Tear down or Annihilate, undoing the setup if needed.

1. Write a new test function and name it by the behavior you want to test.

function testItCanAddTwoNumbers() {
}


2. Act! Do the behavior you just defined in the test's name.

function testItCanAddTwoNumbers() {
    $actualSum = $calculator->add($n1, $n2);
}

3. Assert.

function testItCanAddTwoNumbers() {
    $actualSum = $calculator->add($n1, $n2);
    $this->assertEquals($expectedSum, $actualSum);
}

4. Arrange, or prepare all the missing parts.

function testItCanAddTwoNumbers() {
    $calculator = new Calculator();
    $n1 = 1;
    $n2 = 2;
    $expectedSum = 3;

    $actualSum = $calculator->add($n1, $n2);
    $this->assertEquals($expectedSum, $actualSum);
}

5. Annihilate, or destroy persistent information. Nothing to be done for this part here.
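For completeness, here is a minimal Calculator that would make the test above pass. This is my own sketch; the original steps cover only the test, leaving the production class to the reader:

```php
<?php

// Minimal production code satisfying testItCanAddTwoNumbers():
// the test expects add(1, 2) to return 3.
class Calculator
{
    public function add($n1, $n2)
    {
        return $n1 + $n2;
    }
}
```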

That's it. Have fun writing tests instead of hitting a brick wall with your head!

Saturday, August 30, 2014

Programmer's Diary: Finding Your Ways

I usually tend to give any advice with a pinch of salt. In one of my tutorials about SOLID I wrote the following phrase: "As with any other principle, try not to think about everything from before."

This led to some dilemmas for a few of my readers, especially because it was in the Interface Segregation Principle article. I answered the reader's questions, but I think the ideas merit a blog post. So, here it is. Read on.

Any exaggeration is bad. If you think about everything upfront, it is bad. If you think about nothing upfront, it is also bad. Finding the right balance in what to do and what to postpone or not to do at all is essential for every project. There is no universal theorem or solution. There are however some recommendations that try to keep us on the right track.

In Agile software development, you will mostly meet two concepts. Each of them pulls you back from one of the two extremes mentioned above.

1) Postpone everything to the last responsible moment. If you apply this, you may ask yourself every time you create an interface: should I create it at all? Will there be more than one implementation? If yes, what will that implementation be, and when? Is it more expensive to delay the release by 2 hours and implement the interfaces now, or to write no interface and introduce it in the next release, when I know I will need it? How sure am I that I will need the interface in the next release? Can the plan change, outside of my control, so that I end up with code that will never be relevant?

2) Program with change in mind. If not from the first release, then from the second. If you needed to change a specific piece of code, there is quite a big chance you will need to change it again. On your first change, make it so that your third, fourth, and subsequent changes will be easy. If some code, once written, was never modified, and you have no reason to change it, don't.

Basically, that is it. Now you may ask yourself how to deal with these problems. You have three possible ways to go:

1) Take the postpone extreme. Postpone everything until you feel it starts to hurt. Then, gradually try to think a little ahead and don't postpone things quite as much. This is how we started at Syneto.

2) Take the plan-for-everything extreme, and evolve from there. This is actually the route many people take when coming from Waterfall. Gradually try to identify parts that take up a long time in planning but prove to be marginally important. Continue doing so until you are pleased with your process and you no longer feel that what you do will never be used or useful.

3) Take the middle road. This may sound attractively optimal, but it is not. I don't think any project sits exactly in the middle between the extremes. You can take the middle road and continually think about both extremes. With time you will find toward which end your project requires more attention.

Sunday, July 20, 2014

Watch, Learn, Do, Decide

I have this concept whenever I need to decide if something new is good or not for me.

First I watch or read about the idea. Then I study more about it. Then I do it for a relatively long time. Then I decide whether it is good for me: whether I should drop it all, or whether I can adopt parts of it in my life.

This applies exceptionally well to new programming techniques.

At work, at Syneto, we usually do things for about 6 months before we decide. But those are big things. They affect a bunch of people.

In my personal life I scale down. Both the discovered things and the time for doing. Still, I always make sure I don't decide too early.

Recently I was invited to a new developers' forum in my country. And after just one day, I am amazed how many people skip the learn and do parts. They only watch and decide.

I believe the only way to decide upon a thing is through past experience. But you need to build that experience yourself. You can't avoid it, at least not for long.

Sunday, May 18, 2014

Belgrade CityBreak: An Unexpected Journey

My wife and I had an unplanned opportunity to visit Belgrade for the first time. It went pretty well.

We were asked to drive two of my colleagues to the Belgrade airport from where they took a plane to Paris. This trip allowed us to stop in Belgrade and visit the city. We had no plans, no knowledge about the city. I just set "Belgrade City Center" in the GPS and let it drive us ... somewhere.

First of all, parking your car in Belgrade is extremely difficult. We almost gave up after 30 minutes of randomly choosing streets in the central area, trying to find a spot. Finally, we managed to park about 2.5 km away from the point marked as the city center on the GPS. Well, a 20 minute walk should not be that much. But we were so hungry, and finding a restaurant was a bigger challenge than expected. We did not know the city, but based on the look of the streets and shops, we were somewhere close to the center. There were even quite a lot of terraces, but they served only coffee and drinks. Where were the restaurants?

After trying several promising alleys and having no luck at all finding a restaurant, we went on along the main street and finally ended up in the pedestrian area. At least finding a restaurant there was no longer a challenge. We ate at a random restaurant called Opera. They had good food and decent prices: one starter, two main courses, some mineral water, and two coffees = 40 Euro.

After we ate, with our bellies full, we decided it was a really good time to just walk and admire the city and whatever surprises it might hide. The weather was also good company: about 25 degrees, mostly sunny. Luckily the restaurant had free Wi-Fi, so we had a chance to look up the surrounding attractions on TripAdvisor. Choosing our next stop was simple. The old city fortress was just a few minutes away.

What we didn't expect was for it to be so well preserved, free to visit, and really impressive. It is bigger than you may think at first sight, and an hour or so of walking around the old streets and walls is not even enough. There is also a great public park surrounding the whole fortress. You can relax on a bench, walk around a well maintained garden, do some sports, or just stop for a coffee on the Danube's bank.

An exhibition of First and Second World War military equipment was an amazing bonus to this visit. So it's time to wrap up some pros and cons.


  • Mixed architecture - there were streets on which you could recognize 4-5 different architectural styles from different eras: from a princess's house, through a peasant's house and a communist office building, to Victorian architecture. Everything you could imagine on a single street. There were also places with pleasantly uniform architecture.
  • Food was good - even though we chose a restaurant at random and ordered Serbian specialties we had never eaten before, we liked the food.
  • People are friendly - we found the local people friendly, quiet, and helpful.

  • Difficult to find a restaurant - unless you are in the very city center, in the pedestrian area, even a McDonald's or other fast food is hard to find. You can get coffee and drinks, but no food.
  • Difficult to find a mini-market - on the whole 2-3 km walk from the car to the city center and back, we found a single mini-market to buy some mineral water and cigarettes. Yes, there are kiosks here and there, but paying with your credit card is not an option at those.
  • Traffic is quite intense - even though it was Sunday, there was quite heavy traffic in the city. Where did all those people have to go by car on a Sunday? I can't understand...

That's it. Thanks for reading.

Friday, May 2, 2014

Agile by Instinct

There has been a question on my mind for some time now. An idea, a thing that just won't leave me alone.

What do you do after you tried all agile practices?

I had the opportunity to work for a company that went through a great deal of change by giving up old-style, waterfall-oriented management and adopting agile. But what does adopting agile actually mean?

Like any company and team, we started by learning new techniques and practices. We started to plan our work on a board, and we did a group-reading marathon of Gerard Meszaros's xUnit Test Patterns book. This was about 4-5 years ago, and it was enough to raise our interest in all these new things. We went on and adopted TDD, and we still use it on a daily basis. We redesigned our architecture so that our business logic is isolated from the rest of the system, as Robert C. Martin recommends in his clean architecture concepts.

We implemented a continuous integration and deployment system for our project, we covered most of our code with tests, and we even optimized the whole deployment process to the extent that it takes about four and a half minutes to run all the 6000+ assertions in our unit tests, run all the MVC framework's controller, helper, and model tests (these are just a few, but still), compile and encode everything, create packages, and publish them on an update server. I think we have a process that is quite optimized. Even though there may be small changes to make, there will be no more significant gains.

And our everyday software development process? Well, after doing Scrum for a while, we tried Lean with Kanban. From each of them we kept the parts that help our process the most. There is not really any other formalized process we could try to fit into our management structure.

Continuous learning and deliberate discovery are two other things we do frequently. We, as professionals, try to make ourselves better each day, every day. We take courses, we practice at home, we attend conferences, we organize events, and so on.

"It sounds like a success story" as Dan North remarked it when I was talking with him about this topic. But what do we do next? What is the next thing we can try to make our process better, to go faster.

An interesting question Dan North asked me, one that quite surprised me, was "What makes you think you can go faster or better? Maybe you reached your maximum speed." (approximate quote). I couldn't answer him then. In retrospect, that is because I have no rational reason to support my desire to go faster and better. But my instincts tell me we can do better. My professionalism tells me I can learn more and make better decisions. I am asking myself instead, "Why should we ever stop getting faster and better?" Of course, there is no magic answer. If there were, it would already be a formalized practice or technique, and this blog post would not exist.

For the time being, I feel we are far from perfect. In the past year or so we have tried to orient our attention more toward our clients. We tried, successfully, to listen to other departments. Now we are on our path to creating better synergy between dev, sales, operations, and marketing. And this is why Dan North's suggestion surprised me most: he suggested the exact same thing.

So, after you go through all the practices and techniques of agile development and you make them work for you, you must start being truly agile.

Being agile is not about adopting rules and practices. Being agile is not even about learning and devising your best way to work based on those processes.

Being agile is to learn, as a team, as a company, to follow your instinct in order to value individuals and interactions, to create working software, to listen to your customers and to respond to their needs as quickly as possible.

Agile is about us making efforts so that others don't have to.

Monday, April 28, 2014

#CraftConf Budapest 2014. A Big Wow!

At the end of April 2014 I went to a conference: CraftConf Budapest. We had no idea how many attendees there would be or how big the event would be, but one thing was sure: there would be a speaker lineup I had never seen at any conference in Europe before. There were so many famous people invited to speak that the event became a must-go both for me and for my colleagues.

We will write a more extensive blog post on Syneto's blog, so here I will present only my personal impressions.

First impression: This is a huge event!
I have never attended such a big conference. There were more than 900 attendees, and the main room had 5 screens; the big one, in the centre of the image above, was 10 meters or so in diagonal. They also managed to secure some very wealthy sponsors who kept our bellies full and our mouths from drying out. The other 2 rooms were smaller, but still impressive.

Second impression: There is at most 5% new information in a talk.
For whatever reason, I had huge expectations of this conference. However, I had to realize that a talk cannot contain more than about 5% new and useful information. At least not for me and my colleagues. This was a hard thing to accept, but it led to the next impression.

Third impression: The value of a conference is the chance to speak with famous people.
Yes. You need some guts, but if you want real value for the money you paid for the conference, you must go and talk with those important people. With some I had questions in order to obtain new information, with others I just wanted to confirm some of my own ideas and perceptions, and to others I actually managed to provide constructive feedback.

All in all, I talked with more famous people in 2 days than in my whole life altogether. So thank you Bruce Eckel, Dan North, Eric Evans, Gerard Meszaros, Theo Schlossnagle, John Hughes, and Simon Brown for your time, and thanks to every other speaker for their great talks.

Monday, March 24, 2014

I Don't Believe in Genetically Born Leaders

I hear so many times that some are made to lead, while others are made to follow. And while there may be some truth in that statement, I don't believe someone can be born a leader. I believe in discipline. I believe in hard work. I believe in fulfilling dreams. I believe any of us can lead or can be led. I believe it is ultimately our choice.

But how is that possible? Don't we have different personalities? Don't we have different professional objectives? Don't we have different dreams? Aren't we born and raised in different societies? Sure, we have, we are, we do. Then how could any of us become a leader or a follower? Well, society, family, and friends have a great impact, but at the end of the day it's up to you what you choose to do.

Some choose to follow and be happy. Others choose to lead. Others try to find the balance between the two. I believe that when there is someone to follow, you should do so. However, when there are people to lead, you should do that too. There is no reason you can't be a leader for some and follow others. This is the only natural situation you can be in. There will always be things to learn from those smarter and wiser than you, and there will almost always be others willing to learn from you.

If you only want to be a follower, you will never feel the appreciation and amazement of young minds discovering your secrets. If you only lead, you will burn out very quickly. Your students and followers, if balanced between the two characters, will simply become smarter and wiser than you and will become leaders instead of you. That's why the most depressed people I have ever seen were followers with no will to lead, or fallen leaders with no hope of rising again.

Wednesday, February 19, 2014

Programmer's Diary: Transforming PHP Objects to Strings

In my upcoming programming course for +Nettuts+ I will implement a persistence layer for the application developed throughout the course. For the sake of simplicity I decided to make it file-based persistence and keep the information in plain text. This was a good occasion to use a nice PHP trick to convert simple objects into plain text.

Our objects are books, and besides the fact that there is an abstract book class, there are a lot of specific implementations for different kinds of books, like novels. Each is a little different from the others, so saving all the books in the same text format would have been impossible.
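To make that idea concrete, here is a minimal sketch of such a hierarchy. The class and property names are my own illustration, not the course's actual code: the point is just that the abstract class forces each kind of book to define its own text format.

```php
<?php

// Hypothetical base class: every concrete book must say
// how it renders itself as plain text.
abstract class Book
{
    protected $title;
    protected $author;

    function __construct($title, $author)
    {
        $this->title = $title;
        $this->author = $author;
    }

    // Each subclass defines its own plain-text representation.
    abstract function __toString();
}

class Novel extends Book
{
    function __toString()
    {
        return "Novel: " . $this->title . " by " . $this->author;
    }
}

echo new Novel("Dune", "Frank Herbert");
// prints "Novel: Dune by Frank Herbert"
```

A `Biography` or `Cookbook` subclass could return a completely different layout, and the persistence code would not care.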

PHP offers a magic method called __toString(). Implementing this method on any object will allow you to use that object in a string context. Let's see a basic example.

class SystemInformation {

    private $cpu;
    private $ram;

    function __construct($cpu, $ram) {
        $this->cpu = $cpu;
        $this->ram = $ram;
    }

    function __toString() {
        return "CPU: " . $this->cpu . "%" .
               "\nMemory: " . $this->ram . "MB";
    }
}
If we create such an object and use it in a string context, like in echo(), it will automatically be converted to a string, using whatever we return from __toString().

$sysInfo = new SystemInformation(40,1024);
echo $sysInfo;

This will output:

CPU: 40%
Memory: 1024MB

You can use it in some other contexts too; for example, this test will pass just fine:

$this->assertTrue(strpos($sysInfo, 'CPU') !== false);

And when PHP is not smart enough to figure out that you want the object as a string, you can always call __toString() on it directly.

$this->assertRegExp('/CPU/', $sysInfo->__toString());
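Tying this back to the file persistence mentioned at the start: once an object knows how to render itself as text, saving it is a one-liner. This is only a sketch under my own assumptions (the temp-file path and the explicit cast are my illustration, not the course's actual persistence layer):

```php
<?php

class SystemInformation {

    private $cpu;
    private $ram;

    function __construct($cpu, $ram) {
        $this->cpu = $cpu;
        $this->ram = $ram;
    }

    function __toString() {
        return "CPU: " . $this->cpu . "%" .
               "\nMemory: " . $this->ram . "MB";
    }
}

// The explicit (string) cast triggers __toString(),
// so the object persists itself in its own text format.
$file = sys_get_temp_dir() . '/sysinfo.txt';
file_put_contents($file, (string) new SystemInformation(40, 1024));

echo file_get_contents($file);
// prints:
// CPU: 40%
// Memory: 1024MB
```

Reading the data back in is a separate problem, of course; __toString() only covers the object-to-text direction.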

For the complete example, with the whole application I mentioned at the beginning, keep an eye on the +Nettuts+ premium courses. Have a nice day of programming.

Wednesday, February 5, 2014

The advantages of working for 2 companies at the same time

Many companies forbid their employees from having a second workplace, or from freelancing in the same professional domain as their main job. Other corporations have an approval procedure, and each employee must declare any other job he or she wants to take. If the company decides that the job may conflict with its interests, it may deny the employee permission to accept it.
What companies rarely consider are the reciprocal benefits. I have had 2 workplaces for almost all of my career. Right now I work for Syneto, and in my free time I write for NetTuts. This is great for everyone. I can write great articles based on my experience at Syneto. Syneto benefits from me becoming a better programmer with each article or course I make. Explaining my ideas greatly improves my knowledge of that specific domain, because I need to dive into its details.
So, I learn more and better, Syneto gets better code, and NetTuts gets better articles.

Everyone wins.

Sunday, February 2, 2014

Programmer's Diary: Writing a Series for @NetTuts

If you are following me on Twitter you probably know I am a regular technical writer for +Nettuts+. I write various tutorials and articles on programming topics. However, I had never written a series of tutorials that are connected in one way or another.

That changed with the series on the SOLID principles. I had to write four articles covering five principles, and I found out there are quite a few challenges in writing a series of articles.

Challenge #1 - The first article must be good. Much better than any of my stand-alone articles, because it must convince any reader, new or regular, that the upcoming articles in the series deserve their attention. The first article carries the stakes of the whole series. If it fails, there is a chance the rest will never be read, no matter how well written it may be.

Challenge #2 - Each article must find a way to refer to, and connect with, the previous articles, or at least some of them, so that readers have a feeling of continuity. In each of my SOLID articles I referred to at least one, but preferably two, other SOLID principles. Now, this is tricky. Because the articles are published and read in sequence, from S to D, even though O may relate to I or D, in the article itself I can only refer to S, because the reader may not yet know about the LID principles. If I refer to any of LID, I risk one of two things: confusing the reader and losing him/her, or making the reader curious enough to read about the three LID principles from other sources and skip my upcoming three articles.

Challenge #3 - Each article must be self-contained, in the sense that a new reader must be able to comprehend it without reading any of the preceding or upcoming articles in the series.

Challenge #4 - Each article must provide something different so that readers don't get bored. If one article explained the concepts mostly with text, in an anecdotal manner, the next one must use a different approach: maybe more schemas, more source code, more quotes of rules and definitions, or more funny statements. It doesn't really matter what, but it must be different, unique in a way that disrupts the monotony of the series while still providing the valuable information promised by its topic.

Challenge #5 - The last article must contain a conclusion to the whole series. It must be written in a way that not only conveys the last topic in the series, but also connects all the dots and provides a high-level view over all the topics presented throughout each tutorial. It also has to put everything in perspective, under a different light, in a different - bigger - universe, where the whole series is just a small piece of the puzzle.

That's why I concluded my series about the SOLID principles with a reference to The Magical Number Seven, Plus or Minus Two and that is why there are 5 challenges in this blog post.