Experiences from CITCON Europe 2019

Janne Kemppainen

It was that time of the year again, time for CITCON, an open space conference around testing and continuous integration. This was my third year attending the conference. Ever since attending for the first time in 2017 I’ve been looking forward to the next event.

In this post I will share my experiences from this year’s event in Ghent, Belgium.

Conference format

It is the open space format that sets CITCON apart from other conferences. Open space means that there is no predefined agenda. The conference started with an opening session on Friday evening where everyone could propose topics that they wanted to talk about. The conference organizers acted only as facilitators to get the event started.

The suggested topics were collected on Post-it notes and organized on a flip chart where anyone was free to vote for the ones they'd like to talk about. After voting, the topics were scheduled into the available time slots as a group effort.

Saturday consisted of one-hour time slots with a short break between sessions. Each Post-it note represented a session, and the schedule kept shifting during the day as people rearranged the notes; if you trusted a photo of the agenda taken in the morning, by the evening you could easily end up in a session you didn't mean to attend.

We had five rooms available, which meant there were always five sessions to choose from at any given time. And if none of them felt inspiring you could always have random hallway conversations with other people instead.

Another important part of the conference format was the rule of two feet. If you didn't feel like you were gaining anything from a session you were encouraged to leave on your own two feet. It was stressed that this is not impolite; the session just wasn't for you. It was also pointed out that some of the best sessions have been held with a very small group of dedicated people.

So what did we have on our agenda this time?

Session 1: CI in very small projects

In this session we talked about using continuous integration in very small pet projects, and whether it is even needed.

A common problem with small custom projects is that after some time you'll forget how to actually build the whole thing. Having some sort of a build pipeline could therefore be helpful to handle whatever is necessary to deploy the code. On the other hand, if you have multiple projects that aren't updated very often, you probably wouldn't want to pay for an external service.

Another complication could be the use of multiple programming languages. If you want to create your next project in a completely new language, there can be a lot you need to learn just to build the project, and the CI solution you've chosen might not support that technology. Then again, part of the point of a pet project is often to learn new things.

So what could be some possible solutions for such projects? Here are some of the options we discussed.

Docker

Dockerize your projects so that you can run them anywhere. You can automate the build scripts and run them locally on your own machine or in the cloud if needed.
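As a sketch of that idea, a small Python wrapper could drive Docker so that the knowledge of how to build the project lives in the repository itself. Everything here is an assumption about the project: that Docker is installed, that a Dockerfile exists, and that the image knows how to run `make test`.

```python
import subprocess

def build_and_test(image_tag="my-pet-project"):
    """Build the project image and run its test suite in a container."""
    # Build the image from the Dockerfile in the current directory.
    subprocess.run(["docker", "build", "-t", image_tag, "."], check=True)
    # Run the image's test command; check=True makes failures loud.
    subprocess.run(["docker", "run", "--rm", image_tag, "make", "test"],
                   check=True)

if __name__ == "__main__":
    build_and_test()
```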

Set up your own server

You could run a custom build server, either in the cloud or on your own machine. Especially with multiple languages it might be a good option to set up your own Jenkins instance that you have full control over.

This would, however, mean managing and updating said server, so it's not something you could just set up and forget.

Free CI providers

Some people had used CircleCI for their projects. They provide 1,000 build minutes for free each month. If your project can be built on Linux then you might want to give them a try. They also have macOS builds, but those get quite expensive for a hobby project.

Open source

One option would be to open source the project as soon as possible. CircleCI provides a total of four Linux containers for free for open source projects, which should be plenty for pet projects. Azure Pipelines also provides unlimited CI/CD minutes for open source.

No CI service

The last option we talked about was just having a build script instead of a CI pipeline.
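A minimal sketch of what such a script might look like in Python; the individual steps are placeholders and assume a project tested with pytest and packaged with the `build` module:

```python
#!/usr/bin/env python3
"""build.py - the whole "pipeline" of a tiny pet project in one file."""
import subprocess
import sys

# Placeholder steps; swap in whatever your project actually needs.
STEPS = [
    ["python", "-m", "pytest"],  # run the unit tests
    ["python", "-m", "build"],   # build a distributable package
]

def main():
    for step in STEPS:
        print("Running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            sys.exit(result.returncode)  # stop at the first failing step

if __name__ == "__main__":
    main()
```

The script itself documents how the project is built, which already solves the "I forgot how to build this" problem without any external service.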

Session 2: Reconciling perspectives, CDCT and War stories

This was the slot where I used the rule of two feet twice while trying to find the session that would serve me best. I started with Reconciling perspectives, then went on to hear what Consumer Driven Contract Testing (CDCT) was about, and finally switched to hearing people's war stories about the battles they've faced at work.

Reconciling perspectives

This session was based on a research paper by Steve Adolph and Philippe Kruchten bearing the same name, "Reconciling Perspectives: How People Manage the Process of Software Development". The paper aims to describe how programming teams go about releasing software.

In short, the team tries to produce some work product but might not agree on the means of achieving it. Therefore they have a perspective mismatch.

The process starts with reaching out to the other person in order to start converging, with the goal of negotiating a consensus. These steps are looped until a consensual agreement has been formed and everyone agrees on what the actual job is.

The agreement needs to be validated by creating an acceptable work product through steps of bunkering and accepting. In the bunkering stage, suppliers create work products based on the requirements of the agreed job. In this stage there is little to no conversation.

During the accepting stage the work product is presented and either accepted or refused. If the final product doesn't meet expectations, and another round of bunkering would not suffice, there may be a need to go back to square one to reach out and negotiate consensus again.

I found this session quite theoretical and thought it'd be best to read the full paper myself, so I left 15 minutes into the session.

Consumer driven contract testing

The next session I walked into was about consumer driven contract testing (CDCT). As I joined in the middle of the session I had missed the introduction, so it was quite difficult to grasp what was going on. Therefore I decided to change sessions and switch to the war stories.

War stories

This session was proposed by Douglas Squirrel and the purpose was to gather war stories for his upcoming book. As I joined the session a bit late I only got to hear the last stories.

The more interesting case was about a small web development team that had gotten a poor security assessment. Penetration testing was performed only quarterly, which meant that they weren't confident about the security of their product between the tests.

Initially they weren't using pull requests, but adding a code review process didn't actually help them as the developers weren't able to identify security vulnerabilities. The team decided to try out Secure Code Warrior, which gamifies finding vulnerabilities. This helped them gain a better understanding of how to write secure code.

They also got some OWASP training and got the testers more involved in the process. This led to the team making progress and gradually improving the situation.

For the managers at the company one issue was that the people working on the project were external contractors, so they questioned why they should train people who could leave at any time. The developers' response was: "What if you don't train us, and we stay?"

The last story was about a battle that was lost. The company had cultural, hierarchical issues where the developers, QA, and DevOps were siloed. They initially managed to deliver more often and improve collaboration between the dev team and QA, but in the end the DevOps team ended up even more siloed. As a result, releases now happen less frequently.

Session 3: CI/CD in serverless

In this session we talked about implementing CI in a serverless environment. This session was combined from two Post-it notes on the flip chart that had similar subjects.

The first problem introduced in the session was quite specific to a small company that had only three developers doing trunk based development (all work is done on a single branch). They wanted to know how to run tests for the serverless system locally. Generally, the suggested approach seemed to be to just deploy the Lambda functions and run the tests in the cloud.
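In that spirit, the "tests" can be ordinary integration tests pointed at a deployed test stage. A rough sketch, where the endpoint URL and the expected response are entirely made up:

```python
import json
import urllib.request

# Hypothetical URL of a function deployed to a test stage.
ENDPOINT = "https://example.execute-api.eu-west-1.amazonaws.com/test/hello"

def test_hello_endpoint():
    # Call the deployed function over HTTP and check the response.
    with urllib.request.urlopen(ENDPOINT) as response:
        assert response.status == 200
        body = json.loads(response.read())
    assert body["message"] == "hello"
```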

The other part of the session was about moving the test automation itself to AWS Lambda. Jenkins, for example, can sometimes be difficult to manage, and writing custom plugins is rather complicated. Therefore it was suggested that the CI process could be moved to run completely on Lambda functions.

It was pointed out that there exists a project called LambCI which attempts to solve this exact problem.

This approach won't suit all projects, though. Using Lambda functions means that the writable disk space (/tmp) is limited to 512MB and the build must finish within 15 minutes.
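LambCI has its own setup, but the core idea can be sketched as a Lambda handler that checks the code out into /tmp and runs the tests there. This is only an illustration of the concept: the event shape is made up, and a git binary and test runner would have to be bundled with the function since the default runtime provides neither.

```python
import subprocess

def handler(event, context):
    """Toy CI runner: clone the repository and run its tests inside Lambda."""
    workdir = "/tmp/build"                  # only /tmp is writable in Lambda
    repo = event["repo_url"]                # e.g. supplied by a webhook
    subprocess.run(["rm", "-rf", workdir], check=True)
    subprocess.run(["git", "clone", "--depth", "1", repo, workdir], check=True)
    result = subprocess.run(["python", "-m", "pytest"], cwd=workdir)
    return {"status": "passed" if result.returncode == 0 else "failed"}
```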

Session 4: Test && Commit || Revert (TCR)

In the fourth session we computed Fibonacci numbers using a rather new development method called Test && Commit || Revert. Kent Beck wrote a blog post about the method in September 2018, which popularized it as an alternative to test driven development (TDD).

In short, TCR consists of two atomic parts. The first part is "test and commit", which means that if the unit tests pass then the changes are instantly committed. The catch is in the second part: if the tests fail, the changes are reverted. The idea is that this forces the developer to write code in small increments and keep the code constantly in a "green" state.
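The name is literally the whole workflow: `test && commit || revert`. As a minimal sketch, a Python wrapper could look like this, assuming pytest as the test runner:

```python
import subprocess

def tcr():
    """test && commit || revert in one small script."""
    if subprocess.run(["python", "-m", "pytest"]).returncode == 0:
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", "working"], check=True)
    else:
        # "Revert" here means discarding the uncommitted changes
        # (not `git revert`): the failing code is simply thrown away.
        subprocess.run(["git", "reset", "--hard"], check=True)

if __name__ == "__main__":
    tcr()
```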

This is somewhat contrary to TDD, where you first write a failing test and then fix your code so that the test goes green. With TCR you never get to see a test fail, so maybe my biggest worry about the method is that a test might not actually test anything.

The coding exercise we did with TCR proceeded in almost comically tiny increments. Everyone tried to add as little as possible to create a new test and then make it pass.
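To give a flavor of how tiny the increments were, here is my reconstruction (not the exact code from the session) of what the Fibonacci kata might look like after a few TCR rounds in Python:

```python
def fib(n):
    # After the first couple of rounds this body was just `return 1`;
    # the later assertions forced the real recursive implementation.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def test_fib():
    assert fib(1) == 1  # round 1: the smallest possible test
    assert fib(2) == 1  # round 2: still passes with `return 1`
    assert fib(3) == 2  # round 3: forces actual logic
    assert fib(6) == 8
```

Each assertion would have been roughly one test-and-commit cycle, with the implementation growing just enough to stay green.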

Applying these kinds of methods to mock projects is often rather straightforward; so far there haven't been that many real world reports on using TCR.

On the plus side, if a revert makes you lose a bunch of code, you will probably already have an idea of how to structure and implement it better on the second attempt. The method also forces small commits that change only small things.

In addition to never seeing tests fail, here are some pain points that I see with the method:

  • commit messages are all just “working”, so you need to squash and edit commits for peer review
  • typos can make you lose code
  • running tests as often as TCR requires might not be feasible in large projects with hundreds of unit tests
  • changes may become too small, spreading the actual fix over multiple commits

Session 5: How I automated kid integration

The last session I attended was named “How I automated kid integration” and it was suggested by Paul Julius, one of the conference organizers.

The session started with Paul describing how he had noticed that he didn’t have enough time to message his three kids because of work. He thought that he needed to do something about the situation.

Being a technical person, he solved the problem by automating the process. He set up a custom script on his laptop that would send randomized messages, drawn from a selection of about a hundred conversation starters, at random times after school.
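He didn't share the actual code, but the idea is simple to sketch in Python. The message list, the timing window, and the delivery mechanism below are all made up for illustration:

```python
import random
import time

STARTERS = [
    "What made you laugh today?",
    "Did you learn anything surprising at school?",
    # ...roughly a hundred of these in the real script
]

def send_message(text):
    # Placeholder: actual delivery (SMS, chat app, ...) is left out here.
    print("->", text)

def run_daily():
    # Wait a random amount of time within the after-school window,
    # then send one randomly chosen conversation starter.
    time.sleep(random.randint(0, 3 * 60 * 60))
    send_message(random.choice(STARTERS))
```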

After two weeks his daughter (19 years old) noticed that something strange was going on, as the script didn't take into account a message she had sent earlier in the day. The funny thing was that two weeks later, when the laptop had lost power, she actually asked why she hadn't gotten a message. The kids missed receiving the automated messages!

Before the experiment his daughter responded to every message, his 16-year-old son to about one message in five, and his 14-year-old son about a third of the time. After the experiment he noticed that the response rate for his daughter stayed the same, the middle kid responded over half the time, and the youngest responded four times out of five.

This improvement made him wonder whether he could adapt the experiment to his work. He wanted to improve the response time of support engineers, so he experimented with e-mails and daily Slack messages sent at random times with randomized content to remind them about open issues. In the end, they were able to improve the response time from over 40 days to under two weeks by making the support team write their average response times on a whiteboard.

Another problem was getting people to respond to a customer feedback survey. An automated form letter from sales with a link to SurveyMonkey yielded only about a 2.5% response rate, whereas automated messages sent with Google Apps Script, which appeared to come from a manager's e-mail address, produced a 13% response rate.

Conclusion

All in all I enjoyed the conference yet again and intend to visit next year too. I feel that I’ve been able to contribute more and more each year I’ve visited the conference. To me the most important thing is to hear about new ideas and technologies and each year I’ve learned something new.
