Thursday, December 29, 2011

Motivations (paper review again)

Motivational factors for software engineers:
http://dx.doi.org/10.1016/j.jss.2010.12.017

I've blogged about motivational factors in software engineering before. Here is another, more recent paper on the topic. I recommend reading it since the study seems solid and the results are rather interesting.

Table 11 lists "Use of competence in Software Engineering" as the main motivational factor. This means that software engineers like to be recognized as such and that the profession has very specific skill requirements.

Wednesday, December 28, 2011

Lean efficiency (paper review)

Lean efficiency
http://www.informs.org/content/download/242195/2304425/file/ASP,%20The%20Art%20and%20Science%20of%20Practice.pdf

I've posted a number of reviews of metrics in the "Agile" world, and then I realized that I have not done so much in the area of Lean development. Lean software development creeps into larger companies more and more often, and the topic is increasingly "hot".

So, what kind of metrics are there in Lean development, except for the famous Six Sigma, which is simple yet complex? The Six Sigma book review I started with a while ago was, let's be honest, just a starter.

In my work I often refer to this article. It is an interesting piece of analysis of ONE simple metric - inventory - in the context of Lean effectiveness and efficiency. What I like about this paper is its solid analytical ground: the analysis of companies and data from a number of years. The only thing I wonder about is the presence of geopolitics (or the lack of it). Could that play a part in the inventory levels of the studied companies? Does market pressure affect the inventory levels of software development companies?

I guess I'll need to do that analysis myself one day.

Wednesday, December 21, 2011

Beyond accuracy...

Beyond accuracy...
http://dl.acm.org/citation.cfm?id=1189572

Working with metrics one gets to understand that there are billions of metrics that could measure one attribute and that there are equally many ways to collect these metrics. However, one often forgets that metrics are to be used by stakeholders - often humans :)

This paper is a nice touch on the subject. I recommend this reading for the long winter evenings.

Picture: screenshot from ACM Digital library.

Saturday, December 17, 2011

Metrics leading to agility

Metrics leading to agility...
http://www.slideshare.net/Softwarecentral/microsoft-word-metrics-of-agility-leads-to-agility-in-testing

This paper from Tata shows a number of interesting metrics: RTF, which I've already mentioned, and others, like the time from business decision to delivery.

The metric itself is no rocket science, but it is interesting to read about how they measure it.

Sunday, December 11, 2011

Metrics for continuous deployment

In my search for proven metrics that would stimulate continuous deployment I've encountered this slideshow: http://www.slideshare.net/ashmaurya/continuous-deployment-startup-lessons-learned. The slides are quite high-level, but they contain a few metrics - LOC per release being one of them - which I find important for deployment.

A company stimulating continuous deployment should strive to increase and optimize this number over time. If the LOC per release decreases, the customer value is likely to decrease too - regardless of how often the company releases.
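
Just to illustrate how simple such a metric is to collect, here is a minimal sketch (my own, not from the slides) that computes LOC changed per release from a git repository; the assumption of one tag per release and the "changed lines" definition are mine:

```python
# Minimal sketch (my own, not from the slides): LOC changed per release,
# assuming a git repository with one tag per release.
import subprocess

def release_tags(repo):
    """Return tags in creation order (assumed to correspond to releases)."""
    out = subprocess.run(["git", "-C", repo, "tag", "--sort=creatordate"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

def loc_changed(repo, old, new):
    """Sum of added and deleted lines between two tags (binary files skipped)."""
    out = subprocess.run(["git", "-C", repo, "diff", "--numstat", old, new],
                         capture_output=True, text=True, check=True)
    total = 0
    for line in out.stdout.splitlines():
        added, deleted, _path = line.split("\t")
        if added != "-":  # binary files are reported as "-"
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    repo = "."  # hypothetical: the repository to analyze
    tags = release_tags(repo)
    for old, new in zip(tags, tags[1:]):
        print(f"{new}: {loc_changed(repo, old, new)} LOC changed since {old}")
```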

Friday, November 25, 2011

The way your work is structured has an impact on customer quality...

Software Dependencies, Work Dependencies, and Their Impact on Failures
http://dx.doi.org/10.1109/TSE.2009.42

While reading this article from TSE I realized that this is a very important aspect of software development. It has been discussed how team dynamics influence the spirit of an organization, but this paper looks closer at work dependencies and coordination requirements.

What I like in particular about this paper is the recognition of coordination as one of the factors that can influence the quality of code. To put it in simple terms - if coordination is required, but not realized, then there are a lot of assumptions in the code. The assumptions are risks and can lead to failures.

Since it is a TSE article, it contains a lot of useful links to related work and is based on solid data collection methods.

Interesting reading, although perhaps not for a Friday evening...

Wednesday, November 16, 2011

Is continuous integration enough? No... one needs to push the boundaries (paper review)

Pushing the boundaries of continuous integration (paper review)
http://dx.doi.org/10.1109/Agile.2008.31

I was recently asked the following question: "If continuous integration is not the last thing to do, what else is left?" - which got me thinking.

There is deployment, sure, but it is often part of another project. Then I thought of testing, looked for articles about test automation, and found this nice experience report from BT. They extend CI with robustness testing and durability testing.

This is a really nice example of how to improve CI. What I like about it is the tool suite which they have evaluated - all open source.


Sunday, November 13, 2011

Two tools for continuous deployment...

Since measuring continuous integration and deployment is an important aspect of contemporary software engineering, I've looked at one (or two, depending on how you count) tools that stimulate this.

Hudson and Jenkins are two tools which automate jobs and provide statistics on what the results of these jobs look like.

What I like about this tool is the fact that it can automate all kinds of batch jobs, not just building via makefiles.

I will try to use this tool in our work for executing measurement systems and reporting the status of builds, but also for sending e-mails to customers with links and collecting link statistics (if coupled with Google Analytics).

The last one in particular is very useful for continuous deployment - the system sends out links and then looks at the statistics of which software links were used - i.e. how many installations we have.
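
As a toy illustration of the last idea (my own sketch - neither Jenkins nor Google Analytics works exactly like this), one could estimate installations by counting the distinct clients that fetched the download link from a standard web-server access log; the log file name and link path below are hypothetical:

```python
# Toy sketch (my own assumptions): estimate installations by counting distinct
# clients that fetched the download link from a combined-format access log.
import re
from collections import Counter

LOG_LINE = re.compile(r'^(?P<host>\S+) \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)"')

def installations(log_path, link_path="/downloads/measurement-system.zip"):
    """Count distinct hosts (and total hits) that requested the hypothetical link."""
    hosts = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match and link_path in match.group("request"):
                hosts[match.group("host")] += 1
    return len(hosts), sum(hosts.values())

if __name__ == "__main__":
    unique, total = installations("access.log")  # hypothetical log file
    print(f"{unique} distinct clients, {total} downloads in total")
```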

Can't wait to play around with the tool a bit more.

Wednesday, November 9, 2011

Metrics Functions for Kanban Guards

Metrics Functions for Kanban Guards (paper)
http://dx.doi.org/10.1109/ECBS.2010.43

A colleague recommended this paper to me. At first I was rather skeptical about it - no automation, just a conceptual framework. However, after reading the paper I realized that these ideas are actually quite easy to implement.

What I like about this paper is the "Lean thinking" in the context of software development. Measuring "design debt" is also a very solid and good idea. I'll try to use that myself in my research projects - I hope to write a post on how it went after a few months (yes, months, it will take time to set things up).

Sunday, November 6, 2011

R&D Performance and metrics

Metrics for R&D organizations (paper)
http://onlinelibrary.wiley.com/doi/10.1111/1467-9310.00115/abstract

Quite often we get to see measurements at a high level. A level so high that it is actually hard to see what should be measured. R&D, or innovation, is one of those things. How do we measure innovation? What is good R&D? What makes Google so good?

The last question cannot be answered by this article, but the answer can certainly be hinted at. What I like best about this particular article is the distinction between a function and an organization in the context of R&D.

This distinction is crucial for defining effective metrics for organizations. One should first think about what function the organization has and then about how well the organization supports this function. Really interesting reading...

Friday, November 4, 2011

Measuring continuous deployment

Doing the impossible - deploying 50 times per day
http://timothyfitz.wordpress.com/2009/02/10/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day/

This article briefly summarizes the challenges of continuous deployment - how do we measure it? Well, look at their diagram: the software is complete when all test cases have passed. The diagram shows a summary of passing/failing test cases for all test machines.
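
A minimal sketch of that kind of summary, under my own assumptions about the data format (this is not IMVU's actual setup): aggregate pass/fail counts per test machine and treat the build as deployable only when nothing fails anywhere.

```python
# Minimal sketch (my assumptions, not the linked post's setup): aggregate
# per-machine test results and decide whether the build is "complete".
from typing import Dict, Tuple

def summarize(results: Dict[str, Tuple[int, int]]) -> bool:
    """results maps machine name -> (passed, failed); returns True if deployable."""
    total_passed = sum(p for p, _ in results.values())
    total_failed = sum(f for _, f in results.values())
    for machine, (passed, failed) in sorted(results.items()):
        print(f"{machine}: {passed} passed, {failed} failed")
    print(f"total: {total_passed} passed, {total_failed} failed")
    return total_failed == 0

if __name__ == "__main__":
    # Hypothetical numbers for three test machines.
    ok = summarize({"test-01": (412, 0), "test-02": (398, 2), "test-03": (405, 0)})
    print("deploy" if ok else "do not deploy")
```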

Wednesday, November 2, 2011

Contextualizing Agile Software Development (paper review)

Contextualizing Agile Software Development (paper review):
http://onlinelibrary.wiley.com/doi/10.1002/smr.572/full

Philippe Kruchten has written a very interesting article about adopting Agile methods in the context of companies that do not have the "ideal" grounds for it.

Kruchten presents a set of factors to consider when defining the context of the company and a set of "thresholds" for each of these factors. Nice....

I would like to quote a paragraph in the introduction which I really liked:
"An analogy [MS: to the definition of Agile] could be the definition of a road. Would you define a road as something made of crushed rocks and tar, or define it as a surface that is black rather than white, flat rather than undulated, and with painted lines rather than monochrome? Or would you rather define a road as a component of a transportation system, allowing people and goods to be moved on the ground surface from point A to point B? And then let the properties or components of the road be derived from this functional definition, allowing some novel approaches in road design, rather than defining it narrowly using a common recipe."

Think about it for a second - if we defined roads like we define Agile - would we be able to get anywhere? I think that this sums up a lot...

Saturday, October 29, 2011

Agile architecting and hurricanes...

What an Agile architect can learn from a hurricane meteorologist... (paper review)
http://dx.doi.org/10.1109/MS.2011.152

For some late night reading I've chosen to look at IEEE Software's latest issue. What I found was really nice - a paper discussing making decisions and predictions in Agile architecting.

I recommend reading this paper since it discusses the uncertainty of software architecting in the Agile software development environment. The hurricane analogy is, of course, there to get attention, but the article makes a few nice points:
- predictions: "it's ok to be a little wrong"
- requirements for architects: "should know everything"
- BDUP (Bad Design Up Front) vs. NDUP (No Design Up Front)

Generally, nice food for thought for a Saturday evening...

Friday, October 28, 2011

Feature planning in Agile SD... (paper)


Feature planning in distributed Agile software development...
http://www.springerlink.com/content/978-3-642-20676-4/#section=891594&page=1&locus=0

A standard problem in Agile software development with multiple parallel teams is the dependency between teams. Teams that develop features (or parts of features) which depend on each other cannot work in parallel. Parallel work is then a recipe for disaster, or at least a serious headache...

I've looked for some support in this matter for a while now, and this is one of the very good articles that I can recommend. The article shows how distributed teams can plan feature development to avoid parallel development of the same functionality.

In a nutshell: the authors use the concepts of feature architectural similarity analysis and feature chunk construction to calculate distances between features and use those as input for feature planning. The idea is pretty simple, although there is some maths in the paper.
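
To give a feeling for the idea, here is a minimal sketch - my own simplification, not the authors' actual formulas: if each feature is described by the set of architectural components it touches, a simple distance is one minus the Jaccard similarity of those sets, and features that end up too close should not be developed in parallel by different teams. The feature names, component sets and threshold below are hypothetical.

```python
# Minimal sketch of distance-based feature planning (my simplification, not the
# paper's formulas): features touching similar components get a small distance
# and should not be developed in parallel by different teams.
from itertools import combinations

features = {   # hypothetical features -> architectural components they touch
    "call-handling": {"telephony", "ui", "logging"},
    "video-call":    {"telephony", "media", "ui"},
    "usage-reports": {"logging", "database", "reporting"},
}

def distance(a: set, b: set) -> float:
    """1 - Jaccard similarity of the component sets."""
    return 1.0 - len(a & b) / len(a | b)

THRESHOLD = 0.7  # hypothetical: below this, features are too entangled to parallelize

for (f1, c1), (f2, c2) in combinations(features.items(), 2):
    d = distance(c1, c2)
    verdict = "serialize / same team" if d < THRESHOLD else "safe to parallelize"
    print(f"{f1} <-> {f2}: distance {d:.2f} -> {verdict}")
```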

Definitely recommended reading for this weekend... although not for a Friday evening.

Wednesday, October 26, 2011

Embedded system design for automotive... (paper review)

In search for useful metrics - embedded system design for automotive applications
http://dx.doi.org/10.1109/MC.2007.344

While searching for metrics to measure the dependability of software systems, I encountered this interesting article. It has nothing to do with Agile or development processes - it's the real deal: measuring product properties.

The metrics described in this article are mainly related to automotive software development, but they gave me the idea that we should be very observant of how we can measure dependability in early development phases.

Say that we have an Agile project in the automotive domain: when does it make sense to measure dependability? Which metrics should we collect over time? Which models should we measure?

And most importantly, how do we provide teams with the support to reason about dependability when no product is around...

It's interesting to look into this paper to get a few research ideas out of it.

Agile methods in European embedded software development companies ... a survey

A survey on the actual use and usefulness of XP and Scrum (paper review)
http://dx.doi.org/10.1049/iet-sen:20070038

Agile methods are here to stay, one would say. However, I often stumble upon the question of how much they are used and how. In this particular paper the authors review the actual use of XP and Scrum in embedded software development companies in Europe.

Some of the interesting findings and reflections:
- the most commonly adopted Agile practice was the use of "open office space" - great, but is that really one of the Agile principles?
- 51% of respondents claimed that core practices of XP like pair-programming are "rarely" or "never" practiced in their companies - hmmm.... interesting

What I found very interesting was the fact that the authors looked at both the use and the usefulness of the practices - and revealed that the adoption of Agile is not friction-free (ca. 28% of respondents admitted that they had negative expectations of the Agile adoption).

One of the things I would like to see a reflection on is the situation in Sweden and Finland with a number of large companies working according to Agile, e.g. http://onlinelibrary.wiley.com/doi/10.1002/spip.355/abstract.

Monday, October 24, 2011

Software metrics - what every software engineer should know

Software metrics curriculum from SEI:

Quite often we get questions about what kinds of metrics exist and how they should be used. Many companies look for a specific kind of metrics or would just like to know how many they have or do not have.

I've usually steered my answer based on the curriculum from SEI's metrics course (available via the link below). The curriculum contains information about metrics for project managers, quality managers, etc. It is not complete, as it is only a curriculum, but it is a very good dictionary of what every software engineer should know about metrics.

IMHO: advanced software engineers should know something about the measurement theory as well, but that is a subject for a separate discussion.

My recommendation is to complement this reading with the standards:
- ISO/IEC 15939: Systems and software engineering - Measurement process
- ISO/IEC 25000 series
- ISO 9001:2008 (quality management)

For researchers: I recommend section III.
For companies: I recommend sections I and II.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.83.6110&rep=rep1&type=pdf

Sunday, October 23, 2011

Software metrics: A roadmap (review)

Software Metrics: A Roadmap (review):
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.374&rep=rep1&type=pdf

On a Sunday evening I decided to settle for some easy reading about metrics and searched for "dependability" metrics, hoping this would yield interesting reading... Well, in the end I settled for reading the classic roadmap once again.

The article is very good and I sincerely recommend reading the example about road traffic (the link above takes you directly to the paper).

In short, the example talks about the "undesired" behavior yielded by blindly obeying the statistics - if one wanted to opt for "safe" driving, one should drive during the winter when the weather is dreary. Obviously this is not true, but this is what the data suggests.

In industrial studies, however, we can often observe the need for metrics, statistics, and indicators that would replace common sense and could be blindly obeyed. However, it is (almost) impossible to provide such metrics.

What I really like about this article is the set of causality aspects of metrics, the uncertainty of data and (above all) the need to complement quantitative data with other sources of information - IMHO "gut feeling".

Have a nice Sunday reading...

Friday, October 21, 2011

Tracking the Evolution of Object-Oriented... (paper review)

Tracking the evolution of object-oriented metrics... (paper review)
http://www.springerlink.com/content/ml1ml2053441g898/

This paper shows trends in how CK OO metrics evolve during multiple iterations in Agile projects. The trends look interesting, but the paper cannot draw conclusions about what the trends mean.

What I find as the most interesting:
- the paper presents trends and focuses on trends (the values are not important)
- the trends are not always growing - i.e. there are iterations where refactoring is applied and this improves the situation.

I think this approach is in line with the Agile thinking - do not compute the metric once, do it iteratively. I wonder how many Agile-thinking companies do it the same way.
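
As an illustration of that iterative mindset (my own sketch, not the paper's method), one could track a CK metric per iteration and look only at the trend, e.g. the sign of a least-squares slope; the WMC values below are made up.

```python
# Minimal sketch (not the paper's method): track a CK metric, e.g. average WMC,
# per iteration and look only at the trend (sign of a least-squares slope),
# not at the absolute values.
def slope(values):
    """Least-squares slope of values against iteration index 0..n-1."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical average WMC per iteration; the dip could indicate a refactoring iteration.
wmc_per_iteration = [12.1, 12.8, 13.5, 11.9, 12.4, 13.0]
trend = slope(wmc_per_iteration)
print(f"slope: {trend:+.3f} ({'growing' if trend > 0 else 'not growing'})")
```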

This paper can be used as an example of what kind of values OO metrics can have in Agile projects. The good thing is that the study was not based on a single project, but on many projects.

Monday, October 17, 2011

Benchmarking for Agility (paper review)

Benchmarking for Agility (paper review)
http://dx.doi.org/10.1108/14635770110389816

Benchmarking in the software engineering industry is becoming an issue. Companies want to compare their operations across sectors and see whether they can be better. Telecom companies want to benchmark their practices against automotive companies, aerospace companies against telecom companies, and so on.

I've looked at this article to see what kinds of aspects are important in benchmarking. I must say that the title promises much, but does it deliver?

In general - yes - but in particular - no. I expected to see metrics which one could use for benchmarking, but instead I read that one should have "appropriate metrics" and "appropriate tools". In order to find the tools, I needed to look at the sources - and there they were.

However, I must say that the paper offers a number of measures in categories like:
  • Technology: market shares
  • Demand: Product variety - number of product line families
  • Process change: levels of management or cost to relocate processes
I recommend this paper when one needs "food-for-thought", but not as a definitive guide. It is way too abstract for that.

Tuesday, October 11, 2011

Understanding motivators... (paper review)

Understanding Motivators and De-motivators for Software Engineers – A Case of Malaysian Software Engineering Industry (paper review)

DOI: 10.1007/978-3-642-22203-0_18
Link: http://www.springerlink.com/content/w562524642r27624/

Recently I've become quite interested in the social aspects of software engineering. Why do people work as programmers, designers, testers...? What is it that is so challenging about these jobs? If we compare it to other engineering fields, the software itself is never seen. When it comes to cars, the look and the performance are what sell the car, and that's what makes people work for car companies - they can do something cool.

So, recently I've come across this paper, and it looks like the motivators in the SE field are exactly the same as in other fields - technical challenges, recognition, etc. It is the same thing Humphrey noticed in Managing Technical People.

Impact on experimentation: quite often we see papers where authors try to assess the experience of software developers when completing a task in an experiment. Many use Likert-scale-like measures and try to capture competence. In this paper, we can see a number of interesting aspects that we should measure - are people satisfied with their jobs? If so, they will do better in the experiment than people who are not motivated. Do we promote experiments as broadening activities, or as something that has to be done?

I think this paper opens up a number of considerations in relation to empirical methods in software engineering - we should be better at capturing that!

Sunday, October 9, 2011

Real analytics...

Real analytics (by Deloitte):
http://www.deloitte.com/view/en_US/us/Services/consulting/all-offerings/hot-topics/technology-2011/858746e243a0e210VgnVCM1000001a56f00aRCRD.htm

Business intelligence tools have been proven to work very well and very poorly - depending on who uses them. Just like Alice who, when asked where she wants to go, says "I do not know" and gets the answer "Then it does not matter which way you choose".

This article describes one of the modern trends in measurement in IT, and in SE in particular. The authors postulate that the era of traditional BI is over, and what comes next is:
  • predictions and simulations - companies want to play with what if ... scenarios
  • social networks and data collection from those - companies recognize that sheer numbers are often as good as social trends and word of mouth

What I particularly like about this article is the implication for such fields as automotive analytics - what the analytics tools should do is what cars do today: predict and avoid accidents, not just inform about them.

Friday, October 7, 2011

Appropriate Agile Measurement...

Appropriate Agile Measurement...
http://doi.ieeecomputersociety.org/10.1109/AGILE.2006.17

A very interesting article about how to measure business/customer value in the context of Agile software development. A few interesting aspects:
- they distinguish between metrics and diagnostics (just like metrics and statistics)
- they focus on measuring outcome and not output (i.e. result and not process)
- they postulate automation and ease of collection as guiding principles

I would also like to point out that this article, in contrast to many others, talks about customers and value explicitly - this is not a very common approach.

What I lack is the relation to ISO/IEC 15939.

Thursday, October 6, 2011

Comparing SW metric tools

Comparing software metrics tools
http://dx.doi.org/10.1145/1390630.1390648

I was looking for a paper the other day that would describe metric suites and I found this one (by accident). I've read it and it looks like it should be an inspiration for many master's students. I see tons of thesis proposals that want to compare tools in one way or another - this is the way to do it.

There are a few things that I wanted to stress about this paper:
  • The method of comparison: they've turned a rather straightforward task into something interesting - defined a hypothesis and conducted a quasi-experiment to evaluate it
  • The metric suite - although the metrics are well known, this paper shows that the measurement method (in the ISO/IEC 15939 sense) still differs a lot between the tools - nice!
  • The link to quality aspects (although not perfect) - this makes the comparison much "deeper" than just dry number-crunching.
Finally, I think that the use of ISO 9126 is a bit outdated - wake up: ISO/IEC 25000 has been released!

Monday, October 3, 2011

Motivations and measurements in an Agile case study (paper review)

Motivations and measurements in an Agile case study (paper review):
http://dx.doi.org/10.1016/j.sysarc.2006.06.009

This is a very interesting paper for those who would like to know more about transitioning to Agile. The paper lists and evaluates a number of measures used in Agile teams. It looks at sociological factors (like team education level), technological aspects (like number of changed classes), and more.

What is good about this paper is the fact that the metrics used in the case study could be used to control trends in Agile/XP teams. I would be very interested myself to see whether the team education level and the number of changed classes are correlated. How about defect count vs. team experience? I guess I would need to find another paper about that, though...

What is very good about this paper is that it is an experience report - not a scientific/theoretical exercise, but the real deal. One might criticize it, but it is better to learn from it.

The paper is worth reading and worth using :)
