Saturday, October 29, 2011

Agile architecting and hurricanes...

What an Agile architect can learn from a hurricane meteorologist... (paper review)
http://dx.doi.org/10.1109/MS.2011.152

For some late-night reading I've chosen to look at IEEE Software's latest issue. What I found was really nice - a paper discussing decision-making and predictions in Agile architecting.

I recommend reading this paper since it discusses the uncertainty of software architecting in an Agile software development environment. The hurricane analogy is, of course, there to grab attention, but the article makes a few nice points:
- predictions: "it's ok to be a little wrong"
- requirements for architects: "should know everything"
- BDUP (Bad Design Up Front) vs. NDUP (No Design Up Front)

Generally, nice food for thought for a Saturday evening...

Friday, October 28, 2011

Feature planning in Agile SD... (paper)


Feature planning in distributed Agile software development...
http://www.springerlink.com/content/978-3-642-20676-4/#section=891594&page=1&locus=0

A standard problem in Agile software development with multiple parallel teams is the dependencies between teams. Teams that develop features (or parts of features) which depend on each other cannot work in parallel. Parallel work then becomes a recipe for disaster, or at least a serious headache...

I've looked for support in this matter for a while now, and this is one of the very good articles that I can recommend. The article shows how distributed teams can plan feature development to avoid parallel development of the same functionality.

In a nutshell: the authors use the concepts of feature architectural similarity analysis and feature chunk construction to calculate distances between features, and use those distances as input for feature planning. The idea is pretty simple, although there is some maths in the paper.
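To make the feature-distance idea concrete, here is a minimal sketch of how I read it - assuming each feature is described by the set of architectural components it touches. The feature names, component sets, and the threshold below are my own inventions for illustration, not the authors' data or algorithm:

```python
# Sketch (my illustration, not the paper's code): features are sets of the
# architectural components they touch; the distance between two features is
# the Jaccard distance of those sets.
from itertools import combinations

features = {
    "F1": {"engine_ctrl", "diagnostics"},
    "F2": {"engine_ctrl", "diagnostics", "hmi"},
    "F3": {"telematics"},
}

def jaccard_distance(a: set, b: set) -> float:
    # 0.0 = identical component sets, 1.0 = fully disjoint
    return 1.0 - len(a & b) / len(a | b)

# pairwise distances between features
distances = {
    (f1, f2): jaccard_distance(features[f1], features[f2])
    for f1, f2 in combinations(features, 2)
}

# naive "chunk construction": architecturally close features go to one team,
# since developing them in parallel on separate sites invites conflicts
THRESHOLD = 0.5  # invented cut-off
for (f1, f2), d in sorted(distances.items(), key=lambda kv: kv[1]):
    verdict = "same team/site" if d < THRESHOLD else "can be parallelized"
    print(f"{f1}-{f2}: distance {d:.2f} -> {verdict}")
```

The paper's actual chunk construction is, of course, more elaborate, but "compute distances, then group close features" is how I understand the core of it.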

Definitely recommended reading for this weekend... although not for a Friday evening.

Wednesday, October 26, 2011

Embedded system design for automotive... (paper review)

In search for useful metrics - embedded system design for automotive applications
http://dx.doi.org/10.1109/MC.2007.344

In my search for metrics to measure the dependability of software systems, I've encountered this interesting article. It has nothing to do with Agile or development processes - it's the real deal: measuring product properties.

The metrics described in this article are mainly related to automotive software development, but they gave me the idea that we should pay close attention to how we can measure dependability in early development phases.

Say that we have an Agile project in the automotive domain: when does it make sense to measure dependability? Which metrics should we collect over time? Which models should we measure?

And most importantly, how do we provide teams with the support to reason about dependability when no product is around...

It's interesting to look into this paper to get a few research ideas out.

Agile methods in European embedded software development companies ... a survey

A survey on the actual use and usefulness of XP and Scrum (paper review)
http://dx.doi.org/10.1049/iet-sen:20070038

Agile methods are here to stay, one would say. However, I often stumble upon the question of how much they are actually used, and how. In this particular paper the authors review the actual use of XP and Scrum in embedded software development companies in Europe.

Some of the interesting findings and reflections:
- the most commonly adopted Agile practice was the use of "open office space" - great, but is that really one of the Agile principles?
- 51% of respondents claimed that core XP practices like pair programming are "rarely" or "never" practiced in their companies - hmmm... interesting

What I found very interesting was the fact that the authors looked at both the use and the usefulness of the practices - and revealed that Agile adoption is not friction-free (ca. 28% of respondents admitted that they had negative expectations of the Agile adoption).

One thing I would like to see a reflection on is the situation in Sweden and Finland, where a number of large companies work according to Agile, e.g. http://onlinelibrary.wiley.com/doi/10.1002/spip.355/abstract.

Monday, October 24, 2011

Software metrics - what every software engineer should know

Software metrics curriculum from SEI:

Quite often we get questions about what kinds of metrics exist and how they should be used. Many companies look for a specific kind of metric, or would simply like to know which metrics they have and which they lack.

I've usually based my answer on the curriculum from SEI's metrics course (available via the link below). The curriculum contains information about metrics for project managers, quality managers, etc. It is not complete, as it is only a curriculum, but it is a very good dictionary of what every software engineer should know about metrics.

IMHO: advanced software engineers should know something about measurement theory as well, but that is a subject for a separate discussion.

My recommendation is to complement this reading with the standards:
- ISO/IEC 15939: Systems and software engineering - Measurement process
- ISO/IEC 25000 series
- ISO 9001:2008 (Quality management systems - Requirements)

For researchers: I recommend section III.
For companies: I recommend sections I and II.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.83.6110&rep=rep1&type=pdf

Sunday, October 23, 2011

Software metrics: A roadmap (review)

Software Metrics: A Roadmap (review):
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.374&rep=rep1&type=pdf

On a Sunday evening I decided to settle for some easy reading about metrics and searched for "dependability" metrics, hoping this would yield something interesting... Well, in the end I settled for reading the classic roadmap once again.

The article is very good and I sincerely recommend reading the example about road traffic (the link above takes you directly to the paper).

In short, the example talks about the "undesired" behavior that comes from blindly obeying statistics - if one wanted to opt for "safe" driving, one should drive during the winter when the weather is dreary, since the data shows fewer accidents then (presumably because fewer people drive in bad weather, and those who do drive more carefully). Obviously the conclusion is not true, but this is what the raw data suggests.
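A toy calculation shows the pitfall - the numbers below are invented for illustration, but the mechanism is the usual one: raw counts ignore exposure.

```python
# Invented numbers illustrating the pitfall: raw accident counts point one way,
# accidents per kilometre actually driven point the other.
seasons = {
    # season: (accidents, million km driven) - hypothetical data
    "summer": (1200, 400.0),
    "winter": (800, 150.0),
}

for season, (accidents, mkm) in seasons.items():
    print(f"{season}: {accidents} accidents, {accidents / mkm:.1f} per million km")
# summer: 1200 accidents, 3.0 per million km
# winter:  800 accidents, 5.3 per million km  <- winter is actually riskier
```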

In industrial studies, however, we can often observe a demand for metrics, statistics, and indicators that would replace common sense and could be blindly obeyed. It is (almost) impossible to provide such metrics, though.

What I really like about this article is its treatment of the causality aspects of metrics, the uncertainty of data, and (above all) the need to complement quantitative data with other sources of information - IMHO, "gut feeling".

Have a nice Sunday reading...

Friday, October 21, 2011

Tracking the Evolution of Object-Oriented... (paper review)

Tracking the evolution of object-oriented metrics... (paper review)
http://www.springerlink.com/content/ml1ml2053441g898/

This paper shows trends in how the CK OO metrics evolve over multiple iterations in Agile projects. The trends look interesting, but the paper cannot draw conclusions about what they mean.

What I find as the most interesting:
- the paper presents and focuses on trends (the absolute values are not important)
- the trends are not always growing - i.e. there are iterations where refactoring is applied, and this does improve the situation.

I think this approach is in line with Agile thinking - do not compute the metric once, do it iteratively. I wonder how many Agile-thinking companies do it the same way.
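As an illustration of "do it iteratively" (my own sketch, not the paper's tooling): loop over the repository's iteration tags and recompute a crude WMC-like proxy - methods per class - at each one. The tag names and the metric choice are assumptions:

```python
# Sketch: track a simple OO metric across iterations instead of computing it once.
import ast
import pathlib
import subprocess

TAGS = ["iter-1", "iter-2", "iter-3"]  # hypothetical iteration tags

def methods_per_class(root: str) -> float:
    # crude WMC proxy: average number of methods defined per class
    classes, methods = 0, 0
    for path in pathlib.Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                classes += 1
                methods += sum(isinstance(n, ast.FunctionDef) for n in node.body)
    return methods / classes if classes else 0.0

for tag in TAGS:
    subprocess.run(["git", "checkout", "--quiet", tag], check=True)
    print(f"{tag}: {methods_per_class('.'):.2f} methods/class")
```

Plotting these values per iteration gives exactly the kind of trend curves the paper discusses - a drop after an iteration hints that refactoring happened.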

This paper can be used as an example of what kinds of values OO metrics can take in Agile projects. The good thing is that the study was not based on a single project, but on many projects.

Monday, October 17, 2011

Benchmarking for Agility (paper review)

Benchmarking for Agility (paper review)
http://dx.doi.org/10.1108/14635770110389816

Benchmarking in the software engineering industry is becoming an issue. Companies want to compare their operations across sectors and see whether they can do better. Telecom companies want to benchmark their practices against automotive companies, aerospace companies against telecom companies, and so on.

I've looked at this article to see which aspects are important in benchmarking. I must say that the title promises much - but does it deliver?

In general - yes - but in particular - no. I expected to see metrics which one could use for benchmarking, but instead I read that one should have "appropriate metrics" and "appropriate tools". In order to find the tools, I needed to look at the sources - and there they were.

However, I must say that the paper offers a number of measures in categories like:
  • Technology: market shares
  • Demand: Product variety - number of product line families
  • Process change: levels of management or cost to relocate processes
I recommend this paper when one needs "food-for-thought", but not as a definitive guide. It is way too abstract for that.

Tuesday, October 11, 2011

Understanding motivators... (paper review)

Understanding Motivators and De-motivators for Software Engineers – A Case of Malaysian Software Engineering Industry (paper review)

DOI: 10.1007/978-3-642-22203-0_18
Link: http://www.springerlink.com/content/w562524642r27624/

Recently I've become quite interested in the social aspects of software engineering. Why do people work as programmers, designers, testers...? What is it that is so challenging about these jobs? If we compare software to other engineering fields, the software itself is never seen. When it comes to cars, the look and the performance are what sell the car, and that's what makes people work for car companies - they can do something cool.

So, recently I've come across this paper, and it looks like the motivators in the SE field are exactly the same as in other fields - technical challenges, recognition, etc. It is the same thing Humphrey observed in Managing Technical People.

Impact on experimentation: quite often we see papers where the authors try to assess the experience of software developers completing a task in an experiment. Many use Likert-scale-like measures and try to capture competence. This paper points to a number of interesting aspects that we should measure: are people satisfied with their jobs? If so, they will likely do better in the experiment than people who are not motivated. Do we promote experiments as broadening activities, or as something that has to be done?
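One way to act on this when analyzing an experiment - a sketch with invented data and variable names, not a prescription - is to treat motivation as a covariate instead of ignoring it:

```python
# Hypothetical sketch (invented data): motivation as a covariate in an
# experiment analysis, so the treatment effect is estimated after accounting
# for how motivated the subjects were.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "treatment":  [0, 0, 0, 0, 1, 1, 1, 1],        # e.g., new tool vs. old tool
    "motivation": [2, 4, 3, 5, 1, 4, 5, 3],        # e.g., Likert job satisfaction
    "score":      [55, 70, 62, 78, 50, 72, 80, 65] # task performance
})

model = smf.ols("score ~ C(treatment) + motivation", data=df).fit()
print(model.params)
```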

I think this paper opens up a number of considerations in relation to empirical methods in software engineering - we should be better at capturing this!

Sunday, October 9, 2011

Real analytics...

Real analytics (by Deloitte):
http://www.deloitte.com/view/en_US/us/Services/consulting/all-offerings/hot-topics/technology-2011/858746e243a0e210VgnVCM1000001a56f00aRCRD.htm

Business intelligence tools have been proven to work very well and very poorly - depending on who uses them. Just like Alice, who, when asked where she wants to go, answers "I do not know" - and gets the reply "Then it does not matter which way you choose".

This article describes one of the modern trends in measurement in IT, and in SE in particular. The authors postulate that the era of traditional BI is over, and what comes next is:
  • predictions and simulations - companies want to play with what if ... scenarios
  • social networks and data collection from them - companies recognize that social trends and word of mouth are often as informative as sheer numbers

What I particularly like about this article is the implication for fields such as automotive analytics - what the analytics tools should do is what cars do today: predict and avoid accidents, not merely report them.
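To make the "what if" point concrete, here is a toy simulation of mine (all numbers invented, not from the article): what happens to the release date if team velocity drops by 10%?

```python
# Toy Monte-Carlo "what if" (my illustration of the predictive-analytics idea).
import random

BACKLOG = 120          # story points left (hypothetical)
VELOCITY_MEAN = 20.0   # points per sprint
VELOCITY_SD = 4.0

def sprints_to_finish(velocity_mean: float, runs: int = 10_000) -> float:
    totals = []
    for _ in range(runs):
        done, sprints = 0.0, 0
        while done < BACKLOG:
            done += max(1.0, random.gauss(velocity_mean, VELOCITY_SD))
            sprints += 1
        totals.append(sprints)
    return sum(totals) / runs

print(f"baseline: {sprints_to_finish(VELOCITY_MEAN):.1f} sprints")
print(f"what-if : {sprints_to_finish(VELOCITY_MEAN * 0.9):.1f} sprints (velocity -10%)")
```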

Friday, October 7, 2011

Appropriate Agile Measurement...

Appropriate Agile Measurement...
http://doi.ieeecomputersociety.org/10.1109/AGILE.2006.17

A very interesting article about how to measure business/customer value in the context of Agile software development. A few interesting aspects:
- they distinguish between metrics and diagnostics (just like metrics and statistics)
- they focus on measuring outcome and not output (i.e. result and not process)
- they postulate automation and ease of collection as guiding principles

I would also like to point out that this article, in contrast to many others, talks about customers and value explicitly - this is not a very common approach.

What I miss is a relation to ISO/IEC 15939.

Thursday, October 6, 2011

Comparing SW metric tools

Comparing software metrics tools
http://dx.doi.org/10.1145/1390630.1390648

I was looking for a paper the other day that would describe metric suites, and I found this one (by accident). I've read it, and it looks like it should be an inspiration for many master's students. I see tons of thesis proposals that aim to compare tools in one way or another - this is the way to do it.

There are a few things that I wanted to stress about this paper:
  • The method of comparison: they've turned a rather straightforward task into something interesting - they defined a hypothesis and ran a quasi-experiment to evaluate it
  • The metric suite: although the metrics are well known, this paper shows that the measurement method (in the ISO/IEC 15939 sense) still differs a lot between the tools - nice!
  • The link to quality aspects (although not perfect) - it makes the comparison much "deeper" than just dry number-crunching.
Finally, I think that the use of ISO 9126 is a bit too old - wake up: ISO/IEC 25000 has been released!
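The point about measurement methods is easy to demonstrate - here is a tiny example of mine (not from the paper) where two equally defensible "LOC" definitions disagree on the very same file:

```python
# Two "LOC" definitions, one file, two numbers - the measurement method matters.
source = '''\
# configuration loader
import json

def load(path):
    """Read settings."""

    with open(path) as f:
        return json.load(f)
'''

lines = source.splitlines()
physical_loc = len(lines)  # every physical line counts
logical_loc = sum(1 for l in lines
                  if l.strip() and not l.strip().startswith("#"))
print(physical_loc, logical_loc)  # 8 vs. 5 for the same code
```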

Monday, October 3, 2011

Motivations and measurements in an Agile case study (paper review)

Motivations and measurements in an Agile case study (paper review):
http://dx.doi.org/10.1016/j.sysarc.2006.06.009

This is a very interesting paper for those who would like to know more about transitioning to Agile. The paper lists and evaluates a number of measures used in Agile teams. It looks at sociological factors (like team education level), technological aspects (like the number of changed classes), and more.

What is good about this paper is the fact that the metrics used in the case study could be used to track trends in Agile/XP teams. I would be very interested myself to see whether the team education level and the number of changed classes are correlated. How about defect count vs. team experience? I guess I would need to find another paper about that, though...
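If one had the raw data from such a report, checking my correlation question would be a one-liner - a sketch with invented numbers, purely to show the mechanics:

```python
# Hypothetical check (invented data): team education level vs. changed classes.
from scipy.stats import spearmanr

education_level = [2, 3, 3, 4, 5, 5, 6]         # e.g., avg. years of higher education
changed_classes = [40, 35, 30, 28, 25, 22, 18]  # per iteration

rho, p = spearmanr(education_level, changed_classes)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```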

What is very good about this paper is that it is an experience report - not a scientific/theoretical exercise, but the real deal. One might criticize it, but it is better to learn from it.

The paper is worth reading and worth using :)

_M