Monday, April 30, 2012

Why can't we predict software defects...

A Critique of Software Defect Prediction Models (paper review)...
https://www.eecs.qmul.ac.uk/~norman/papers/defects_prediction_preprint105579.pdf

In this classic paper, Fenton et al. present a number of good arguments for why predicting defects is so difficult. Although lengthy, the paper constitutes a great review of contemporary attempts at defect prediction (contemporary as of 1999).
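
To make the critique concrete, here is a minimal sketch (my own illustration with made-up data, not code from the paper) of the kind of naive size-based model the authors argue against:

```python
# A toy sketch of a naive size-based defect model: regressing defect
# counts on module size alone. All module names and numbers are made up.

# (module, size in KLOC, defects found in test)
history = [("parser", 12.0, 34), ("ui", 8.5, 21),
           ("net", 15.2, 40), ("db", 5.1, 9)]

xs = [kloc for _, kloc, _ in history]
ys = [defects for _, _, defects in history]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)

# Ordinary least-squares fit: defects = intercept + slope * KLOC
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Extrapolating to a new 10 KLOC module - exactly the kind of naive
# prediction the paper warns about: it ignores testing effort, usage
# profile and the difference between defects and failures.
print(f"predicted defects: {intercept + slope * 10.0:.1f}")
```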

A very solid read which will be of great use for researchers and practitioners interested in predicting defects.

@MiroslawStaron

Sunday, April 29, 2012

Visual analytics - survey at Microsoft (paper review)

Visual analytics...
http://thomas-zimmermann.com/publications/files/buse-icse-2012.pdf

I came across this paper by accident while browsing one of my favorite places - Microsoft Research. The paper shows what developers and managers expect from visual analytics. Examples of the aspects covered are:
- targeting testing,
- triggering refactoring,
- release planning,
- targeting training,
- ...

Interesting take on metrics - and a nice piece of empirical research

@MiroslawStaron

Saturday, April 14, 2012

Guiding your test based on faults ...

Fault-based test suite prioritization for specification-based testing
http://dx.doi.org/10.1016/j.infsof.2011.09.005

The question of which tests one should prioritize is an important one. Companies usually use test coverage as the guiding metric - generally, the higher the coverage, the better. Smarter companies use metrics like test failure rate - when a test fails often, it is executed more often.

This paper, however, shows a different approach: a technique that bases test prioritization on how well the test cases detect faults.
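
As a toy illustration of the difference (this is not the paper's actual technique; test names and scores are invented), compare the coverage-driven and fault-driven orderings:

```python
# A toy comparison of the two prioritization strategies mentioned
# above: coverage-driven versus fault-detection-driven ordering.

tests = [
    # (test name, statement coverage %, estimated fault-detection score)
    ("test_login",    62.0, 0.9),
    ("test_checkout", 81.0, 0.4),
    ("test_search",   45.0, 0.7),
]

# Common industry heuristic: run high-coverage tests first.
by_coverage = sorted(tests, key=lambda t: t[1], reverse=True)

# Fault-based heuristic: run the tests most likely to reveal faults first.
by_fault_score = sorted(tests, key=lambda t: t[2], reverse=True)

print([name for name, _, _ in by_coverage])     # checkout, login, search
print([name for name, _, _ in by_fault_score])  # login, search, checkout
```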

Interesting work, which I will try myself in the near future...

@MiroslawStaron

Thursday, April 12, 2012

Which modules are risky...

How do you know which modules/components/subsystems are risky?
http://www.springerlink.com/content/r60258615x8l1877/fulltext.pdf

The question from the title bothers many test leaders, project managers and quality managers. The reason is that this information varies over time. Once-complex modules can reach high quality if no changes are made... simple modules can decay over time, and new features can make modules grow in strange ways.
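
To illustrate why the risk picture shifts over time, here is a toy scoring sketch under weights and data I assumed myself - not the model from the paper:

```python
# A toy heuristic: score module risk from complexity *and* recent
# churn, so a complex but untouched module can rank below a simple,
# rapidly changing one. All names and weights are assumptions.

modules = {
    # name: (cyclomatic complexity, commits in the last 3 months)
    "legacy_core": (120, 0),   # once complex, but stable for years
    "new_feature": (25, 40),   # simple, but changing every day
}

def risk(complexity: int, recent_commits: int) -> float:
    # Weight churn heavily: the risk picture changes as modules change.
    return 0.3 * complexity + 2.0 * recent_commits

for name, (cx, churn) in sorted(modules.items(),
                                key=lambda kv: -risk(*kv[1])):
    print(f"{name}: risk = {risk(cx, churn):.1f}")
# new_feature: risk = 87.5  <- riskier despite being "simple"
# legacy_core: risk = 36.0
```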

This is an interesting paper, but it takes a while to "digest".

@MiroslawStaron

Making effective decisions in Agile software teams - what's the problem?

Obstacles to decision making in Agile software development teams
http://dx.doi.org/10.1016/j.jss.2012.01.058

I often look at metrics research and discuss the connections between metrics and decisions. In the "old days" managers were concerned with these issues, but in the Agile world it seems that metrics-driven decisions are often part of the teams' own work.

This paper discusses what kinds of decisions teams make and what the obstacles to making those decisions effectively are. For example, one of the obstacles is that collaborative decision making can prevent experts from having their voice heard...

Recommended reading for those who often wonder why some metrics are better for decisions than others.

@MiroslawStaron

Monday, April 2, 2012

How to put together a great Agile team...

How to Balance the Size and Skills of Your Agile Team
http://my.gartner.com/resources/220700/220700/how_to_balance_the_size_and__220700.pdf?li=1

Gartner did a great job of showing the typical skills of Agile software development teams. They've looked at a number of disciplines and drawn conclusions such as:
Master (not automatically ScrumMaster):
  • Experience: Five to eight years
  • Productivity: Very high, 18 to 25 function points (FPs) per staff month

They also have tables that show how the teams are usually composed and what that might mean.
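
As a quick back-of-the-envelope use of those figures (the team composition below is my own assumption; only the productivity range comes from the report):

```python
# Back-of-the-envelope use of Gartner's productivity figure. Only the
# 18-25 FP per staff-month range is from the report; the rest is an
# assumed example.

masters = 2                           # developers at "master" level
fp_per_staff_month = (18 + 25) / 2    # midpoint of the reported range

print(f"~{masters * fp_per_staff_month:.0f} function points per month")
# -> ~43 function points per month from the two masters alone
```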

Interesting reading

@MiroslawStaron