Saturday, November 9, 2013

Visualization of Defect Inflow and Resolution Cycles: Before, During and After (review)...

Visualization of Defect Inflow and Resolution Cycles: Before, During and After Transfer
http://www.bth.se/fou/forskinfo.nsf/0/b3767af60ecfd34dc1257c10007c060f/$file/BTH_Visualization_of_Defect_Inflow_and_Resolution_Cycles.pdf

When making decisions about outsourcing there is always some degree of uncertainty about what the effects will be. Will the quality be the same? Will the cost be lower? ...

In this paper the authors looked at the efficiency of defect management/resolution processes and visualized them. The results show that during and immediately after the transfer the defect inflow is higher, bottlenecks are more visible, and defect resolution cycles are longer.

The authors identify a set of factors which can explain some of the trends in the defect inflow/removal processes - e.g. competence, unstable maintenance team, etc.
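
The visualizations in the paper are built from the project's defect database. A minimal sketch of the general idea - plotting weekly defect inflow against weekly resolution from a simple defect log - could look like the code below (the CSV format and field names are my own assumptions, not the authors' tooling):

```python
# Minimal sketch: weekly defect inflow vs. resolution from a defect log.
# Assumes a CSV with "opened" and "closed" ISO-date columns - not the authors' tooling.
import csv
from collections import Counter
from datetime import datetime

import matplotlib.pyplot as plt


def week(date_str):
    """Return an ISO year-week label, e.g. '2013-W02'."""
    iso = datetime.strptime(date_str, "%Y-%m-%d").isocalendar()
    return f"{iso[0]}-W{iso[1]:02d}"


inflow, resolved = Counter(), Counter()
with open("defects.csv", newline="") as f:
    for row in csv.DictReader(f):
        inflow[week(row["opened"])] += 1
        if row["closed"]:                       # still-open defects have an empty field
            resolved[week(row["closed"])] += 1

weeks = sorted(set(inflow) | set(resolved))
plt.plot(weeks, [inflow[w] for w in weeks], label="defect inflow")
plt.plot(weeks, [resolved[w] for w in weeks], label="defects resolved")
plt.xticks(rotation=90)
plt.legend()
plt.title("Weekly defect inflow and resolution")
plt.tight_layout()
plt.show()
```

A growing gap between the two curves is exactly the kind of bottleneck the paper makes visible around the transfer.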

Very interesting article with a solid empirical focus.



Monday, September 16, 2013

What's important for continuous integration...

review of "Modeling Continuous Integration Practice Differences in Industry Software Development", http://dx.doi.org/10.1016/j.jss.2013.08.032

I got this article via ScienceDirect alerts and was a bit skeptical about the method - a literature review - but when I read the paper it turned out to be great stuff.

The authors look at what is important in continuous integration - see top 3:
- build duration
- build frequency
- build triggering

and they also looked at the meaning of common phrases - e.g. what counts as a failure or a success of continuous integration. It turns out it is not that simple...
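
To make the first two of these concrete, here is a minimal sketch of how build duration and build frequency could be computed from a list of build records (the record format below is my assumption for illustration, not something taken from the paper):

```python
# Minimal sketch: build duration and build frequency from build records.
# The record format (start, end, trigger) is an assumption for illustration.
from datetime import datetime
from statistics import mean

builds = [
    {"start": "2013-09-10T08:00", "end": "2013-09-10T08:25", "trigger": "commit"},
    {"start": "2013-09-10T12:00", "end": "2013-09-10T12:31", "trigger": "commit"},
    {"start": "2013-09-11T02:00", "end": "2013-09-11T02:55", "trigger": "nightly"},
]

FMT = "%Y-%m-%dT%H:%M"
durations = [
    (datetime.strptime(b["end"], FMT) - datetime.strptime(b["start"], FMT)).total_seconds() / 60
    for b in builds
]
days = {b["start"][:10] for b in builds}

print(f"average build duration: {mean(durations):.1f} minutes")
print(f"build frequency: {len(builds) / len(days):.1f} builds per day")
print("build triggers used:", sorted({b["trigger"] for b in builds}))
```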

Generally very interesting article with a lot of insights.

Friday, August 23, 2013

A teamwork model for understanding an agile team... review


A teamwork model for understanding an agile team...
http://www.sciencedirect.com/science/article/pii/S0950584909002043

I came across an article which analyzed a number of situations in a single Agile/Scrum team. I was skeptical at the beginning, but once I got to browse it my attention was drawn to the quotes from the team.

In the end I read the whole paper and liked the team's retrospective - for example, how they evolved to work together in the first project, how they shared risks and opportunities, and a few more things.

I really recommend it!

Sunday, June 30, 2013

Sensing high performance software teams

Interesting article on measuring team performance

The article shows the design of a survey that would help to continuously monitor software teams and their performance. It sounds a bit like the old-fashioned PSP or TSP, but it might actually work. I'm looking forward to more case studies with this method.

The article contains two case studies, but more would be welcome, and a more thorough analysis of them together with a proper description of the context would be great.

Clone detection at Microsoft

Detecting clones at Microsoft seems to be a planned and well-executed activity...

I was looking at clone detection research recently and found a new development from Microsoft. It seems that the company puts a lot of effort into reducing waste in their source code and working with refactoring. They report an impressive number of downloads of the tool and show why they use it.

Unfortunately there are no real results on how much improvement was introduced thanks to the tool. Perhaps that will come later.

Saturday, June 29, 2013

Review of thesis on measuring sw architectures...

Measuring sw architectures
http://www.st.ewi.tudelft.nl/~bouwers/main/papers/2013thesis_EricBouwers.pdf

I came across this thesis and found a nice part about measuring architectures. Somewhere in the middle there is also interesting reading about metrics and their connection to steering people.


Wednesday, June 19, 2013

Code smells as system-level indicators of maintainability: An Empirical Study

Code smells as system-level indicators of maintainability: An Empirical Study
http://dx.doi.org/10.1016/j.jss.2013.05.007

Aiko did a very interesting study on code smells - they compared a number of systems that had previously been studied and evaluated using CK metrics and expert judgements. The new evaluation was based on code smells.

The results are interesting in the sense that code smells are strongly correlated with expert judgements of code quality. This means that code smells can be used as indicators of the well-known "gut feeling" of experts - only more quantitative.
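
The kind of analysis behind such a claim can be illustrated with a short sketch - a rank correlation between per-system smell density and expert ratings (the numbers below are invented; this is not the authors' data or pipeline):

```python
# Minimal sketch: rank correlation between code-smell density and expert judgement.
# The data points are invented for illustration; they are not from the paper.
from scipy.stats import spearmanr

# code smells per KLOC for a handful of systems
smell_density = [12.3, 4.1, 8.7, 15.0, 2.2]
# expert maintainability rating for the same systems (1 = poor, 5 = excellent)
expert_rating = [2, 5, 3, 1, 5]

rho, p_value = spearmanr(smell_density, expert_rating)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A strongly negative rho means more smells go together with lower expert ratings.
```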

Very interesting and recommended reading!

@MiroslawStaron

Saturday, June 15, 2013

Metrics suite to measure Agile transformation...

Metrics for Agile transformation

In this article the authors elicit a number of metrics for measuring the effect of changing software development processes from plan-driven to agile. The metrics are not really surprising (sorry), but I like the framework in which they are defined.

The framework tries to capture what can go right and what can go wrong in a transformation and thus puts the metrics in context. I just wish the authors had used ISO 15939 instead of GQM.

Examples of the metrics are:
- business value measured in connection to work effort,
- commit pulse (see the sketch below),
- flow (hey, I know this one),
...
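
As an illustration of the commit pulse idea, here is a minimal sketch that counts commits per day from the output of git log (a toy interpretation of mine, not the authors' definition of the metric):

```python
# Minimal sketch: "commit pulse" as commits per day, read from git log.
# A toy interpretation of the metric, not the authors' definition.
import subprocess
from collections import Counter

dates = subprocess.run(
    ["git", "log", "--date=short", "--pretty=format:%ad"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

pulse = Counter(dates)                      # date -> number of commits
for day in sorted(pulse):
    print(f"{day}  {'#' * pulse[day]}")     # a crude textual "pulse" chart
```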

Sunday, May 19, 2013

Factors determining team efficiency for new product development teams

New product development teams and their efficiency...

DOI: 10.1111/j.1540-5885.2012.00940.x

In this article the authors explore a number of empirical studies which map the factors that are important for teams to be efficient. This nice mapping is a good entry point for team leaders to understand what kind of potential they can expect from their teams and to read up on how to inspire that potential.

Very interesting article, but beware that it is not only about software development.

Thursday, May 9, 2013

How reliable are secondary studies...

Reliability of systematic mappings...
http://scholar.google.se/scholar_url?hl=en&q=http://www.wohlin.eu/jss13-2.pdf&sa=X&scisig=AAGBfm0cSB5aghW7O2R8JyrDD3bvNOrqxA&oi=scholaralrt

Reading analyses done by others is very fruitful as it saves time and resources. However, one usually asks oneself how reliable these analyses are. In this paper the authors analyze two very similar studies and show that similar studies have biases and can come to different results.

The paper is interesting for those of us who read systematic reviews and mappings (guilty as charged!)

@MiroslawStaron

Tuesday, April 16, 2013

Optimal size of your testing team...

Experiments on finding the optimal size of the testing team.
http://ac.els-cdn.com/S095058491200239X/1-s2.0-S095058491200239X-main.pdf?_tid=0593a78c-a72a-11e2-864f-00000aacb362&acdnat=1366181150_7dcb3e588a7a1bb3c9ddb1ea5bc99a5a

Many articles have been written discussing the optimal outcome of the test suite, test coverage, test progress, etc.; however, not much has been written on the size of the test team. Since in modern companies human resources can be the main bottleneck, this aspect is of utmost importance.

In this article the authors compare the efficiency of teams of testers and individual testers in finding defects. They also discuss the concept of the optimal test team.

Nice work - it would be great to see what companies say about this model.

@MiroslawStaron

Wednesday, February 27, 2013

Tracking evolution of clones over time...

Tracking the evolution of clones over time...
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4222578

Source code cloning usually goes unnoticed until things start to go bad for the company. Managers tend to look into static code analysis and then realize that they missed the train back when cloning was still manageable and could be contained.

I like this paper since it gives a particular perspective on code clone measurement - tracking clones over time. The method is not trivial to apply, but there is a tool (Java only, though) which can support automated tracking of clones.
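
The underlying idea - running a clone detector over consecutive revisions and charting the result - can be sketched very simply. Below is a toy version that counts exact duplicated 5-line windows of one file across a list of revisions (the tags, the file path and the window size are my assumptions; real clone detectors are far more sophisticated):

```python
# Toy sketch: track exact-duplicate 5-line windows of one file across revisions.
# Real clone trackers are far more sophisticated; this only illustrates the idea.
import subprocess
from collections import Counter

REVISIONS = ["v1.0", "v1.1", "v2.0"]   # hypothetical tags
PATH = "src/module.c"                  # hypothetical file
WINDOW = 5                             # lines per comparison window


def duplicated_windows(source):
    lines = [line.strip() for line in source.splitlines() if line.strip()]
    windows = Counter(
        tuple(lines[i:i + WINDOW]) for i in range(len(lines) - WINDOW + 1)
    )
    return sum(count - 1 for count in windows.values() if count > 1)


for rev in REVISIONS:
    source = subprocess.run(
        ["git", "show", f"{rev}:{PATH}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(f"{rev}: {duplicated_windows(source)} duplicated {WINDOW}-line windows")
```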

I will set it as a project for my students to get in touch with the authors and develop a tool for other languages.

@MiroslawStaron

Using entropy theories to predict bugs...

Using entropy measurements of source code changes to predict bugs.
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6416630&tag=1

This article takes a slightly new angle on the well-known problem of predicting bugs in source code. The authors take source code changes and use them as input to calculate entropy and predict bugs.
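
The entropy idea itself is simple: when the changes in a period are scattered over many files, that period is considered more "complex". A minimal sketch of the basic calculation - Shannon entropy over the share of changes per file - is shown below (the change counts are invented, and this is only the general idea, not the authors' exact model):

```python
# Minimal sketch: Shannon entropy of source code changes across files.
# The change counts are invented; this shows the general idea, not the paper's exact model.
import math

# number of changes per file in one time period
changes = {"parser.c": 12, "lexer.c": 3, "ast.c": 5, "main.c": 1}

total = sum(changes.values())
entropy = -sum((n / total) * math.log2(n / total) for n in changes.values() if n > 0)

print(f"change entropy: {entropy:.3f} bits "
      f"(maximum for {len(changes)} files: {math.log2(len(changes)):.3f})")
# Higher entropy = changes scattered across many files, which the approach links to more bugs.
```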

Based on their results from applying this approach to Mozilla's code, the approach seems promising...

@MiroslawStaron

Wednesday, January 30, 2013

Overview of component metrics...

Overview of component metrics - a systematic review...
http://dx.doi.org/10.1016/j.jss.2012.10.001

Once in a while we come across an interesting study - a study which does the related work for us. This is a very good paper describing a set of metrics for component-based software systems.

Things that are measured (from section 5) are:
- Interfaces
- Interface methods
- Property
- Signatures


Top 4 limitations of the current research in the areas are (I cite):
  • The lack of a widely accepted metric and quality model for CBSE from the components consumer and producer perspectives. This lack may arise because most metrics definitions were performed in an ad-hoc fashion, rather than meeting information requirements of a specific framework upon which we plan to interpret the metric. In the absence of such a framework, the data collection and interpretation of the metric becomes subjective. In addition, most of these proposals have not achieved an industrial level validation.
  • The poor quality of some papers identified in the quality evaluation section, which reduces the trustworthiness of the proposed metrics.
  • The poor quality of some metric definitions, which makes it difficult for researchers or practitioners to ensure the correct collection of measurements that were initially intended by the metrics developers. Overall, many metrics have insufficiencies either in their formulation, collection, validation or applications.
  • The elements of metric definitions those are not visible to CBSS developers, including elements that are incompatible with the standard concepts of a component or CBSS, such as a class or source code.
Interesting...

Monday, January 28, 2013

Survey on testing in Canada...

Survey on testing practices in Canadian software industry
http://dx.doi.org/10.1016/j.jss.2012.12.051

I've written about Agile in Finland; now I've come across testing in Canada. The paper is a replication of another study in Canada and finds that (I cite):
  1. The importance of testing-related training is increasing,
  2. Functional and unit testing are two common test types that receive the most attention and efforts spent on them,
  3. Usage of the mutation testing approach is getting attention among Canadian firms,
  4. Traditional Test-last Development (TLD) style is still dominating and a few companies are attempting the new development approaches such as Test-Driven Development (TDD), and Behavior-Driven Development (BDD),
  5. in terms of the most popular test tools, NUnit and Web application testing tools overtook JUnit and IBM Rational tools,
  6. Most Canadian companies use a combination of two coverage metrics: decision (branch) and condition coverage,
  7. Number of passing user acceptance tests and number of defects found per day (week or month) are regarded as the most important quality assurance metrics and decision factors to release,
  8. In most Canadian companies, testers are out-numbered by developers, with ratios ranging from 1:2 to 1:5,
  9. The majority of Canadian firms spent less than 40% of their efforts (budget and time) on testing during development, and
  10. More than 70% of respondents participated in online discussion forums related to testing on a regular basis.
Very interesting findings indeed! Especially the emerging industrial adoption of BDD and mutation testing approaches.

Wednesday, January 16, 2013

Efficiency in source code clone detection

Efficiency in source code clone detection...
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5279980

This is a very interesting paper about detecting advanced type-3 clones, i.e. clones which are modifications of the original code, not just copy-paste or parameter difference.

They studied a number of projects and came to the conclusion that modern tools can detect only 25% of the type-3 clones. The remaining 75% of the detected clones were false positives.

Depending on the algorithm used, the detected clones have different characteristics, e.g. the length or complexity of the modification.
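
To get a feel for why type-3 clones are hard, here is a toy similarity check between two code fragments after a light normalization (difflib-based; real type-3 detectors use token- or AST-based techniques, and the 0.7 threshold below is purely my assumption):

```python
# Toy sketch: similarity of two code fragments after light normalization.
# Real type-3 clone detectors use token- or AST-based techniques; the 0.7
# threshold is an arbitrary assumption for illustration.
import difflib
import re


def normalize(code):
    """Strip whitespace and replace words/numbers with generic placeholders."""
    lines = []
    for line in code.splitlines():
        line = re.sub(r"\b[a-zA-Z_]\w*\b", "ID", line.strip())
        line = re.sub(r"\b\d+\b", "NUM", line)
        if line:
            lines.append(line)
    return lines


fragment_a = """
def sum_prices(items):
    total = 0
    for item in items:
        total += item.price
    return total
"""
fragment_b = """
def sum_costs(products):
    s = 0
    for p in products:
        s += p.cost * 2   # modified statement, i.e. a type-3 difference
    return s
"""

ratio = difflib.SequenceMatcher(None, normalize(fragment_a), normalize(fragment_b)).ratio()
print(f"similarity: {ratio:.2f}", "-> clone candidate" if ratio > 0.7 else "-> not a clone")
```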

Looking forward to more work along these lines.

@MiroslawStaron

Friday, January 4, 2013

Simple example of ISO 26262 development items

ISO 26262 Impact on Development of Powertrain Control System
http://link.springer.com/content/pdf/10.1007%2F978-3-642-33805-2_58 

In this paper the authors present a simple example of how ISO 26262 can impact the development of a simple function - Start/Stop. The example is limited, but it is easy to understand that for larger and more safety-critical functions the complexity rises. Now, imagine that one re-classifies the function or divides the ASIL levels...

The paper needs more examples and more hard data to make the illustration better, but it contains the basics. I recommend further reading on BeSafe - a project done by Volvo Technology in this area - http://www.kth.se/polopoly_fs/1.354272!/Menu/general/column-content/attachment/Presentation,%20Johan%20Karlsson.pdf

@MiroslawStaron


Thursday, January 3, 2013

Special issue on performance in software development - CFP for April

Information and Software Technology - special issue on Performance in Software Development
http://www.staron.nu/performance_in_sd.htm

We're announcing a special issue of Information and Software Technology. It seems that challenges related to organizational performance are prevalent, and my recent post about motivations in software engineering gained significant attention.

We intend to collect interesting articles in the area and thus we solicit submissions to the special issue with a focus on:

  • Managerial, technical and social aspects of measuring performance of software organization
  • Business aspects of organizational performance measurement
  • Agile and Lean software development and its impact on organizational performance
  • Performance of software development teams and organizations
  • Performance of R&D in software organizations
  • Ability to continuously satisfy customer demands
  • Corporate performance management of software development organizations, teams and supply chains
  • Impact of standardization on operational performance
  • Visualization of organizational performance and its patterns
  • Case studies and experiments of how techniques/methods/technologies influence organizational performance and how it is measured
The submission deadline is the 5th of April 2013. Consider submitting, or stay tuned for the TOC later this year!

@MiroslawStaron


Tuesday, January 1, 2013

Creativity in Agile Software development at BBC

How BBC stimulates creativity - a fresh study to be published in IEEE Software
http://ieeexplore.ieee.org/ielx5/5260979/5260980/05261017.pdf?tp=&arnumber=5261017&isnumber=5260980

This paper shows how the domain influences the process. The Agile process at the BBC includes such things as 'random stars' to reward creative ideas that can fly.

I recommend this article to see how requirements can be collected and prioritized at different companies in an Agile fashion. I like the fact that they also managed to use statistics - numbers are always something that I like.

@MiroslawStaron