IWSED-95: International Workshop on Software Engineering Data
Problems and solutions in measurement programs
Reported by Reidar Conradi
and Jyrki Kontio (Note: this section is incomplete; the report was
prepared from presentation notes.)
Participants:
- Shari Lawrence Pfleeger, Systems/Software, co-chair
- Shingo Takada, NAIST, co-chair
- Andreas Birk, University of Kaiserslautern
- Takamasa Nara, Hitachi
- Tetsuro Nose, Hitachi
- Filippo Lanubile, University of Maryland
- Christopher Lott, University of Kaiserslautern
- Chris Samelson, Motorola
- Carolyn Seaman, University of Maryland
- Barry Shostak, CAE Electronics
- Inderpreet S. Thukral, IBM Consulting Group
- Petri Vesterinen, Nokia
Working group goal and issues addressed
- How to obtain management buy-in, how to justify cost and overhead
to developers
- Confidentiality of the data
- How to utilize distant data: from history, from other departments,
organizations, or domains; this calls for the definition and use
of context data
- Organization and logistics of data collection: what to automate
and how
- Database design and implementation: how perfect does it need to
be, and what is a reasonable time and cost for implementation?
- Process definition and metrics: "the data is only as
good as the process definition it is based on."
Discussion
The working group pointed out the following issues:
- There are many levels of measurement.
- Is good analysis possible if the data are unreliable?
- Lack of context makes interpretation of data difficult.
- There is a lack of planning models and tools.
- Tool access: cross-platform availability, expense.
- Comparison across projects/organizations/companies (generic
metrics).
- Ways of analyzing and quantifying lessons learned.
- Data definition/validation.
- Legality and confidentiality: "big brother" fear.
- Who is best placed to implement measurement?
- New development paradigms need new measurements: OO, reuse.
- Identifying and evaluating practices.
- Modeling/quantifying relationships.
State-of-the-practice:
- Metrics on effort, size (function points), defects, and schedule
(a worked function point example follows this list).
- (Inadequate) effort and schedule estimation.
- GQM/MQG: top-down vs. bottom-up?
- Planning tools: AMI, Metricate.
- Collection and analysis tools:
  - Amadeus, databases, statistical packages, spreadsheets.
  - Static code analysis: QAC, McCabe, Logiscope.
- Analysis as you go.
- Benchmarking.
- Separate metrics group.
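The function point items above are listed without detail. As a purely
illustrative aid, the sketch below shows a standard IFPUG-style
calculation; the component counts, the characteristic ratings, and the
use of average-complexity weights throughout are assumptions for the
example, not data from the workshop.

```python
# Illustrative IFPUG-style function point calculation.
# Counts and ratings below are hypothetical.
WEIGHTS = {                      # average-complexity weights per component type
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def adjusted_function_points(counts, gsc_ratings):
    """counts: components per type (average complexity assumed);
    gsc_ratings: 14 general system characteristics, each rated 0-5."""
    ufp = sum(WEIGHTS[k] * n for k, n in counts.items())  # unadjusted FP
    vaf = 0.65 + 0.01 * sum(gsc_ratings)                  # value adjustment factor
    return ufp * vaf

counts = {"external_inputs": 20, "external_outputs": 15,
          "external_inquiries": 10, "internal_logical_files": 8,
          "external_interface_files": 4}
print(adjusted_function_points(counts, [3] * 14))  # 303 * 1.07 = 324.21
```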
Existing solutions:
- Goal-driven metrics program (a GQM sketch follows this list).
- Comparing improvement percentages.
- HISSA database (from NIST, US government) of existing technologies
(http://hissa.ncsl.nist.gov/, under Technical Publications).
- Evaluation of success of metrics programs.
- Shared vision of better dissemination of data:
  - SPINs, newsletters, bulletin boards, databases, ...
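To make "goal-driven metrics program" concrete, here is a minimal
sketch of a GQM (Goal/Question/Metric) plan recorded as a plain data
structure. The goal, questions, and metrics are invented for
illustration; they are not an example from the workshop.

```python
# A goal-driven (GQM) measurement plan as a plain data structure.
# Goal, questions, and metrics are hypothetical.
gqm_plan = {
    "goal": "Characterize defect density to improve inspections",
    "questions": {
        "Q1: What is the defect density per module?": [
            "defects found per module",
            "module size (LOC or function points)",
        ],
        "Q2: Where are defects introduced and detected?": [
            "phase of defect injection",
            "phase of defect detection",
        ],
    },
}

# Top-down use: every metric is justified by a question, and every
# question by the goal, before any data collection starts.
for question, metrics in gqm_plan["questions"].items():
    print(question)
    for metric in metrics:
        print("  -", metric)
```

Working top-down this way ties each metric to a stated goal; the
GQM/MQG question noted under state-of-the-practice asks whether
programs instead start bottom-up from whatever data already exists.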
Potential areas for new solutions:
- Standards.
- Attribute focusing (IBM Toronto).
- Experimentation and evaluation.
- CSCW (computer-supported cooperative work).
- Data warehousing, "enterprise information".
- Statistical process control (see the sketch after this list).
- Paradigms from other disciplines.
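As one concrete reading of the statistical process control item, the
sketch below computes 3-sigma limits for a c-chart of defect counts
per inspection. The choice of chart and the data are illustrative
assumptions, not something the group specified.

```python
import math

def c_chart_limits(defect_counts):
    """Center line and 3-sigma limits for a c-chart (defect counts
    per equal-sized inspection unit, Poisson assumption)."""
    c_bar = sum(defect_counts) / len(defect_counts)  # average defects per unit
    sigma = math.sqrt(c_bar)
    ucl = c_bar + 3 * sigma
    lcl = max(0.0, c_bar - 3 * sigma)
    return lcl, c_bar, ucl

# Hypothetical defect counts from ten code inspections.
counts = [4, 7, 5, 6, 3, 8, 5, 4, 6, 5]
lcl, center, ucl = c_chart_limits(counts)
print(f"LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}")
# A count above the UCL would single out an inspection for follow-up.
```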
Miscellaneous:
- How to measure reuse?
- Key characteristics of measurement personnel?
Conclusions
<TBD>
Updated 06-Mar-96 by Jyrki Kontio