Subj : Re: curve for verbosity in a language
To   : comp.programming
From : pantagruel
Date : Sun Aug 07 2005 03:17 am

Well, apropos your point, this is basically why I argued for a whitespace-normalized character count. The salient point, though, is not to achieve a perfect metric, but rather a metric whose shortcomings are well understood, so that one could analyse code in one language and then have a list of characteristics of that particular language that would affect the analysis. As such the LOC metric, although imperfect, is also quicker, and its drawbacks are reasonably well understood.

Perhaps, however, true verbosity is a combination of token counts and average line usage. Thus, analyse programs written to the problem for tokens, and analyse a dataset of 'average' programs in the languages for LOC. The results of the token analysis are weighted higher, but the graph of LOC can also be significant. Obviously LOC becomes more pertinent as we get into programs that run over many pages, especially if, as is often the case with J or K, the competing programs in those languages can be read as a couple of pages of printout.

As noted by Graham: "total effort = effort per line x number of lines"

The traditional line layout of a language may exist because of legibility problems with the language without it. This has bearing on the problem because verbosity in a language is often directly related to the comprehensibility of any segment of code. Graham suggests that the comprehension of code in its totality is more at issue, but if any segment is nigh incomprehensible, then how comprehensible can the total be?
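
The blended metric above could be sketched roughly as follows. This is only an illustration, assuming a crude regex tokenizer; the function names and the 0.7/0.3 weights are invented for the example, not taken from any standard:

```python
import re

def normalized_char_count(src: str) -> int:
    # Character count after collapsing each run of whitespace to one space,
    # so indentation style does not inflate the measure.
    return len(re.sub(r"\s+", " ", src.strip()))

def token_count(src: str) -> int:
    # Crude tokenizer: runs of word characters, or single punctuation chars.
    return len(re.findall(r"\w+|[^\w\s]", src))

def loc(src: str) -> int:
    # Non-blank source lines.
    return sum(1 for line in src.splitlines() if line.strip())

def verbosity(src: str, w_tokens: float = 0.7, w_loc: float = 0.3) -> float:
    # Weighted blend: token analysis weighted higher, LOC still contributing.
    return w_tokens * token_count(src) + w_loc * loc(src)

sample = "total = 0\nfor x in xs:\n    total += x\n"
print(token_count(sample), loc(sample), verbosity(sample))
```

Running this on the three-line sample gives 12 tokens and 3 LOC, so the blended score is 0.7*12 + 0.3*3 = 9.3. In practice one would normalize both counts against a corpus of 'average' programs per language before weighting, as suggested above.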