
How many parsing errors are acceptable to get good technical debt results? #228

Closed
guwirth opened this issue Jun 7, 2014 · 5 comments

@guwirth
Collaborator

guwirth commented Jun 7, 2014

Using sonar.cxx.includeDirectories and sonar.cxx.forceIncludes in the right way results in fewer parsing errors.

On the other hand, analysis time increases because of these additional includes. During performance testing I found that the most important factor is how many includes exist and how complex they are. For example, integrating the complex Boost headers slows things down.

In my case the resulting technical debt was always the same, with and without these includes. So I'm wondering whether it would not be better to leave out system includes and external libraries?

What is your experience?
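For context, both properties are set in sonar-project.properties. A minimal sketch, assuming typical values; all paths and the project key here are placeholders, not taken from this issue:

```properties
# sonar-project.properties (illustrative values only)
sonar.projectKey=my:project
sonar.sources=src

# Directories the preprocessor searches to resolve #include directives.
# Adding system/third-party paths (e.g. Boost) reduces parsing errors
# but can slow the analysis down considerably.
sonar.cxx.includeDirectories=/usr/include,src/include,third_party/boost

# Headers force-included into every translation unit,
# e.g. a header defining build-configuration macros.
sonar.cxx.forceIncludes=src/config.h
```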

@wenns
Contributor

wenns commented Jul 2, 2014

Just a side note: to get good technical debt results in the presence of 'parsing error' violations, the remediation cost of the latter should probably be set to zero. Otherwise we're charging the technical debt of the codebase with the technical debt of the project setup and of our grammar ;)

@jmecosta
Member

jmecosta commented Jul 2, 2014

Characteristics for the 'parsing error' rule have not been defined, so we are in the clear.

@guwirth
Collaborator Author

guwirth commented Jul 2, 2014

My conclusion: as long as only external tools are used and not the checks based on the AST, parsing errors don't matter. In that case sonar.cxx.includeDirectories and sonar.cxx.forceIncludes are not needed either; not setting them reduces analysis time.

@guwirth
Collaborator Author

guwirth commented Jul 4, 2014

See also the discussion in #237 and the comment by @wenns:

We already use it for a couple of metrics and we will use it even more. The AST matters.

I'm not sure if this is the right way, or whether we should rather rely on external tools. Looking at the current checks, they are all very special, or existing static code analyzers are doing the same.

I'm actually not talking about the built-in checks, but about all the features which are based on the AST. This includes most of the size-like metrics (#locs, #statements/classes/methods), complexity, custom XPath rules, design metrics, public API detection etc.

And more is coming: all the cartography stuff on SonarSource's roadmap requires an accurate program model, which in turn has to build upon the AST.

Parsing and building the AST isn't an end in itself; it enables many things, many of which we get 'for free' through the usage of the SQ platform or SQ libs. When the goal is to achieve accurate measures, an accurate and 'high quality' AST is a precondition. Or at least I believe so. The 'just get it all parsed' goal is probably somewhat short-sighted; a point may come when we will be thinking about the size/shape/'quality' of our AST...

@guwirth guwirth closed this as completed Jul 4, 2014
@guwirth guwirth added this to the M 0.9.2 milestone Jan 5, 2015
@guwirth guwirth self-assigned this Jan 5, 2015
@jmecosta
Member

@guwirth seems we all agree that parser errors should not produce technical debt. This needs to be changed before the release.
