
jPeek is a static collector of Java code metrics.

Motivation: Class cohesion, for example, is considered one of the most important object-oriented software attributes. Over 30 different cohesion metrics have been invented so far, yet calculators are available for almost none of them. The situation with other metrics is very similar. We want to create a tool that makes it possible to analyze code quality more or less formally, with hundreds of metrics. Then we will apply this analysis to different Java libraries, with the intent to prove that the ideas from the Elegant Objects book series make sense.

How to use?

Download this JAR file and then run:

$ java -jar jpeek-0.5-jar-with-dependencies.jar --sources . --target ./jpeek

jPeek will analyze Java files in the current directory. XML reports will be generated in the ./jpeek directory. Enjoy.
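If you prefer to trigger the same analysis from another JVM process (for example, from a build script), a minimal sketch using the JDK's ProcessBuilder could look like this; the jar file name and paths are assumptions, adjust them to your setup:

import java.io.File;
import java.io.IOException;

public final class RunJpeek {
  public static void main(final String... args) throws IOException, InterruptedException {
    // Assumed location of the jPeek "jar-with-dependencies"; adjust as needed.
    final Process proc = new ProcessBuilder(
      "java", "-jar", "jpeek-0.5-jar-with-dependencies.jar",
      "--sources", ".",
      "--target", "./jpeek"
    ).inheritIO().start();
    if (proc.waitFor() != 0) {
      throw new IllegalStateException("jPeek analysis failed");
    }
    System.out.println("Reports are in " + new File("./jpeek").getAbsolutePath());
  }
}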

You can also deploy it as a web service on your own platform. Just compile it with mvn clean package --settings settings.xml and then run it, as the Procfile suggests. You will need a settings.xml file with the following data:

<settings>
  <profiles>
    <profile>
      <id>jpeek-heroku</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
        <sentry.dsn>https://...</sentry.dsn>
        <dynamo.key>AKIAI..........LNN6A</dynamo.key>
        <dynamo.secret>6560KMv5+8Ti....................Qdwob63Z</dynamo.secret>
      </properties>
    </profile>
  </profiles>
</settings>

You will also need these tables in DynamoDB (all indexes must project ALL attributes):

jpeek-mistakes:
  metric (HASH/String)
  version (RANGE/String)
  indexes:
    mistakes (GSI):
      version (HASH/String),
      avg (RANGE/Number)
jpeek-results:
  artifact (HASH/String)
  indexes:
    ranks (GSI):
      version (HASH/String)
      rank (RANGE/Number)
    scores (GSI):
      version (HASH/String)
      score (RANGE/Number)
    recent (GSI):
      good (HASH/String)
      added (RANGE/Number)
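If you prefer to create these tables programmatically rather than through the AWS console, a sketch for the jpeek-mistakes table using the AWS SDK for Java v1 might look like this (the throughput values here are arbitrary assumptions; jpeek-results follows the same pattern with its three indexes):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.GlobalSecondaryIndex;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.Projection;
import com.amazonaws.services.dynamodbv2.model.ProjectionType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;

public final class CreateMistakesTable {
  public static void main(final String... args) {
    final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();
    dynamo.createTable(
      new CreateTableRequest()
        .withTableName("jpeek-mistakes")
        // Every attribute used in a key or index key must be declared.
        .withAttributeDefinitions(
          new AttributeDefinition("metric", ScalarAttributeType.S),
          new AttributeDefinition("version", ScalarAttributeType.S),
          new AttributeDefinition("avg", ScalarAttributeType.N)
        )
        // Primary key: metric (HASH) + version (RANGE).
        .withKeySchema(
          new KeySchemaElement("metric", KeyType.HASH),
          new KeySchemaElement("version", KeyType.RANGE)
        )
        // GSI "mistakes": version (HASH) + avg (RANGE), projecting ALL attributes.
        .withGlobalSecondaryIndexes(
          new GlobalSecondaryIndex()
            .withIndexName("mistakes")
            .withKeySchema(
              new KeySchemaElement("version", KeyType.HASH),
              new KeySchemaElement("avg", KeyType.RANGE)
            )
            .withProjection(new Projection().withProjectionType(ProjectionType.ALL))
            .withProvisionedThroughput(new ProvisionedThroughput(1L, 1L))
        )
        .withProvisionedThroughput(new ProvisionedThroughput(1L, 1L))
    );
  }
}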

Cohesion Metrics

These papers provide a pretty good summary of cohesion metrics:

[izadkhah17] Habib Izadkhah et al.,
Class Cohesion Metrics for Software Engineering: A Critical Review,
Computer Science Journal of Moldova, vol.25, no.1(73), 2017, PDF.

[badri08] Linda Badri et al.,
Revisiting Class Cohesion: An empirical investigation on several systems,
Journal of Object Technology, vol.7, no.6, 2008, PDF.

Here is a list of metrics we already implement (a small worked sketch of the original LCOM follows the list):

[bansiya99] Cohesion Among Methods of Classes (CAMC).
Jagdish Bansiya et al.,
A class cohesion metric for object-oriented designs,
Journal of Object-Oriented Programming, vol. 11, no. 8, 1999, PDF.

[chidamber94] Lack of Cohesion in Methods (LCOM).
Shyam Chidamber et al.,
A metrics suite for object oriented design,
IEEE Transactions on Software Engineering, vol.20, no.6, 1994, PDF.

[aman04] Optimistic Class Cohesion (OCC) and Pessimistic Class Cohesion (PCC).
Hirohisa Aman et al.,
A proposal of class cohesion metrics using sizes of cohesive parts,
Proc. of Fifth Joint Conference on Knowledge-based Software Engineering, 2002, PDF.

[dallal07] Method-Method through Attributes Cohesion (MMAC).
Jehad Al Dallal,
A Design-Based Cohesion Metric for Object-Oriented Classes,
World Academy of Science, Engineering and Technology International Journal of Computer and Information Engineering Vol:1, No:10, 2007, PDF.

[counsell06] Normalized Hamming Distance (NHD).
Steve Counsell et al.,
The interpretation and utility of three cohesion metrics for object-oriented design,
ACM TOSEM, April 2006, PDF.

[sellers96] Lack of Cohesion in Methods 2-3 (LCOM 2-3).
B. Henderson-Sellers et al.,
Coupling and cohesion (towards a valid metrics suite for object-oriented analysis and design),
Object Oriented Systems 3, 1996, PDF.

[wasiq01] Class Connection Metric (CCM).
M. Wasiq,
Measuring Class Cohesion in Object-Oriented Systems,
Master's thesis, King Fahd University of Petroleum & Minerals, 2001, PDF.

[fernandez06] A Sensitive Metric of Class Cohesion (SCOM).
Luis Fernández et al.,
[A] new metric [...] yielding meaningful values [...] more sensitive than those previously reported,
International Journal "Information Theories & Applications", Volume 13, 2006, PDF.

[bieman95] Tight Class Cohesion (TCC).
James M. Bieman et al.,
Cohesion and Reuse in an Object-Oriented System,
Department of Computer Science, Colorado State University, 1995, PDF.

[dallal11] Transitive Lack of Cohesion in Methods (TLCOM).
Jehad Al Dallal,
Transitive-based object-oriented lack-of-cohesion metric,
Department of Information Science, Kuwait University, 2011, PDF.

[hitz95] Lack of Cohesion in Methods 4 (LCOM4).
Martin Hitz et al.,
Measuring Coupling and Cohesion In Object-Oriented Systems,
Institute of Applied Computer Science and Systems Analysis, University of Vienna, 1995, PDF.
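
For a rough feel of what these metrics measure, here is a small, self-contained sketch (our own illustration, not jPeek code) of the original LCOM from [chidamber94]: count the pairs of methods that share no attribute, subtract the pairs that share at least one, and floor the result at zero.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public final class LcomSketch {
  /**
   * LCOM as defined by Chidamber and Kemerer: P - Q, floored at zero,
   * where P is the number of method pairs sharing no attributes and
   * Q is the number of method pairs sharing at least one attribute.
   */
  static int lcom(final List<Set<String>> methods) {
    int disjoint = 0;
    int sharing = 0;
    for (int i = 0; i < methods.size(); ++i) {
      for (int j = i + 1; j < methods.size(); ++j) {
        final Set<String> common = new HashSet<>(methods.get(i));
        common.retainAll(methods.get(j));
        if (common.isEmpty()) {
          ++disjoint;
        } else {
          ++sharing;
        }
      }
    }
    return Math.max(0, disjoint - sharing);
  }

  public static void main(final String... args) {
    // Each set lists the attributes one method touches.
    final List<Set<String>> methods = Arrays.asList(
      new HashSet<>(Arrays.asList("id")),          // getId() uses "id"
      new HashSet<>(Arrays.asList("title")),       // getTitle() uses "title"
      new HashSet<>(Arrays.asList("id", "title"))  // toString() uses both
    );
    System.out.println(lcom(methods)); // prints 0: one disjoint pair, two sharing pairs
  }
}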

How it works?

First, Skeleton parses Java bytecode, using Javassist and ASM, to produce skeleton.xml. This XML document contains the information about each class that the metric calculations need. For example, this simple Java class:

class Book {
  private int id;
  int getId() {
    return this.id;
  }
}

will look like this in skeleton.xml:

<class id='Book'>
  <attributes>
    <attribute public='false' static='false' type='I'>id</attribute>
  </attributes>
  <methods>
    <method abstract='false' ctor='false' desc='()I' name='getId' public='true' static='false'>
      <return>I</return>
      <args/>
    </method>
  </methods>
</class>

Then, we have a collection of XSL stylesheets, one per metric. For example, LCOM.xsl transforms skeleton.xml into LCOM.xml, which may look like this:

<metric>
  <title>LCOM</title>
  <app>
    <class id='InstantiatorProvider' value='1'/>
    <class id='InstantationException' value='0'/>
    <class id='AnswersValidator' value='0.0583'/>
    <class id='ClassNode' value='0.25'/>
    [... skipped ...]
  </app>
</metric>

Thus, all the calculations happen inside the XSLT files. We decided to implement it this way after a less successful attempt to do it all in Java; XSL turned out to be much better suited to this kind of data manipulation than Java.
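To reproduce a single step of that pipeline outside of jPeek, you could apply a metric stylesheet to skeleton.xml with the JDK's standard javax.xml.transform API; the file names here are assumptions, and the actual jPeek stylesheets may require an XSLT 2.0 processor such as Saxon rather than the JDK default:

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public final class ApplyMetric {
  public static void main(final String... args) throws TransformerException {
    // Compile the metric stylesheet (file name is an assumption).
    final Transformer xsl = TransformerFactory.newInstance()
      .newTransformer(new StreamSource(new File("LCOM.xsl")));
    // Transform the skeleton into the metric report.
    xsl.transform(
      new StreamSource(new File("skeleton.xml")),
      new StreamResult(new File("LCOM.xml"))
    );
  }
}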

Known Limitations

  • The Java compiler is known to inline constant variables, as per JLS 13.1. This affects metrics that take into account access to class attributes whenever those attributes are final constants; for instance, all LCOM* and *COM metrics are affected (see the illustration below).
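
A small illustration of the effect (our own example, not from jPeek): in the class below, the constant is inlined into the bytecode of limit(), so the compiled method contains no field access at all and attribute-based cohesion metrics cannot see the connection between the method and the attribute.

class Range {
  // A "constant variable" per JLS 13.1: final and initialized
  // with a compile-time constant expression.
  private static final int MAX = 10;

  int limit() {
    // javac replaces MAX with the literal 10 here, so the bytecode
    // of limit() never references the MAX field.
    return MAX;
  }
}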

How to contribute?

Just fork, make changes, run mvn clean install -Pqulice and submit a pull request; read this, if lost.

Contributors

Don't hesitate to add your name to this list in your next pull request.

License (MIT)

Copyright (c) 2017-2018 Yegor Bugayenko

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.