
HDDS-11484. Add check for javadoc correctness #7245

Merged · 51 commits · Sep 30, 2024
35f301f docs: fixed javadocs warnings (Daniilchik, Sep 23, 2024)
32a30fd docs: fixed javadocs warnings (Daniilchik, Sep 23, 2024)
9b7b0f2 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
f3cfffe HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
5bf24d4 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
c66f57e HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
62f70e4 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
5a3101d HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
ecdebda HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
7b27171 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
a358800 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
7db59c1 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
5221843 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
4ca55cb HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
f46675e HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 26, 2024)
db67083 HHDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
abb7b54 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
4a1b4c4 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
fbb6bb0 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
4d8a118 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
c650234 HHDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
6f60dcb HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
3ae1f5e HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
8a53c5d HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
1959040 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
1fa2082 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
b0b5a38 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
db6296b HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
f595741 HHDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
3ab7fea HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
7daaeb5 HHDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
1e01abb HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
2bacc63 HHDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
e959185 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
d353bed HHDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
e09a5d1 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
a43c9dc HHDS-11484. Add check for javadoc correctness (Daniilchik, Sep 28, 2024)
86dac52 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
f377ac0 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
96e349b HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
18c7e2f HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
cb84092 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
499b2d8 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
6a35155 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
c2a3c37 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
5dbfca8 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
22697d3 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
12523eb HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
97160fd HHDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
be2eb27 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
d7ca1b2 HDDS-11484. Add check for javadoc correctness (Daniilchik, Sep 29, 2024)
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -218,7 +218,7 @@ jobs:
distribution: 'temurin'
java-version: ${{ matrix.java }}
- name: Compile Ozone using Java ${{ matrix.java }}
run: hadoop-ozone/dev-support/checks/build.sh -Pdist -Dskip.npx -Dskip.installnpx -Djavac.version=${{ matrix.java }} ${{ inputs.ratis_args }}
run: hadoop-ozone/dev-support/checks/build.sh -Pdist -Dskip.npx -Dskip.installnpx -Dmaven.javadoc.failOnWarnings=${{ matrix.java != 8 }} -Djavac.version=${{ matrix.java }} ${{ inputs.ratis_args }}
env:
OZONE_WITH_COVERAGE: false
DEVELOCITY_ACCESS_KEY: ${{ secrets.GE_ACCESS_TOKEN }}
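The workflow change above appends `-Dmaven.javadoc.failOnWarnings=${{ matrix.java != 8 }}` to the build, so javadoc warnings fail the build on every matrix JDK except 8 (presumably because JDK 8's javadoc warns on patterns the newer tools accept). A minimal sketch of that gating logic in plain Java — the class and method names are illustrative, not part of the workflow:

```java
// Mirrors the expression ${{ matrix.java != 8 }} from the workflow above.
final class JavadocGate {

  // Enforce javadoc warnings only for matrix JDKs newer than 8.
  static boolean failOnWarnings(int matrixJavaVersion) {
    return matrixJavaVersion != 8;
  }

  // Builds the extra Maven argument the CI step passes to build.sh.
  static String mavenArg(int matrixJavaVersion) {
    return "-Dmaven.javadoc.failOnWarnings=" + failOnWarnings(matrixJavaVersion);
  }
}
```

The `maven.javadoc.failOnWarnings` user property maps to the maven-javadoc-plugin's `failOnWarnings` parameter, which turns javadoc warnings into build failures.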
@@ -75,7 +75,6 @@ public synchronized int read(ByteBuffer byteBuffer) throws IOException {
* readWithStrategy implementation, as it will never be called by the tests.
*
* @param strategy
* @return
* @throws IOException
*/
protected abstract int readWithStrategy(ByteReaderStrategy strategy) throws
@@ -152,7 +152,6 @@ protected int calculateExpectedDataBlocks(ECReplicationConfig rConfig) {
* Using the current position, returns the index of the blockStream we should
* be reading from. This is the index in the internal array holding the
* stream reference. The block group index will be one greater than this.
* @return
*/
protected int currentStreamIndex() {
return (int)((position / ecChunkSize) % repConfig.getData());
@@ -206,7 +205,6 @@ protected BlockExtendedInputStream getOrOpenStream(int locationIndex) throws IOE
* to the replicaIndex given based on the EC pipeline fetched from SCM.
* @param replicaIndex
* @param refreshFunc
* @return
*/
protected Function<BlockID, BlockLocationInfo> ecPipelineRefreshFunction(
int replicaIndex, Function<BlockID, BlockLocationInfo> refreshFunc) {
@@ -241,7 +239,6 @@ protected Function<BlockID, BlockLocationInfo> ecPipelineRefreshFunction(
* potentially partial last stripe. Note that the internal block index is
* numbered starting from 1.
* @param index - Index number of the internal block, starting from 1
* @return
*/
protected long internalBlockLength(int index) {
long lastStripe = blockInfo.getLength() % stripeSize;
@@ -344,7 +341,6 @@ protected boolean shouldRetryFailedRead(int failedIndex) {
* strategy buffer. This call may read from several internal BlockInputStreams
* if there is sufficient space in the buffer.
* @param strategy
* @return
* @throws IOException
*/
@Override
@@ -409,7 +405,6 @@ protected void seekStreamIfNecessary(BlockExtendedInputStream stream,
* group length.
* @param stream Stream to read from
* @param strategy The ReaderStrategy to read data into
* @return
* @throws IOException
*/
private int readFromStream(BlockExtendedInputStream stream,
@@ -650,7 +650,7 @@ public static File createDir(String dirPath) {
* Utility string formatter method to display SCM roles.
*
* @param nodes
* @return
* @return String
*/
public static String format(List<String> nodes) {
StringBuilder sb = new StringBuilder();
@@ -680,7 +680,8 @@ public static int roundupMb(long bytes) {

/**
* Unwrap exception to check if it is some kind of access control problem
* ({@link AccessControlException} or {@link SecretManager.InvalidToken})
* ({@link org.apache.hadoop.security.AccessControlException} or
* {@link org.apache.hadoop.security.token.SecretManager.InvalidToken})
* or a RpcException.
*/
public static Throwable getUnwrappedException(Exception ex) {
@@ -97,7 +97,6 @@ public static JsonNode getBeansJsonNode(String metricsJson) throws IOException {
* Returns the number of decommissioning nodes.
*
* @param jsonNode
* @return
*/
public static int getNumDecomNodes(JsonNode jsonNode) {
int numDecomNodes;
@@ -118,7 +117,6 @@ public static int getNumDecomNodes(JsonNode jsonNode) {
* @param numDecomNodes
* @param countsMap
* @param errMsg
* @return
* @throws IOException
*/
@Nullable
@@ -396,7 +396,6 @@ StartContainerBalancerResponseProto startContainerBalancer(
* Force generates new secret keys (rotate).
*
* @param force boolean flag that forcefully rotates the key on demand
* @return
* @throws IOException
*/
boolean rotateSecretKeys(boolean force) throws IOException;
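Most javadoc hunks in this PR apply one rule: an `@return` tag either carries a description or is removed entirely, since a bare `@return` is what the new check flags. A hypothetical class (not Ozone code) showing the accepted style for the method above:

```java
// Illustrates the javadoc style this PR enforces: no bare "@return" tags.
// The class and method body are invented for the example.
final class JavadocStyle {

  /**
   * Force generates new secret keys (rotate).
   *
   * @param force boolean flag that forcefully rotates the key on demand
   * @return true if the keys were rotated
   */
  static boolean rotateSecretKeys(boolean force) {
    return force; // stand-in body so the example is runnable
  }
}
```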
@@ -146,7 +146,6 @@ public long getReportTimeStamp() {

/**
* Return a map of all stats and their value as a long.
* @return
*/
public Map<String, Long> getStats() {
Map<String, Long> result = new HashMap<>();
@@ -159,7 +158,6 @@ public Map<String, Long> getStats() {
/**
* Return a map of all samples, with the stat as the key and the samples
* for the stat as a List of Long.
* @return
*/
public Map<String, List<Long>> getSamples() {
Map<String, List<Long>> result = new HashMap<>();
@@ -67,7 +67,6 @@ public class SCMNodeInfo {
/**
* Build SCM Node information from configuration.
* @param conf
* @return
*/
public static List<SCMNodeInfo> buildNodeInfo(ConfigurationSource conf) {

@@ -307,10 +307,13 @@ public void remove(Node node) {
* @param loc string location of a node. If loc starts with "/", it's a
* absolute path, otherwise a relative path. Following examples
* are all accepted,
* <pre>
* {@code
* 1. /dc1/rm1/rack1 -> an inner node
* 2. /dc1/rm1/rack1/node1 -> a leaf node
* 3. rack1/node1 -> a relative path to this node
*
* }
* </pre>
* @return null if the node is not found
*/
@Override
@@ -243,7 +243,6 @@ public int getReplicaIndex(DatanodeDetails dn) {

/**
* Get the replicaIndex Map.
* @return
*/
public Map<DatanodeDetails, Integer> getReplicaIndexes() {
return this.getNodes().stream().collect(Collectors.toMap(Function.identity(), this::getReplicaIndex));
@@ -131,7 +131,7 @@ public static InetSocketAddress updateListenAddress(OzoneConfiguration conf,
* Fall back to OZONE_METADATA_DIRS if not defined.
*
* @param conf
* @return
* @return File
*/
public static File getScmDbDir(ConfigurationSource conf) {
File metadataDir = getDirectoryFromConfig(conf,
@@ -31,7 +31,6 @@
/**
* Simple general resource leak detector using {@link ReferenceQueue} and {@link java.lang.ref.WeakReference} to
* observe resource object life-cycle and assert proper resource closure before they are GCed.
*
* <p>
* Example usage:
*
@@ -43,16 +42,18 @@
* // report leaks, don't refer to the original object (MyResource) here.
* System.out.println("MyResource is not closed before being discarded.");
* });
*
* @Override
* }
* }
* </pre>
* <pre>
* {@code @Override
* public void close() {
* // proper resources cleanup...
* // inform tracker that this object is closed properly.
* leakTracker.close();
* }
* }
*
* }</pre>
* }
* </pre>
*/
public class LeakDetector {
private static final Logger LOG = LoggerFactory.getLogger(LeakDetector.class);
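The LeakDetector javadoc being fixed above describes the mechanism: track resources with weak references, and report any that are garbage-collected without `close()` having been called. A simplified, self-contained sketch of that pattern using plain `java.lang.ref` types — the names and structure are illustrative, not Ozone's actual LeakDetector:

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simplified ReferenceQueue-based leak detector sketch.
final class SimpleLeakDetector {
  private final ReferenceQueue<Object> queue = new ReferenceQueue<>();
  private final Set<Tracker> live = ConcurrentHashMap.newKeySet();

  static final class Tracker extends WeakReference<Object> {
    private final Set<Tracker> registry;
    private final Runnable onLeak;

    Tracker(Object resource, ReferenceQueue<Object> queue,
            Set<Tracker> registry, Runnable onLeak) {
      super(resource, queue); // enqueued when the resource is GCed
      this.registry = registry;
      this.onLeak = onLeak;
      registry.add(this);
    }

    // Resource closed properly: stop tracking, so a later GC is not a leak.
    void close() {
      registry.remove(this);
    }
  }

  Tracker track(Object resource, Runnable onLeak) {
    return new Tracker(resource, queue, live, onLeak);
  }

  // Drain the queue: any tracker still registered here was GCed
  // without close() being called, i.e. the resource leaked.
  int reportLeaks() {
    int leaks = 0;
    Tracker tracker;
    while ((tracker = (Tracker) queue.poll()) != null) {
      if (live.remove(tracker)) {
        tracker.onLeak.run();
        leaks++;
      }
    }
    return leaks;
  }
}
```

A closed tracker is removed from the live set first, so even if the GC later enqueues it, `reportLeaks()` does not count it.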
@@ -23,9 +23,9 @@
import java.io.IOException;

/**
* A {@link Codec} to serialize/deserialize objects by delegation.
* A {@link org.apache.hadoop.hdds.utils.db.Codec} to serialize/deserialize objects by delegation.
*
* @param <T> The object type of this {@link Codec}.
* @param <T> The object type of this {@link org.apache.hadoop.hdds.utils.db.Codec}.
* @param <DELEGATE> The object type of the {@link #delegate}.
*/
public class DelegatedCodec<T, DELEGATE> implements Codec<T> {
@@ -53,8 +53,8 @@ public enum CopyType {
* Construct a {@link Codec} using the given delegate.
*
* @param delegate the delegate {@link Codec}
* @param forward a function to convert {@link DELEGATE} to {@link T}.
* @param backward a function to convert {@link T} back to {@link DELEGATE}.
* @param forward a function to convert {@code DELEGATE} to {@code T}.
* @param backward a function to convert {@code T} back to {@code DELEGATE}.
* @param copyType How to {@link #copyObject(Object)}?
*/
public DelegatedCodec(Codec<DELEGATE> delegate,
@@ -39,8 +39,7 @@ static ChunkBuffer allocate(int capacity) {
return allocate(capacity, 0);
}

/**
* Similar to {@link ByteBuffer#allocate(int)}
/** Similar to {@link ByteBuffer#allocate(int)}
* except that it can specify the increment.
*
* @param increment
@@ -27,7 +27,7 @@

/**
* Helper class to convert between protobuf lists and Java lists of
* {@link ContainerProtos.ChunkInfo} objects.
* {@link org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo} objects.
* <p>
* This class is immutable.
*/
@@ -49,7 +49,7 @@ public ChunkInfoList(List<ContainerProtos.ChunkInfo> chunks) {
}

/**
* @return A new {@link ChunkInfoList} created from protobuf data.
* @return A new {@link #ChunkInfoList} created from protobuf data.
*/
public static ChunkInfoList getFromProtoBuf(
ContainerProtos.ChunkInfoList chunksProto) {
@@ -69,7 +69,7 @@ public class LayoutVersionInstanceFactory<T> {
/**
* Register an instance with a given factory key (key + version).
* For safety reasons we dont allow (1) re-registering, (2) registering an
* instance with version > SLV.
* instance with version &gt; SLV.
*
* @param lvm LayoutVersionManager
* @param key VersionFactoryKey key to associate with instance.
@@ -136,13 +136,15 @@ private boolean isValid(LayoutVersionManager lvm, int version) {
}

/**
* <pre>
* From the list of versioned instances for a given "key", this
* returns the "floor" value corresponding to the given version.
* For example, if we have key = "CreateKey", entry -> [(1, CreateKeyV1),
* (3, CreateKeyV2), and if the passed in key = CreateKey and version = 2, we
* For example, if we have key = "CreateKey", entry -&gt; [(1, CreateKeyV1),
* (3, CreateKeyV2), and if the passed in key = CreateKey &amp; version = 2, we
* return CreateKeyV1.
* Since this is a priority queue based implementation, we use a O(1) peek()
* lookup to get the current valid version.
* </pre>
* @param lvm LayoutVersionManager
* @param key Key and Version.
* @return instance.
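The javadoc above describes a "floor" lookup: among the versioned instances registered for a key, return the one with the greatest version not exceeding the requested version. The same semantics can be sketched with a `NavigableMap` (the real factory uses a priority-queue-based implementation; this class and its names are invented for illustration):

```java
import java.util.Map;
import java.util.TreeMap;

// Floor lookup sketch: with versions {1 -> "CreateKeyV1", 3 -> "CreateKeyV2"},
// a request for version 2 resolves to the version-1 instance.
final class VersionedRegistry {
  private final TreeMap<Integer, String> byVersion = new TreeMap<>();

  void register(int version, String instance) {
    byVersion.put(version, instance);
  }

  // Greatest registered version <= requestedVersion, or null if none.
  String lookup(int requestedVersion) {
    Map.Entry<Integer, String> entry = byVersion.floorEntry(requestedVersion);
    return entry == null ? null : entry.getValue();
  }
}
```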
@@ -74,7 +74,6 @@ public interface LayoutVersionManager {
/**
* Generic API for returning a registered handler for a given type.
* @param type String type
* @return
*/
default Object getHandler(String type) {
return null;
@@ -50,14 +50,14 @@ public interface UpgradeFinalizer<T> {
* Represents the current state in which the service is with regards to
* finalization after an upgrade.
* The state transitions are the following:
* ALREADY_FINALIZED - no entry no exit from this status without restart.
* {@code ALREADY_FINALIZED} - no entry no exit from this status without restart.
* After an upgrade:
* FINALIZATION_REQUIRED -(finalize)-> STARTING_FINALIZATION
* -> FINALIZATION_IN_PROGRESS -> FINALIZATION_DONE from finalization done
* {@code FINALIZATION_REQUIRED -(finalize)-> STARTING_FINALIZATION
* -> FINALIZATION_IN_PROGRESS -> FINALIZATION_DONE} from finalization done
* there is no more move possible, after a restart the service can end up in:
* - FINALIZATION_REQUIRED, if the finalization failed and have not reached
* FINALIZATION_DONE,
* - or it can be ALREADY_FINALIZED if the finalization was successfully done.
* {@code FINALIZATION_REQUIRED}, if the finalization failed and have not reached
* {@code FINALIZATION_DONE},
* - or it can be {@code ALREADY_FINALIZED} if the finalization was successfully done.
*/
enum Status {
ALREADY_FINALIZED,
@@ -20,7 +20,7 @@

/**
* "Key" element to the Version specific instance factory. Currently it has 2
* dimensions -> a 'key' string and a version. This is to support a factory
* dimensions -&gt; a 'key' string and a version. This is to support a factory
* which returns an instance for a given "key" and "version".
*/
public class VersionFactoryKey {
@@ -108,7 +108,7 @@ default String[] getTrimmedStrings(String name) {
/**
* Gets the configuration entries where the key contains the prefix. This
* method will strip the prefix from the key in the return Map.
* Example: somePrefix.key->value will be key->value in the returned map.
* Example: {@code somePrefix.key->value} will be {@code key->value} in the returned map.
* @param keyPrefix Prefix to search.
* @return Map containing keys that match and their values.
*/
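Several hunks, like this one, wrap raw `->` and `>` characters in `{@code ...}` or replace them with entities such as `&gt;`, because javadoc treats comment text as HTML and the stricter doclint-era checks reject bare angle brackets. A hypothetical example (class, method, and prefix convention all invented) of the fixed style:

```java
// Hypothetical example of the escaping style these hunks apply: HTML-sensitive
// characters ("->", "<", ">") in javadoc go inside {@code ...} or become
// entities such as &gt;.
final class PrefixStripper {

  /**
   * Strips a dotted prefix from a key.
   * Example: {@code somePrefix.key -> key}.
   * Keys that do not start with {@code prefix + "."} are returned unchanged.
   */
  static String strip(String prefix, String key) {
    return key.startsWith(prefix + ".")
        ? key.substring(prefix.length() + 1)
        : key;
  }
}
```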
@@ -35,7 +35,7 @@
/**
* Map: containerId {@literal ->} (localId {@literal ->} {@link BlockData}).
* The outer container map does not entail locking for a better performance.
* The inner {@link BlockDataMap} is synchronized.
* The inner {@code BlockDataMap} is synchronized.
*
* This class will maintain list of open keys per container when closeContainer
* command comes, it should autocommit all open keys of a open container before
@@ -90,7 +90,7 @@ public final List<ContainerBlockInfo> chooseContainerForBlockDeletion(
/**
* Abstract step for ordering the container data to be deleted.
* Subclass need to implement the concrete ordering implementation
* in descending order (more prioritized -> less prioritized)
* in descending order (more prioritized -&gt; less prioritized)
* @param candidateContainers candidate containers to be ordered
*/
protected abstract void orderByDescendingPriority(
@@ -75,7 +75,6 @@ void validateContainerCommand(
/**
* Returns the handler for the specified containerType.
* @param containerType
* @return
*/
Handler getHandler(ContainerProtos.ContainerType containerType);

@@ -705,9 +705,9 @@ private ExecutorService getChunkExecutor(WriteChunkRequestProto req) {
}

/**
* {@link #writeStateMachineData(ContainerCommandRequestProto, long, long, long)}
* {@link #writeStateMachineData}
* calls are not synchronized with each other
* and also with {@link #applyTransaction(TransactionContext)}.
* and also with {@code applyTransaction(TransactionContext)}.
*/
@Override
public CompletableFuture<Message> write(LogEntryProto entry, TransactionContext trx) {
@@ -57,7 +57,7 @@
* | fsAvail |-------other-----------|
* |<- fsCapacity ->|
* }</pre>
*
* <pre>
* What we could directly get from local fs:
* fsCapacity, fsAvail, (fsUsed = fsCapacity - fsAvail)
* We could get from config:
@@ -80,13 +80,13 @@
* then we should use DedicatedDiskSpaceUsage for
* `hdds.datanode.du.factory.classname`,
* Then it is much simpler, since we don't care about other usage:
* <pre>
* {@code
* |----used----| (avail)/fsAvail |
* |<- capacity/fsCapacity ->|
* }
* </pre>
* }
*
* We have avail == fsAvail.
* </pre>
*/
public final class VolumeInfo {

@@ -157,14 +157,14 @@ public long getCapacity() {
}

/**
* Calculate available space use method A.
* <pre>
* {@code
* Calculate available space use method A.
* |----used----| (avail) |++++++++reserved++++++++|
* |<- capacity ->|
* }
*</pre>
* A) avail = capacity - used
* }
* </pre>
*/
public long getAvailable() {
return usage.getAvailable();