HBASE-16264 Figure how to deal with endpoints and shaded pb Shade our protobufs. Do it in a manner that makes it so we can still have in our API references to com.google.protobuf (and in REST). The c.g.p in API is for Coprocessor Endpoints (CPEP)

        This patch is Tactic #4 from the Shading Doc attached to the referenced issue.
        Figuring an approach took a while because we have Coprocessor Endpoints
        mixed in with the core of HBase that are tough to untangle (FIX).

        Tactic #4 (the fourth attempt at addressing this issue) is to COPY all but
        the CPEP .proto files currently in hbase-protocol to a new module named
        hbase-protocol-shaded. Generate .protos again in the new location and
        then relocate/shade the generated files. Let CPEPs keep on with the
        old references at com.google.protobuf.* and
        org.apache.hadoop.hbase.protobuf.* but change the hbase core so all
        instead refer to the relocated files in their new location at
        org.apache.hadoop.hbase.shaded.com.google.protobuf.*.

        Let the new module also shade protobufs themselves and change hbase
        core to pick up this shaded protobuf rather than directly reference
        com.google.protobuf.

        This approach allows us to explicitly refer to either the shaded or
        non-shaded version of a protobuf class in any particular context (though
        usually context dictates one or the other). Core runs on shaded protobuf.
        CPEPs continue to use whatever is on the classpath at
        com.google.protobuf.*, which is pb2.5.0 for the near future at least.
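        As an illustration (a sketch, not code from this patch), the two runtimes
        can sit side by side in one class when fully-qualified names are used;
        ByteString.copyFromUtf8 is standard protobuf API in both:

            public class ShadedVsNonShadedSketch {
              // Core-side: the relocated protobuf bundled in hbase-protocol-shaded.
              org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString core =
                  org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString.copyFromUtf8("core");
              // CPEP-side: whatever com.google.protobuf is on the classpath (pb2.5.0 for now).
              com.google.protobuf.ByteString cpep =
                  com.google.protobuf.ByteString.copyFromUtf8("endpoint");
            }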

        See above cited doc for follow-ons and downsides. In short, IDEs will complain
        about not being able to find the shaded protobufs since shading happens at package
        time; will fix by checking in all generated classes and relocated protobuf in
        a follow-on. Also, CPEPs currently suffer an extra copy as they are marshalled from
        non-shaded to shaded form; to be fixed. Finally, our .protos are duplicated: once
        shaded, and once not. A pain, but how else to expose our protos to CPEPs or to a
        C++ client that wants to talk with HBase AND shade protobuf?
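        The extra copy looks roughly like this (a hedged sketch, not code from this
        patch; ClusterIdProtos is only an example message, and after this patch a
        generated copy of it exists both non-shaded in hbase-protocol and shaded in
        hbase-protocol-shaded):

            // Round-trip through wire bytes: serialize under the non-shaded runtime,
            // then re-parse under the shaded one. toByteArray()/parseFrom(byte[]) are
            // standard protobuf 2.5 generated-message API.
            final class ShadedCopySketch {
              static org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterIdProtos.ClusterId
                  toShaded(org.apache.hadoop.hbase.protobuf.generated.ClusterIdProtos.ClusterId in)
                  throws org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException {
                return org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterIdProtos.ClusterId
                    .parseFrom(in.toByteArray());
              }
            }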

        Details:

        Add a new hbase-protocol-shaded module. It is a copy of hbase-protocol
        with all content relocated from o.a.h.h. to o.a.h.h.shaded. The new module
        also includes the relocated pb. It does not include CPEPs. They stay in
        their old location.

        Add another module hbase-endpoint which has in it all the endpoints
        that ship as part of hbase -- at least the ones that are not
        entangled with core such as AccessControl and Auth. Move all protos
        for these CPEPs here as well as their unit tests (mostly moving a
        bunch of stuff out of the hbase-server module).

        Much of the change looks like this:

             -import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
             -import org.apache.hadoop.hbase.protobuf.generated.ClusterIdProtos;
             +import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
             +import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterIdProtos;

        In HTable and in HBaseAdmin, regularize the way Callables are used and also hide
        protobuf usage as much as possible, moving it up into Callable super classes or out
        to utility classes (the rewritten multiMutate() in the diff below shows the pattern).
        Still TODO is adding in retries, etc., but that can wait on the ProcedureV2 work,
        which will redo all this.

        Also in HTable and HBaseAdmin as well as in HRegionServer and Server, be explicit
        when using non-shaded protobuf. Use the full path so it is clear. This is around
        endpoint coprocessor registration of services and execution of CPEP methods.

        Shrunk ProtobufUtil by moving methods used by only one CPEP back to that CPEP,
        either into its Client class or into a new Util class; e.g. AccessControlUtil.

        There are actually two versions of ProtobufUtil now: a shaded one and a subset
        that is used by CPEPs doing non-shaded work.

        Made it so hbase-common no longer depends on hbase-protocol (with Matteo's help).

        R*Converter classes got moved down under shaded package -- they are for internal
        use only. There are no non-shaded versions of these classes.

        D hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRegionServerCallable
        D RetryingCallableBase
         Not used anymore and we have too many tiers of Callables so removed/cleaned-up.

        A ClientServiceCallable
         Had to add this one. RegionServerCallable was made generic so it could be used
         for a few Interfaces (Client and Admin). Then added ClientServiceCallable to
         implement RegionServerCallable with the Client Interface.
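         A minimal, self-contained sketch of that shape (stand-in names throughout;
         the real classes in hbase-client carry region-location, stub, and retry
         machinery omitted here):

            // Stand-in sketch only: T is the call's result type, S the protobuf stub.
            abstract class RegionServerCallableSketch<T, S> {
              private S stub;
              protected S getStub() { return stub; }
              protected void setStub(S stub) { this.stub = stub; }
              // prepare() resolves the region location, then lets the subclass install
              // its stub (the patch does this via a setStubByServiceName hook).
              public void prepare(boolean reload) throws java.io.IOException {
                // locate region, then: setStub(...)
              }
              // Subclasses put their protobuf work here, against getStub().
              protected abstract T rpcCall() throws Exception;
              public T call(int timeoutMs) throws java.io.IOException {
                try {
                  return rpcCall();
                } catch (Exception e) {
                  throw new java.io.IOException(e);
                }
              }
            }

            // Pins S to the client service so Table operations share one base class;
            // Object stands in for ClientProtos.ClientService.BlockingInterface.
            abstract class ClientServiceCallableSketch<T>
                extends RegionServerCallableSketch<T, Object> {
            }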
saintstack committed Sep 29, 2016
1 parent 63808a2 commit 17d4b70
Showing 625 changed files with 199,043 additions and 18,280 deletions.
hbase-client/pom.xml: 4 additions & 0 deletions
@@ -132,6 +132,10 @@
<type>test-jar</type>
<scope>test</scope>
</dependency>
+<dependency>
+<groupId>org.apache.hbase</groupId>
+<artifactId>hbase-protocol-shaded</artifactId>
+</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-protocol</artifactId>
@@ -23,8 +23,8 @@

import org.apache.hadoop.hbase.classification.InterfaceAudience;
import org.apache.hadoop.hbase.exceptions.DeserializationException;
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterIdProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterIdProtos;
import org.apache.hadoop.hbase.util.Bytes;

/**
@@ -30,14 +30,14 @@
import org.apache.hadoop.hbase.classification.InterfaceAudience;
import org.apache.hadoop.hbase.classification.InterfaceStability;
import org.apache.hadoop.hbase.master.RegionState;
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.LiveServerInfo;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.RegionInTransition;
-import org.apache.hadoop.hbase.protobuf.generated.FSProtos.HBaseVersionFileContent;
-import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
-import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier;
-import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier.RegionSpecifierType;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos.LiveServerInfo;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos.RegionInTransition;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.FSProtos.HBaseVersionFileContent;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionSpecifier;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionSpecifier.RegionSpecifierType;
import org.apache.hadoop.hbase.util.ByteStringer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.VersionedWritable;
@@ -34,8 +34,8 @@
import org.apache.hadoop.hbase.exceptions.HBaseException;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ColumnFamilySchema;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ColumnFamilySchema;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.PrettyPrinter;
@@ -33,11 +33,11 @@
import org.apache.hadoop.hbase.KeyValue.KVComparator;
import org.apache.hadoop.hbase.exceptions.DeserializationException;
import org.apache.hadoop.hbase.master.RegionState;
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
-import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionInfo;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo;
import org.apache.hadoop.hbase.util.ByteArrayHashKey;
-import org.apache.hadoop.hbase.util.ByteStringer;
+import org.apache.hadoop.hbase.shaded.util.ByteStringer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.HashKey;
import org.apache.hadoop.hbase.util.JenkinsHash;
@@ -40,8 +40,8 @@
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.RegionReplicaUtil;
import org.apache.hadoop.hbase.exceptions.DeserializationException;
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.util.Bytes;

@@ -17,9 +17,6 @@
*/
package org.apache.hadoop.hbase;

-import com.google.common.annotations.VisibleForTesting;
-import com.google.protobuf.ServiceException;
-
import java.io.Closeable;
import java.io.IOException;
import java.io.InterruptedIOException;
@@ -36,8 +33,6 @@
import java.util.regex.Matcher;
import java.util.regex.Pattern;

-import edu.umd.cs.findbugs.annotations.NonNull;
-import edu.umd.cs.findbugs.annotations.Nullable;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
@@ -51,6 +46,7 @@
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.RegionReplicaUtil;
+import org.apache.hadoop.hbase.client.RegionServerCallable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
@@ -60,13 +56,22 @@
import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier.RegionSpecifierType;
import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos;
+import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos.MutateRowsRequest;
+import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos.MutateRowsResponse;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
import org.apache.hadoop.hbase.util.ExceptionUtil;
import org.apache.hadoop.hbase.util.Pair;
import org.apache.hadoop.hbase.util.PairOfSameType;

+import com.google.common.annotations.VisibleForTesting;
+
+import edu.umd.cs.findbugs.annotations.NonNull;
+import edu.umd.cs.findbugs.annotations.Nullable;
+
/**
* Read/write operations on region and assignment information store in
* <code>hbase:meta</code>.
@@ -1677,7 +1682,7 @@ public static void mergeRegions(final Connection connection, HRegionInfo mergedR
} else {
mutations = new Mutation[] { putOfMerged, deleteA, deleteB };
}
-multiMutate(meta, tableRow, mutations);
+multiMutate(connection, meta, tableRow, mutations);
} finally {
meta.close();
}
@@ -1732,7 +1737,7 @@ public static void splitRegion(final Connection connection, HRegionInfo parent,
mutations = new Mutation[]{putParent, putA, putB};
}
byte[] tableRow = Bytes.toBytes(parent.getRegionNameAsString() + HConstants.DELIMITER);
-multiMutate(meta, tableRow, mutations);
+multiMutate(connection, meta, tableRow, mutations);
} finally {
meta.close();
}
@@ -1777,37 +1782,74 @@ public static void deleteTableState(Connection connection, TableName table)
LOG.info("Deleted table " + table + " state from META");
}

+private static void multiMutate(Connection connection, Table table, byte[] row,
+Mutation... mutations)
+throws IOException {
+multiMutate(connection, table, row, Arrays.asList(mutations));
+}
+
/**
-* Performs an atomic multi-Mutate operation against the given table.
+* Performs an atomic multi-mutate operation against the given table.
*/
-private static void multiMutate(Table table, byte[] row, Mutation... mutations)
-throws IOException {
-CoprocessorRpcChannel channel = table.coprocessorService(row);
-MultiRowMutationProtos.MutateRowsRequest.Builder mmrBuilder
-= MultiRowMutationProtos.MutateRowsRequest.newBuilder();
+// Used by the RSGroup Coprocessor Endpoint. It had a copy/paste of the below. Need to reveal
+// this facility for CPEP use or at least those CPEPs that are on their way to becoming part of
+// core as is the intent for RSGroup eventually.
+public static void multiMutate(Connection connection, final Table table, byte[] row,
+final List<Mutation> mutations)
+throws IOException {
if (METALOG.isDebugEnabled()) {
METALOG.debug(mutationsToString(mutations));
}
-for (Mutation mutation : mutations) {
-if (mutation instanceof Put) {
-mmrBuilder.addMutationRequest(ProtobufUtil.toMutation(
-ClientProtos.MutationProto.MutationType.PUT, mutation));
-} else if (mutation instanceof Delete) {
-mmrBuilder.addMutationRequest(ProtobufUtil.toMutation(
-ClientProtos.MutationProto.MutationType.DELETE, mutation));
-} else {
-throw new DoNotRetryIOException("multi in MetaEditor doesn't support "
-+ mutation.getClass().getName());
+// TODO: Need rollback!!!!
+// TODO: Need Retry!!!
+// TODO: What for a timeout? Default write timeout? GET FROM HTABLE?
+// TODO: Review when we come through with ProcedureV2.
+RegionServerCallable<MutateRowsResponse,
+MultiRowMutationProtos.MultiRowMutationService.BlockingInterface> callable =
+new RegionServerCallable<MutateRowsResponse,
+MultiRowMutationProtos.MultiRowMutationService.BlockingInterface>(
+connection, table.getName(), row, null/*RpcController not used in this CPEP!*/) {
+@Override
+protected MutateRowsResponse rpcCall() throws Exception {
+final MutateRowsRequest.Builder builder = MutateRowsRequest.newBuilder();
+for (Mutation mutation : mutations) {
+if (mutation instanceof Put) {
+builder.addMutationRequest(ProtobufUtil.toMutation(
+ClientProtos.MutationProto.MutationType.PUT, mutation));
+} else if (mutation instanceof Delete) {
+builder.addMutationRequest(ProtobufUtil.toMutation(
+ClientProtos.MutationProto.MutationType.DELETE, mutation));
+} else {
+throw new DoNotRetryIOException("multi in MetaEditor doesn't support "
++ mutation.getClass().getName());
}
}
+// The call to #prepare that ran before this invocation will have populated HRegionLocation.
+HRegionLocation hrl = getLocation();
+RegionSpecifier region = ProtobufUtil.buildRegionSpecifier(
+RegionSpecifierType.REGION_NAME, hrl.getRegionInfo().getRegionName());
+builder.setRegion(region);
+// The rpcController here is awkward. The Coprocessor Endpoint wants an instance of a
+// com.google.protobuf but we are going over an rpc that is all shaded protobuf so it
+// wants a org.apache.h.h.shaded.com.google.protobuf.RpcController. Set up a factory
+// that makes com.google.protobuf.RpcController and then copy into it configs.
+return getStub().mutateRows(null, builder.build());
+}
}

-MultiRowMutationProtos.MultiRowMutationService.BlockingInterface service =
-MultiRowMutationProtos.MultiRowMutationService.newBlockingStub(channel);
-try {
-service.mutateRows(null, mmrBuilder.build());
-} catch (ServiceException ex) {
-ProtobufUtil.toIOException(ex);
-}
+@Override
+// Called on the end of the super.prepare call. Set the stub.
+protected void setStubByServiceName(ServerName serviceName/*Ignored*/) throws IOException {
+CoprocessorRpcChannel channel = table.coprocessorService(getRow());
+setStub(MultiRowMutationProtos.MultiRowMutationService.newBlockingStub(channel));
+}
+};
+int writeTimeout = connection.getConfiguration().getInt(HConstants.HBASE_RPC_WRITE_TIMEOUT_KEY,
+connection.getConfiguration().getInt(HConstants.HBASE_RPC_TIMEOUT_KEY,
+HConstants.DEFAULT_HBASE_RPC_TIMEOUT));
+// The region location should be cached in connection. Call prepare so this callable picks
+// up the region location (see super.prepare method).
+callable.prepare(false);
+callable.call(writeTimeout);
}

/**
@@ -2026,16 +2068,6 @@ public static Put addEmptyLocation(final Put p, int replicaId) {
return p;
}

-private static String mutationsToString(Mutation ... mutations) throws IOException {
-StringBuilder sb = new StringBuilder();
-String prefix = "";
-for (Mutation mutation : mutations) {
-sb.append(prefix).append(mutationToString(mutation));
-prefix = ", ";
-}
-return sb.toString();
-}

private static String mutationsToString(List<? extends Mutation> mutations) throws IOException {
StringBuilder sb = new StringBuilder();
String prefix = "";
@@ -2169,5 +2201,4 @@ public static String getSerialReplicationTableName(Connection connection, byte[]
}
return null;
}

}
@@ -24,8 +24,8 @@

import org.apache.hadoop.hbase.classification.InterfaceAudience;
import org.apache.hadoop.hbase.classification.InterfaceStability;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.StoreSequenceId;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos.StoreSequenceId;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Strings;

@@ -28,9 +28,9 @@

import org.apache.hadoop.hbase.classification.InterfaceAudience;
import org.apache.hadoop.hbase.classification.InterfaceStability;
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos;
-import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.Coprocessor;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.Coprocessor;
import org.apache.hadoop.hbase.replication.ReplicationLoadSink;
import org.apache.hadoop.hbase.replication.ReplicationLoadSource;
import org.apache.hadoop.hbase.util.Bytes;