ddl: pre-split region for partitioned table #10221

Merged · 8 commits · Apr 30, 2019 (showing changes from 4 commits)
12 changes: 11 additions & 1 deletion ddl/table.go
@@ -76,7 +76,17 @@ func onCreateTable(d *ddlCtx, t *meta.Meta, job *model.Job) (ver int64, _ error)
 	}
 	if atomic.LoadUint32(&EnableSplitTableRegion) != 0 {
 		// TODO: Add restrictions to this operation.
-		go splitTableRegion(d.store, tbInfo.ID)
+		pi := tbInfo.GetPartitionInfo()
+		if pi != nil {
+			// Max partition count is 4096; should we sample and choose just some of the partitions to split?
+			go func(pi *model.PartitionInfo) {
Contributor commented:

How about not using `go`? Since the split process can be really slow, a client could start inserting data before the split completes.

@crazycs520 (Contributor) commented on Apr 24, 2019:

In PR #10138 I added a session variable, tidb_wait_table_split_finish, to control whether the split is synchronous or asynchronous; maybe we can use it here. And because the split process can be really slow, I'd prefer to run the split in ddl_api.go rather than in the DDL owner, otherwise a synchronous split may block other DDL jobs.

Contributor (Author) commented:

I've moved the code from table.go to ddl_api.go.

+				for _, def := range pi.Definitions {
+					splitTableRegion(d.store, def.ID)
+				}
+			}(pi)
+		} else {
+			go splitTableRegion(d.store, tbInfo.ID)
+		}
 	}
 	// Finish this job.
 	job.FinishTableJob(model.JobStateDone, model.StatePublic, ver, tbInfo)
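The sync-vs-async trade-off the reviewers debate can be sketched in isolation. This is a standalone sketch, not TiDB's implementation: `partitionDef`, `splitter`, and the `wait` flag are hypothetical stand-ins for `model.PartitionDefinition`, the store, and the session-variable gate mentioned in the thread.

```go
package main

import (
	"fmt"
	"sync"
)

// partitionDef is a hypothetical stand-in for model.PartitionDefinition;
// only the ID matters for splitting.
type partitionDef struct{ ID int64 }

// splitter records which table/partition IDs were split; it stands in for
// the real splitTableRegion call down to TiKV.
type splitter struct {
	mu  sync.Mutex
	ids []int64
}

func (s *splitter) splitTableRegion(id int64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.ids = append(s.ids, id)
}

// splitPartitionRegions issues one split per partition. With wait=true the
// caller blocks until every request has been issued (the behaviour the
// reviewers want gated behind a session variable); with wait=false the work
// runs in a goroutine, as the PR's onCreateTable does, so inserts may race
// ahead of the splits.
func (s *splitter) splitPartitionRegions(defs []partitionDef, wait bool) *sync.WaitGroup {
	var wg sync.WaitGroup
	wg.Add(1)
	work := func() {
		defer wg.Done()
		for _, def := range defs {
			s.splitTableRegion(def.ID)
		}
	}
	if wait {
		work() // synchronous: splits finish before DDL returns to the client
	} else {
		go work() // asynchronous: DDL returns immediately
	}
	return &wg
}

func main() {
	s := &splitter{}
	defs := []partitionDef{{ID: 101}, {ID: 102}}
	s.splitPartitionRegions(defs, true).Wait()
	fmt.Println(len(s.ids)) // prints 2: both partitions split before we get here
}
```

The returned WaitGroup lets an async caller later decide to block anyway, which is roughly the flexibility a `tidb_wait_table_split_finish`-style switch provides.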
28 changes: 24 additions & 4 deletions ddl/table_split_test.go
@@ -26,6 +26,7 @@ import (
 	"github.com/pingcap/tidb/store/mockstore"
 	"github.com/pingcap/tidb/store/tikv"
 	"github.com/pingcap/tidb/tablecodec"
+	"github.com/pingcap/tidb/util/testkit"
 )

 type testDDLTableSplitSuite struct{}
@@ -41,20 +42,39 @@ func (s *testDDLTableSplitSuite) TestTableSplit(c *C) {
 	atomic.StoreUint32(&ddl.EnableSplitTableRegion, 1)
 	dom, err := session.BootstrapSession(store)
 	c.Assert(err, IsNil)
+	tk := testkit.NewTestKit(c, store)
+	tk.MustExec("use test")
+	tk.MustExec(`create table t_part (a int key) partition by range(a) (
+		partition p0 values less than (10),
+		partition p1 values less than (20)
+	)`)
 	defer dom.Close()
 	atomic.StoreUint32(&ddl.EnableSplitTableRegion, 0)
 	infoSchema := dom.InfoSchema()
 	c.Assert(infoSchema, NotNil)
 	t, err := infoSchema.TableByName(model.NewCIStr("mysql"), model.NewCIStr("tidb"))
 	c.Assert(err, IsNil)
-	regionStartKey := tablecodec.EncodeTablePrefix(t.Meta().ID)
+	checkRegionStartWithTableID(c, t.Meta().ID, store.(kvStore))

-	type kvStore interface {
-		GetRegionCache() *tikv.RegionCache
+	t, err = infoSchema.TableByName(model.NewCIStr("test"), model.NewCIStr("t_part"))
+	c.Assert(err, IsNil)
+	pi := t.Meta().GetPartitionInfo()
+	c.Assert(pi, NotNil)
+	for _, def := range pi.Definitions {
+		checkRegionStartWithTableID(c, def.ID, store.(kvStore))
 	}
+}
+
+type kvStore interface {
+	GetRegionCache() *tikv.RegionCache
+}
+
+func checkRegionStartWithTableID(c *C, id int64, store kvStore) {
+	regionStartKey := tablecodec.EncodeTablePrefix(id)
 	var loc *tikv.KeyLocation
 	var err error
 	for i := 0; i < 10; i++ {
-		cache := store.(kvStore).GetRegionCache()
+		cache := store.GetRegionCache()
 		loc, err = cache.LocateKey(tikv.NewBackoffer(context.Background(), 5000), regionStartKey)
 		c.Assert(err, IsNil)
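The property the test asserts can be sketched on its own: after a pre-split, some region's start key should equal the encoded prefix of the table (or partition) ID. In this sketch, `encodeTablePrefix` only mimics the shape of `tablecodec.EncodeTablePrefix` (the real codec uses memcomparable encoding), and `regionStartsAtTable` is a hypothetical helper, not the test's actual code.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encodeTablePrefix mimics the shape of tablecodec.EncodeTablePrefix:
// the byte 't' followed by the table ID. Plain big-endian encoding is
// enough for this sketch; TiDB's real codec is memcomparable.
func encodeTablePrefix(tableID int64) []byte {
	buf := make([]byte, 9)
	buf[0] = 't'
	binary.BigEndian.PutUint64(buf[1:], uint64(tableID))
	return buf
}

// regionStartsAtTable reports whether any region starts exactly at the
// table's (or partition's) prefix key — the condition
// checkRegionStartWithTableID asserts after the pre-split.
func regionStartsAtTable(tableID int64, regionStartKeys [][]byte) bool {
	want := encodeTablePrefix(tableID)
	for _, start := range regionStartKeys {
		if bytes.Equal(start, want) {
			return true
		}
	}
	return false
}

func main() {
	// Two pre-split regions, one per (hypothetical) partition ID.
	regions := [][]byte{encodeTablePrefix(101), encodeTablePrefix(102)}
	fmt.Println(regionStartsAtTable(102, regions)) // prints true: partition 102 has its own region
	fmt.Println(regionStartsAtTable(999, regions)) // prints false: no region starts at table 999
}
```

This also shows why the test loops with a backoff in the real code: with an async split, the region boundary only appears once TiKV has processed the request, so a single immediate lookup could fail spuriously.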