Select statement run too slow #5896

Closed
leonchen83 opened this issue Feb 24, 2018 · 24 comments
leonchen83 commented Feb 24, 2018

Please answer these questions before submitting your issue. Thanks!

  1. What did you do?
    If possible, provide a recipe for reproducing the error.

step 1:

CREATE TABLE `PRICING`.`BK_QUOTE` (
  `ID` bigint(20) NOT NULL,
  `BROKER_ID` smallint(6) NOT NULL,
  `SYMBOL_ID` smallint(6) NOT NULL,
  `COMPANY_ID` tinyint(4) NOT NULL,
  `BK_SYMBOL_ID` int(11) NOT NULL,
  `AI_SHADE_ID` bigint(20) NULL DEFAULT NULL,
  `AI_SPREAD_ID` bigint(20) NULL DEFAULT NULL,
  `SCALE` tinyint(4) NOT NULL,
  `MA_SHADE` smallint(6) NOT NULL DEFAULT 0,
  `MA_SPREAD` tinyint(4) NULL DEFAULT NULL,
  `AI_SHADE1` smallint(6) NOT NULL DEFAULT 0,
  `AI_SHADE2` smallint(6) NOT NULL DEFAULT 0,
  `AI_SPREAD` smallint(6) NOT NULL DEFAULT 0,
  `BID_TYPE` tinyint(4) NOT NULL,
  `ASK_TYPE` tinyint(4) NOT NULL,
  `BID_PRICE` decimal(15,6) NOT NULL,
  `ASK_PRICE` decimal(15,6) NOT NULL,
  `BID_VOLUME` int(11) NOT NULL,
  `ASK_VOLUME` int(11) NOT NULL,
  `BID_STATUS` tinyint(4) NOT NULL,
  `ASK_STATUS` tinyint(4) NOT NULL,
  `QUOTE_DATE` date NOT NULL,
  `QUOTE_TIME` bigint(20) NOT NULL,
  `QUOTE_TYPE` tinyint(4) NOT NULL,
  `QUOTE_STATUS` tinyint(4)  NOT NULL,
  `TUNING_STATUS` tinyint(4)  NOT NULL,
  `BID_RAW_PRICE` decimal(15,6) NOT NULL,
  `ASK_RAW_PRICE` decimal(15,6) NOT NULL,
  `BID_AI_ADJUSTMENT` smallint(6) NOT NULL DEFAULT 0,
  `ASK_AI_ADJUSTMENT` smallint(6) NOT NULL DEFAULT 0,
  `BID_MA_ADJUSTMENT` smallint(6) NOT NULL DEFAULT 0,
  `ASK_MA_ADJUSTMENT` smallint(6) NOT NULL DEFAULT 0,
  `VERSION` int(11) NOT NULL DEFAULT '1',
  `INSERT_DATETIME` timestamp NULL DEFAULT NULL,
  `UPDATE_DATETIME` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`ID`),
  KEY `INDEX_01` (`BK_SYMBOL_ID`,`QUOTE_DATE`),
  KEY `INDEX_02` (`BK_SYMBOL_ID`,`QUOTE_TIME`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

step 2:
Insert 130,000,000 rows of data into this table.

step 3:
execute: select BK_SYMBOL_ID, max(QUOTE_TIME) as QUOTE_TIME from BK_QUOTE group by BK_SYMBOL_ID

This query is too slow. The EXPLAIN output is as follows:

mysql> explain select BK_SYMBOL_ID, max(QUOTE_TIME) as QUOTE_TIME from BK_QUOTE group by BK_SYMBOL_ID;
+---------------+--------------+---------------+------+----------------------------------------------------------------------------------------------------------------------------------------+-----------+
| id            | parents      | children      | task | operator info                                                                                                                          | count     |
+---------------+--------------+---------------+------+----------------------------------------------------------------------------------------------------------------------------------------+-----------+
| TableScan_7   | HashAgg_4    |               | cop  | table:BK_QUOTE, range:(-inf,+inf), keep order:false                                                                                    | 130767217 |
| HashAgg_4     |              | TableScan_7   | cop  | type:complete, group by:pricing.bk_quote.bk_symbol_id, funcs:max(pricing.bk_quote.quote_time), firstrow(pricing.bk_quote.bk_symbol_id) |        79 |
| TableReader_9 | HashAgg_8    |               | root | data:HashAgg_4                                                                                                                         |        79 |
| HashAgg_8     | Projection_3 | TableReader_9 | root | type:final, group by:, funcs:max(col_0), firstrow(col_1)                                                                               |        79 |
| Projection_3  |              | HashAgg_8     | root | pricing.bk_quote.bk_symbol_id, 2_col_0                                                                                                 |        79 |
+---------------+--------------+---------------+------+----------------------------------------------------------------------------------------------------------------------------------------+-----------+
5 rows in set (0.00 sec)
  2. What did you expect to see?

As fast as MySQL (0.35 sec).

  3. What did you see instead?

Executing this query takes more than 1 minute.

  4. What version of TiDB are you using (tidb-server -V or run select tidb_version(); on TiDB)?
mysql> select tidb_version();
+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| tidb_version()                                                                                                                                           |
+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Release Version: v1.0.7-1-g4913f64
Git Commit Hash: 4913f64c14f6b8e5761414def9fac3c477626f74
Git Branch: release-1.0
UTC Build Time: 2018-01-25 06:59:40 |
+----------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
coocood commented Feb 24, 2018

@leonchen83
Thank you for your report!

@winoros PTAL

@zz-jason

@leonchen83 The TiDB master branch is much faster than release-1.0; please give it a try.

leonchen83 commented Feb 24, 2018

@zz-jason
But the master branch is not very stable right now, is it?

winoros commented Feb 24, 2018

We're preparing Stream Aggregation in #5725. This feature will speed up this SQL a lot.

@zz-jason

@leonchen83 Can you show us the execution plan in MySQL by running the following SQL?
explain format=json select BK_SYMBOL_ID, max(QUOTE_TIME) as QUOTE_TIME from BK_QUOTE group by BK_SYMBOL_ID;

leonchen83 commented Feb 24, 2018

@zz-jason

explain format=json select BK_SYMBOL_ID, max(QUOTE_TIME) as QUOTE_TIME from BK_QUOTE group by BK_SYMBOL_ID;
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| {
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "69.70"
    },
    "grouping_operation": {
      "using_filesort": false,
      "table": {
        "table_name": "BK_QUOTE",
        "access_type": "range",
        "possible_keys": [
          "INDEX_01",
          "INDEX_02"
        ],
        "key": "INDEX_02",
        "used_key_parts": [
          "BK_SYMBOL_ID"
        ],
        "key_length": "4",
        "rows_examined_per_scan": 41,
        "rows_produced_per_join": 41,
        "filtered": "100.00",
        "using_index_for_group_by": true,
        "cost_info": {
          "read_cost": "61.50",
          "eval_cost": "8.20",
          "prefix_cost": "69.70",
          "data_read_per_join": "4K"
        },
        "used_columns": [
          "ID",
          "BK_SYMBOL_ID",
          "QUOTE_TIME"
        ]
      }
    }
  }
} |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set, 1 warning (0.02 sec)

Maybe the "using_index_for_group_by": true is related.

@zz-jason

@leonchen83 Yes, it's related to this optimization: https://dev.mysql.com/doc/refman/5.7/en/group-by-optimization.html, which TiDB does not support yet.

@leonchen83

@zz-jason
If I add an additional index with create index INDEX_03 on BK_QUOTE (BK_SYMBOL_ID), could this index speed up the select query?

winoros commented Feb 24, 2018

@leonchen83 INDEX_02 won't take effect until the stream aggregation feature is ready.
INDEX_03 is no better than INDEX_02.

leonchen83 commented Feb 24, 2018

@winoros Can Stream Aggregation make this query as fast as MySQL (0.35 sec in MySQL vs. 1.5 min in TiDB)?

@zz-jason

@leonchen83 Whether an index is used or not, TiDB currently examines all 130,000,000 rows. When "Loose Index Scan" is supported, the number of rows TiDB examines can be reduced to the same as MySQL's, and the query may run at a comparable speed.
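[Editor's note, not part of the original thread: a loose index scan touches only one index entry per group instead of every row. As an illustration only, a hypothetical workaround is to derive the distinct groups first and let each MAX() probe the (BK_SYMBOL_ID, QUOTE_TIME) index with an equality prefix; whether this actually helps depends on the optimizer of the version in use.]

```sql
-- Workaround sketch (hypothetical): one index probe per distinct group,
-- instead of a full scan feeding a hash aggregate.
SELECT s.BK_SYMBOL_ID,
       (SELECT MAX(q.QUOTE_TIME)
          FROM BK_QUOTE q
         WHERE q.BK_SYMBOL_ID = s.BK_SYMBOL_ID) AS QUOTE_TIME
FROM (SELECT DISTINCT BK_SYMBOL_ID FROM BK_QUOTE) s;
```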

@leonchen83

@zz-jason
I can't find Loose Index Scan in ROADMAP.md.
Will 1.1.0 support this feature?

winoros commented Feb 26, 2018

@leonchen83 1.1.0 won't support loose index scan, but it can cut this SQL's runtime to about 50% of the original. We will implement loose index scan in the near future.

leonchen83 commented Feb 26, 2018

@winoros @zz-jason
Thanks.

Another question related to this issue:

I am adding another index with create index INDEX_03 on BK_QUOTE (BK_SYMBOL_ID),
but this DDL has already been running for 2 days and has not yet finished.

mysql> show processlist;
+------+--------+---------------+---------+---------+--------+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Id   | User   | Host          | db      | Command | Time   | State | Info                                                                                                                                                                                                                                                                                                                                                                                                                 |
+------+--------+---------------+---------+---------+--------+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|    1 | ****** | 192.168.*.**  | PRICING | Query   | 171067 | 2     | create index INDEX_03 on BK_QUOTE (BK_SYMBOL_ID)                                                                                                                  |
|   37 | ****** | 192.168.*.*** | PRICING | Query   |      0 | 2     | show processlist                                                                                                                                                                                                                                                                                                                                                                                                     |
+------+--------+---------------+---------+---------+--------+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

winoros commented Feb 26, 2018

@zimulala PTAL

@zimulala

@leonchen83
Could you let me know about your cluster deployment?

leonchen83 commented Feb 26, 2018

cluster        physical machine                      OS
1 PD, 1 TiDB   i7 3.6 GHz, 8 GB RAM, 128 GB SSD      CentOS 7.4
TiKV           i7 3.6 GHz, 8 GB RAM, 128 GB SSD      CentOS 7.4
TiKV           i7 3.6 GHz, 8 GB RAM, 128 GB SSD      CentOS 7.4
TiKV           i7 3.6 GHz, 8 GB RAM, 128 GB SSD      CentOS 7.4

tidb config:
max-replicas: 1

@zimulala

@leonchen83
Could you send me your TiDB log to lixia@pingcap.com?

leonchen83 commented Feb 26, 2018

@zimulala

-rw-r--r--. 1 tidb tidb 127M Feb 26 15:30 tidb.log
-rw-r--r--. 1 tidb tidb 428K Feb 26 15:30 pushgateway.log
-rw-r--r--. 1 tidb tidb 749K Feb 26 15:30 grafana.log
-rw-r--r--. 1 tidb tidb  80K Feb 26 15:00 prometheus.log
-rw-r--r--. 1 tidb tidb 2.2M Feb 26 14:35 pd.log
-rw-r--r--. 1 tidb tidb 300M Feb 26 12:41 tidb-2018-02-26T12-41-07.422.log
-rw-r--r--. 1 tidb tidb 300M Feb 26 07:27 tidb-2018-02-26T07-27-14.182.log
-rw-r--r--. 1 tidb tidb 300M Feb 26 01:28 tidb-2018-02-26T01-28-01.950.log
-rw-r--r--. 1 tidb tidb 1.2M Feb 25 23:59 grafana.log.2018-02-26.001
-rw-r--r--. 1 tidb tidb 300M Feb 25 20:19 tidb-2018-02-25T20-19-30.612.log
-rw-r--r--. 1 tidb tidb 300M Feb 25 15:25 tidb-2018-02-25T15-25-44.638.log
-rw-r--r--. 1 tidb tidb 300M Feb 25 10:10 tidb-2018-02-25T10-10-43.080.log
-rw-r--r--. 1 tidb tidb 300M Feb 25 04:52 tidb-2018-02-25T04-52-30.458.log
-rw-r--r--. 1 tidb tidb 1.2M Feb 24 23:59 grafana.log.2018-02-25.001
-rw-r--r--. 1 tidb tidb 300M Feb 24 23:50 tidb-2018-02-24T23-50-40.561.log
-rw-r--r--. 1 tidb tidb 300M Feb 24 19:07 tidb-2018-02-24T19-07-29.820.log
-rw-r--r--. 1 tidb tidb 1.2M Feb 23 23:59 grafana.log.2018-02-24.001
-rw-r--r--. 1 tidb tidb    0 Feb 23 12:36 tidb_stderr.log
-rw-r--r--. 1 tidb tidb    0 Feb 23 12:35 pd_stderr.log
-rw-r--r--. 1 tidb tidb 6.0K Feb 23 12:35 node_exporter.log
-rw-r--r--. 1 tidb tidb 1.2M Feb 22 23:59 grafana.log.2018-02-23.001
-rw-r--r--. 1 tidb tidb 1.2M Feb 21 23:59 grafana.log.2018-02-22.001
-rw-r--r--. 1 tidb tidb 1.2M Feb 20 23:59 grafana.log.2018-02-21.001
-rw-r--r--. 1 tidb tidb 300M Feb 20 22:05 tidb-2018-02-20T22-05-53.671.log
-rw-r--r--. 1 tidb tidb 300M Feb 20 14:18 tidb-2018-02-20T14-18-44.560.log
-rw-r--r--. 1 tidb tidb 300M Feb 20 05:54 tidb-2018-02-20T05-54-28.614.log
-rw-r--r--. 1 tidb tidb 1.2M Feb 19 23:59 grafana.log.2018-02-20.001
-rw-r--r--. 1 tidb tidb 300M Feb 19 23:07 tidb-2018-02-19T23-07-51.056.log
-rw-r--r--. 1 tidb tidb 300M Feb 19 14:32 tidb-2018-02-19T14-32-44.714.log
-rw-r--r--. 1 tidb tidb 300M Feb 19 07:01 tidb-2018-02-19T07-01-15.305.log
-rw-r--r--. 1 tidb tidb 300M Feb 17 00:46 tidb-2018-02-17T00-46-14.754.log
-rw-r--r--. 1 tidb tidb 300M Feb 16 18:05 tidb-2018-02-16T18-05-43.586.log
-rw-r--r--. 1 tidb tidb 300M Feb 16 10:35 tidb-2018-02-16T10-35-49.588.log
-rw-r--r--. 1 tidb tidb 300M Feb 16 04:03 tidb-2018-02-16T04-03-38.107.log
-rw-r--r--. 1 tidb tidb 300M Feb 15 21:59 tidb-2018-02-15T21-59-07.215.log
-rw-r--r--. 1 tidb tidb 300M Feb 15 15:24 tidb-2018-02-15T15-24-36.606.log
-rw-r--r--. 1 tidb tidb 300M Feb 15 09:17 tidb-2018-02-15T09-17-30.513.log
-rw-r--r--. 1 tidb tidb 300M Feb 15 03:23 tidb-2018-02-15T03-23-14.743.log
-rw-r--r--. 1 tidb tidb 300M Feb 14 20:47 tidb-2018-02-14T20-47-48.327.log
-rw-r--r--. 1 tidb tidb 300M Feb 14 15:22 tidb-2018-02-14T15-22-11.974.log
-rw-r--r--. 1 tidb tidb 300M Feb 14 09:46 tidb-2018-02-14T09-46-35.250.log
-rw-r--r--. 1 tidb tidb 300M Feb 14 03:47 tidb-2018-02-14T03-47-54.495.log
-rw-r--r--. 1 tidb tidb 300M Feb 13 22:11 tidb-2018-02-13T22-11-20.884.log
-rw-r--r--. 1 tidb tidb 300M Feb 13 15:39 tidb-2018-02-13T15-39-26.258.log

The log is very large; it looks like the following:

2018/02/26 12:41:07.961 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/26 12:41:07.964 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:39003411, ArgLen:0
2018/02/26 12:41:07.969 index.go:607: [info] [ddl] total added index for 39003415 rows, this task [562267334872875549,562267334914820125) added index for 2 rows, batch 1048576, take time 0.011746543
2018/02/26 12:41:07.983 index.go:607: [info] [ddl] total added index for 39003417 rows, this task [562267334914820125,562267335166479901) added index for 2 rows, batch 1048576, take time 0.013613437
2018/02/26 12:41:07.995 index.go:607: [info] [ddl] total added index for 39003419 rows, this task [562267335166479901,562267335275534877) added index for 2 rows, batch 1048576, take time 0.012091375
2018/02/26 12:41:08.010 index.go:607: [info] [ddl] total added index for 39003421 rows, this task [562267335275534877,562267335409754141) added index for 2 rows, batch 1048576, take time 0.01529772
2018/02/26 12:41:08.021 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 39003421 rows, current done handle 562267335409754141, err <nil>
2018/02/26 12:41:08.025 index.go:607: [info] [ddl] total added index for 39003423 rows, this task [562267335409754141,562267335460087325) added index for 2 rows, batch 1048576, take time 0.014455151
2018/02/26 12:41:08.028 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/26 12:41:08.036 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:39003421, ArgLen:0
2018/02/26 12:41:08.041 index.go:607: [info] [ddl] total added index for 39003431 rows, this task [562267335460087325,562267335560756765) added index for 8 rows, batch 1048576, take time 0.01591445
2018/02/26 12:41:08.055 index.go:607: [info] [ddl] total added index for 39003433 rows, this task [562267335560756765,562267335594312733) added index for 2 rows, batch 1048576, take time 0.014135859
2018/02/26 12:41:08.070 index.go:607: [info] [ddl] total added index for 39003435 rows, this task [562267335594312733,562267335644645917) added index for 2 rows, batch 1048576, take time 0.014562202
2018/02/26 12:41:08.083 index.go:607: [info] [ddl] total added index for 39003439 rows, this task [562267335644645917,562267335703370781) added index for 4 rows, batch 1048576, take time 0.013720411
2018/02/26 12:41:08.095 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 39003439 rows, current done handle 562267335703370781, err <nil>
2018/02/26 12:41:08.096 index.go:607: [info] [ddl] total added index for 39003441 rows, this task [562267335703370781,562267335829201437) added index for 2 rows, batch 1048576, take time 0.012363442
2018/02/26 12:41:08.106 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/26 12:41:08.110 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:39003439, ArgLen:0
2018/02/26 12:41:08.114 index.go:607: [info] [ddl] total added index for 39003443 rows, this task [562267335829201437,562267336005363741) added index for 2 rows, batch 1048576, take time 0.017582543
2018/02/26 12:41:08.126 index.go:607: [info] [ddl] total added index for 39003445 rows, this task [562267336005363741,562267336106031645) added index for 2 rows, batch 1048576, take time 0.012708081
2018/02/26 12:41:08.139 index.go:607: [info] [ddl] total added index for 39003447 rows, this task [562267336106031645,562267336147976221) added index for 2 rows, batch 1048576, take time 0.013043389
2018/02/26 12:41:08.152 index.go:607: [info] [ddl] total added index for 39003451 rows, this task [562267336147976221,562267336189922333) added index for 4 rows, batch 1048576, take time 0.012521441
2018/02/26 12:41:08.165 index.go:607: [info] [ddl] total added index for 39003453 rows, this task [562267336189922333,562267336206701085) added index for 2 rows, batch 1048576, take time 0.012481821
2018/02/26 12:41:08.173 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 39003453 rows, current done handle 562267336206701085, err <nil>
2018/02/26 12:41:08.178 index.go:607: [info] [ddl] total added index for 39003457 rows, this task [562267336206701085,562267336240258589) added index for 4 rows, batch 1048576, take time 0.013345651
2018/02/26 12:41:08.180 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/26 12:41:08.185 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:39003453, ArgLen:0
2018/02/26 12:41:08.191 index.go:607: [info] [ddl] total added index for 39003459 rows, this task [562267336240258589,562267336408032285) added index for 2 rows, batch 1048576, take time 0.012833037
2018/02/26 12:41:08.204 index.go:607: [info] [ddl] total added index for 39003461 rows, this task [562267336408032285,562267336433199645) added index for 2 rows, batch 1048576, take time 0.012489982
2018/02/26 12:41:08.215 index.go:607: [info] [ddl] total added index for 39003465 rows, this task [562267336433199645,562267336525477405) added index for 4 rows, batch 1048576, take time 0.010997164
2018/02/26 12:41:08.226 index.go:607: [info] [ddl] total added index for 39003467 rows, this task [562267336525477405,562267336542256157) added index for 2 rows, batch 1048576, take time 0.01128412
2018/02/26 12:41:08.238 index.go:607: [info] [ddl] total added index for 39003469 rows, this task [562267336542256157,562267336626143773) added index for 2 rows, batch 1048576, take time 0.011339275
2018/02/26 12:41:08.247 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 39003469 rows, current done handle 562267336626143773, err <nil>
2018/02/26 12:41:08.252 index.go:607: [info] [ddl] total added index for 39003471 rows, this task [562267336626143773,562267336651311133) added index for 2 rows, batch 1048576, take time 0.014627943
2018/02/26 12:41:08.255 ddl_worker.go:354: [info] [ddl] schema version doesn't change
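[Editor's aside, not in the original comments: the RowCount values in these log lines give a rough progress estimate, assuming the table holds the ~130,767,217 rows reported by EXPLAIN earlier in the thread.]

```sql
-- Rough progress of the add-index job after ~2 days
-- (RowCount from the last log line vs. the table's total row count).
SELECT 39003453 / 130767217 AS progress;  -- ≈ 0.298, i.e. about 30% done
```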

@zimulala

@leonchen83
Thank you for your report! We will handle it later. Related: #5192.

@leonchen83

@zimulala
Hi,
I killed that create index statement using kill tidb process-id.

After killing it:

mysql> show processlist;
+------+--------+---------------+--------------------+---------+------+-------+------------------+
| Id   | User   | Host          | db                 | Command | Time | State | Info             |
+------+--------+---------------+--------------------+---------+------+-------+------------------+
|   37 | ****** | 192.168.*.*** | PRICING            | Query   |    0 | 2     | show processlist |
+------+--------+---------------+--------------------+---------+------+-------+------------------+

Then I found that the create index job was still running in the background when I checked tidb.log:

2018/02/27 12:32:42.610 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/27 12:32:42.612 index.go:607: [info] [ddl] total added index for 60395508 rows, this task [562613608629514781,562613608805677085) added index for 2 rows, batch 1048576, take time 0.022767302
2018/02/27 12:32:42.613 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:60395505, ArgLen:0
2018/02/27 12:32:42.655 index.go:607: [info] [ddl] total added index for 60395510 rows, this task [562613608805677085,562613608889564701) added index for 2 rows, batch 1048576, take time 0.042558022
2018/02/27 12:32:42.671 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 60395510 rows, current done handle 562613608889564701, err <nil>
2018/02/27 12:32:42.691 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/27 12:32:42.694 index.go:607: [info] [ddl] total added index for 60395512 rows, this task [562613608889564701,562613609166391837) added index for 2 rows, batch 1048576, take time 0.038809584
2018/02/27 12:32:42.694 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:60395510, ArgLen:0
2018/02/27 12:32:42.722 index.go:607: [info] [ddl] total added index for 60395514 rows, this task [562613609166391837,562613609300611101) added index for 2 rows, batch 1048576, take time 0.028120127
2018/02/27 12:32:42.754 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 60395514 rows, current done handle 562613609300611101, err <nil>
2018/02/27 12:32:42.754 index.go:607: [info] [ddl] total added index for 60395516 rows, this task [562613609300611101,562613609317389853) added index for 2 rows, batch 1048576, take time 0.031463399
2018/02/27 12:32:42.774 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/27 12:32:42.777 index.go:607: [info] [ddl] total added index for 60395518 rows, this task [562613609317389853,562613609401279005) added index for 2 rows, batch 1048576, take time 0.022842266
2018/02/27 12:32:42.777 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:60395514, ArgLen:0
2018/02/27 12:32:42.804 index.go:607: [info] [ddl] total added index for 60395528 rows, this task [562613609401279005,562613609426452509) added index for 10 rows, batch 1048576, take time 0.027748724
2018/02/27 12:32:42.829 index.go:607: [info] [ddl] total added index for 60395530 rows, this task [562613609426452509,562613609476785693) added index for 2 rows, batch 1048576, take time 0.024087704
2018/02/27 12:32:42.835 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 60395530 rows, current done handle 562613609476785693, err <nil>
2018/02/27 12:32:42.858 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/27 12:32:42.861 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:60395530, ArgLen:0
2018/02/27 12:32:42.863 index.go:607: [info] [ddl] total added index for 60395532 rows, this task [562613609476785693,562613609594229277) added index for 2 rows, batch 1048576, take time 0.03412767
2018/02/27 12:32:42.890 index.go:607: [info] [ddl] total added index for 60395535 rows, this task [562613609594229277,562613609619398173) added index for 3 rows, batch 1048576, take time 0.026593921
2018/02/27 12:32:42.912 index.go:607: [info] [ddl] total added index for 60395537 rows, this task [562613609619398173,562613609711674397) added index for 2 rows, batch 1048576, take time 0.02235482
2018/02/27 12:32:42.920 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 60395537 rows, current done handle 562613609711674397, err <nil>
2018/02/27 12:32:42.935 index.go:607: [info] [ddl] total added index for 60395539 rows, this task [562613609711674397,562613609762007581) added index for 2 rows, batch 1048576, take time 0.022496874
2018/02/27 12:32:42.942 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/27 12:32:42.945 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:60395537, ArgLen:0
2018/02/27 12:32:42.960 index.go:607: [info] [ddl] total added index for 60395541 rows, this task [562613609762007581,562613609787174941) added index for 2 rows, batch 1048576, take time 0.025629411
2018/02/27 12:32:42.981 index.go:607: [info] [ddl] total added index for 60395545 rows, this task [562613609787174941,562613609803955229) added index for 4 rows, batch 1048576, take time 0.020870596
2018/02/27 12:32:43.004 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 60395545 rows, current done handle 562613609803955229, err <nil>
2018/02/27 12:32:43.021 index.go:607: [info] [ddl] total added index for 60395547 rows, this task [562613609803955229,562613609938174493) added index for 2 rows, batch 1048576, take time 0.039546952
2018/02/27 12:32:43.025 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/27 12:32:43.028 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:60395545, ArgLen:0
2018/02/27 12:32:43.049 index.go:607: [info] [ddl] total added index for 60395549 rows, this task [562613609938174493,562613609980119069) added index for 2 rows, batch 1048576, take time 0.028244708
2018/02/27 12:32:43.068 index.go:607: [info] [ddl] total added index for 60395553 rows, this task [562613609980119069,562613610055619613) added index for 4 rows, batch 1048576, take time 0.0191805
2018/02/27 12:32:43.087 reorg.go:112: [info] [ddl] run reorg job wait timeout 50ms, handled 60395553 rows, current done handle 562613610055619613, err <nil>
2018/02/27 12:32:43.100 index.go:607: [info] [ddl] total added index for 60395559 rows, this task [562613610055619613,562613610173064733) added index for 6 rows, batch 1048576, take time 0.031124956
2018/02/27 12:32:43.108 ddl_worker.go:354: [info] [ddl] schema version doesn't change
2018/02/27 12:32:43.112 ddl_worker.go:257: [info] [ddl] run DDL job ID:53, Type:add index, State:running, SchemaState:write reorganization, SchemaID:27, TableID:47, RowCount:60395553, ArgLen:0
2018/02/27 12:32:43.122 index.go:607: [info] [ddl] total added index for 60395561 rows, this task [562613610173064733,562613610256952349) added index for 2 rows, batch 1048576, take time 0.022045652
2018/02/27 12:32:43.162 index.go:607: [info] [ddl] total added index for 60395563 rows, this task [562613610256952349,562613610357617181) added index for 2 rows, batch 1048576, take time 0.04030802

How should I terminate that create index process?

@zimulala

@leonchen83
You can use the statement ADMIN CANCEL DDL JOBS 'job_id' [, 'job_id'] .... Related docs
The job_id is 53 here.
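[Editor's note: a minimal sketch of the cancellation, using the job id 53 visible in the log lines above. ADMIN SHOW DDL JOBS exists only in newer TiDB releases, so treat that part as an assumption for this version.]

```sql
-- Inspect running and queued DDL jobs to confirm the id
-- (assumes a TiDB version that supports ADMIN SHOW DDL JOBS).
ADMIN SHOW DDL JOBS;

-- Cancel the stuck add-index job.
ADMIN CANCEL DDL JOBS 53;
```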

winkyao commented Mar 22, 2018

@leonchen83 The slow create index was fixed by PR #5964; you can try the newest master branch.

@shenli shenli changed the title Select query too slow Select statement run too slow Apr 6, 2018
shenli commented Apr 6, 2018

@leonchen83 Feel free to reopen this issue if you still have any problem.
