Performance comparison conclusion: MySQL wide table > MySQL JSON ≈ PostgreSQL JSONB > PostgreSQL JSON
Wide-table plan

MySQL limits: row size 65,535 bytes, at most 4,096 columns per table.
utf8 encoding: 1 character occupies 3 bytes; utf8mb4 encoding: 1 character occupies 4 bytes.
The table uses 500 storage fields, composed of the following columns:
References:
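Those two limits together bound the column widths: 65,535 bytes / 500 columns ≈ 131 bytes per column, so under utf8mb4 each field can be at most roughly varchar(32) (32 × 4 = 128 bytes, plus a length byte). A minimal sketch of such a wide table; the exact column sizes and the `id` key are assumptions, not the schema actually used in the test:

```sql
-- Hypothetical wide-table DDL: 500 varchar columns sized to fit
-- MySQL's 65,535-byte row limit under utf8mb4 (4 bytes per character).
-- varchar(32) costs 32 * 4 + 1 length byte = 129 bytes; 500 * 129 = 64,500 bytes.
CREATE TABLE data (
    id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    col_0   VARCHAR(32),
    col_1   VARCHAR(32),
    -- ... col_2 through col_498 declared the same way ...
    col_499 VARCHAR(32)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```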
Conditional query

> select * from data where col_0 = '网卡'; ... 726 rows in set (0.278 sec)
> select data from json where data -> '$.col_0' = '网卡'; ... 704 rows in set (0.319 sec)
# select data -> 'col_0' from jsonb; Time: 233.711 ms (0.233 sec)
# select data -> 'col_0' from json; Time: 876.308 ms (0.876 sec)
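None of the JSON variants above use an index, so these are raw scan times. Both engines can index an extracted key; a hedged sketch of the PostgreSQL side (the index names and the btree/GIN choice are assumptions, not part of the original test):

```sql
-- PostgreSQL: a btree expression index serves equality filters
-- on a single extracted key ...
CREATE INDEX idx_jsonb_col_0 ON jsonb ((data ->> 'col_0'));

-- ... while a GIN index on the whole document serves containment queries.
CREATE INDEX idx_jsonb_gin ON jsonb USING GIN (data);

-- With the GIN index, the equivalent filter can be written as containment:
SELECT data FROM jsonb WHERE data @> '{"col_0": "网卡"}';
```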
Paged query

> select * from data where col_0 = 'CPU' limit 50; ... 50 rows in set (0.019 sec)
> select data from json where data -> '$.col_0' = 'CPU' limit 50; ... 50 rows in set (0.017 sec)
# select data -> 'col_0' from jsonb limit 50; Time: 1.834 ms (0.001 sec)
# select data -> 'col_0' from json limit 50; Time: 4.824 ms (0.004 sec)
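One caveat on these paging numbers: `LIMIT` without `ORDER BY` lets the server stop at the first 50 rows it happens to scan, which is why all four variants are fast, but the returned set is not deterministic. Real pagination would look more like the following sketch (the `id` column and the cursor value are assumptions):

```sql
-- Deterministic paging: order by a unique key and keyset-paginate,
-- so later pages stay as cheap as the first one.
SELECT * FROM data
WHERE col_0 = 'CPU' AND id > 1000   -- last id seen on the previous page
ORDER BY id
LIMIT 50;
```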
Grouped query

> select col_0,count(1) from data group by col_0;
+-----------+----------+
| col_0     | count(1) |
+-----------+----------+
| 内存      |      757 |
| 鼠标      |      695 |
| 机箱      |      688 |
| 声卡      |      744 |
| 显卡      |      662 |
| 网卡      |      726 |
| 打印机    |      728 |
| 显示器    |      726 |
| 硬盘      |      700 |
| 主板      |      718 |
| 键盘      |      749 |
| 光驱      |      690 |
| 电源      |      717 |
| CPU       |      656 |
+-----------+----------+
14 rows in set (0.045 sec)

> select data->'$.col_0',count(1) from json group by data->'$.col_0';
+-----------------+----------+
| data->'$.col_0' | count(1) |
+-----------------+----------+
| "显卡"          |      715 |
| "光驱"          |      703 |
| "主板"          |      745 |
| "声卡"          |      683 |
| "键盘"          |      751 |
| "机箱"          |      711 |
| "鼠标"          |      723 |
| "电源"          |      699 |
| "显示器"        |      738 |
| "网卡"          |      704 |
| "内存"          |      720 |
| "硬盘"          |      725 |
| "打印机"        |      697 |
| "CPU"           |      638 |
+-----------------+----------+
14 rows in set (0.416 sec)

# select data ->> 'col_0',count(1) from jsonb group by data ->> 'col_0';
 ?column? | count
----------+-------
 显示器   |   721
 光驱     |   680
 CPU      |   736
 声卡     |   724
 主板     |   736
 硬盘     |   732
 鼠标     |   675
 机箱     |   686
 键盘     |   712
 显卡     |   732
 打印机   |   686
 电源     |   706
 内存     |   709
 网卡     |   766
(14 rows)
Time: 225.264 ms (0.225 sec)

# select data ->> 'col_0',count(1) from json group by data ->> 'col_0';
 ?column? | count
----------+-------
 主板     |   736
 CPU      |   736
 显示器   |   722
 键盘     |   712
 鼠标     |   675
 电源     |   706
 打印机   |   686
 声卡     |   724
 内存     |   709
 机箱     |   686
 网卡     |   766
 硬盘     |   732
 显卡     |   732
 光驱     |   680
(14 rows)
Time: 885.111 ms (0.885 sec)
Join query

> select d1.col_0,d2.col_1 from data d1 left join data d2 on d1.col_0 = d2.col_1 where d1.col_0 = 'CPU'; 463792 rows in set (0.490 sec)
> select j1.data -> '$.col_0',j2.data->'$.col_1' from json j1 left join json j2 on json_value(j1.data, '$.col_0') = json_value(j2.data,'$.col_1') where j1.data -> '$.col_0' = 'CPU'; 490622 rows in set (2.384 sec)
> alter table json add column col_0 varchar(50) as (JSON_EXTRACT(data, '$.col_0')) VIRTUAL;
> alter table json add column col_1 varchar(50) as (JSON_EXTRACT(data, '$.col_1')) VIRTUAL;
> create index index_col_0 on json (col_0);
> create index index_col_1 on json (col_1);
> select j1.col_0,j2.col_1 from json j1 left join json j2 on j1.col_0 = j2.col_1 where j1.col_0 = '"CPU"'; 490622 rows in set (0.601 sec)
# select j1.data ->> 'col_0',j2.data ->> 'col_1' from jsonb j1 left join jsonb j2 on j1.data ->>'col_0' = j2.data ->>'col_1' where j1.data ->> 'col_0'= 'CPU'; Time: 20984.119 ms (00:20.984) (20.984 sec)
# select j1.data ->> 'col_0',j2.data ->> 'col_1' from json j1 left join json j2 on j1.data ->>'col_0' = j2.data ->>'col_1' where j1.data ->> 'col_0'= 'CPU'; ⚠️ Time: 70967.075 ms (01:10.967) (70.967 sec)
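The MySQL JSON join was rescued by virtual columns plus indexes (2.384 s → 0.601 s), while the PostgreSQL side was left unindexed at 20.984 s (jsonb) and 70.967 s (json). A comparable optimization, not part of the original test, would be expression indexes on the extracted keys; the index names here are assumptions:

```sql
-- Hypothetical counterpart to the MySQL virtual-column indexes:
-- index the extracted text values used in the join condition.
CREATE INDEX idx_jsonb_join_col_0 ON jsonb ((data ->> 'col_0'));
CREATE INDEX idx_jsonb_join_col_1 ON jsonb ((data ->> 'col_1'));

-- The join itself is unchanged; the planner can now match the
-- indexed expressions instead of re-evaluating ->> per row pair.
SELECT j1.data ->> 'col_0', j2.data ->> 'col_1'
FROM jsonb j1
LEFT JOIN jsonb j2 ON j1.data ->> 'col_0' = j2.data ->> 'col_1'
WHERE j1.data ->> 'col_0' = 'CPU';
```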