Run the application in the same way as other Husky applications.
Run the master: ./HuskyMaster -C <your conf file path>
Run the application: ./CubingByLayer -C <your conf file path>
Alternatively, you can run the application in a distributed manner with ./exec.sh ./CubingByLayer -C <your conf file path>
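A minimal Husky configuration file might look like the sketch below. The host names, ports, and worker counts are placeholders, and the key names follow the format used by Husky's sample configuration (an assumption; check the template in your Husky checkout for the authoritative key set):

```
# Hypothetical Husky conf sketch -- adjust hosts/ports to your cluster.
master_host=master
master_port=10086
comm_port=12306

# HDFS settings, needed if the input/output lives on HDFS.
hdfs_namenode=master
hdfs_namenode_port=9000

# One "info" line per worker machine: hostname:number_of_threads.
[worker]
info=worker1:4
info=worker2:4
info=worker3:4
```

The same file is passed to both HuskyMaster and CubingByLayer via -C, so the master and the workers agree on ports and worker membership.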
The exec.sh script is the one shipped with Husky, which looks like this:

# This points to a file, which should contain hostnames (one per line).
# E.g.,
#
# worker1
# worker2
# worker3
#
MACHINE_CFG=/data/opt/tmp/tati/husky/build/slaves

# This points to the directory where the Husky binaries live.
# If Husky is running on a cluster, this directory should be available
# to all machines.
BIN_DIR=/data/opt/tmp/tati/husky/build

time pssh -t 0 -P -h ${MACHINE_CFG} \
    -x "-t -t" "cd $BIN_DIR && ls $BIN_DIR > /dev/null && ./$@"
The meta_url parameter is the same as the Kylin input to Spark/MR; hive_table is the HDFS path of the flat join table; table_format is the storage format of the flat join table; and output_path is an HDFS path where the generated cuboids will be written. The sample parameters are for building the example cube shipped with Kylin, which is very small. If you want to try large-scale data, you can deploy your own Kylin instance on the cluster, import the TPC-H benchmark to obtain cube descriptions, and run the Kylin pipeline to create the flat join table.
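Concretely, the four parameters above can sit in the same conf file passed via -C. All values below are hypothetical placeholders, not real deployment paths; the real ones come from your Kylin instance and HDFS layout:

```
# Illustrative values only -- substitute the ones from your Kylin deployment.
meta_url=kylin_metadata@hbase        # same metadata URL Kylin hands to Spark/MR
hive_table=/path/to/flat_join_table  # HDFS path of the flat join table
table_format=textfile                # storage format of the flat join table
output_path=/path/to/cuboid_output   # HDFS directory to receive the cuboids
```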