[MultiDB] add multidb warmboot support - restoring database (#5773)
* Restore each database with all its data before warmboot, then flush unused data in each instance, following the multi-DB warmboot design at https://github.com/Azure/SONiC/blob/master/doc/database/multi_database_instances.md
  * The restore must be done inside the database docker, since we need to know the database_config.json of the new version
  * Copy the all-data rdb file into each instance's restoration location, and then flush the unused databases
  * Other logic is the same as before
* The database backup part is in another PR at sonic-utilities sonic-net/sonic-utilities#1205; the two PRs depend on each other
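The restore step described above can be sketched in isolation: one pre-warmboot dump.rdb is copied into every instance's data directory, skipping the instance that already owns the file. The helper name, paths, and instance names below are illustrative assumptions, not the actual script contents (the real logic lives in docker-database-init.sh).

```python
import os


def restore_targets(dump_path, instances, destdir="/var/lib"):
    """Return the per-instance dump.rdb paths that should receive a copy
    of dump_path, skipping the instance that already owns the file."""
    targets = []
    for inst in instances:
        target = os.path.join(destdir, inst, "dump.rdb")
        if os.path.abspath(target) != os.path.abspath(dump_path):
            targets.append(target)
    return targets


# Hypothetical two-instance layout: only the second instance needs a copy.
print(restore_targets("/var/lib/redis/dump.rdb", ["redis", "redis2"]))
# → ['/var/lib/redis2/dump.rdb']
```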
dzhangalibaba committed Dec 10, 2020
1 parent 3c9a7ec commit b2a3de5
Showing 4 changed files with 53 additions and 0 deletions.
1 change: 1 addition & 0 deletions dockers/docker-database/Dockerfile.j2
@@ -58,5 +58,6 @@ COPY ["files/supervisor-proc-exit-listener", "/usr/bin"]
COPY ["files/sysctl-net.conf", "/etc/sysctl.d/"]
COPY ["critical_processes", "/etc/supervisor"]
COPY ["files/update_chassisdb_config", "/usr/local/bin/"]
COPY ["flush_unused_database", "/usr/local/bin/"]

ENTRYPOINT ["/usr/local/bin/docker-database-init.sh"]
16 changes: 16 additions & 0 deletions dockers/docker-database/docker-database-init.sh
@@ -77,4 +77,20 @@ else
fi
rm $db_cfg_file_tmp

# copy dump.rdb file to each instance for restoration
DUMPFILE=/var/lib/redis/dump.rdb
redis_inst_list=`/usr/bin/python -c "import swsssdk; print(' '.join(swsssdk.SonicDBConfig.get_instancelist().keys()))"`
for inst in $redis_inst_list
do
    mkdir -p /var/lib/$inst
    if [[ -f $DUMPFILE ]]; then
        # copy warmboot rdb file into each new instance location
        if [[ "$DUMPFILE" != "/var/lib/$inst/dump.rdb" ]]; then
            cp $DUMPFILE /var/lib/$inst/dump.rdb
        fi
    else
        echo -n > /var/lib/$inst/dump.rdb
    fi
done

exec /usr/local/bin/supervisord
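Once supervisord is running, the flush helper (added in the next file) must wait until redis answers PING before it can flush anything. That polling pattern can be sketched generically; `ping`, `retries`, and `delay` here are assumed names for illustration, not the actual script's interface.

```python
import time


def wait_for_pong(ping, retries=30, delay=1.0):
    """Poll `ping` (a callable returning truthy once the server answers
    PONG) up to `retries` times, sleeping `delay` seconds between tries."""
    for _ in range(retries):
        if ping():
            return True
        time.sleep(delay)
    return False


# Usage sketch: a fake ping that succeeds on the third attempt.
attempts = {"n": 0}


def fake_ping():
    attempts["n"] += 1
    return attempts["n"] >= 3


print(wait_for_pong(fake_ping, retries=5, delay=0))
# → True
```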
28 changes: 28 additions & 0 deletions dockers/docker-database/flush_unused_database
@@ -0,0 +1,28 @@
#!/usr/bin/python
import swsssdk
import redis
import subprocess
import time

while True:
    output = subprocess.Popen(['sonic-db-cli', 'PING'], stdout=subprocess.PIPE).communicate()[0]
    if 'PONG' in output:
        break
    time.sleep(1)

instlists = swsssdk.SonicDBConfig.get_instancelist()
for instname, v in instlists.items():
    insthost = v['hostname']
    instsocket = v['unix_socket_path']

    dblists = swsssdk.SonicDBConfig.get_dblist()
    for dbname in dblists:
        dbid = swsssdk.SonicDBConfig.get_dbid(dbname)
        dbinst = swsssdk.SonicDBConfig.get_instancename(dbname)

        # this DB is on current instance, skip flush
        if dbinst == instname:
            continue

        r = redis.Redis(host=insthost, unix_socket_path=instsocket, db=dbid)
        r.flushdb()
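The flush decision above can be modeled in isolation: after every instance is restored from the same full dump, each instance flushes the DB ids that belong to a different instance. The instance names and DB-to-owner mapping below are a made-up example, not SONiC's real database_config.json.

```python
def dbs_to_flush(instances, db_owner):
    """Return (instance, dbname) pairs where dbname belongs to a different
    instance -- i.e. the stale copies that get FLUSHDB'd after restore."""
    return [(inst, dbname)
            for inst in instances
            for dbname, owner in sorted(db_owner.items())
            if owner != inst]


# Hypothetical two-instance layout:
print(dbs_to_flush(["redis", "redis2"],
                   {"APPL_DB": "redis", "STATE_DB": "redis2"}))
# → [('redis', 'STATE_DB'), ('redis2', 'APPL_DB')]
```

Each instance keeps only the databases it owns, so the union of all instances still holds exactly one live copy of every DB after the flush.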
8 changes: 8 additions & 0 deletions dockers/docker-database/supervisord.conf.j2
@@ -33,3 +33,11 @@ stdout_logfile=syslog
stderr_logfile=syslog
{% endfor %}
{% endif %}

[program:flushdb]
command=/bin/bash -c "sleep 300 && /usr/local/bin/flush_unused_database"

Comment from rajendra-dendukuri (Contributor), Sep 2, 2021:
    Why do we wait for 300s to flush unused DBs? They are unused anyway, can't we delete them right away?
    @dzhangalibaba can you please comment.
priority=3
autostart=true
autorestart=false
stdout_logfile=syslog
stderr_logfile=syslog
