This repository records frequently used commands (Linux and Python) and archives some frequently used Python scripts.
A clean way to switch gcc/g++ versions is to use update-alternatives and set the default to gcc/g++-10 for the build.
sudo apt install gcc-10 g++-10
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10
sudo update-alternatives --config gcc
sudo update-alternatives --config g++
git checkout -b new-feature-branch
git add .
git commit -m "A descriptive message about your changes"
git push origin new-feature-branch
git pull
git config pull.rebase false
git reset --hard HEAD~1
objectives:
1. get the newest updates from the original project
2. push your own code even without write access to the original repo
-
fork the desired project.
-
clone the fork into your local repo
git clone https://github.com/MitchellX/flash-attention.git
-
add the original repo as upstream, so that you can pull the newest changes
git remote add upstream https://github.com/Dao-AILab/flash-attention.git
-
see the remote choices
git remote -v
-
get the newest changes
git fetch upstream && git merge upstream/main
-
push codes to remote (default: origin/main, you don't have access to upstream/main)
git push
cd -    # return to the previous directory
!pip3 install youtube-dl ffmpeg-python
source_url = 'https://www.youtube.com/watch?v=5-s3ANu4eMs' #@param {type:"string"}
# (start, end) clip the given time range
source_start = '00:01:40' #@param {type:"string"}
source_end = '00:01:50' #@param {type:"string"}
!mkdir -p /content/data
!rm -dr /content/data/source*
!youtube-dl $source_url --merge-output-format mp4 -o /content/data/source_tmp.mp4
!ffmpeg -y -i /content/data/source_tmp.mp4 -ss $source_start -to $source_end -r 25 /content/data/source.mp4
!rm /content/data/source_tmp.mp4
Taylor Swift videos:
source_url = 'https://www.youtube.com/watch?v=JgkCFCOAn48'
source_start = '00:00:08' #@param {type:"string"}
source_end = '00:00:25' #@param {type:"string"}
# scp root@10.1.22.5:/root/1.txt e:\scpdata\
scp xiangmingcan@10.207.174.24:/export2/xiangmingcan/celeba.tar e: # download to drive E
Upload a folder from Windows to a Linux server:
scp -rp e:\scpdata root@10.1.22.5:/root
Transfer between Linux servers:
As user admin, copy everything under /home/admin/test on 192.168.219.125 to the local /home/admin/ directory:
scp -r user@host:remote_dir local_path
scp -r admin@192.168.219.125:/home/admin/test /home/admin/
scp -r local_dir user@host:remote_path
scp -r /home/music/ root@ipAddress:/home/root/others/
# specify a port
scp -P 7022 ./nyu_v2.zip tongping@keb310-useast.xttech.tech:/home/tongping/dataset/
Copy multiple remote files at once (escape the braces):
scp -r root@192.168.1.104:/usr/local/nginx/html/webs/\{index,json\} ./
Copy multiple local files to a remote host (separate the files with spaces):
cd into the local directory first, then run:
scp index.css json.js root@192.168.1.104:/usr/local/nginx/html/webs
rsync -rvz -e 'ssh -p **22' --exclude='*.model' dir/ host:/dir
-a or --archive: archive mode, which preserves permissions, ownership, timestamps, and links.
-v or --verbose: verbose output, which displays the progress of the transfer.
-z or --compress: compresses the data during transfer, which can help to reduce the amount of data being transferred over the network.
-P or --partial --progress: shows the progress of the transfer and resumes partially transferred files.
-r recurse into directories
-e: use ssh as the remote shell, so everything is encrypted
--exclude='*.out': exclude files matching the pattern, e.g. *.out or *.c
To skip files that already exist on the target, use rsync: rsync -aWPu local root@host:remote. Options:
-a: archive mode; preserve all source attributes and recurse into directories
-W: skip the delta-transfer algorithm and send whole files; useful on fast links
-P: show transfer progress
-u: only transfer when the source file is newer than the one on the destination
Check the OS version:
Ubuntu:
lsb_release -a
uname -a
CentOS:
cat /etc/redhat-release
rpm -q centos-release
CPU info:
cat /proc/cpuinfo
# if you don't need compression (or want one-step extraction on Windows), use plain tar
tar -cvf ***.tar /source
tar -xvf ***.tar
Compress
tar -czvf *name*.tar.gz /source
tar -cjvf *name*.tar.bz2 /source
tar -czvf 3000.tar.gz 3000/ # example
Extract
tar -xzvf ***.tar.gz
tar -xjvf ***.tar.bz2
Option reference
-c: create an archive
-x: extract
-t: list the archive contents
-v: verbose, show progress
-f: file; required, and it must be the last option, followed immediately by the archive name
-z: gzip compression (.gz)
-j: bzip2 compression (.bz2)
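A throwaway sketch of the compress/list/extract cycle above; all paths are made up, and it runs in a scratch directory so nothing real is touched:

```shell
#!/bin/sh
cd "$(mktemp -d)"                # work in a temp directory
mkdir src
echo hello > src/a.txt
tar -czf src.tar.gz src/         # -c create, -z gzip, -f archive name last
tar -tzf src.tar.gz              # -t lists the contents without extracting
mkdir out
tar -xzf src.tar.gz -C out       # -x extract, -C chooses the destination
cat out/src/a.txt                # -> hello
```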
git branch    # list local branches
git branch -r    # list remote branches
git branch -a    # list all branches
git branch [branch name]    # create a branch
git checkout [branch name]    # switch to the new branch
Or do both in one command:
git checkout -b [branch name]    # create + switch
git branch -r (--remote)
git checkout [branch name]
git push origin [branch name]
Delete a local branch:
git branch -d [branch name]
Delete a remote branch on GitHub; the colon before the branch name means delete:
git push origin :[branch name]
To pull only a specific branch, use:
git pull [repo URL] [branch name]
git clone https://github.com/MitchellX/testImage.git
git add .    (note the trailing dot: this stages everything under the folder)
git commit -m "commit message"    (replace "commit message" with your own, e.g. "first commit")
git push -u origin master    (pushes the local repo to GitHub; you will be asked for your credentials)
Or as a single line:
git add . && git commit -m "update" && git push
git rm -r --cached .
git fetch --all
git reset --hard origin/main
git pull
Then there are two ways to merge your code with the remote repository:
-a. git pull updates your local repository directly, but may produce conflicts; not recommended.
-b. Run git fetch origin first (fetch the latest code from origin), then git merge origin/master (merge it into your local code). If your changes conflict with the latest remote code, Git will tell you; resolve the conflicts one by one, then start again from step 1.
If there are no conflicts, git push origin master pushes your changes to the remote repository.
https://zhuanlan.zhihu.com/p/137856034
https://stackoverflow.com/questions/6084483/what-should-i-do-when-git-revert-aborts-with-an-error-message
git fetch --all && git reset --hard origin/main && git pull
Remove a commit from a remote branch: git revert HEAD, then git push origin master. This undoes the last commit, although git log still shows a record of it.
https://blog.csdn.net/u011630575/article/details/48288663
fg    # bring the most recent job back to the foreground
bg    # resume a stopped job in the background; with several jobs, use bg %jobnumber to pick one
jobs -l    # list current jobs with their PIDs
kill pid    # kill the process with that PID
gpustat    # the simplest GPU monitor
watch -n 0.1 nvidia-smi    # live GPU monitoring; -n sets the interval
lspci | grep -i vga
fuser -v /dev/nvidia*    # list the processes currently holding the GPU
nvidia-smi    # also shows PIDs
kill -9 pid    # force-kill the process
import sys
sys.getsizeof(input.storage())  # size in bytes (B)
print('model.__len__(): %d layers' % model.__len__())
print(f'model.__len__(): {model.__len__()} layers')
# U-net(5, 64) memory usage
param_count = sum(p.storage().size() for p in model.parameters())
param_size = sum(p.storage().size() * p.storage().element_size() for p in model.parameters())
param_scale = 2 # param + grad
print(f'# of Model Parameters: {param_count:,}')
print(f'Total Model Parameter Memory: {param_size * param_scale:,} Bytes')
Before feeding into a fully connected layer you usually need a flatten step inside nn.Sequential, so you write a small flatten class that inherits from nn.Module. The code is below; recorded here for reference.
class View(nn.Module):
    def __init__(self):
        super(View, self).__init__()

    def forward(self, x):
        return x.view(x.size(0), -1)  # x.size is a method: use x.size(0), not x.size[0]
Given an existing local conda environment named AAA, clone it into a new one named BBB with a single command:
conda create -n BBB --clone AAA
conda create -n your_env_name python=X.X  # create a virtual environment (X.X = 2.7, 3.6, etc.)
conda remove -n your_env_name --all  # delete a virtual environment
What about across machines? The conda create documentation describes --clone like this:
--clone ENV
Path to (or name of) existing local environment.
So the argument after --clone can be either an environment name or a path. Naturally, then, you can copy the target conda environment's directory from the old machine to the new one and run:
conda create -n BBB --clone ~/path
Reference: https://blog.csdn.net/qq_38262728/article/details/88744268
sudo make install
unset https_proxy
unset http_proxy
sudo apt-get update
export PYTHONPATH=$PYTHONPATH:~/path/to/your/code
export PYTHONPATH=$PYTHONPATH:/home/xiangmingcan/notespace/deepfakes/faceswapNirkin/face_swap/interfaces/python
CUDA_VISIBLE_DEVICES=1 python xxx.py ...
CUDA_VISIBLE_DEVICES=0 python your_file.py # expose only the first GPU in the cluster; hide the others
CUDA_VISIBLE_DEVICES=1 Only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1 Devices 0 and 1 will be visible
*  matches zero or more characters
?  matches exactly one character
mv *.* ./1000/
mv 6???.* ./6000/
pip install xxx -i https://pypi.tuna.tsinghua.edu.cn/simple
python -m ipdb your_code.py
Or debug invasively from inside the code (at the prompt you can even call os.system('command')):
import ipdb
ipdb.set_trace()
df -lh
du -lh
yum install -y screen    # install screen
screen    # start a screen session
screen -S <name>    # start a session named <name>
Press Ctrl+a, then d    # detach from the session
screen -ls    # list open sessions
screen -r <id>    # reattach to a session after detaching
Ctrl+d or exit    # end the session
# "no screen to be resumed", but it does exist
screen -d -r
# force-kill a session that refuses to die
screen -X -S [session # you want to kill] quit
for i in `ls templates/*.mp4`; do
    name=`basename $i .mp4`
    if [ ! -d templates/$name ]; then
        python image2video_fp.py templates/$i templates/$name
    fi
    python main.py $name
    python image2video_fp.py results/${name}_sijiali results/${name}_sijiali.mp4 25
    echo $i
done
basename strips the .mp4 suffix, leaving the base name
# create the folder if it doesn't exist
if [ ! -d "/myfolder" ]; then
    mkdir /myfolder
fi
username=$(basename $username)    # strip the leading path
username=$(basename $username .jpg)    # also strip the suffix
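The two basename forms side by side, with a hypothetical path:

```shell
#!/bin/sh
name=$(basename /home/user/photos/cat.jpg)       # strip the leading path -> cat.jpg
stem=$(basename /home/user/photos/cat.jpg .jpg)  # also strip the suffix -> cat
echo "$name $stem"
```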
if not os.path.exists(args.dest):
    os.mkdir(args.dest)
os.path.splitext()[0]
os.path.basename()
# chain the two to keep just the bare name
target_name = os.path.splitext(os.path.basename(target_path))[0]
landmark_txt = os.path.split(image_path)[1][:-3] + 'txt'
upper_folder = os.path.split(os.path.split(image_path)[0])[0]
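A runnable sketch of the os.path helpers above; the path is hypothetical:

```python
import os.path

target_path = '/data/videos/clip01.mp4'  # made-up path for illustration
target_name = os.path.splitext(os.path.basename(target_path))[0]  # 'clip01'
upper_folder = os.path.split(os.path.split(target_path)[0])[0]    # '/data'
landmark_txt = os.path.split(target_path)[1][:-3] + 'txt'         # 'clip01.txt'
```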
Open the ssh port:    bash ~/notespace/xmc
Update package sources:    sudo apt-get update
Activate the virtualenv:    source ~/envs/digitalman/bin/activate
Uninstall and reinstall dlib:    pip3 uninstall dlib && pip3 install dlib
Set the root password:    sudo passwd
Remove the old link:
rm /usr/bin/python
Create the new link:
ln -s /usr/bin/python3.6 /usr/bin/python    # replace 3.6 with the version you want to point at
ln [options] [source file or directory] [target file or directory]
ln -s src/ ./
/etc/apt/sources.list
Back up the existing sources first, then switch to the Tsinghua mirror:
cp sources.list sources.list2
vim sources.list
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
Don't prefix the update here with sudo, or it will fail!!!
apt-get update
Once that finishes, you can install the latest packages.
Afterwards, to download from the Tsinghua mirror, point pip at it with -i:
pip install virtualenv -i https://pypi.tuna.tsinghua.edu.cn/simple
virtualenv --clear envs/test
source envs/test/bin/activate
deactivate
On Windows, the CMD command tree conveniently prints a directory tree:
tree /f>list.txt
ls -v > list.txt
a = glob.glob('*')
print(a)
:: ['Audio', 'batch_run.py', 'Data', 'Deep3DFaceReconstruction', 'pipeline.jpg', 'readme.md', 'render-to-video', 'requirements.txt', 'requirements_colab.txt', 'test.py']
ls | wc -l
This is a file format (ff) encoding issue; open the file in vim and change the format with:
:set ff=unix
tensor1.item()
To convert it to a string:
str(tensor1.item())
Example: an image read with numpy, going from (3, 256, 256) to (1, 3, 256, 256)
img2 = torch.from_numpy(img2).float().unsqueeze(0).cuda()
results = list(map(int, results))
This also strips surrounding escape characters such as '\n' and '\t' from the strings.
a = [1, 2, 3]
log.write(' '.join(map(str, a)))
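Both conversions in one runnable snippet; note that int() tolerates the surrounding whitespace mentioned above:

```python
results = list(map(int, ['1\n', ' 2', '3\t']))  # int() strips '\n', '\t', spaces
joined = ' '.join(map(str, [1, 2, 3]))          # back to a space-separated string
```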
arr_mean = np.mean(array)  # mean
# axis=0: mean down each column, one row remains; axis=1: mean across each row, one column remains
arr_mean = np.mean(array, axis=0)
arr_var = np.var(array)  # variance
arr_std = np.std(array, ddof=1)  # standard deviation
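A small check of the axis behavior described above, on a 2x2 toy array:

```python
import numpy as np

array = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
col_mean = np.mean(array, axis=0)  # down the columns: one row remains
row_mean = np.mean(array, axis=1)  # across the rows: one column remains
```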
dist = np.linalg.norm(vec1-vec2)
distance= np.sqrt(np.sum(np.square(vec1-vec2)))
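The two distance formulas agree; a quick sanity check with made-up vectors:

```python
import numpy as np

vec1 = np.array([1.0, 2.0, 2.0])
vec2 = np.zeros(3)
dist = np.linalg.norm(vec1 - vec2)                  # Euclidean norm -> 3.0
distance = np.sqrt(np.sum(np.square(vec1 - vec2)))  # same thing by hand
```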
pip freeze > ./requirements.txt
# if pip freeze creates some weird path instead of the package version
pip list --format=freeze > requirements.txt
sudo apt-get install p7zip-full
7za x filename.7z
Remember that this code must go at the very top of the file.
sys.path.append('..')  # add the parent directory
sys.path.append('code/')  # add the code/ subdirectory
# if a file in the current directory still can't be found, add the absolute path:
import sys
sys.path.append("/home/tiger/bytegnn/python/bytegnn/ros_data")
# arguments in order: image / text / text origin (bottom-left corner of the string) / font / font scale / color / thickness
cv2.putText(I,'there 0 error(s):',(50,150),cv2.FONT_HERSHEY_COMPLEX,6,(0,0,255),25)
See PIL_draw.py for details:
fontsize = 8
font = ImageFont.truetype("arial.ttf", fontsize)
draw.text((x, y), str(cnt), fill=(0, 255, 255), font=font)  # use ImageDraw's built-in function to write text on the image
img = cv2.imread(fengmian)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # cv2 loads in BGR order by default
h, w, _ = img.shape  # returns height, width, and channel count (unused here, so discarded)
cv2.imwrite('test2.jpg', img[..., ::-1])
Or write it this way; the idea is to reverse the order of the three color channels:
img[:, :, ::-1]
cv2.imread(img, -1)
cropped = img[0:128, 0:512]  # crop with [y0:y1, x0:x1]: rows (height) first, then columns (width)
img =cv2.imread(file_path[i])
img = cv2.hconcat([img, img, img])  # horizontal concatenation
img = cv2.vconcat([img, img, img])  # vertical concatenation
np.concatenate((img, img, img), axis=1)
axis=0 stacks vertically (along rows), axis=1 stacks horizontally (along columns); note the parentheses: the arrays are passed as a tuple
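Shapes make the axis choice concrete; the toy image size here is arbitrary:

```python
import numpy as np

img = np.zeros((2, 3, 3))  # (height, width, channels)
vert = np.concatenate((img, img), axis=0)   # stacks vertically: height doubles
horiz = np.concatenate((img, img), axis=1)  # stacks horizontally: width doubles
```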
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
h, w, c = test_img.shape  # note: shape is (height, width, channels)
video_writer = cv2.VideoWriter(save_name, fourcc, fps, (w, h))  # VideoWriter wants (width, height)
for img in imgs:
    if img[-3:] != 'jpg' and img[-3:] != 'png':
        continue
    imgname = os.path.join(imgs_dir, img)
    frame = cv2.imread(imgname, -1)
    video_writer.write(frame)
video_writer.release()
# cv2: read an image whose path contains non-ASCII characters
img = cv2.imdecode(np.fromfile(image, dtype=np.uint8), -1)
import os
os.system("cmd")
import random
x = random.randint(0,9)
How to open a UTF-8 encoded CSV file in Excel:
-
Open Excel
-
Choose Data -> From Text
-
Select the CSV file; the Text Import Wizard appears
-
Choose "Delimited", then Next
-
Check "Comma", uncheck "Tab", then Next and Finish
-
In the "Import Data" dialog, just click OK
data = pd.read_csv('sample.csv', encoding='GB18030')
L = ['Adam', 'Lisa', 'Bart', 'Paul', 'a', 'b']
print(L[::2])
output: ['Adam', 'Bart', 'a']
print(L[1::2])
output: ['Lisa', 'Paul', 'b']
get_landmark[[52, 53, 54, 55, 56, 61, 66, 88]]
dictionary = {}
i = 0
for filesName in filesNames:
    dictionary[filesName] = '{:0>4d}'.format(i)  # zero-pad to width 4
    i += 1
np.save("name_diction.npy", dictionary)
read_dic = np.load('name_diction.npy', allow_pickle=True).item()
print(read_dic)
findstr /s /i "string" *.*
The command above searches for "string" in every file in the current directory and all of its subdirectories.
import pandas as pd
df = pd.read_csv('board.csv')
print(len(df))
print(df.head())
# read the title of dataFrame
header = df.columns.values.tolist()
print(header)
for i in range(len(df)):
    print(df[header[0]][i])
    print(df[header[1]][i])
    print(df[header[2]][i])
    print(df[header[3]][i])
    print(df[header[4]][i])
mask[:, :, np.newaxis]
np.expand_dims(x, 2)
# add leading axes for broadcasting over height and width
self.IMG_MEAN[np.newaxis, np.newaxis, :]
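All three axis-insertion idioms with their resulting shapes; the array sizes are arbitrary:

```python
import numpy as np

mask = np.zeros((4, 5))
m1 = mask[:, :, np.newaxis]   # (4, 5) -> (4, 5, 1)
m2 = np.expand_dims(mask, 2)  # equivalent to the line above
mean = np.array([0.1, 0.2, 0.3])
m3 = mean[np.newaxis, np.newaxis, :]  # (3,) -> (1, 1, 3), broadcastable over H and W
```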
line.strip().split()
strip() removes the specified characters (whitespace by default) from both ends of a string. Note: it only trims the head and tail; it cannot remove characters from the middle.
split() with no argument splits on any whitespace, including spaces, newlines (\n), and tabs (\t).
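Both behaviors in one snippet:

```python
line = '  hello\tworld \n'
parts = line.strip().split()  # strip trims only the ends; split eats any whitespace
```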
for i in `ls`;do if [ -d $i/.ipynb_checkpoints ];then echo $i; fi; done
rm -rf M030_angry_3_003/.ipynb_checkpoints/
ffmpeg -i 4_concate.avi -i all.mp3 -c:v copy -c:a aac -strict experimental output.mp4
Replace the audio track in a video with another audio file:
ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac -strict experimental -map 0:v:0 -map 1:a:0 output.mp4
1. Create a text file list.txt listing the audio files to concatenate, in this format:
file '1.mp3'
file '2.mp3'
2. You can generate this list with (then prefix each line with file '...'):
ls *.mp3 > list.txt
3. Concatenate:
ffmpeg -f concat -i list.txt -c copy 007.mp3
ffmpeg -i input.mp3 output.wav
ffmpeg -i input.m4a -acodec pcm_s16le -ac 1 -ar 8000 output.wav
ffmpeg -i input.mp4 output.wav
ffmpeg -i video.avi frames_%05d.jpg
ffmpeg -i M030_angry_3_001/fake_B_%06d.jpg -vcodec mpeg4 test.avi
ffmpeg -i M030_angry_3_001/fake_B_%06d.jpg -i audio.mp3 -vcodec mpeg4 test.avi
When writing the output, you can swap the encoder:
-vcodec libx264 output.mp4
Full version:
ffmpeg -y -r 25 -i M030_angry_3_001/fake_B_%06d.jpg -i audio.mp3 -vcodec mpeg4 test.avi
-y    overwrite the output file
-r 25    frame rate
-i M030_angry_3_001/fake_B_%06d.jpg    the image sequence to combine
-i audio.mp3    the audio track to add
mkdir -p
with open(name + '.pkl', 'wb') as f:
    pickle.dump(data, f)  # data can also be a list
with open(file_path, 'rb') as f:
    file = pickle.load(f)
with open(json_file, 'r') as f:
    info = json.load(f)
a=np.arange(5)
np.save('test.npy',a)
a=np.load('test.npy')
import scipy.io.wavfile as wavfile
sample_rate,signal=wavfile.read('stop.wav')
Sorted lexicographically, 1, 2, 10 comes out as 1, 10, 2; human (natural) order is 1, 2, 10. For that, use:
ls -lv
With action='store_true', passing the flag sets the value to True; when the flag is absent, the default is False. store_false is the opposite: default True, set to False when the flag is given.
parser.add_argument('--lstm', action='store_true')
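A minimal demonstration of the default/flag behavior, parsing argument lists directly:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--lstm', action='store_true')  # absent -> False, present -> True
with_flag = parser.parse_args(['--lstm']).lstm
without_flag = parser.parse_args([]).lstm
```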
cat /usr/local/cuda/version.txt
nvcc --version
cat /proc/driver/nvidia/version
To mount a disk larger than 2 TB, use GPT partitioning; see the second item at https://www.thegeekstuff.com/2012/08/2tb-gtp-parted/
For disks under 2 TB (non-server), this is enough: https://cloud.tencent.com/developer/article/1746763
Auto-mount on reboot: https://www.jianshu.com/p/336758411dbf
Multi-user anaconda installation and adding users: https://blog.csdn.net/codedancing/article/details/103936542
Installing CUDA 10.2 and cuDNN on Ubuntu 18.04: https://blog.csdn.net/ywdll/article/details/103619130
Error: Failed to initialize NVML: Driver/library version mismatch. The CUDA driver and the GPU kernel module versions don't match:
Ubuntu: create/delete/switch users, change passwords, grant admin rights: https://blog.csdn.net/superjunenaruto/article/details/110100781
sudo chmod 777 xxx    (gives everyone read, write, and execute permission)
reboot    # requires root
shutdown -r now
Fix for a newly created user's shell prompt showing only $ with no username or path: https://blog.csdn.net/Du_wood/article/details/84914759?utm_medium=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromMachineLearnPai2%7Edefault-1.control&depth_1-utm_source=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromMachineLearnPai2%7Edefault-1.control
a = torch.from_numpy()    # numpy floats come in as 64-bit
a.float()    # convert to 32-bit
torch.Tensor()    # 32-bit
torch.Tensor is an alias for torch.FloatTensor (32-bit)
torch.tensor infers the dtype from its input.
torch.Tensor.expand(shape)    # broadcast by repeating along size-1 dims
class My_mse_loss(nn.Module):
    def __init__(self):
        super(My_mse_loss, self).__init__()
        # size_average/reduce are deprecated; reduction='none' keeps per-element losses
        self.mse_loss_fn = nn.MSELoss(reduction='none')
        self.weight = np.ones([136])
        self.weight[96:] = self.weight[96:] + 1
        self.weight = torch.Tensor(self.weight).cuda()

    def forward(self, infer_lm, gt_lm):
        loss = self.mse_loss_fn(infer_lm, gt_lm)
        shape = gt_lm.shape
        self.weight = self.weight.expand(shape)
        loss_final = loss * self.weight
        loss_final = torch.mean(loss_final)
        return loss_final
for parameters in self.generator.parameters():
    print(parameters)
    break
sudo useradd -m username -d /export4/username -s /bin/bash
userdel username
find ./ -name "*fsgan*"    # quote the pattern so the shell doesn't expand it first
list = [9, 12, 88, 14, 25]
max_list = max(list)  # the maximum value
max_index = list.index(max(list))  # index of the maximum
# for the minimum, swap max for min
FLOPs denotes the total number of floating point operations of the neural network in a forward propagation.
FLOPS denotes floating point operations per second.
srun --pty --partition=1080ti-short --gres=gpu:1 --time=0-04:00:00 /bin/bash
cd /home/root/
cd ./.pycharm_helpers/
rm -rf check_all_test_suite.py
tar -xvzf helpers.tar.gz
Alternatively: find the skeletons folder under the install directory, e.g. C:\Program Files\JetBrains\PyCharm 2017.2.3, delete it, restart, and reconfigure the remote environment.
import json
with open('my_dict.json', 'w') as f:
    json.dump(my_dict, f)
# elsewhere...
with open('my_dict.json') as f:
    my_dict = json.load(f)
sudo docker cp mysql-5.1.32-linux-x86_64-icc-glibc23.tar.gz xenodochial_mcnulty:/home
import matplotlib.pyplot as plt
# line chart
x = [0, 0.2, 0.4, 0.6, 0.8]  # x coordinates of the points
k1 = [5.86, 7.03, 10.77, 13.55, 15.98]  # y values of line 1
k2 = [6.16, 8.59, 11.92, 14.43, 17.19]
plt.plot(x, k1, 's-', color='r', label="with cache")  # s-: square markers
plt.plot(x, k2, 'o-', color='g', label="without cache")  # o-: circle markers
plt.xlabel("p (probability)")  # x-axis label
plt.ylabel("latency")  # y-axis label
plt.legend(loc="best")  # legend
# plt.show()
plt.savefig('test.png')
plt.imshow(img)
plt.show()
getconf -a | grep CACHE
cat /proc/meminfo
ps -p <PID> -o user
conda create -n py37 python=3.7
conda activate py37
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
conda install -c nvidia cuda
conda clean -y --all  # remove all package tarballs and caches
du -s * | sort -hr | head    # the 10 largest entries; du -s * | sort -hr | tail    # the 10 smallest
https://pybind11.readthedocs.io/en/latest/classes.html
g++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) example.cpp -o example$(python3-config --extension-suffix)
find ./ -name "mytest.*"
You can build your conda environment from the provided environment.yml. Feel free to change the env name in the file.
conda env create -f environment.yml
or
conda env update --name myenv --file local.yml --prune
// prune uninstalls dependencies which were removed from local.yml
squeue --me
scancel job_id
Holds submodules in a list.
self.blocks = nn.ModuleList(self.blocks)
RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
Using floor division (//) floors the result to the largest integer not exceeding it. Use torch.true_divide(dividend, divisor) or numpy.true_divide(dividend, divisor) instead.
For example: 3/4 = torch.true_divide(3, 4)
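The same distinction in plain Python, which true_divide mirrors:

```python
q_floor = 3 // 4   # floor division -> 0
q_true = 3 / 4     # true division -> 0.75
q_neg = -3 // 4    # floors toward negative infinity -> -1
```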
>>> import ast
>>> ast.literal_eval("{'muffin' : 'lolz', 'foo' : 'kitty'}")
{'muffin': 'lolz', 'foo': 'kitty'}
list(model.modules())
tensorboard --logdir=xmc_test_norm/ --port 8000 --bind_all
ssh -L 16006:127.0.0.1:6006 user@hostname
# Use SSH to forward the server's port 6006 to your own machine: 16006 is the port on your machine; 6006 is the port tensorboard uses on the server.
# https://blog.csdn.net/xg123321123/article/details/81153735
ps -p 2711389 -o cmd=
tail -n 1 <filename>
wget -nc -i file_list.txt
mv source_directory/!(*.tar) destination_directory/    # requires bash extglob (shopt -s extglob)
conda env create -n ENVNAME --file requirements.yml
conda env update --file requirements.yml --prune
ps -p [pid] -o args=
export PYTHONPATH=$PYTHONPATH:$(pwd)
find . -type f -name "*cuda_allocator*"