Linux

How to quickly find out which rpm package provides a command on Fedora Linux?

rpm -qf  `which xxx(command)`

change hostname permanently in Fedora 32

hostnamectl set-hostname dlp.server.world

free up cached memory:
sudo sync && sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches'

cannot open source file "GL/glu.h"
yum groupinstall "X Software Development"

copy files preserve permission:
cp -p

To calculate the sum of each row in file xxx.txt
awk '{for (i=1;i<=NF;i++) t+=$i;print t;t=0}' xxx.txt

change the size of the current window in vim:
:res +10    (or :res -10 to shrink)

bash to create 2d wham files:

cond=0
for ((i=5; i<=245; i+=20)); do
  for ((j=-150; j<=150; j+=50)); do
    rexinfo.pl -inx 1:100 -condsel $cond run,val1,val2 -dir gbrex_va_resd_dihe_new > data.r${i}d$j
    cond=$((cond + 1))
  done
done

filter and rank lines in amino_stat1.txt:
awk '{if (($3 >2100) && ($1 != $2)) {print FNR ":" $0 ":" $3/6000}}' amino_stat1.txt|sort -k 3 -rn

loop over the words in peptides.txt:
for word in $(cat peptides.txt); do echo $word; done

awk '{if ($3<10000) print $0}' tmp.txt

delete blank lines from file
awk 'NF' filename

only show files under current directory
find . -maxdepth 1 -type f -printf '%f\n'
for i in $(find . -maxdepth 1 -type f -printf '%f\n'); do echo $i; done

calculate the sum of one column

awk '{sum+=$3} END {print sum}' filename

bash loop:

zw=10

for i in $(seq 0 $zw ); do echo $i;done

read files into an array

array=(*.txt)

echo ${array[*]} 

${arr[*]} # All of the items in the array
${!arr[*]} # All of the indexes in the array
${#arr[*]} # Number of items in the array
${#arr[0]} # Length of item zero

fractions in bash

echo "scale=2;2.0/1.3"|bc

append to the last line of a file (on the same line, not a new line)

append=xxx(contents to add)

FILE=yyy(input file)

echo "$(cat $FILE)$append" > $FILE

delete last line from a file

sed -i '$ d' filename

in perl, open a file and skip lines that start with #

while (<INP>) {

    (/^#/) && (next);    # if the line starts with #, skip to the next one

    ....

}

rsync -avz --exclude='*dcd*' sun:MD/ETHC16 .

float calculation in bash

echo "scale=4;1/2"|bc

delete some files under one directory:

find . \( -name '*ene*' -o -name '*out*' -o -name '*dcd*' -o -name '*log*' \) -exec rm -rf {} \;

 

useradd xxx

usermod -a -G chemistry xxx    (append chemistry to the user's supplementary groups)

usermod -g chemistry xxx    (make chemistry the user's primary group)

force the user to change their password the first time they log in:

chage -d 0 username

 

 

Rocks set path:

Sync Files across Cluster (411)

411 is used to keep files consistent across the cluster
Used for passwd/shadow/group etc
Can be used for other files also

/var/411/Files.mk
FILES_NOCOMMENT = /etc/passwd \
/etc/group \
/etc/shadow

$ rocks sync users

vi /var/411/Files.mk

add "FILES += /etc/bashrc"

then run: make clean; make

this will sync /etc/bashrc to all compute nodes.
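
Put together, a hedged sketch of the whole sequence on the frontend (the echo line is just a non-interactive way to add the same FILES entry that the vi step above adds):

echo 'FILES += /etc/bashrc' >> /var/411/Files.mk
cd /var/411
make clean
make    # re-publishes the 411 files so /etc/bashrc reaches the compute nodes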

rocks reinstall compute node:

rocks set host pxeboot compute-0-0 action=install

ssh compute-0-0 "shutdown -r now"

If the boot order has not been set to pxe first, you can force a pxe boot with the local keyboard, or by calling /boot/kickstart/cluster-kickstart-pxe on the local node.

Add two 4 TB hard drives on a Rocks cluster

use parted /dev/sdc and parted /dev/sdd to create two partitions

then pvcreate /dev/sdc1 /dev/sdd1

vgcreate vg1 /dev/sdc1 /dev/sdd1

lvcreate -L 7400G vg1 -n lv1

blkid to find out the UUID of the new volume

add "UUID=1785ad00-010e-444f-8513-88a1a27399f9 /export ext4 defaults 1 2" to /etc/fstab

Delete a logical volume:

unmount the logical volume first

lvremove /dev/vg1/lv1

then remove volume group by 

vgremove vg1
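
Put together (assuming the volume from the section above, mounted at /export):

umount /export
lvremove /dev/vg1/lv1
vgremove vg1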

 

How To Compile and Run NAMD (MPI Version)?

qsub -pe orte 2 mpi-ring.qsub
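
The note above only keeps the qsub call. A minimal SGE submission script sketch for an MPI build of NAMD; the job name, slot count, and input/output file names are placeholders, and it assumes namd2 was built against the same MPI that the orte parallel environment provides:

#!/bin/bash
#$ -N namd_mpi        # job name (placeholder)
#$ -pe orte 48        # request 48 slots from the orte parallel environment
#$ -cwd               # run from the submission directory
#$ -j y               # merge stderr into stdout
mpirun -np $NSLOTS namd2 input.conf > namd_output.log

Submit it with: qsub namd_mpi.qsub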

qconf -mrqs

{
name zwei
description "per user rule sets"
enabled TRUE
limit users zwei to slots=8
}

Before discussing what the SGE parallel environments look like, it is useful to describe the different types of parallel jobs that can be run.

Shared Memory. This is a type of parallel job that runs multiple threads or processes on a single multi-core machine. OpenMP programs are a type of shared-memory parallel program.
Distributed Memory. This type of parallel job runs multiple processes over multiple processors with communication between them. This can be on a single machine but is typically thought of as going across multiple machines. There are several methods of achieving this via a message passing protocol, but the most common, by far, is MPI (Message Passing Interface). There are several implementations of MPI, but we have standardized on OpenMPI and MVAPICH2. These integrate very well with SGE by
natively parsing the hosts file provided by SGE
yielding process control to SGE
Hybrid Shared/Distributed Memory. This type of parallel job uses shared-memory parallelism within a compute node but distributed-memory parallelism across compute nodes. There are several methods for achieving this but the most common is OpenMP/MPI.

https://wiki.chipp.ch/twiki/bin/view/CmsTier3/SunGridEngine#Configure_a_parallel_environment
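
For reference, a hedged sketch of what an OpenMPI-style parallel environment definition can look like (as printed by qconf -sp orte; the values here are illustrative defaults, not copied from this cluster):

pe_name            orte
slots              9999
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min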

 

Customizing ssh banner message

You can also create customized greetings for users connecting to your system through ssh. Note that this message is displayed before the actual login.

Create a text file that should appear as the greetings, for example, /etc/sshgreetings.txt.

$ cat /etc/sshgreetings.txt
###############################
#                             #
#      Welcome to Machine1    #
#                             #
###############################

Then edit /etc/ssh/sshd_config as follows:

 

Banner /etc/sshgreetings.txt
restart sshd: /etc/init.d/sshd restart

if you see some *.h missing, try

yum whatprovides */*.h

after fresh install of laptop, delete /home/Downloads, then

ln -s ./work/Downloads Downloads

check the path of installed packages

rpm -ql xxx

check which package provide .so files

yum provides libfreetype.so.6

 

control output

2>&1 >output.log duplicates file handle 2 (standard error) onto whatever file handle 1 (standard output) currently points to, which at that point is still the terminal, and only then redirects standard output to output.log. So standard error keeps going to the terminal and only standard output ends up in the file. To send both streams to the log file, put the file redirection first: >output.log 2>&1.

2>&1 | tee output.log behaves differently: the pipe is set up before the redirection, so standard output already points at the pipe when 2>&1 duplicates standard error onto it. Both streams then flow into tee, which copies its standard input to its own standard output (like cat) and also to the file. So the two streams are combined and written to both the terminal and the file.

loophbond.sh 2>&1|tee output.log &
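
A small demonstration of why the order matters; the brace group is just a stand-in command that writes one line to each stream:

{ echo out; echo err >&2; } 2>&1 > stdout_only.log    # "err" still prints to the terminal; only "out" lands in the file
{ echo out; echo err >&2; } > both.log 2>&1           # both "out" and "err" land in both.log
{ echo out; echo err >&2; } 2>&1 | tee both_tee.log   # both lines go to the terminal and to both_tee.log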

fix /var 100% problem

rm -rf /var/log/*

On Linux, when rm is used to delete a large file but some process still has that file open and never closes its file handle, the kernel will not release the file's disk space. Eventually the disk fills to 100% and the whole system stops working normally. In this situation the disk usage reported by df and du will not match: df may show the disk at 100%, while du reports a much smaller total for the directories.

When this happens, it is almost certain that some large files are being held open by running programs even though the files have already been deleted; because the file handles were never closed, the kernel cannot reclaim the space they occupy.

So, how do you find out which deleted files are still held open and by which programs?

lsof -n | grep deleted
COMMAND     PID      USER   FD      TYPE             DEVICE        SIZE       NODE NAME
dd        31708      higkoo    1w      REG                8,2 5523705856     429590 /data/filetest (deleted)
The command lsof -n | grep deleted prints every open handle that still points at an already-deleted file; those handles are useless, and they are exactly why the disk space seems to have vanished.

Solution: kill -9 PID    (killing the offending process is enough to release the space)
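
Building on the lsof output above, a small sketch to see how much space the deleted-but-open files are still holding (it assumes the SIZE value is in the seventh column, as in the layout shown; the column position can differ between lsof versions):

lsof -n | awk '/deleted/ {sum += $7} END {printf "%.1f GB held by deleted files\n", sum/1024/1024/1024}'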


lsof `which httpd`                  //which process is using the apache executable
lsof /etc/passwd                    //which process has /etc/passwd open
lsof /dev/hda6                      //which process is using hda6
lsof /dev/cdrom                     //which process is using the CD-ROM drive
lsof -c sendmail                    //show the files used by sendmail processes
lsof -c courier -u ^zahn            //show files opened by processes whose names start with courier but that do not belong to user zahn
lsof -p 30297                       //show files opened by the process with PID 30297
lsof +D /tmp                        //show processes with files open under the /tmp directory (symbolic links are not listed)

lsof -u1000                         //show file usage by processes of the user with uid 1000
lsof -utony                         //show file usage by user tony's processes
lsof -u^tony                        //show file usage by processes not owned by user tony (^ negates the match)
lsof -i                             //show all open ports
lsof -i:80                          //show all processes with port 80 open
lsof -i -U                          //show all open ports and UNIX domain sockets
lsof -i UDP@www.akadia.com:123      //show processes with UDP connections to port 123 (ntp) on www.akadia.com
lsof -i tcp@ohaha.ks.edu.tw:ftp -r  //keep monitoring the current ftp connections (-r repeats forever until interrupted; +r repeats until no files are shown; the default refresh interval is 15 s)
lsof -i tcp@ohaha.ks.edu.tw:ftp -n  //-n keeps IP addresses from being resolved to hostnames (not used by default)

 

check process:

ps -auf

copy certain files by rsync

rsync -avz --include='*.txt' --exclude='*' xeon:work/MD/TOP100/* .

 

Re: how to disable core dumps completely?
Quote (megaloman): "Thanks marko, I've set Storage to none, but systemd-coredump still runs and takes 100% CPU for 10-15 seconds... not sure what it does."
Systemd doesn't completely control whether core dumps are made or not. It mainly determines where such dumps go, and whether they should take up space or not. It may prevent some user space core dumps, but not all.
With "Storage=none", they can still occur and are registered by journald, but they don't take up disk space.

Turning them off completely is mainly done with "ulimit". The subject is IMHO badly documented and somewhat confusing, since you have to take user privileges into account and some subsystems can override other subsystems' defaults.

Here is what I did:
in:
/etc/systemd/system.conf
DumpCore=no
#This can be overridden: AFAIK, only pertains to systemd units

in:
/etc/systemd/coredump.conf
Storage=none
#Core dumps are still made and registered in the journal, but not placed on disk.

Also in:
/etc/security/limits.conf

#<domain> <type> <item> <value>
* hard core 0

The above should prevent core dumps, and since it is a "hard" limit, non-root programs can't override it, though I suspect that kernel command line parameters, and therefore systemd, can still override it.

As I understand it, the above is equivalent to the old method of placing "ulimit -c 0" or "ulimit -H -c 0" in "$HOME/.bashrc", but it works globally, not just for the logged-in user.

There may still be some loopholes left that can generate core dumps, but the above should cover most cases.

A reboot is the easy way to ensure everything is working properly after setting the above.

(Added:
I think programs running as root or setuid programs can still generate core dumps even with the above settings)
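
After a reboot, a quick way to confirm the limit took effect for a normal user:

ulimit -c       # soft core-file size limit; should print 0
ulimit -H -c    # hard limit; should also print 0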
 

Third method: modify the source code so it accepts the upload type you want.

Connect over FTP, open the wp-includes/functions.php file, and search for 'zip' => 'application/zip'. On the line below it, add 'rar' => 'application/rar'. The same approach works for any other upload type you need to allow.
This method adds a new allowed upload file type. Advantage: the program accepts that type directly, without any other impact on the site. Drawback: the change has to be redone after every upgrade. This method is recommended.

du -sh *|sort -h

10.9.55.60 master

10.9.55.61  compute-0-0

10.9.55.62 compute-0-1

10.9.55.59 compute-0-2

qstat -f to check the states of queues

queuename                      qtype resv/used/tot. load_avg arch          states
---------------------------------------------------------------------------------
all.q@compute-0-0.local        BIP   0/0/48         0.01     linux-x64     E
---------------------------------------------------------------------------------
all.q@compute-0-1.local        BIP   0/0/48         0.00     linux-x64     E
---------------------------------------------------------------------------------
all.q@compute-0-2.local        BIP   0/0/48         0.02     linux-x64     E
 

qmod -cq all.q to kick the nodes out of "E" states.

 

rsync match pattern

rsync -avz --include="*/" --include="*_bridge_residence.txt" --exclude="*" xeon:backup/TOP100 .

 

ps2eps -f -l -B -s b0 -c -n morc16_ph8_angle.ps