
1. The scenario

On my box I have two disks, each from a different storage array.

Using multipath, I have these disks:
/dev/dm-15 from storage array A (in use)
/dev/dm-28 from storage array B (not in use yet)

I have a Volume Group named vg_gfs00 on /dev/dm-15.
I have a Logical Volume named lv00 on the Volume Group vg_gfs00.

Here are my pvs, vgs, and lvs outputs:

PVS output
# pvs
PV         VG          Fmt  Attr PSize   PFree
/dev/dm-15 vg_gfs00    lvm2 a-   278,98G 39,95G
/dev/sda3  rootvg      lvm2 a-   131,47G 86,19G

VGS output
# vgs
VG          #PV #LV #SN Attr   VSize   VFree
rootvg        1  11   0 wz--n- 131,47G 86,19G
vg_gfs00      8   7   0 wz--n- 278,98G 39,95G

LVS output
# lvs
LV          VG          Attr   LSize
homelv      rootvg      -wi-ao 512,00M
optlv       rootvg      -wi-ao   1,00G
rootlv      rootvg      -wi-ao   1,00G
tmplv       rootvg      -wi-ao   2,00G
usrlv       rootvg      -wi-ao   4,00G
varloglv    rootvg      -wi-ao   8,00G
varlv       rootvg      -wi-ao   4,00G
lv00        vg_gfs00    -wi-ao 239,03G

2. Objective

Mirror the filesystem's logical volume for high availability, so the data survives if storage array A or B crashes.

3. Implementing

a. Create a PV on the new disk (/dev/dm-28)

# pvcreate /dev/dm-28

b. Add the new PV to vg_gfs00

# vgextend vg_gfs00 /dev/dm-28

c. Create a mirror of the Logical Volume

# lvconvert -m1 vg_gfs00/lv00 /dev/dm-28
vg_gfs00/lv00: Converted: 17,1%
vg_gfs00/lv00: Converted: 34,1%
vg_gfs00/lv00: Converted: 51,0%
vg_gfs00/lv00: Converted: 68,4%
vg_gfs00/lv00: Converted: 85,5%
vg_gfs00/lv00: Converted: 100,0%

Note: if your LV spans more than one PV, you can specify all the target PVs.
Example:

# lvs -a -o +devices | grep lv00
lv00          vg_gfs00 -wi-ao  10,00G     /dev/dm-15(0)
lv00          vg_gfs00 -wi-ao  10,00G     /dev/dm-16(123490)

In this case dm-15 and dm-16 are on my storage array A, so I need two disks on storage array B, for example dm-28 and dm-29.
To convert, I use this:
# lvconvert -m1 vg_gfs00/lv00 /dev/dm-28 /dev/dm-29

d. Check the mirror with lvs -a -o +devices

# lvs -a -o +devices | grep lv00
lv00            vg_gfs00 mwi-ao  10,00G       lv00_mlog 100,00         lv00_mimage_0(0),lv00_mimage_1(0)
[lv00_mimage_0] vg_gfs00 iwi-ao  10,00G                                /dev/dm-15(10242)
[lv00_mimage_1] vg_gfs00 iwi-ao  10,00G                                /dev/dm-28(14086)

Looking at the details:

lv00 uses the mirror log lv00_mlog and is 100% synced across lv00_mimage_0 and lv00_mimage_1.
lv00_mimage_0 is stored on /dev/dm-15.
lv00_mimage_1 is stored on /dev/dm-28.

Looking with a plain lvs:

# lvs
LV          VG          Attr   LSize  Origin Snap%  Move Log              Copy%  Convert
homelv      rootvg      -wi-ao 512,00M
optlv       rootvg      -wi-ao   1,00G
rootlv      rootvg      -wi-ao   1,00G
tmplv       rootvg      -wi-ao   2,00G
usrlv       rootvg      -wi-ao   4,00G
varloglv    rootvg      -wi-ao   8,00G
varlv       rootvg      -wi-ao   4,00G
lv00        vg_gfs00    mwi-ao 239,03G              lv00_mlog 100,00

If Copy% is not 100%, either the mirror is still syncing or one of the disks has a problem.
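Rather than eyeballing the lvs output, the Copy% value can be checked in a small script. A minimal sketch, assuming the VG/LV names from the example above; `check_copy_percent` is a hypothetical helper that receives the value printed by `lvs --noheadings -o copy_percent vg_gfs00/lv00`:

```shell
# Check the mirror sync state for vg_gfs00/lv00 (names from the example above).
# check_copy_percent takes the copy_percent value as printed by:
#   lvs --noheadings -o copy_percent vg_gfs00/lv00
check_copy_percent() {
    # Trim whitespace and the decimal part; depending on locale,
    # lvs may print "100,00" or "100.00".
    pct=$(printf '%s' "$1" | tr -d ' ' | cut -d. -f1 | cut -d, -f1)
    if [ "$pct" = "100" ]; then
        echo "mirror in sync"
    else
        echo "mirror at ${pct}% - still syncing or a disk has a problem"
    fi
}

check_copy_percent "  100,00"
check_copy_percent "  85,50"
```

Run from cron, something like this gives a cheap alert when the mirror is degraded or resyncing.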

4. Documentation:

* http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Logical_Volume_Manager/
– 2.3.3. Mirrored Logical Volumes
– 4.4.1.3. Creating Mirrored Volumes
– 6.3. Recovering from LVM Mirror Failure

1 – Creating the Volume Group

# pvs
PV         VG     Fmt  Attr PSize  PFree
/dev/sda3  rootvg lvm2 a-   62.75G 38.97G
/dev/dm-10        lvm2 --   70.00G 70.00G
/dev/dm-13        lvm2 --   70.00G 70.00G
/dev/dm-14        lvm2 --   70.00G 70.00G
/dev/dm-9         lvm2 --   70.00G 70.00G

# vgcreate vg_cluster00 /dev/dm-10 /dev/dm-13  /dev/dm-14 /dev/dm-9
Volume group "vg_cluster00" successfully created

# pvs
PV         VG          Fmt  Attr PSize  PFree
/dev/dm-10 vg_cluster00 lvm2 a-   70.00G 70.00G
/dev/dm-13 vg_cluster00 lvm2 a-   70.00G 70.00G
/dev/dm-14 vg_cluster00 lvm2 a-   70.00G 70.00G
/dev/dm-9  vg_cluster00 lvm2 a-   70.00G 70.00G
/dev/sda3  rootvg      lvm2 a-   62.75G 38.97G

# vgs
VG          #PV #LV #SN Attr   VSize   VFree
rootvg        1   9   0 wz--n-  62.75G  38.97G
vg_cluster00   4   0   0 wz--n- 279.98G 279.98G

2 – Creating the Logical Volumes

# lvcreate -L180G vg_cluster00 -n lvuserapp

3 – Making the Cluster

Personally, I like system-config-cluster.

This is my simple /etc/cluster/cluster.conf

<?xml version="1.0"?>
<cluster alias="CLUSTER00" config_version="23" name="CLUSTER00">
<fence_daemon post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="node001" nodeid="1" votes="1">
<fence>
<method name="1"/>
</fence>
</clusternode>
<clusternode name="node003" nodeid="3" votes="1">
<fence>
<method name="1"/>
</fence>
</clusternode>
<clusternode name="node004" nodeid="4" votes="1">
<fence>
<method name="1"/>
</fence>
</clusternode>
<clusternode name="node002" nodeid="2" votes="1">
<fence>
<method name="1"/>
</fence>
</clusternode>
</clusternodes>
<fencedevices/>
</cluster>

4 – Making GFS2 filesystems

# mkfs.gfs2 -p lock_dlm -t CLUSTER00:lvuserapp -j 8 /dev/vg_cluster00/lvuserapp

5 – Mounting GFS2 filesystems

Put this in the /etc/fstab file:

/dev/vg_cluster00/lvuserapp      /home/userapp           gfs2    defaults       0 0

6 – Start the cluster services

Note: for a complete startup, start the services on all nodes.

service cman start
service rgmanager start

7 – Check the nodes

# cman_tool nodes
Node  Sts   Inc   Joined               Name
1   M    196   2009-04-09 11:57:16  node001
2   M    216   2009-04-09 11:57:32  node002
3   M    212   2009-04-09 11:58:02  node003
4   M    214   2009-04-09 11:58:32  node004
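The membership check can also be scripted. A minimal sketch that counts nodes in state M (member) from `cman_tool nodes` output, using the sample output above as input:

```shell
# Count joined members in `cman_tool nodes` output.
# count_members reads the command output on stdin and prints the number
# of nodes whose Sts column is M (member), skipping the header line.
count_members() {
    awk 'NR > 1 && $2 == "M" { n++ } END { print n+0 }'
}

# Sample output from the post above; on a live node you would instead run:
#   cman_tool nodes | count_members
sample='Node  Sts   Inc   Joined               Name
1   M    196   2009-04-09 11:57:16  node001
2   M    216   2009-04-09 11:57:32  node002
3   M    212   2009-04-09 11:58:02  node003
4   M    214   2009-04-09 11:58:32  node004'

printf '%s\n' "$sample" | count_members
```

Comparing the count against the expected number of cluster nodes (4 here) makes a quick health check.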

8 – Mounting the filesystems

Mount the filesystem on all nodes:

# mount /home/userapp

9 – Testing read/write of files on the nodes

# touch /home/userapp/teste.txt

Check that this file exists on all nodes.


Installing IBM lin_tape on Linux (Red Hat Enterprise Linux)

Steps to install lin_tape and lin_taped

1. Download the lin_tape source and lin_taped from ftp://ftp.software.ibm.com/storage/devdrvr/Linux/lin_tape_source-lin_taped/

lin_tape-X.YY.Z-W.src.rpm.bin
lin_taped-X.YY.Z-rhel[V].arch.rpm.bin

Examples for version 1.20.0-1:

lin_tape-1.20.0-1.src.rpm.bin
lin_taped-1.20.0-rhel4.x86_64.rpm.bin

2. Rebuild the lin_tape source RPM

# rpmbuild --rebuild lin_tape-X.YY.Z-W.src.rpm.bin


To extract an RPM without installing it, you first need rpm2cpio.

Use:

# mkdir /tmp/package
# cp package.rpm /tmp/package
# cd /tmp/package
# rpm2cpio package.rpm | cpio -idv


Checking the Linux (Red Hat) OS install date – unofficial, but usable

The package basesystem contains no files

# rpm -ql basesystem
(contains no files)

But its description is very clear:
Basesystem defines the components of a basic Red Hat Linux system (for
example, the package installation order to use during bootstrapping).
Basesystem should be the first package installed on a system and it
should never be removed.

You can check the Install Date from this package:

# rpm -qi basesystem
Name : basesystem Relocations: (not relocatable)
Version : 8.0 Vendor: Red Hat, Inc.
Release : 4 Build Date: Wed 22 Sep 2004 07:01:44 PM BRT
Install Date: Sat 17 Jan 2009 05:28:37 PM BRST Build Host: tweety.build.redhat.com
Group : System Environment/Base Source RPM: basesystem-8.0-4.src.rpm
Size : 0 License: public domain
Signature : DSA/SHA1, Wed 05 Jan 2005 09:03:37 PM BRST, Key ID 219180cddb42a60e
Packager : Red Hat, Inc.
Summary : The skeleton package which defines a simple Red Hat Linux system.
Description :
Basesystem defines the components of a basic Red Hat Linux system (for
example, the package installation order to use during bootstrapping).
Basesystem should be the first package installed on a system and it
should never be removed.
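If you only want the date itself, the Install Date field can be extracted from the `rpm -qi` output. A minimal sketch; the sed pattern assumes the field layout shown above, where Install Date and Build Host share one line:

```shell
# Extract the Install Date value from `rpm -qi basesystem` output.
# install_date reads the output on stdin; on a live system you would run:
#   rpm -qi basesystem | install_date
install_date() {
    sed -n 's/^Install Date: \(.*\) Build Host.*/\1/p'
}

# Sample line taken from the output above.
sample='Install Date: Sat 17 Jan 2009 05:28:37 PM BRST Build Host: tweety.build.redhat.com'
printf '%s\n' "$sample" | install_date
```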


For Volume Group data redistribution across the disks (striping), use the reorgvg command (AIX):

# reorgvg myvg

This command can take a long time; run it in the background:

# nohup reorgvg myvg &


This post shows the installation of MegaCli (the ServeRAID MR10k SAS/SATA controller client) and two basic commands.

1. Installation

Get the ServeRAID CD and extract the file MegaCli-X.XX.XX-Y.i386.rpm.

Install it using the rpm command:


# rpm -ivh rpms/MegaCli-3.00.07-1.i386.rpm
Preparing...                ########################################### [100%]
1:MegaCli                ########################################### [100%]

List the installed files with the rpm command:


# rpm -ql MegaCli
/opt/MegaRAID/MegaCli/MegaCli
/opt/MegaRAID/MegaCli/MegaCli64

2. Basic commands

Some time ago I wrote a post here about how to perform a pvmove.

A common question comes up when you try to pvmove from a bigger disk to smaller disks, as in the example below:

# pvs
PV         VG     Fmt  Attr PSize   PFree
/dev/sda2  rootvg lvm2 a-    68.12G  46.09G
/dev/sdh1  datavg lvm2 a-   100.00G   5.36G
/dev/sdj1  datavg lvm2 a-   200.00G  49.16G
/dev/sdk1  datavg lvm2 a-   200.00G  49.37G
/dev/sdl1  datavg lvm2 a-   200.00G  26.00G
/dev/sdm1  datavg lvm2 a-   200.00G  24.94G
/dev/sdn1  datavg lvm2 a-   200.00G  51.54G
/dev/sdo1  datavg lvm2 a-   200.00G  60.00G

In the example below I am trying to move the PV /dev/sdh1 to the other PVs (/dev/sdj1, /dev/sdk1, /dev/sdl1, /dev/sdm1, /dev/sdn1, and /dev/sdo1), but when I run the command it returns:

# pvmove /dev/sdh1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1
Insufficient suitable contiguous allocatable extents for logical volume pvmove0: 17920 more required
Unable to allocate temporary LV for pvmove.

This happens because there are not enough contiguous free extents to hold the whole volume at once.
For this, the pvmove command supports moving in parts, so we do it like this:

pvmove /dev/sdh1:1-17920 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1

This moves up to the specified extent range, which does fit, and we continue moving the rest in parts.

Tip: after that, you can try to move the whole remainder at once; in my experience, by then it usually fits in the available space.

pvmove /dev/sdh1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1
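The piece-by-piece moves can be scripted. A minimal sketch that only computes the extent ranges for each pvmove call; it does not run pvmove itself. The numbers are illustrative: 17920 extents per chunk comes from the error message above, and the total of 25599 extents is a made-up example:

```shell
# Print the extent ranges for moving a PV in chunks.
# Usage: chunk_ranges TOTAL_EXTENTS CHUNK_SIZE
# Each output line is a START-END range usable as /dev/sdh1:START-END.
chunk_ranges() {
    total=$1 chunk=$2 start=0
    while [ "$start" -lt "$total" ]; do
        end=$((start + chunk - 1))
        # Clamp the last chunk to the final extent.
        [ "$end" -ge "$total" ] && end=$((total - 1))
        echo "${start}-${end}"
        start=$((end + 1))
    done
}

# Example: 25599 extents moved in chunks of 17920.
chunk_ranges 25599 17920
```

Each printed range would then be fed to a `pvmove /dev/sdh1:START-END ...` call like the one above.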

How to identify users with locked passwords

LINUX

Checking the user's status:
# passwd -S kairo
kairo LK 2008-10-31 0 99999 7 -1 (Password locked.)

Removing the user's lock:

# passwd -u kairo
Unlocking password for user kairo.
passwd: Success.

The status returns to normal:

# passwd -S kairo
kairo PS 2008-10-31 0 99999 7 -1 (Password set, MD5 crypt.)

Locking the user's password:

# passwd -l kairo
Locking password for user kairo.
passwd: Success
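Instead of checking users one by one with passwd -S, locked accounts can also be spotted directly in /etc/shadow, where locking prefixes the password hash with one or two `!` characters. A minimal sketch over made-up sample lines (usernames and hashes are illustrative, and the hashes are shortened):

```shell
# List users whose password field in /etc/shadow starts with "!",
# which is how a locked password is marked.
locked_users() {
    awk -F: '$2 ~ /^!/ { print $1 }'
}

# Hypothetical sample /etc/shadow lines; on a live system you would run:
#   locked_users < /etc/shadow   (as root)
sample='root:$1$abc:14263:0:99999:7:::
kairo:!$1$def:14183:0:99999:7:::
app:!!:14183:0:99999:7:::'

printf '%s\n' "$sample" | locked_users
```

Here kairo and app would be reported as locked, while root would not.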

AIX

# lsuser kairo
kairo id=15000 pgrp=staff groups=staff,so home=/home/kairo shell=/usr/bin/ksh gecos=Kairo Araujo login=true su=true rlogin=true daemon=true admin=false sugroups=ALL admgroups= tpath=nosak ttys=ALL expires=0 auth1=SYSTEM auth2=NONE umask=22 registry=files SYSTEM=compat logintimes= loginretries=0 pwdwarntime=0 account_locked=false minage=0 maxage=0 maxexpired=-1 minalpha=0 minother=0 mindiff=0 maxrepeats=8 minlen=0 histexpire=0 histsize=0 pwdchecks= dictionlist= fsize=-1 cpu=-1 data=262144 stack=65536 core=2097151 rss=65536 nofiles=2000 fsize_hard=-1 time_last_login=1229521872 time_last_unsuccessful_login=1224871660 tty_last_login=/dev/pts/0 tty_last_unsuccessful_login=ssh host_last_login=myserver host_last_unsuccessful_login=127.0.0.1 unsuccessful_login_count=0 roles=

Check the "account_locked" attribute.

To manage this, I recommend using smitty:

smitty user