Monday, December 17, 2007

vxfs

VERITAS Education http://us.training.veritas.com 800-327-2232 (option 2)
Copyright © 2002 VERITAS Software Corporation. All Rights Reserved. VERITAS, VERITAS Software, the VERITAS logo, and all other VERITAS product names and slogans are trademarks or
registered trademarks of VERITAS Software Corporation in the US and/or other countries. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of
their respective companies. Specifications and product offerings subject to change without notice. Printed in USA and the EU. March 2002.
VERITAS EDUCATION QUICK REFERENCE
VxFS Commands
SETTING UP A FILE SYSTEM
Action Command Line
Make a VxFS file system mkfs -F vxfs [generic_options] [-o vxfs_options]
char_device [size]
Mount a file system mount -F vxfs [generic_options] [-o vxfs_options]
block_device mount_point
Unmount a file system umount mount_point
Determine file system type fstyp [-v] block_device
Report free blocks/inodes df -F vxfs [generic_options] [-o s] mount_point
Check/repair a file system fsck -F vxfs [generic_options] [-y|Y] [-n|N]
character_device
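Read together, the table above maps onto a session like the following. This is an illustrative sketch only; the disk group, volume, size, and mount point are invented names:

```shell
# Hypothetical session; "datadg/vol01" and /data are made-up names.
# mkfs -F vxfs /dev/vx/rdsk/datadg/vol01 1024000
# mount -F vxfs /dev/vx/dsk/datadg/vol01 /data
# df -F vxfs -o s /data
# umount /data
# fsck -F vxfs -y /dev/vx/rdsk/datadg/vol01
```

Note that mkfs and fsck take the character (raw) device, while mount takes the block device.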
ONLINE ADMINISTRATION
Action Command Line
Resize a file system fsadm [-b newsize] [-r raw_device] mount_point
Dump a file system vxdump [options] mount_point
Restore a file system vxrestore [options] mount_point
Create a snapshot file
system
mount -F vxfs -o
snapof=source_block_device[,snapsize=size]
destination_block_device snap_mount_point
Create a storage
checkpoint fsckptadm [-nruv] create ckpt_name mount_point
List storage checkpoints fsckptadm [-clv] list mount_point
Remove a checkpoint fsckptadm [-sv] remove ckpt_name mount_point
Mount a checkpoint mount -F vxfs -o ckpt=ckpt_name pseudo_device
mount_point
Unmount a checkpoint umount mount_point
Change checkpoint
attributes
fsckptadm [-sv] set [nodata|nomount|remove]
ckpt_name
Upgrade the VxFS layout vxupgrade [-n new_version] [-r raw_device]
mount_point
Display layout version vxupgrade mount_point
DEFRAGMENTING A FILE SYSTEM
Action Command Line
Report on directory fragmentation fsadm -D mount_point
Report on extent fragmentation fsadm -E [-l largesize] mount_point
Defragment directories fsadm -d mount_point
Defragment extents fsadm -e mount_point
Reorganize a file system to
support files > 2GB fsadm -o largefiles mount_point
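Fragmentation reporting and defragmentation are commonly scheduled from cron. A sketch of root crontab entries, assuming a VxFS file system mounted at /data (the path, schedule, and report location are illustrative):

```
# Report extent fragmentation Sundays at 02:00, then defragment
# extents and directories at 03:00
0 2 * * 0 fsadm -E /data > /var/tmp/frag.report 2>&1
0 3 * * 0 fsadm -e -d /data
```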
INTENT LOGGING, I/O TYPES, AND CACHE ADVISORIES
Action Command Line
Change default logging behavior
mount -F vxfs [generic_options]
-o delaylog|tmplog|nodatainlog|blkclear
block_device mount_point
Change how VxFS handles
buffered I/O operations
mount -F vxfs [generic_options] -o
mincache=closesync|direct|dsync|unbuffered|
tmpcache block_device mount_point
Change how VxFS handles I/O
requests for files opened with
O_SYNC and O_DSYNC
mount -F vxfs [generic_options] -o
convosync=closesync|direct|dsync|unbuffered|
delay block_device mount_point
QUICK I/O
Action Command Line
Enable Quick I/O at mount mount -F vxfs -o qio mount_point
Disable Quick I/O mount -F vxfs -o noqio mount_point
Treat a file as a raw character
device filename::cdev:vxfs:
Create a Quick I/O file through a
symbolic link
qiomkfile [-h header_size] [-a] [-s size]
[-e|-r size] file
Get Quick I/O statistics qiostat [-i interval] [-c count] [-l] [-r] file
Enable cached QIO for all files in
a file system vxtunefs -s -o qio_cache_enable=1 mnt_point
Disable cached QIO for a file qioadmin -S filename=OFF mount_point

sun cluster 3.x

SUN(TM) CLUSTER QUICK REFERENCE
This reference provides quick lookup support for the Sun Cluster command-line interface. Many tasks require cluster preparation before
you issue these commands. For information about cluster preparation, refer to the appropriate cluster administration manual.
QUORUM ADMINISTRATION
Add a SCSI Quorum Device # clquorum add device
Add a NAS Quorum Device # clquorum add -t netapp_nas -p filer=nasdevicename,lun_id=IDnum \
nasdevice
Add a Quorum Server # clquorum add -t quorumserver -p qshost=IPaddress,port=portnumber \
quorumservername
Remove a Quorum Device # clquorum remove device
RESOURCE TYPE ADMINISTRATION
Register a Resource Type # clresourcetype register type
Remove a Resource Type # clresourcetype unregister type
RESOURCE GROUP ADMINISTRATION
Create a Failover Resource Group # clresourcegroup create group
Create a Scalable Resource Group # clresourcegroup create -S group
Bring Online All Resource Groups # clresourcegroup online +
Delete a Resource Group # clresourcegroup delete group
Delete a Resource Group and All of Its Resources # clresourcegroup delete -F group
Switch the Current Primary Node of a Resource Group # clresourcegroup switch -n nodename group
Move a Resource Group Into the UNMANAGED State # clresourcegroup unmanage group
Suspend Automatic Recovery of a Resource Group # clresourcegroup suspend group
Resume Automatic Recovery of a Resource Group # clresourcegroup resume group
Change a Resource Group Property # clresourcegroup set -p name=value group
Add a Node To a Resource Group # clresourcegroup add-node -n nodename group
Remove a Node From a Resource Group # clresourcegroup remove-node -n nodename group
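Putting the group commands above together, a hypothetical session that creates a failover group, adds a second node, brings the group online, and then moves it; "web-rg", "node1", and "node2" are invented names:

```shell
# Hypothetical session; group and node names are made up.
# clresourcegroup create web-rg
# clresourcegroup add-node -n node2 web-rg
# clresourcegroup online web-rg
# clresourcegroup switch -n node2 web-rg   (migrate the group to node2)
```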
RESOURCE ADMINISTRATION
Create a Logical Hostname Resource # clreslogicalhostname create -g group lh-resource
Create a Shared Address Resource # clressharedaddress create -g group sa-resource
Create a Resource # clresource create -g group -t type resource
Remove a Resource # clresource delete resource
Disable a Resource # clresource disable resource
Change a Single-Value Resource Property # clresource set -t type -p name=value +
Add a Value to a List of Property Values # clresource set -p name+=value resource
Existing values in the list are unchanged.
Create an HAStorage Plus Resource # clresource create -t HAStoragePlus -g group \
-p FileSystemMountPoints=mount-point-list \
-p Affinityon=true rs-hasp
Clear the STOP_FAILED Error Flag on a Resource # clresource clear -f STOP_FAILED resource
DEVICE ADMINISTRATION
Add a VxVM Device Group # cldevicegroup create -t vxvm -n node-list -p failback=true vxdevgrp
Remove a Device Group # cldevicegroup delete devgrp
Switch a Device Group to a New Node # cldevicegroup switch -n nodename devgrp
Bring Offline a Device Group # cldevicegroup offline devgrp
Update Device IDs for the Cluster # cldevice refresh diskname
MISCELLANEOUS ADMINISTRATION AND MONITORING
Add a Node to the Cluster
From the node to be added, which has access:
# clnode add -c clustername -n nodename \
-e endpoint1,endpoint2 -e endpoint3,endpoint4
(If the node does not have access to the cluster configuration, see the claccess(1CL) man page.)
Remove a Node From the Cluster
From the node to be removed, which is in noncluster mode and has access:
# clnode remove
(If the node does not have access to the cluster configuration, see the claccess(1CL) man page.)
Switch All Resource Groups and Device Groups Off of a Node # clnode evacuate nodename
Manage the Interconnect Interfaces # clinterconnect disable nodename:endpoint
These commands disable a cable so that # clinterconnect enable nodename:endpoint
maintenance can be performed, then enable the
same cable afterward.
Display the Status of All Cluster Components # cluster status
Display the Status of One Type of Cluster Component # command status
Display the Complete Cluster Configuration # cluster show
Display the Configuration of One Type of Cluster # command show
Component
List One Type of Cluster Component # command list
Display Sun Cluster Release and Version Information # clnode show-rev -v
This command lists the software versions
on the current node.
Map Node ID to Node Name # clnode show | grep nodename
Enable Disk Attribute Monitoring on All Cluster Disks # cltelemetryattribute enable -t disk rbyte.rate wbyte.rate \
read.rate write.rate
Disable Disk Attribute Monitoring on All Cluster Disks # cltelemetryattribute disable -t disk rbyte.rate wbyte.rate \
read.rate write.rate
SHUTTING DOWN AND BOOTING A CLUSTER
Shut Down the Entire Cluster
From one node: # cluster shutdown
Shut Down a Single Node # clnode evacuate
# shutdown
Boot a Single Node
(SPARC) ok boot
(x86) Press any key to reboot: keystroke
Reboot a Node Into Noncluster Mode
(SPARC) ok boot -x
(x86) At the Press any key to reboot prompt, boot interactively and add -x to the
multiboot command
© 2007 Sun Microsystems, Inc. Part No. 819-6811-11, August 2007

nis

NAME ypwhich - return name of NIS server or map master
SYNOPSIS
ypwhich [ -d domain ] [ [ -t ] -m [ mname ] | [ -Vn ] hostname ]
ypwhich -x
DESCRIPTION
ypwhich returns the name of the NIS server that supplies the
NIS name services to a NIS client, or which is the master
for a map. If invoked without arguments, it gives the NIS
server for the local machine. If hostname is specified,
that machine is queried to find out which NIS server it is
using.
OPTIONS
-d domain
Use domain instead of the default domain.
-t This option inhibits map nickname translation.
-m mname
Find the master NIS server for a map. No hostname can
be specified with -m. mname can be a mapname, or a
nickname for a map. When mname is omitted, produce a
list of available maps.
-x Display the map nickname translation table.
NAME ypcat - print values in a NIS database
SYNOPSIS
ypcat [ -kx ] [ -d ypdomain ] mname
DESCRIPTION
The ypcat command prints out values in the NIS name service
map specified by mname, which may be either a map name or a
map nickname. Since ypcat uses the NIS network services, no
NIS server is specified.
OPTIONS
-k Display the keys for those maps in which the values
are null or the key is not part of the value. None of
the maps derived from files that have an ASCII version
in /etc fall into this class.
-d ypdomain
Specify a domain other than the default domain.
-x Display map nicknames.
NAME ypmatch - print the value of one or more keys from a NIS map
SYNOPSIS
ypmatch [ -k ] [ -t ] [ -d domain ] key [ key ... ] mname
ypmatch -x
DESCRIPTION
ypmatch prints the values associated with one or more keys
from the NIS name services map specified by mname, which
may be either a map name or a map nickname.
Multiple keys can be specified; all keys will be searched
for in the same map. The keys must be the same case and
length. No pattern matching is available. If a key is not
matched, a diagnostic message is produced.
OPTIONS
The following options are supported:
-k Before printing the value of a key, print the key
itself, followed by a colon (:).
-d domain Specify a domain other than the default domain.
-x Display the map nickname table. This lists the nick-
names the command knows of, and indicates the map name
associated with each nickname.
OPERANDS
The following operand is supported:
mname The NIS name services map
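A hypothetical session tying the three commands together; the map names are the standard NIS maps, but the user names are invented:

```shell
# ypwhich                       which NIS server this client is bound to
# ypwhich -m passwd.byname      master server for the passwd map
# ypcat passwd                  dump the whole passwd map
# ypmatch -k alice bob passwd   values for two keys, each printed "key: value"
```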

solaris commands

SOLARIS
/ {/dev/vx/dsk/rootvol}
/export/home {/dev/vx/dsk/home}
/tmp {/dev/vx/dsk/swapvol}
/usr
/var

Solaris
/etc/passwd
/etc/shadow
/etc/group
2147483647
/etc/default/login
{CONSOLE=/dev/console}
60001 & 65534(nobody4)
60002 & 65534(nogroup)
Recover a lost root password:
boot cdrom -s
mkdir /tmp/a
mount /dev/dsk/c0t0d0s0 /tmp/a
vi /tmp/a/etc/shadow







useradd
userdel
logins
usermod
Solaris
hostid
admintool
top
sar
vmstat
iostat
dmesg
16TB


/dev/vx/dsk/swapvol
swap
swap -l
swap -a
Solaris
/etc/lp/interfaces/*
/usr/lib/lp/lpshut
/usr/lib/lp/lpsched
lp
lpr


lpstat
cancel
lprm


lpadmin -p pq
lpadmin -x pq
lpadmin -d pq
Solaris
/etc/hostname.*
/etc/inet/*
/etc/defaultrouter
/etc/inet/hosts
/etc/nsswitch.conf
ndd /dev/[tcp|ip] \?
in.routed
ifconfig -a
ifconfig hme0:1 IP up
BANNER @
/etc/default/telnetd
{/etc/system}
set pt_cnt = # {SYSV}
set npty = # {BSD}

{/etc/iu.ap}
ptsl 0 # ldterm ttcompat

halt
boot -r
176 {BSD}
3000 {SYSV}
rsh
/usr/lib/netsvc/yp/ypbind
Solaris
/etc/dfs/dfstab
/etc/dfs/sharetab
/etc/rmtab
1 TB
8000 TB {vxfs}

1 TB {current}
2 GB {<=2.5.1}
format>inquiry
prtvtoc
sub disk
Volume
Plex
disk group
vxfs
/dev/vx/dsk/rootdg
vxprint -l -g rootdg

vxdiskadd
vxprint -dl
vxdg rmdisk
vxassist move
vxdg init



vxdg deport
vxdg import

vxedit set
vxprint -vl
vxassist make
vxassist growto
vxassist shrinkto
vxedit rm
vxbootsetup

vxva
mkfs -M


vxassist mirror

vxassist make vol 100mb layout=raid5

ufsdump
ufsrestore
SOLARIS
/etc/init.d
/kernel/genunix
sysdef -i
vi /etc/system
reboot













modinfo
modload
modunload
sys-unconfig
prtconf
isainfo -kv
crash
truss
uname -imp
uname -r
who -r
/var/crash/`uname -n`
ok boot -s
ok boot -as
Stop-A
ok go
/etc/TIMEZONE
/etc/default/init
/etc/inet/ntp.conf
/etc/init.d/xntpd
SOLARIS
pkgadd
pkgrm
pkginfo
pkginfo -i
pkginfo -p
pkgchk -l package
patchadd -p
pkgchk -l -p path
/var/sadm
SOLARIS
/devices
drvconfig
devlinks
disks
tapes
ports
rem_drv
prtconf -D
psrinfo -v
pmadm -l
/usr/platform/`uname -m`/
sbin/prtdiag
ok test-all
/opt/SUNWvts/bin/sunvts
/dev/c#t#d0s2
/dev/dsk/c#t6d0s2
hsfs
/dev/rmt/0
/dev/rmt/0n
/dev/diskette
SOLARIS
Solaris 2 FAQ
Solaris 10 Documentation
SunSolve
1-800-USA-4SUN
Sun Freeware
suned.sun.com

veritas cluster

Term/Cmd/Pkg  Description                               Command / File
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
VRTSvcs       VERITAS Cluster Server
VRTSvcswz     VERITAS Cluster Server Wizard
VRTScsga      VERITAS Cluster Server Graphical Administrator
VRTSgab       VERITAS Group Membership and Atomic Broadcast
VRTSllt       VERITAS Low Latency Transport
VRTSvcsor     VERITAS Cluster Server Oracle Enterprise Extension
VRTSvcssy     VERITAS Cluster Server Sybase Enterprise Extension
VRTSperl      VERITAS Perl for VRTSvcs
Cluster       Name of your HA environment
Nodes         Physical systems that make up the cluster
Service group Abstract container of related resources
Resource      Cluster components (i.e. NICs, IPs, disk groups, volumes, mounts, processes, etc.)
Attributes    Parameter values that define the resources
Dependencies  Links between resources or service groups
Cluster Mgr   Cluster Monitor: log in, add clusters, change preferences
Cluster Mgr   Cluster Explorer: monitor systems, service grps, resources, attributes & dependencies
Cluster Mgr   Log Desk: monitor log messages received from engine, view GUI commands
Cluster Mgr   Command Center: build VCS commands and send to engine
LLT           Low Latency Transport provides fast kernel-kernel comm. & monitors network connx.
GAB           Grp membership & Atomic Broadcast maintains a synch. state & monitors disk comm.
Config files  VCS etc directory                        $VCSETC=/etc/VRTSvcs
Config files  VCS configuration directories            $VCSCONF=/etc/VRTSvcs/conf/config
Binary files  VCS opt directory                        $VCSOPT=/opt/VRTSvcs
Binary files  VCS binary path                          $VCSBIN=/opt/VRTSvcs/bin
Log files     VCS log path                             $VCSLOG=/var/VRTSvcs/log
Config files  VCS configuration file                   /etc/VRTSvcs/conf/config/main.cf
LLT tab file  LLT configuration file                   /etc/llttab
LLT hosts file LLT host name database                  /etc/llthosts
GAB file      Grp membership & Atomic Broadcast file   /etc/gabtab
quick-start   VCS Quick-start wizard                   # $VCS_HOME/wizards/config/quick_start
quick-NFS     VCS Quick-NFS wizard                     # $VCS_HOME/wizards/services/quick_nfs
llt           Verify LLT                               # /sbin/lltstat -n
llt           Get interface MAC Address                # /opt/VRTSllt/getmac device_name
llt           Check network connectivity               # /opt/VRTSllt/dlpiping -s|-c -v device_name
gab           Verify GAB                               # /sbin/gabconfig -a ; /sbin/gabconfig -l
hasys         List systems in cluster                  # /opt/VRTSvcs/bin/hasys -list
hasys         Detailed info on each cluster node       # /opt/VRTSvcs/bin/hasys -display (sysname)
hasys         Increase system count in gabtab startup  # /opt/VRTSvcs/bin/hasys -add (sysname)
hasys         Delete a system                          # /opt/VRTSvcs/bin/hasys -delete (sysname)
hastart       Start VCS cluster                        # /opt/VRTSvcs/bin/hastart
hastart       Force start a stale VCS cluster          # /opt/VRTSvcs/bin/hastart -force -stale
hastop        Stop VCS on all systems                  # /opt/VRTSvcs/bin/hastop -all
hastop        Stop VCS had, keep srvc-groups running   # /opt/VRTSvcs/bin/hastop -local -force
hastop        Stop VCS, migrate srvc-groups to sysname # /opt/VRTSvcs/bin/hastop -sys (sysname) -evacuate
hastatus      Provide continual status of service grps # /opt/VRTSvcs/bin/hastatus
hastatus      Verify status of service groups          # /opt/VRTSvcs/bin/hastatus -summary
hacf          Check for syntax errors in main.cf       # /opt/VRTSvcs/bin/hacf -verify /etc/VRTSvcs/conf/config/main.cf
hacf          Generate dependency tree in main.cf      # /opt/VRTSvcs/bin/hacf -generate /etc/VRTSvcs/conf/config/main.cf
hares         List all resources                       # /opt/VRTSvcs/bin/hares -list
hares         List a resource's dependencies           # /opt/VRTSvcs/bin/hares -dep (resource_name)
hares         Get detailed info on a resource          # /opt/VRTSvcs/bin/hares -display (resource)
hares         Add a resource                           # /opt/VRTSvcs/bin/hares -add (resource_name) (resource_type) (service_group)
hares         Modify attributes of the new resource    # /opt/VRTSvcs/bin/hares -modify (resource_name) (attribute_name) (value)
hares         Delete a resource                        # /opt/VRTSvcs/bin/hares -delete (resource_name)
hares         Online a resource                        # /opt/VRTSvcs/bin/hares -online (resource_name) -sys (system_name)
hares         Offline a resource                       # /opt/VRTSvcs/bin/hares -offline (resource_name) -sys (system_name)
hares         Monitor resource on a system             # /opt/VRTSvcs/bin/hares -probe (resource_name) -sys (system_name)
hares         Clear a faulted resource                 # /opt/VRTSvcs/bin/hares -clear (resource_name) [-sys system_name]
hares         Make a resource's attribute value local  # /opt/VRTSvcs/bin/hares -local (resource_name) (attribute_name) (value)
hares         Make a resource's attribute value global # /opt/VRTSvcs/bin/hares -global (resource_name) (attribute_name) (value)
hares         Specify a dependency between 2 resources # /opt/VRTSvcs/bin/hares -link (parent_res) (child_res)
hares         Remove dependency between 2 resources    # /opt/VRTSvcs/bin/hares -unlink (parent_res) (child_res)
hares         Modify a Share res. by adding options    # /opt/VRTSvcs/bin/hares -modify Share_cicgt-as4-p_apps Options "%-o rw,root=dcsa-cln1"
hagrp         List all service groups                  # /opt/VRTSvcs/bin/hagrp -list
hagrp         List a service group's resources         # /opt/VRTSvcs/bin/hagrp -resources [service_group]
hagrp         List a service group's dependencies      # /opt/VRTSvcs/bin/hagrp -dep [service_group]
hagrp         Detailed info about a service group      # /opt/VRTSvcs/bin/hagrp -display [service_group]
hagrp         Start service group, bring res. online   # /opt/VRTSvcs/bin/hagrp -online (service_group) -sys (system_name)
hagrp         Stop service group, bring res. offline   # /opt/VRTSvcs/bin/hagrp -offline (service_group) -sys (system_name)
hagrp         Switch service group between nodes       # /opt/VRTSvcs/bin/hagrp -switch (service_group) -to (system_name)
hagrp         Freeze svcgroup (disable onl. & offl.)   # /opt/VRTSvcs/bin/hagrp -freeze (service_group) [-persistent]
hagrp         Thaw a svcgroup (enable onl. & offl.)    # /opt/VRTSvcs/bin/hagrp -unfreeze (service_group) [-persistent]
hagrp         Enable a service group                   # /opt/VRTSvcs/bin/hagrp -enable (service_group) [-sys system_name]
hagrp         Disable a service group                  # /opt/VRTSvcs/bin/hagrp -disable (service_group) [-sys system_name]
hagrp         Enable all resources in a service group  # /opt/VRTSvcs/bin/hagrp -enableresources (service_group)
hagrp         Disable all resources in a service group # /opt/VRTSvcs/bin/hagrp -disableresources (service_group)
hagrp         Specify dependency between 2 svc groups  # /opt/VRTSvcs/bin/hagrp -link (parent_group) (child_group) (relationship)
hagrp         Remove dependency between 2 svc groups   # /opt/VRTSvcs/bin/hagrp -unlink (parent_group) (child_group)
hagrp         Auto-enable a svcgroup marked disabled
              due to prob with system_name             # /opt/VRTSvcs/bin/hagrp -autoenable (service_group) [-sys system_name]
hatype        List resource types                      # /opt/VRTSvcs/bin/hatype -list
hatype        Detailed info on a resource type         # /opt/VRTSvcs/bin/hatype -display (resource_type)
hatype        List all resources of a part. type       # /opt/VRTSvcs/bin/hatype -resources (resource_type)
hatype        Add a resource type                      # /opt/VRTSvcs/bin/hatype -add (resource_type)
hatype        Set static attribute values              # /opt/VRTSvcs/bin/hatype -modify ...
hatype        Delete a resource type                   # /opt/VRTSvcs/bin/hatype -delete (resource_type)
haattr        Add attribute to a type definition       # /opt/VRTSvcs/bin/haattr -add (resource_type) (attribute_name) (attribute_type: -integer, -string, -vector)
haattr        Delete an entry in a type definition     # /opt/VRTSvcs/bin/haattr -delete (resource_type) (attribute_name)
haconf        Set VCS configuration file to r/w mode   # /opt/VRTSvcs/bin/haconf -makerw
haconf        Set VCS configuration file to read mode  # /opt/VRTSvcs/bin/haconf -dump -makero
hauser        Add a user with r/w access to VCS        # /opt/VRTSvcs/bin/hauser -add (user_name)
hauser        Add a user with read access only to VCS  # /opt/VRTSvcs/bin/hauser -add VCSGuest
hauser        Update a user                            # /opt/VRTSvcs/bin/hauser -update (user_name)
hauser        Delete a user                            # /opt/VRTSvcs/bin/hauser -delete (user_name)
hauser        Display all users                        # /opt/VRTSvcs/bin/hauser -display
haagent       Start agents manually                    # haagent -start (agent_name) -sys (system_name)
haagent       Stop agents manually                     # haagent -stop (agent_name) -sys (system_name)
hagui         Start Cluster Manager                    # /opt/VRTSvcs/bin/hagui
hagui         Start Cluster Manager in debug mode      # /opt/VRTSvcs/bin/hagui -D

Product Terminology comparisons
Sun SC 2.2                   Veritas VCS 1.1
------------------------------------------------------
cluster name                 cluster name
admin workstation            -
physical node A              local system
physical node B              remote system
physical node IP address     maintenance IP address
logical host                 service group
logical host IP address      service group IP address
-                            resources
disk group                   disk group
private heartbeats           communication channels
-                            GAB disk (disk heartbeat)
Quorum disk                  -
Admin filesystem             -
scinstall                    Quick-Start wizard
split-brain                  network partition
configuration files:

/etc/llthosts

/etc/llttab

/etc/gabtab

/etc/VRTSvcs/conf/config/main.cf

/etc/VRTSvcs/conf/config/sysname
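For reference, minimal contents for the LLT and GAB files on a two-node cluster might look like the following. The node names, interface names, and cluster ID are invented, so adjust to your site:

```
/etc/llthosts:
0 nodeA
1 nodeB

/etc/llttab (on nodeA):
set-node nodeA
set-cluster 10
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -

/etc/gabtab:
/sbin/gabconfig -c -n 2
```

The -n 2 in gabtab tells GAB to seed only when both nodes are present; /etc/llttab differs per node in its set-node line.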

solaris---Jumpstart Install

Description Of a Jumpstart Install
Here, roughly, is what goes on during a Solaris 8 jumpstart install. Oh, but first let's talk about some terms!
Definition of Terms
Client = Machine to Receive Jumpstart Install
Boot Server = Machine on same subnet/local network/broadcast domain as Client which hands the client boot files and information
Install Server = Machine from which Client will install the Solaris OS
Notice: The Install server does not have to be on the same subnet as the Client. The Boot Server passes the Client the IP address of its default router, which the client then uses as its default router. Not sure if this is documented anywhere in Sun's docs, but if you snoop -v the traffic between a Boot Server and an Install Client you'll see the bootparam service hand this info to the Client. Also if you are using routed instead of default routers, well, good luck!
Also Notice: There is nothing stopping the Install server and the Boot server from being the same machine, and in practice it makes good sense.
Admin issues the 'boot net - install' command from the 'ok' prompt of the client
Client machine broadcasts a Reverse-ARP request to the local subnet
Boot server's in.rarpd daemon responds with an IP address from its /etc/hosts file matching the hostname listed for the client's MAC address in the server's /etc/ethers file
Client machine requests OS kernel and extras from the Boot server's tftp daemon
The in.tftpd server running on the Boot Server hands the client the requested files
Client downloads and boots the requested kernel image
Client starts bootparams client and requests boot info from Boot Server
rpc.bootparams server on Boot Server responds with the NFS location of a jumpstart-dir, an install-root filesystem, and a sysidcfg dir
Client NFS mounts Boot Server's jumpstart-dir/Solaris_8/Tools/Boot directory as its root filesystem
Client NFS mounts Install server's jumpstart-dir and launches OS install
Client begins "System Identification" stage of install. If you are familiar with CD installs of Solaris, you will recognize this stage as where the install program asks for host, network, locale, date and time information
Client NFS mounts the sysidcfg directory it was given by the rpc.bootparams daemon on the Boot server
Client reads host configuration information from the text file named 'sysidcfg' inside the sysidcfg directory it has mounted
Client requests manual entry at its console for any information not contained in this file
Client begins "System Install" stage of install. If you are familiar with CD installs of Solaris, you will recognize this stage as where the install program asks you to choose a partition layout for the system and to choose a package cluster and/or individual software packages to install.
Client finds line matching its own system architecture (Sparc/x86), hostname, ip address, etc. in 'rules.ok' text file in Install server's jumpstart-dir
From this line the Client is told the name of a begin-script, profile, and finish-script. The profile is a text file; the begin and finish scripts are Bourne shell scripts. All three items are located in the Install server's jumpstart-dir
Client executes the begin-script
Client partitions and formats its disks according to rules laid out in its profile
Client installs software packages according to rules laid out in its profile
Client installs any system patches found in the Install server's jumpstart-dir/Patches directory. It installs these patches in date-wise order (that is to say according to their time-stamps) from oldest to newest
Client executes the finish-script
System is installed, and Client reboots
That's All!!
Network Services Needed for a Jumpstart Install
On Boot Server
in.rarpd
From The Man Page (man rarpd): RARP is used by machines at boot time to discover their Internet Protocol (IP) address. The booting machine provides its Ethernet address in a RARP request message. Using the ethers and hosts databases, in.rarpd maps this Ethernet address into the corresponding IP address, which it returns to the booting machine in a RARP reply message. The booting machine must be listed in both databases for in.rarpd to locate its IP address.
Under Solaris 8, in.rarpd is started by the nfs.server init script if the directory /tftpboot exists.
rpc.bootparams
From The Man Page (man bootparamd): rpc.bootparamd is a server process that provides information from a bootparams database to diskless clients at boot time. The bootparams database can either be the flat text file /etc/bootparams (recommended) or an NIS map. Which of these methods to use is controlled by an entry in the /etc/nsswitch.conf file.
Under Solaris 8, rpc.bootparamd is started by the nfs.server init script if the directory /tftpboot exists.
in.tftpd
Trivial File Transfer Protocol Daemon. A stripped down version of ftp, used to transfer the Solaris 8 kernel plus some extra goodies to the boot clients. The tftp daemon needs to be configured to share the /tftpboot directory (which you may have to create). The tftp daemon included with Solaris 8 (there are many others you could use if you prefer) is run from inetd. It is turned on in a default Solaris 8 install and hence receives no mention (at least not that I saw...) in Sun's own Jumpstart documentation even though it is essential to the jumpstart process(!?!) Anyhow, if you have disabled the service, adding lines such as these to the file /etc/inet/inetd.conf and restarting the inetd daemon (use /etc/init.d/inetsvc {start|stop}) should start it up again:
tftp dgram udp wait root /usr/sbin/in.tftpd in.tftpd -s /tftpboot
tftp dgram udp6 wait root /usr/sbin/in.tftpd in.tftpd -s /tftpboot
rpcbind, nfsd, mountd
RPC (Remote Procedure Call) plus the standard NFS Server suite. These services are required to serve out parts of the boot server's filesystem via NFS. The part that needs to be served is the Boot sub-directory of jumpstart-dir. On an Install/Boot server combo machine this is just a sub-directory of the jumpstart-dir which you will already be sharing via NFS. On a stand-alone Boot server this directory is wherever you told the Solaris 8 Jumpstart install scripts to install the boot-only portion of the Jumpstart files. (Basically what you will be serving is the directory tree used as the root filesystem on the install client.) Under Solaris 8 these services are started by the rpc and nfs.server init scripts.
On Install Server
rpcbind, nfsd, mountd
RPC (Remote Procedure Call) plus the standard NFS Server suite. These services are required to serve out parts of the Install server's filesystem via NFS. What needs to be shared is the jumpstart-dir, i.e. where you tell the Solaris 8 Jumpstart install scripts to install to. You must also NFS share the sysidcfg directories where you put the sysidcfg files for your hosts. (One approach is to simply make the sysidcfg dir a subdirectory of the jumpstart-dir so that it is shared along with the jumpstart-dir and thus does not need a separate dfstab entry; this also serves to put all of your jumpstart files under one directory tree rather than having them all spread out.) Under Solaris 8 these services are started by the rpc and nfs.server init scripts.
Key Files Needed for a Jumpstart Install
On Boot Server
/etc/ethers, /etc/hosts
in.rarpd uses these files to give out an IP address to the Install Client, hence the Install Client must have a valid entry in both of these files for things to work.
ethers is a MAC Address to Hostname table; hosts is an IP Address to Hostname table.
In theory it may be possible to use NIS for these purposes instead of the local flat files, but if you want to do things like that you are on your own.
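A rough sketch of the lookup in.rarpd conceptually performs, using sample ethers and hosts entries; the MAC address, hostname, and IP address are invented for illustration:

```shell
# Sketch of in.rarpd's lookup: MAC -> hostname (ethers), then
# hostname -> IP (hosts). All sample values are made up.
dir=$(mktemp -d)
cat > "$dir/ethers" <<'EOF'
8:0:20:ab:cd:ef jumpclient
EOF
cat > "$dir/hosts" <<'EOF'
192.168.1.42 jumpclient
EOF
mac="8:0:20:ab:cd:ef"
# ethers maps the client's MAC address to a hostname
host=$(awk -v m="$mac" '$1 == m { print $2 }' "$dir/ethers")
# hosts maps that hostname to the IP returned in the RARP reply
ip=$(awk -v h="$host" '$2 == h { print $1 }' "$dir/hosts")
echo "$ip"
```

If either file lacks the client's entry, the lookup comes back empty, which is exactly why in.rarpd stays silent when a client is missing from ethers or hosts.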
/etc/bootparams
rpc.bootparams uses this file to tell an Install Client where to NFS mount various install directories from. In particular, it tells where the Client should look to mount the 1) root filesystem used during the install, 2) jumpstart directory, 3) sysidcfg directory.
bootparams is a flat text file which can be edited by hand; there is an extensive manpage describing its format. Alternately Sun provides two tools called add_install_client and rm_install_client (located in jumpstart-dir/Solaris_8/Tools) which can be used to add and remove entries from /etc/bootparams. I recommend this method because it also copies the right files into /tftpboot for the system. See further down for a sample usage of these scripts.
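As a sketch, an add_install_client invocation and the kind of /etc/bootparams entry it produces; the hostnames ("bootsrv", "installsrv", "jumpclient"), platform group, and paths are invented, so treat this as a shape rather than exact output:

```shell
# Run from jumpstart-dir/Solaris_8/Tools on the boot server:
# ./add_install_client -s installsrv:/jumpstart-dir \
#     -c installsrv:/jumpstart-dir \
#     -p installsrv:/jumpstart-dir/sysidcfg \
#     jumpclient sun4u
#
# Resulting /etc/bootparams entry (one logical line):
# jumpclient root=bootsrv:/jumpstart-dir/Solaris_8/Tools/Boot
#     install=installsrv:/jumpstart-dir boottype=:in
#     sysid_config=installsrv:/jumpstart-dir/sysidcfg
#     install_config=installsrv:/jumpstart-dir
```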
/etc/inet/inetd.conf
The tftp server that comes with Solaris is started from inetd, hence in order to run the in.tftpd daemon (which you need to transfer a boot kernel to the install client) you need to make sure it has a line in the inetd.conf file. The following two lines should do the trick if you don't already have any:
tftp dgram udp wait root /usr/sbin/in.tftpd in.tftpd -s /tftpboot
tftp dgram udp6 wait root /usr/sbin/in.tftpd in.tftpd -s /tftpboot
/tftpboot/*
There are a handful of files that go in here, including a kernel for the Install Client boot. I don't know the specifics of these files because Sun's add_install_client and rm_install_client scripts (located in jumpstart-dir/Solaris_8/Tools) will take care of creating them for you. If any of you ambitious youngsters want to crack the code and figure out what all these files are, be sure to let me know what you find out!
/etc/dfs/dfstab
Solaris's NFS server config file. Really just a list of share commands that are executed when you run 'shareall'. This is hardly the place for an NFS tutorial, but here's enough to get you started...
On the Boot server you need to share out the Boot subdirectory of the directory where you told the Solaris 8 Jumpstart install scripts to install to. The directory should be shared read-only with the anon=0 option set (meaning unknown users are given UID=0; sounds dangerous, but you are sharing the FS as read-only so it's not so bad). To accomplish this add a line such as this to /etc/dfs/dfstab:
share -F nfs -o ro,anon=0 /jumpstart-dir/Boot
I like to limit access to my local subnet; to do this use a line like this:
share -F nfs -o ro=@192.168.1.0,anon=0 /jumpstart-dir/Boot
Where 192.168.1.0 is the network address of your subnet. After changing the dfstab run /usr/sbin/unshareall, followed by /usr/sbin/shareall, to share the directories. You can run /usr/sbin/share to see what directories you are currently sharing via NFS. The nfs.server init script can also be used to restart the NFS server daemons and thus propagate your dfstab changes.
On Install Server
/etc/dfs/dfstab
Solaris's NFS server config file. Really just a list of share commands that are executed when you run 'shareall'. This is hardly the place for an NFS tutorial, but here's enough to get you started...
On the Install server you need to share out the jumpstart directory, i.e. the directory where you told the Solaris 8 Jumpstart install scripts to install to. The directory should be shared read-only with the anon=0 option set (meaning unknown users are given UID=0; sounds dangerous, but you are sharing the FS as read-only so it's not so bad). To accomplish this add a line such as this to /etc/dfs/dfstab:
share -F nfs -o ro,anon=0 /jumpstart-dir
I like to limit access to my local subnet; to do this use a line like this:
share -F nfs -o ro=@192.168.1.0,anon=0 /jumpstart-dir
Where 192.168.1.0 is the network address of your subnet. You also must share out your sysidcfg directory. I recommend making the sysidcfg directory a subdir of the jumpstart-dir, in which case the above lines would export this directory as well (so long as you do not cross filesystems, exporting a parent dir automatically exports all of its sub-directories; hence if I NFS export /export/home, a client can legally mount /export/home/scott or even /export/home/scott/Mail). But if you want to put your sysidcfg file somewhere else, you will need another line in the same format as the above sharing this directory as well.
After changing the dfstab run /usr/sbin/unshareall, followed by /usr/sbin/shareall, to share the directories. You can run /usr/sbin/share to see what directories you are currently sharing via NFS. The nfs.server init script can also be used to restart the NFS server daemons and thus propagate your dfstab changes.
/jumpstart-dir/rules, /jumpstart-dir/rules.ok
/jumpstart-dir meaning wherever you told the Solaris 8 Jumpstart install scripts to install to. rules is a flat text file containing 'rules' for the install client to use when installing itself. Basically, the client looks in the rules file for a rule which matches its particular hostname, IP address, system architecture (SPARC/x86), etc. When it finds a match, it then uses the begin script, profile, and finish script listed for that rule to do the system install. The sample rules file that is installed into the /jumpstart-dir by the Jumpstart Server Install scripts documents the options for this file and provides several useful examples. These options are also thoroughly documented in the Sun Solaris 8 Advanced Installation Guide (available online at docs.sun.com). Here's a quick sample rules entry to get you started:

    any - custom-start.sh core.profile custom-finish.sh

This line matches any machine and will install it using the custom-start.sh pre-install script, the profile core.profile, and the finish script custom-finish.sh. rules.ok is the file that is actually used; it is created by first creating a rules file and then running the supplied check script, which will verify that all your rules and associated profiles make sense and then produce the rules.ok file. The check script is installed along with the rest of the jumpstart software by the Jumpstart Server Install Scripts and lives directly in the /jumpstart-dir; it takes no arguments and is simply run in the same directory as the rules and profile files.
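The first-match rule lookup can be mimicked with a quick awk sketch. This is only a toy illustration of the matching idea, not the real installer logic; the two rules, the js-test hostname, and the /tmp path are all made up, and only the 'hostname' and 'any' keywords are handled here:

```shell
# Build a toy rules.ok: a per-host rule first, then a catch-all.
# Fields are: match-keyword match-value begin-script profile finish-script
cat > /tmp/rules.ok <<'EOF'
hostname js-test - js-test.profile custom-finish.sh
any - custom-start.sh core.profile custom-finish.sh
EOF

# Print the profile from the FIRST rule that matches host $h.
awk -v h=js-test '
  $1 == "hostname" && $2 == h { print $4; exit }   # host-specific rule
  $1 == "any"                 { print $4; exit }   # catch-all rule
' /tmp/rules.ok
```

For js-test this prints js-test.profile; for any other host the catch-all wins and it prints core.profile, which is exactly why rule order matters.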
/jumpstart-dir/*.profile
The install profiles. These are flat text files that don't actually have to be named by any particular convention, although for sanity's sake '.profile' is my recommended extension. These files are named in the rules.ok file and tell the installer what packages/package clusters to install/remove on the install client, how to format the install client's disks, and what locale to install on the install client. The format of the profile file is pretty straightforward; it is thoroughly documented in the Sun Solaris 8 Advanced Installation Guide (available online at docs.sun.com). Here's a sample profile file to get you started:

    install_type    initial_install
    system_type     server
    partitioning    explicit
    filesys         any 2000 /
    filesys         any 1500 swap
    filesys         any 1500 /var
    filesys         any 2000 /opt
    geo             N_America
    cluster         SUNWCreq
    package         SUNWgzip add
    package         SUNWless add
    package         SUNWman add
    package         SUNWbash add
    package         SUNWtcsh add
    package         SUNWzsh add
/sysidcfg-dir/hostname, /sysidcfg-dir/hostname/sysidcfg
/sysidcfg-dir being wherever you decided to put your sysidcfg directories, hostname being the hostname of the install client. One of the bootparams options will point an install client to /sysidcfg-dir/hostname, and the install client will then look in this directory for a file named sysidcfg, which it will use to complete the System Identification stage of the install. The /sysidcfg-dir must be NFS exported! You can actually use any name you want for the subdirectory; I chose to use the install client's hostname as a convention, and I recommend you do so as well. The important point is that in the bootparams file you point the install client at a directory, and it then looks for a file named sysidcfg in that directory; hence you cannot simply have a separate file for each host, you must have a separate directory containing a unique sysidcfg file for each host. (Of course you can use the same sysidcfg file for multiple hosts, but read on to see why you may not want to do that.) The sysidcfg file is a flat text file containing hostname, IP address, name service, locale, IPv6, kerberos, netmask, root password hash, terminal, and time service information. Its format and syntax are thoroughly documented in the Sun Solaris 8 Advanced Installation Guide (available online at docs.sun.com). Here's a sample sysidcfg file to get you started:

    system_locale=en_US
    timezone=US/Central
    terminal=xterms
    name_service=NONE
    timeserver=192.168.1.2
    security_policy=NONE
    root_password=PpZsmNrUdbsG
    network_interface=primary { hostname=js-test
                                ip_address=192.168.1.5
                                netmask=255.255.255.0
                                protocol_ipv6=yes }
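The one-directory-per-host layout can be sketched like this. The paths, hostnames, and file contents are made up for illustration (a real base directory would live under your NFS-exported jumpstart-dir, and real sysidcfg files would carry the full keyword set shown above):

```shell
# Each install client gets its own directory holding a file that
# must be named exactly 'sysidcfg'; only the directory name varies.
base=/tmp/sysidcfg-demo
mkdir -p $base/js-test $base/js-test2

# The per-host files differ in hostname, IP address, etc.
printf 'network_interface=primary { hostname=js-test }\n'  > $base/js-test/sysidcfg
printf 'network_interface=primary { hostname=js-test2 }\n' > $base/js-test2/sysidcfg

ls $base
```

In bootparams each client is then pointed at its own directory (e.g. $base/js-test), never directly at a file.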
How To Setup a Jumpstart Server

Step 1) Install Jumpstart Software
All of the servers and whatnot (e.g. nfsd, tftpd, etc.) should already be on your system, as they are part of the core Solaris 8 system. What's left to install, then, is the Jumpstart Server software from the Sun install CDs. At minimum you'll need Solaris 8 Software CD 1 from the Solaris 8 install media kit. Insert the Solaris 8 Software CD 1 into the to-be Install Server and mount it, then cd to /mount-point/Solaris_8/Tools. Here you will find the first Solaris 8 Jumpstart Server Install script; it's called setup_install_server. Run the script with the directory into which you wish to install the Jumpstart Software as its only argument, e.g.:

    ./setup_install_server /jumpstart-dir

The destination directory should be empty and should have > 1 gigabyte of free space available to it. This script basically just copies the contents of the install CD over to your Install Server. If you only want to create a Boot Server, you can use the -b option, which will only copy over the files required by a Boot Server. To add the contents of the other install CDs to your to-be Install Server (Software CD 2 is HIGHLY recommended, it contains the man page packages!)
follow these steps for each CD: insert the CD into the to-be Install Server and mount it, then cd to /mount-point/Solaris_8/Tools. Here you will find the secondary Solaris 8 Jumpstart Server Install script; it's called add_to_install_server. Run this script with the directory into which you wish to install the Jumpstart Software as its only argument, e.g.:

    ./add_to_install_server /jumpstart-dir

Again, this script simply copies over the contents of the CD to your to-be Install Server. Now you need to copy the check script and the sample profile and rules files into your /jumpstart-dir (actually these files can go in their own directory, but that directory would also have to be NFS exported). One nice offshoot of the Jumpstart Server Install scripts just dumping the install CDs onto your Install Server's hard drive is that these files are now already on your Install Server; you've simply got to put copies in the root of your /jumpstart-dir. To do so, use cp like this:

    cp /jumpstart-dir/sparc_8/Solaris_8/Misc/jumpstart_sample/* /jumpstart-dir/.

Finally, you'll need to set up your sysidcfg directory. To do so, simply choose where you want to put it (I recommend making it a sub-dir of your /jumpstart-dir) and create it, e.g.:

    mkdir -p /jumpstart-dir/sysidcfg-dir

Step 2) Setup All the Required Server Daemons
Following the directions given above, you'll need to make sure your to-be Install Server is running in.tftpd and in.rarpd, and is NFS-sharing its /jumpstart-dir and /sysidcfg-dir directories. Here's a checklist of what needs to be done:
/etc/dfs/dfstab must be set to share your jumpstart-dir and sysidcfg-dir
/etc/inet/inetd.conf should be setup to start in.tftpd
/tftpboot should be created
/etc/bootparams should be created (leave empty for now)
rpc services should be (re-)started
nfs server services should be (re-)started

When you are done with all of this, the following processes should show up in ps -e:
in.rarpd
rpc.bootparams
rpcbind
nfsd
mountd
inetd

Step 3) Setup Install Client
Your install server is more or less good to go now; all that's left is to flesh it out with rules, sysidcfg files, and profiles for your various hosts. Here's a sample walkthrough of adding a dummy Install Client named js-test. Add entries for js-test in the /etc/ethers and /etc/hosts files. Create a sysidcfg directory and file for js-test. Add js-test to /etc/bootparams and install boot files for it in /tftpboot. To do this using the provided add_install_client script, follow these steps:

    cd /jumpstart-dir/Solaris_8/Tools
    ./add_install_client -c jumpstart-server:/jumpstart-dir -p jumpstart-server:/sysidcfg-dir/js-test js-test sun4u

Here jumpstart-server is the hostname of your Install Server, /jumpstart-dir is your jumpstart-dir, /sysidcfg-dir/js-test is wherever you put the sysidcfg file for the install client, and sun4u is the platform group of the install client (e.g. sun4u for UltraSPARC machines). add_install_client has an extensive man page which details all its options and is well worth reading. Next, you must make sure there is a matching rule in the rules and rules.ok files, as well as a corresponding profile for your host. The installer simply uses the first line in rules.ok that matches your host, so be careful with the order of lines in that file. Now drop js-test to the 'ok' prompt, type 'boot net - install', and watch the magic happen!!
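The /etc/ethers and /etc/hosts entries for a walkthrough like this would look something like the following. The MAC address here is entirely made up for illustration; use the one your client actually reports (the OpenBoot 'banner' command at the ok prompt displays it):

```
# /etc/ethers  (hypothetical MAC address)
8:0:20:ab:cd:ef    js-test

# /etc/hosts
192.168.1.5        js-test
```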




That's all folks!!