How to Perform System Boot and Shutdown Procedures for Solaris 10, Part C

This article describes the Solaris 10 boot process in detail, including kernel loading and configuration and the module loading mechanism, with particular focus on how the Service Management Facility (SMF) works and the advantages it provides. SMF changes the traditional way services and their dependencies are managed and provides a more efficient approach to managing system services.

The Kernel

After the boot command initiates the kernel, the kernel begins several phases of the startup process. The first task is to load the two-part kernel: the secondary startup program, ufsboot, which is described in the preceding section, loads the operating system kernel into memory. The core of the kernel is two pieces of static code called genunix and unix. genunix is the platform-independent generic kernel file, and unix is the platform-specific kernel file. The platform-specific kernel used by ufsboot for systems running in 64-bit mode is named /platform/`uname -m`/kernel/sparcv9/unix. Solaris 10 runs only on 64-bit systems; however, early versions of Solaris gave you the option of running in 32-bit or 64-bit mode. On previous versions of Solaris, the 32-bit platform-specific kernel was named /platform/`uname -m`/kernel/unix. In Solaris 10, /platform/`uname -m`/kernel/unix is merely a link to the 64-bit kernel located in the sparcv9 directory. When ufsboot loads genunix and unix into memory, they are combined to form the running kernel.
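
To see these paths on a running system, you can check the platform name and then list the kernel files. This is only a minimal sketch; the platform name (sun4u in this chapter's examples) and the listing details will vary by hardware:

# Display the platform name that is substituted for `uname -m` in the kernel paths
uname -m

# List the 64-bit kernel and the link that points to it on a SPARC system
ls -l /platform/`uname -m`/kernel/unix /platform/`uname -m`/kernel/sparcv9/unix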

The kernel initializes itself and begins loading modules, using ufsboot to read the files. After the kernel has loaded enough modules to mount the root file system, it unmaps the ufsboot program and continues, using its own resources. The kernel creates a user process and starts the /sbin/init daemon, which starts other processes by reading the /etc/inittab file. (The /sbin/init process is described in the "System Run States" section, later in this chapter.)

The kernel is dynamically configured in Solaris 10. The kernel consists of a small static core and many dynamically loadable kernel modules. Many kernel modules are loaded automatically at boot time, but for efficiency, others—such as device drivers—are loaded from the disk as needed by the kernel.

A kernel module is a software component that is used to perform a specific task on the system. An example of a loadable kernel module is a device driver that is loaded when the device is accessed. Drivers, file systems, STREAMS modules, and other modules are loaded automatically as they are needed, either at startup or at runtime. This is referred to as autoconfiguration, and the kernel is referred to as a dynamic kernel. After these modules are no longer in use, they can be unloaded. Modules are kept in memory until that memory is needed. This makes more efficient use of memory and allows for simpler modification and tuning.

The modinfo command provides information about the modules that are currently loaded on a system. The modules that make up the kernel typically reside in the directories /kernel and /usr/kernel. Platform-dependent modules reside in the /platform/`uname -m`/kernel and /platform/`uname -i`/kernel directories.
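
For example, the following commands list the loaded modules and filter the list for a single module. This is only a sketch; the modules present on your system will differ:

# List every module currently loaded in the kernel, one screen at a time
modinfo | more

# Filter the list for a specific module, such as the ufs file system module
modinfo | grep ufs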

When the kernel is loading, it reads the /etc/system file where system configuration information is stored. This file modifies the kernel’s parameters and treatment of loadable modules. It specifically controls the following:

  • The search path for the modules to be loaded at boot time (moddir)
  • The modules to be excluded from loading, even if referenced (exclude)
  • The modules to be forcibly loaded at boot time rather than at first access (forceload)
  • The root file system type and root device (rootfs and rootdev)
  • The new values to override the default kernel parameter values (set)

The following console session shows that /etc/init is simply a link to the /sbin/init daemon and then displays the default /etc/system file:

# cd /sbin
# ls -l init
-r-xr-xr-x 1 root  sys  48984 Jan 22 2005 init
# cd /etc
# ls -l init
lrwxrwxrwx 1 root  root   12 Feb 25 14:26 init -> ../sbin/init
# more /etc/system
*ident "@(#)system  1.18 97/06/27 SMI" /* SVR4 1.5 */
*
* SYSTEM SPECIFICATION FILE
*
* moddir:
*
*  Set the search path for modules. This has a format similar to the
*  csh path variable. If the module isn't found in the first directory
*  it tries the second and so on. The default is /kernel /usr/kernel
*
*  Example:
*    moddir: /kernel /usr/kernel /other/modules
*
* root device and root filesystem configuration:
*
*  The following may be used to override the defaults provided by
*  the boot program:
*
*  rootfs:   Set the filesystem type of the root.
*
*  rootdev:  Set the root device. This should be a fully
*            expanded physical pathname. The default is the
*            physical pathname of the device where the boot
*            program resides. The physical pathname is
*            highly platform and configuration dependent.
*
*  Example:
*    rootfs:ufs
*    rootdev:/sbus@1,f8000000/esp@0,800000/sd@3,0:a
*
*  (Swap device configuration should be specified in /etc/vfstab.)
*
* exclude:
*
*  Modules appearing in the moddir path which are NOT to be loaded,
*  even if referenced. Note that 'exclude' accepts either a module name,
*  or a filename which includes the directory.
*
*  Examples:
*    exclude: win
*    exclude: sys/shmsys
*
* forceload:
*
*  Cause these modules to be loaded at boot time, (just before mounting
*  the root filesystem) rather than at first reference. Note that
*  forceload expects a filename which includes the directory. Also
*  note that loading a module does not necessarily imply that it will
*  be installed.
*
*  Example:
*    forceload: drv/foo
*
* set:
*
*  Set an integer variable in the kernel or a module to a new value.
*  This facility should be used with caution. See system(4).
*
*  Examples:
*
*  To set variables in 'unix':
*
*    set nautopush=32
*    set maxusers=40
*
*  To set a variable named 'debug' in the module named 'test_module'
*
*    set test_module:debug = 0x13

Modifying the /etc/system File

A system administrator modifies the /etc/system file to change the kernel's behavior. By default, the contents of the /etc/system file are completely commented out, and the kernel uses all default values. A default kernel is adequate for average system use, and you should not modify the /etc/system file unless you are certain of the results. A good practice is to always make a backup copy of any system file you modify, in case the original needs to be restored; incorrect entries could prevent your system from booting. If a boot fails because of an unusable /etc/system file, boot the system with the interactive option, boot -a. When you are asked to enter the name of the system file, enter the name of the backup system file or /dev/null to use default parameters.
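
The following sketch shows that practice; the backup filename is only a suggestion:

# Save a copy of the current kernel configuration file before editing it
cp -p /etc/system /etc/system.orig

If the edited file later prevents the system from booting, run boot -a from the OpenBoot prompt and supply /etc/system.orig (or /dev/null) when you are prompted for the name of the system file.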

The /etc/system file contains commands that have this form:

set <parameter>=<value>

For example, the setting for the kernel parameter nfs:nfs4_nra is set in the /etc/system file with the following line:

set nfs:nfs4_nra=4

This parameter controls the number of read-ahead operations that are queued by the NFS version 4 client.

Commands that affect loadable modules have this form:

set <module>:<variable>=<value>

Editing the /etc/system File

A command must be 80 or fewer characters in length, and a comment line must begin with an asterisk (*) or hash mark (#) and end with a hard return.

For the most part, the Solaris OE is self-adjusting to system load and demands minimal tuning. In some cases, however, tuning is necessary.

If you need to change a tunable parameter in the /etc/system file, you can use the sysdef command or the mdb command to verify the change. sysdef lists all hardware devices, system devices, loadable modules, and the values of selected kernel-tunable parameters. The following is the output that is produced from the sysdef command:

* Hostid
*
 80a26382
*
* sun4u Configuration
*
*
* Devices
*
scsi_vhci, instance #0
packages (driver not attached)
  terminal-emulator (driver not attached)
  deblocker (driver not attached)
  obp-tftp (driver not attached)
  disk-label (driver not attached)
  SUNW,builtin-drivers (driver not attached)
  sun-keyboard (driver not attached)
  ufs-file-system (driver not attached)
chosen (driver not attached)
openprom (driver not attached)
  client-services (driver not attached)
options, instance #0
aliases (driver not attached)
memory (driver not attached)
virtual-memory (driver not attached)
pci, instance #0
  pci, instance #0
    ebus, instance #0
      auxio (driver not attached)
      power, instance #0
      SUNW,pll (driver not attached)
      se, instance #0
      su, instance #0
      su, instance #1
      ecpp (driver not attached)
      fdthree, instance #0
      eeprom (driver not attached)
      flashprom (driver not attached)
      SUNW,CS4231, instance #0 (driver not attached)
    network, instance #0
    SUNW,m64B, instance #0
    ide, instance #0
      disk (driver not attached)
      cdrom (driver not attached)
      sd, instance #1
      dad, instance #1
  pci, instance #1
Output has been truncated . . . . . . .
* System Configuration
*
 swap files
swapfile             dev  swaplo  blocks     free
/dev/dsk/c0t0d0s3  136,11      16 1052624  1052624
*
* Tunable Parameters
*
 2498560  maximum memory allowed in buffer cache (bufhwm)
    1914  maximum number of processes (v.v_proc)
      99  maximum global priority in sys class (MAXCLSYSPRI)
    1909  maximum processes per user id (v.v_maxup)
      30  auto update time limit in seconds (NAUTOUP)
      25  page stealing low water mark (GPGSLO)
       1  fsflush run rate (FSFLUSHR)
      25  minimum resident memory for avoiding deadlock (MINARMEM)
      25  minimum swapable memory for avoiding deadlock (MINASMEM)
*
* Utsname Tunables
*
    5.10  release (REL)
  ultra5  node name (NODE)
   SunOS  system name (SYS)
 Generic  version (VER)
*
* Process Resource Limit Tunables (Current:Maximum)
*
0x0000000000000100:0x0000000000010000  file descriptors
*
* Streams Tunables
*
     9  maximum number of pushes allowed (NSTRPUSH)
 65536  maximum stream message size (STRMSGSZ)
  1024  max size of ctl part of message (STRCTLSZ)
*
* IPC Messages module is not loaded
*
* IPC Semaphores module is not loaded
*
* IPC Shared Memory module is not loaded
*
* Time Sharing Scheduler Tunables
*
    60  maximum time sharing user priority (TSMAXUPRI)
   SYS  system class name (SYS_NAME)
The mdb command is used to view or modify a running kernel and must be used with extreme care. The use of mdb is beyond the scope of this book; however, more information can be obtained from The Solaris Modular Debugger Guide available at http://docs.sun.com.
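
As a quick illustration only (a minimal sketch that reads, but does not modify, the running kernel, and must be run as root), mdb can display the current value of a kernel variable such as maxusers:

# Print the value of the maxusers kernel variable as a decimal integer
echo "maxusers/D" | mdb -k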

Kernel Tunable Parameters in Solaris 10

You’ll find in Solaris 10 that many tunable parameters that were previously set in /etc/system have been removed. For example, IPC facilities were previously controlled by kernel tunables, where you had to modify the /etc/system file and reboot the system to change the default values for these facilities. Because the IPC facilities are now controlled by resource controls, their configuration can be modified while the system is running. Many applications that previously required system tuning to function might now run without tuning because of increased defaults and the automatic allocation of resources.
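
For example, the System V shared memory limit that used to require a shmsys entry in /etc/system is now governed by the project.max-shm-memory resource control, which can be examined on a running system (a minimal sketch):

# Display the shared memory resource control that applies to the current shell's process
prctl -n project.max-shm-memory -i process $$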

Configuring the kernel and tunable parameters is a complex topic to describe in a few sections of a chapter. This introduction to the concept provides enough information for the average system administrator and describes the topics you’ll need to know for the exam. If you are interested in learning more about the kernel and tunable parameters, refer to the additional sources of information described at the end of this chapter.

The init Phase

Objective: The init phase has undergone major changes in Solaris 10. Even if you are experienced on previous versions of Solaris OE, this section introduces the svc.startd daemon and the Service Management Facility (SMF), which are new in Solaris 10 and will be tested heavily on the exam.

After control of the system is passed to the kernel, the system begins the last stage of the boot process—the init stage. In this phase of the boot process, the init daemon (/sbin/init) reads the /etc/default/init file to set any environment variables for the shell that init invokes. By default, the TZ and CMASK variables are set. These values get passed to any processes that init starts. Then, init reads the /etc/inittab file and executes any process entries that have sysinit in the action field so that any special initializations can take place before users log in.
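
To see exactly what init reads on your system, you can display both files (a minimal sketch):

# Show the environment variables that init passes to the processes it starts
cat /etc/default/init

# Show the inittab entries that run before users can log in
grep sysinit /etc/inittab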

After reading the /etc/inittab file, init starts the svc.startd daemon, which is responsible for starting and stopping other system services such as mounting file systems and configuring network devices. In addition, svc.startd will execute legacy run control (rc) scripts, which are described later in this section.

The /sbin/init command sets up the system based on the directions in /etc/inittab. Each entry in the /etc/inittab file has the following fields:

id:rstate:action:process

Table 3.23 provides a description of each field.

Table 3.23 Fields in the inittab File

Field     Description
id        A unique identifier
rstate    The run level(s)
action    How the process is to be run
process   The name of the command to execute

Valid action keywords are listed in Table 3.24:

Table 3.24 inittab action Field Values

Value      Description
sysinit    Executes the process before init tries to access the console via the console
           prompt. init waits for the completion of the process before it continues to
           read the inittab file.
powerfail  Indicates that the system has received a powerfail signal.

The following example shows a default /etc/inittab file:

ap::sysinit:/sbin/autopush -f /etc/iu.ap
sp::sysinit:/sbin/soconfig -f /etc/sock2path
smf::sysinit:/lib/svc/bin/svc.startd >/dev/msglog 2<>/dev/msglog </dev/console
p3:s1234:powerfail:/usr/sbin/shutdown -y -i5 -g0 >/dev/msglog 2<>/dev/msglog

The init process performs the following tasks based on the entries found in the default /etc/inittab file:

Line 1: Initializes the STREAMS modules used for communication services.

Line 2: Configures the socket transport providers for network connections.

Line 3: Initializes the svc.startd daemon for SMF.

Line 4: Describes the action to take when the init daemon receives a power fail shutdown signal.

The Service Management Facility (SMF)

Objective: Explain the Service Management Facility and the phases of the boot process.

  • Use Service Management Facility or legacy commands and scripts to control both the boot and shutdown procedures.

In Solaris 10, the svc.startd daemon replaces the init process as the master process starter and restarter. In previous versions of Solaris, init started all processes and brought the system to the appropriate "run level" or "init state." Now SMF, or more specifically the svc.startd daemon, assumes the role of starting system services.

SMF Services

A service can be described as an entity that provides a resource or list of capabilities to applications and other services. This entity can be running locally or remote, but at this phase of the boot process, the service is running locally. A service does not have to be a process; it can be the software state of a device or a mounted file system. Also, a system can have more than one instance of a service, such as with multiple network interfaces, multiple mounted file systems, or a set of other services.

The advantages of using SMF to manage system services, rather than the traditional Unix startup scripts that were run by the init process, are as follows:

  • SMF automatically restarts failed services in the correct order, whether they failed because of administrator error, a software bug, or an uncorrectable hardware error. The restart order is defined by dependency statements within the SMF facility.
  • The system administrator can view and manage services as well as view the relationships between services and processes.
  • Allows the system administrator to back up, restore, and undo changes to services by taking automatic snapshots of service configurations.
  • Allows the system administrator to interrogate services and determine why a service may not be running.
  • Allows services to be enabled and disabled either temporarily or permanently.
  • Allows the system administrator to delegate tasks to non-root users, giving these users the ability to modify, enable, disable, or restart system services.
  • Large systems boot and shut down faster because services are started and stopped in parallel according to the dependencies set up in the SMF.
  • Allows the output sent to the boot console to be either as quiet as possible, which is the default, or verbose by using boot -m verbose from the OpenBoot prompt.
  • Provides compatibility with legacy RC scripts.

Those of you who have experience on previous versions of Solaris will notice a few differences immediately:

  • The boot process creates fewer messages. All of the information that was provided by the boot messages in previous versions of Solaris is now located in the /var/svc/log directory (see the example following this list). You still have the option of booting the system with the boot -v option, which provides more verbose boot messages.
  • Because SMF is able to start services in parallel, the boot time is substantially quicker than in previous versions of Solaris.
  • Since services are automatically restarted if possible, it may seem that a process refuses to die. The svcadm command should be used to disable any SMF service that should not be running.
  • Many of the scripts in /etc/init.d and /etc/rc*.d have been removed, along with entries in the /etc/inittab file, so that these services can be administered using SMF. A few RC scripts remain in the /etc/init.d directory, such as sendmail, nfs.server, and dhcp, but most of these legacy RC scripts simply execute the svcadm command to start the services through SMF. Scripts and inittab entries that still exist for legacy or locally developed applications continue to run. The legacy services are started after the SMF services so that service dependencies do not become a problem.
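
For example, to review the boot-time messages for individual services, look in the /var/svc/log directory. This is only a sketch; log filenames follow a category-service:instance pattern, so substitute a name from the listing on your own system:

# List the per-service log files that SMF maintains
ls /var/svc/log

# View the log for one service (the filename shown is an example)
more /var/svc/log/network-inetd:default.log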

The service instance is the fundamental unit of administration in the SMF framework, and each SMF service can have multiple instances configured. A service instance is either enabled or disabled with the svcadm command described later in this chapter. An instance is a specific configuration of a service, and multiple instances of the same service can run in the same Solaris instance. For example, a web server is a service. A specific web server daemon that is configured to listen on port 80 is an instance. Another instance of the web server service could have different configuration requirements and listen on port 8080. The service has system-wide configuration requirements, but each instance can override specific requirements, as needed.

Services are represented in the SMF framework as service instance objects, which are children of service objects. These instance objects can inherit or override the configuration settings of the parent service object. Multiple instances of a single service are managed as child objects of the service object.

Services are not just the representation for standard long-running system services such as httpd or nfsd. Services also represent varied system entities that include third-party applications such as Oracle software. In addition, a service can include less traditional entities such as the following:

  • A physical network device
  • A configured IP address
  • Kernel configuration information

The services started by svc.startd are referred to as milestones. The milestone concept replaces the traditional run levels that were used in previous versions of Solaris. A milestone is a special type of service that represents a group of services; it is made up of several SMF services. For example, the services that constituted run levels S, 2, and 3 in previous versions of Solaris are now represented by these milestone services:

  • milestone/single-user (equivalent to run level S)
  • milestone/multi-user (equivalent to run level 2)
  • milestone/multi-user-server (equivalent to run level 3)

Other milestones that are available in the Solaris 10 OE are

  • milestone/name-services
  • milestone/devices
  • milestone/network
  • milestone/sysconfig

An SMF manifest is an XML (Extensible Markup Language) file that contains a complete set of properties that are associated with a service or a service instance. The properties are stored in files and subdirectories located in /var/svc/manifest. Manifests should not be edited directly to modify the properties of a service. The service configuration repository is the authoritative source of the service configuration information, and the service configuration repository can only be manipulated or queried using SMF interfaces, which are command-line utilities described later in this section.
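
If you want to see what the stored configuration of a service looks like without editing anything under /var/svc/manifest, one option is to export it from the repository through an SMF interface (a minimal sketch; the ftp service is only an example):

# Print the repository's configuration for the ftp service as XML
svccfg export network/ftp | more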

Each service instance is named with a Fault Management Resource Identifier or FMRI. The FMRI includes the service name and the instance name. For example, the FMRI for the ftp service is svc:/network/ftp:default, where network/ftp identifies the service and default identifies the service instance.

You may see various forms of the FMRI that all refer to the same service instance, as follows:

svc://localhost/network/inetd:default 
svc:/network/inetd:default
network/inetd:default
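
Because these forms are equivalent, the following commands report on the same instance; svcs accepts either a full or an abbreviated FMRI:

# Both commands display the status of the same service instance
svcs svc:/network/inetd:default
svcs network/inetd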

An FMRI for a legacy service will have the following format:

lrc:/etc/rc3_d/S90samba

where the lrc (legacy run control) prefix indicates that the service is not managed by SMF. The path component etc/rc3_d identifies the /etc/rc3.d directory where the legacy script is located, and S90samba is the name of the run control script. See the section titled "Using the Run Control Scripts to Stop or Start Services" later in this chapter for information on run control scripts.

The service names include a general functional category, which includes the following:

  • application
  • device
  • milestone
  • network
  • platform
  • site
  • system

Service Dependencies

In earlier versions of Solaris, processes were started at bootup by their respective shell scripts, which ran in a predetermined sequence. Sometimes one of these shell scripts failed, perhaps because of an error in the script or because a daemon did not start. When a script failed, the remaining scripts were started regardless, and sometimes they failed too because an earlier process had not started. Tracking the problem down was difficult for the system administrator.

To remedy the problem with sequencing scripts, Sun uses the SMF to manage the starting and stopping of services. The SMF understands the dependencies that some services have on other services. With SMF, if a service managed by the SMF fails or is terminated, all dependent processes are taken offline until the required process is restarted. The interdependency is defined by means of a service contract, which is maintained by the kernel and is where the process interdependency, the restarter process, and the startup methods are all described.

Most service instances have dependencies on other services or files. Those dependencies control when the service is started and automatically stopped. When the dependencies of an enabled service are not satisfied, the service is kept in the offline state. When the service instance dependencies are satisfied, the service is started or restarted by the svc.startd daemon. If the start is successful, the service is transitioned to the online state. The four types of service instance dependencies are listed below.

  • require_all—The dependency is satisfied when all cited services are running (online or degraded), or when all indicated files are present.
  • require_any—The dependency is satisfied when one of the cited services is running (online or degraded), or when at least one of the indicated files is present.
  • optional_all—The dependency is satisfied when all of the cited services are running (online or degraded), disabled, in the maintenance state, or when cited services are not present. For files, this type is the same as require_all.
  • exclude_all—The dependency is satisfied when all of the cited services are disabled, in the maintenance state, or when cited services or files are not present.

Each service or service instance must define a set of methods that start, stop, and optionally refresh the service. These methods can be listed and modified for each service using the svccfg command described later in this chapter.
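
As a sketch of how to inspect these methods, using the system-log service that appears in the listings later in this chapter (the start/exec property is where a service's start command is normally stored):

# Show the command that SMF runs to start the system-log service
svcprop -p start/exec system-log

# List all properties of the service, including its stop and refresh methods, if any
svccfg -s system-log listprop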

A service instance is satisfied and started when its criteria, for the type of dependency, are met. Dependencies are satisfied when cited services move to the online state. Once running (online or degraded), if a service instance with a require_all, require_any, or optional_all dependency is stopped or refreshed, the SMF considers why the service was stopped and uses the restart_on attribute of the dependency to decide whether to stop the service. The restart_on attribute values are defined in Table 3.25.

Table 3.25 restart_on Values

Event               None   Error   Restart   Refresh
stop due to error   no     yes     yes       yes
non-error stop      no     no      yes       yes
refresh             no     no      no        yes

A service is considered to have stopped due to an error if the service has encountered a hardware error or a software error such as a core dump. For exclude_all dependencies, the service is stopped if the cited service is started and the restart_on attribute is not none.

You can use the svcs command, described later in this chapter, to view service instance dependencies and to troubleshoot failures. You’ll also see how to use the svccfg command to modify service dependencies.

SMF Command-line Administration Utilities

The SMF provides a set of command-line utilities used to administer and configure the SMF. Table 3.26 describes these utilities.

Table 3.26 SMF Command-line Utilities

Command   Description
inetadm   Used to configure and view services controlled by the inetd daemon. Described
          in more detail in Chapter 8, "The Solaris Network Environment."
svcadm    Used to perform common service management tasks such as enabling, disabling,
          or restarting service instances.
svccfg    Used to display and manipulate the contents of the service configuration
          repository.
svcprop   Used to retrieve property values from the service configuration repository,
          with output that is appropriate for use in shell scripts.
svcs      Used to obtain a detailed view of the service state of all service instances
          in the configuration repository.

To report the status of all enabled service instances and get a list of the various services that are running, use the svcs command with no options as follows:

svcs | more

The svcs command obtains information about all service instances from the service configuration repository and displays the state, start time, and FMRI of each service instance as follows:

STATE          STIME    FMRI
legacy_run     14:10:49 lrc:/etc/rc2_d/S10lu
legacy_run     14:10:49 lrc:/etc/rc2_d/S20sysetup
legacy_run     14:10:50 lrc:/etc/rc2_d/S40llc2
legacy_run     14:10:50 lrc:/etc/rc2_d/S42ncakmod
legacy_run     14:10:50 lrc:/etc/rc2_d/S47pppd
legacy_run     14:10:50 lrc:/etc/rc2_d/S70uucp
Output has been truncated . . . .
online         14:09:37 svc:/system/svc/restarter:default
online         14:09:48 svc:/network/pfil:default
online         14:09:48 svc:/network/loopback:default
online         14:09:48 svc:/milestone/name-services:default
online         14:09:50 svc:/system/filesystem/root:default
online         14:09:54 svc:/system/filesystem/usr:default
online         14:09:56 svc:/system/device/local:default
online         14:09:57 svc:/milestone/devices:default
online         14:09:57 svc:/network/physical:default
online         14:09:58 svc:/milestone/network:default

Listing Legacy Services

You’ll notice that the list includes legacy scripts that were used to start up processes. Legacy services can be viewed, but cannot be administered with SMF.

The state of each service is one of the following (an example of filtering services by state appears after the list):

  • degraded—The service instance is enabled, but is running at a limited capacity.
  • disabled—The service instance is not enabled and is not running.
  • legacy_run—The legacy service is not managed by SMF, but the service can be observed. This state is only used by legacy services that are started with RC scripts.
  • maintenance—The service instance has encountered an error that must be resolved by the administrator.
  • offline—The service instance is enabled, but the service is not yet running or available to run.
  • online—The service instance is enabled and has successfully started.
  • uninitialized—This state is the initial state for all services before their configuration has been read.
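
For example, to spot instances that need attention, you can filter the full listing by state (a minimal sketch):

# List any service instances currently in the maintenance or offline state
svcs -a | egrep "^maintenance|^offline"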

Running the svcs command without options will display the status of all enabled services. Use the -a option to list all services, including disabled services as follows:

svcs -a

The result is a listing of all services as follows:

. .. . <output has been truncated>
disabled  15:48:41 svc:/network/shell:kshell
disabled  15:48:41 svc:/network/talk:default
disabled  15:48:42 svc:/network/rpc/ocfserv:default
disabled  15:48:42 svc:/network/uucp:default
disabled  15:48:42 svc:/network/security/krb5_prop:default
disabled  15:48:42 svc:/network/apocd/udp:default
online   15:47:44 svc:/system/svc/restarter:default
online   15:47:47 svc:/network/pfil:default
online   15:47:48 svc:/network/loopback:default
online   15:47:50 svc:/system/filesystem/root:default
. .. . <output has been truncated>

To display information on selected services, you can supply the FMRI as an argument to the svcs command as follows:

svcs -l network

With the -l option, the system displays detailed information about the network service instance. The network FMRI specified in the previous example is a general functional category and is also called the network milestone. The information displayed by the previous command is as follows:

fmri         svc:/milestone/network:default
name         Network milestone
enabled      true
state        online
next_state   none
state_time   Wed Jul 27 14:09:58 2005
alt_logfile  /etc/svc/volatile/milestone-network:default.log
restarter    svc:/system/svc/restarter:default
dependency   require_all/none svc:/network/loopback (online)
dependency   require_all/none svc:/network/physical (online)

Use the -d option to view which services are started at the network:default milestone, as follows:

svcs -d milestone/network:default 

The system displays

STATE    STIME  FMRI
online   Jul_27 svc:/network/loopback:default
online   Jul_27 svc:/network/physical:default 

Another milestone is the multi-user milestone, which is displayed as follows:

svcs -d milestone/multi-user

The system displays all of the services started at the multi-user milestone:

STATE    STIME  FMRI
online   Jul_27 svc:/milestone/name-services:default
online   Jul_27 svc:/milestone/single-user:default
online   Jul_27 svc:/system/filesystem/local:default
online   Jul_27 svc:/network/rpc/bind:default
online   Jul_27 svc:/milestone/sysconfig:default
online   Jul_27 svc:/network/inetd:default
online   Jul_27 svc:/system/utmp:default
online   Jul_27 svc:/network/nfs/client:default
online   Jul_27 svc:/system/system-log:default
online   Jul_27 svc:/network/smtp:sendmail

Many of these services have their own dependencies, services that must be started before they get started. We refer to these as sub-dependencies. For example, one of the services listed is the svc:/network/inetd:default service. A listing of the sub-dependencies for this service can be obtained by typing

svcs -d network/inetd

The system displays the following dependencies:

STATE     STIME    FMRI
disabled  15:47:57 svc:/network/inetd-upgrade:default
online    15:47:48 svc:/network/loopback:default
online    15:48:01 svc:/milestone/network:default
online    15:48:30 svc:/milestone/name-services:default
online    15:48:33 svc:/system/filesystem/local:default
online    15:48:34 svc:/network/rpc/bind:default
online    15:48:36 svc:/milestone/sysconfig:default

The -d option lists the services or service instances upon which the specified service instance is dependent; in the multi-user example, these are the services that must be running before the multi-user milestone is reached. The -D option shows which other services depend on the milestone/multi-user service as follows:

svcs -D milestone/multi-user

The system displays the following output indicating that the dhcp-server and multi-user-server services are dependent on the multi-user service:

STATE    STIME  FMRI
online   Jul_27 svc:/network/dhcp-server:default
online   Jul_27 svc:/milestone/multi-user-server:default

To view processes associated with a service instance, use the -p option as follows:

svcs -p svc:/network/inetd:default

The system displays processes associated with the svc:/network/inetd:default service. In this case, information about the inetd process is shown as follows:

STATE   STIME   FMRI
online   Jul_27 svc:/network/inetd:default
           Jul_27  231 inetd

Viewing processes using svcs -p instead of the traditional ps command makes it easier to track all of the processes associated with a particular service.

If a service fails for some reason and cannot be restarted, you can list the service using the -x option as follows:

svcs -x

The system will display:

svc:/application/print/server:default (LP print server)
 State: disabled since Thu Sep 22 18:55:14 2005
Reason: Disabled by an administrator.
   See: http://sun.com/msg/SMF-8000-05
   See: lpsched(1M)
Impact: 2 dependent services are not running. (Use -v for list.)

The example shows that the LP print service has not started and provides an explanation that the service has not been enabled.

Starting and Stopping Services Using SMF

To disable services in previous versions of Solaris, the system administrator had to search out and rename the relevant RC script(s) or comment out statements in a configuration file, such as editing the inetd.conf file to disable ftp.

SMF makes it much easier to locate services and their dependencies. To start a particular service using SMF, the service instance must be enabled using the svcadm enable command. When you enable a service, the status change is recorded in the service configuration repository. The enabled state persists across reboots as long as the service dependencies are met. The following example demonstrates how to use the svcadm command to enable the ftp server:

svcadm enable network/ftp:default

To disable the ftp service, use the disable option as follows:

svcadm disable network/ftp:default

To verify the status of the service, type

svcs network/ftp

If the service is enabled, the system displays the following:

STATE    STIME    FMRI
online   16:07:08 svc:/network/ftp:default

The svcadm command allows the following subcommands (a usage sketch follows this list):

  • enable—Enables the service instances.
  • disable—Disables the service instances.
  • restart—Requests that the service instances be restarted.
  • refresh—For each service instance specified, refresh requests that the assigned restarter update the service's running configuration snapshot with the values from the current configuration. Some of these values take effect immediately (for example, dependency changes). Other values do not take effect until the next service restart.
  • clear—For each service instance specified, if the instance is in the maintenance state, signal to the assigned restarter that the service has been repaired. If the instance is in the degraded state, request that the assigned restarter take the service to the online state.
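
The following sketch shows how the refresh and clear subcommands are typically used; the ftp service is only an example:

# Ask the restarter to re-read the service's configuration after a property change
svcadm refresh network/ftp:default

# Tell the restarter that a service left in the maintenance state has been repaired
svcadm clear network/ftp:default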

The svcadm command can also be used to change milestones. In the following step by step, I'll use the svcprop and svcs commands to determine the default and current system state (milestone) and then use svcadm to move the system to the single-user milestone.

STEP BY STEP

3.2 Changing Milestones

  1. First, check what the default milestone is set to for your system by using the svcprop command. This command retrieves the SMF service configuration properties for my system:
# svcprop restarter|grep milestone

The system responds with the following, indicating that my system is set to boot to the multi-user milestone by default:

options/milestone astring svc:/milestone/multi-user:default
  2. Next, I'll check which milestone the system is currently running at:
svcs | grep milestone

The system responds with

disabled  16:16:36 svc:/milestone/multi-user-server:default
online    16:16:36 svc:/milestone/name-services:default
online    16:16:43 svc:/milestone/devices:default
online    16:16:45 svc:/milestone/network:default
online    16:16:57 svc:/milestone/single-user:default
online    16:17:03 svc:/milestone/sysconfig:default
online    16:17:16 svc:/milestone/multi-user:default

From the output, I see that multi-user-server is not running, but multi-user is running.

  3. To start the transition to the single-user milestone, type
svcadm milestone single-user

The system responds with the following, prompting for the root password and finally entering single-user mode:

Root password for system maintenance (control-d to bypass): <enter root password>
single-user privilege assigned to /dev/console.
Entering System Maintenance Mode

Sep 22 17:22:09 su: 'su root' succeeded for root on /dev/console
Sun Microsystems Inc. SunOS 5.10  Generic January 2005
# 
  4. Verify the current milestone with the following command:
svcs -a | grep milestone

The system responds with:

disabled  16:16:36 svc:/milestone/multi-user-server:default
disabled  17:21:37 svc:/milestone/multi-user:default
disabled  17:21:37 svc:/milestone/sysconfig:default
disabled  17:21:39 svc:/milestone/name-services:default
online    16:16:43 svc:/milestone/devices:default
online    16:16:45 svc:/milestone/network:default
online    16:16:57 svc:/milestone/single-user:default

The output indicates that the multi-user and multi-user-server milestones are disabled, and the single-user milestone is the only milestone that is currently online.

  5. Finally, I'll bring the system back up to the multi-user-server milestone:
svcadm milestone milestone/multi-user-server:default

Issuing the svcs command again shows that the multi-user-server milestone is back online:

svcs -a |grep milestone
online   16:16:43 svc:/milestone/devices:default
online   16:16:45 svc:/milestone/network:default
online   16:16:57 svc:/milestone/single-user:default
online   17:37:06 svc:/milestone/name-services:default
online   17:37:12 svc:/milestone/sysconfig:default
online   17:37:23 svc:/milestone/multi-user:default
online   17:37:31 svc:/milestone/multi-user-server:default

At bootup, svc.startd retrieves the information in the service configuration repository and starts services when their dependencies are met. The daemon is also responsible for restarting services that have failed and for shutting down services whose dependencies are no longer satisfied.

In the following example, users cannot telnet into the server, so I check on the telnet service using the svcs -x command as follows:

svcs -x telnet

The results show that the service is not running:

svc:/network/telnet:default (Telnet server)
 State: disabled since Fri Sep 23 10:06:46 2005
Reason: Temporarily disabled by an administrator.
   See: http://sun.com/msg/SMF-8000-1S
   See: in.telnetd(1M)
   See: telnetd(1M)
Impact: This service is not running.

I’ll enable the service using the svcadm command as follows:

svcadm enable svc:/network/telnet:default

After enabling the service, check the status using the svcs command as follows:

# svcs -x telnet

The system responds with:

svc:/network/telnet:default (Telnet server)
 State: online since Fri Sep 23 10:20:47 2005
   See: in.telnetd(1M)
   See: telnetd(1M)
Impact: None.

Also, if a service that has been running stops unexpectedly, try restarting the service using the svcadm restart command as follows:

svcadm restart svc:/network/telnet:default
 