Solaris™ 10 System Administration Essentials, Chapter 2: Boot, Service Management, and Shutdown

This article walks through how the Solaris operating system proceeds from loading the bootloader, to starting the kernel, to running user-mode programs, and explains in detail how to customize the boot process by modifying boot behavior, using failsafe boot, and configuring run levels. It also introduces the Service Management Facility (SMF) for managing system services, including service states, dependencies, log files, service instance configuration, and how to perform health checks and troubleshooting. Finally, it describes the two shutdown mechanisms of the Solaris operating system and the differences between them, helping readers better understand booting, service management, and shutdown on Solaris.


Chapter 2 Boot, Service Management, and Shutdown

2.1 Boot
2.1.1 The Bootloader
2.1.2 The Kernel
2.1.3 User-Mode Programs
2.1.4 GRUB Extensions
2.1.5 Modifying Boot Behavior
2.1.6 Run Levels
2.1.7 Troubleshooting

2.2 Service Management Facility
2.2.1 enabled
2.2.2 state, next_state, and state_time
2.2.3 logfile
2.2.4 dependency
2.2.5 How SMF Interacts with Service Implementations
2.2.6 The Service Configuration Facility
2.2.7 Health and Troubleshooting
2.2.8 Service Manifests
2.2.9 Backup and Restore of SCF Data

2.3 Shutdown
2.3.1 Application-Specific Shutdown
2.3.2 Application-Independent Shutdown

 

Boot, Service Management, and Shutdown

This chapter describes how the Solaris 10 operating system boots and explains options users and administrators have for changing the boot process. The chapter also describes the two methods of shutting down a Solaris 10 system. In addition, it describes the Service Management Facility (SMF) utility for managing system services.

Some of the information in this chapter describes Solaris boot processes that apply to both the x86 and SPARC platforms, but the chapter focuses primarily on booting the x86 platform.

2.1 Boot

As on most contemporary operating systems, Solaris initialization begins with the bootloader, continues with the kernel, and finishes with user-mode programs.

2.1.1 The Bootloader

On x86 platforms, the Solaris 10 OS is designed to be loaded by the GNU Grand Unified Bootloader (GRUB). By default, the bootloader displays a boot menu with two entries:

Solaris 10 10/08 s10x_u6wos_07b X86
Solaris failsafe

When a Solaris boot entry is chosen, GRUB loads the kernel specified by the entry into memory and transfers control to it. The entry also directs GRUB to load a boot archive with copies of the kernel modules and configuration files essential for startup. See the boot(1M) manual page for more about the boot archive. The failsafe entry facilitates troubleshooting and recovery.

Note that the GRUB supplied with Solaris contains extensions to GNU GRUB that are required to load the Solaris OS.

2.1.2 The Kernel

The kernel starts by initializing the hardware, clearing the console, and printing a banner:

SunOS Release 5.10 Version Generic_137138-06 64-bit
Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.

After hardware initialization, the kernel mounts the root file system and executes user-mode programs.

2.1.3 User-Mode Programs

As with all UNIX operating systems, most Solaris functionality is driven by user-mode programs. The kernel starts them by executing the /sbin/init file in the first process, which always has process ID ("pid") 1.

Like other UNIX operating systems, init reads the /etc/inittab configuration file and executes programs according to it. Unlike most UNIX operating systems, the default inittab does not instruct init to execute the init scripts in the /etc/rc*.d directories. Instead, the processes that implement most system-delivered functionality on Solaris are started by the Service Management Facility, or SMF. Accordingly, the Solaris init contains special-purpose functionality to start and restart (as necessary) the daemons that implement SMF. In turn, the facility is responsible for executing the init scripts. SMF is described in more detail in the next section.

Users accustomed to the Solaris 9 operating system will notice that the Solaris 10 operating system displays much less information on the console during boot. This is because SMF now starts service daemons with standard output directed to log files in /var/svc/log, rather than to the console.

Near the end of startup, SMF will execute the ttymon program on the console device at the direction of the console-login SMF service:

examplehost login:

If the SUNWdtlog package was installed, SMF will also start an X server on the console device and the dtlogin greeter on the display as part of the cde-login SMF service, as shown in Figure 2.1.

Figure 2.1 SMF Login

2.1.4 GRUB Extensions

The GRUB installed by Solaris differs from standard GNU GRUB in a few ways:

- It can read Solaris UFS file systems (which differ from BSD UFS file systems).
- It recognizes the kernel$ and module$ commands (since the 10/08 release).
- It can read Solaris ZFS pools, and recognizes the bootfs command (since the 10/08 release).
- It recognizes the findroot command (since the 10/08 release).

As a result, versions of GRUB not delivered with Solaris will generally not be able to boot a Solaris system image.

2.1.5 Modifying Boot Behavior

The Solaris kernel can accept a string of boot arguments from the bootloader. Recognized arguments are listed in the kernel(1M) manual page. Commonly used arguments are shown in Table 2.1.

Table 2.1 Boot Arguments

Argument    Description
-k          Start the kernel debugger, kmdb, as soon as possible. See the kmdb(1M) manual page and later in this chapter.
-s          Single-user mode. Start only basic services and present an sulogin prompt.
-v          Be verbose by printing extra information on the console.
-m verbose  Instruct the SMF daemons to be verbose.

The boot arguments for a single boot sequence can be set from the GRUB menu. Select an entry and press the e key. GRUB will display the entry editing screen, as shown in Figure 2.2.

Figure 2.2 Editing a GRUB Entry

Figure 2.3 shows the GRUB edit menu. In this menu, you can modify the kernel behavior for a specified boot entry. This menu is accessed at boot time by typing e to interrupt the boot process, then, with the boot entry selected, typing e again to enter the edit menu for the selected entry.

Figure 2.3 Editing the GRUB Menu at Boot Time

Select the line beginning with kernel and press the e key. After the path for unix, add the boot arguments. Press Enter to commit the change and b to boot the temporarily modified entry.

Boot arguments for a single boot can also be set on the reboot command line. See the reboot(1M) manual page.

Boot arguments can be installed permanently by modifying the menu.lst file. Use bootadm list-menu to locate the file in the file system.
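As a rough illustration, the following session locates the active menu.lst with bootadm and shows where boot arguments would go; the output, menu location, and kernel line are typical of a ZFS-root system but will differ from one installation to the next:

examplehost# bootadm list-menu
the location for the active GRUB menu is: /boot/grub/menu.lst
default 0
timeout 10
0 Solaris 10 10/08 s10x_u6wos_07b X86
1 Solaris failsafe

In menu.lst, arguments are appended to the end of the entry's kernel$ (or kernel) line, for example:

kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS -m verbose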

2.1.6 Run Levels

The Solaris OS defines eight run levels. Each run level is associated with particular system behaviors (see Table 2.2).

Table 2.2 Run Levels and Corresponding System Behaviors

Run Level  Behavior
S          Single-user mode. No login services running except for sulogin on the console.
0          The operating system is shut down and the computer is running its firmware.
1          Like S, except applications which deliver into /etc/rc1.d are also started.
2          Multi-user mode. Local login services running. Some applications, usually local, may be running.
3          Multi-user server mode. All configured services and applications running, including remote login and network-visible applications.
4          Alternative multi-user server mode. Third-party applications may behave differently than under run level 3.
5          Powered off.
6          Reboot.

By default, Solaris boots into run level 3. This default is taken from the initdefault entry of the /etc/inittab configuration file (see inittab(4)). It can be changed for a single boot sequence by specifying -s in the boot arguments (refer to Table 2.2).

To change the run level while the operating system is running, use the init command. See its manual page for a detailed description of run levels.
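For instance, who -r reports the current run level, and init moves the system to another one; the output below is illustrative:

examplehost$ who -r
   .       run-level 3  Mar 16 18:20     3      0  S
examplehost# init 6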

2.1.7 Troubleshooting

If you encounter problems during the boot process, check the tools and solutions described here for a remedy.

2.1.7.1 Milestone none

If a problem prevents user programs from starting normally, the Solaris 10 OS can be instructed to start as few programs as possible during boot by specifying -m milestone=none in the boot arguments. Once logged in, svcadm milestone all can be used to instruct SMF to continue initialization as usual.
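A minimal sketch of this recovery path (the kernel path shown is illustrative): append the milestone option to the GRUB kernel line, log in as root on the console once the limited boot completes, then resume normal startup:

kernel$ /platform/i86pc/multiboot -m milestone=none

examplehost console login: root
Password:
examplehost# svcadm milestone all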

2.1.7.2 Using the kmdb Command

If a problem prevents the kernel from starting normally, then it can be started with the assembly-level kernel debugger, kmdb. When the -k option is specified in the boot arguments, the kernel loads kmdb as soon as possible. If the kernel panics, kmdb will stop the kernel and present a debugging prompt on the console. If the -d option is also specified, kmdb will stop the kernel and present a debugging prompt as soon as it finishes loading. For more information, see the kmdb(1) manual page.
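As a sketch (the kernel path is illustrative), a GRUB kernel line that loads kmdb and breaks into the debugger as soon as it is loaded would combine both options:

kernel$ /platform/i86pc/multiboot -kd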

2.1.7.3 Failsafe boot

The second GRUB menu entry installed by default is labeled "failsafe". Selecting it will start the same kernel, but with the failsafe boot archive. It contains copies of the kernel modules and configuration files as delivered by the installer, without any user modifications. By default it also launches an interactive program that facilitates updating the normal boot archive for instances of the Solaris OS found on the disk.

2.2 Service Management Facility

The service management facility provides a means for computer administrators to observe and control software services. Each service is modeled as an instance of an SMF service, which allows for a single service implementation to be executed multiple times simultaneously, as many are capable of doing.

Services and service instances are named by character strings. For example, the service implemented by cron(1M) is named system/cron, and Solaris includes an instance of it named default. Tools usually refer to service instances with fault management resource identifiers, or FMRIs, which combine the service name and the instance name. The FMRI of the default instance of cron is svc:/system/cron:default. The service instances known to SMF can be listed with the svcs -a command. For convenience, most SMF tools accept abbreviations for service FMRIs; see the svcadm(1M) manual page.

Service implementations are controlled by the SMF service manager, svc.startd(1M). The current status and other information for service instances are printed by the svcs command. The -l (ell) option produces long output, like the following:

examplehost$ svcs -l cron
fmri         svc:/system/cron:default
name         clock daemon (cron)
enabled      true
state        online
next_state   none
state_time   Mon Mar 16 18:25:34 2009
logfile      /var/svc/log/system-cron:default.log
restarter    svc:/system/svc/restarter:default
contract_id  66
dependency   require_all/none svc:/system/filesystem/local (online)
dependency   require_all/none svc:/milestone/name-services (online)

The first line, labeled fmri, contains the full FMRI of the service instance. The name line provides a short description. The remaining output is explained later.

2.2.1 enabled

The service manager considers each service instance to be enabled or disabled. When enabled, the service manager will attempt to start a service instance's implementation and restart it as necessary; when disabled, the facility will try to stop the implementation if it has been started. Whether a service is enabled can be changed with svcadm's enable and disable subcommands.
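For example, disabling and re-enabling the default cron instance looks roughly like this (output abbreviated and illustrative):

examplehost# svcadm disable system/cron
examplehost# svcs cron
STATE          STIME    FMRI
disabled       18:40:12 svc:/system/cron:default
examplehost# svcadm enable system/cron
examplehost# svcs cron
STATE          STIME    FMRI
online         18:40:31 svc:/system/cron:default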

2.2.2 state, next_state, and state_time

To decide whether a service implementation should be started, the service manager always considers each service instance to be in one of six states.

disabled       The service implementation has not been started, or has been stopped.
offline        The service is not running, but will be started when its dependencies are met.
online         The service has started successfully.
degraded       The service is running, but at reduced functionality or performance.
maintenance    An operation failed and administrative intervention is required.
uninitialized  The service's restarter has not taken control of the service (restarters are explained later in this chapter).

While a service is in a stable state, the next_state field is none. While an operation to change the state of a service is incomplete, next_state will contain the target state. For example, before a service implementation is started, the service manager sets next_state to online, and if the operation succeeds, the service manager changes state and next_state to online and none, respectively.

The state_time line lists the time the state or next_state fields were updated. This time is not necessarily the last time the service instance changed states, since SMF allows transitions to the same state.

2.2.3 logfile

The service manager logs some information about service events to a separate file for each service instance. This field gives the name of that file.

2.2.3.1 restarter and contract_id

The service's restarter interacts with the service's implementation, and the contract ID identifies the processes that implement the service. Details of both are explained in Section 2.2.5, "How SMF Interacts with Service Implementations."

2.2.4 dependency

These lines list the dependencies of the service instance. SMF dependencies represent dependencies of the service implementation on other services. The service manager uses dependencies to determine when to start, and sometimes when to stop, service instances.

Each dependency has a grouping and a set of FMRIs. The grouping dictates when a dependency should be considered satisfied. SMF recognizes four dependency groupings.

require_all   All services indicated by the FMRIs must be in the online or degraded states to satisfy the dependency.
require_any   At least one cited service must be online or degraded to satisfy the dependency.
optional_all  The dependency is considered satisfied when all cited services are online, degraded, disabled, in the maintenance state, or are offline and will eventually come online without administrative intervention. Services that don't exist are ignored.
exclude_all   All cited services must be disabled or in the maintenance state to satisfy the dependency.

When a service is enabled, the service manager will not start it until all of its dependencies are satisfied. Until then, the service will remain in the offline state.

The service manager can also stop services according to dependencies. This behavior is governed by the restart_on value of the dependency, which may take one of four values.

none     Do not stop the service if the dependency service is stopped.
error    Stop the service if the dependency is stopped due to a software or hardware error.
restart  Stop the service if the dependency is stopped for any reason.
refresh  Stop the service if the dependency is stopped or its configuration is changed (refreshed).
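The dependencies of an instance, and the instances that depend on it, can also be listed directly with svcs -d and svcs -D; the output below is illustrative:

examplehost$ svcs -d system/cron:default
STATE          STIME    FMRI
online         Mar_16   svc:/system/filesystem/local:default
online         Mar_16   svc:/milestone/name-services:default
examplehost$ svcs -D system/cron:default
STATE          STIME    FMRI
online         Mar_16   svc:/milestone/multi-user:default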

2.2.5 How SMF Interacts with Service Implementations

SMF manages most services through daemons, though it manages some with what is called "transient service." In cases where neither daemons nor transient service is appropriate, SMF allows for alternative service starters.

2.2.5.1 Services Implemented by Daemons

SMF starts a service implemented by daemons by executing its start method. The start method is a program specified by the service author; its path and arguments are stored in the SCF data for the service. (SCF is described in the next section.) If the method exits with status 0, the service manager infers that the service has started successfully (the daemons were started in the background and are ready to provide service) and transitions its state to online. If the method exits with status 1, the service manager concludes that the service failed and re-executes the method. If the method fails three times consecutively, then the service manager gives up, transitions the service to the maintenance state, and appends a note to the service's SMF log file in /var/svc/log. In all cases, the method is started with its standard output redirected to the service's SMF log file. The service daemon will inherit this unless the author wrote the start method to do otherwise.

After starting a service implemented by a daemon, the service manager will monitor its processes. If all processes exit, then the service manager will infer that the service has failed and will attempt to restart it by re-executing the start method. If this happens more than ten times in ten seconds, then the service manager will give up and transition the service to the maintenance state. Processes are monitored through a process contract with the kernel. Contracts are a new kernel abstraction documented in contract(4); process-type contracts are documented in process(4). Services treated by the service manager in this way are referred to as contract services.

To stop a contract service, the service manager executes the stop method specified by the service author. Stop methods exit with status 0 to signal that the service has been stopped successfully, in which case the service manager will transition the service to the disabled state. However, the facility uses process contracts to ensure that a contract service has been fully stopped. If a service's stop method exits with status 0 but processes remain in the contract, then svc.startd will send SIGKILL signals to the processes once each second until they have exited. Each time, svc.startd records a note in the service's /var/svc/log file.

The processes associated with a contract service can be listed with the svcs -p command. To examine the contract itself, obtain its ID number from the svcs -v command or the contract_id line of the output of svcs -l and pass it to the ctstat(1) command.
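A brief illustration of both commands follows; the process and contract IDs are made up, and the exact column layout of ctstat may differ:

examplehost$ svcs -p cron
STATE          STIME    FMRI
online         Mar_16   svc:/system/cron:default
               Mar_16        201 cron
examplehost$ ctstat -i 66
CTID    ZONEID  TYPE    STATE   HOLDER  EVENTS  QTIME   NTIME
66      0       process owned   11      0       -       -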

2.2.5.2 Services Not Implemented by Daemons

Some services are not implemented by daemons. For example, the file system services (e.g., svc:/system/filesystem/minimal:default) represent behavior implemented by the kernel. Instead of representing whether the behavior is available or not, the file system services represent whether parts of the file system namespace that are allowed to be separate file systems (/var, /var/adm, /tmp) have been mounted and are available. Ensuring this is the case does require programs to be executed (e.g., mount(1M)), but the service should still be considered online once those programs have exited successfully.

For such services, svc.startd provides the transient service model. After the start method exits with status 0, the service is transitioned to online, and any processes it may have started in the background are not monitored.

2.2.5.3 Alternative Service Models

If a service author requires SMF to interact with his service in still a different way, then the facility allows him to provide or specify an alternative service restarter. When an author specifies a service restarter for a service, the facility delegates interaction with the service to the restarter, which must itself be an SMF service.

Solaris 10 includes a single alternative restarter: inetd(1M). inetd defers execution of a service's daemon until a request has been received by a network device. Before then, inetd reports services delegated to it to be online to signify readiness, even though no daemons may have been started. Operations specific to inetd-supervised services can be requested with the inetadm(1M) command.

The restarter for a service is listed by the svcs -l command. Services governed by the models provided directly by the service manager are listed with the special FMRI of svc:/system/svc/restarter:default as their restarter.

Since restarters usually require distinct SCF configuration for the services they control, the facility does not provide a way for an administrator to change the restarter specified for a service.
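For example (output abridged and illustrative), inetadm with no arguments lists the services delegated to inetd, and inetadm -l shows the inetd-specific properties of one of them:

examplehost$ inetadm | grep telnet
enabled   online         svc:/network/telnet:default
examplehost$ inetadm -l network/telnet
SCOPE    NAME=VALUE
         name="telnet"
         endpoint_type="stream"
         proto="tcp6"
         wait="FALSE"
         exec="/usr/sbin/in.telnetd"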

2.2.6 The Service Configuration Facility

The enabled status, dependencies, method specifications, and other information for each service instance are stored in a specialized database introduced with SMF called the service configuration facility. SCF is implemented by the libscf(3LIB) library and svc.configd(1M) daemon, and svccfg(1M) provides the most direct access to SCF for command line users.

In addition to SMF-specific configuration, the libscf(3LIB) interfaces are documented so that services can store service-specific configuration in SCF as well.
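Individual SCF properties can also be read from the command line with svcprop(1). For example (the start method path shown is typical but may differ on your release):

examplehost$ svcprop -p general/enabled system/cron
true
examplehost$ svcprop -p start/exec system/cron
/lib/svc/method/svc-cron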

2.2.7 Health and Troubleshooting

Standard records of enabled status and states for each service permit an easy check for malfunctioning services. The svcs -x command, without arguments, identifies services that are enabled but not in the online state and attempts to diagnose why they are not running. When all enabled services are online, svcs -x exits without printing anything.

When a service managed by SMF is enabled but not running, investigation should start by retrieving the service manager's state for the service, usually with the svcs command:

examplehost$ svcs cron
STATE          STIME    FMRI
online         Mar_16   svc:/system/cron:default

If the state is maintenance, then the service manager's most recent attempt to start (or stop) the service failed. The svcs -x command may explain precisely why the service was placed in that state. The SMF log file for the service in /var/svc/log should also provide more information. Note that many services still maintain their own log files in service-specific locations.

When the problem with a service in the maintenance state is resolved, the svcadm clear command should be executed for the service. The service manager will re-evaluate the service's dependencies and start it, if appropriate.

If a service isn't running because it is in the offline state, SMF considers its dependencies to be unsatisfied. svcs -l will display the dependencies and their states, but if one of them is also offline, then following the chain can be tedious. svcs -x, when invoked for an offline service, will automatically follow dependency links to find the root cause of the problem, even if it is multiple links away.
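A minimal sketch of the recovery path for a service stuck in the maintenance state (output abbreviated and illustrative):

examplehost$ svcs -x system/cron
  (diagnosis, including a pointer to the service's log file)
examplehost# svcadm clear system/cron
examplehost$ svcs cron
STATE          STIME    FMRI
online         19:02:45 svc:/system/cron:default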

2.2.8 Service Manifests

To deliver an SMF service, the author must deliver a service manifest file into a subdirectory of /var/svc/manifest. These files conform to the XML file format standard and describe the SCF data SMF requires to start and interact with the service. On each boot, the service manifests in /var/svc/manifest are loaded into the SCF database by the special svc:/system/manifest-import:default service.

Service manifests can also be imported directly into the SCF repository with the svccfg import command. This allows new SMF services to be created, including SMF services to control services that were not adapted to SMF by their authors.
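For example, a hypothetical locally written manifest could be loaded and its service enabled like this (the manifest name and FMRI are made up; /var/svc/manifest/site is the conventional location for site-delivered manifests):

examplehost# svccfg import /var/svc/manifest/site/myapp.xml
examplehost# svcadm enable site/myapp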

2.2.9 Backup and Restore of SCF Data

SMF provides three methods for backing up SCF data.

2.2.9.1 Automatic

During each boot, SMF automatically stores a backup of persistent SCF data in a file whose path begins with /etc/svc/repository-boot-. Furthermore, whenever SMF notices that a file in a subdirectory of /var/svc/manifest has changed, the facility creates another backup of persistent SCF data after it has been updated according to the new files; the names of these backups begin with /etc/svc/repository-manifest_import-. In both cases, only the four most recent backups are retained and older copies are deleted. Two symbolic links, repository-boot and repository-manifest_import, are updated to refer to the latest copy of the respective backup type.

The SCF database may be restored by copying one of these files to /etc/svc/repository.db. However, this must not be done while the svc.configd daemon is executing.
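A rough sketch of such a restore, assuming the system has been booted from the failsafe entry so that svc.configd is not running and the damaged root file system is mounted at /a (the mount point and the choice of backup file are illustrative):

examplehost# cp /a/etc/svc/repository-boot /a/etc/svc/repository.db
examplehost# reboot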

2.2.9.2 Repository-wide

All persistent SCF data may be extracted with the svccfg archive command. It can be restored with the svccfg restore command.

2.2.9.3 Service-specific

The SCF data associated with the instances of a particular service may be extracted with the svccfg extract command. Note that the command only accepts service FMRIs and not instance FMRIs. To restore the service instances for such a file, delete the service with svccfg delete and import the file with svccfg import.
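As a sketch (the output file name is arbitrary, and the -s option is used here to select the service the extract subcommand operates on):

examplehost# svccfg -s system/cron extract > /var/tmp/system-cron.xml

Later, to restore the service and its instances:

examplehost# svccfg delete system/cron
examplehost# svccfg import /var/tmp/system-cron.xml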

2.3 Shutdown

Solaris provides two main mechanisms to shut down the operating system. They differ in how applications are stopped.

2.3.1 Application-Specific Shutdown

With appropriate arguments, the shutdown(1M) and init(1M) commands begin operating system shutdown by instructing SMF to stop all services. Since the Solaris 10 11/06 release, the facility complies by shutting down services in reverse-dependency order, so that each service is stopped before the services it depends on are stopped. Upon completion, the kernel flushes the file system buffers and powers off the computer, unless directed otherwise by the arguments.

As in Solaris 9, the kill init scripts (/etc/init.d/K*) for the appropriate run level are run at the beginning of shutdown. This happens in parallel with SMF's shutdown sequence.

If an SMF service takes longer to stop than the service's author specified, SMF will complain and start killing the service's processes once every second until they have exited.
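For example (the grace periods are arbitrary), the following commands power the system off or reboot it through this orderly, SMF-driven sequence:

examplehost# shutdown -y -g60 -i5
examplehost# shutdown -y -g0 -i6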

2.3.2 Application-Independent Shutdown

The reboot(1M), halt(1M), and poweroff(1M) commands skip both the SMF shutdown sequence explained previously and the init scripts, and instead stop applications by sending a SIGTERM signal to all processes. After a 5-second wait, any remaining processes are sent a SIGKILL signal before the kernel flushes the file system buffers and stops. Since these commands don't invoke the stop procedures provided by application authors, this method has a chance of stopping applications before they have written all of their data to nonvolatile storage.

 
