
LOGICAL PARTITIONS

Creating logical partitions

To create a logical partition, begin in the HMC workplace window. Select Systems Management → Servers and then select the name of the server. This action takes you to the view shown in Figure.

 

Expand Configuration and then expand Create Logical partition to open the area to create an AIX, Linux, or Virtual I/O Server partition.

Creating an AIX or a Linux partition

Note: The options and window views for creating a Virtual I/O Server (VIO Server) partition are the same as those that we present in this section. Thus, we do not document the steps for the VIO Server.

To create an AIX or a Linux partition, follow these steps:

1. Select Configuration → Create Logical partition → AIX or Linux to open the window shown in Figure. Here you can set the partition ID and specify the partition’s name. Then, select Next.


Figure: Create an AIX partition

2. Enter a profile name for this partition and click Next (Figure). You can then create a partition with either shared or dedicated processors on your server.


Figure: Create an AIX partition

Configuring a shared processor partition

This section describes how to create a partition with a shared processor.

To configure a shared processor partition:

  1. Select Shared and then select Next, as shown in Figure.


Figure: Create a shared processor partition

2. Specify the processing units for the partition, as well as any settings for virtual processors, as shown in Figure. The sections that follow discuss these settings in detail.

3. After you have entered data in each of the fields (or accepted the defaults), select Next and then proceed to “Setting partition memory”.


Figure: Shared partition settings

Processing Settings area

In the Processing Settings area, you must specify the minimum number of processors that you want the shared processor partition to acquire, the desired amount, and the maximum upper limit allowed for the partition.

The values in each field can range from 0.1 to the total number of processors in the managed server, in increments of 0.1 of a processing unit.

Each field defines the following information:

  • Minimum processing units

The absolute minimum number of processing units required from the shared processing pool for this partition to become active. If the number in this field is not available from the shared processing pool, this partition cannot be activated. This value has a direct bearing on dynamic logical partitioning (DLPAR), as the minimum processing units value represents the smallest number of processing units the partition can have as the result of a DLPAR deallocation.

  • Desired processing units

This number must be greater than or equal to the amount set in Minimum processing units, and represents the amount of processing capacity requested above the minimum. If the minimum is set to 2.3 and the desired to 4.1, the partition can become active with any amount between 2.3 and 4.1 processing units, depending on what is available from the shared processing pool. When a partition is activated, it requests processing units starting at the desired value and decrements in steps of 0.1 of a processor until it reaches the minimum value. If the minimum is not met, the partition does not become active. Desired processing units governs only the number of processing units a partition can become active with. If the partition is made uncapped, the hypervisor can let the partition exceed its desired value, depending on how great the peak need is and what is available from the shared processing pool.

  • Maximum processing units

This setting represents the absolute maximum number of processors this partition can own at any given time, and must be equal to or greater than the Desired processing units. This value has a direct bearing on dynamic logical partitioning (DLPAR), as the maximum processing units value represents the largest number of processing units the partition can have as the result of a DLPAR allocation. Furthermore, while this value affects DLPAR allocation, it does not affect the processor allocation handled by the hypervisor for idle processor allocation during processing peaks.

 

Note: Whether your partition is capped or uncapped, the minimum value for Maximum processing units is equal to the value specified for Desired processing units.
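The activation behavior described above can be sketched as follows. This is an illustrative helper, not an HMC API; the shared pool is simulated with a simple available-capacity number.

```python
# Sketch of shared-processor activation, assuming the behavior described
# above: start at the desired value and step down by 0.1 until the pool
# can satisfy the request; fail if even the minimum is unavailable.
# Values are held in tenths of a processing unit to avoid float drift.

def activation_units(minimum, desired, pool_available):
    """Return the processing units the partition activates with, or None."""
    lo = round(minimum * 10)            # work in tenths of a unit
    hi = round(desired * 10)
    avail = round(pool_available * 10)
    if not (1 <= lo <= hi):
        raise ValueError("need 0.1 <= minimum <= desired")
    for units in range(hi, lo - 1, -1):  # desired down to minimum, by 0.1
        if units <= avail:
            return units / 10
    return None                          # minimum not met: stays inactive

print(activation_units(2.3, 4.1, 3.0))   # pool short of desired -> 3.0
print(activation_units(2.3, 4.1, 2.0))   # pool below minimum -> None
```

With minimum 2.3 and desired 4.1 (the example above), a pool with 3.0 free units activates the partition at 3.0, while a pool with only 2.0 free units leaves it inactive.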

Uncapped option

The Uncapped option represents whether you want the HMC to consider the partition capped or uncapped. Whether a partition is capped or uncapped, when it is activated it takes on a processor value somewhere between the minimum and desired processing units, depending on what is available from the shared resource pool. However, if a partition is capped, it can gain processing power only through a DLPAR allocation and otherwise stays at the value given to it at the time of activation. If the partition is uncapped, it can exceed the value set in Desired processing units and take the number of processing units from the shared processor pool that it needs. This is not visible from the HMC view of the partition, but you can check the number of processors owned by the partition from the operating system level with the appropriate commands.

The weight field defaults to 128 and can range from 0 to 256. Setting this number below 128 decreases a partition’s priority for processor allocation, and increasing it above 128, up to 256, increases a partition’s priority for processor allocation. If all partitions are set to 128 (or another equivalent number), then all partitions have equal access to the shared processor pool. If a partition’s uncapped weight is set to 0, then that partition is considered capped, and it never owns more processing units than the number specified in Desired processing units.
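Uncapped weights divide spare pool capacity proportionally. The following is a minimal sketch of that proportional split only; the hypervisor's actual dispatch algorithm is more involved, and the function name is illustrative, not an HMC interface.

```python
# Sketch: divide spare capacity from the shared pool among uncapped
# partitions in proportion to their uncapped weights. A weight of 0
# behaves as capped and receives no share of the excess.

def excess_shares(weights, spare_units):
    """Proportional split of spare processing units by uncapped weight."""
    total = sum(weights)
    if total == 0:
        return [0.0] * len(weights)
    return [spare_units * w / total for w in weights]

# Three partitions competing for 2.0 spare processing units: two at the
# default weight of 128, one effectively capped at weight 0.
print(excess_shares([128, 128, 0], 2.0))  # -> [1.0, 1.0, 0.0]
```

The default weight of 128 gives every partition an equal claim; raising one partition's weight to 256 would give it twice the share of the other default-weight partitions.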

Virtual processors area

The values that are set in the Virtual processors area of this window govern how many processors are presented to the operating system of the partition. You must have a minimum of one virtual processor per actual processor, and you can have as many as 10 virtual processors per physical processing unit. As a general recommendation, a partition requires at least as many virtual processors as it has actual processors, and a partition should be configured with no more than twice the number of virtual processors as actual processors. Each field defines the following information:

  • Minimum virtual processors

Your partition must have at least one virtual processor for every part of a physical processor assigned to the partition. For example, if you have assigned 2.5 processing units to the partition, the minimum number of virtual processors is three. Furthermore, this value represents the lowest number of virtual processors that can be owned by this partition as the result of a DLPAR operation.

  • Desired virtual processors

The desired virtual processors value has to be greater than or equal to the value set in Minimum virtual processors, and as a general guideline about twice the amount set in Desired processing units. Performance with virtual processing can vary depending on the application, and you might need to experiment with the desired virtual processors value before you find the perfect value for this field and your implementation.

  • Maximum virtual processors

You can have only 10 virtual processors per processing unit. Therefore, you cannot assign a value greater than 10 times the Maximum processing units value as set in “Processing Settings area” on page 232. It is recommended, though not required, to set this number to twice the value entered in Maximum processing units.

Finally, this value represents the maximum number of virtual processors that this partition can have as the result of a DLPAR operation. 

 

Note: The desired virtual processors value, along with the resources available in the shared resource pool, is the only value that can set an effective limit on the amount of resources that can be utilized by an uncapped partition.

 

Note: Regardless of the number of processors in the server or the processing units owned by the partition, there is an absolute upper limit of 64 virtual processors per partition with the HMC V7 software.
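The virtual-processor rules above (at least one virtual processor for every whole or partial processing unit, at most 10 per unit, and the absolute ceiling of 64) can be sketched as a small helper. The function is illustrative, not an HMC API.

```python
import math

# Sketch of the virtual-processor bounds described above: the minimum is
# one virtual processor per whole or partial processing unit, and the
# maximum is 10 per processing unit, capped at 64 per partition (HMC V7).

def virtual_processor_bounds(processing_units):
    """Return (minimum, maximum) virtual processors for the given units."""
    minimum = math.ceil(processing_units)           # e.g. 2.5 units -> 3
    maximum = min(round(processing_units * 10), 64)  # 10 per unit, cap 64
    return minimum, maximum

print(virtual_processor_bounds(2.5))  # -> (3, 25)
```

This reproduces the example in the text: 2.5 processing units require a minimum of three virtual processors; a large partition with 7.0 units would hit the 64-virtual-processor ceiling before the 10-per-unit limit.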

Configuring a dedicated processor partition

This section describes how to create a partition with a dedicated processor. If you want to create a partition with a shared processor, refer to “Configuring a shared processor partition” on page 231.

To configure a dedicated processor partition:

  1. Select Dedicated and then select Next, as shown in Figure.


Figure: Create dedicated processor partition

2. Specify the number of minimum, desired, and maximum processors for the partition, as shown in Figure.


Figure: Processor settings with dedicated processors

3. After you have entered the values for the fields, select Next.

 

Setting partition memory

Now, you need to set the partition memory, as shown in Figure.


Figure:  Set partition memory

 

The minimum, desired, and maximum settings are similar to their processor counterparts:

  • Minimum memory

Represents the absolute memory required to make the partition active. If the amount of memory specified under minimum is not available on the managed server then the partition cannot become active.

  • Desired memory

Specifies the amount of memory beyond the minimum that can be allocated to the partition. If the minimum is set at 256 MB and the desired is set at 4 GB, then the partition in question can become active with anywhere between 256 MB and 4 GB.

  • Maximum memory

Represents the absolute maximum amount of memory for this partition, and must be greater than or equal to the number specified in Desired memory. If it is set to the same amount as desired, the partition is considered capped; if this number equals the total amount of memory in the server, the partition is considered uncapped. After you have made your memory selections, select Next.
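Memory activation follows the same minimum/desired pattern as processing units. A minimal sketch under that assumption, with amounts in MB; the function name is illustrative, not an HMC API.

```python
# Sketch of memory activation following the rules above: the partition
# activates with as much memory as is free, between desired and minimum,
# and fails to activate if even the minimum is unavailable. Amounts in MB.

def activation_memory_mb(minimum, desired, server_free):
    """Return the memory the partition activates with, or None."""
    if minimum > desired:
        raise ValueError("minimum must not exceed desired")
    if server_free < minimum:
        return None                     # partition cannot become active
    return min(desired, server_free)    # take up to desired, no more

print(activation_memory_mb(256, 4096, 1024))  # -> 1024
print(activation_memory_mb(256, 4096, 128))   # -> None
```

Using the example from the text, a minimum of 256 MB and a desired of 4 GB lets the partition come up with anything in that range; with only 128 MB free, it does not activate.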

Configuring physical I/O

On the I/O window, as shown in Figure, you can select I/O resources for the partition to own. After you have made your selections in this window, select Next.

 Configuring virtual resources

If you have adapters assigned to the Virtual I/O server (as explained in Chapter 9, “Virtual I/O” on page 259), you can create a virtual adapter share for your partition. Follow these steps:

1. Select Actions → Create → SCSI Adapter to create a virtual SCSI share. Alternatively, select Actions → Create → Ethernet Adapter to create a shared Ethernet adapter.

 2. You can specify your server partition, get System VIOS info, and specify a tag for adapter identification, as shown in Figure. When you have entered all of the data, select OK.


Figure: Create virtual SCSI adapter

You are returned to the virtual adapters window as shown in Figure. When you are done creating all the virtual resources, select Next.

 

 

 Optional Settings window

On the Optional Settings window shown in Figure you can:

  • Enable connection monitoring
  • Start the partition with the managed system automatically
  • Enable redundant error path reporting

You can also specify one of the various boot modes that are available.

After you have made your selections in this window, click Next to continue.


Figure: Optional settings

 Enabling connection monitoring

Select this option to enable connection monitoring between the HMC and the logical partition that is associated with this partition profile. When connection monitoring is enabled, the Service Focal Point (SFP) application periodically tests the communications channel between this logical partition and the HMC. If the channel does not work, the SFP application generates a serviceable event in the SFP log. This ensures that the communications channel can carry service requests from the logical partition to the HMC when needed. If this option is not selected, the SFP application still collects service request information when there are issues on the managed system. This option only controls whether the SFP application automatically tests the connection and generates a serviceable event if the channel does not work. Clear this option if you do not want the SFP application to monitor the communications channel between the HMC and the logical partition associated with this partition profile.

Starting with managed system automatically

This option shows whether this partition profile sets the managed system to activate the logical partition that is associated with this partition profile automatically when you power on the managed system. When you power on a managed system, the managed system is set to activate certain logical partitions automatically. After these logical partitions are activated, you must activate any remaining logical partitions manually. When you activate this partition profile, the partition profile overwrites the current setting for this logical partition with this setting. If this option is selected, the partition profile sets the managed system to activate this logical partition automatically the next time the managed system is powered on. If this option is not selected, the partition profile sets the managed system so that you must activate this logical partition manually the next time the managed system is powered on.

Enabling redundant error path reporting

Select this option to enable the reporting of server common hardware errors from this logical partition to the HMC. The service processor is the primary path for reporting server common hardware errors to the HMC. Selecting this option allows you to set up redundant error reporting paths in addition to the error reporting path provided by the service processor. Server common hardware errors include errors in processors, memory, power subsystems, the service processor, the system unit vital product data (VPD), non-volatile random access memory (NVRAM), I/O unit bus transport (RIO and PCI), clustering hardware, and switch hardware. Server common hardware errors do not include errors in I/O processors (IOPs), I/O adapters (IOAs), or I/O device hardware. If this option is selected, this logical partition reports server common hardware errors and partition hardware errors to the HMC. If this option is not selected, this logical partition reports only partition hardware errors to the HMC. This option is available only if the server firmware allows you to enable redundant error path reporting (the Redundant Error Path Reporting Capable option on the Capabilities tab in Managed System Properties is True).

Boot modes

Select the default boot mode that is associated with this partition profile. When you activate this partition profile, the system uses this boot mode to start the operating system on the logical partition unless you specify otherwise when activating the partition profile. (The boot mode applies only to AIX, Linux, and Virtual I/O server logical partitions. This area is unavailable for i5/OS logical partitions.) Valid boot modes are as follows:

  • Normal

The logical partition starts up as normal. (This is the mode that you use to perform most everyday tasks.)

  • System Management Services (SMS)

The logical partition boots to the System Management Services (SMS) menu.

  • Diagnostic with default boot list (DIAG_DEFAULT)

The logical partition boots using the default boot list that is stored in the system firmware. This mode is normally used to boot customer diagnostics from the CD-ROM drive. Use this boot mode to run standalone diagnostics.

  • Diagnostic with stored boot list (DIAG_STORED)

The logical partition performs a service mode boot using the service mode boot list saved in NVRAM. Use this boot mode to run online diagnostics.

  • Open Firmware OK prompt (OPEN_FIRMWARE)

The logical partition boots to the open firmware prompt. This option is used by service personnel to obtain additional debug information.

Profile summary

When you arrive at the profile summary as shown in Figure 7-19, you can review your partition profile selections. If you see anything that you want to change, select Back to get to the appropriate window and to make changes. If you are satisfied with the data represented in the Profile Summary, select Finish to create your partition.


Figure: Profile summary

After you select Finish, the window shown in Figure displays for a few minutes. When this window closes, return to your main HMC view; the partition that you created is listed under the existing partitions on your managed server.

 

Figure: Partition creation status window

Virtual I/O

Virtual I/O provides the capability for a single physical I/O adapter and disk to be used by multiple logical partitions of the same server, allowing consolidation of I/O resources and minimizing the number of I/O adapters that are required.

Understanding Virtual I/O

Virtual I/O describes the ability to share physical I/O resources between partitions in the form of virtual adapter cards that are located in the managed system. Each logical partition typically requires one I/O slot for disk attachment and another I/O slot for network attachment. In the past, these I/O slot requirements would have been physical requirements. To overcome these physical limitations, I/O resources are shared with Virtual I/O. In the case of Virtual Ethernet, a physical Ethernet adapter is not required for communication between LPARs. Virtual SCSI provides the means to share I/O resources for SCSI storage devices.

POWER Hypervisor for Virtual I/O

The POWER Hypervisor™ provides the interconnection for the partitions. To use the functionalities of Virtual I/O, a partition uses a virtual adapter as shown in Figure. The POWER Hypervisor provides the partition with a view of an adapter that has the appearance of an I/O adapter, which might or might not correspond to a physical I/O adapter.

 

Figure: Role of POWER Hypervisor for Virtual I/O

Virtual I/O Server

The Virtual I/O Server can link the physical resources to the virtual resources. By this linking, it provides virtual storage and Shared Ethernet Adapter capability to client logical partitions on the system. It allows physical adapters with attached disks on the Virtual I/O Server to be shared by one or more client partitions.

Virtual I/O Server mainly provides two functions:

  • Serves virtual SCSI devices to clients,
  • Provides a Shared Ethernet Adapter for virtual Ethernet

Virtual I/O Server partitions are not intended to run applications or for general user logins. The Virtual I/O Server is installed in its own partition. The Virtual I/O Server partition is a special type of partition, marked as such on the first window of the Create Logical Partition wizard. Currently, the Virtual I/O Server is implemented as a customized AIX partition; however, the interface to the system is abstracted using a secure shell-based command-line interface (CLI). When a partition is created as this type of partition, only the Virtual I/O Server software boot image boots successfully when the partition is activated. The Virtual I/O Server should be configured with sufficient resources. The most important resource is processor capacity: if a Virtual I/O Server has to serve many resources to other partitions, you must ensure that enough processor power is available.

Virtual SCSI

Virtual SCSI is based on a client/server relationship. A Virtual I/O Server partition owns the physical resources, and logical client partitions access the virtual SCSI resources provided by the Virtual I/O Server partition. The Virtual I/O Server partition has physically attached I/O devices and exports one or more of these devices to other partitions, as shown in Figure.

 

Figure: Virtual SCSI overview

 The client partition is a partition that has a virtual client adapter node defined in its device tree and relies on the Virtual I/O Server partition to provide access to one or more block interface devices. Virtual SCSI requires POWER5 or POWER6 hardware with the Advanced POWER Virtualization feature activated.

Client/server communications

In the Figure, the virtual SCSI adapters on the server and the client are connected through the hypervisor. The virtual SCSI adapter drivers (server and client) communicate control data through the hypervisor. When data is transferred between the backing storage and the client partition, it is transferred to and from the client’s data buffer by the DMA controller on the physical adapter card using the redirected SCSI Remote Direct Memory Access (RDMA) protocol. This facility enables the Virtual I/O Server to securely target memory pages on the client to support virtual SCSI.

Adding a virtual SCSI server adapter

You can create the virtual adapters at one of two points: during installation of the Virtual I/O Server, or by adding them to an already existing Virtual I/O Server. In this chapter, we assume that the Virtual I/O Server has already been created.

Before activating a server, you can add the virtual adapter using the Manage Profiles task. For an activated server, you can do this only through a dynamic LPAR operation if you want to use the virtual adapters immediately. This procedure requires that the network is configured with a connection to the HMC to allow for dynamic LPAR.

Now, you can add the adapter through dynamic LPAR. To add the adapter:

1. Select the activated Virtual I/O Server partition in the HMC. Then click Virtual Adapters in the Dynamic Logical Partitioning section of the Task pane. The Virtual Adapters window opens.

2. Click Actions → Create → SCSI Adapter, as shown in Figure.