This major software update for Sun ZFS Storage Appliances contains numerous bug fixes and important firmware upgrades. As such, we strongly encourage administrators to apply this update at their earliest convenience. In addition, this release adds support for 3TB 7.2K hard drive storage within the Sun ZFS Storage 7120. More information can be found in the Maintenance:Hardware:Overview:7120 section of the online help. This release also contains performance enhancements that were used to achieve a new SPC-1 V1.12 benchmark result. For more information see the Oracle Corporation A00108 SPC-1 result at: http://www.storageperformance.org/results/benchmark_results_spc1/#oracle_spc1.
Appliances must be running the 2010.Q3.2.1 micro release or higher before updating to this release. In addition, this release includes update healthchecks that run automatically when an update is started, before the actual update proceeds from the prerequisite 2010.Q3.2.1 micro release or higher. If an update healthcheck fails, the update may abort. The update healthchecks help ensure that component issues that could impact an update are addressed; it is important to resolve all hardware component issues prior to performing an update. Due to a defect in Revision B3 SAS HBAs, which is exposed by changes in the 2011.1 release, the appliance kit update healthcheck software will prevent 7210, 7310, and 7410 appliances with Revision B3 SAS HBAs from upgrading to 2011.1. If this occurs, please contact Oracle Support. Revision C0 and later SAS HBAs do not have this issue.
This release applies to the following platforms:
- Sun Storage 7110
- Sun Storage 7210
- Sun Storage 7310
- Sun Storage 7410
- Sun Storage 7120
- Sun Storage 7320
- Sun Storage 7420
- Sun Storage 7000 Simulator
This release includes a variety of new features, including:
- Improved RMAN support for Oracle Exadata
- Improved ACL interoperability with SMB
- Replication enhancements - including self-replication
- InfiniBand enhancements - including better connectivity to Oracle Exalogic
- Datalink configuration enhancements - including custom jumbogram MTUs
- Improved fault diagnosis - including support for a variety of additional alerts
- Per-share rstchown support
This release also includes major performance improvements, including:
- Significant cluster rejoin performance improvements
- Significant AD Domain Controller failover time improvements
- Support for level-2 SMB Oplocks
- Significant zpool import speed improvements
- Significant NFS, iSER, iSCSI and Fibre Channel performance improvements due to elimination of data copying in critical datapaths
- ZFS RAIDZ read performance improvements
- Significant fairness improvements during ZFS resilver operations
- Significant Ethernet VLAN performance improvements
This release includes numerous bug fixes, including:
- Significant clustering stability fixes
- ZFS aclmode support restored and enhanced
- Assorted user interface and online help fixes
- Significant ZFS, NFS, SMB and FMA stability fixes
- Significant InfiniBand, iSER, iSCSI and Fibre Channel stability fixes
- Important firmware updates
|Title||Network Datalink Modifications Do Not Rename Routes|
|Related Bug IDs||6715567|
The Configuration/Network view permits a wide variety of networking configuration changes on the Sun Storage system. One such change is taking an existing network interface and associating it with a different network datalink, effectively moving the interface's IP addresses to a different physical link (or links, in the case of an aggregation). In this scenario, the network routes associated with the original interface are automatically deleted, and must be re-added by the administrator to the new interface. In some situations this may imply loss of a path to particular hosts until those routes are restored.
|Title||Appliance doesn't boot after removing first system disk|
|Related Bug IDs||6812465|
In a 7210 system, removing the first system disk will make the system unbootable, despite the presence of a second mirrored disk. To work around this issue, enter the BIOS boot menu and, under 'HDD boot order', modify the list so the first item is "[SCSI:#0300 ID00 LU]".
|Title||Data integrity and connectivity issues with NFS/RDMA|
|Related Bug IDs||6879948, 6870155, 6977462, 6977463|
NFS/RDMA is now supported, but only with Solaris 10 U9 and later clients.
|Title||Network interfaces may fail to come up in large jumbogram configurations|
|Related Bug IDs||6857490|
In systems with large numbers of network interfaces using jumbo frames, some network interfaces may fail to come up due to hardware resource limitations. Such network interfaces will be unavailable, but will not be shown as faulted in the BUI or CLI. If this occurs, turn off jumbo frames on some of the network interfaces.
|Title||Multi-pathed connectivity issues with SRP initiators|
|Related Bug IDs||6908898, 6911881, 6920633, 6920730, 6920927, 6924447, 6924889, 6925603|
In cluster configurations, Linux multi-path clients have experienced loss of access to shares on the appliance. If this happens, a new session or connection to the appliance may be required to resume I/O activity.
|Title||Rolling back after storage reconfiguration results in faulted pools|
|Related Bug IDs||6878243|
Rolling back to a previous release after reconfiguring storage will result in pool(s) appearing to be faulted. These pools are those that existed when the rollback target release was in use, not the pools that were configured using the more recent software. The software does not warn about this issue and does not attempt to preserve pool configuration across rollback. To work around this issue, after rolling back, unconfigure the storage pool(s) and then import the pools you had created using the newer software. Note that this will not succeed if there was a pool format change between the rollback target release and the newer release under which the pools were created; in that case, an error will result on import and the only solution will be to perform the upgrade successfully. In general, avoid this issue by not reconfiguring storage after an upgrade until the functionality of the new release has been validated.
|Title||Unanticipated error when cloning replicated projects with CIFS shares|
|Related Bug IDs||6917160|
When cloning replicated projects that are exported using the new "exported" property and shared via CIFS, you will see an error and the clone will fail. You can work around this by unexporting the project or share, or by unsharing it via CIFS, before attempting to create the clone.
|Title||Some FC paths may not be rediscovered after takeover/failback|
|Related Bug IDs||6920713|
After a takeover and subsequent failback of shared storage, Qlogic FC HBAs on Windows 2008 will occasionally not rediscover all paths. When observed in lab conditions, at least one path was always rediscovered. Moreover, when this did occur the path was always rediscovered upon initiator reboot. Other HBAs on Windows 2008 and Qlogic HBAs on other platforms do not exhibit this problem.
|Title||Unable to change resource allocation during initial cluster setup when using CLI|
|Platforms||7310C, 7410C, 7320C, 7420C|
|Related Bug IDs||6982615|
When performing initial cluster setup via the CLI, any attempt to change the storage controller to which a resource is allocated will result in an error message of the form error: bad property value "(other_controller)" (expecting "(controller)"). To work around this problem, use the BUI to perform initial cluster setup. Alternatively, complete cluster setup, log out of the CLI, log back in, and return to the configuration cluster resources context to finish resource allocation and initial failback.
|Title||Multiple attempts to disable remote replication service hang management interface|
|Related Bug IDs||6969007|
Multiple consecutive attempts to disable remote replication may render the appliance management stack unusable, requiring a maintenance system restart to recover. To avoid this problem, do not attempt to disable remote replication more than once.
|Title||Chassis service LED is not always illuminated in response to a failure|
|Platforms||7120, 7320, 7320C, 7420, 7420C|
|Related Bug IDs||6956136|
In some situations, the chassis service LED on the controller will not be illuminated following a failure condition. Notification of the failure will still function normally via the user interface; via alerts, including email, syslog, and SNMP if configured; and via Oracle Automatic Service Request ("Phone Home").
|Title||iSCSI IOs sometimes fail on cluster takeover/failback when using Solaris MPxIO clients|
|Platforms||7310C, 7320C, 7410C, 7420C|
|Related Bug IDs||6959608|
iSCSI I/O failures have been seen during takeover/failback when iSCSI targets that are separately owned by each controller in a cluster are part of the same target group. To work around this issue, iSCSI target groups should only contain iSCSI targets owned by a single controller. This also implies that the default target group should not be used in this case.
|Title||HCA port may be reported as down|
|Related Bug IDs||6978400|
HCA ports may be reported as down after a reboot. If the datalinks and interfaces overlaid on the port are functioning, this reported state is incorrect.
|Title||Nearly full storage pool impairs performance and manageability|
|Related Bug IDs||6525233, 6975500, 6978596|
Storage pools at more than 80% capacity may experience degraded I/O performance, especially when performing write operations. This degradation can become severe when the pool exceeds 90% full and can result in impaired manageability as the free space available in the storage pool approaches zero. This impairment may include very lengthy boot times, slow BUI/CLI operation, management hangs, inability to cancel an in-progress scrub, and very lengthy or indefinite delays while restarting services such as NFS and SMB. Best practices, as described in the product documentation, call for expanding available storage or deleting unneeded data when a storage pool approaches these thresholds. Storage pool consumption can be tracked via the BUI or CLI; refer to the product documentation for details.
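As a rough sketch of these thresholds (this is not an appliance command; only the 80% and 90% figures come from this note), the behavior can be summarized as:

```shell
# Hypothetical helper, not part of the appliance CLI: classify a pool's
# capacity usage against the thresholds described in this note.
pool_usage_level() {
  used_pct=$1   # pool capacity used, as an integer percentage
  if [ "$used_pct" -gt 90 ]; then
    echo "severe"     # severe degradation; manageability may be impaired
  elif [ "$used_pct" -gt 80 ]; then
    echo "degraded"   # degraded I/O performance, especially for writes
  else
    echo "ok"
  fi
}
```

For example, `pool_usage_level 85` prints `degraded`, the point at which the best practices above call for expanding storage or deleting unneeded data.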
|Title||Solaris 10 iSCSI client failures under heavy load|
|Related Bug IDs||6976733|
Prior to Solaris 10 Update 10 (S10u10), if the appliance is under sufficient load to cause iSCSI commands to take longer than 60 seconds, the Solaris iSCSI initiator may accidentally reuse the Initiator Task Tag (ITT) value for a pending task, which will cause the initiator to close the connection with the log message iscsi connection(7/3f) closing connection – target requested reason:0x7. Subsequent SCSI commands may also fail once the initiator enters this state and may require client-side filesystems to be unmounted, checked if appropriate, and remounted before they can be used again. If this occurs, either upgrade the initiator to S10u10 (or later), or reduce the appliance load so that iSCSI commands can be serviced in under 60 seconds. This can be observed using the Analytics statistic “iSCSI operations broken down by latency.”
|Title||Management UI hangs on takeover or management restart with thousands of shares or LUNs|
|Related Bug IDs||6980997, 6979837|
When a cluster takeover occurs or the management subsystem is restarted either following an internal error or via the maintenance system restart CLI command, management functionality may hang in the presence of thousands of shares or LUNs. The likelihood of this is increased if the controller is under heavy I/O load. The threshold at which this occurs will vary with load and system model and configuration; smaller systems such as the 7110 and 7120 may hit these limits at lower levels than controllers with more CPUs and DRAM, which can support more shares and LUNs and greater loads. Best practices include testing cluster takeover and failback times under realistic workloads prior to placing the system into production. If you have a very large number of shares or LUNs, avoid restarting the management subsystem unless directed to do so by your service provider.
|Title||Moving shares between projects can disrupt client I/O|
|Related Bug IDs||6979504|
When moving a share from one project to another, client I/O may be interrupted. Do not move shares between projects while client I/O is under way unless the client-side application is known to be resilient to temporary interruptions of this type.
|Title||Shadow migration hangs management UI when source has thousands of files in the root directory|
|Related Bug IDs||6967206, 6976109|
Shadow migration sources containing many thousands of files in the root directory will take many minutes to migrate, and portions of the appliance management UI may be unusable until migration completes. This problem will be exacerbated if the source is particularly slow or the target system is also under heavy load. If this problem is encountered, do not reboot the controller or restart the management subsystem; instead, wait for migration to complete. When planning shadow migration, avoid this filesystem layout if possible; placing the files in a single subdirectory beneath the root or migrating from a higher-level share on the source will avoid the problem.
|Title||Shadow migration does not report certain errors at the filesystem root|
|Related Bug IDs||6890508|
Errors migrating files at the root of the source filesystem may not be visible to the administrator. Migration will be reported "in progress" but no progress is made. This may occur when attempting to migrate large files via NFSv2; instead, use NFSv3 or later when large files are present.
|Title||Repair of faulted pool does not trigger sharing|
|Related Bug IDs||6975228|
When a faulted pool is repaired, the shares and LUNs on the pool are not automatically made available to clients. There are two main ways to enter this state:
- Booting the appliance with storage enclosures disconnected, powered off, or missing disks
- Performing a cluster takeover at a time when some or all of the storage enclosures and/or disks making up one or more pool were detached from the surviving controller or powered off
When the missing devices become available, controllers with SAS-1 storage subsystems will automatically repair the affected storage pools. Controllers with SAS-2 storage subsystems will not; the administrator must repair the storage pool resource using the resource management CLI or BUI functionality. See product documentation for details. In neither case, however, will the repair of the storage pool cause the shares and LUNs to become available. To work around this issue, restart the management subsystem on the affected controller using the maintenance system restart command in the CLI. This is applicable ONLY following repair of a faulted pool as described above.
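The restart step is entered from the appliance CLI on the affected controller; a minimal session sketch (the hostname prompt is illustrative) is:

```
hostname:> maintenance system restart
```

Again, run this only after the faulted pool has been repaired as described above.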
|Title||DFS links may be inaccessible from some Windows clients|
|Related Bug IDs||6962610|
Under rare circumstances, some Microsoft Windows 2008/Vista and Microsoft Windows XP clients may be unable to access DFS links on the appliance, receiving the error Access is denied. Windows 2003 is believed not to be affected. The proximate cause of this problem is that the client incorrectly communicates with the DFS share as if it were an ordinary share; however, the root cause is not known. The problem has been observed with other DFS root servers and is not specific to the Storage 7000 appliance family. At present the only known way to resolve this issue is via reinstallation of the affected client system. If you encounter this problem, please contact your storage service provider and your Microsoft Windows service provider.
|Title||NDMP service may enter the maintenance state when changing properties|
|Related Bug IDs||6979723|
When changing properties of the NDMP service, it may enter the maintenance state due to a timeout. This will be reflected in the NDMP service log with an entry of the form stop method timed out. If this occurs, restart the NDMP service as described in the product documentation. The changes made to service properties will be preserved and do not need to be made again.
|Title||Suboptimal PCIe link training|
|Platforms||7120, 7320, 7320C|
|Related Bug IDs||6979482|
PCI Express 2.0 compatible cards may train to PCI Express 1.X speeds. This may impact performance of I/O through the affected card. Detection software for this condition runs during boot and sends a fault message observable through the Maintenance fault logs. To recover from this condition, the system must be rebooted.
|Title||ZFS pool fullness conditions incompletely documented|
|Related Bug IDs||6525233|
The product documentation notes that ZFS pools filled beyond 80% of capacity may deliver degraded performance. However, it should also state that a ZFS pool with any vdev (RAID-protected stripe or mirror) more than 80% full may suffer from the same problem. This is not a defect in the product, but should be documented to aid in planning. Add storage to an existing pool before it approaches 80% full to avoid this problem.
|Title||Solaris/VxVM FC initiator timeouts|
|Platforms||7310C, 7410C, 7320C, 7420C|
|Related Bug IDs||6937477, 6951173|
Symantec has enhanced their code to handle I/O delays during takeover and/or failback. This work was covered under Symantec bug number e2046696 ("fixes for dmp_lun_retry_timeout handling issues found during SUN7x10 array qualification"). Symantec has released hot fix VRTSvxvm 5.1RP1_HF3 (for Solaris SPARC and x86) containing this fix; the subsequent 5.1RP2 patch and 5.1SP1 major update will also include it. If you are using VxVM on Solaris as an FC initiator attached to a clustered appliance, obtain and install these patches from Symantec.
|Title||Missing data in certain Analytics drilldowns|
|Related Bug IDs||6958579, 6959575|
When drilling down on a statistic that existed prior to the current system startup, certain statistics may show no data in the drilldown. This can occur if the original statistic required looking up a DNS name or initiator alias, as would typically be the case for statistics broken down by client hostname (for files) or initiator (for blocks). The problem occurs only intermittently and only with some statistics. To work around this issue, disable or delete the affected dataset(s), then restart the management software stack using the 'maintenance system restart' command in the CLI. Once the statistics are recreated or reenabled, subsequent drilldowns should contain the correct data.
|Title||Solaris initiators may lose access to FC LUNs during cluster takeover|
|Platforms||7310C, 7410C, 7320C, 7420C|
|Related Bug IDs||6959914|
Initiator software in Solaris 10 Update 9 and earlier, and in Solaris 11 build 153 and earlier, incorrectly handles the repeated INQUIRY command failures experienced when using multipathing during a 7000 cluster takeover or failback. There is no workaround; patches and a fix in a subsequent Solaris 10 update are planned.
|Title||Multiple SMB DFS roots can be created|
|Related Bug IDs||6979350|
It is possible to create more than the maximum of one standalone DFS root on an appliance if multiple pools are available. Do not create multiple DFS roots.
|Title||Intermittent probe-based IPMP link failures|
|Related Bug IDs||6979470|
An appliance under heavy load may occasionally detect spurious IPMP link failures. This is part of the nature of probe-based failure detection and is not a defect. The product documentation explains the algorithm used in determining link failure; the probe packets it uses may be delayed if the system is under heavy load.
|Title||Backup of "system" pool|
|Related Bug IDs||6988252|
The NDMP backup subsystem may incorrectly allow backup operations involving filesystems on the system pool, which contains the appliance software. Attempting to back up these filesystems will exhaust space on the system pool, which will interfere with correct operation of the appliance. Do not attempt to back up any filesystem in the system pool. If you have done so in the past, check the utilization of the pool as described in the Maintenance/System section of the product documentation. If the pool is full or nearly full, contact your authorized service provider.
|Title||Boot loader not updated when upgrade performed with faulty system disk|
|Related Bug IDs||6975232|
If an upgrade is performed when one of the two system disks is faulty, its boot loader configuration will not be updated. If the faulty disk is the primary boot disk (slot 6 in the 7310, 7310C, 7410, and 7410C, slot 0 in the 7320, 7320C, 7420 and 7420C, and the rear slot 0 in the 7120), the system may boot obsolete software following the upgrade. Replace any failed system disks and allow resilvering to complete prior to beginning an upgrade.
|Title||Configuration restore does not work on clustered systems|
|Platforms||7310C, 7320C, 7410C, 7420C|
|Related Bug IDs||6982025, 7024182, 7024518|
If a system is configured in a cluster, or has ever been configured in a cluster, the configuration restore feature does not work properly. This can lead to appliance panics, incorrect configuration, or a hung system. At the present time, configuration restore should only be used on stand-alone systems.
|Title||Node fails to join cluster after root password change prior to rollback|
|Platforms||7310C, 7410C, 7320C, 7420C|
|Related Bug IDs||6961359|
In a cluster configuration, if the root password is changed prior to a rollback to an older release, a cluster join failure can occur on that node. If this occurs, change the root password on the rolled-back node to match the other node and reboot it. Once both nodes are at the same version and operating as a cluster, the root password can be changed again as needed.
|Title||SIM 3525 firmware may cause path loss|
|Platforms||All platforms using J4410 SAS-2 Disk Shelf|
|Related Bug IDs||7064704|
After upgrading the J4410 SAS-2 disk shelf SIM firmware to version 3525, some disk shelves may report only one path. Also, if the appliance is configured as a cluster, one head may report one path while the other head reports two. To work around this, re-seat the SAS-2 cables one at a time, starting with the topmost affected disk shelf, until both paths are restored to all disk shelves. Monitor the disk shelf path count after each re-seat so that only cables on affected disk shelves are re-seated.
|Title||Explicit warnings should be issued for mixed drive speeds and sizes|
|Related Bug IDs||7071877, 7071878|
When data disk drives with different speeds (e.g., 7,200 rpm and 15,000 rpm) are combined in a disk shelf or pool, drive performance will be impacted: the pool will only run as fast as the slowest drive. Likewise, when data disk drives of different sizes (e.g., 2 TB and 600 GB) are combined in a pool, pool capacity will be impacted. In such cases, ZFS will sometimes use the size of the smallest disk for some or all of the disks within the storage pool, thereby reducing the overall expected capacity. The sizes used depend on the storage profile, layout, and combination of devices. Disk drives with different speeds or sizes should not be mixed within disk shelves or pools.
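As a rough illustration of the capacity effect (a sketch under the assumption that every disk in the stripe is sized to its smallest member, which the note says can happen in some cases):

```shell
# Hypothetical calculation, not an appliance command: expected raw capacity
# (in GB) when ZFS sizes every disk in a stripe to its smallest member.
raw_stripe_capacity() {
  min="" n=0
  for size_gb in "$@"; do
    if [ -z "$min" ] || [ "$size_gb" -lt "$min" ]; then
      min=$size_gb     # track the smallest disk seen so far
    fi
    n=$((n + 1))
  done
  echo $((n * min))    # every disk contributes only the smallest size
}
```

Under this assumption, mixing two 2000 GB drives with one 600 GB drive yields 1800 GB of raw capacity rather than the 4600 GB the drives could otherwise provide.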
|Title||Disk drive speed is not considered when replacing drives|
|Related Bug IDs||7082114|
It's possible to replace a disk drive with one that has a different drive speed. This can impact overall pool performance. Disk drives should only be replaced with drives that have the same size and speed.
|Title||Disk shelf cable changes can cause client side I/O failures|
|Related Bug IDs||7091557|
In some cases, disk shelf SAS cable changes can cause client side I/O failures. When making disk shelf cable changes, it's important to pause 30 seconds between disconnecting a cable and reconnecting it.
|Title||CPU utilization appears to be higher|
|Related Bug IDs||6952967|
The 2010.Q1 and 2010.Q3 releases incorrectly calculated the “CPU: utilization” statistic. This has been fixed in the 2011.1 release, but, as a consequence, the same workload appears to use twice as much CPU as before. This is merely an artifact of fixing the calculation and thus has no effect on the actual capability or limits of the appliance.
|Title||FTP transfer of 2011.1 release from 2010.Q3 fails|
|Related Bug IDs||7070586|
Due to a defect in the 2010.Q3 release, FTP transfers that exceed 60 seconds may fail with FTP response timeout. Due to the file size of the 2011.1 release image, this will likely happen when using a 100 Mb/s (or slower) network connection. If this occurs, use HTTP (via the appliance BUI) or a faster network connection to transfer the 2011.1 release image.
|Title||May see Clustron link alerts under extreme load|
|Platforms||7310C, 7320C, 7410C, 7420C|
|Related Bug IDs||6799505, 7063308, 7067776|
As of the 2011.1 release, link monitoring has been enabled for the Ethernet link between Clustron cards. Under extreme load, the heartbeat packets used to monitor this link can be delayed, causing spurious link alerts to be generated. Because all three interconnect links between Clustron cards must fail to trigger a failover, these spurious alerts do not affect operation of the appliance cluster.
|Title||ACLs with unresolvable SIDs are inaccessible via NFSv4|
|Related Bug IDs||6941854|
If a file has an ACL entry with a security identity (SID) that cannot be resolved to a name (either due to a transient failure, such as an Active Directory outage, or because the SID has no name, such as the “system” SID), NFSv4 clients (which, per the NFSv4 standard, use a “user@domain” format when passing ACLs between the client and server) will be unable to view – and thus use – those ACLs. For a Solaris client, this will manifest as ls: can't read ACL on <file>: Not owner. If this occurs, either resolve the transient failure or use a CIFS client to remove the problematic ACL entry, as appropriate.
|Title||SMB operation during AD outages may create damaged ACLs|
|Related Bug IDs||6844652|
SMB operation while the Active Directory (AD) domain controllers are unavailable can yield damaged ACL entries. In particular, problems can arise if Windows groups are present in the ACLs on dataset roots because the domain controllers are not immediately available at system startup. This can be avoided by not including Windows groups in dataset root ACLs. Instead, consider mapping those groups to UNIX groups, so the ACLs contain the UNIX group information. A CIFS client can be used to repair damaged entries.
|Title||Cannot modify MTU of datalink in an IPMP group|
|Related Bug IDs||7036132|
As of the 2011.1 release, the datalink MTU can be explicitly set via the appliance BUI. However, attempting to change the MTU of a datalink that has an IP interface in an IPMP group causes the datalink to enter the maintenance state. To avoid this, destroy and recreate the datalink with the desired MTU before placing its IP interface into the IPMP group.
|Title||SNMP does not work on appliances with > 12 IP interfaces|
|Related Bug IDs||6998845, 7018550|
If more than 12 IP interfaces are configured, the SNMP service may go into the maintenance state or saturate a CPU, and will often fill its log file with error on subcontainer ‘interface container’ insert (-1). If any of these problems occur, either disable SNMP or reduce the number of IP interfaces.
|Title||Cannot set identical quota and reservation on a project|
|Related Bug IDs||7100682|
Attempting to set a quota and reservation to the same value at the project level will fail. As a workaround, set the reservation to a slightly smaller value than the quota.
|Title||Resilver can severely impact I/O latency|
|Related Bug IDs||7012341, 7025155|
During a disk resilver operation (e.g., due to activating a spare after a disk failure), latency for I/O associated with the containing pool may be severely impacted. For example, the “NFSv3 operations broken down by latency” Analytics statistic can show 2-4 second response times. After the resilver completes, I/O latency returns to normal.
|Title||Windows 2008 R2 IB client may fail to ping appliance|
|Related Bug IDs||7098153|
Due to what appears to be an initiator-side problem, Windows 2008 R2 InfiniBand (IB) initiators may be initially unable to access the appliance. If this occurs, disable and re-enable the IB port on the initiator side by navigating to Network Connections, right-clicking on the appropriate IB port, selecting “disable,” right-clicking again, and selecting “enable.”
|Title||Accrued analytics data can fill system pool|
|Related Bug IDs||6973870, 7017827, 7020845|
If Analytics datasets run for long periods of time, the system disk can become full. Depending on when this occurs, a wide range of problems may be observed, such as a hang at boot, inability to log in, or a hung BUI. If this occurs, please contact Oracle Support for help identifying and deleting unnecessary Analytics data.
|Title||Hot-swap of Readzilla or system disk can fail to be recognized|
|Related Bug IDs||7023548|
If a Readzilla (read cache SSD) or system disk failure occurs, a reboot may be required for the replacement disk to be correctly recognized.
|Title||NDMP-ZFS backup limitations for clones|
|Related Bug IDs||7045412|
The following limitations apply when backing up clones with the “zfs” backup type:
- To successfully back up and restore a clone by itself (i.e., without backing up its containing project), ZFS_MODE=dataset must be set in the data management application.
- To successfully back up and restore a project that contains a clone whose origin resides in the same project as the clone, use ZFS_MODE=recursive (the default mode).
- To successfully back up a project containing a clone whose origin resides in a project different from the clone, back up the shares of the project individually using ZFS_MODE=dataset. (This applies even to shares that are not clones, although at least one will be a clone.)
These limitations may be lifted in a future release. For more information on NDMP and the “zfs” backup type, refer to http://www.oracle.com/technetwork/articles/systems-hardware-architecture/ndmp-whitepaper-192164.pdf
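These three rules can be summarized as a decision helper. The function below is hypothetical (it is not an appliance or DMA command); only the ZFS_MODE values and the cases come from this note:

```shell
# Hypothetical helper summarizing the NDMP-ZFS clone backup rules above.
#   $1 (target): "clone" (a clone by itself) or "project" (a containing project)
#   $2 (origin): "same-project", "other-project", or "none"
choose_zfs_mode() {
  target=$1
  origin=$2
  if [ "$target" = "clone" ]; then
    echo "ZFS_MODE=dataset"             # a clone by itself needs dataset mode
  elif [ "$origin" = "other-project" ]; then
    echo "per-share ZFS_MODE=dataset"   # back up each share individually
  else
    echo "ZFS_MODE=recursive"           # the default mode suffices
  fi
}
```

For example, a project whose clone's origin lives in another project maps to backing up each share individually with ZFS_MODE=dataset.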
|Title||Unavailable shadow migration source can impact NFS service|
|Related Bug IDs||7026945|
If the source of a shadow migration becomes unavailable, other NFS shares being served from the same appliance may become inaccessible. If this occurs, restoring access to the shadow source – or disabling shadow migration – will allow other NFS access to resume.
|Title||Revision B3 SAS HBAs not permitted with 2011.1 release|
|Platforms||7210, 7310, 7410|
|Related Bug IDs||7102346|
Due to a defect with Revision B3 SAS HBAs, which is exposed by changes in the 2011.1 release, 7210, 7310, and 7410 appliances with Revision B3 SAS HBAs will be prevented by the appliance kit update healthcheck software from upgrading to 2011.1. If this occurs, please contact Oracle Support. Revision C0 SAS HBAs or later do not have this issue.
|Title||"ZFS" should be used instead of "Sun" in /etc/multipath.conf|
|Platforms||7120, 7320, 7420|
|Related Bug IDs||7119902, 7121536|
When configuring Linux FC Multipath client initiators for use with 7120, 7320 and 7420 platforms, the product string in the /etc/multipath.conf file should be "ZFS Storage 7x20". 7x10 platforms should continue to use the "Sun Storage 7x10" product string.
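A minimal /etc/multipath.conf device stanza for the 7x20 platforms might look like the following. Only the product string comes from this note; the vendor string and overall layout are assumptions that should be checked against Oracle's Linux multipath configuration documentation:

```
devices {
    device {
        # 7x20 platforms: product string must be "ZFS Storage 7x20"
        # (7x10 platforms keep "Sun Storage 7x10")
        vendor  "SUN"
        product "ZFS Storage 7x20"
    }
}
```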
|Title||Disks may be offline following firmware upgrade|
|Platforms||All platforms with attached J4410 disk shelves|
|Related Bug IDs||7126475|
After performing an update to this release, disks may be left offline after the completion of firmware upgrade to 061A. This problem is not observable in the UI and can impact storage pool redundancy characteristics. Depending on the configuration of your system, the following procedures are required to bring online any disks left offline by the firmware upgrade process.
Cluster Configuration: After completing the instructions for a cluster update in the Online Help section (Maintenance:System:Updates), Head A (the first cluster head to be updated) should be rebooted. This can be accomplished by logging in to Head B and issuing a Takeover. Any offline disks on Head A will be automatically brought back online following the reboot. The cluster is now ready for normal operation. If you do not know which head performed the firmware upgrades, both heads will need to be rebooted. This can be accomplished by repeating the takeover step described above for both heads, one at a time.
Standalone Configuration: After all firmware upgrades have completed, the system should be rebooted to ensure that all disks are brought back online.
|Title||7x10C systems with e1000g cards may be unable to rejoin cluster|
|Related Bug IDs||6950388, 7132157|
Appliances with both Cluster Controller 100 cards (PN: 371-3024-01) and 4x1Gb Copper Ethernet cards (PN: 375-3481-01) may be unable to rejoin the cluster with this release. If this occurs, you will need to roll back to the previous software release.
|Title||Disk firmware upgrade continues to show as pending in BUI|
|Related Bug IDs||7132721|
After performing an update to this release, disk firmware upgrades have been shown as pending in the BUI with no upgrades pending in the CLI via the maintenance system updates show command. This is strictly a reporting error in the BUI. However – and ONLY after verifying that ALL upgrades have completed via the CLI – the CLI command maintenance system restart can be used to reset the appliance management stack and correct the BUI reporting error.
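A sketch of the verify-then-restart sequence in the appliance CLI (both commands are named above; the hostname prompt is illustrative):

```
hostname:> maintenance system updates show
hostname:> maintenance system restart
```

Issue the restart only after the first command confirms that no firmware upgrades remain pending.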