ak-2011.04.24.4.0 Release Notes

2011.1.4.0

This minor release of the Sun ZFS Storage Appliance software contains significant bug fixes for all supported platforms. Please carefully review the list of CRs that have been addressed and all known issues prior to updating.

This release requires appliances to be running the 2010.Q3.2.1 micro release or higher prior to updating to this release. In addition, this release includes update health checks that are performed automatically when an update is started, prior to the actual update. If an update health check fails, it can cause an update to abort. The update health checks help ensure component issues that may impact an update are addressed. It is important to resolve all hardware component issues prior to performing an update.

Deferred Updates

When updating from a 2010.Q3 release to a 2011.1 release, the following deferred updates are available and may be reviewed in the Maintenance System BUI screen. See the "Maintenance:System:Updates#Deferred_Updates" section in the online help for important information on deferred updates before applying them.

1. RAIDZ/Mirror Deferred Update (Improved RAID performance)
This deferred update improves both latency and throughput on several important workloads. These improvements rely on a ZFS pool upgrade provided by this update. Applying this update is equivalent to upgrading the on-disk ZFS pool to version 29.

2. Optional Child Directory Deferred Update (Improved snapshot performance)
This deferred update improves list retrieval performance and replication deletion performance by improving dataset rename speed. These improvements rely on a ZFS pool upgrade provided by this update. Before this update has been applied, the system will be able to retrieve lists and delete replications, but will do so using the old, much slower, recursive rename code. Applying this update is equivalent to upgrading the on-disk ZFS pool to version 31.

Supported Platforms

Issues Addressed

The following CRs have been fixed in this release:

6199185 netname2user() code has a limit for the number of groups
6949066 User can't belong to more than 16 groups. Impacts AUTH_SYS authentication
6950788 Duplicate packets with link aggregation
6962304 metaslab_min_alloc_size may be too big
6969007 replication can deadlock if service disabled
6973592 config restore can break phone home config
6976827 Unable to remove files once zfs filesystem is totally filled
6994789 "iommu0/1 running" noise on boot
7005269 rdsv3 communication stops with unprocessed completion queue entries
7009010 Unable to delete ZFS file on NFSv4 share once refquota has been reached
7012308 apic_intrmap_init() has a bug in using the apic_mode variable
7012679 Configurable bypass for vcnumber zero
7013410 apix: MSI interrupt is not delivered when interrupt remapping is active on X4470 server
7014783 long recovery time for IPoIB-CM during fault injection
7020715 apix: rdsv3 hangs with an unprocessed event in hermon event queue on G5/X4800
7022974 "NOTICE: DMAR ACPI table: skipping unsupported unit type ATSR" on fresh installed otoro
7023195 hermon: workaround for "rdsv3 hangs with an unprocessed event on G5/X4800"
7026639 system hangs when handling IPIs
7034010 while transitioning to LINK DOWN ibp may report LINK UP before finally reporting LINK DOWN
7034185 bad drive of a redundant disk causing zfs hanging
7038163 Update thebe firmware to 1.11.00
7039799 late initialization causing issues during boot time
7042087 want dragonfly SF04
7044547 kernel rpc should call KEY_GETCRED_3 and get all available gids
7044600 keyserv dumps core when the remote procedure KEY_GETCRED_3 is called
7044891 groups aren't always sorted in the credential
7047829 AUTH_LOOPBACK corrupts data when > 32 groups are available
7052192 Several parts of the kernel are inefficient when using multiple groups
7052195 The backend can call netname2user with an improperly sized array
7054207 NFS/RDMA needs to support more than 320 connections
7059955 NFS Server does not xdr_encode the XID in the rpc header for the reply
7065730 long failover time seen due to retrying connection establishment on timeouts
7070662 Hang during failback at end of Q3.4 upgrade from unresponsive I/O
7076000 smb_is_stream_name() can validate paths of the form d:\\a\\\b.x as streams resulting in panic
7078625 Unconfiguring an HCA under IPMP puts the other link in the group in failed state
7082623 SMB_TRANS2_FIND_FIRST2 can return wrong filename when file is a symbolic link
7084892 ZFSSA Clone creation takes more than 30s under Write sequential I/O over IB and 10G network on NFSV4
7086159 disabling replication service in a cluster can deadlock
7086975 want Firefly 0D70 firmware
7088895 Need support for Intel PVR 512Gb readzilla SSDs
7089913 mismatch between replication service composite and class state
7091582 Devices with excessive transport errors are not being unconfigured
7096339 deadlock possible between ipnet_close and ipnet_dispatch
7096953 Write cache is disabled after upgrading disk firmware
7105703 txg_sync_thread panic when performing dsl_dataset_destroy_sync()
7109554 7410C cluster head panic in tcp_closei_local() (2010.Q3.4.2)
7113528 CONN_DEC_REF panic
7118662 Online Help: Document troubleshooting update health check failures
7121162 After configuring ipmp interface in active/standby, it shows as active/active
7129155 Unable to disable/ unset/ turn off the nbmand flag on an existing share
7129503 zfssa panics in zil_clean(), zl_clean_taskq is NULL
7131577 Leak of nfs_xuio_t in rfs3_read()/rfs4_op_read() when VOP_READ() failed
7132997 leaked blocks post-7004800
7141645 fzap_cursor_retrieve() exhibits pathological behavior on once large now empty zap objects
7141708 2011.Q1 : intermittent akd hangs - mutex owned by a thread stuck in zfs land
7144107 Service Bundle collected even if cancel button pressed in admin gui
7144580 HBA sgllen limitations cause "no supported WRITE BUFFER command found"
7149108 nas_iscsit_initiator_update() leak leads to 2.2Gb akd immediately after startup
7153404 Viper-C disk drive only works with new BP (back drilled)
7154280 After updating to 2011.1.1.0 NT 4 and Mac Tiger smb mounts fail until smb is restarted
7154499 ZFSSA nlockmgr service drops into maintenance during cluster failback
7156020 when running many backup sessions in parallel ndmpd hits its file descriptor limit
7158701 Assertion panic in common/fs/zfs/zil.c on a T5240 system in s11u1_13
7159153 aksh crashes on badly-named updates post-7097526
7159709 Need support for STEC Gen 4 2.5" and 3.5" logzillas
7160021 nfsv3 breakdowns by size broken in 2011.1 post-7090133
7160373 tst.prune.py fails because it prunes an unsaved dataset
7162302 nfsd hang with one thread stuck in mir_timer_start()
7164019 DC discovery should be triggered upon NIC addition (after clustered head failback)
7164138 Request to add cfgadm -alv and format in support bundle
7164139 Request to add ::memstat,::arc in support bundle
7164143 Add iostat -En & kernel threadlist to be collected in appropriate dir of support bundle
7165075 600G Viper-C disk drive needs to be restricted to use in new mid-plane (back drilled)
7165471 Port fix for 4766188 to libumem
7165641 akd crashes in ak_msg_getitems
7165935 akmacbind should honor system settings
7166972 tst.prune.py must delete datasets before executing tests
7167399 Intel PVR code should deliver metadata xml file onto the appliance
7167528 want 2+ versions PVR firmware to facilitate upgrade testing
7167608 Need to deliver CMOS image for SW 1.6.1 Lynxplus
7167616 Need to deliver CMOS image for SW 1.3 for Otoro+
7168171 panic: nv_power_reset attempts to acquire a mutex for an invalid port
7168302 want Eagle 0B25 firmware
7168762 Remove zfs:metaslab_min_alloc_size tunable from system files
7169262 want Firebird 0703 firmware
7170273 system hang: threads pinned behind nv_power_reset nv_reset
7174111 STEC Gen 4 is ready for 940C
7175940 2011.1.4.0:Appliance went into continuous panic state during ak upgrade, IB RDMA I/O were running
7175973 aksync should report actual cache setting, not expected setting
7176423 STEC Gen 4 is ready for 9410
7184837 smbfs_iod_door_open() calls on a door with the wrong argument

Known Issues

Release Note RN001
Title Network Datalink Modifications Do Not Rename Routes
Platforms All
Related Bug IDs 6715567

The Configuration/Network view permits a wide variety of networking configuration changes on the Sun Storage system. One such change is taking an existing network interface and associating it with a different network datalink, effectively moving the interface's IP addresses to a different physical link (or links, in the case of an aggregation). In this scenario, the network routes associated with the original interface are automatically deleted, and must be re-added by the administrator to the new interface. In some situations this may imply loss of a path to particular hosts until those routes are restored.

Release Note RN002
Title Appliance doesn't boot after removing first system disk
Platforms 7210
Related Bug IDs 6812465

On a 7210 system, removing the first system disk will make the system unbootable, despite the presence of a second mirrored disk. To work around this issue, enter the BIOS boot menu and, under 'HDD boot order', modify the list so that the first item is "[SCSI:#0300 ID00 LU]".

Release Note RN003
Title Data integrity and connectivity issues with NFS/RDMA
Platforms All
Related Bug IDs 6879948, 6870155, 6977462, 6977463

NFS/RDMA is supported only with Solaris 10 U9 and later clients.

Release Note RN004
Title Network interfaces may fail to come up in large jumbogram configurations
Platforms All
Related Bug IDs 6857490

In systems with large numbers of network interfaces using jumbo frames, some network interfaces may fail to come up due to hardware resource limitations. Such network interfaces will be unavailable, but will not be shown as faulted in the BUI or CLI. If this occurs, turn off jumbo frames on some of the network interfaces.

Release Note RN005
Title Multi-pathed connectivity issues with SRP initiators
Platforms All
Related Bug IDs 6908898, 6911881, 6920633, 6920730, 6920927, 6924447, 6924889, 6925603

In cluster configurations, Linux multi-path clients have experienced loss of access to shares on the appliance. If this happens, a new session or connection to the appliance may be required to resume I/O activity.

Release Note RN007
Title Rolling back after storage reconfiguration results in faulted pools
Platforms All
Related Bug IDs 6878243

Rolling back to a previous release after reconfiguring storage will result in pool(s) appearing to be faulted. These pools are the ones that existed when the rollback target release was in use, not the pools that were configured using the more recent software. The software does not warn about this issue and does not attempt to preserve pool configuration across a rollback. To work around this issue, after rolling back, unconfigure the storage pool(s) and then import the pools that were created using the newer software. Note that this will not succeed if there was a pool format change between the rollback target release and the newer release under which the pools were created; in that case, an error will result on import and the only solution is to perform the upgrade successfully. In general, this issue is best avoided by not reconfiguring storage after an upgrade until the functionality of the new release has been validated.

Release Note RN009
Title Unanticipated error when cloning replicated projects with CIFS shares
Platforms All
Related Bug IDs 6917160

When cloning replicated projects that are exported using the new "exported" property and shared via CIFS, you will see an error and the clone will fail. You can work around this by unexporting the project or share or by unsharing it via CIFS before attempting to create the clone.

Release Note RN010
Title Some FC paths may not be rediscovered after takeover/failback
Platforms All
Related Bug IDs 6920713

After a takeover and subsequent failback of shared storage, Qlogic FC HBAs on Windows 2008 will occasionally not rediscover all paths. When observed in lab conditions, at least one path was always rediscovered. Moreover, when this did occur the path was always rediscovered upon initiator reboot. Other HBAs on Windows 2008 and Qlogic HBAs on other platforms do not exhibit this problem.

Release Note RN013
Title Unable to change resource allocation during initial cluster setup when using CLI
Platforms 7310C,7410C,7320C,7420C
Related Bug IDs 6982615

When performing initial cluster setup via the CLI, any attempt to change the storage controller to which a resource is allocated will result in an error message of the form error: bad property value "(other_controller)" (expecting "(controller)"). To work around this problem, use the BUI to perform initial cluster setup. Alternatively, complete cluster setup, log out of the CLI, log back in, and return to the configuration cluster resources context to finish resource allocation and initial failback.
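
As a hedged illustration of the second workaround, the CLI session below is only a sketch: the resource name zfs/pool-0 and the peer controller name head2 are placeholders, and the exact resource names, property names, and commands should be confirmed against the CLI help and product documentation for your release.

    hostname:> configuration cluster resources
    hostname:configuration cluster resources> select zfs/pool-0
    hostname:configuration cluster resources zfs/pool-0> set owner=head2
    hostname:configuration cluster resources zfs/pool-0> commit

The initial failback can then be performed from the configuration cluster context (or from the BUI), as described in the product documentation.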

Release Note RN017
Title Chassis service LED is not always illuminated in response to hardware faults
Platforms 7120 7320 7320C 7420 7420C
Related Bug IDs 6956136

In some situations, the chassis service LED on the controller will not be illuminated following a failure condition. Notification of the failure via the user interface, via alerts (including email, syslog, and SNMP, if configured), and via Oracle Automatic Service Request ("Phone Home") will function normally.

Release Note RN018
Title iSCSI IOs sometimes fail on cluster takeover/failback when using Solaris MPxIO clients
Platforms 7310C 7320C 7410C 7420C
Related Bug IDs 6959608

iSCSI I/O failures have been seen during takeover/failback when iSCSI targets that are separately owned by each controller in a cluster are part of the same target group. To work around this issue, iSCSI target groups should contain only iSCSI targets owned by a single controller. This also implies that the default target group should not be used in this case.

Release Note RN019
Title HCA port may be reported as down
Platforms All
Related Bug IDs 6978400

HCA ports may be reported as down after a reboot. If the datalinks and interfaces built on those ports are functioning, the reported state is incorrect.

Release Note RN022
Title nearly full storage pool impairs performance and manageability
Platforms All
Related Bug IDs 6525233, 6975500, 6978596

Storage pools at more than 80% capacity may experience degraded I/O performance, especially when performing write operations. This degradation can become severe when the pool exceeds 90% full and can result in impaired manageability as the free space available in the storage pool approaches zero. This impairment may include very lengthy boot times, slow BUI/CLI operation, management hangs, inability to cancel an in-progress scrub, and very lengthy or indefinite delays while restarting services such as NFS and SMB. Best practices, as described in the product documentation, call for expanding available storage or deleting unneeded data when a storage pool approaches these thresholds. Storage pool consumption can be tracked via the BUI or CLI; refer to the product documentation for details.

Release Note RN023
Title Solaris 10 iSCSI client failures under heavy load
Platforms All
Related Bug IDs 6976733

Prior to Solaris 10 Update 10 (S10u10), if the appliance is under sufficient load to cause iSCSI commands to take longer than 60 seconds, the Solaris iSCSI initiator may accidentally reuse the Initiator Task Tag (ITT) value for a pending task, which will cause the initiator to close the connection with the log message iscsi connection(7/3f) closing connection – target requested reason:0x7. Subsequent SCSI commands may also fail once the initiator enters this state and may require client-side filesystems to be unmounted, checked if appropriate, and remounted before they can be used again. If this occurs, either upgrade the initiator to S10u10 (or later), or reduce the appliance load so that iSCSI commands can be serviced in under 60 seconds. This can be observed using the Analytics statistic “iSCSI operations broken down by latency.”
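
In addition to the appliance-side Analytics statistic mentioned above, one hedged client-side check (not from this release note) is to watch the Solaris initiator's per-device service times with iostat; the asvc_t column is reported in milliseconds, so values approaching 60,000 for the iSCSI LUNs indicate commands near the 60-second threshold:

    # on the Solaris client, sample extended device statistics every 5 seconds
    iostat -xn 5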

Release Note RN025
Title management UI hangs on takeover or management restart with thousands of shares or LUNs
Platforms All
Related Bug IDs 6980997, 6979837

When a cluster takeover occurs or the management subsystem is restarted either following an internal error or via the maintenance system restart CLI command, management functionality may hang in the presence of thousands of shares or LUNs. The likelihood of this is increased if the controller is under heavy I/O load. The threshold at which this occurs will vary with load and system model and configuration; smaller systems such as the 7110 and 7120 may hit these limits at lower levels than controllers with more CPUs and DRAM, which can support more shares and LUNs and greater loads. Best Practices include testing cluster takeover and failback times under realistic workloads prior to placing the system into production. If you have a very large number of shares or LUNs, avoid restarting the management subsystem unless directed to do so by your service provider.

Release Note RN026
Title moving shares between projects can disrupt client I/O
Platforms All
Related Bug IDs 6979504

When moving a share from one project to another, client I/O may be interrupted. Do not move shares between projects while client I/O is under way unless the client-side application is known to be resilient to temporary interruptions of this type.

Release Note RN027
Title shadow migration hangs management UI when source has thousands of files in the root directory
Platforms All
Related Bug IDs 6967206, 6976109

Shadow migration sources containing many thousands of files in the root directory will take many minutes to migrate, and portions of the appliance management UI may be unusable until migration completes. This problem will be exacerbated if the source is particularly slow or the target system is also under heavy load. If this problem is encountered, do not reboot the controller or restart the management subsystem; instead, wait for migration to complete. When planning shadow migration, avoid this filesystem layout if possible; placing the files in a single subdirectory beneath the root or migrating from a higher-level share on the source will avoid the problem.

Release Note RN028
Title shadow migration does not report certain errors at the filesystem root
Platforms All
Related Bug IDs 6890508

Errors migrating files at the root of the source filesystem may not be visible to the administrator. Migration will be reported "in progress" but no progress is made. This may occur when attempting to migrate large files via NFSv2; instead, use NFSv3 or later when large files are present.

Release Note RN029
Title repair of faulted pool does not trigger sharing
Platforms All
Related Bug IDs 6975228

When a faulted pool is repaired, the shares and LUNs on the pool are not automatically made available to clients. There are two main ways to enter this state:

  • Booting the appliance with storage enclosures disconnected, powered off, or missing disks
  • Performing a cluster takeover at a time when some or all of the storage enclosures and/or disks making up one or more pools were detached from the surviving controller or powered off

When the missing devices become available, controllers with SAS-1 storage subsystems will automatically repair the affected storage pools. Controllers with SAS-2 storage subsystems will not; the administrator must repair the storage pool resource using the resource management CLI or BUI functionality. See the product documentation for details. In neither case, however, will the repair of the storage pool cause the shares and LUNs to become available. To work around this issue, restart the management subsystem on the affected controller using the maintenance system restart command in the CLI. This is applicable ONLY following repair of a faulted pool as described above.

Release Note RN030
Title DFS links may be inaccessible from some Windows clients
Platforms All
Related Bug IDs 6962610

Under rare circumstances, some Microsoft Windows 2008/Vista and Microsoft Windows XP clients may be unable to access DFS links on the appliance, receiving the error Access is denied. Windows 2003 is believed not to be affected. The proximate cause of this problem is that the client incorrectly communicates with the DFS share as if it were an ordinary share; however, the root cause is not known. The problem has been observed with other DFS root servers and is not specific to the Storage 7000 appliance family. At present the only known way to resolve this issue is via reinstallation of the affected client system. If you encounter this problem, please contact your storage service provider and your Microsoft Windows service provider.

Release Note RN032
Title NDMP service may enter the maintenance state when changing properties
Platforms All
Related Bug IDs 6979723

When changing properties of the NDMP service, it may enter the maintenance state due to a timeout. This will be reflected in the NDMP service log with an entry of the form stop method timed out. If this occurs, restart the NDMP service as described in the product documentation. The changes made to service properties will be preserved and do not need to be made again.

Release Note RN035
Title Suboptimal PCIe link training
Platforms 7120,7320,7320C
Related Bug IDs 6979482

PCI Express 2.0-compatible cards may train to PCI Express 1.x speeds, which may impact the performance of I/O through the affected card. Detection software for this condition runs during boot and sends a fault message observable through the Maintenance fault logs. To recover from this condition, the system must be rebooted.

Release Note RN038
Title ZFS pool fullness conditions incompletely documented
Platforms All
Related Bug IDs 6525233

The product documentation notes that ZFS pools filled beyond 80% of capacity may deliver degraded performance. However, it should also state that a ZFS pool with any vdev (RAID-protected stripe or mirror) more than 80% full may suffer from the same problem. This is not a defect in the product, but should be documented to aid in planning. Add storage to an existing pool before it approaches 80% full to avoid this problem.
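
As a hedged illustration with hypothetical capacities (raw sizes, ignoring RAID and metadata overhead), a pool can be well under the 80% aggregate threshold while one of its vdevs has already crossed it:

    vdev-0: 10 TB capacity,  8.5 TB used  ->  85% full (above the threshold)
    vdev-1: 30 TB capacity, 12.0 TB used  ->  40% full
    pool:   40 TB capacity, 20.5 TB used  -> ~51% full overall

Even though the pool as a whole is only about half full, the pool may already exhibit the degraded behavior described above because vdev-0 has crossed the per-vdev threshold.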

Release Note RN039
Title Solaris/VxVM FC initiator timeouts
Platforms 7310C 7410C 7320C 7420C
Related Bug IDs 6937477, 6951173

Symantec has enhanced its code to handle I/O delays during takeover and/or failback. This work was covered under Symantec bug number e2046696 (fixes for dmp_lun_retry_timeout handling issues found during SUN7x10 array qualification). Symantec created hot fix VRTSvxvm 5.1RP1_HF3 (for Solaris SPARC and x86) containing this fix; the subsequent patch 5.1RP2 and major update 5.1SP1 will also include these changes. Obtain and install these patches from Symantec if you are using VxVM on Solaris as an FC initiator attached to a clustered appliance.

Release Note RN040
Title Missing data in certain Analytics drilldowns
Platforms All
Related Bug IDs 6958579, 6959575

When drilling down on a statistic that existed prior to the current system startup, certain statistics may show no data in the drilldown. This can occur if the original statistic required looking up a DNS name or initiator alias, as would typically be the case for statistics broken down by client hostname (for files) or initiator (for blocks). The problem occurs only intermittently and only with some statistics. To work around this issue, disable or delete the affected dataset(s), then restart the management software stack using the 'maintenance system restart' command in the CLI. Once the statistics are recreated or reenabled, subsequent drilldowns should contain the correct data.

Release Note RN041
Title Solaris initiators may lose access to FC LUNs during Cluster takeover
Platforms 7310C 7410C 7320C 7420C
Related Bug IDs 6959914

Initiator software in Solaris 10 Update 9 and older releases, as well as Solaris 11 build 153 and older, incorrectly handles repeated INQUIRY command failures experienced when using multipathing during a 7000 cluster takeover or failback. There is no workaround, but patches and a fix in a subsequent Solaris 10 update are planned.

Release Note RN043
Title Multiple SMB DFS roots can be created
Platforms All
Related Bug IDs 6979350

Although the appliance supports a maximum of one standalone DFS root, it is possible to create more than one if multiple pools are available. Do not create multiple DFS roots.

Release Note RN044
Title Intermittent probe-based IPMP link failures
Platforms All
Related Bug IDs 6979470

An appliance under heavy load may occasionally detect spurious IPMP link failures. This is part of the nature of probe-based failure detection and is not a defect. The product documentation explains the algorithm used in determining link failure; the probe packets it uses may be delayed if the system is under heavy load.

Release Note RN047
Title Backup of "system" pool
Platforms All
Related Bug IDs 6988252

The NDMP backup subsystem may incorrectly allow backup operations involving filesystems on the system pool, which contains the appliance software. Attempting to back up these filesystems will result in exhaustion of space on the system pool, which will interfere with correct operation of the appliance. Do not attempt to back up any filesystem in the system pool. If you have done so in the past, check the utilization of the pool as described in the Maintenance/System section of the product documentation. If the pool is full or nearly full, contact your authorized service provider.

Release Note RN050
Title Boot loader not updated when upgrade performed with faulty system disk
Platforms All
Related Bug IDs 6975232

If an upgrade is performed when one of the two system disks is faulty, its boot loader configuration will not be updated. If the faulty disk is the primary boot disk (slot 6 in the 7310, 7310C, 7410, and 7410C, slot 0 in the 7320, 7320C, 7420 and 7420C, and the rear slot 0 in the 7120), the system may boot obsolete software following the upgrade. Replace any failed system disks and allow resilvering to complete prior to beginning an upgrade.

Release Note RN051
Title Configuration restore does not work on clustered systems
Platforms 7310C,7320C,7410C,7420C
Related Bug IDs 6982025,7024182,7024518

If a system is configured in a cluster, or has ever been configured in a cluster, the configuration restore feature does not work properly. This can lead to appliance panics, incorrect configuration, or a hung system. At the present time, configuration restore should only be used on stand-alone systems.

Release Note RN052
Title Node fails to join cluster after root password change prior to rollback
Platforms 7310C, 7410C, 7320C, 7420C
Related Bug IDs 6961359

In a cluster configuration, if the root password is changed prior to a rollback to an older release, a cluster join failure can occur on that node. If this occurs, change the root password on the node that was rolled back to match the other node, then reboot it. Once both nodes are at the same version and operating as a cluster, the root password can be changed again as needed.

Release Note RN059
Title Explicit warnings should be issued for mixed drive speeds and sizes
Platforms All
Related Bug IDs 7071877, 7071878

When data disk drives with different speeds (e.g., 7,200 rpm and 15,000 rpm) are combined in a disk shelf or pool, drive performance will be impacted; the pool will only run as fast as the slowest drive. Likewise, when data disk drives of different sizes (e.g., 2 TB and 600 GB) are combined in a pool, pool capacity will be impacted: ZFS may use the size of the smallest-capacity disk for some or all of the disks within the storage pool, thereby reducing the overall expected capacity. The sizes used will depend on the storage profile, layout, and combination of devices. Disk drives with different speeds or sizes should not be mixed within disk shelves or pools.
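
As a hedged illustration using the drive sizes mentioned above (raw capacities, ignoring formatting and reservation overhead), a hypothetical mirrored pair built from mismatched drives is limited by the smaller drive:

    mirror of 1 x 2 TB + 1 x 600 GB drive  ->  ~600 GB usable
    capacity stranded on the 2 TB drive:   2 TB - 600 GB = ~1.4 TB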

Release Note RN060
Title Disk drive speed is not considered when replacing drives
Platforms All
Related Bug IDs 7082114

It's possible to replace a disk drive with one that has a different drive speed. This can impact overall pool performance. Disk drives should only be replaced with drives that have the same size and speed.

Release Note RN064
Title FTP transfer of 2011.1 release from 2010.Q3 fails
Platforms All
Related Bug IDs 7070586

Due to a defect in the 2010.Q3 release, FTP transfers that exceed 60 seconds may fail with FTP response timeout. Due to the file size of the 2011.1 release image, this will likely happen when using a 100 Mb/s (or slower) network connection. If this occurs, use HTTP (via the appliance BUI) or a faster network connection to transfer the 2011.1 release image.

Release Note RN067
Title SMB operation during AD outages may create damaged ACLs
Platforms All
Related Bug IDs 6844652

SMB operation while the Active Directory (AD) domain controllers are unavailable can yield damaged ACL entries. In particular, problems can arise if Windows groups are present in the ACLs on dataset roots because the domain controllers are not immediately available at system startup. This can be avoided by not including Windows groups in dataset root ACLs. Instead, consider mapping those groups to UNIX groups, so the ACLs contain the UNIX group information. A CIFS client can be used to repair damaged entries.

Release Note RN068
Title Cannot modify MTU of datalink in an IPMP group
Platforms All
Related Bug IDs 7036132

As of the 2011.1 release, the datalink MTU can be explicitly set via the appliance BUI. However, attempting to change the MTU of a datalink with an IP interface in an IPMP group causes the datalink to enter the maintenance state. If this occurs, destroy the datalink and recreate it with the desired MTU before placing it into the IPMP group.

Release Note RN069
Title SNMP does not work on appliances with > 12 IP interfaces
Platforms All
Related Bug IDs 6998845, 7018550

If more than 12 IP interfaces are configured, the SNMP service may go into the maintenance state or saturate a CPU, and will often fill its log file with error on subcontainer ‘interface container’ insert (-1). If any of these problems occur, either disable SNMP or reduce the number of IP interfaces.

Release Note RN071
Title Resilver can severely impact I/O latency
Platforms All
Related Bug IDs 7012341, 7025155

During a disk resilver operation (e.g., due to activating a spare after a disk failure), latency for I/O associated with the containing pool may be severely impacted. For example, the “NFSv3 operations broken down by latency” Analytics statistic can show 2-4 second response times. After the resilver completes, I/O latency returns to normal.

Release Note RN072
Title Windows 2008 R2 IB client may fail to ping appliance
Platforms All
Related Bug IDs 7098153

Due to what appears to be an initiator-side problem, Windows 2008 R2 InfiniBand (IB) initiators may be initially unable to access the appliance. If this occurs, disable and re-enable the IB port on the initiator side by navigating to Network Connections, right-clicking on the appropriate IB port, selecting “disable,” right-clicking again, and selecting “enable.”
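
As a hedged alternative to the GUI steps above (not from this release note), the same disable/re-enable cycle can be run from an elevated command prompt on the Windows 2008 R2 initiator; the interface name "IB Connection" is a placeholder for whatever name the IB port has under Network Connections:

    netsh interface set interface name="IB Connection" admin=disabled
    netsh interface set interface name="IB Connection" admin=enabled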

Release Note RN075
Title NDMP-ZFS backup limitations for clones
Platforms All
Related Bug IDs 7045412

The following limitations apply when backing up clones with the "zfs" backup type:

1. To back up and restore a clone by itself (i.e., without backing up its containing project), ZFS_MODE=dataset must be set in the data management application.
2. To back up and restore a project that contains a clone whose origin resides in the same project as the clone, use ZFS_MODE=recursive (the default mode).
3. To back up a project containing a clone whose origin resides in a project different from the clone, back up the shares of the project individually using ZFS_MODE=dataset. (This applies even to shares that are not clones, although at least one will be a clone.)

These limitations may be lifted in a future release. For more information on NDMP and the "zfs" backup type, refer to http://www.oracle.com/technetwork/articles/systems-hardware-architecture/ndmp-whitepaper-192164.pdf

Release Note RN076
Title Unavailable shadow migration source can impact NFS service
Platforms All
Related Bug IDs 7026945

If the source of a shadow migration becomes unavailable, other NFS shares being served from the same appliance may become inaccessible. If this occurs, restoring access to the shadow source – or disabling shadow migration – will allow other NFS access to resume.

Release Note RN077
Title Revision B3 SAS HBAs not permitted with 2011.1 release
Platforms 7210,7310,7410
Related Bug IDs 7102346

Due to a defect with Revision B3 SAS HBAs, which is exposed by changes in the 2011.1 release, 7210, 7310, and 7410 appliances with Revision B3 SAS HBAs will be prevented by the appliance kit update health check software from upgrading to 2011.1. If this occurs, please contact Oracle Support about an upgrade to Revision C0 SAS HBAs.

Release Note RN078
Title "ZFS" should be used instead of "Sun" in /etc/multipath.conf
Platforms 7120,7320,7420
Related Bug IDs 7119902, 7121536

When configuring Linux FC Multipath client initiators for use with 7120, 7320 and 7420 platforms, the product string in the /etc/multipath.conf file should be "ZFS Storage 7x20". 7x10 platforms should continue to use the "Sun Storage 7x10" product string.
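
A minimal sketch of the relevant /etc/multipath.conf stanza is shown below, using a 7420 as an example. The "SUN" vendor string and the model-specific product string are assumptions; take the exact strings and any path-grouping or failover tuning parameters from the client configuration guidance in the product documentation.

    devices {
        device {
            vendor  "SUN"
            # use the product string matching your controller model (7120/7320/7420)
            product "ZFS Storage 7420"
            # additional tuning parameters omitted here
        }
    }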

Release Note RN081
Title Disk firmware upgrade continues to show as pending in BUI
Platforms All
Related Bug IDs 7132721,7126813

After performing an update to this release, disk firmware upgrades have been shown as pending in the BUI even though the CLI, via the maintenance system updates show command, reports no pending upgrades. This is strictly a reporting error in the BUI. However – and ONLY after verifying via the CLI that ALL upgrades have completed – the CLI command maintenance system restart can be used to restart the appliance management stack and clear the BUI reporting error.
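
A minimal CLI sketch of that sequence is shown below; both commands are quoted above, and the hostname prompt is a placeholder:

    hostname:> maintenance system updates show
    (confirm that no upgrades remain pending before proceeding)
    hostname:> maintenance system restart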

Release Note RN082
Title Following a SIM upgrade, a Logzilla may be left with a single path
Platforms All
Related Bug IDs 7110874

If this problem occurs, SIM upgrades will stop and the UI on both heads of a cluster system will report a single path to the affected device. To re-enable the path and continue with SIM upgrades, re-seat the affected Logzilla device: pull it from its bay, wait 10 seconds, and re-insert it.

Release Note RN083
Title During SIM upgrades, paths will appear to be offline for all devices on the affected path
Platforms All
Related Bug IDs N/A

The paths will remain quiesced until the SIM has been upgraded and verified to be functional. SIM upgrades will resume after the verification and all paths have been brought back online. No action is required.

Release Note RN084
Title Detailed and summary pending firmware update counts may be inconsistent
Platforms All
Related Bug IDs 7142747

The number of firmware updates remaining on the Maintenance:System screen may be inconsistent with the number of pending firmware updates shown in the Maintenance:System Firmware Updates popup. This inconsistency will not prevent the firmware upgrades from completing and can be ignored.

Release Note RN085
Title Disk firmware upgrade is successful but status indicates upgrade failed
Platforms All
Related Bug IDs 7126813

After performing an update to this release, disk firmware upgrades have been shown as failed in the Maintenance:System Firmware Updates popup, even though the Maintenance:Hardware screens show that the disk firmware revision has been updated. This is strictly a reporting error in the BUI. To clear the failed status – and ONLY after verifying via the Maintenance:Hardware screens that ALL firmware upgrades have completed – perform a failback on a clustered system or a reboot on a standalone system.

Release Note RN086
Title A downrev SIM placed into a disk shelf may fail to upgrade
Platforms All
Related Bug IDs 7132931

When a SIM with downrev firmware is inserted into a disk shelf, the firmware may fail to upgrade automatically as it should. To work around this issue, perform a takeover followed by a failback on a clustered system, or a reboot on a standalone system.

Release Note RN088
Title Update health check on stripped node may fail due to one path
Platforms 7310C,7410C,7320C,7420C
Related Bug IDs 7140904,7141113

During an appliance software update on a cluster, after the first node has been updated and has taken over the cluster resources, the update health check may fail on the stripped node due to a single device path. To work around this issue, reboot the stripped node.

Release Note RN089
Title Restored or replicated share SMB names may get changed
Platforms All
Related Bug IDs 7133104

Shares restored from an NDMP backup, as well as replicated shares, may have SMB names that differ from the original share. To work around this issue, manually update the incorrect SMB names.


