ak-2011.04.24.5.0 Release Notes

2011.1.5

This minor release of the Sun ZFS Storage Appliance software contains significant bug fixes for all supported platforms. Please carefully review the list of CRs that have been addressed and all known issues prior to updating.

Appliances must be running the 2010.Q3.2.1 micro release or higher before updating to this release. In addition, this release includes update health checks that are performed automatically when an update is started, prior to the actual update. If an update health check fails, it can cause the update to abort. The update health checks help ensure that component issues that may impact an update are addressed. It is important to resolve all hardware component issues prior to performing an update.

NOTE: IT IS RECOMMENDED THAT YOU UPDATE TO THIS RELEASE IMMEDIATELY.

This release contains the 2011.1.4.2 fixes, which address issues that can cause panics on Sun ZFS Storage 7420 Appliances with 1TB of memory (CR7198357), and an issue on any Sun ZFS Storage Appliance running the 2011.1.3 release or later in which the size of regular files is set incorrectly following a system panic, power failure, or failover/takeover event on clustered systems (CR7206840). In some cases the incorrect file size may result in data loss on the affected files, manifesting as an EIO return to the caller or a system panic with the message zfs: accessing past end of object.

After uploading the 2011.1.5 release and prior to applying the update, execute the update health checks by hand as protection against the CR7206840 panic. Do this on each controller, and on both nodes of a clustered system before issuing a takeover. The update health checks should be performed each time a controller is rebooted until that controller is updated. After the health checks have been run, follow the online help instructions in the Maintenance:System:Updates section for single or cluster system updates. The following shows how to perform the update health checks using the CLI.

zfs7420-010:> maintenance system updates select ak-nas@2011.04.24.5.0,1-1.33 check
You have requested to run checks associated with waiting upgrade media. This
will execute the same set of checks as will be performed as part of any upgrade
attempt to this media, and will highlight conditions that would prevent
successful upgrade. No actual upgrade will be attempted, and the checks
performed are of static system state and non-invasive. Do you wish to continue?

Are you sure? (Y/N) Y
Healthcheck running ... \

Healthcheck completed. There are no issues at this time which would cause an upgrade
to this media to be aborted.

This release also includes all fixes from prior releases. Prior release notes may be found here: Software Updates.

DE2-24C/P Drive Enclosures

This release includes support for DE2-24C/P drive enclosures. For more information on these high-capacity, high-performance drive enclosures, contact your Oracle Sales representative or see the Sun ZFS Storage Appliance Product Webpage.

NOTE: THIS RELEASE DOES NOT SUPPORT MIXING DE2-24C/P DRIVE ENCLOSURES WITH OTHER DRIVE ENCLOSURE TYPES.

If an appliance is running a release prior to 2011.1.5, the following upgrade procedures should be used to configure a new system with DE2-24C/P drive enclosures.

Standalone Controller DE2-24C/P Upgrade Procedure

1. Rack all components and cable everything with the exception of the SAS cables between the DE2-24C/P drive enclosures and the controller.

2. Power on the controller and perform the initial configuration (networking, password, etc.).

3. Download the 2011.1.5 package and perform the software update.

4. After the controller has rebooted and is running 2011.1.5, attach the DE2-24C/P drive enclosures to the controller (see the Online Help Installation:Cabling section).

5. The controller will see the drive enclosures and will automatically start the DE2-24C/P IOM firmware upgrades to version 0010.

6. After the DE2-24C/P IOM firmware upgrades are complete, perform the standard storage configuration.

Clustered Controllers DE2-24C/P Upgrade Procedure

1. Rack all of the components and cable everything with the exception of the SAS cables between the DE2-24C/P drive enclosures and the controllers.

2. Power on each controller and perform the initial configuration (networking, password, etc.) for each as a standalone appliance.

NOTE: DO NOT PERFORM THE CLUSTER CONFIGURATION UNTIL THE LAST STEP.

3. Download the 2011.1.5 package to each controller and perform the software update.

4. After the controllers have rebooted, log into one controller and perform a factory reset.

5. After the factory reset has completed and the controller is waiting for initial configuration, connect the DE2-24C/P drive enclosures to both controllers (see the Online Help Installation:Cabling section).

6. The controller that was not factory reset will see the drive enclosures and will automatically start the DE2-24C/P IOM firmware upgrades to version 0010.

7. After the DE2-24C/P IOM firmware upgrades are complete, perform the standard storage and cluster configuration.

Device Firmware Upgrades

This release contains the following device firmware upgrades.

Device                          Vendor    Product ID                 Old Version   New Version
DE2-24C/P Drive Enclosure IOM   Oracle    Oracle Storage DE2-24C/P   000F          0010
HITACHI 2.0T/3.0T HDD           Hitachi   HITACHI 2.0T/3.0T          A28A          A310
HITACHI 300G/600G HDD           Hitachi   HITACHI 300G/600G          A2A8          A6C0
SEAGATE 3.0T HDD                Seagate   SEAGATE 3.0T               061A          064A

During appliance software updates, it is important that customers postpone administrative operations such as cluster failback, reboot, or power down until the system has automatically upgraded all device firmware. For more information on firmware upgrades and information on how to monitor them following the first boot after a software update, refer to the Maintenance:System:Updates Hardware Firmware Updates section of the Customer Service Manual or online help.

Deferred Updates

When updating from a 2010.Q3 release to a 2011.1 release, the following deferred updates are available and may be reviewed in the Maintenance System BUI screen. See the "Maintenance:System:Updates#Deferred_Updates" section in the online help for important information on deferred updates before applying them.

NOTE: APPLYING 2011.1 DEFERRED UPDATES WILL PREVENT ROLLING BACK TO PREVIOUS VERSIONS OF 2010.Q3 SOFTWARE OR EARLIER.

1. RAIDZ/Mirror Deferred Update (Improved RAID performance)
This deferred update improves both latency and throughput on several important workloads. These improvements rely on a ZFS pool upgrade provided by this update.

2. Optional Child Directory Deferred Update (Improved snapshot performance)
This deferred update improves list retrieval performance and replication deletion performance by improving dataset rename speed. These improvements rely on a ZFS pool upgrade provided by this update. Before this update has been applied, the system will be able to retrieve lists and delete replications, but will do so using the old, much slower, recursive rename code.

Supported Platforms

Issues Addressed

The following CRs have been fixed in this release:

15759418 SUNBT7097870-AK-2011.04.24 Spill block can be dropped in some situations during
15759794 SUNBT7115925-AK-2011.04.24 temporary "resource updated, verification pending" fa
15761479 SUNBT7107814-AK-2011.04.24 zfs receive error message always displays feature fla
15764419 SUNBT7127291-AK-2011.04.24 SIM firmware upgraded while appliance cluster at mism
15767059 SUNBT7116603-AK-2011.04.24 Need to define a property for an externally managed e
15771518 SUNBT6858883-AK-2011.04.24 Need way to clear NFS locks on behalf of non-Solaris
15776621 SUNBT7032986-AK-2011.04.24 NDMP three-way restore takes too long
15776623 SUNBT7033953-AK-2011.04.24 NDMP 3-way backup needs performance improvement
15776624 SUNBT7041998-AK-2011.04.24 NDMP backup could hang up at the end in local or remo
15776677 SUNBT7150697-AK-2011.04.24 OVM workflow changes for Fiber Channel support
15777943 SUNBT7041770-AK-2011.04.24 Panic in ip_output_options accessing already freed me
15781003 SUNBT7156231-AK-2011.04.24 Fix for 7097870 needs to be applied in a deferred upd
15782102 SUNBT7154572-AK-2011.04.24 Disabling one PHY between expander and initiator resu
15782104 SUNBT7155074-AK-2011.04.24 system panic while disable/enable 4 phys connected to
15783544 SUNBT7101959-AK-2011.04.24 AKTXT_NAS_NDMP_BACKUP_INUSE NDMP restore in progress
15786350 SUNBT7162465 7420 Increase support to 36 JBODs
15787391 SUNBT7160309-AK-2011.04.24 SRA Manage Replication workflow should return the rec
15787393 SUNBT7163739-AK-2011.04.24 SRA Manage Replication workflow should return status
15787412 SUNBT7156604-AK-2011.04.24 assertion failed: !IPCL_IS_NONSTR(connp), file: ../..
15791156 SUNBT7148236-AK-2011.04.24 Exalogic - NIS/NFSv4 issue after ZFS storage head tak
15791379 SUNBT7168135-AK-2011.04.24 akbundle should capture multicast group memberships (
15791380 SUNBT7046309-AK-2011.04.24 netstat -s output would be useful to include in the s
15791602 SUNBT7060157-AK-2011.04.24 libses: SUN plugin needs to learn how to deal with th
15791864 SUNBT7156126-AK-2011.04.24 libfruid needs to recognize Oracle FRUID records
15791985 SUNBT7077629-AK-2011.04.24 fme_undiagnosable tries to add NULL observation to ca
15796332 SUNBT7031123-AK-2011.04.24 Elapsed time for zfs delete may grow quadratically wi
15798291 SUNBT7127514-AK-2011.04.24 CLI: Deferred updates counter is not correct
15798325 SUNBT7154281-AK-2011.04.24 ssh sessions causing several NFS short stalls
15798326 SUNBT7163177-AK-2011.04.24 need ability to set IB node desc
15798812 SUNBT7159038-AK-2011.04.24 unable to delete zombie snapshot causes CPU spikes
15800169 SUNBT7109684-AK-2011.04.24 pmcs is in a 4-way livelock
15800337 SUNBT7111333-AK-2011.04.24 One node of Q3.1.1 cluster stops serving data (pmcs l
15800339 SUNBT7111419-AK-2011.04.24 I/Os stopped and owner head hung during SIM updates f
15800341 SUNBT7129893-AK-2011.04.24 pmcs is still dropping ACKs
15800343 SUNBT7161882-AK-2011.04.24 ::walk pmcs_targets fails if first target is NULL
15800346 SUNBT7176677-AK-2011.04.24 bad mutex panic during I/O w/ device state recovery
15800348 SUNBT7173500-AK-2011.04.24 panic at pmcs_register_device ()
15800558 SUNBT7141690-AK-2011.04.24 ndmpd emits error message to log file when backing up
15800634 SUNBT7165336-AK-2011.04.24 svr4 packaging tools should allow package names longe
15800747 SUNBT7143720-AK-2011.04.24 tst.ntpauth.aksh always fails in nightly running on s
15801397 SUNBT7146119-AK-2011.04.24 cloned LUN is not thin-provisioned when clone thin-pr
15801398 SUNBT7149036-AK-2011.04.24 OVM Plugin reports volume group sizes incorrectly
15801485 SUNBT7179828-AK-2011.04.24 topo_module walker misses a few entries
15802280 SUNBT7145280-AK-2011.04.24 Need MPxIO support for Gari/Wasabi
15802345 SUNBT7148804-AK-2011.04.24 Add support for 900 GB Hitachi Cobra-E drives
15802454 SUNBT7182469-AK-2011.04.24 Incorrect pool/volume group information returned for
15802507 SUNBT7178802-AK-2011.04.24 NFS4 server should allow non-conflicting IOs for recl
15802597 SUNBT7000943-AK-2011.04.24 SDP: data loss or a race whereby the read doesn't wak
15802632 SUNBT7150190-AK-2011.04.24 out of memory condition wedges akd
15802647 SUNBT7167373-AK-2011.04.24 SRA Manage Replication workflow should take key inclu
15802648 SUNBT7161171-AK-2011.04.24 add last replication result to source package propert
15802837 SUNBT7167949-AK-2011.04.24 buffer overrun in pmcs driver
15802839 SUNBT7077592-AK-2011.04.24 pmcs_check_commands should add command to completion
15802842 SUNBT7088494-AK-2011.04.24 pmcs_check_commands is not taking the statlock when c
15802910 SUNBT7164708-AK-2011.04.24 Want Hitachi 300/600GB 15k Firmware A6C0
15803034 SUNBT7125798-AK-2011.04.24 zfs: accessing past end of object panic on Solaris 11
15803246 SUNBT6916965-AK-2011.04.24 Hermon FMA should print the error code when the fatal
15803248 SUNBT7006122-AK-2011.04.24 SDP: another "data loss or a race whereby the read do
15803316 SUNBT6986563-AK-2011.04.24 pmcs_attach leaks ddi_dma_mem_alloc memory
15803375 SUNBT7133567-AK-2011.04.24 NAS NDMP backups failing on reaching EOM [Commvault S
15803518 SUNBT7060897-AK-2011.04.24 pmcs mdb output with -e option displays incorrect sys
15803592 SUNBT7178151-AK-2011.04.24 NFSv4 race condition when using delegation
15803622 SUNBT7181149-AK-2011.04.24 interrupting storage unconfig can be fatal
15803756 SUNBT7000120-AK-2011.04.24 BUI services: icon displays "enable service" even ser
15804043 SUNBT7184373-AK-2011.04.24 pmcs no longer compiles 32-bit
15804157 SUNBT7156966-AK-2011.04.24 unclear use of pointer to ak_dataspan_datum_t  
15804158 SUNBT7156964-AK-2011.04.24 assertion failed: dataset-akd_datum == 0 during I/O
15804629 SUNBT7103456-AK-2011.04.24 Analytics stops temporarily when clone is run with se
15805140 SUNBT7185664-AK-2011.04.24 reduce notice subscriptions
15805494 SUNBT7186108-AK-2011.04.24 stale connection processing should be sped up
15806055 SUNBT7014792-AK-2011.04.24 assertion failed: status == HERMON_CMD_SUCCESS, file:
15806056 SUNBT7034960-AK-2011.04.24 big performance drop from lock contention in hermon_c
15806060 SUNBT7033172-AK-2011.04.24 panic: "testof" exposed a bug in an error code path h
15806411 SUNBT7003997-AK-2011.04.24 hermon should implement "inline" for better performan
15806422 SUNBT7043115-AK-2011.04.24 hermon: "testof -v --verb_tests 0x10000000" on CX-2 s
15806426 SUNBT7046230-AK-2011.04.24 IBTF cq_sched test uncovers a failure in hermon_cq_al
15806453 SUNBT7055282-AK-2011.04.24 pmcs should double-check PCIe BAR registers for queue
15806454 SUNBT7141343-AK-2011.04.24 pmcs should support multiple outbound queues (with in
15806488 SUNBT7160017-AK-2011.04.24 pmcs driver does not set STAT_ABORTED in pkt_statisti
15806677 SUNBT7180244-AK-2011.04.24 Remove legacy interrupt support in pmcs
15806678 SUNBT7173319-AK-2011.04.24 Kernel panic: BAD TRAP: type=e (#pf Page fault) occur
15806687 SUNBT7175670-AK-2011.04.24 pmcs logging statement was shortened
15806689 SUNBT7176487-AK-2011.04.24 pmcs binary files produce various nits warnings
15806690 SUNBT7179469-AK-2011.04.24 thebe locking improvements
15806691 SUNBT7180172-AK-2011.04.24 recursive mutex_enter panic in pmcs_soft_reset
15806695 SUNBT7180321-AK-2011.04.24 state_lock could be a krwlock_t instead of a mutex
15806699 SUNBT7181097-AK-2011.04.24 recursive mutex_enter in pmcs_ds_iocq_run
15806703 SUNBT7181164-AK-2011.04.24 Thebe1 with MSI interrupts does not work anymore
15806705 SUNBT7181326-AK-2011.04.24 odb_auto_clear should be set correctly when interrupt
15806781 SUNBT7063342-AK-2011.04.24 p_init_type_reply member of ibt_hca_portinfo_t is nev
15806919 SUNBT7180431-AK-2011.04.24 ak_hca_port_refresh() callers do not release ahs_lock
15806929 SUNBT6993558-AK-2011.04.24 Unexplained "Connection refused" after successfully e
15806930 SUNBT7004239-AK-2011.04.24 CQE local transport retry count exceeded error comes
15806933 SUNBT7016951-AK-2011.04.24 x4800 panics in ibmf_i_free_msg on snv_157
15806939 SUNBT7025408-AK-2011.04.24 topspin SM handling in ibtl could delay event deliver
15806951 SUNBT7032556-AK-2011.04.24 recovery from IB switch reboot is too slow, causing d
15806970 SUNBT7016515-AK-2011.04.24 Duplicate messages received when running NICDRV over
15806971 SUNBT7032315-AK-2011.04.24 hermon driver should cache DMA handles
15806974 SUNBT7038585-AK-2011.04.24 hermon: rdsv3 performance throughput drops from after
15806978 SUNBT7039748-AK-2011.04.24 rdsv3 can use "inline" to improve performance
15807123 SUNBT7187984 typo in backport of 7054207 in 2011.1
15807260 SUNBT7160960-AK-2011.04.24 mutex_enter: bad mutex in pppt:pppt_lport_xfer_data
15807681 SUNBT7151925-AK-2011.04.24 nas_list_common() stumbles on a stale nas cache entry
15807800 SUNBT7188238-AK-2011.04.24 AKD aborts on startup in ak_chassis_create_topo
15807801 SUNBT7183747-AK-2011.04.24 disks no longer enumerated on ultra-27 after 7168295
15807871 SUNBT7068525-AK-2011.04.24 small memory leak by strdup() in libshare_ak::sa_enab
15808045 SUNBT7188186-AK-2011.04.24 check for target's phy is unnecessary in pmcs_scsa_ab
15808047 SUNBT7185645-AK-2011.04.24 pmcs panic at pmcs_iport_active()
15808104 SUNBT7078280-AK-2011.04.24 Need libses plugin for an enclosure with vendor id 'O
15808106 SUNBT7189074-AK-2011.04.24 disks unresponsive after disk firmware upgrade
15808124 SUNBT7171338-AK-2011.04.24 AK libses needs to process IPMI storage definition ba
15808128 SUNBT7177889-AK-2011.04.24 The libses SUN plugin needs to handle different FRUID
15808129 SUNBT7150786-AK-2011.04.24 Need to define a libses property for an enclosure tha
15808131 SUNBT7142891-AK-2011.04.24 fru-monitor needs to manage service LEDs based on SES
15808133 SUNBT7124196-AK-2011.04.24 sensor-transport should avoid processing an enclosure
15808135 SUNBT7169337-AK-2011.04.24 AK needs to provide libses plugin for enclosures with
15808231 SUNBT7114859-AK-2011.04.24 Kernel panic in ibmf (bad_mutex) running Solaris 11
15808232 SUNBT7129513-AK-2011.04.24 devfs causing prtconf, zpool status, rsh to hang on s
15808234 SUNBT7159046-AK-2011.04.24 Parfait uninitialized variable errors are seen on ibc
15808235 SUNBT7161381-AK-2011.04.24 System cannot be pinged
15808257 SUNBT7089063-AK-2011.04.24 SRU7 fails to join multicast group of SRU6
15808260 SUNBT7001837-AK-2011.04.24 IPOIB partition link name shown in the dmesg are not
15808262 SUNBT7053540-AK-2011.04.24 memory leak in ibd_rc_connect()
15808264 SUNBT7053988-AK-2011.04.24 assertion failed: cycled ((mce == NULL) || (mce-m
15808266 SUNBT7081144-AK-2011.04.24 ibd reports a non-zero link speed even if the port is
15808315 SUNBT7166349-AK-2011.04.24 User role authorization changeAccessProps should be n
15808365 SUNBT7047586-AK-2011.04.24 smbsrv`smb_alloc+0x2e memory leaks
15808436 SUNBT7189297-AK-2011.04.24 upgrade from 2011.1.3.0 fails C100_disk_paths_and_fau
15808529 SUNBT6871725-AK-2011.04.24 conf restore: smb/nge0 fails to import during configu
15808615 SUNBT7141823-AK-2011.04.24 Need write cache support for Toshiba MK2001RKBSUN2.0T
15808648 SUNBT7131527-AK-2011.04.24 upgrades do not report disk/SSD firmware failures/tim
15808650 SUNBT7132721-AK-2011.04.24 Disk firmware update continues to show as pending fol
15808652 SUNBT7162296-AK-2011.04.24 Need to remove ak_dprintf()'s for disk speed probes
15808657 SUNBT7166696-AK-2011.04.24 chassis subsystem generates invalid_enclosure alert e
15808658 SUNBT7168295-AK-2011.04.24 chassis configuration checking should be handled by c
15808671 SUNBT7185343-AK-2011.04.24 Add support for 300 GB Hitachi Cobra-E drive
15808672 SUNBT7183280-AK-2011.04.24 Include MarsK HDD FW A310
15808674 SUNBT7168301-AK-2011.04.24 Need support for expander FW upgrades independent of
15808675 SUNBT7168359-AK-2011.04.24 Want ability to upgrade disk not configured in a pool
15808679 SUNBT7178295-AK-2011.04.24 ak_chassis_disk_update_check() is leaky
15808681 SUNBT7181782-AK-2011.04.24 memory leak in ak_chassis_enum_disk
15808683 SUNBT7181940-AK-2011.04.24 retire J4400 and Sun Fire x4240 expander firmware upg
15808687 SUNBT7182013-AK-2011.04.24 uninitialized variables in ak_chassis_ipmi_post_sp_er
15808690 SUNBT7182221-AK-2011.04.24 chassis update structure used after free
15808691 SUNBT7182192-AK-2011.04.24 Can not rely on disk serial numbers for FW upgrade
15808692 SUNBT7182397-AK-2011.04.24 The 'zones' chassis is missing its ops vector
15808695 SUNBT7182609-AK-2011.04.24 FW upgrade 'reason' status leak
15808696 SUNBT7182698-AK-2011.04.24 chassis rework jumped the gun on removal of J4400 sup
15808698 SUNBT7183025-AK-2011.04.24 spurious chassis fault
15808700 SUNBT7184643-AK-2011.04.24 chassis snapshot leaks
15808702 SUNBT7185499-AK-2011.04.24 upgrade to fw8_9 fails C100_disk_paths_and_faults hea
15808706 SUNBT7167387-AK-2011.04.24 akd fails to initialize on 7120's with Aura
15808829 SUNBT7187929-AK-2011.04.24 mutex is not released properly when pmcs driver is in
15808983 SUNBT7177892-AK-2011.04.24 Add software interface identifier VPD page
15808987 SUNBT7181169-AK-2011.04.24 mgmt-url property change should generate correct ASC/
15808989 SUNBT7181960-AK-2011.04.24 Should allow lu creation with '000000' oui
15808991 SUNBT7187526-AK-2011.04.24 Add support for SCSI INQUIRY VPD page 0x84 (software
15809146 SUNBT7177116-AK-2011.04.24 server_delegation contains booby-trap in property typ
15809235 SUNBT7190491-AK-2011.04.24 System panic occurred while attempting to break an sm
15809423 SUNBT7157268-AK-2011.04.24 Repeated analytics graphs and drilldowns fragment mem
15809662 SUNBT7172982-AK-2011.04.24 config-backup cannot be used when an analytics retent
15810638 SUNBT7129940-AK-2011.04.24 stat/tst.errors.aksh need some work post-7090613
15810639 SUNBT7131190-AK-2011.04.24 sleep should support sub-second intervals
15810640 SUNBT7090628-AK-2011.04.24 tst.errors.aksh failure seen
15810703 SUNBT7187656-AK-2011.04.24 Old plugin release is not compatible with new workflo
15810881 SUNBT7191636-AK-2011.04.24 Add Gen 4 STEC firmware support to Gari and Wasabi
15811169 SUNBT7087005-AK-2011.04.24 want Manta Ray firmware and support
15811301 SUNBT7186916-AK-2011.04.24 ak_dataset_init leak state on failure
15811541 SUNBT7192787-AK-2011.04.24 Gari and Wasabi IOMs do not automatically upgrade to
15812080 SUNBT7193836-AK-2011.04.24 upgrade is using diskid.xml from the old OS.
15812234 SUNBT6882270-AK-2011.04.24 sshd MaxStartups 50:30:60 to prevent connection drops
15813034 SUNBT7194960 lint warning : ctype.h : E_STATIC_UNUSED
15813550 SUNBT7193230-AK-2011.04.24 arc_reclaim_thread missing a ptob call on the redzone
15814052 SUNBT7095730-AK-2011.04.24 potential thread hang in ndmpd-zfs
15814053 SUNBT7096196-AK-2011.04.24 ndmpd buf thread error is not passed back up the stac
15814054 SUNBT7094628-AK-2011.04.24 ndmpd_zfs_reader_writer(): rename local variables and
15814125 SUNBT7194705-AK-2011.04.24 flush write cache not suppressed with STEC Gen4 logzi
15814129 SUNBT7184335-AK-2011.04.24 Workaround for IBQ Stall
15814312 SUNBT7151155-AK-2011.04.24 akd process dumped core - ABORT: bad share type 0x10
15814314 SUNBT6982225-AK-2011.04.24 sendmail requires a new define statement in submit.cf
15814378 SUNBT7195026-AK-2011.04.24 "projectshares" command gives exception if non-existi
15814426 SUNBT7193389-AK-2011.04.24 Update SUN-AK-MIB MODULE-IDENTITY to reflect Oracle c
15814427 SUNBT7179373-AK-2011.04.24 add full AK version string to SNMP MIB
15814709 SUNBT7196927-AK-2011.04.24 Need temporary workaround until solution for 7025224
15814755 SUNBT7196730-AK-2011.04.24 assertion failed: cdp-akcd_devpath != 0 during disk
15814813 SUNBT7193644-AK-2011.04.24 panic during RW2 SIM firmware upgrade testing
15814875 SUNBT7167842-AK-2011.04.24 An abort received after the final response is sent ca
15814877 SUNBT7191026-AK-2011.04.24 During lun-enumeration, sometimes SCSI commands are d
15815042 SUNBT7197342-AK-2011.04.24 ak faults all Wasabis all the time
15815356 SUNBT7184202-AK-2011.04.24 Need in-band CLI utility for Gari/Wasabi
15815357 SUNBT7184203-AK-2011.04.24 Support bundle should include Gari/Wasabi firmware fo
15815364 SUNBT7188162-AK-2011.04.24 Wasabi chassis metadata needs to include image inform
15815428 SUNBT7197766-AK-2011.04.24 Potential use of uninitialized variable in ak_chassis
15816420 SUNBT7198534-AK-2011.04.24 return value from realloc is not checked
15816558 SUNBT7199036-AK-2011.04.24 uninitialized variable causes unnecessary topo snapsh
15817039 SUNBT7144206-AK-2011.04.24 pmcs condition variable assertion panic in cv_destroy
15818362 SUNBT7107582-AK-2011.04.24 Incorrect pkt_state set on low resources status
15818363 SUNBT7198913-AK-2011.04.24 Need to deliver CMOS image for SW 1.4 for Otoro
15818383 SUNBT7191947-AK-2011.04.24 System panic in pmcs_destroy_target()
15818653 SUNBT7200832-AK-2011.04.24 RoHS compliant Thebe card is missing info for manufac
15818708 SUNBT7180852-AK-2011.04.24 arc_no_grow forcing unnecessary shrink in arc_size
15818758 SUNBT7043397-AK-2011.04.24 tcp listener can drop its sonode reference too early
15819017 SUNBT7197461-AK-2011.04.24 ::ak_component shows garbage for SAS expanders
15819066 SUNBT7201599-AK-2011.04.24 fish-gate lint warnings for devctl_device_getstate()
15819160 SUNBT7199660-AK-2011.04.24 Discovery gets stuck on abort_all_cv in pmcs_kill_dev
15820766 SUNBT7203096-AK-2011.04.24 7199036 removed one too many ak_topo_refreshes from t
15821155 SUNBT7192007-AK-2011.04.24 BUI: Firmware Updates verbiage does not word wrap cau
15821286 SUNBT7152995-AK-2011.04.24 fmd core generated after PXE install 'libnvpair.so.1`
15821591 SUNBT7203550-AK-2011.04.24 Need to update the revision history in the expander h
15821826 SUNBT7203966 Wasabi configuration Wiki page should state that a filler should be
15822304 SUNBT7204351-AK-2011.04.24 pmcs should not block SMP functions when iport has ch
15823412 SUNBT7204731-AK-2011.04.24 appliance panic with mutex_destroy: bad mutex
15823448 SUNBT7201605-AK-2011.04.24 restores hang with ndmp local tape and remote tape us
15824112 SUNBT7205479-AK-2011.04.24 ses2 plugin fails to report rquested failure bit on a
15824602 SUNBT7204421-AK-2011.04.24 conf_nvlist created to hold send exclusions nvlist is
15824933 SUNBT7203461-AK-2011.04.24 All Wasabi EBODs on one cluster head show single path
15825328 SUNBT7202455-AK-2011.04.24 double exit from state_lock in pmcs_flush_all_tgts_qu
15826261 SUNBT7207084-AK-2011.04.24 libtopo should default to the WWN when the chassis se
15899775 2nd IOM will not upgrade because 4 HDDs have a single path from 1st IOM upgrade
15905762 SUNBT7200302-AK-2011.04.24 deferred updates fail
15948408 Failback caused appliance inaccesible up to 2 and half hours.
15961294 600G Viper-C disk drive needs to be restricted to use in new mid-plane
15976465 akd core found after upgrade 'libnvpair.so.1'
15979369 disable dam unconfig due to transport errors

Known Issues

Release Note RN001
Title Network Datalink Modifications Do Not Rename Routes
Platforms All
Related Bug IDs 15488020

The Configuration/Network view permits a wide variety of networking configuration changes on the Sun Storage system. One such change is taking an existing network interface and associating it with a different network datalink, effectively moving the interface's IP addresses to a different physical link (or links, in the case of an aggregation). In this scenario, the network routes associated with the original interface are automatically deleted, and must be re-added by the administrator to the new interface. In some situations this may imply loss of a path to particular hosts until those routes are restored.

Release Note RN002
Title Appliance doesn't boot after removing first system disk
Platforms 7210
Related Bug IDs 15546043

In a 7210 system, removing the first system disk will make the system unbootable, despite the presence of a second mirrored disk. To work around this issue, enter the BIOS boot menu and, under 'HDD boot order', modify the list so that the first item is "[SCSI:#0300 ID00 LU]".

Release Note RN004
Title Network interfaces may fail to come up in large jumbogram configurations
Platforms All
Related Bug IDs 15573843

In systems with large numbers of network interfaces using jumbo frames, some network interfaces may fail to come up due to hardware resource limitations. Such network interfaces will be unavailable, but will not be shown as faulted in the BUI or CLI. If this occurs, turn off jumbo frames on some of the network interfaces.

Release Note RN005
Title Multi-pathed connectivity issues with SRP initiators
Platforms All
Related Bug IDs 15609172, 15611632, 15618166, 15618253, 15618436, 15621220, 15621562, 15622079

In cluster configurations, Linux multi-path clients have experienced loss of access to shares on the appliance. If this happens, a new session or connection to the appliance may be required to resume I/O activity.

Release Note RN007
Title Rolling back after storage reconfiguration results in faulted pools
Platforms All
Related Bug IDs 15586706

Rolling back to a previous release after reconfiguring storage will result in pool(s) appearing to be faulted. These pools are those that existed when the rollback target release was in use, and are not the same pools that were configured using the more recent software. The software does not warn about this issue, and does not attempt to preserve pool configuration across rollback. To work around this issue, after rolling back, unconfigure the storage pool(s) and then import the pools you had created using the newer software. Note that this will not succeed if there was a pool format change between the rollback target release and the newer release under which the pools were created. If this is the case, an error will result on import and the only solution will be to perform the upgrade successfully. Therefore, it is generally best to avoid this issue by not reconfiguring storage after an upgrade until the functionality of the new release has been validated.

Release Note RN009
Title Unanticipated error when cloning replicated projects with CIFS shares
Platforms All
Related Bug IDs 15615612

When cloning replicated projects that are exported using the new "exported" property and shared via CIFS, you will see an error and the clone will fail. You can work around this by unexporting the project or share or by unsharing it via CIFS before attempting to create the clone.

Release Note RN010
Title Some FC paths may not be rediscovered after takeover/failback
Platforms All
Related Bug IDs 15618238

After a takeover and subsequent failback of shared storage, Qlogic FC HBAs on Windows 2008 will occasionally not rediscover all paths. When observed in lab conditions, at least one path was always rediscovered. Moreover, when this did occur the path was always rediscovered upon initiator reboot. Other HBAs on Windows 2008 and Qlogic HBAs on other platforms do not exhibit this problem.

Release Note RN013
Title Unable to change resource allocation during initial cluster setup when using CLI
Platforms 7310C,7410C,7320C,7420C
Related Bug IDs 15667251

When performing initial cluster setup via the CLI, any attempt to change the storage controller to which a resource is allocated will result in an error message of the form error: bad property value "(other_controller)" (expecting "(controller)"). To work around this problem, use the BUI to perform initial cluster setup. Alternatively, complete cluster setup, log out of the CLI, log back in, and return to the configuration cluster resources context to finish resource allocation and the initial failback, as sketched below.
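
The following CLI sketch illustrates the alternative approach. The prompts, the zfs/pool-0 resource name, and the peer controller name are illustrative only, and the exact property and command names may vary by release; consult the Configuration:Cluster online help for the commands applicable to your system. The final failback step can also be performed from the BUI.

hostname:> configuration cluster resources
hostname:configuration cluster resources> show
hostname:configuration cluster resources> select zfs/pool-0
hostname:configuration cluster resources zfs/pool-0> set owner=peer-controller
hostname:configuration cluster resources zfs/pool-0> commit
hostname:> configuration cluster failback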

Release Note RN017
Title Chassis service LED is not always illuminated in response to hardware faults
Platforms 7120 7320 7320C 7420 7420C
Related Bug IDs 15646092

In some situations the chassis service LED on the controller will not be illuminated following a failure condition. Notification of the failure via the user interface, alerts including email, syslog, and SNMP if configured, and Oracle Automatic Service Request ("Phone Home") will function normally.

Release Note RN018
Title iSCSI IOs sometimes fail on cluster takeover/failback when using Solaris MPxIO clients
Platforms 7310C 7320C 7410C 7420C
Related Bug IDs 15648589

iSCSI IO failures have been seen during takeover/failback when iSCSI targets that are separately owned by each controller in a cluster are part of the same target group. To work around this issue, iSCSI target groups should only contain iSCSI targets owned by a single controller. This also implies that the default target group should not be used in this case.

Release Note RN019
Title HCA port may be reported as down
Platforms All
Related Bug IDs 15698685

HCA ports may be reported as down after reboot. If the overlaid datalinks and interfaces are functioning, this state is incorrect.

Release Note RN022
Title nearly full storage pool impairs performance and manageability
Platforms All
Related Bug IDs 15378956, 15661408, 15663845

Storage pools at more than 80% capacity may experience degraded I/O performance, especially when performing write operations. This degradation can become severe when the pool exceeds 90% full and can result in impaired manageability as the free space available in the storage pool approaches zero. This impairment may include very lengthy boot times, slow BUI/CLI operation, management hangs, inability to cancel an in-progress scrub, and very lengthy or indefinite delays while restarting services such as NFS and SMB. Best practices, as described in the product documentation, call for expanding available storage or deleting unneeded data when a storage pool approaches these thresholds. Storage pool consumption can be tracked via the BUI or CLI; refer to the product documentation for details.
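
As a quick check from the CLI, pool usage can be reviewed from the storage configuration context. This is only a sketch: the exact properties reported vary by release, and the BUI Storage screen presents the same usage information graphically.

hostname:> configuration storage show
          (review the reported space usage for each pool; compare used space
           against total capacity to gauge the 80% and 90% thresholds above)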

Release Note RN023
Title Solaris 10 iSCSI client failures under heavy load
Platforms All
Related Bug IDs 15662377

If using Solaris 10 as the iSCSI initiator, you must use Solaris 10 Update 10 or later.

Release Note RN025
Title management UI hangs on takeover or management restart with thousands of shares or LUNs
Platforms All
Related Bug IDs 15665874, 15699950

When a cluster takeover occurs or the management subsystem is restarted either following an internal error or via the maintenance system restart CLI command, management functionality may hang in the presence of thousands of shares or LUNs. The likelihood of this is increased if the controller is under heavy I/O load. The threshold at which this occurs will vary with load and system model and configuration; smaller systems such as the 7110 and 7120 may hit these limits at lower levels than controllers with more CPUs and DRAM, which can support more shares and LUNs and greater loads. Best Practices include testing cluster takeover and failback times under realistic workloads prior to placing the system into production. If you have a very large number of shares or LUNs, avoid restarting the management subsystem unless directed to do so by your service provider.

Release Note RN026
Title moving shares between projects can disrupt client I/O
Platforms All
Related Bug IDs 15664600

When moving a share from one project to another, client I/O may be interrupted. Do not move shares between projects while client I/O is under way unless the client-side application is known to be resilient to temporary interruptions of this type.

Release Note RN029
Title repair of faulted pool does not trigger sharing
Platforms All
Related Bug IDs 15661166

When a faulted pool is repaired, the shares and LUNs on the pool are not automatically made available to clients. There are two main ways to enter this state:

  • Booting the appliance with storage enclosures disconnected, powered off, or missing disks
  • Performing a cluster takeover at a time when some or all of the storage enclosures and/or disks making up one or more pools were detached from the surviving controller or powered off

When the missing devices become available, controllers with SAS-1 storage subsystems will automatically repair the affected storage pools. Controllers with SAS-2 storage subsystems will not; the administrator must repair the storage pool resource using the resource management CLI or BUI functionality. See the product documentation for details. In neither case, however, will the repair of the storage pool cause the shares and LUNs to become available. To work around this issue, restart the management subsystem on the affected controller using the maintenance system restart command in the CLI, as shown in the sketch below. This is applicable ONLY following repair of a faulted pool as described above.
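
A minimal CLI sketch of the workaround follows; the prompt is illustrative. As noted above, restart the management subsystem this way only after the faulted pool itself has been repaired.

hostname:> maintenance system restart
          (once the management subsystem restarts, the repaired pool's shares
           and LUNs are shared to clients again)
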
Release Note RN030
Title DFS links may be inaccessible from some Windows clients
Platforms All
Related Bug IDs 15650980

Under rare circumstances, some Microsoft Windows 2008/Vista and Microsoft Windows XP clients may be unable to access DFS links on the appliance, receiving the error Access is denied. Windows 2003 is believed not to be affected. The proximate cause of this problem is that the client incorrectly communicates with the DFS share as if it were an ordinary share; however, the root cause is not known. The problem has been observed with other DFS root servers and is not specific to the Storage 7000 appliance family. At present the only known way to resolve this issue is via reinstallation of the affected client system. If you encounter this problem, please contact your storage service provider and your Microsoft Windows service provider.

Release Note RN032
Title NDMP service may enter the maintenance state when changing properties
Platforms All
Related Bug IDs 15664828

When changing properties of the NDMP service, it may enter the maintenance state due to a timeout. This will be reflected in the NDMP service log with an entry of the form stop method timed out. If this occurs, restart the NDMP service as described in the product documentation. The changes made to service properties will be preserved and do not need to be made again.

Release Note RN039
Title Solaris/VxVM FC initiator timeouts
Platforms 7310C 7410C 7320C 7420C
Related Bug IDs 15642153

Symantec has enhanced its code to handle I/O delays during takeover and/or failback. This work was covered under Symantec bug number e2046696 - fixes for dmp_lun_retry_timeout handling issues found during SUN7x10 array qualification. Symantec created hot fix VRTSvxvm 5.1RP1_HF3 (for Solaris SPARC and x86) with this fix in it. The subsequent patch 5.1RP2 and the major update 5.1SP1 will also contain these changes. Obtain and install these patches from Symantec if you are using VxVM on Solaris as an FC initiator attached to a clustered appliance.

Release Note RN040
Title Missing data in certain Analytics drilldowns
Platforms All
Related Bug IDs 15648562

When drilling down on a statistic that existed prior to the current system startup, certain statistics may show no data in the drilldown. This can occur if the original statistic required looking up a DNS name or initiator alias, as would typically be the case for statistics broken down by client hostname (for files) or initiator (for blocks). The problem occurs only intermittently and only with some statistics. To work around this issue, disable or delete the affected dataset(s), then restart the management software stack using the 'maintenance system restart' command in the CLI. Once the statistics are recreated or reenabled, subsequent drilldowns should contain the correct data.
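
A hedged CLI sketch of the workaround follows. The dataset name dataset-030 and the prompts are illustrative, and the affected dataset can equally be disabled or deleted from the Analytics screen in the BUI before restarting the management software.

hostname:> analytics datasets
hostname:analytics datasets> show
hostname:analytics datasets> select dataset-030
hostname:analytics datasets dataset-030> destroy
hostname:> maintenance system restart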

Release Note RN041
Title Solaris initiators may lose access to FC LUNs during cluster takeover
Platforms 7310C 7410C 7320C 7420C
Related Bug IDs 15648815

If using Solaris 10 as the FC initiator, you must use Solaris 10 Update 10 or later.

Release Note RN043
Title Multiple SMB DFS roots can be created
Platforms All
Related Bug IDs 15664518

It is possible to create more than the maximum of 1 DFS standalone root on an appliance if multiple pools are available. Do not create multiple DFS roots.

Release Note RN044
Title Intermittent probe-based IPMP link failures
Platforms All
Related Bug IDs 15664567

An appliance under heavy load may occasionally detect spurious IPMP link failures. This is part of the nature of probe-based failure detection and is not a defect. The product documentation explains the algorithm used in determining link failure; the probe packets it uses may be delayed if the system is under heavy load.

Release Note RN047
Title Backup of "system" pool
Platforms All
Related Bug IDs 15671861

The NDMP backup subsystem may incorrectly allow backup operations involving filesystems on the system pool, which contains the appliance software. Attempting to back up these filesystems will result in exhaustion of space on the system pool, which will interfere with correct operation of the appliance. Do not attempt to back up any filesystem in the system pool. If you have done so in the past, check the utilization of the pool as described in the Maintenance/System section of the product documentation. If the pool is full or nearly full, contact your authorized service provider.

Release Note RN051
Title Configuration restore does not work on clustered systems
Platforms 7310C,7320C,7410C,7420C
Related Bug IDs 15666733, 15700466, 15700693

If a system is configured in a cluster, or has ever been configured in a cluster, the configuration restore feature does not work properly. This can lead to appliance panics, incorrect configuration, or a hung system. At the present time, configuration restore should only be used on stand-alone systems.

Release Note RN052
Title Node fails to join cluster after root password change prior to rollback
Platforms 7310C, 7410C, 7320C, 7420C
Related Bug IDs 15649957

In a cluster configuration, if the root password is changed prior to a rollback to an older release, a cluster join failure can occur on that node. If this occurs, change the root password on the node that was rolled back to match the other node and perform a reboot. Once both nodes are at the same version and operating as a cluster, the root password can be changed again as needed.

Release Note RN067
Title SMB operation during AD outages may create damaged ACLs
Platforms All
Related Bug IDs 15565116

SMB operations while the Active Directory (AD) domain controllers are unavailable can yield damaged ACL entries. In particular, problems can arise if Windows group SIDs are present in the ACLs on dataset roots. Access control may not behave correctly, but it will recover once the domain controllers become available and the cached entries expire, which normally takes approximately 10 minutes. SMB client sessions started during that period might need to be restarted. Any ACL that includes a Windows group and is written during such an outage will be damaged. A CIFS client can be used to repair damaged entries, as sketched below.
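
Once the domain controllers are reachable again, damaged entries can be repaired from any Windows client with appropriate privileges. The sketch below uses icacls as one possible tool; the UNC path, domain, group name, and permission set are illustrative and should be replaced with the values the ACL is supposed to contain. The Security tab in Windows Explorer can be used instead.

C:\> icacls \\appliance\projects\engineering
C:\> icacls \\appliance\projects\engineering /grant "EXAMPLEDOM\Engineering:(OI)(CI)M"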

Release Note RN068
Title Cannot modify MTU of datalink in an IPMP group
Platforms All
Related Bug IDs 15708978

As of the 2011.1 release, the datalink MTU can be explicitly set via the appliance BUI. However, attempting to change the MTU of a datalink with an IP interface in an IPMP group causes the datalink to enter the maintenance state. If this occurs, destroy the datalink and recreate it with the desired MTU before placing it back into the IPMP group.

Release Note RN069
Title SNMP does not work on appliances with > 12 IP interfaces
Platforms All
Related Bug IDs 15680487, 15696295

If more than 12 IP interfaces are configured, the SNMP service may go into the maintenance state or saturate a CPU, and will often fill its log file with error on subcontainer ‘interface container’ insert (-1). If any of these problems occur, either disable SNMP or reduce the number of IP interfaces.

Release Note RN071
Title Resilver can severely impact I/O latency
Platforms All
Related Bug IDs 15701038

During a disk resilver operation (e.g., due to activating a spare after a disk failure), latency for I/O associated with the containing pool may be severely impacted. For example, the “NFSv3 operations broken down by latency” Analytics statistic can show 2-4 second response times. After the resilver completes, I/O latency returns to normal.

Release Note RN072
Title Windows 2008 R2 IB client may fail to ping appliance
Platforms All
Related Bug IDs 15746292

Due to what appears to be an initiator-side problem, Windows 2008 R2 InfiniBand (IB) initiators may be initially unable to access the appliance. If this occurs, disable and re-enable the IB port on the initiator side by navigating to Network Connections, right-clicking on the appropriate IB port, selecting “disable,” right-clicking again, and selecting “enable.”
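
The same disable/enable cycle can also be performed from an elevated command prompt, which may be convenient if the workaround needs to be scripted. The interface name "IB Adapter" below is illustrative; use the name reported by the first command.

C:\> netsh interface show interface
C:\> netsh interface set interface name="IB Adapter" admin=disabled
C:\> netsh interface set interface name="IB Adapter" admin=enabled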

Release Note RN075
Title NDMP-ZFS backup limitations for clones
Platforms All
Related Bug IDs 15716003

First, to successfully back up and restore a clone by itself (i.e., without backing up its containing project), ZFS_MODE=dataset must be set in the data management application. Second, to successfully back up and restore a project that contains a clone whose origin resides in the same project as the clone, use ZFS_MODE=recursive (the default mode). Third, to successfully back up a project containing a clone whose origin resides in a project different from the clone, back up the shares of the project individually using ZFS_MODE=dataset. (This even applies to shares that are not clones, although at least one will be a clone.) These limitations may be lifted in a future release. For more information on NDMP and the “zfs” backup type, refer to http://www.oracle.com/technetwork/articles/systems-hardware-architecture/ndmp-whitepaper-192164.pdf

Release Note RN077
Title Revision B3 SAS HBAs not permitted with 2011.1 release
Platforms 7210,7310,7410
Related Bug IDs 15749140

Due to a defect with Revision B3 SAS HBAs, which is exposed by changes in the 2011.1 release, 7210, 7310, and 7410 appliances with Revision B3 SAS HBAs will be prevented by the appliance kit update health check software from upgrading to 2011.1. If this occurs, please contact Oracle Support about an upgrade to Revision C0 SAS HBAs.

Release Note RN078
Title "ZFS" should be used instead of "Sun" in /etc/multipath.conf
Platforms 7120,7320,7420
Related Bug IDs 15760277, 15761179

When configuring Linux FC Multipath client initiators for use with 7120, 7320 and 7420 platforms, the product string in the /etc/multipath.conf file should be "ZFS Storage 7x20". 7x10 platforms should continue to use the "Sun Storage 7x10" product string.
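
A sketch of the corresponding /etc/multipath.conf fragment follows. Only the product string comes from this note; the vendor value and the remaining attributes are assumptions and should be verified against Oracle's multipath configuration guidance for your Linux distribution and against the SCSI inquiry data reported by the appliance (for example, via multipath -ll).

devices {
        device {
                # Vendor string is an assumption; confirm it against the inquiry data.
                vendor  "SUN"
                # Product string per this release note; 7x10 platforms keep "Sun Storage 7x10".
                product "ZFS Storage 7x20"
                # Remaining attributes (path_grouping_policy, failback, and so on) should
                # follow the published multipath guidance for the appliance.
        }
}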

Release Note RN082
Title Following a SIM upgrade, a Logzilla may be left with a single path
Platforms All
Related Bug IDs 15754494

If this problem occurs, SIM upgrades will stop, and the UI will report a single path to the affected device from both heads in a clustered system. To re-enable the path and allow SIM upgrades to continue, re-seat the affected Logzilla device: pull it from its bay, wait 10 seconds, and then re-insert it.

Release Note RN089
Title Restored or replicated share SMB names may get changed
Platforms All
Related Bug IDs 15768498

Shares that are restored from an NDMP backup, as well as replicated shares, may have different SMB names than the original share. To work around this issue, manually update the incorrect SMB names.

Release Note RN093
Title Shadow migration issues
Platforms All
Related Bug IDs 15654495, 15661918, 15595857, 15702398

Avoid shadow migrating filesystems that have thousands of files and/or directories in the root directory of the source file system. If errors are encountered while migrating the root directory of the source file system, the migration may fail to make progress; cancel it and restart it after fixing the errors. Losing a shadow migration source can have a severe negative impact on sharing of all other file systems. Restore access to the source as soon as possible, or cancel the migration if access to the source cannot be re-established.
