ak-2013.1.0.1 Release Notes


Oracle ZFS Storage Appliance - 2013.1.0.1 Release Notes

Introduction

About This Release

These are the release notes for the Oracle ZFS Storage Appliance software version 2013.1.0.1. Following on from 2011.1, 2013.1 is a new major release of the appliance controller software, and 2013.1.0.1 is the first public release of 2013.1. Versioning for this major release will be 2013.1.<minor-version>.<micro-version>.

In Maintenance -> System -> Updates, the full build version string is reported as 2013.06.05.<minor-version>.<micro-version>,<external-build-number>.

The full version for 2013.1.0.1 is 2013.06.05.0.1,1-1.2.

Uploading Update Packages to Your Appliance

The .zip file downloaded from My Oracle Support cannot be uploaded to the appliance directly. Follow these steps:

  • Download the [PATCH_ID].zip file from My Oracle Support to a local filesystem accessible from your desktop.
  • Unzip that file; a command-line example is shown after this list.
  • A directory named All_Supported_Platforms is created. In that directory the software update file has a .pkg.gz file name extension. Update your appliance using that file, following the instructions in the online help of the software currently installed on the appliance.
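For example, from a UNIX-like desktop shell (the .zip file name below is a placeholder for the actual [PATCH_ID].zip file downloaded from My Oracle Support):

    $ unzip PATCH_ID.zip            # creates the All_Supported_Platforms directory
    $ ls All_Supported_Platforms    # contains the software update .pkg.gz file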

What's New

New platform support

The 2013.1.0.1 release is the minimum release required for the Oracle ZFS Storage Appliance ZS3 family, including the ZS3-2 and the ZS3-4.

Alerting - pool/project/share capacity threshold alerts

A new group of "capacity" threshold alerts is available in 2013.1.0.1, for triggering actions based on pool, project, or individual share capacity. The four new alerts are:

  • Capacity: system pool bytes used
  • Capacity: system pool percent used
  • Capacity: capacity bytes used
  • Capacity: capacity percent used

The first two apply only to the system pool. The second two apply to user data pools, and allow the threshold criteria to be specified in terms of pools, projects, and individual shares, such as "any pool capacity exceeds 80%", "any project in any pool exceeds 85%", "any share in project foo of pool pool-0 exceeds 85%", and so on.
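As a purely hypothetical CLI sketch (the alert context, statistic name, and property names below are assumptions made for illustration; consult the online help for the actual procedure for configuring capacity threshold alerts), a pool capacity alert might be configured along these lines:

    hostname:> configuration alerts thresholds create
    hostname:configuration alerts threshold (uncommitted)> set statname=<capacity statistic>
    hostname:configuration alerts threshold (uncommitted)> set limit=80
    hostname:configuration alerts threshold (uncommitted)> commit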

Analytics - dataset management

There are three facilities for reducing or limiting the size of an analytics dataset:

  • Deleting all saved data for a dataset.
  • The ability to specify an analytics data retention policy that is automatically applied to all datasets; for per-second, per-minute, and per-hour data you can choose either to retain all data (the default) or to retain only a chosen minimum period at each level of granularity.
  • The ability to prune a dataset on demand by removing the fine-grained individual statistics from a dataset, with the option of applying such filtering only to statistics older than some chosen period.

In addition, capacity threshold alerts for the system pool can be configured to guard against excessive dataset space usage filling the system pool.

NDMP - Analytics

New NDMP protocol statistics are available, with an extended set of breakdowns:

  • NDMP bytes, as a raw statistic or broken down by:
    • client
    • device
    • file name
    • type of operation
    • session
  • NDMP operations, as a raw statistic or broken down by:
    • client
    • device
    • file name
    • type of operation
    • session
    • latency
    • offset
    • size

As usual, threshold alerts can be configured for these statistics. The new statistics augment the previous set of four NDMP statistics.

NDMP - token-based backup for ZFS backup types

Past releases have supported only level-based incremental backups for the "zfs" backup type. In 2013.1.0.1, support for token-based backup for "zfs" backup types is included; whether it is available to you depends on the backup application (DMA) in use at your site.

Token-based backup is an alternative to level-based backup, in which incremental backup timestamps are recorded and retained by the DMA instead of NDMP. The DMA receives this information in the form of a "token", and manages future incremental backups by incorporating a selected token into a future incremental backup request.

Token-based backup is not enabled by default. See Configuration->Services->NDMP to enable it. Note that a past level 0 dump cannot serve as a base for future token-based incrementals - when you switch to token-based backup, a new full dump will be required to act as the base.

Note: Tivoli TSM is not supported for token-based backup in this release.

NFS - maximum number of UNIX groups per user

In the 2013.1.0.1 release, the maximum number of UNIX groups that an NFS user on a client may be a member of is increased to 1024 from 16. This functionality has already been backported to the 2011.1.6 release.

NFSv4 - planned GRACE-less recovery

In previous releases, a GRACE period, which defaults to 90 seconds, applied during NFS service restarts and during cluster takeover or failback events. During this time NFSv4 activity is suspended while clients reclaim their state.

With planned GRACE-less recovery, the GRACE period can be avoided. It is always avoided on NFS service restarts and cluster failback events. It is also avoided on a takeover event provided it is not a forced takeover (i.e., not 'configuration cluster takeover') but instead initiated by a reboot (maintenance system reboot) of the head whose resources are being taken over.

Networking - VNIC support

VNIC support is added in 2013.1.0.1. Physical datalinks are created atop devices, as before, and one or more virtual datalinks (VNICs) can be created atop each physical datalink. Interfaces are built on top of datalinks, as before, but can now be built on virtual as well as physical datalinks.

Traffic for a given VNIC flows through the corresponding physical datalink and device upon which the VNIC is ultimately built.

In a clustered configuration, VNIC-based interfaces can be assigned as singleton or as private resources as usual. VNIC-based interfaces do not have to be owned by the same head as normally owns the underlying physical device; assigning resource ownership of a VNIC-based interface to the opposite head permits the active use of that network device on the other head when it would previously have sat idle in standby, awaiting takeover. For example, a physical interface built atop ixgbe0 could be owned by head A and a VNIC-based interface built atop ixgbe0 could be assigned to head B; traffic on the VNIC-based interface would utilize ixgbe0 on head B, which would previously have sat idle awaiting takeover.

Networking - small MTU support

Previous releases have enforced a minimum datalink MTU of 1500 bytes. In 2013.1.0.1 the minimum MTU is reduced to 1280 bytes, which leaves headroom for networking hardware on the path between the server and client to grow the packet and not exceed 1500 bytes. The default MTU remains 1500 bytes.
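For example, assuming a datalink named ixgbe0 and the standard datalink mtu property (names will vary with your configuration), a smaller MTU could be set from the CLI roughly as follows:

    hostname:> configuration net datalinks select ixgbe0
    hostname:configuration net datalinks ixgbe0> set mtu=1280
    hostname:configuration net datalinks ixgbe0> commit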

OISP - Oracle Intelligent Storage Protocol

Configuring Oracle Database to use NFS for database files stored on an Oracle ZFS Storage Appliance has been made much easier and less error-prone with the introduction of OISP support in 2013.1.0.1 and Oracle Database 12c. With OISP, new database files are automatically created with the correct record size, and writes are automatically optimized by choosing the appropriate ZFS synchronous write bias (Latency or Throughput) based on the database operational context and file type.

Note: OISP requires the use of Oracle Database Direct NFS Client configured to use NFSv4.
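For reference, Direct NFS Client mounts are described in the database server's oranfstab file. A minimal sketch with placeholder names, addresses, and paths, selecting NFSv4 as OISP requires, might look like the following (consult the Oracle Database 12c documentation for the authoritative oranfstab syntax):

    server: zfssa-head1
    path: 192.0.2.10
    export: /export/oradata mount: /u02/oradata
    nfs_version: nfsv4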

Replication - Analytics

Past releases have had no statistics available for replication. The 2013.1.0.1 release introduces the following statistics:

  • Replication bytes, as a raw statistic or broken down by direction, type of operation, peer, pool, project, or dataset.
  • Replication latency, as a raw statistic or broken down by direction, type of operation, peer, pool, project, or dataset.
  • Replication operations, as a raw statistic or broken down by direction, type of operation, peer, pool name, project, dataset, latency, offset, or size.

Replication - audit events

The 2013.1.0.1 release adds the following audit events for replication:

  • target creation
  • target modification (rename, change target IP)
  • target removal
  • request to create a replication action
  • request to remove a replication action
  • cloning a replica
  • reversing a replication
  • severing a replication

Replication - modifiable target IP address

In 2013.1.0.1 you can change the hostname property of a replication target and specify a new hostname or IP address. The hostname must still resolve to the same actual appliance. In past releases it was not possible to change the IP address of a replication target other than by deleting and recreating the target, after first removing all replication actions using that target; after the actions were recreated, full resynchronizations were required.
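For example, assuming a replication target named target-000 that is reached under configuration services replication targets in the CLI (the CLI path and target name here are assumptions for illustration; the hostname property is as described above):

    hostname:> configuration services replication targets select target-000
    hostname:configuration services replication target-000> set hostname=replica.example.com
    hostname:configuration services replication target-000> commit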

Replication - progress monitoring

In past releases, replication progress has been difficult to monitor - source and target would indicate that a replication action was active but did not indicate how much had completed nor how much data remained.

In 2013.1.0.1, the source system includes replication progress information which indicates the estimated total size of the send stream, the amount that has already been sent, the amount remaining, and the average throughput and estimated time remaining. Information on the target side is unchanged.

SAN - multiple groups per initiator

Past releases have restricted a given initiator to appearing in no more than one initiator group. The 2013.1.0.1 release removes this restriction, and an initiator may now feature in more than one initiator group.

Unlike the related feature permitting multiple initiator groups per LUN, this feature is not tied to a deferred update and has no implications for replication compatibility.

SAN - multiple initiator groups per LUN

In past releases, no more than a single initiator group could be associated with a given LUN (when an initiator group is associated with a LUN only initiators in that group may access the LUN). In the 2013.1.0.1 release more than one initiator group may be associated with a single LUN, permitting more-flexible configurations.

This feature is tied to a deferred update - on upgrade to 2013.1.0.1 the feature is not automatically enabled, and is only enabled when all deferred updates are applied. Once you apply deferred updates you cannot roll back to previous system software.

This feature also has an impact on compatibility for remote replication - see the Remote Replication Compatibility section under Compatibility and Interoperability below.

SMB - non-privileged domain join

In past releases an AD domain administrator user and password were required in order to join an appliance to an AD domain. In 2013.1.0.1, non-administrative users (authenticated users in the Domain Users group) can join an appliance to an AD domain where the computer account for that system has been pre-staged by a domain administrator.

Scheduled snapshot labels

An optional label can be specified for a snapshot schedule and is used in the names of snapshots taken by that schedule. Scheduled snapshots are now named .auto[-<snaplabel>]-<formatted-timestamp>, for example .auto-halfhourly-2013-07-19T04:18:00UTC.

Feature Changes

Revised SAN configuration BUI layout and CLI commands

The UI for SAN configuration has undergone a reorganization in 2013.1.0.1. In both the BUI and CLI for Configuration -> SAN, the first level of selection is now for protocol (Fibre Channel, iSCSI, SRP) followed by a selection of targets and initiators for that protocol. This differs from previous releases in which you first select targets or initiators and then the protocol you wish to configure.

In the CLI of previous releases, configuration san had children targets and initiators. Each of those in turn had children fc, iscsi, and srp. In 2013.1.0.1, configuration san has children fc, iscsi, and srp and each of those has children targets and initiators.

Existing scripts that use the outgoing CLI SAN configuration command hierarchy will continue to work - the old command hierarchy is still available (although list will show only the revised command hierarchy). If and when new protocol support is added it will be available only under the revised CLI hierarchy.
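For example, Fibre Channel targets can be listed through either hierarchy; the revised form is shown first, with the outgoing form that existing scripts may still use shown second:

    hostname:> configuration san fc targets list
    hostname:> configuration san targets fc list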

SAN - FC ports now default to target mode

Fibre channel ports now default to target mode, which is the more common requirement. A mode change continues to require a reboot to take effect. If the appliance is connected to a tape SAN for backup, one or more of the FC ports will need to be switched to initiator mode and the system rebooted for the change to take effect.

Support Notices

Summary of Controller Support

The following table summarizes the controllers supported by the 2013.1.0.1 release:

Product Name Supported?
Sun Storage 7110 No
Sun Storage 7210 No
Sun Storage 7310 No
Sun Storage 7410 No
Sun ZFS Storage 7120 Yes
Sun ZFS Storage 7320 Yes
Sun ZFS Storage 7420 Yes
Oracle ZFS Storage ZS3-2 Yes
Oracle ZFS Storage ZS3-4 Yes

No support for legacy 7110, 7210, 7310, 7410 systems

Beginning with the 2013.1.0.1 software release, the original series of Sun Storage Appliance systems, namely the 7110, 7210, 7310 and 7410, are no longer supported. If upgrade packages are downloaded to any of these four systems, they will unpack successfully, but when an attempt is made to apply the waiting upgrade it will fail with an alert message "SUNW,{iwashi,fugu,maguro,toro} is not supported in this release". Additionally, if you have an older disk shelf attached to the system you will also see an alert "J4400 and J4500 disk shelves are not supported in this release".

Summary of Disk Shelf Support

The following table summarizes the disk shelves supported in the 2013.1.0.1 release:

Product Name Supported?
Sun Storage J4400 No
Sun Storage J4500 No
Sun Disk Shelf SAS-2 (Sun Storage J4410; DS2) Yes
Oracle Storage DE2-24C Yes
Oracle Storage DE2-24P Yes

No support for J4400 and J4500 disk shelves

The Sun Storage J4400 and Sun Storage J4500 disk shelf products are not supported in the 2013.1 major release. The J4500 disk shelf is supported only on the Sun ZFS Storage Appliance 7210, and the J4400 is supported only on the Sun ZFS Storage Appliance 7110/7310/7410, none of which is supported in 2013.1.

It is not possible to connect these legacy disk shelf products to a 2013.1 supported system, not even temporarily for the purposes of data migration. Data migration, if required, must be performed by some external copy mechanism or through use of Shadow Migration or Remote Replication.

Intermix of J4410 and DE2-24P or DE2-24C

The 2013.1.0.1 release supports intermix of Sun Disk Shelf SAS-2 and Oracle Storage DE2-24P or DE2-24C disk shelves on the same controller, but with two key requirements:

  1. A system with intermixed disk shelves must use the new Sun Storage 6 Gb SAS 16 Port PCIe HBA, and must have no Sun Storage 6 Gb SAS 8 Port PCIe HBA present. The ZS3-2 and ZS3-4 ship with the new 16 port HBA; older systems (7120, 7320, 7420) need to have their 8 port HBAs replaced with Sun Storage 6 Gb SAS 16 Port PCIe HBAs in order to support intermix.
  2. Intermix is not supported on the same SAS fabric. This means that only disk shelves of the same generation (Sun Disk Shelf SAS-2 or Oracle Storage DE2-24C/P) may be chained together off of the same HBA port.

The Sun Storage 6 Gb SAS 16 Port PCIe HBA is not supported in the 2011.1 release, so intermixing may only be performed after upgrade to 2013.1. If rolling back from such an upgrade, the original 8 port HBA will need to be restored to service.

In all cases, involving your support representative is suggested when configuring an intermix of disk shelves. If upgrading from a 7120, 7320 or 7420 with Sun Disk Shelf SAS-2 to an intermixed configuration running the 2013.1 release, such involvement is strongly recommended - please contact your support representative to obtain specific requirements and additional information.

Minimum version to upgrade from is 2011.1.4.2

The minimum software version from which you can upgrade to 2013.1.0.1 is 2011.1.4.2. If a system is running an earlier software version and 2013.1.0.1 update packages are uploaded, they will be listed as "unavailable" for upgrade until such time as the system is first upgraded to 2011.1.4.2 or later.

A 2011.1.4.2 system reports version string 2011.04.24.4.2,1-1.28 in Maintenance -> System -> Updates.

Support for multi-switch link aggregation

While the IEEE802.3ad (link aggregation) standard does not explicitly support aggregations across multiple switches, some vendors provide multi-switch support via proprietary extensions. So long as a switch configured with those extensions still conforms to the IEEE standard (and thus that the extensions are transparent to the end-nodes), its use is supported with the storage appliance. However, if an issue is encountered, Oracle support may require it to be reproduced on a single-switch configuration.

Upcoming changes to supported browsers

2013.1 will be the last major version with support for the current list of browsers. The new baseline for supported browsers will be Internet Explorer 9 or later, Safari 5.1 or later, and up-to-date versions of Chrome, Firefox and Opera.

Deferred Updates

What are Deferred Updates?

When upgrading appliance software from one release to another, some features of the new release can be delivered as "deferred updates", meaning that they are not available until the administrator applies all outstanding deferred updates. Features delivered in this way are those that have some impact on compatibility, or on the ability to roll back to previously active appliance software.

Once deferred updates are applied, you cannot easily roll back to previously active appliance software. For example, some deferred updates will progress the ZFS pool version beyond what the previous software understands and the pool will fail to import after rollback. The appliance software does not block an attempt to roll back, but in most cases (e.g., unless you have unconfigured the pool) rollback will fail. You cannot select between unapplied deferred updates - you either leave them all unapplied, or apply all those that are outstanding. Deferred updates accumulate as appliance software updates are performed; for example, a system upgraded to 2013.1.0.1 from a 2011.1 release may have unapplied deferred updates as delivered in the 2011.1 releases.

Note that factory-installed systems have all features active - no unapplied deferred updates. Similarly, upgrading a system and subsequently performing a factory reset will result in a system with all deferred update features active.

See Maintenance:System:Updates#Deferred_Updates in the online help for important information on deferred updates before applying them.

2013.1.0.1 Deferred Update - Support for multiple initiator groups per LUN

The 2013.1.0.1 release includes a single new deferred update: support for associating multiple initiator groups with a LUN (see the feature description earlier in these notes).

Note that, as described below, enabling the multiple initiator groups per LUN feature has implications for remote replication compatibility.

Compatibility and Interoperability

Remote Replication Compatibility

A restriction applies with regard to replication compatibility in the presence of the "multiple initiator groups per LUN" feature:

If the source system both has LUNs for replication and has the "multiple initiator groups per LUN" feature enabled, then the target appliance for replication must also have the feature enabled. Other combinations are not subject to this restriction (for example where no LUNs are being replicated, or where only the target system has the feature enabled and not the source).

The "multiple initiator groups per LUN" feature is delivered in both 2011.1.1.8 and 2013.1.0.1 releases. In both cases it is the subject of a deferred update, meaning that a system that is upgraded to one of those releases (or later) will not enable the feature until such time as the administrator requests the application of all deferred updates. A system whose initial install was of 2011.1.1.8 or 2013.1.0.1, or which was upgraded to one of these releases (or later) and subsequently factory-reset, will have the feature enabled.

Where a source system with this feature enabled attempts to replicate to a "down-rev" system without the feature enabled, replication will fail with an alert as follows: "Replication of <project[/share]> failed because the target system's software does not support replication updates from this system. Check the replication documentation for details."

SMF repository upgrade when upgrading to 2013.1.0.1 or later

When upgrading to 2013.1 from a 2011.1 release, an upgrade to the SMF (service management facility) repository is performed during the reboot that is part of upgrade. Repository upgrade progress is reported to the console, and typically takes between 3 and 4 minutes. A configuration that includes very large numbers of VLANs or link aggregations, or which has a very complex IPMP setup, will take longer - up to 11 minutes has been seen on systems with the maximum number of VLANs and so on configured.

It is the upgrade from version 6 to version 7 which dominates the repository upgrade time. A typical number of rows to upgrade is between 6000 and 7000, and that will complete in a total time of 3 to 4 minutes. A configuration with large numbers of VLANs and so on (as above) can have up to 13000 rows to upgrade from version 6 to 7, and that takes 10 to 11 minutes.

Prefer reboot to forced cluster takeover

In a cluster, a "takeover" operation initiated on head B using Configuration->Cluster->Takeover to take over from head A is considered a forced takeover operation. In routine service procedures, such as a rolling cluster upgrade, it is preferable to reboot (or power off, if that is what is required) head A via the UI on head A - head B will take over automatically anyway. Such a co-operative takeover both completes a little quicker and allows features such as planned GRACE-less recovery for NFSv4 to operate.
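For example, during a rolling upgrade in which head B is to take over head A's resources, prefer a reboot issued on head A over a forced takeover issued on head B:

    headA:> maintenance system reboot             (preferred; head B takes over co-operatively)

    headB:> configuration cluster takeover        (forced takeover; avoid for routine procedures)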

NTLMv2 is recommended as the minimum SMB LAN Manager authentication level, but is not the default

In Configuration->Services->SMB, the default LAN Manager compatibility level is "4", which permits both NTLM and NTLMv2 authentication. NTLM has been shown to be vulnerable; once all clients have been made NTLMv2-capable, it is recommended that the minimum level acceptable to the server be changed to "5", which permits only NTLMv2 authentication. This is not the default in order to avoid breaking existing client setups on upgrade of the server to 2013.1.0.1 or later.
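As a CLI sketch, assuming the SMB service exposes the compatibility level as an lmauth_level property (the property name here is an assumption; the BUI setting described above is authoritative):

    hostname:> configuration services smb
    hostname:configuration services smb> set lmauth_level=5
    hostname:configuration services smb> commit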

Kerberos does not support weak encryption types by default

Support for weak encryption types in Kerberos is disabled by default in the 2013.1.0.1 release. The weak types are arcfour-hmac-md5-exp, des-cbc-md5, and des-cbc-crc. If your environment uses one of these weak ciphers then, until your infrastructure adopts stronger ciphers, you can enable support for the weak encryption types by navigating to Configuration->Services->NFS and checking "Allow weak encryption types in Kerberos" in the BUI, or by running 'set krb5_allow_weak_crypto=true' followed by 'commit' at the CLI.
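For example, at the CLI:

    hostname:> configuration services nfs
    hostname:configuration services nfs> set krb5_allow_weak_crypto=true
    hostname:configuration services nfs> commit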

iSCSI clients should not have a too-short session replacement timeout value

Some iSCSI clients configure a default session replacement timeout or connection retry timeout that is too short, which can lead to timeouts during cluster takeover and failback, particularly when very large numbers of logical units and/or shares are exported. It is recommended that the iSCSI session replacement timeout or connection timeout be set to at least 240 seconds; this is a client-side configuration option - nothing needs to be changed on the appliance. Please consult the appropriate client software documentation to modify this setting.
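How the timeout is configured depends on the initiator implementation. As one illustrative example, a Linux client using the open-iscsi initiator controls this through the replacement timeout setting in /etc/iscsi/iscsid.conf (other initiators, such as those on Solaris, Windows, or VMware, have their own equivalents - consult their documentation):

    # /etc/iscsi/iscsid.conf (Linux open-iscsi example)
    node.session.timeo.replacement_timeout = 240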

Oracle Solaris 11 NFS/RDMA clients should be upgraded to SRU10 or later

Oracle Solaris 11 clients of an Oracle ZFS Storage appliance that are to use NFS over RDMA (InfiniBand) should be updated to Solaris 11 Update 1 SRU 10 or later.

If you are running an NFS/RDMA client that predates Solaris 11 Update 1 SRU 10 and experience a problem in which the appliance NFS service appears to hang during appliance shutdown or cluster failback, then it may be an instance of this problem.

Known Issues

Each known issue is identified with a label of the form "RNnnn". Known issues are tracked in subsequent minor/micro release notes using the same identifier label. These labels are unique across all 2013.1 releases; when an issue is resolved, the identifier label will not subsequently be reused to track another issue.

Release Note: RN001
Title NAS replication - can't clone project shared via CIFS
Description If a replication package is shared via SMB, either because it is shared on the source via SMB or because an administrator has chosen to share the package from the target system using SMB, then an attempt to clone such a replication package will fail with the following message: The clone operation failed because one or more of the shares' SMB resource names is already in use. Check that the replicated project and its shares are not currently shared via SMB. Note that "shared via SMB" means that the package has a value other than "off" for the sharesmb property (or for the resource name under Shares -> Protocols -> SMB in the BUI); the clone operation will fail even if the SMB service itself is disabled.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 15615612
Workaround Before performing replication package clone, disable SMB sharing for the package (at project level and for any shares that do not inherit SMB sharing from the project) on the target system by setting the sharesmb resource value to "off" after noting the existing value. After performing the clone operation, enable SMB sharing for any newly created clones if needed (they'll have been cloned to the 'off' value) and reinstate the SMB sharing of the package if required. Note that after changing SMB sharing options for a replication package it will no longer reflect changes on the replication source and will need to be reset to the desired value after the package is severed or reversed. A clone operation that has failed because of this limitation fails cleanly, and the operation can be repeated after performing the workaround steps above with no need to tidy up first.
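As a rough CLI sketch of the property change (the project name below is a placeholder, and a replicated package's project is reached via the replication package context rather than directly as shown here; remember to note the existing value first):

    hostname:> shares select proj0
    hostname:shares proj0> get sharesmb
    hostname:shares proj0> set sharesmb=off
    hostname:shares proj0> commit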
Release Note: RN002
Title replication fails in zfs_receive - does not match incremental source
Description If a running replication send is cancelled and then restarted and some snapshots within the replication action are deleted while the restarted replication send is running, then in some cases the replication may fail. The source and target alert logs do not include a specific failure reason for this case, and a service representative would have to confirm an instance of this issue.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 15771270
Workaround Retry the replication, and do not delete snapshots while it runs. This will clean up the snapshot discrepancies and allow the replication to complete successfully. If the repeated replication attempt fails then it is not this problem that is being encountered. Log a service call if problems persist.
Release Note: RN003
Title Unable to restore multiple files in different directories via NDMP
Description Direct Access Recovery (DAR) restores with multiple files in different directories fail with error NDMP_RECOVERY_FAILED_NOT_FOUND. How and where the error is reported and logged depends on the Data Management Application (DMA) in use; the error is not logged on the appliance. The following example is from a Symantec Netbackup (tm) bptm log: 5/26/2013 9:51:03 AM - Error ndmpagent(pid=18291) 192.168.5.149: Recovery status NDMP_RECOVERY_FAILED_NOT_FOUND for file "/export/foo/bar/file1"
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 15798321
Workaround Perform non-DAR enabled restores, or perform multiple restore operations within single directories. How to request restore without using DAR varies between data management applications.
Release Note: RN004
Title ixgbe link autonegotiation can fail on some switch types
Description The ixgbe NIC, as present on the Oracle ZFS Storage ZS3-2 and Oracle ZFS Storage ZS3-4, can experience inconsistent link autonegotiation on some switch types. For example a link speed of 100 Mbit/s may sometimes be negotiated when 1000 Mbit/s is expected. This is known to happen intermittently on an Extreme Networks Summit x325-24t switch, although it is not believed to be a problem with the switch itself.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 15993107
Workaround Confirm in Configuration->Net->Datalinks that the autonegotiated datalink speed and duplex settings are as expected. If not, two workarounds are available:
  1. Use a crossover cable - link autonegotiation when using a crossover cable behaves as expected; or
  2. Set the datalink properties for speed and duplex in the UI - with specified settings no autonegotiation is attempted.
Release Note: RN005
Title Panic at failed to add interface to smb
Description During a cluster takeover or failback operation, a cluster node importing or exporting SMB resources can panic with a message PANIC: failed to import ak:/smb/aggr87001: failed to add interface to smb: aggr87001 or PANIC: failed to export ak:/smb/ixgbe240002: failed to remove interface 'ixgbe240002' for host 'hostA' in which the particular interface that fails to import or export may vary. Configurations that have a very large number of network interfaces configured are more likely to see this failure, but even in such configurations the problem typically affects less than 5% of takeover/failback operations.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16290949
Workaround There is no suggested workaround.
Release Note: RN006
Title Anonymous LDAP doesn't work - can't select "none" authentication method
Description Configuring LDAP to use anonymous binding (checking "Bind credential level" of "Anonymous" in the BUI, or setting "cred_level" to value "anonymous" at the CLI) does not work. A system that was previously configured to use anonymous binding and which is upgraded to 2013.1.0.1 will also malfunction. In both cases, Maintenance->Problems will show the LDAP service in a maintenance state.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 17450664
Workaround Select "Simple (RFC 4513)" for the "Authentication method" ("auth_method" in the CLI) and select "Proxy" for the "Bind credential level" ("cred_level"). Enter "cn=dummy" in the DN field ("proxy_dn" in the CLI), and "dummy" in the Password field ("proxy_password" in the CLI). Apply or commit the changes, then select "Anonymous" for the "Bind credential level" and apply or commit again. Confirm that the LDAP service comes online and stays online.
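A CLI sketch of this workaround follows; the property names are those given above, while the value strings (such as simple and proxy) are assumptions - confirm them with the CLI's built-in help:

    hostname:> configuration services ldap
    hostname:configuration services ldap> set auth_method=simple
    hostname:configuration services ldap> set cred_level=proxy
    hostname:configuration services ldap> set proxy_dn="cn=dummy"
    hostname:configuration services ldap> set proxy_password=dummy
    hostname:configuration services ldap> commit
    hostname:configuration services ldap> set cred_level=anonymous
    hostname:configuration services ldap> commit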
Release Note: RN007
Title stale pool config when failback races with fmd and sysevent handler
Description If a pool configuration is modified from the non-owner head and the pool immediately passed back to the designated owner via a failback operation, there is a small chance that import on the designated owner can fail: PANIC: failed to import ak:/zfs/pool_080: cannot open 'pool_080': pool is unavailable. Modifying a pool configuration includes the operations of initial pool creation, adding additional storage to a pool, unconfiguring a pool, repairing a disk in a pool, offline/online of a disk in a pool, and so on.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16979558
Workaround Perform such pool modification operations from the head that should own the pool. If you do modify pool configuration from the non-owner head, wait at least 3 minutes after the operation before attempting failback.
Release Note: RN008
Title datasets with lots of breakdowns causes memory fragmentation in akd
Description If analytics breakdowns by filename or latency are enabled for over two weeks, then a memory fragmentation issue can cause the management interface to stop gathering further analytics data and generally to become unresponsive. Data services such as NFS are unaffected, but services such as replication that are managed by the management interface would be impacted.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16187433
Workaround Latency and filename analytics breakdowns should be enabled for relatively short periods of time when a particular issue is being investigated, and should be destroyed when not in use.

If such breakdowns have been in use for an extended period, then monitor the memory usage of the management application as shown by 'status memory get management' in the CLI and labelled 'Mgmt' in the memory usage pie chart of the BUI Status dashboard. If the memory usage of the management interface grows larger than 2 GB, destroy analytics datasets not in use starting with those with filename or latency breakdowns. If the management system memory continues to grow, restart the management system from the CLI using 'maintenance system restart'.

If filename or latency breakdowns are enabled for an extended period and memory fragmentation issues begin to affect the operation of the management application then there is no ready means to distinguish an instance of this problem vs some other issue affecting the management application. If the management application appears unresponsive (BUI/CLI hang) and you do not have good reason to suspect this issue then do not routinely restart the management application - raise a service call. On attempted CLI login to a system in which the management application is hung you will likely be dropped to an emergency shell; do not perform any underlying shell operations here, and raise a service call instead.
Release Note: RN009
Title 3-way direct restore hangs waiting for tape server to send data
Description NDMP 3-way restores can hang in a specific set of circumstances:
  • the original backup included multiple datasets in a single backup and spanned multiple tapes, and
  • the administrator has selected specific datasets to restore, and
  • the first dataset selected for restore is not on the first tape
In that set of circumstances the 3-way restore will hang until it times out or is cancelled, and no data will be sent for restore. The backup on tape is intact - this is just a problem with restore.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 17046311
Workaround Include a dataset from the first tape in the restore selection, where practical (if this dataset does not require restore then you may need to restore it to a temporary alternate location and then destroy it after the restore completes).
Release Note: RN010
Title 2nd head failed to join domain upon cluster setup and failback
Description This issue can arise during initial cluster configuration where Active Directory (AD) is in use. If the first head has been configured and has joined an AD domain before clustering is configured, the second head may fail to join the AD domain after the initial failback operation. If the second head fails to join the AD domain, an alert will be raised: "The Active Directory service is degraded and cannot access the domain." Another symptom of this problem is that no AD server is displayed in the output of the configuration services ad show CLI command or in the equivalent Configuration -> Services -> Active Directory screen of the BUI.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16324457
Workaround To recover from an instance of this problem, navigate to 'configuration services ad' and ask to join the domain using the 'domain' command in the CLI, or by clicking the "Join Domain" button in the BUI. If this reattempt fails then you are experiencing a different issue - raise a service call if necessary. In initial cluster configuration, if you complete cluster setup and initial failback before joining either head to an AD domain then you can avoid any chance of this problem.
Release Note: RN011
Title Share level replication fails after cloning with "Retain Other Local Settings"
Description If a share that has share-level replication actions defined is cloned using the BUI and in cloning the administrator checks "Retain other local settings" (not the default) then the replication action of the original share will begin to fail on all future updates. Where only project-level replication actions exist (or no replication actions at all) there is no problem. This issue also does not have a CLI equivalent.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16667589
Workaround To avoid this issue, do not select "Retain other local settings" when cloning a share that has share-level replication actions defined. To recover from an instance of this problem, the failing replication action must be destroyed and re-created on both the original share and the clone thereof (if replication of the clone is required).
Release Note: RN012
Title Dynamic DNS does not update the DNS record correctly after takeover completed
Description On a clustered system for which "Enable Dynamic DNS" is checked in Configuration->Services->SMB, Dynamic DNS records are not updated when a takeover or failback operation is completed.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16185989
Workaround Do not enable Dynamic DNS for the SMB service, and manually add DNS records for the cluster system.
Release Note: RN013
Title "software memory scrubber exiting" notice during boot
Description During boot of the 2013.1.0.1 release, a message "NOTICE: Software memory scrubber exiting" is seen on the system console.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 17460135
Workaround This message is harmless and expected, although new in the 2013.1.0.1 release. No workaround is necessary.
Release Note: RN014
Title Alert during upgrade: ZFS device 'mirror' in pool 'pool-0' has insufficient replicas to continue
Description This alert is generated when log device firmware is updated as part of the upgrade to 2013.1.0.1 from either 2011.1.4.2 or 2011.1.5 and the log profile is mirrored.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16980198
Workaround The message is informational only and does not indicate that a problem exists when seen during a firmware upgrade, although it is new in the 2013.1.0.1 release. No action or workaround is necessary.
Release Note: RN015
Title NDMP 3-way restore times out at end-of-session
Description NDMP-based restore of a "zfs" stream, from one appliance (tape server) to another (data server), may suffer a delay of 320 seconds. This delay only occurs in configurations where the tape device is not physically connected to the appliance being restored but rather to a separate appliance on the same network (also called "three-way configurations"). The operation completes successfully.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16948758
Workaround There is no suggested workaround.
Release Note: RN016
Title data disk FW revision not updated on non-owner head
Description During disk firmware upgrade in a rolling upgrade of a clustered system, the head which does not own the disk pool may not report the updated firmware level. If the owning head fails then, following failover, the new head may attempt to update firmware on the newly acquired disks. Since disks are taken offline for upgrade, there is a brief impact to the redundancy profile of storage pools during upgrade. This issue affects disk shelves that are present on the system during an upgrade from the 2011.1 release to 2013.1.0.1. Depending on which 2011.1 release is being upgraded from, there may be more or less disk firmware in need of update: 2013.1.0.1 includes the same disk firmware as 2011.1.8.0, only one update relative to 2011.1.7.0, and so on. You also need to have the relevant disk types present on the system being upgraded. The symptom may also be experienced when attaching a disk shelf to a 2013.1.0.1 system that was previously in use by an appliance running software that predates 2013.1.0.1.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16060318
Workaround There is no suggested workaround.
Release Note: RN017
Title Mac Mountain Lion (10.8) fails to copy files to Solaris/ak-2011.1/ak-2013.1
Description An Apple Mac Mountain Lion 10.8 client that mounts an SMB share from an Oracle ZFS Storage Appliance can fail to copy files to that share. Copying folders works as expected; only file copy is affected. The failure message for file copy is "The operation cannot be completed because an unexpected error occurred (error code -50)".
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 15967211
Workaround Disable named streams on the Apple Mac client for the affected SMB mount point(s), as detailed in Apple support documentation.
Release Note: RN018
Title Out of memory crash cloning hundreds of replicas
Description If the CLI is scripted to perform hundreds of replication package clone operations in quick succession in a loop, then the CLI shell may run out of memory if the script does not pair each 'select' operation with a corresponding 'done' (for example, if 'cd ..' or 'cd /' is used instead of 'done'):
akService.js:50: out of memory
akService.js:50: malformed XML at line 1, column 0: out of memory
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 16602936
Workaround Rework any such script to use 'done' instead of 'cd ..' or 'cd /' when stepping out of replication source and package selections previously made in the script.
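For illustration only (the source and package names below are placeholders), the pattern to follow in such a script is to close each selection with done rather than navigating away with cd:

    shares replication sources
    select source-000
    select package-000
    (clone and related operations here)
    done
    done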
Release Note: RN019
Title ACEs for wingroups get mangled when idmap can't resolve names
Description SMB operation while the Active Directory (AD) domain controllers are unavailable can yield damaged ACL entries. In particular, problems can arise if Windows groups are present in the ACLs on dataset roots because the domain controllers are not immediately available at system startup.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 17291177
Workaround Avoid including Windows groups in dataset root ACLs. Instead, consider mapping those groups to UNIX groups, so the ACLs contain the UNIX group information. An SMB client can be used to repair damaged entries. On a Windows client, navigate to the affected file, right-click on it, select "Properties", then the Security tab, and adjust the ACL as desired.
Release Note: RN020
Title Dynamic DNS not working with IPv6 when using SMB hosts DB
Description Dynamic DNS updates do not work for SMB with IPv6 support enabled.
First Published in Release Notes: 2013.1.0.1
Related Bug IDs 15898691
Workaround Create a manual DNS record for the appliance host IPv6 address.