4.4.3 Beefalo: Data Store
- 1 Data stores
- 1.1 SEP sesam data store concept
- 1.2 Data store capacity
- 1.3 HWM, purge and Si3 repair area
- 1.4 Clean up orphaned savesets
- 1.5 Data store calculation recommendations
- 1.6 Limitations
- 1.7 Data store properties
- 1.8 Savesets tab
- 1.9 Media tab
- 1.10 Actions tab
- 1.11 Si3 State, HPE Catalyst Store State and HPE Cloud Bank Store State tabs
- 2 See also
The SEP sesam data store is a device type used for writing savesets directly to one or several configured storage locations – into the file system. SEP sesam uses a data store instead of a conventional media pool to define a storage repository. The data is still primarily backed up to a media pool; however, a data store is used underneath to save data to dynamically managed data areas, including disk backups.
SEP sesam can contain multiple data stores of different types and sizes based on the type of data being backed up, backup technique, and on your storage location (a local disk, virtualized storage device, storage appliances, etc.). The following data store types are supported:
- Path: The default data store type, relevant for configuring most storage locations, except when one of the following data store types is required, e.g., for deduplication or backing up to cloud storage.
- SEP Si3 deduplication store: Used for target-based (Si3T) and source-based (Si3S) deduplication, replication, Si3 deduplication store seeding, and for backing up to S3.
- NetApp Snap Store: Used for NetApp snapshot backups.
- HPE StoreOnce: Used for integration with the HPE StoreOnce Catalyst storage system.
- HPE Cloud Bank Store: Used for replicating the data to HPE Cloud Bank Storage.
The data store types and their properties have been significantly enhanced with SEP sesam Beefalo. Therefore, depending on your SEP sesam version, the data store window might look different, and the available features and configuration options may vary.
Data store status icons
The data store status icons consist of three parts.
- The top bar shows the clone status of the S3 store.
- The middle bar shows the sanity state of the Si3 store.
- The bottom bar shows the data store status.
|The data store state has errors.|
|The data store state is OK (good).|
|The data store and Si3 state is OK (good).|
|The Si3 and the sanity state are OK, but the S3 clone state shows an error.|
|The Si3 store state shows an error, but its sanity state is OK.|
|Shows the Si3 status under the Si3 State tab.|
View Mode button
SEP sesam provides an additional View Mode button that enables you to switch between the table view and the tree view with grouped objects.
- Tree view: The SEP sesam default view; it shows the status of individual data stores grouped together in a hierarchical view.
- Table view: A simple flat view that shows the status of individual data stores one after another.
SEP sesam data store concept
SEP sesam data stores can be of different types depending on the type of data being backed up, backup technique, and used storage. The default and most commonly used data store type is Path. The following information applies to the Path data store; other data store types may have slightly different characteristics. For details, see the relevant articles: SEP Si3 deduplication store, NetApp Snap Store, HPE StoreOnce, and HPE Cloud Bank Storage.
Unlike a conventional media pool, which is typically used for backing up directly to tape, a data store defines its storage space as a directory directly on the drive, using the operating system's partition functions. The data store space is therefore managed at the partition level. You configure a data store by specifying its capacity and (optionally) setting the high watermark value (HWM; the upper value for the used disk space on the data store). Take into account that exceeding the HWM and filling up the data store may cause backup issues. You should consider this when specifying the data store capacity. For details, see Data store calculation recommendations below.
|Only one data store should be used per hard disk partition. Even though several data stores can be set up on one partition, you are advised against such a configuration, as each data store reads the values of the other partitions when checking partition allocation. Consequently, such coexisting data stores obstruct each other.|
As shown in the illustration below, a media pool still points to a drive group. However, there is now an additional level of one or more data stores between the media pool and the drives. The connection between a data store and the related drive is static.
Data store capacity
Data store configuration consists of specifying the data store capacity and HWM. The data store capacity is space reserved for the SEP sesam data store and, optionally, non-SEP sesam data that might be stored on the same volume as the SEP sesam data store. If the data store is shared with non-SEP sesam data, you will have to obtain a special SEP sesam storage license.
When specifying the capacity value, ensure that the dedicated partition has enough free space. The required disk space is calculated as:
space occupied by SEP sesam + free disk space = DS capacity
where DS capacity is the configured capacity value in SEP sesam's data store configuration. For examples on calculating a data store capacity, see How do I calculate the data store capacity.
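As an illustration only (the function and variable names below are not part of SEP sesam), the calculation can be sketched in Python:

```python
def ds_capacity(sep_occupied_gib, free_gib):
    """Sketch of the rule above: the capacity value configured in
    SEP sesam equals the space already occupied by SEP sesam plus
    the free disk space on the dedicated partition."""
    return sep_occupied_gib + free_gib

# Example: 1536 GiB of existing savesets plus 2560 GiB of free space
# on the dedicated partition gives a configured DS capacity of 4096 GiB.
print(ds_capacity(1536, 2560))  # 4096
```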
More than one data store is required in a media pool only if the media pool uses data from several disk partitions, in which case all the drives of a media pool's data stores must be part of the same drive group. This ensures that the SEP sesam queue manager distributes the backups in this media pool to all data stores (balancing). For details on drive groups, see Drives.
HWM, purge and Si3 repair area
A high watermark is a general parameter used for triggering the data store purge process, while the Si3 repair area parameter relates only to the SEP Si3 deduplication store and permits you to specify reserved space for unused Si3 files (DDLs), as explained below.
- High watermark (HWM): A parameter for managing disk space usage; it defines the upper value for the used disk space on the data store. This parameter can be specified manually for the Path, Si3 and NetApp Snap store, and is set automatically during the creation of a SEP Si3 Deduplication store.
When this value is reached, a data store purge process is started for all (EOL-free) savesets. The oldest free savesets are deleted first. Purge is done until all EOL-free savesets are deleted. For more details on EOL, see Managing EOL.
- From SEP sesam v. ≥ 4.4.3 Tigon V2, if the HWM is set, exceeding it issues an information message but does not prevent backups from being started.
- In older SEP sesam versions (≤ 4.4.3 Tigon), if the HWM was set and exceeded, backups could no longer be started, while running backups were allowed to finish. The data store was purged until the low watermark (LWM; if set) was reached.
- Sharing the data store drive after a backup
- Starting purge manually in GUI
- Low watermark (LWM): No longer used, relevant only for previous versions of SEP sesam.
- Si3 Repair Area: A parameter used by the SEP Si3 deduplication store for managing disk space dedicated to Si3 files (DDLs) that were identified by a garbage collection job and are no longer used. These files are kept in the repair area to enable a possible repair of Si3 if there are any structural problems (which may be caused by a file system error or an operating system crash). The files in the repair area are removed automatically after the specified amount of time (SEP sesam default: 4 days) or when the disk usage threshold is reached. If the value is set to 0, the Si3 repair functionality is turned off.
When setting the HWM parameter, ensure that sufficient space is allocated between data store capacity and HWM for a complete full backup. For details, see Data store calculation recommendations below.
|SEP sesam HWM and purge behavior is version-dependent.|
Events that trigger the data store purge are reaching the HWM and the manual execution of the purge process. A manual purge deletes the obsolete (EOL-free) savesets. Another option for releasing data store space is to clean up the data store, as described in the following section.
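The purge behavior described above can be sketched as a simplified, hypothetical model (SEP sesam's actual implementation differs; the data structures here are illustrative only):

```python
from datetime import date

def purge_data_store(savesets, used_gib, hwm_gib, today):
    """Simplified model of the HWM-triggered purge: once the used space
    reaches the HWM, every saveset whose EOL has expired ("EOL-free")
    is deleted, oldest first, until no EOL-free savesets remain."""
    if used_gib < hwm_gib:
        return savesets, used_gib                      # HWM not reached
    expired = [s for s in savesets if s["eol"] < today]
    for s in sorted(expired, key=lambda s: s["eol"]):  # oldest first
        savesets.remove(s)
        used_gib -= s["size_gib"]
    return savesets, used_gib

# 90 GiB used on a store with an 80 GiB HWM: the expired saveset is purged,
# the still-protected one is kept.
store = [{"eol": date(2020, 1, 1), "size_gib": 10},
         {"eol": date(2099, 1, 1), "size_gib": 20}]
remaining, used = purge_data_store(store, 90, 80, date(2024, 6, 1))
print(used)  # 80
```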
Clean up orphaned savesets
You can manually remove orphaned savesets from the data stores by using the Clean up option in the Data Stores content pane, thus releasing the space that might be occupied by orphaned savesets. This is useful in cases when a data store seems to be inaccessible, its storage space is occupied, or the SEP sesam space check shows non-SEP sesam data.
- In the Main Selection -> Components, click Data stores to display the data store contents frame.
- In the content pane menu, click Clean up and select the data store (and the relevant drive number) for which you want to free up space by removing orphaned savesets.
- Click Clean Up.
- You can check the status of the clean-up action in the data store properties under the Actions tab.
Data store calculation recommendations
- Data store volume sizing and capacity usage should be managed at the partition level. It is recommended that only SEP sesam data is stored on the respective volume.
- The data store should be at least three times (3x) the maximum full backup size of the planned backup to allow the watermarks to work automatically and dynamically.
- It may be necessary to scale up the data store to beyond 3x the maximum size when a longer hold-back time is stipulated or very big savesets are to be stored. To do this, sufficient space should be allocated between capacity and HWM for two (2) full backups.
- A virtual drive can handle up to 124 simultaneous backups (channels) for storing data to a SEP sesam data store (depending on the SEP sesam version). Only when it becomes necessary to back up more than 124 channels (SEP sesam Server Premium Edition) should another drive be added to the data store.
- Because a backup is aborted rather than split up when a data store becomes full, the correlation between the size of the data store and the size of the biggest backup task must be determined carefully. Take the example of a data store with a capacity of 6 TB, where the biggest saveset is 2 TB and the sum of the other savesets is around 1 TB. With a retention time of 3 days, a data store three times (3x) the maximum full backup size may be too small to allow the HWM to work properly. In such a case, it is recommended to scale up the data store size, e.g., by summing up all the savesets for 3 days and adding 15%.
- When a media pool requires more than one data store, all data stores must be connected to the same SEP sesam Device Server (IP host). SEP sesam does not currently support network-distributed data stores being served by a single media pool.
- When using more than one data store, the capacity and HWM must be specified using either all negative or all positive values. SEP sesam does not support mixing negative and positive values.
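The sizing rule from the example above (sum up all the savesets kept over the retention period and add 15%) can be sketched as follows; the figures are taken from the example, and the function name is illustrative, not part of SEP sesam:

```python
def recommended_capacity_gib(daily_savesets_gib, retention_days=3, headroom=0.15):
    """Sum the savesets written per day over the retention period,
    then add 15% headroom on top."""
    return sum(daily_savesets_gib) * retention_days * (1 + headroom)

# One 2 TiB saveset plus ~1 TiB of smaller savesets per day, kept 3 days:
# roughly 10.4 TiB is recommended, so a 6 TiB data store would be too small.
print(round(recommended_capacity_gib([2048, 1024])))  # 10598
```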
Data store properties
Double-clicking the data store displays all its details, e.g., the name of the data store, the store type, the message about the last executed action, the last action performed, its capacity, etc.
The buttons in the lower left corner allow you to create and delete a drive, create a new media pool, and delete a data store.
- Capacity: The size (in GiB) of the partition available for the backups. For examples on calculating a data store capacity, see How do I calculate the data store capacity.
- High watermark: The upper value (in GiB) for the used disk space on the data store. When this value is reached, a data store purge process is started for all EOL-free savesets thus freeing up data store capacity.
- Filled: The data store space (in GiB/TiB) occupied by SEP sesam.
- Used: Total used space (in GiB/TiB) on the partition (incl. SEP sesam external data).
- Total: Maximum available space (in GiB/TiB) on the partition as reported by the operating system.
- Free: Available disk space (in GiB/TiB) for SEP sesam.
You can modify existing drive options and set additional ones by double-clicking the drive. In the Drive Properties window, you can select another drive group, browse the path for the data store, set the access mode, etc. The first drive on the list has an additional Account tab that allows you to specify the credentials (user name and password) required to access the configured drive path.
Savesets tab
Selecting a data store and clicking the Savesets tab opens a list of all savesets with their details. You can change the EOL of an individual saveset, adjust the backup-related EOL, and lock or unlock individual savesets.
|The circles next to the EOL (Backup/Saveset) indicate the status of your saveset (gray circle – EOL has expired, blue circle – saveset is protected; EOL has not yet been reached). For details, see SEP sesam Icons Legend.|
- Saveset EOL: The Saveset EOL column enables you to change the EOL of each individual saveset stored on the respective data store. You can extend or reduce its retention time. If the adjusted saveset is part of a backup chain, the whole chain is affected, as described below in EOL-related backup chain dependencies.
- Backup EOL: The Backup EOL column enables you to adjust the EOL of all savesets containing the same data. This backup-related EOL is applied to all savesets with the same data, including migrated and replicated savesets.
For example, adjusting the EOL of a migrated saveset from 2.12.2017 to 12.12.2018 changes the EOL of all related backup data, i.e., the original backup and the replicated backup, as well as all backups in a backup chain, if the saveset with the adjusted EOL is part of one.
- EOL-related backup chain dependencies: You can extend or reduce the retention period for an individual saveset or backup-related saveset, as described above. Keep in mind that increasing the EOL of a DIFF or INCR saveset also increases the EOL of all dependent backups (FULL and other DIFF and INCR) in order to retain the backup data. This keeps the backup chain readily available for restore. On the other hand, decreasing the EOL of a DIFF or INCR saveset to a date in the past results in a warning message prompting you to confirm your decision to expire the whole backup chain. Setting the EOL of a DIFF or INCR saveset to an already passed time results in purging and overwriting the complete backup chain.
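The chain dependency can be sketched with a minimal model (hypothetical data structures, not SEP sesam's internal format): extending the EOL of a later saveset pulls every saveset it depends on forward with it.

```python
from datetime import date

def extend_saveset_eol(chain, index, new_eol):
    """Extending the EOL of a DIFF/INCR saveset also raises the EOL of
    every saveset it depends on (the FULL and earlier DIFF/INCR
    savesets), so the whole chain stays restorable."""
    for saveset in chain[: index + 1]:
        if saveset["eol"] < new_eol:
            saveset["eol"] = new_eol
    return chain

# Extending the INCR's EOL pulls the FULL and DIFF forward with it.
chain = [{"type": "FULL", "eol": date(2024, 1, 10)},
         {"type": "DIFF", "eol": date(2024, 1, 12)},
         {"type": "INCR", "eol": date(2024, 1, 14)}]
extend_saveset_eol(chain, 2, date(2024, 2, 1))
print(chain[0]["eol"])  # 2024-02-01
```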
|How SEP sesam manages failed backups depends on its version. As of v. ≥ 4.4.3 Beefalo V2, SEP sesam keeps the failed backup according to the media pool retention time together with the last successful backup or migration saveset. This is the default backup retention behavior and can be changed by modifying retention-related keys, as described in Customizing retention policy. These keys may not be supported in previous versions, where failed backups were deleted automatically after 3 days.|
|Each saveset can be deleted when the following conditions are met:|
Media tab
The Media tab provides an overview of the configured media pools and media.
Actions tab
The Actions tab provides an overview of the media-related events. It displays the media status, the action type, its start/stop time, duration, and message.
Si3 State, HPE Catalyst Store State and HPE Cloud Bank Store State tabs
The Si3 State tab is shown for Si3 data stores, the HPE Catalyst Store State tab for HPE Catalyst stores, and the HPE Cloud Bank Store State tab for HPE Cloud Bank stores. These tabs provide an overview of the Si3 deduplication, HPE Catalyst, or HPE Cloud Bank store status. They display the last deduplication message, the status of active tasks, the encryption status, the number of stored objects, the data size before/after deduplication, the dedup ratio, the saved storage space, etc. You can also review the Si3 deduplication, HPE Catalyst store, or HPE Cloud Bank store status in the media action properties.