# SOME DESCRIPTIVE TITLE. # Copyright (C) 2025, OpenStack Foundation # This file is distributed under the same license as the Swift package. # FIRST AUTHOR , YEAR. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: Swift 2.36.0.dev97\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2025-07-09 17:24+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" #: ../../source/account.rst:5 msgid "Account" msgstr "" #: ../../source/account.rst:10 msgid "Account Auditor" msgstr "" #: ../../source/account.rst:20 msgid "Account Backend" msgstr "" #: ../../source/account.rst:30 msgid "Account Reaper" msgstr "" #: ../../source/account.rst:40 msgid "Account Server" msgstr "" #: ../../source/admin_guide.rst:3 msgid "Administrator's Guide" msgstr "" #: ../../source/admin_guide.rst:7 msgid "Defining Storage Policies" msgstr "" #: ../../source/admin_guide.rst:9 msgid "" "Defining your Storage Policies is very easy to do with Swift. It is " "important that the administrator understand the concepts behind Storage " "Policies before actually creating and using them in order to get the most " "benefit out of the feature and, more importantly, to avoid having to make " "unnecessary changes once a set of policies have been deployed to a cluster." msgstr "" #: ../../source/admin_guide.rst:15 msgid "" "It is highly recommended that the reader fully read and comprehend :doc:" "`overview_policies` before proceeding with administration of policies. Plan " "carefully and it is suggested that experimentation be done first on a non-" "production cluster to be certain that the desired configuration meets the " "needs of the users. See :ref:`upgrade-policy` before planning the upgrade " "of your existing deployment." msgstr "" #: ../../source/admin_guide.rst:22 msgid "" "Following is a high level view of the very few steps it takes to configure " "policies once you have decided what you want to do:" msgstr "" #: ../../source/admin_guide.rst:25 msgid "Define your policies in ``/etc/swift/swift.conf``" msgstr "" #: ../../source/admin_guide.rst:26 msgid "Create the corresponding object rings" msgstr "" #: ../../source/admin_guide.rst:27 msgid "Communicate the names of the Storage Policies to cluster users" msgstr "" #: ../../source/admin_guide.rst:29 msgid "" "For a specific example that takes you through these steps, please see :doc:" "`policies_saio`" msgstr "" #: ../../source/admin_guide.rst:34 msgid "Managing the Rings" msgstr "" #: ../../source/admin_guide.rst:36 msgid "" "You may build the storage rings on any server with the appropriate version " "of Swift installed. Once built or changed (rebalanced), you must distribute " "the rings to all the servers in the cluster. Storage rings contain " "information about all the Swift storage partitions and how they are " "distributed between the different nodes and disks." msgstr "" #: ../../source/admin_guide.rst:42 msgid "" "Swift 1.6.0 is the last version to use a Python pickle format. Subsequent " "versions use a different serialization format. **Rings generated by Swift " "versions 1.6.0 and earlier may be read by any version, but rings generated " "after 1.6.0 may only be read by Swift versions greater than 1.6.0.** So " "when upgrading from version 1.6.0 or earlier to a version greater than " "1.6.0, either upgrade Swift on your ring building server **last** after all " "Swift nodes have been successfully upgraded, or refrain from generating " "rings until all Swift nodes have been successfully upgraded." msgstr "" #: ../../source/admin_guide.rst:52 msgid "" "If you need to downgrade from a version of Swift greater than 1.6.0 to a " "version less than or equal to 1.6.0, first downgrade your ring-building " "server, generate new rings, push them out, then continue with the rest of " "the downgrade." msgstr "" #: ../../source/admin_guide.rst:57 msgid "For more information see :doc:`overview_ring`." msgstr "" #: ../../source/admin_guide.rst:61 msgid "Removing a device from the ring::" msgstr "" #: ../../source/admin_guide.rst:65 msgid "Removing a server from the ring::" msgstr "" #: ../../source/admin_guide.rst:69 msgid "Adding devices to the ring:" msgstr "" #: ../../source/admin_guide.rst:71 msgid "See :ref:`ring-preparing`" msgstr "" #: ../../source/admin_guide.rst:73 msgid "See what devices for a server are in the ring::" msgstr "" #: ../../source/admin_guide.rst:77 msgid "" "Once you are done with all changes to the ring, the changes need to be " "\"committed\"::" msgstr "" #: ../../source/admin_guide.rst:82 msgid "" "Once the new rings are built, they should be pushed out to all the servers " "in the cluster." msgstr "" #: ../../source/admin_guide.rst:85 msgid "" "Optionally, if invoked as 'swift-ring-builder-safe' the directory containing " "the specified builder file will be locked (via a .lock file in the parent " "directory). This provides a basic safe guard against multiple instances of " "the swift-ring-builder (or other utilities that observe this lock) from " "attempting to write to or read the builder/ring files while operations are " "in progress. This can be useful in environments where ring management has " "been automated but the operator still needs to interact with the rings " "manually." msgstr "" #: ../../source/admin_guide.rst:93 msgid "" "If the ring builder is not producing the balances that you are expecting, " "you can gain visibility into what it's doing with the ``--debug`` flag.::" msgstr "" #: ../../source/admin_guide.rst:99 msgid "" "This produces a great deal of output that is mostly useful if you are either " "(a) attempting to fix the ring builder, or (b) filing a bug against the ring " "builder." msgstr "" #: ../../source/admin_guide.rst:103 msgid "" "You may notice in the rebalance output a 'dispersion' number. What this " "number means is explained in :ref:`ring_dispersion` but in essence is the " "percentage of partitions in the ring that have too many replicas within a " "particular failure domain. You can ask 'swift-ring-builder' what the " "dispersion is with::" msgstr "" #: ../../source/admin_guide.rst:111 msgid "" "This will give you the percentage again, if you want a detailed view of the " "dispersion simply add a ``--verbose``::" msgstr "" #: ../../source/admin_guide.rst:116 msgid "" "This will not only display the percentage but will also display a dispersion " "table that lists partition dispersion by tier. You can use this table to " "figure out were you need to add capacity or to help tune an :ref:" "`ring_overload` value." msgstr "" #: ../../source/admin_guide.rst:120 msgid "" "Now let's take an example with 1 region, 3 zones and 4 devices. Each device " "has the same weight, and the ``dispersion --verbose`` might show the " "following::" msgstr "" #: ../../source/admin_guide.rst:141 msgid "" "The first line reports that there are 256 partitions with 3 copies in region " "1; and this is an expected output in this case (single region with 3 " "replicas) as reported by the \"Max\" value." msgstr "" #: ../../source/admin_guide.rst:145 msgid "" "However, there is some imbalance in the cluster, more precisely in zone 3. " "The \"Max\" reports a maximum of 1 copy in this zone; however 50.00% of the " "partitions are storing 2 replicas in this zone (which is somewhat expected, " "because there are more disks in this zone)." msgstr "" #: ../../source/admin_guide.rst:150 msgid "" "You can now either add more capacity to the other zones, decrease the total " "weight in zone 3 or set the overload to a value `greater than` 33.333333% - " "only as much overload as needed will be used." msgstr "" #: ../../source/admin_guide.rst:156 msgid "Scripting Ring Creation" msgstr "" #: ../../source/admin_guide.rst:157 msgid "" "You can create scripts to create the account and container rings and " "rebalance. Here's an example script for the Account ring. Use similar " "commands to create a make-container-ring.sh script on the proxy server node." msgstr "" #: ../../source/admin_guide.rst:159 msgid "" "Create a script file called make-account-ring.sh on the proxy server node " "with the following content::" msgstr "" #: ../../source/admin_guide.rst:170 msgid "" "You need to replace the values of , , " "etc. with the IP addresses of the account servers used in your setup. You " "can have as many account servers as you need. All account servers are " "assumed to be listening on port 6202, and have a storage device called " "\"sdb1\" (this is a directory name created under /drives when we setup the " "account server). The \"z1\", \"z2\", etc. designate zones, and you can " "choose whether you put devices in the same or different zones. The \"r1\" " "designates the region, with different regions specified as \"r1\", \"r2\", " "etc." msgstr "" #: ../../source/admin_guide.rst:180 msgid "" "Make the script file executable and run it to create the account ring file::" msgstr "" #: ../../source/admin_guide.rst:185 msgid "" "Copy the resulting ring file /etc/swift/account.ring.gz to all the account " "server nodes in your Swift environment, and put them in the /etc/swift " "directory on these nodes. Make sure that every time you change the account " "ring configuration, you copy the resulting ring file to all the account " "nodes." msgstr "" #: ../../source/admin_guide.rst:193 msgid "Handling System Updates" msgstr "" #: ../../source/admin_guide.rst:195 msgid "" "It is recommended that system updates and reboots are done a zone at a time. " "This allows the update to happen, and for the Swift cluster to stay " "available and responsive to requests. It is also advisable when updating a " "zone, let it run for a while before updating the other zones to make sure " "the update doesn't have any adverse effects." msgstr "" #: ../../source/admin_guide.rst:203 msgid "Handling Drive Failure" msgstr "" #: ../../source/admin_guide.rst:205 msgid "" "In the event that a drive has failed, the first step is to make sure the " "drive is unmounted. This will make it easier for Swift to work around the " "failure until it has been resolved. If the drive is going to be replaced " "immediately, then it is just best to replace the drive, format it, remount " "it, and let replication fill it up." msgstr "" #: ../../source/admin_guide.rst:211 msgid "" "After the drive is unmounted, make sure the mount point is owned by root " "(root:root 755). This ensures that rsync will not try to replicate into the " "root drive once the failed drive is unmounted." msgstr "" #: ../../source/admin_guide.rst:215 msgid "" "If the drive can't be replaced immediately, then it is best to leave it " "unmounted, and set the device weight to 0. This will allow all the replicas " "that were on that drive to be replicated elsewhere until the drive is " "replaced. Once the drive is replaced, the device weight can be increased " "again. Setting the device weight to 0 instead of removing the drive from the " "ring gives Swift the chance to replicate data from the failing disk too (in " "case it is still possible to read some of the data)." msgstr "" #: ../../source/admin_guide.rst:223 msgid "" "Setting the device weight to 0 (or removing a failed drive from the ring) " "has another benefit: all partitions that were stored on the failed drive are " "distributed over the remaining disks in the cluster, and each disk only " "needs to store a few new partitions. This is much faster compared to " "replicating all partitions to a single, new disk. It decreases the time to " "recover from a degraded number of replicas significantly, and becomes more " "and more important with bigger disks." msgstr "" #: ../../source/admin_guide.rst:233 msgid "Handling Server Failure" msgstr "" #: ../../source/admin_guide.rst:235 msgid "" "If a server is having hardware issues, it is a good idea to make sure the " "Swift services are not running. This will allow Swift to work around the " "failure while you troubleshoot." msgstr "" #: ../../source/admin_guide.rst:239 msgid "" "If the server just needs a reboot, or a small amount of work that should " "only last a couple of hours, then it is probably best to let Swift work " "around the failure and get the machine fixed and back online. When the " "machine comes back online, replication will make sure that anything that is " "missing during the downtime will get updated." msgstr "" #: ../../source/admin_guide.rst:245 msgid "" "If the server has more serious issues, then it is probably best to remove " "all of the server's devices from the ring. Once the server has been " "repaired and is back online, the server's devices can be added back into the " "ring. It is important that the devices are reformatted before putting them " "back into the ring as it is likely to be responsible for a different set of " "partitions than before." msgstr "" #: ../../source/admin_guide.rst:254 msgid "Detecting Failed Drives" msgstr "" #: ../../source/admin_guide.rst:256 msgid "" "It has been our experience that when a drive is about to fail, error " "messages will spew into `/var/log/kern.log`. There is a script called " "`swift-drive-audit` that can be run via cron to watch for bad drives. If " "errors are detected, it will unmount the bad drive, so that Swift can work " "around it. The script takes a configuration file with the following " "settings:" msgstr "" #: ../../source/admin_guide.rst:263 msgid "``[drive-audit]``" msgstr "" #: ../../source/admin_guide.rst:266 msgid "Default" msgstr "" #: ../../source/admin_guide.rst:266 ../../source/admin_guide.rst:735 #: ../../source/admin_guide.rst:949 ../../source/admin_guide.rst:1075 msgid "Description" msgstr "" #: ../../source/admin_guide.rst:266 msgid "Option" msgstr "" #: ../../source/admin_guide.rst:268 msgid "Drop privileges to this user for non-root tasks" msgstr "" #: ../../source/admin_guide.rst:268 msgid "swift" msgstr "" #: ../../source/admin_guide.rst:268 msgid "user" msgstr "" #: ../../source/admin_guide.rst:270 msgid "LOG_LOCAL0" msgstr "" #: ../../source/admin_guide.rst:270 msgid "Syslog log facility" msgstr "" #: ../../source/admin_guide.rst:270 msgid "log_facility" msgstr "" #: ../../source/admin_guide.rst:271 msgid "INFO" msgstr "" #: ../../source/admin_guide.rst:271 msgid "Log level" msgstr "" #: ../../source/admin_guide.rst:271 msgid "log_level" msgstr "" #: ../../source/admin_guide.rst:272 msgid "/srv/node" msgstr "" #: ../../source/admin_guide.rst:272 msgid "Directory devices are mounted under" msgstr "" #: ../../source/admin_guide.rst:272 msgid "device_dir" msgstr "" #: ../../source/admin_guide.rst:273 msgid "60" msgstr "" #: ../../source/admin_guide.rst:273 msgid "Number of minutes to look back in `/var/log/kern.log`" msgstr "" #: ../../source/admin_guide.rst:273 msgid "minutes" msgstr "" #: ../../source/admin_guide.rst:275 msgid "1" msgstr "" #: ../../source/admin_guide.rst:275 msgid "Number of errors to find before a device is unmounted" msgstr "" #: ../../source/admin_guide.rst:275 msgid "error_limit" msgstr "" #: ../../source/admin_guide.rst:277 msgid "/var/log/kern*" msgstr "" #: ../../source/admin_guide.rst:277 msgid "" "Location of the log file with globbing pattern to check against device errors" msgstr "" #: ../../source/admin_guide.rst:277 msgid "log_file_pattern" msgstr "" #: ../../source/admin_guide.rst:279 msgid "(see below)" msgstr "" #: ../../source/admin_guide.rst:279 msgid "" "Regular expression patterns to be used to locate device blocks with errors " "in the log file" msgstr "" #: ../../source/admin_guide.rst:279 msgid "regex_pattern_X" msgstr "" #: ../../source/admin_guide.rst:284 msgid "" "The default regex pattern used to locate device blocks with errors are " "`\\berror\\b.*\\b(sd[a-z]{1,2}\\d?)\\b` and `\\b(sd[a-z]{1,2}\\d?)\\b." "*\\berror\\b`. One is able to overwrite the default above by providing new " "expressions using the format `regex_pattern_X = regex_expression`, where `X` " "is a number." msgstr "" #: ../../source/admin_guide.rst:289 msgid "" "This script has been tested on Ubuntu 10.04 and Ubuntu 12.04, so if you are " "using a different distro or OS, some care should be taken before using in " "production." msgstr "" #: ../../source/admin_guide.rst:294 msgid "Preventing Disk Full Scenarios" msgstr "" #: ../../source/admin_guide.rst:298 msgid "" "Prevent disk full scenarios by ensuring that the ``proxy-server`` blocks PUT " "requests and rsync prevents replication to the specific drives." msgstr "" #: ../../source/admin_guide.rst:301 msgid "" "You can prevent `proxy-server` PUT requests to low space disks by ensuring " "``fallocate_reserve`` is set in ``account-server.conf``, ``container-server." "conf``, and ``object-server.conf``. By default, ``fallocate_reserve`` is set " "to 1%. In the object server, this blocks PUT requests that would leave the " "free disk space below 1% of the disk. In the account and container servers, " "this blocks operations that will increase account or container database size " "once the free disk space falls below 1%." msgstr "" #: ../../source/admin_guide.rst:310 msgid "" "Setting ``fallocate_reserve`` is highly recommended to avoid filling disks " "to 100%. When Swift's disks are completely full, all requests involving " "those disks will fail, including DELETE requests that would otherwise free " "up space. This is because object deletion includes the creation of a zero-" "byte tombstone (.ts) to record the time of the deletion for replication " "purposes; this happens prior to deletion of the object's data. On a " "completely-full filesystem, that zero-byte .ts file cannot be created, so " "the DELETE request will fail and the disk will remain completely full. If " "``fallocate_reserve`` is set, then the filesystem will have enough space to " "create the zero-byte .ts file, and thus the deletion of the object will " "succeed and free up some space." msgstr "" #: ../../source/admin_guide.rst:323 msgid "" "In order to prevent rsync replication to specific drives, firstly setup " "``rsync_module`` per disk in your ``object-replicator``. Set this in " "``object-server.conf``:" msgstr "" #: ../../source/admin_guide.rst:332 msgid "Set the individual drives in ``rsync.conf``. For example:" msgstr "" #: ../../source/admin_guide.rst:344 msgid "" "Finally, monitor the disk space of each disk and adjust the rsync ``max " "connections`` per drive to ``-1``. We recommend utilising your existing " "monitoring solution to achieve this. The following is an example script:" msgstr "" #: ../../source/admin_guide.rst:387 msgid "" "For the above script to work, ensure ``/etc/rsync.d/`` conf files are " "included, by specifying ``&include`` in your ``rsync.conf`` file:" msgstr "" #: ../../source/admin_guide.rst:394 msgid "" "Use this in conjunction with a cron job to periodically run the script, for " "example:" msgstr "" #: ../../source/admin_guide.rst:407 msgid "Dispersion Report" msgstr "" #: ../../source/admin_guide.rst:409 msgid "" "There is a swift-dispersion-report tool for measuring overall cluster " "health. This is accomplished by checking if a set of deliberately " "distributed containers and objects are currently in their proper places " "within the cluster." msgstr "" #: ../../source/admin_guide.rst:413 msgid "" "For instance, a common deployment has three replicas of each object. The " "health of that object can be measured by checking if each replica is in its " "proper place. If only 2 of the 3 is in place the object's heath can be said " "to be at 66.66%, where 100% would be perfect." msgstr "" #: ../../source/admin_guide.rst:418 msgid "" "A single object's health, especially an older object, usually reflects the " "health of that entire partition the object is in. If we make enough objects " "on a distinct percentage of the partitions in the cluster, we can get a " "pretty valid estimate of the overall cluster health. In practice, about 1% " "partition coverage seems to balance well between accuracy and the amount of " "time it takes to gather results." msgstr "" #: ../../source/admin_guide.rst:425 msgid "" "The first thing that needs to be done to provide this health value is create " "a new account solely for this usage. Next, we need to place the containers " "and objects throughout the system so that they are on distinct partitions. " "The swift-dispersion-populate tool does this by making up random container " "and object names until they fall on distinct partitions. Last, and " "repeatedly for the life of the cluster, we need to run the swift-dispersion-" "report tool to check the health of each of these containers and objects." msgstr "" #: ../../source/admin_guide.rst:435 msgid "" "These tools need direct access to the entire cluster and to the ring files " "(installing them on a proxy server will probably do). Both swift-dispersion-" "populate and swift-dispersion-report use the same configuration file, /etc/" "swift/dispersion.conf. Example conf file::" msgstr "" #: ../../source/admin_guide.rst:448 msgid "" "There are also options for the conf file for specifying the dispersion " "coverage (defaults to 1%), retries, concurrency, etc. though usually the " "defaults are fine. If you want to use keystone v3 for authentication there " "are options like auth_version, user_domain_name, project_domain_name and " "project_name." msgstr "" #: ../../source/admin_guide.rst:453 msgid "" "Once the configuration is in place, run `swift-dispersion-populate` to " "populate the containers and objects throughout the cluster." msgstr "" #: ../../source/admin_guide.rst:456 msgid "" "Now that those containers and objects are in place, you can run `swift-" "dispersion-report` to get a dispersion report, or the overall health of the " "cluster. Here is an example of a cluster in perfect health::" msgstr "" #: ../../source/admin_guide.rst:469 msgid "" "Now I'll deliberately double the weight of a device in the object ring (with " "replication turned off) and rerun the dispersion report to show what impact " "that has::" msgstr "" #: ../../source/admin_guide.rst:486 msgid "" "You can see the health of the objects in the cluster has gone down " "significantly. Of course, I only have four devices in this test environment, " "in a production environment with many many devices the impact of one device " "change is much less. Next, I'll run the replicators to get everything put " "back into place and then rerun the dispersion report::" msgstr "" #: ../../source/admin_guide.rst:502 msgid "You can also run the report for only containers or objects::" msgstr "" #: ../../source/admin_guide.rst:514 msgid "" "Alternatively, the dispersion report can also be output in JSON format. This " "allows it to be more easily consumed by third party utilities::" msgstr "" #: ../../source/admin_guide.rst:520 msgid "" "Note that you may select which storage policy to use by setting the option " "'--policy-name silver' or '-P silver' (silver is the example policy name " "here). If no policy is specified, the default will be used per the swift." "conf file. When you specify a policy the containers created also include the " "policy index, thus even when running a container_only report, you will need " "to specify the policy not using the default." msgstr "" #: ../../source/admin_guide.rst:529 msgid "Geographically Distributed Swift Considerations" msgstr "" #: ../../source/admin_guide.rst:531 msgid "" "Swift provides two features that may be used to distribute replicas of " "objects across multiple geographically distributed data-centers: with :doc:" "`overview_global_cluster` object replicas may be dispersed across devices " "from different data-centers by using `regions` in ring device descriptors; " "with :doc:`overview_container_sync` objects may be copied between " "independent Swift clusters in each data-center. The operation and " "configuration of each are described in their respective documentation. The " "following points should be considered when selecting the feature that is " "most appropriate for a particular use case:" msgstr "" #: ../../source/admin_guide.rst:541 msgid "" "Global Clusters allows the distribution of object replicas across data-" "centers to be controlled by the cluster operator on per-policy basis, since " "the distribution is determined by the assignment of devices from each data-" "center in each policy's ring file. With Container Sync the end user controls " "the distribution of objects across clusters on a per-container basis." msgstr "" #: ../../source/admin_guide.rst:548 msgid "" "Global Clusters requires an operator to coordinate ring deployments across " "multiple data-centers. Container Sync allows for independent management of " "separate Swift clusters in each data-center, and for existing Swift clusters " "to be used as peers in Container Sync relationships without deploying new " "policies/rings." msgstr "" #: ../../source/admin_guide.rst:554 msgid "" "Global Clusters seamlessly supports features that may rely on cross-" "container operations such as large objects and versioned writes. Container " "Sync requires the end user to ensure that all required containers are sync'd " "for these features to work in all data-centers." msgstr "" #: ../../source/admin_guide.rst:559 msgid "" "Global Clusters makes objects available for GET or HEAD requests in both " "data-centers even if a replica of the object has not yet been asynchronously " "migrated between data-centers, by forwarding requests between data-centers. " "Container Sync is unable to serve requests for an object in a particular " "data-center until the asynchronous sync process has copied the object to " "that data-center." msgstr "" #: ../../source/admin_guide.rst:566 msgid "" "Global Clusters may require less storage capacity than Container Sync to " "achieve equivalent durability of objects in each data-center. Global " "Clusters can restore replicas that are lost or corrupted in one data-center " "using replicas from other data-centers. Container Sync requires each data-" "center to independently manage the durability of objects, which may result " "in each data-center storing more replicas than with Global Clusters." msgstr "" #: ../../source/admin_guide.rst:574 msgid "" "Global Clusters execute all account/container metadata updates synchronously " "to account/container replicas in all data-centers, which may incur delays " "when making updates across WANs. Container Sync only copies objects between " "data-centers and all Swift internal traffic is confined to each data-center." msgstr "" #: ../../source/admin_guide.rst:580 msgid "" "Global Clusters does not yet guarantee the availability of objects stored in " "Erasure Coded policies when one data-center is offline. With Container Sync " "the availability of objects in each data-center is independent of the state " "of other data-centers once objects have been synced. Container Sync also " "allows objects to be stored using different policy types in different data-" "centers." msgstr "" #: ../../source/admin_guide.rst:589 msgid "Checking handoff partition distribution" msgstr "" #: ../../source/admin_guide.rst:591 msgid "" "You can check if handoff partitions are piling up on a server by comparing " "the expected number of partitions with the actual number on your disks. " "First get the number of partitions that are currently assigned to a server " "using the ``dispersion`` command from ``swift-ring-builder``::" msgstr "" #: ../../source/admin_guide.rst:624 msgid "" "As you can see from the output, each server should store 4096 partitions, " "and each region should store 8192 partitions. This example used a partition " "power of 13 and 3 replicas." msgstr "" #: ../../source/admin_guide.rst:628 msgid "" "With write_affinity enabled it is expected to have a higher number of " "partitions on disk compared to the value reported by the swift-ring-builder " "dispersion command. The number of additional (handoff) partitions in region " "r1 depends on your cluster size, the amount of incoming data as well as the " "replication speed." msgstr "" #: ../../source/admin_guide.rst:634 msgid "" "Let's use the example from above with 6 nodes in 2 regions, and " "write_affinity configured to write to region r1 first. `swift-ring-builder` " "reported that each node should store 4096 partitions::" msgstr "" #: ../../source/admin_guide.rst:642 msgid "" "Worst case is that handoff partitions in region 1 are populated with new " "object replicas faster than replication is able to move them to region 2. In " "that case you will see ~ 6144 partitions per server in region r1. Your " "actual number should be lower and between 4096 and 6144 partitions " "(preferably on the lower side)." msgstr "" #: ../../source/admin_guide.rst:648 msgid "" "Now count the number of object partitions on a given server in region 1, for " "example on 172.16.10.1. Note that the pathnames might be different; `/srv/" "node/` is the default mount location, and `objects` applies only to storage " "policy 0 (storage policy 1 would use `objects-1` and so on)::" msgstr "" #: ../../source/admin_guide.rst:656 msgid "" "If this number is always on the upper end of the expected partition number " "range (4096 to 6144) or increasing you should check your replication speed " "and maybe even disable write_affinity. Please refer to the next section how " "to collect metrics from Swift, and especially :ref:`swift-recon -r ` how to check replication stats." msgstr "" #: ../../source/admin_guide.rst:668 msgid "Cluster Telemetry and Monitoring" msgstr "" #: ../../source/admin_guide.rst:670 msgid "" "Various metrics and telemetry can be obtained from the account, container, " "and object servers using the recon server middleware and the swift-recon " "cli. To do so update your account, container, or object servers pipelines to " "include recon and add the associated filter config." msgstr "" #: ../../source/admin_guide.rst:677 msgid "object-server.conf sample::" msgstr "" #: ../../source/admin_guide.rst:686 msgid "container-server.conf sample::" msgstr "" #: ../../source/admin_guide.rst:695 msgid "account-server.conf sample::" msgstr "" #: ../../source/admin_guide.rst:706 msgid "" "The recon_cache_path simply sets the directory where stats for a few items " "will be stored. Depending on the method of deployment you may need to create " "this directory manually and ensure that Swift has read/write access." msgstr "" #: ../../source/admin_guide.rst:710 msgid "" "Finally, if you also wish to track asynchronous pending on your object " "servers you will need to setup a cronjob to run the swift-recon-cron script " "periodically on your object servers::" msgstr "" #: ../../source/admin_guide.rst:716 msgid "" "Once the recon middleware is enabled, a GET request for \"/recon/\" " "to the backend object server will return a JSON-formatted response::" msgstr "" #: ../../source/admin_guide.rst:729 msgid "" "Note that the default port for the object server is 6200, except on a Swift " "All-In-One installation, which uses 6210, 6220, 6230, and 6240." msgstr "" #: ../../source/admin_guide.rst:732 msgid "The following metrics and telemetry are currently exposed:" msgstr "" #: ../../source/admin_guide.rst:735 msgid "Request URI" msgstr "" #: ../../source/admin_guide.rst:737 msgid "/recon/load" msgstr "" #: ../../source/admin_guide.rst:737 msgid "returns 1,5, and 15 minute load average" msgstr "" #: ../../source/admin_guide.rst:738 msgid "/recon/mem" msgstr "" #: ../../source/admin_guide.rst:738 msgid "returns /proc/meminfo" msgstr "" #: ../../source/admin_guide.rst:739 msgid "/recon/mounted" msgstr "" #: ../../source/admin_guide.rst:739 msgid "returns *ALL* currently mounted filesystems" msgstr "" #: ../../source/admin_guide.rst:740 msgid "/recon/unmounted" msgstr "" #: ../../source/admin_guide.rst:740 msgid "returns all unmounted drives if mount_check = True" msgstr "" #: ../../source/admin_guide.rst:741 msgid "/recon/diskusage" msgstr "" #: ../../source/admin_guide.rst:741 msgid "returns disk utilization for storage devices" msgstr "" #: ../../source/admin_guide.rst:742 msgid "/recon/driveaudit" msgstr "" #: ../../source/admin_guide.rst:742 msgid "returns # of drive audit errors" msgstr "" #: ../../source/admin_guide.rst:743 msgid "/recon/ringmd5" msgstr "" #: ../../source/admin_guide.rst:743 msgid "returns object/container/account ring md5sums" msgstr "" #: ../../source/admin_guide.rst:744 msgid "/recon/swiftconfmd5" msgstr "" #: ../../source/admin_guide.rst:744 msgid "returns swift.conf md5sum" msgstr "" #: ../../source/admin_guide.rst:745 msgid "/recon/quarantined" msgstr "" #: ../../source/admin_guide.rst:745 msgid "returns # of quarantined objects/accounts/containers" msgstr "" #: ../../source/admin_guide.rst:746 msgid "/recon/sockstat" msgstr "" #: ../../source/admin_guide.rst:746 msgid "returns consumable info from /proc/net/sockstat|6" msgstr "" #: ../../source/admin_guide.rst:747 msgid "/recon/devices" msgstr "" #: ../../source/admin_guide.rst:747 msgid "returns list of devices and devices dir i.e. /srv/node" msgstr "" #: ../../source/admin_guide.rst:748 msgid "/recon/async" msgstr "" #: ../../source/admin_guide.rst:748 msgid "returns count of async pending" msgstr "" #: ../../source/admin_guide.rst:749 msgid "/recon/replication" msgstr "" #: ../../source/admin_guide.rst:749 msgid "returns object replication info (for backward compatibility)" msgstr "" #: ../../source/admin_guide.rst:750 msgid "/recon/replication/" msgstr "" #: ../../source/admin_guide.rst:750 msgid "returns replication info for given type (account, container, object)" msgstr "" #: ../../source/admin_guide.rst:751 msgid "/recon/auditor/" msgstr "" #: ../../source/admin_guide.rst:751 msgid "" "returns auditor stats on last reported scan for given type (account, " "container, object)" msgstr "" #: ../../source/admin_guide.rst:752 msgid "/recon/updater/" msgstr "" #: ../../source/admin_guide.rst:752 msgid "returns last updater sweep times for given type (container, object)" msgstr "" #: ../../source/admin_guide.rst:753 msgid "/recon/expirer/object" msgstr "" #: ../../source/admin_guide.rst:753 msgid "" "returns time elapsed and number of objects deleted during last object " "expirer sweep" msgstr "" #: ../../source/admin_guide.rst:754 msgid "/recon/version" msgstr "" #: ../../source/admin_guide.rst:754 msgid "returns Swift version" msgstr "" #: ../../source/admin_guide.rst:755 msgid "/recon/time" msgstr "" #: ../../source/admin_guide.rst:755 msgid "returns node time" msgstr "" #: ../../source/admin_guide.rst:758 msgid "" "Note that 'object_replication_last' and 'object_replication_time' in object " "replication info are considered to be transitional and will be removed in " "the subsequent releases. Use 'replication_last' and 'replication_time' " "instead." msgstr "" #: ../../source/admin_guide.rst:762 msgid "" "This information can also be queried via the swift-recon command line " "utility::" msgstr "" #: ../../source/admin_guide.rst:802 msgid "" "For example, to obtain container replication info from all hosts in zone " "\"3\"::" msgstr "" #: ../../source/admin_guide.rst:816 msgid "Reporting Metrics to StatsD" msgstr "" #: ../../source/admin_guide.rst:821 msgid "" "The legacy statsd metrics described in this section are being supplemented " "with :doc:`metrics/labels`." msgstr "" #: ../../source/admin_guide.rst:824 msgid "" "If you have a StatsD_ server running, Swift may be configured to send it " "real-time operational metrics. To enable this, set the following " "configuration entries (see the sample configuration files)::" msgstr "" #: ../../source/admin_guide.rst:834 msgid "" "If `log_statsd_host` is not set, this feature is disabled. The default " "values for the other settings are given above. The `log_statsd_host` can be " "a hostname, an IPv4 address, or an IPv6 address (not surrounded with " "brackets, as this is unnecessary since the port is specified separately). " "If a hostname resolves to an IPv4 address, an IPv4 socket will be used to " "send StatsD UDP packets, even if the hostname would also resolve to an IPv6 " "address." msgstr "" #: ../../source/admin_guide.rst:845 msgid "" "The sample rate is a real number between 0 and 1 which defines the " "probability of sending a sample for any given event or timing measurement. " "This sample rate is sent with each sample to StatsD and used to multiply the " "value. For example, with a sample rate of 0.5, StatsD will multiply that " "counter's value by 2 when flushing the metric to an upstream monitoring " "system (Graphite_, Ganglia_, etc.)." msgstr "" #: ../../source/admin_guide.rst:852 msgid "" "Some relatively high-frequency metrics have a default sample rate less than " "one. If you want to override the default sample rate for all metrics whose " "default sample rate is not specified in the Swift source, you may set " "`log_statsd_default_sample_rate` to a value less than one. This is NOT " "recommended (see next paragraph). A better way to reduce StatsD load is to " "adjust `log_statsd_sample_rate_factor` to a value less than one. The " "`log_statsd_sample_rate_factor` is multiplied to any sample rate (either the " "global default or one specified by the actual metric logging call in the " "Swift source) prior to handling. In other words, this one tunable can lower " "the frequency of all StatsD logging by a proportional amount." msgstr "" #: ../../source/admin_guide.rst:863 msgid "" "To get the best data, start with the default " "`log_statsd_default_sample_rate` and `log_statsd_sample_rate_factor` values " "of 1 and only lower `log_statsd_sample_rate_factor` if needed. The " "`log_statsd_default_sample_rate` should not be used and remains for backward " "compatibility only." msgstr "" #: ../../source/admin_guide.rst:869 msgid "" "The metric prefix will be prepended to every metric sent to the StatsD " "server For example, with::" msgstr "" #: ../../source/admin_guide.rst:874 msgid "" "the metric `proxy-server.errors` would be sent to StatsD as `proxy01.proxy-" "server.errors`. This is useful for differentiating different servers when " "sending statistics to a central StatsD server. If you run a local StatsD " "server per node, you could configure a per-node metrics prefix there and " "leave `log_statsd_metric_prefix` blank." msgstr "" #: ../../source/admin_guide.rst:880 msgid "" "Note that metrics reported to StatsD are counters or timing data (which are " "sent in units of milliseconds). StatsD usually expands timing data out to " "min, max, avg, count, and 90th percentile per timing metric, but the details " "of this behavior will depend on the configuration of your StatsD server. " "Some important \"gauge\" metrics may still need to be collected using " "another method. For example, the `object-server.async_pendings` StatsD " "metric counts the generation of async_pendings in real-time, but will not " "tell you the current number of async_pending container updates on disk at " "any point in time." msgstr "" #: ../../source/admin_guide.rst:889 msgid "" "Note also that the set of metrics collected, their names, and their " "semantics are not locked down and will change over time. For more details, " "see the service-specific tables listed below:" msgstr "" #: ../../source/admin_guide.rst:911 msgid "Or, view :doc:`metrics/all` as one page." msgstr "" #: ../../source/admin_guide.rst:915 msgid "Debugging Tips and Tools" msgstr "" #: ../../source/admin_guide.rst:917 msgid "" "When a request is made to Swift, it is given a unique transaction id. This " "id should be in every log line that has to do with that request. This can " "be useful when looking at all the services that are hit by a single request." msgstr "" #: ../../source/admin_guide.rst:921 msgid "" "If you need to know where a specific account, container or object is in the " "cluster, `swift-get-nodes` will show the location where each replica should " "be." msgstr "" #: ../../source/admin_guide.rst:924 msgid "" "If you are looking at an object on the server and need more info, `swift-" "object-info` will display the account, container, replica locations and " "metadata of the object." msgstr "" #: ../../source/admin_guide.rst:928 msgid "" "If you are looking at a container on the server and need more info, `swift-" "container-info` will display all the information like the account, " "container, replica locations and metadata of the container." msgstr "" #: ../../source/admin_guide.rst:932 msgid "" "If you are looking at an account on the server and need more info, `swift-" "account-info` will display the account, replica locations and metadata of " "the account." msgstr "" #: ../../source/admin_guide.rst:936 msgid "" "If you want to audit the data for an account, `swift-account-audit` can be " "used to crawl the account, checking that all containers and objects can be " "found." msgstr "" #: ../../source/admin_guide.rst:942 msgid "Managing Services" msgstr "" #: ../../source/admin_guide.rst:944 msgid "" "Swift services are generally managed with ``swift-init``. the general usage " "is ``swift-init ``, where service is the Swift service to " "manage (for example object, container, account, proxy) and command is one of:" msgstr "" #: ../../source/admin_guide.rst:949 msgid "Command" msgstr "" #: ../../source/admin_guide.rst:951 msgid "Start the service" msgstr "" #: ../../source/admin_guide.rst:951 msgid "start" msgstr "" #: ../../source/admin_guide.rst:952 msgid "Stop the service" msgstr "" #: ../../source/admin_guide.rst:952 msgid "stop" msgstr "" #: ../../source/admin_guide.rst:953 msgid "Restart the service" msgstr "" #: ../../source/admin_guide.rst:953 msgid "restart" msgstr "" #: ../../source/admin_guide.rst:954 msgid "Attempt to gracefully shutdown the service" msgstr "" #: ../../source/admin_guide.rst:954 msgid "shutdown" msgstr "" #: ../../source/admin_guide.rst:955 msgid "Attempt to gracefully restart the service" msgstr "" #: ../../source/admin_guide.rst:955 msgid "reload" msgstr "" #: ../../source/admin_guide.rst:956 msgid "Attempt to seamlessly restart the service" msgstr "" #: ../../source/admin_guide.rst:956 msgid "reload-seamless" msgstr "" #: ../../source/admin_guide.rst:959 msgid "" "A graceful shutdown or reload will allow all server workers to finish any " "current requests before exiting. The parent server process exits " "immediately." msgstr "" #: ../../source/admin_guide.rst:962 msgid "" "A seamless reload will make new configuration settings active, with no " "window where client requests fail due to there being no active listen " "socket. The parent server process will re-exec itself, retaining its " "existing PID. After the re-exec'ed parent server process binds its listen " "sockets, the old listen sockets are closed and old server workers finish any " "current requests before exiting." msgstr "" #: ../../source/admin_guide.rst:969 msgid "" "There is also a special case of ``swift-init all ``, which will run " "the command for all swift services." msgstr "" #: ../../source/admin_guide.rst:972 msgid "" "In cases where there are multiple configs for a service, a specific config " "can be managed with ``swift-init . ``. For " "example, when a separate replication network is used, there might be ``/etc/" "swift/object-server/public.conf`` for the object server and ``/etc/swift/" "object-server/replication.conf`` for the replication services. In this case, " "the replication services could be restarted with ``swift-init object-server." "replication restart``." msgstr "" #: ../../source/admin_guide.rst:982 msgid "Object Auditor" msgstr "" #: ../../source/admin_guide.rst:984 msgid "" "On system failures, the XFS file system can sometimes truncate files it's " "trying to write and produce zero-byte files. The object-auditor will catch " "these problems but in the case of a system crash it would be advisable to " "run an extra, less rate limited sweep to check for these specific files. You " "can run this command as follows::" msgstr "" #: ../../source/admin_guide.rst:992 msgid "" "``-z`` means to only check for zero-byte files at 1000 files per second." msgstr "" #: ../../source/admin_guide.rst:994 msgid "" "At times it is useful to be able to run the object auditor on a specific " "device or set of devices. You can run the object-auditor as follows::" msgstr "" #: ../../source/admin_guide.rst:999 msgid "" "This will run the object auditor on only the sda and sdb devices. This param " "accepts a comma separated list of values." msgstr "" #: ../../source/admin_guide.rst:1004 msgid "Object Replicator" msgstr "" #: ../../source/admin_guide.rst:1006 msgid "" "At times it is useful to be able to run the object replicator on a specific " "device or partition. You can run the object-replicator as follows::" msgstr "" #: ../../source/admin_guide.rst:1011 msgid "" "This will run the object replicator on only the sda and sdb devices. You " "can likewise run that command with ``--partitions``. Both params accept a " "comma separated list of values. If both are specified they will be ANDed " "together. These can only be run in \"once\" mode." msgstr "" #: ../../source/admin_guide.rst:1018 msgid "Swift Orphans" msgstr "" #: ../../source/admin_guide.rst:1020 msgid "Swift Orphans are processes left over after a reload of a Swift server." msgstr "" #: ../../source/admin_guide.rst:1022 msgid "" "For example, when upgrading a proxy server you would probably finish with a " "``swift-init proxy-server reload`` or ``/etc/init.d/swift-proxy reload``. " "This kills the parent proxy server process and leaves the child processes " "running to finish processing whatever requests they might be handling at the " "time. It then starts up a new parent proxy server process and its children " "to handle new incoming requests. This allows zero-downtime upgrades with no " "impact to existing requests." msgstr "" #: ../../source/admin_guide.rst:1030 msgid "" "The orphaned child processes may take a while to exit, depending on the " "length of the requests they were handling. However, sometimes an old process " "can be hung up due to some bug or hardware issue. In these cases, these " "orphaned processes will hang around forever. ``swift-orphans`` can be used " "to find and kill these orphans." msgstr "" #: ../../source/admin_guide.rst:1036 msgid "" "``swift-orphans`` with no arguments will just list the orphans it finds that " "were started more than 24 hours ago. You shouldn't really check for orphans " "until 24 hours after you perform a reload, as some requests can take a long " "time to process. ``swift-orphans -k TERM`` will send the SIG_TERM signal to " "the orphans processes, or you can ``kill -TERM`` the pids yourself if you " "prefer." msgstr "" #: ../../source/admin_guide.rst:1043 msgid "You can run ``swift-orphans --help`` for more options." msgstr "" #: ../../source/admin_guide.rst:1048 msgid "Swift Oldies" msgstr "" #: ../../source/admin_guide.rst:1050 msgid "" "Swift Oldies are processes that have just been around for a long time. " "There's nothing necessarily wrong with this, but it might indicate a hung " "process if you regularly upgrade and reload/restart services. You might have " "so many servers that you don't notice when a reload/restart fails; ``swift-" "oldies`` can help with this." msgstr "" #: ../../source/admin_guide.rst:1056 msgid "" "For example, if you upgraded and reloaded/restarted everything 2 days ago, " "and you've already cleaned up any orphans with ``swift-orphans``, you can " "run ``swift-oldies -a 48`` to find any Swift processes still around that " "were started more than 2 days ago and then investigate them accordingly." msgstr "" #: ../../source/admin_guide.rst:1066 msgid "Custom Log Handlers" msgstr "" #: ../../source/admin_guide.rst:1068 msgid "" "Swift supports setting up custom log handlers for services by specifying a " "comma-separated list of functions to invoke when logging is setup. It does " "so via the ``log_custom_handlers`` configuration option. Logger hooks " "invoked are passed the same arguments as Swift's ``get_logger`` function, as " "well as the ``logging.Logger`` and ``SwiftLogAdapter`` objects:" msgstr "" #: ../../source/admin_guide.rst:1075 msgid "Name" msgstr "" #: ../../source/admin_guide.rst:1077 msgid "Configuration dict to read settings from" msgstr "" #: ../../source/admin_guide.rst:1077 msgid "conf" msgstr "" #: ../../source/admin_guide.rst:1078 msgid "Name of the logger received" msgstr "" #: ../../source/admin_guide.rst:1078 msgid "name" msgstr "" #: ../../source/admin_guide.rst:1079 msgid "(optional) Write log messages to console on stderr" msgstr "" #: ../../source/admin_guide.rst:1079 msgid "log_to_console" msgstr "" #: ../../source/admin_guide.rst:1080 msgid "Route for the logging received" msgstr "" #: ../../source/admin_guide.rst:1080 msgid "log_route" msgstr "" #: ../../source/admin_guide.rst:1081 msgid "Override log format received" msgstr "" #: ../../source/admin_guide.rst:1081 msgid "fmt" msgstr "" #: ../../source/admin_guide.rst:1082 msgid "The logging.getLogger object" msgstr "" #: ../../source/admin_guide.rst:1082 msgid "logger" msgstr "" #: ../../source/admin_guide.rst:1083 msgid "The LogAdapter object" msgstr "" #: ../../source/admin_guide.rst:1083 msgid "adapted_logger" msgstr "" #: ../../source/admin_guide.rst:1087 msgid "" "The instance of ``SwiftLogAdapter`` that wraps the ``logging.Logger`` object " "may be replaced with cloned instances during runtime, for example to use a " "different log prefix with the same ``logging.Logger``. Custom log handlers " "should therefore not modify any attributes of the ``SwiftLogAdapter`` " "instance other than those that will be copied if it is cloned." msgstr "" #: ../../source/admin_guide.rst:1094 msgid "" "A basic example that sets up a custom logger might look like the following:" msgstr "" #: ../../source/admin_guide.rst:1106 msgid "See :ref:`custom-logger-hooks-label` for sample use cases." msgstr "" #: ../../source/admin_guide.rst:1110 msgid "Securing OpenStack Swift" msgstr "" #: ../../source/admin_guide.rst:1112 msgid "" "Please refer to the security guide at https://docs.openstack.org/security-" "guide and in particular the `Object Storage `__ section." msgstr "" #: ../../source/apache_deployment_guide.rst:3 msgid "Apache Deployment Guide" msgstr "" #: ../../source/apache_deployment_guide.rst:7 msgid "Web Front End Considerations" msgstr "" #: ../../source/apache_deployment_guide.rst:9 msgid "" "Swift can be configured to work both using an integral web front-end and " "using a full-fledged Web Server such as the Apache2 (HTTPD) web server. The " "integral web front-end is a wsgi mini \"Web Server\" which opens up its own " "socket and serves http requests directly. The incoming requests accepted by " "the integral web front-end are then forwarded to a wsgi application (the " "core swift) for further handling, possibly via wsgi middleware sub-" "components." msgstr "" #: ../../source/apache_deployment_guide.rst:16 msgid "client<---->'integral web front-end'<---->middleware<---->'core swift'" msgstr "" #: ../../source/apache_deployment_guide.rst:18 msgid "" "To gain full advantage of Apache2, Swift can alternatively be configured to " "work as a request processor of the Apache2 server. This alternative " "deployment scenario uses mod_wsgi of Apache2 to forward requests to the " "swift wsgi application and middleware." msgstr "" #: ../../source/apache_deployment_guide.rst:23 msgid "client<---->'Apache2 with mod_wsgi'<----->middleware<---->'core swift'" msgstr "" #: ../../source/apache_deployment_guide.rst:25 msgid "" "The integral web front-end offers simplicity and requires minimal " "configuration. It is also the web front-end most commonly used with Swift. " "Additionally, the integral web front-end includes support for receiving " "chunked transfer encoding from a client, presently not supported by Apache2 " "in the operation mode described here." msgstr "" #: ../../source/apache_deployment_guide.rst:31 msgid "" "The use of Apache2 offers new ways to extend Swift and integrate it with " "existing authentication, administration and control systems. A single " "Apache2 server can serve as the web front end of any number of swift servers " "residing on a swift node. For example when a storage node offers account, " "container and object services, a single Apache2 server can serve as the web " "front end of all three services." msgstr "" #: ../../source/apache_deployment_guide.rst:38 msgid "" "The apache variant described here was tested as part of an IBM research " "work. It was found that following tuning, the Apache2 offer generally " "equivalent performance to that offered by the integral web front-end. " "Alternative to Apache2, other web servers may be used, but were never tested." msgstr "" #: ../../source/apache_deployment_guide.rst:45 msgid "Apache2 Setup" msgstr "" #: ../../source/apache_deployment_guide.rst:46 msgid "" "Both Apache2 and mod-wsgi needs to be installed on the system. Ubuntu comes " "with Apache2 installed. Install mod-wsgi using::" msgstr "" #: ../../source/apache_deployment_guide.rst:51 msgid "Create a directory for the Apache2 wsgi files::" msgstr "" #: ../../source/apache_deployment_guide.rst:55 msgid "Create a working directory for the wsgi processes::" msgstr "" #: ../../source/apache_deployment_guide.rst:60 msgid "Create a file for each service under ``/srv/www/swift``." msgstr "" #: ../../source/apache_deployment_guide.rst:62 msgid "For a proxy service create ``/srv/www/swift/proxy-server.wsgi``::" msgstr "" #: ../../source/apache_deployment_guide.rst:68 msgid "For an account service create ``/srv/www/swift/account-server.wsgi``::" msgstr "" #: ../../source/apache_deployment_guide.rst:75 msgid "" "For an container service create ``/srv/www/swift/container-server.wsgi``::" msgstr "" #: ../../source/apache_deployment_guide.rst:82 msgid "For an object service create ``/srv/www/swift/object-server.wsgi``::" msgstr "" #: ../../source/apache_deployment_guide.rst:89 msgid "" "Create a ``/etc/apache2/conf.d/swift_wsgi.conf`` configuration file that " "will define a port and Virtual Host per each local service. For example an " "Apache2 serving as a web front end of a proxy service::" msgstr "" #: ../../source/apache_deployment_guide.rst:110 msgid "" "Notice that when using Apache the limit on the maximal object size should be " "imposed by Apache using the `LimitRequestBody` rather by the swift proxy. " "Note also that the `LimitRequestBody` should indicate the same value as " "indicated by `max_file_size` located in both ``/etc/swift/swift.conf`` and " "in ``/etc/swift/test.conf``. The Swift default value for `max_file_size` " "(when not present) is `5368709122`. For example an Apache2 serving as a web " "front end of a storage node::" msgstr "" #: ../../source/apache_deployment_guide.rst:166 msgid "Enable the newly configured Virtual Hosts::" msgstr "" #: ../../source/apache_deployment_guide.rst:170 msgid "Next, stop, test and start Apache2 again::" msgstr "" #: ../../source/apache_deployment_guide.rst:182 msgid "Edit the tests config file and add::" msgstr "" #: ../../source/apache_deployment_guide.rst:187 msgid "" "Also check to see that the file includes `max_file_size` of the same value " "as used for the `LimitRequestBody` in the apache config file above." msgstr "" #: ../../source/apache_deployment_guide.rst:190 msgid "We are done. You may run functional tests to test - e.g.::" msgstr "" #: ../../source/associated_projects.rst:4 msgid "Associated Projects" msgstr "" #: ../../source/associated_projects.rst:9 msgid "Application Bindings" msgstr "" #: ../../source/associated_projects.rst:11 msgid "OpenStack supported binding:" msgstr "" #: ../../source/associated_projects.rst:13 msgid "`Python-SwiftClient `_" msgstr "" #: ../../source/associated_projects.rst:15 msgid "Unofficial libraries and bindings:" msgstr "" #: ../../source/associated_projects.rst:17 msgid "PHP" msgstr "" #: ../../source/associated_projects.rst:19 msgid "" "`PHP-opencloud `_ - Official Rackspace PHP " "bindings that should work for other Swift deployments too." msgstr "" #: ../../source/associated_projects.rst:22 msgid "Ruby" msgstr "" #: ../../source/associated_projects.rst:24 msgid "" "`swift_client `_ - Small but " "powerful Ruby client to interact with OpenStack Swift" msgstr "" #: ../../source/associated_projects.rst:26 msgid "" "`nightcrawler_swift `_ - This " "Ruby gem teleports your assets to an OpenStack Swift bucket/container" msgstr "" #: ../../source/associated_projects.rst:28 msgid "" "`swift storage `_ - Simple " "OpenStack Swift storage client." msgstr "" #: ../../source/associated_projects.rst:31 msgid "Java" msgstr "" #: ../../source/associated_projects.rst:33 msgid "" "`libcloud `_ - Apache Libcloud - a unified " "interface in Python for different clouds with OpenStack Swift support." msgstr "" #: ../../source/associated_projects.rst:35 msgid "" "`jclouds `_ - Java library " "offering bindings for all OpenStack projects" msgstr "" #: ../../source/associated_projects.rst:37 msgid "" "`java-openstack-swift `_ " "- Java bindings for OpenStack Swift" msgstr "" #: ../../source/associated_projects.rst:39 msgid "" "`javaswift `_ - Collection of Java tools for Swift" msgstr "" #: ../../source/associated_projects.rst:41 msgid "Bash" msgstr "" #: ../../source/associated_projects.rst:43 msgid "" "`supload `_ - Bash script to upload " "file to cloud storage based on OpenStack Swift API." msgstr "" #: ../../source/associated_projects.rst:46 msgid ".NET" msgstr "" #: ../../source/associated_projects.rst:48 msgid "" "`openstacknetsdk.org `_ - An OpenStack Cloud " "SDK for Microsoft .NET." msgstr "" #: ../../source/associated_projects.rst:51 msgid "Go" msgstr "" #: ../../source/associated_projects.rst:53 msgid "`Go language bindings `_" msgstr "" #: ../../source/associated_projects.rst:54 msgid "" "`Gophercloud an OpenStack SDK for Go `_" msgstr "" #: ../../source/associated_projects.rst:58 msgid "Authentication" msgstr "" #: ../../source/associated_projects.rst:60 msgid "" "`Keystone `_ - Official Identity " "Service for OpenStack." msgstr "" #: ../../source/associated_projects.rst:62 msgid "" "`Swauth `_ - **RETIRED**: An alternative " "Swift authentication service that only requires Swift itself." msgstr "" #: ../../source/associated_projects.rst:64 msgid "" "`Basicauth `_ - HTTP Basic " "authentication support (keystone backed)." msgstr "" #: ../../source/associated_projects.rst:69 msgid "Command Line Access" msgstr "" #: ../../source/associated_projects.rst:71 msgid "" "`Swiftly `_ - Alternate command line " "access to Swift with direct (no proxy) access capabilities as well." msgstr "" #: ../../source/associated_projects.rst:76 msgid "Log Processing" msgstr "" #: ../../source/associated_projects.rst:78 msgid "" "`slogging `_ - Basic stats and logging tools." msgstr "" #: ../../source/associated_projects.rst:83 msgid "Monitoring & Statistics" msgstr "" #: ../../source/associated_projects.rst:85 msgid "" "`Swift Informant `_ - Swift " "proxy Middleware to send events to a statsd instance." msgstr "" #: ../../source/associated_projects.rst:87 msgid "" "`Swift Inspector `_ - Swift " "middleware to relay information about a request back to the client." msgstr "" #: ../../source/associated_projects.rst:92 msgid "Content Distribution Network Integration" msgstr "" #: ../../source/associated_projects.rst:94 msgid "`SOS `_ - Swift Origin Server." msgstr "" #: ../../source/associated_projects.rst:98 msgid "Alternative API" msgstr "" #: ../../source/associated_projects.rst:100 msgid "" "`ProxyFS `_ - Integrated file and object " "access for Swift object storage" msgstr "" #: ../../source/associated_projects.rst:102 msgid "" "`SwiftHLM `_ - a middleware for " "using OpenStack Swift with tape and other high latency media storage " "backends." msgstr "" #: ../../source/associated_projects.rst:108 msgid "Benchmarking/Load Generators" msgstr "" #: ../../source/associated_projects.rst:110 msgid "`getput `_ - getput tool suite" msgstr "" #: ../../source/associated_projects.rst:111 msgid "" "`COSbench `_ - COSbench tool suite" msgstr "" #: ../../source/associated_projects.rst:117 msgid "Custom Logger Hooks" msgstr "" #: ../../source/associated_projects.rst:119 msgid "" "`swift-sentry `_ - Sentry " "exception reporting for Swift" msgstr "" #: ../../source/associated_projects.rst:123 msgid "Storage Backends (DiskFile API implementations)" msgstr "" #: ../../source/associated_projects.rst:124 msgid "" "`Swift-on-File `_ - Enables objects " "created using Swift API to be accessed as files on a POSIX filesystem and " "vice versa." msgstr "" #: ../../source/associated_projects.rst:127 msgid "" "`swift-scality-backend `_ - " "Scality sproxyd object server implementation for Swift." msgstr "" #: ../../source/associated_projects.rst:131 msgid "Developer Tools" msgstr "" #: ../../source/associated_projects.rst:132 msgid "" "`SAIO bash scripts `_ - Well " "commented simple bash scripts for Swift all in one setup." msgstr "" #: ../../source/associated_projects.rst:134 msgid "" "`vagrant-swift-all-in-one `_ - Quickly setup a standard development environment using Vagrant and " "Chef cookbooks in an Ubuntu virtual machine." msgstr "" #: ../../source/associated_projects.rst:138 msgid "" "`SAIO Ansible playbook `_ - " "Quickly setup a standard development environment using Vagrant and Ansible " "in a Fedora virtual machine (with built-in `Swift-on-File `_ support)." msgstr "" #: ../../source/associated_projects.rst:142 msgid "" "`Multi Swift `_ - Bash scripts to " "spin up multiple Swift clusters sharing the same hardware" msgstr "" #: ../../source/associated_projects.rst:147 msgid "Other" msgstr "" #: ../../source/associated_projects.rst:149 msgid "" "`Glance `_ - Provides services for " "discovering, registering, and retrieving virtual machine images (for " "OpenStack Compute [Nova], for example)." msgstr "" #: ../../source/associated_projects.rst:152 msgid "" "`Django Swiftbrowser `_ - " "Simple Django web app to access OpenStack Swift." msgstr "" #: ../../source/associated_projects.rst:154 msgid "" "`Swift-account-stats `_ - " "Swift-account-stats is a tool to report statistics on Swift usage at tenant " "and global levels." msgstr "" #: ../../source/associated_projects.rst:157 msgid "" "`PyECLib `_ - High-level erasure code " "library used by Swift" msgstr "" #: ../../source/associated_projects.rst:159 msgid "" "`liberasurecode `_ - Low-level " "erasure code library used by PyECLib" msgstr "" #: ../../source/associated_projects.rst:161 msgid "" "`Swift Browser `_ - JavaScript " "interface for Swift" msgstr "" #: ../../source/associated_projects.rst:163 msgid "" "`swift-ui `_ - OpenStack Swift web " "browser" msgstr "" #: ../../source/associated_projects.rst:165 msgid "" "`swiftbackmeup `_ - Utility " "that allows one to create backups and upload them to OpenStack Swift" msgstr "" #: ../../source/audit_watchers.rst:5 msgid "Object Audit Watchers" msgstr "" #: ../../source/audit_watchers.rst:10 msgid "Dark Data" msgstr "" #: ../../source/container.rst:5 msgid "Container" msgstr "" #: ../../source/container.rst:10 msgid "Container Auditor" msgstr "" #: ../../source/container.rst:20 msgid "Container Backend" msgstr "" #: ../../source/container.rst:30 msgid "Container Replicator" msgstr "" #: ../../source/container.rst:40 msgid "Container Server" msgstr "" #: ../../source/container.rst:50 msgid "Container Reconciler" msgstr "" #: ../../source/container.rst:60 msgid "Container Sharder" msgstr "" #: ../../source/container.rst:70 msgid "Container Sync" msgstr "" #: ../../source/container.rst:80 msgid "Container Updater" msgstr "" #: ../../source/cors.rst:3 msgid "CORS" msgstr "" #: ../../source/cors.rst:5 msgid "" "CORS_ is a mechanism to allow code running in a browser (Javascript for " "example) make requests to a domain other than the one from where it " "originated." msgstr "" #: ../../source/cors.rst:8 msgid "Swift supports CORS requests to containers and objects." msgstr "" #: ../../source/cors.rst:10 msgid "" "CORS metadata is held on the container only. The values given apply to the " "container itself and all objects within it." msgstr "" #: ../../source/cors.rst:13 msgid "The supported headers are," msgstr "" #: ../../source/cors.rst:16 msgid "Metadata" msgstr "" #: ../../source/cors.rst:16 msgid "Use" msgstr "" #: ../../source/cors.rst:18 msgid "Origins to be allowed to make Cross Origin Requests, space separated." msgstr "" #: ../../source/cors.rst:18 msgid "X-Container-Meta-Access-Control-Allow-Origin" msgstr "" #: ../../source/cors.rst:22 msgid "Max age for the Origin to hold the preflight results." msgstr "" #: ../../source/cors.rst:22 msgid "X-Container-Meta-Access-Control-Max-Age" msgstr "" #: ../../source/cors.rst:25 msgid "" "Headers exposed to the user agent (e.g. browser) in the actual request " "response. Space separated." msgstr "" #: ../../source/cors.rst:25 msgid "X-Container-Meta-Access-Control-Expose-Headers" msgstr "" #: ../../source/cors.rst:31 msgid "" "In addition the values set in container metadata, some cluster-wide values " "may also be configured using the ``strict_cors_mode``, ``cors_allow_origin`` " "and ``cors_expose_headers`` in ``proxy-server.conf``. See ``proxy-server." "conf-sample`` for more information." msgstr "" #: ../../source/cors.rst:36 msgid "" "Before a browser issues an actual request it may issue a `preflight " "request`_. The preflight request is an OPTIONS call to verify the Origin is " "allowed to make the request. The sequence of events are," msgstr "" #: ../../source/cors.rst:40 msgid "Browser makes OPTIONS request to Swift" msgstr "" #: ../../source/cors.rst:41 msgid "Swift returns 200/401 to browser based on allowed origins" msgstr "" #: ../../source/cors.rst:42 msgid "" "If 200, browser makes the \"actual request\" to Swift, i.e. PUT, POST, " "DELETE, HEAD, GET" msgstr "" #: ../../source/cors.rst:45 msgid "" "When a browser receives a response to an actual request it only exposes " "those headers listed in the ``Access-Control-Expose-Headers`` header. By " "default Swift returns the following values for this header," msgstr "" #: ../../source/cors.rst:49 msgid "" "\"simple response headers\" as listed on http://www.w3.org/TR/cors/#simple-" "response-header" msgstr "" #: ../../source/cors.rst:51 msgid "" "the headers ``etag``, ``x-timestamp``, ``x-trans-id``, ``x-openstack-request-" "id``" msgstr "" #: ../../source/cors.rst:53 msgid "" "all metadata headers (``X-Container-Meta-*`` for containers and ``X-Object-" "Meta-*`` for objects)" msgstr "" #: ../../source/cors.rst:55 msgid "headers listed in ``X-Container-Meta-Access-Control-Expose-Headers``" msgstr "" #: ../../source/cors.rst:56 msgid "" "headers configured using the ``cors_expose_headers`` option in ``proxy-" "server.conf``" msgstr "" #: ../../source/cors.rst:60 msgid "" "An OPTIONS request to a symlink object will respond with the options for the " "symlink only, the request will not be redirected to the target object. " "Therefore, if the symlink's target object is in another container with CORS " "settings, the response will not reflect the settings." msgstr "" #: ../../source/cors.rst:68 msgid "Sample Javascript" msgstr "" #: ../../source/cors.rst:70 msgid "" "To see some CORS Javascript in action download the `test CORS page`_ (source " "below). Host it on a webserver and take note of the protocol and hostname " "(origin) you'll be using to request the page, e.g. http://localhost." msgstr "" #: ../../source/cors.rst:74 msgid "" "Locate a container you'd like to query. Needless to say the Swift cluster " "hosting this container should have CORS support. Append the origin of the " "test page to the container's ``X-Container-Meta-Access-Control-Allow-" "Origin`` header,::" msgstr "" #: ../../source/cors.rst:83 msgid "" "At this point the container is now accessible to CORS clients hosted on " "http://localhost. Open the test CORS page in your browser." msgstr "" #: ../../source/cors.rst:86 msgid "Populate the Token field" msgstr "" #: ../../source/cors.rst:87 msgid "Populate the URL field with the URL of either a container or object" msgstr "" #: ../../source/cors.rst:88 msgid "Select the request method" msgstr "" #: ../../source/cors.rst:89 msgid "Hit Submit" msgstr "" #: ../../source/cors.rst:91 msgid "" "Assuming the request succeeds you should see the response header and body. " "If something went wrong the response status will be 0." msgstr "" #: ../../source/cors.rst:98 msgid "Test CORS Page" msgstr "" #: ../../source/cors.rst:100 msgid "" "A sample cross-site test page is located in the project source tree ``doc/" "source/test-cors.html``." msgstr "" #: ../../source/crossdomain.rst:3 msgid "Cross-domain Policy File" msgstr "" #: ../../source/crossdomain.rst:5 msgid "" "A cross-domain policy file allows web pages hosted elsewhere to use client " "side technologies such as Flash, Java and Silverlight to interact with the " "Swift API." msgstr "" #: ../../source/crossdomain.rst:9 msgid "" "See https://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/xdomain.html " "for a description of the purpose and structure of the cross-domain policy " "file. The cross-domain policy file is installed in the root of a web server " "(i.e., the path is ``/crossdomain.xml``)." msgstr "" #: ../../source/crossdomain.rst:14 msgid "" "The crossdomain middleware responds to a path of ``/crossdomain.xml`` with " "an XML document such as:" msgstr "" #: ../../source/crossdomain.rst:25 msgid "" "You should use a policy appropriate to your site. The examples and the " "default policy are provided to indicate how to syntactically construct a " "cross domain policy file -- they are not recommendations." msgstr "" #: ../../source/crossdomain.rst:31 msgid "Configuration" msgstr "" #: ../../source/crossdomain.rst:33 msgid "" "To enable this middleware, add it to the pipeline in your proxy-server.conf " "file. It should be added before any authentication (e.g., tempauth or " "keystone) middleware. In this example ellipsis (...) indicate other " "middleware you may have chosen to use:" msgstr "" #: ../../source/crossdomain.rst:43 msgid "And add a filter section, such as:" msgstr "" #: ../../source/crossdomain.rst:52 msgid "" "For continuation lines, put some whitespace before the continuation text. " "Ensure you put a completely blank line to terminate the " "``cross_domain_policy`` value." msgstr "" #: ../../source/crossdomain.rst:56 msgid "" "The ``cross_domain_policy`` name/value is optional. If omitted, the policy " "defaults as if you had specified:" msgstr "" #: ../../source/crossdomain.rst:65 msgid "" "The default policy is very permissive; this is appropriate for most public " "cloud deployments, but may not be appropriate for all deployments. See also: " "`CWE-942 `__" msgstr "" #: ../../source/db.rst:5 msgid "Account DB and Container DB" msgstr "" #: ../../source/db.rst:10 msgid "DB" msgstr "" #: ../../source/db.rst:20 msgid "DB replicator" msgstr "" #: ../../source/deployment_guide.rst:3 msgid "Deployment Guide" msgstr "" #: ../../source/deployment_guide.rst:5 msgid "" "This document provides general guidance for deploying and configuring Swift. " "Detailed descriptions of configuration options can be found in the :doc:" "`configuration documentation `." msgstr "" #: ../../source/deployment_guide.rst:11 msgid "Hardware Considerations" msgstr "" #: ../../source/deployment_guide.rst:13 msgid "" "Swift is designed to run on commodity hardware. RAID on the storage drives " "is not required and not recommended. Swift's disk usage pattern is the worst " "case possible for RAID, and performance degrades very quickly using RAID 5 " "or 6." msgstr "" #: ../../source/deployment_guide.rst:19 msgid "Deployment Options" msgstr "" #: ../../source/deployment_guide.rst:21 msgid "" "The Swift services run completely autonomously, which provides for a lot of " "flexibility when architecting the hardware deployment for Swift. The 4 main " "services are:" msgstr "" #: ../../source/deployment_guide.rst:25 msgid "Proxy Services" msgstr "" #: ../../source/deployment_guide.rst:26 msgid "Object Services" msgstr "" #: ../../source/deployment_guide.rst:27 msgid "Container Services" msgstr "" #: ../../source/deployment_guide.rst:28 msgid "Account Services" msgstr "" #: ../../source/deployment_guide.rst:30 msgid "" "The Proxy Services are more CPU and network I/O intensive. If you are using " "10g networking to the proxy, or are terminating SSL traffic at the proxy, " "greater CPU power will be required." msgstr "" #: ../../source/deployment_guide.rst:34 msgid "" "The Object, Container, and Account Services (Storage Services) are more disk " "and network I/O intensive." msgstr "" #: ../../source/deployment_guide.rst:37 msgid "" "The easiest deployment is to install all services on each server. There is " "nothing wrong with doing this, as it scales each service out horizontally." msgstr "" #: ../../source/deployment_guide.rst:40 msgid "" "Alternatively, one set of servers may be dedicated to the Proxy Services and " "a different set of servers dedicated to the Storage Services. This allows " "faster networking to be configured to the proxy than the storage servers, " "and keeps load balancing to the proxies more manageable. Storage Services " "scale out horizontally as storage servers are added, and the overall API " "throughput can be scaled by adding more proxies." msgstr "" #: ../../source/deployment_guide.rst:47 msgid "" "If you need more throughput to either Account or Container Services, they " "may each be deployed to their own servers. For example you might use faster " "(but more expensive) SAS or even SSD drives to get faster disk I/O to the " "databases." msgstr "" #: ../../source/deployment_guide.rst:51 msgid "" "A high-availability (HA) deployment of Swift requires that multiple proxy " "servers are deployed and requests are load-balanced between them. Each proxy " "server instance is stateless and able to respond to requests for the entire " "cluster." msgstr "" #: ../../source/deployment_guide.rst:56 msgid "" "Load balancing and network design is left as an exercise to the reader, but " "this is a very important part of the cluster, so time should be spent " "designing the network for a Swift cluster." msgstr "" #: ../../source/deployment_guide.rst:63 msgid "Web Front End Options" msgstr "" #: ../../source/deployment_guide.rst:65 msgid "" "Swift comes with an integral web front end. However, it can also be deployed " "as a request processor of an Apache2 using mod_wsgi as described in :doc:" "`Apache Deployment Guide `." msgstr "" #: ../../source/deployment_guide.rst:73 msgid "Preparing the Ring" msgstr "" #: ../../source/deployment_guide.rst:75 msgid "" "The first step is to determine the number of partitions that will be in the " "ring. We recommend that there be a minimum of 100 partitions per drive to " "insure even distribution across the drives. A good starting point might be " "to figure out the maximum number of drives the cluster will contain, and " "then multiply by 100, and then round up to the nearest power of two." msgstr "" #: ../../source/deployment_guide.rst:81 msgid "" "For example, imagine we are building a cluster that will have no more than " "5,000 drives. That would mean that we would have a total number of 500,000 " "partitions, which is pretty close to 2^19, rounded up." msgstr "" #: ../../source/deployment_guide.rst:85 msgid "" "It is also a good idea to keep the number of partitions small (relatively). " "The more partitions there are, the more work that has to be done by the " "replicators and other backend jobs and the more memory the rings consume in " "process. The goal is to find a good balance between small rings and maximum " "cluster size." msgstr "" #: ../../source/deployment_guide.rst:91 msgid "" "The next step is to determine the number of replicas to store of the data. " "Currently it is recommended to use 3 (as this is the only value that has " "been tested). The higher the number, the more storage that is used but the " "less likely you are to lose data." msgstr "" #: ../../source/deployment_guide.rst:96 msgid "" "It is also important to determine how many zones the cluster should have. It " "is recommended to start with a minimum of 5 zones. You can start with fewer, " "but our testing has shown that having at least five zones is optimal when " "failures occur. We also recommend trying to configure the zones at as high a " "level as possible to create as much isolation as possible. Some example " "things to take into consideration can include physical location, power " "availability, and network connectivity. For example, in a small cluster you " "might decide to split the zones up by cabinet, with each cabinet having its " "own power and network connectivity. The zone concept is very abstract, so " "feel free to use it in whatever way best isolates your data from failure. " "Each zone exists in a region." msgstr "" #: ../../source/deployment_guide.rst:108 msgid "" "A region is also an abstract concept that may be used to distinguish between " "geographically separated areas as well as can be used within same " "datacenter. Regions and zones are referenced by a positive integer." msgstr "" #: ../../source/deployment_guide.rst:112 msgid "You can now start building the ring with::" msgstr "" #: ../../source/deployment_guide.rst:116 msgid "" "This will start the ring build process creating the with " "2^ partitions. is the time in hours before a " "specific partition can be moved in succession (24 is a good value for this)." msgstr "" #: ../../source/deployment_guide.rst:120 msgid "Devices can be added to the ring with::" msgstr "" #: ../../source/deployment_guide.rst:124 msgid "" "This will add a device to the ring where is the name of the " "builder file that was created previously, is the number of the " "region the zone is in, is the number of the zone this device is in, " " is the ip address of the server the device is in, is the port " "number that the server is running on, is the name of the " "device on the server (for example: sdb1), is a string of metadata for " "the device (optional), and is a float weight that determines how " "many partitions are put on the device relative to the rest of the devices in " "the cluster (a good starting point is 100.0 x TB on the drive).Add each " "device that will be initially in the cluster." msgstr "" #: ../../source/deployment_guide.rst:135 msgid "Once all of the devices are added to the ring, run::" msgstr "" #: ../../source/deployment_guide.rst:139 msgid "" "This will distribute the partitions across the drives in the ring. It is " "important whenever making changes to the ring to make all the changes " "required before running rebalance. This will ensure that the ring stays as " "balanced as possible, and as few partitions are moved as possible." msgstr "" #: ../../source/deployment_guide.rst:144 msgid "" "The above process should be done to make a ring for each storage service " "(Account, Container and Object). The builder files will be needed in future " "changes to the ring, so it is very important that these be kept and backed " "up. The resulting .tar.gz ring file should be pushed to all of the servers " "in the cluster. For more information about building rings, running swift-" "ring-builder with no options will display help text with available commands " "and options. More information on how the ring works internally can be found " "in the :doc:`Ring Overview `." msgstr "" #: ../../source/deployment_guide.rst:157 msgid "Running object-servers Per Disk" msgstr "" #: ../../source/deployment_guide.rst:159 msgid "" "The lack of true asynchronous file I/O on Linux leaves the object-server " "workers vulnerable to misbehaving disks. Because any object-server worker " "can service a request for any disk, and a slow I/O request blocks the " "eventlet hub, a single slow disk can impair an entire storage node. This " "also prevents object servers from fully utilizing all their disks during " "heavy load." msgstr "" #: ../../source/deployment_guide.rst:165 msgid "" "Another way to get full I/O isolation is to give each disk on a storage node " "a different port in the storage policy rings. Then set the :ref:" "`servers_per_port ` option in the object-" "server config. NOTE: while the purpose of this config setting is to run one " "or more object-server worker processes per *disk*, the implementation just " "runs object-servers per unique port of local devices in the rings. The " "deployer must combine this option with appropriately-configured rings to " "benefit from this feature." msgstr "" #: ../../source/deployment_guide.rst:174 msgid "" "Here's an example (abbreviated) old-style ring (2 node cluster with 2 disks " "each)::" msgstr "" #: ../../source/deployment_guide.rst:183 msgid "And here's the same ring set up for ``servers_per_port``::" msgstr "" #: ../../source/deployment_guide.rst:191 msgid "" "When migrating from normal to ``servers_per_port``, perform these steps in " "order:" msgstr "" #: ../../source/deployment_guide.rst:193 msgid "Upgrade Swift code to a version capable of doing ``servers_per_port``." msgstr "" #: ../../source/deployment_guide.rst:195 msgid "Enable ``servers_per_port`` with a value greater than zero." msgstr "" #: ../../source/deployment_guide.rst:197 msgid "" "Restart ``swift-object-server`` processes with a SIGHUP. At this point, you " "will have the ``servers_per_port`` number of ``swift-object-server`` " "processes serving all requests for all disks on each node. This preserves " "availability, but you should perform the next step as quickly as possible." msgstr "" #: ../../source/deployment_guide.rst:202 msgid "" "Push out new rings that actually have different ports per disk on each " "server. One of the ports in the new ring should be the same as the port " "used in the old ring (\"6200\" in the example above). This will cover " "existing proxy-server processes who haven't loaded the new ring yet. They " "can still talk to any storage node regardless of whether or not that storage " "node has loaded the ring and started object-server processes on the new " "ports." msgstr "" #: ../../source/deployment_guide.rst:210 msgid "" "If you do not run a separate object-server for replication, then this " "setting must be available to the object-replicator and object-reconstructor " "(i.e. appear in the [DEFAULT] config section)." msgstr "" #: ../../source/deployment_guide.rst:218 msgid "General Service Configuration" msgstr "" #: ../../source/deployment_guide.rst:220 msgid "" "Most Swift services fall into two categories. Swift's wsgi servers and " "background daemons." msgstr "" #: ../../source/deployment_guide.rst:223 msgid "" "For more information specific to the configuration of Swift's wsgi servers " "with paste deploy see :ref:`general-server-configuration`." msgstr "" #: ../../source/deployment_guide.rst:226 msgid "" "Configuration for servers and daemons can be expressed together in the same " "file for each type of server, or separately. If a required section for the " "service trying to start is missing there will be an error. The sections not " "used by the service are ignored." msgstr "" #: ../../source/deployment_guide.rst:231 msgid "" "Consider the example of an object storage node. By convention, " "configuration for the object-server, object-updater, object-replicator, " "object-auditor, and object-reconstructor exist in a single file ``/etc/swift/" "object-server.conf``::" msgstr "" #: ../../source/deployment_guide.rst:250 msgid "Swift services expect a configuration path as the first argument::" msgstr "" #: ../../source/deployment_guide.rst:257 msgid "" "If you omit the object-auditor section this file could not be used as the " "configuration path when starting the ``swift-object-auditor`` daemon::" msgstr "" #: ../../source/deployment_guide.rst:263 msgid "" "If the configuration path is a directory instead of a file all of the files " "in the directory with the file extension \".conf\" will be combined to " "generate the configuration object which is delivered to the Swift service. " "This is referred to generally as \"directory based configuration\"." msgstr "" #: ../../source/deployment_guide.rst:268 msgid "" "Directory based configuration leverages ConfigParser's native multi-file " "support. Files ending in \".conf\" in the given directory are parsed in " "lexicographical order. Filenames starting with '.' are ignored. A mixture " "of file and directory configuration paths is not supported - if the " "configuration path is a file only that file will be parsed." msgstr "" #: ../../source/deployment_guide.rst:274 msgid "" "The Swift service management tool ``swift-init`` has adopted the convention " "of looking for ``/etc/swift/{type}-server.conf.d/`` if the file ``/etc/swift/" "{type}-server.conf`` file does not exist." msgstr "" #: ../../source/deployment_guide.rst:278 msgid "" "When using directory based configuration, if the same option under the same " "section appears more than once in different files, the last value parsed is " "said to override previous occurrences. You can ensure proper override " "precedence by prefixing the files in the configuration directory with " "numerical values.::" msgstr "" #: ../../source/deployment_guide.rst:294 msgid "" "You can inspect the resulting combined configuration object using the " "``swift-config`` command line tool" msgstr "" #: ../../source/deployment_guide.rst:301 msgid "General Server Configuration" msgstr "" #: ../../source/deployment_guide.rst:303 msgid "" "Swift uses paste.deploy (https://pypi.org/project/Paste/) to manage server " "configurations. Detailed descriptions of configuration options can be found " "in the :doc:`configuration documentation `." msgstr "" #: ../../source/deployment_guide.rst:307 msgid "" "Default configuration options are set in the ``[DEFAULT]`` section, and any " "options specified there can be overridden in any of the other sections BUT " "ONLY BY USING THE SYNTAX ``set option_name = value``. This is the " "unfortunate way paste.deploy works and I'll try to explain it in full." msgstr "" #: ../../source/deployment_guide.rst:312 msgid "First, here's an example paste.deploy configuration file::" msgstr "" #: ../../source/deployment_guide.rst:330 msgid "The resulting configuration that myapp receives is::" msgstr "" #: ../../source/deployment_guide.rst:341 msgid "" "So, ``name1`` got the global value which is fine since it's only in the " "``DEFAULT`` section anyway." msgstr "" #: ../../source/deployment_guide.rst:344 msgid "" "``name2`` got the global value from ``DEFAULT`` even though it appears to be " "overridden in the ``app:myapp`` subsection. This is just the unfortunate way " "paste.deploy works (at least at the time of this writing.)" msgstr "" #: ../../source/deployment_guide.rst:348 msgid "" "``name3`` got the local value from the ``app:myapp`` subsection because it " "is using the special paste.deploy syntax of ``set option_name = value``. So, " "if you want a default value for most app/filters but want to override it in " "one subsection, this is how you do it." msgstr "" #: ../../source/deployment_guide.rst:353 msgid "" "``name4`` got the global value from ``DEFAULT`` since it's only in that " "section anyway. But, since we used the ``set`` syntax in the ``DEFAULT`` " "section even though we shouldn't, notice we also got a ``set name4`` " "variable. Weird, but probably not harmful." msgstr "" #: ../../source/deployment_guide.rst:358 msgid "" "``name5`` got the local value from the ``app:myapp`` subsection since it's " "only there anyway, but notice that it is in the global configuration and not " "the local configuration. This is because we used the ``set`` syntax to set " "the value. Again, weird, but not harmful since Swift just treats the two " "sets of configuration values as one set anyway." msgstr "" #: ../../source/deployment_guide.rst:364 msgid "" "``name6`` got the local value from ``app:myapp`` subsection since it's only " "there, and since we didn't use the ``set`` syntax, it's only in the local " "configuration and not the global one. Though, as indicated above, there is " "no special distinction with Swift." msgstr "" #: ../../source/deployment_guide.rst:369 msgid "" "That's quite an explanation for something that should be so much simpler, " "but it might be important to know how paste.deploy interprets configuration " "files. The main rule to remember when working with Swift configuration files " "is:" msgstr "" #: ../../source/deployment_guide.rst:375 msgid "" "Use the ``set option_name = value`` syntax in subsections if the option is " "also set in the ``[DEFAULT]`` section. Don't get in the habit of always " "using the ``set`` syntax or you'll probably mess up your non-paste.deploy " "configuration files." msgstr "" #: ../../source/deployment_guide.rst:385 msgid "Per policy configuration" msgstr "" #: ../../source/deployment_guide.rst:387 msgid "" "Some proxy-server configuration options may be overridden for individual :" "doc:`overview_policies` by including per-policy config section(s). These " "options are:" msgstr "" #: ../../source/deployment_guide.rst:391 msgid "``sorting_method``" msgstr "" #: ../../source/deployment_guide.rst:392 msgid "``read_affinity``" msgstr "" #: ../../source/deployment_guide.rst:393 msgid "``write_affinity``" msgstr "" #: ../../source/deployment_guide.rst:394 msgid "``write_affinity_node_count``" msgstr "" #: ../../source/deployment_guide.rst:395 msgid "``write_affinity_handoff_delete_count``" msgstr "" #: ../../source/deployment_guide.rst:397 msgid "The per-policy config section name must be of the form::" msgstr "" #: ../../source/deployment_guide.rst:403 msgid "" "The per-policy config section name should refer to the policy index, not the " "policy name." msgstr "" #: ../../source/deployment_guide.rst:408 msgid "" "The first part of proxy-server config section name must match the name of " "the proxy-server config section. This is typically ``proxy-server`` as shown " "above, but if different then the names of any per-policy config sections " "must be changed accordingly." msgstr "" #: ../../source/deployment_guide.rst:413 msgid "" "The value of an option specified in a per-policy section will override any " "value given in the proxy-server section for that policy only. Otherwise the " "value of these options will be that specified in the proxy-server section." msgstr "" #: ../../source/deployment_guide.rst:417 msgid "" "For example, the following section provides policy-specific options for a " "policy with index ``3``::" msgstr "" #: ../../source/deployment_guide.rst:429 msgid "" "It is recommended that per-policy config options are *not* included in the " "``[DEFAULT]`` section. If they are then the following behavior applies." msgstr "" #: ../../source/deployment_guide.rst:432 msgid "" "Per-policy config sections will inherit options in the ``[DEFAULT]`` section " "of the config file, and any such inheritance will take precedence over " "inheriting options from the proxy-server config section." msgstr "" #: ../../source/deployment_guide.rst:436 msgid "" "Per-policy config section options will override options in the ``[DEFAULT]`` " "section. Unlike the behavior described under `General Server Configuration`_ " "for paste-deploy ``filter`` and ``app`` sections, the ``set`` keyword is not " "required for options to override in per-policy config sections." msgstr "" #: ../../source/deployment_guide.rst:442 msgid "For example, given the following settings in a config file::" msgstr "" #: ../../source/deployment_guide.rst:461 msgid "would result in policy with index ``0`` having settings:" msgstr "" #: ../../source/deployment_guide.rst:463 msgid "``read_affinity = r0=100`` (inherited from the ``[DEFAULT]`` section)" msgstr "" #: ../../source/deployment_guide.rst:464 msgid "``write_affinity = r1`` (specified in the policy 0 section)" msgstr "" #: ../../source/deployment_guide.rst:466 msgid "and any other policy would have the default settings of:" msgstr "" #: ../../source/deployment_guide.rst:468 msgid "``read_affinity = r1=100`` (set in the proxy-server section)" msgstr "" #: ../../source/deployment_guide.rst:469 msgid "``write_affinity = r0`` (inherited from the ``[DEFAULT]`` section)" msgstr "" #: ../../source/deployment_guide.rst:473 msgid "Proxy Middlewares" msgstr "" #: ../../source/deployment_guide.rst:475 msgid "" "Many features in Swift are implemented as middleware in the proxy-server " "pipeline. See :doc:`middleware` and the ``proxy-server.conf-sample`` file " "for more information. In particular, the use of some type of :doc:" "`authentication and authorization middleware ` is highly " "recommended." msgstr "" #: ../../source/deployment_guide.rst:483 msgid "Memcached Considerations" msgstr "" #: ../../source/deployment_guide.rst:485 msgid "" "Several of the Services rely on Memcached for caching certain types of " "lookups, such as auth tokens, and container/account existence. Swift does " "not do any caching of actual object data. Memcached should be able to run " "on any servers that have available RAM and CPU. Typically Memcached is run " "on the proxy servers. The ``memcache_servers`` config option in the ``proxy-" "server.conf`` should contain all memcached servers." msgstr "" #: ../../source/deployment_guide.rst:494 msgid "Shard Range Listing Cache" msgstr "" #: ../../source/deployment_guide.rst:496 msgid "" "When a container gets :ref:`sharded` the root container will " "still be the primary entry point to many container requests, as it provides " "the list of shards. To take load off the root container Swift by default " "caches the list of shards returned." msgstr "" #: ../../source/deployment_guide.rst:500 msgid "" "As the number of shards for a root container grows to more than 3k the " "memcache default max size of 1MB can be reached." msgstr "" #: ../../source/deployment_guide.rst:503 msgid "" "If you over-run your max configured memcache size you'll see messages like::" msgstr "" #: ../../source/deployment_guide.rst:507 msgid "" "When you see these messages your root containers are getting hammered and " "probably returning 503 reponses to clients. Override the default 1MB limit " "to 5MB with something like::" msgstr "" #: ../../source/deployment_guide.rst:513 msgid "" "Memcache has a ``stats sizes`` option that can point out the current size " "usage. As this reaches the current max an increase might be in order::" msgstr "" #: ../../source/deployment_guide.rst:526 msgid "System Time" msgstr "" #: ../../source/deployment_guide.rst:528 msgid "" "Time may be relative but it is relatively important for Swift! Swift uses " "timestamps to determine which is the most recent version of an object. It is " "very important for the system time on each server in the cluster to by " "synced as closely as possible (more so for the proxy server, but in general " "it is a good idea for all the servers). Typical deployments use NTP with a " "local NTP server to ensure that the system times are as close as possible. " "This should also be monitored to ensure that the times do not vary too much." msgstr "" #: ../../source/deployment_guide.rst:540 msgid "General Service Tuning" msgstr "" #: ../../source/deployment_guide.rst:542 msgid "" "Most services support either a ``workers`` or ``concurrency`` value in the " "settings. This allows the services to make effective use of the cores " "available. A good starting point is to set the concurrency level for the " "proxy and storage services to 2 times the number of cores available. If more " "than one service is sharing a server, then some experimentation may be " "needed to find the best balance." msgstr "" #: ../../source/deployment_guide.rst:549 msgid "" "For example, one operator reported using the following settings in a " "production Swift cluster:" msgstr "" #: ../../source/deployment_guide.rst:552 msgid "" "Proxy servers have dual quad core processors (i.e. 8 cores); testing has " "shown 16 workers to be a pretty good balance when saturating a 10g network " "and gives good CPU utilization." msgstr "" #: ../../source/deployment_guide.rst:556 msgid "" "Storage server processes all run together on the same servers. These servers " "have dual quad core processors, for 8 cores total. The Account, Container, " "and Object servers are run with 8 workers each. Most of the background jobs " "are run at a concurrency of 1, with the exception of the replicators which " "are run at a concurrency of 2." msgstr "" #: ../../source/deployment_guide.rst:562 msgid "" "The ``max_clients`` parameter can be used to adjust the number of client " "requests an individual worker accepts for processing. The fewer requests " "being processed at one time, the less likely a request that consumes the " "worker's CPU time, or blocks in the OS, will negatively impact other " "requests. The more requests being processed at one time, the more likely one " "worker can utilize network and disk capacity." msgstr "" #: ../../source/deployment_guide.rst:569 msgid "" "On systems that have more cores, and more memory, where one can afford to " "run more workers, raising the number of workers and lowering the maximum " "number of clients serviced per worker can lessen the impact of CPU intensive " "or stalled requests." msgstr "" #: ../../source/deployment_guide.rst:574 msgid "" "The ``nice_priority`` parameter can be used to set program scheduling " "priority. The ``ionice_class`` and ``ionice_priority`` parameters can be " "used to set I/O scheduling class and priority on the systems that use an I/O " "scheduler that supports I/O priorities. As at kernel 2.6.17 the only such " "scheduler is the Completely Fair Queuing (CFQ) I/O scheduler. If you run " "your Storage servers all together on the same servers, you can slow down the " "auditors or prioritize object-server I/O via these parameters (but probably " "do not need to change it on the proxy). It is a new feature and the best " "practices are still being developed. On some systems it may be required to " "run the daemons as root. For more info also see setpriority(2) and " "ioprio_set(2)." msgstr "" #: ../../source/deployment_guide.rst:585 msgid "" "The above configuration setting should be taken as suggestions and testing " "of configuration settings should be done to ensure best utilization of CPU, " "network connectivity, and disk I/O." msgstr "" #: ../../source/deployment_guide.rst:591 msgid "Filesystem Considerations" msgstr "" #: ../../source/deployment_guide.rst:593 msgid "" "Swift is designed to be mostly filesystem agnostic--the only requirement " "being that the filesystem supports extended attributes (xattrs). After " "thorough testing with our use cases and hardware configurations, XFS was the " "best all-around choice. If you decide to use a filesystem other than XFS, we " "highly recommend thorough testing." msgstr "" #: ../../source/deployment_guide.rst:599 msgid "" "For distros with more recent kernels (for example Ubuntu 12.04 Precise), we " "recommend using the default settings (including the default inode size of " "256 bytes) when creating the file system::" msgstr "" #: ../../source/deployment_guide.rst:605 msgid "" "In the last couple of years, XFS has made great improvements in how inodes " "are allocated and used. Using the default inode size no longer has an " "impact on performance." msgstr "" #: ../../source/deployment_guide.rst:609 msgid "" "For distros with older kernels (for example Ubuntu 10.04 Lucid), some " "settings can dramatically impact performance. We recommend the following " "when creating the file system::" msgstr "" #: ../../source/deployment_guide.rst:615 msgid "" "Setting the inode size is important, as XFS stores xattr data in the inode. " "If the metadata is too large to fit in the inode, a new extent is created, " "which can cause quite a performance problem. Upping the inode size to 1024 " "bytes provides enough room to write the default metadata, plus a little " "headroom." msgstr "" #: ../../source/deployment_guide.rst:621 msgid "The following example mount options are recommended when using XFS::" msgstr "" #: ../../source/deployment_guide.rst:625 msgid "" "We do not recommend running Swift on RAID, but if you are using RAID it is " "also important to make sure that the proper sunit and swidth settings get " "set so that XFS can make most efficient use of the RAID array." msgstr "" #: ../../source/deployment_guide.rst:629 msgid "" "For a standard Swift install, all data drives are mounted directly under ``/" "srv/node`` (as can be seen in the above example of mounting label ``D1`` as " "``/srv/node/d1``). If you choose to mount the drives in another directory, " "be sure to set the ``devices`` config option in all of the server configs to " "point to the correct directory." msgstr "" #: ../../source/deployment_guide.rst:635 msgid "" "The mount points for each drive in ``/srv/node/`` should be owned by the " "root user almost exclusively (``root:root 755``). This is required to " "prevent rsync from syncing files into the root drive in the event a drive is " "unmounted." msgstr "" #: ../../source/deployment_guide.rst:639 msgid "" "Swift uses system calls to reserve space for new objects being written into " "the system. If your filesystem does not support ``fallocate()`` or " "``posix_fallocate()``, be sure to set the ``disable_fallocate = true`` " "config parameter in account, container, and object server configs." msgstr "" #: ../../source/deployment_guide.rst:644 msgid "" "Most current Linux distributions ship with a default installation of " "updatedb. This tool runs periodically and updates the file name database " "that is used by the GNU locate tool. However, including Swift object and " "container database files is most likely not required and the periodic update " "affects the performance quite a bit. To disable the inclusion of these files " "add the path where Swift stores its data to the setting PRUNEPATHS in ``/etc/" "updatedb.conf``::" msgstr "" #: ../../source/deployment_guide.rst:656 msgid "General System Tuning" msgstr "" #: ../../source/deployment_guide.rst:658 msgid "" "The following changes have been found to be useful when running Swift on " "Ubuntu Server 10.04." msgstr "" #: ../../source/deployment_guide.rst:661 msgid "The following settings should be in ``/etc/sysctl.conf``::" msgstr "" #: ../../source/deployment_guide.rst:673 msgid "To load the updated sysctl settings, run ``sudo sysctl -p``." msgstr "" #: ../../source/deployment_guide.rst:675 msgid "" "A note about changing the TIME_WAIT values. By default the OS will hold a " "port open for 60 seconds to ensure that any remaining packets can be " "received. During high usage, and with the number of connections that are " "created, it is easy to run out of ports. We can change this since we are in " "control of the network. If you are not in control of the network, or do not " "expect high loads, then you may not want to adjust those values." msgstr "" #: ../../source/deployment_guide.rst:684 msgid "Logging Considerations" msgstr "" #: ../../source/deployment_guide.rst:686 msgid "" "Swift is set up to log directly to syslog. Every service can be configured " "with the ``log_facility`` option to set the syslog log facility destination. " "We recommended using syslog-ng to route the logs to specific log files " "locally on the server and also to remote log collecting servers. " "Additionally, custom log handlers can be used via the custom_log_handlers " "setting." msgstr "" #: ../../source/development_auth.rst:3 msgid "Auth Server and Middleware" msgstr "" #: ../../source/development_auth.rst:7 msgid "Creating Your Own Auth Server and Middleware" msgstr "" #: ../../source/development_auth.rst:9 msgid "" "The included swift/common/middleware/tempauth.py is a good example of how to " "create an auth subsystem with proxy server auth middleware. The main points " "are that the auth middleware can reject requests up front, before they ever " "get to the Swift Proxy application, and afterwards when the proxy issues " "callbacks to verify authorization." msgstr "" #: ../../source/development_auth.rst:15 msgid "" "It's generally good to separate the authentication and authorization " "procedures. Authentication verifies that a request actually comes from who " "it says it does. Authorization verifies the 'who' has access to the " "resource(s) the request wants." msgstr "" #: ../../source/development_auth.rst:20 msgid "" "Authentication is performed on the request before it ever gets to the Swift " "Proxy application. The identity information is gleaned from the request, " "validated in some way, and the validation information is added to the WSGI " "environment as needed by the future authorization procedure. What exactly is " "added to the WSGI environment is solely dependent on what the installed " "authorization procedures need; the Swift Proxy application itself needs no " "specific information, it just passes it along. Convention has " "environ['REMOTE_USER'] set to the authenticated user string but often more " "information is needed than just that." msgstr "" #: ../../source/development_auth.rst:30 msgid "" "The included TempAuth will set the REMOTE_USER to a comma separated list of " "groups the user belongs to. The first group will be the \"user's group\", a " "group that only the user belongs to. The second group will be the " "\"account's group\", a group that includes all users for that auth account " "(different than the storage account). The third group is optional and is the " "storage account string. If the user does not have admin access to the " "account, the third group will be omitted." msgstr "" #: ../../source/development_auth.rst:38 msgid "" "It is highly recommended that authentication server implementers prefix " "their tokens and Swift storage accounts they create with a configurable " "reseller prefix (``AUTH_`` by default with the included TempAuth). This " "prefix will avoid conflicts with other authentication servers that might be " "using the same Swift cluster. Otherwise, the Swift cluster will have to try " "all the resellers until one validates a token or all fail." msgstr "" #: ../../source/development_auth.rst:45 msgid "" "A restriction with group names is that no group name should begin with a " "period '.' as that is reserved for internal Swift use (such as the .r for " "referrer designations as you'll see later)." msgstr "" #: ../../source/development_auth.rst:49 msgid "Example Authentication with TempAuth:" msgstr "" #: ../../source/development_auth.rst:51 msgid "" "Token AUTH_tkabcd is given to the TempAuth middleware in a request's X-Auth-" "Token header." msgstr "" #: ../../source/development_auth.rst:53 msgid "" "The TempAuth middleware validates the token AUTH_tkabcd and discovers it " "matches the \"tester\" user within the \"test\" account for the storage " "account \"AUTH_storage_xyz\"." msgstr "" #: ../../source/development_auth.rst:56 msgid "" "The TempAuth middleware sets the REMOTE_USER to \"test:tester,test," "AUTH_storage_xyz\"" msgstr "" #: ../../source/development_auth.rst:58 msgid "" "Now this user will have full access (via authorization procedures later) to " "the AUTH_storage_xyz Swift storage account and access to containers in other " "storage accounts, provided the storage account begins with the same " "``AUTH_`` reseller prefix and the container has an ACL specifying at least " "one of those three groups." msgstr "" #: ../../source/development_auth.rst:64 msgid "" "Authorization is performed through callbacks by the Swift Proxy server to " "the WSGI environment's swift.authorize value, if one is set. The swift." "authorize value should simply be a function that takes a Request as an " "argument and returns None if access is granted or returns a " "callable(environ, start_response) if access is denied. This callable is a " "standard WSGI callable. Generally, you should return 403 Forbidden for " "requests by an authenticated user and 401 Unauthorized for an " "unauthenticated request. For example, here's an authorize function that only " "allows GETs (in this case you'd probably return 405 Method Not Allowed, but " "ignore that for the moment).::" msgstr "" #: ../../source/development_auth.rst:85 msgid "" "Adding the swift.authorize callback is often done by the authentication " "middleware as authentication and authorization are often paired together. " "But, you could create separate authorization middleware that simply sets the " "callback before passing on the request. To continue our example above::" msgstr "" #: ../../source/development_auth.rst:119 msgid "" "The Swift Proxy server will call swift.authorize after some initial work, " "but before truly trying to process the request. Positive authorization at " "this point will cause the request to be fully processed immediately. A " "denial at this point will immediately send the denial response for most " "operations." msgstr "" #: ../../source/development_auth.rst:124 msgid "" "But for some operations that might be approved with more information, the " "additional information will be gathered and added to the WSGI environment " "and then swift.authorize will be called once more. These are called " "delay_denial requests and currently include container read requests and " "object read and write requests. For these requests, the read or write access " "control string (X-Container-Read and X-Container-Write) will be fetched and " "set as the 'acl' attribute in the Request passed to swift.authorize." msgstr "" #: ../../source/development_auth.rst:132 msgid "" "The delay_denial procedures allow skipping possibly expensive access control " "string retrievals for requests that can be approved without that " "information, such as administrator or account owner requests." msgstr "" #: ../../source/development_auth.rst:136 msgid "" "To further our example, we now will approve all requests that have the " "access control string set to same value as the authenticated user string. " "Note that you probably wouldn't do this exactly as the access control string " "represents a list rather than a single user, but it'll suffice for this " "example::" msgstr "" #: ../../source/development_auth.rst:174 msgid "" "The access control string has a standard format included with Swift, though " "this can be overridden if desired. The standard format can be parsed with " "swift.common.middleware.acl.parse_acl which converts the string into two " "arrays of strings: (referrers, groups). The referrers allow comparing the " "request's Referer header to control access. The groups allow comparing the " "request.remote_user (or other sources of group information) to control " "access. Checking referrer access can be accomplished by using the swift." "common.middleware.acl.referrer_allowed function. Checking group access is " "usually a simple string comparison." msgstr "" #: ../../source/development_auth.rst:184 msgid "" "Let's continue our example to use parse_acl and referrer_allowed. Now we'll " "only allow GETs after a referrer check and any requests after a group check::" msgstr "" #: ../../source/development_auth.rst:221 msgid "" "The access control strings are set with PUTs and POSTs to containers with " "the X-Container-Read and X-Container-Write headers. Swift allows these " "strings to be set to any value, though it's very useful to validate that the " "strings meet the desired format and return a useful error to the user if " "they don't." msgstr "" #: ../../source/development_auth.rst:227 msgid "" "To support this validation, the Swift Proxy application will call the WSGI " "environment's swift.clean_acl callback whenever one of these headers is to " "be written. The callback should take a header name and value as its " "arguments. It should return the cleaned value to save if valid or raise a " "ValueError with a reasonable error message if not." msgstr "" #: ../../source/development_auth.rst:233 msgid "" "There is an included swift.common.middleware.acl.clean_acl that validates " "the standard Swift format. Let's improve our example by making use of that::" msgstr "" #: ../../source/development_auth.rst:272 msgid "" "Now, if you want to override the format for access control strings you'll " "have to provide your own clean_acl function and you'll have to do your own " "parsing and authorization checking for that format. It's highly recommended " "you use the standard format simply to support the widest range of external " "tools, but sometimes that's less important than meeting certain ACL " "requirements." msgstr "" #: ../../source/development_auth.rst:281 msgid "Integrating With repoze.what" msgstr "" #: ../../source/development_auth.rst:283 msgid "" "Here's an example of integration with repoze.what, though honestly I'm no " "repoze.what expert by any stretch; this is just included here to hopefully " "give folks a start on their own code if they want to use repoze.what::" msgstr "" #: ../../source/development_auth.rst:488 msgid "Allowing CORS with Auth" msgstr "" #: ../../source/development_auth.rst:490 msgid "" "Cross Origin Resource Sharing (CORS) require that the auth system allow the " "OPTIONS method to pass through without a token. The preflight request will " "make an OPTIONS call against the object or container and will not work if " "the auth system stops it. See TempAuth for an example of how OPTIONS " "requests are handled." msgstr "" #: ../../source/development_guidelines.rst:3 msgid "Development Guidelines" msgstr "" #: ../../source/development_guidelines.rst:7 msgid "Coding Guidelines" msgstr "" #: ../../source/development_guidelines.rst:9 msgid "" "For the most part we try to follow PEP 8 guidelines which can be viewed " "here: http://www.python.org/dev/peps/pep-0008/" msgstr "" #: ../../source/development_guidelines.rst:14 msgid "Testing Guidelines" msgstr "" #: ../../source/development_guidelines.rst:16 msgid "" "Swift has a comprehensive suite of tests and pep8 checks that are run on all " "submitted code, and it is recommended that developers execute the tests " "themselves to catch regressions early. Developers are also expected to keep " "the test suite up-to-date with any submitted code changes." msgstr "" #: ../../source/development_guidelines.rst:21 msgid "" "Swift's tests and pep8 checks can be executed in an isolated environment " "with ``tox``: http://tox.testrun.org/" msgstr "" #: ../../source/development_guidelines.rst:24 msgid "To execute the tests:" msgstr "" #: ../../source/development_guidelines.rst:26 msgid "" "Ensure ``pip`` and ``virtualenv`` are upgraded to satisfy the version " "requirements listed in the OpenStack `global requirements`_::" msgstr "" #: ../../source/development_guidelines.rst:34 msgid "Install ``tox``::" msgstr "" #: ../../source/development_guidelines.rst:38 msgid "Generate list of distribution packages to install for testing::" msgstr "" #: ../../source/development_guidelines.rst:42 msgid "" "Now install these packages using your distribution package manager like apt-" "get, dnf, yum, or zypper." msgstr "" #: ../../source/development_guidelines.rst:45 msgid "Run ``tox`` from the root of the swift repo::" msgstr "" #: ../../source/development_guidelines.rst:49 msgid "To run a selected subset of unit tests with ``pytest``:" msgstr "" #: ../../source/development_guidelines.rst:51 msgid "Create a virtual environment with ``tox``::" msgstr "" #: ../../source/development_guidelines.rst:56 msgid "" "Alternatively, here are the steps of manual preparation of the virtual " "environment::" msgstr "" #: ../../source/development_guidelines.rst:64 msgid "Activate the virtual environment::" msgstr "" #: ../../source/development_guidelines.rst:68 msgid "Run some unit tests, for example::" msgstr "" #: ../../source/development_guidelines.rst:72 msgid "Run all unit tests::" msgstr "" #: ../../source/development_guidelines.rst:77 msgid "" "If you installed using ``cd ~/swift; sudo python setup.py develop``, you may " "need to do ``cd ~/swift; sudo chown -R ${USER}:${USER} swift.egg-info`` " "prior to running ``tox``." msgstr "" #: ../../source/development_guidelines.rst:81 msgid "" "By default ``tox`` will run **all of the unit test** and pep8 checks listed " "in the ``tox.ini`` file ``envlist`` option. A subset of the test " "environments can be specified on the ``tox`` command line or by setting the " "``TOXENV`` environment variable. For example, to run only the pep8 checks " "and python3 unit tests use::" msgstr "" #: ../../source/development_guidelines.rst:89 msgid "or::" msgstr "" #: ../../source/development_guidelines.rst:93 msgid "To run unit tests with python3.12 specifically::" msgstr "" #: ../../source/development_guidelines.rst:98 msgid "" "As of ``tox`` version 2.0.0, most environment variables are not " "automatically passed to the test environment. Swift's ``tox.ini`` overrides " "this default behavior so that variable names matching ``SWIFT_*`` and " "``*_proxy`` will be passed, but you may need to run ``tox --recreate`` for " "this to take effect after upgrading from ``tox`` <2.0.0." msgstr "" #: ../../source/development_guidelines.rst:104 msgid "" "Conversely, if you do not want those environment variables to be passed to " "the test environment then you will need to unset them before calling ``tox``." msgstr "" #: ../../source/development_guidelines.rst:107 msgid "" "Also, if you ever encounter DistributionNotFound, try to use ``tox --" "recreate`` or remove the ``.tox`` directory to force ``tox`` to recreate the " "dependency list." msgstr "" #: ../../source/development_guidelines.rst:111 msgid "" "Swift's tests require having an XFS directory available in ``/tmp`` or in " "the ``TMPDIR`` environment variable." msgstr "" #: ../../source/development_guidelines.rst:114 msgid "" "Swift's functional tests may be executed against a :doc:`development_saio` " "or other running Swift cluster using the command::" msgstr "" #: ../../source/development_guidelines.rst:119 msgid "" "The endpoint and authorization credentials to be used by functional tests " "should be configured in the ``test.conf`` file as described in the section :" "ref:`setup_scripts`." msgstr "" #: ../../source/development_guidelines.rst:123 msgid "" "The environment variable ``SWIFT_TEST_POLICY`` may be set to specify a " "particular storage policy *name* that will be used for testing. When set, " "tests that would otherwise not specify a policy or choose a random policy " "from those available will instead use the policy specified. Tests that use " "more than one policy will include the specified policy in the set of " "policies used. The specified policy must be available on the cluster under " "test." msgstr "" #: ../../source/development_guidelines.rst:130 msgid "" "For example, this command would run the functional tests using policy " "'silver'::" msgstr "" #: ../../source/development_guidelines.rst:135 msgid "" "To run a single functional test, use the ``--no-discover`` option together " "with a path to a specific test method, for example::" msgstr "" #: ../../source/development_guidelines.rst:142 msgid "In-process functional testing" msgstr "" #: ../../source/development_guidelines.rst:144 msgid "" "If the ``test.conf`` file is not found then the functional test framework " "will instantiate a set of Swift servers in the same process that executes " "the functional tests. This 'in-process test' mode may also be enabled (or " "disabled) by setting the environment variable ``SWIFT_TEST_IN_PROCESS`` to a " "true (or false) value prior to executing ``tox -e func``." msgstr "" #: ../../source/development_guidelines.rst:150 msgid "" "When using the 'in-process test' mode some server configuration options may " "be set using environment variables:" msgstr "" #: ../../source/development_guidelines.rst:153 msgid "" "the optional in-memory object server may be selected by setting the " "environment variable ``SWIFT_TEST_IN_MEMORY_OBJ`` to a true value." msgstr "" #: ../../source/development_guidelines.rst:156 msgid "" "encryption may be added to the proxy pipeline by setting the environment " "variable ``SWIFT_TEST_IN_PROCESS_CONF_LOADER`` to ``encryption``." msgstr "" #: ../../source/development_guidelines.rst:160 msgid "" "a 2+1 EC policy may be installed as the default policy by setting the " "environment variable ``SWIFT_TEST_IN_PROCESS_CONF_LOADER`` to ``ec``." msgstr "" #: ../../source/development_guidelines.rst:164 msgid "logging to stdout may be enabled by setting ``SWIFT_TEST_DEBUG_LOGS``." msgstr "" #: ../../source/development_guidelines.rst:166 msgid "" "For example, this command would run the in-process mode functional tests " "with encryption enabled in the proxy-server::" msgstr "" #: ../../source/development_guidelines.rst:172 msgid "" "This particular example may also be run using the ``func-encryption`` tox " "environment::" msgstr "" #: ../../source/development_guidelines.rst:177 msgid "" "The ``tox.ini`` file also specifies test environments for running other in-" "process functional test configurations, e.g.::" msgstr "" #: ../../source/development_guidelines.rst:182 msgid "" "To debug the functional tests, use the 'in-process test' mode and pass the " "``--pdb`` flag to ``tox``::" msgstr "" #: ../../source/development_guidelines.rst:188 msgid "" "The 'in-process test' mode searches for ``proxy-server.conf`` and ``swift." "conf`` config files from which it copies config options and overrides some " "options to suit in process testing. The search will first look for config " "files in a ```` that may optionally be specified " "using the environment variable::" msgstr "" #: ../../source/development_guidelines.rst:196 msgid "" "If ``SWIFT_TEST_IN_PROCESS_CONF_DIR`` is not set, or if a config file is not " "found in ````, the search will then look in the " "``etc/`` directory in the source tree. If the config file is still not " "found, the corresponding sample config file from ``etc/`` is used (e.g. " "``proxy-server.conf-sample`` or ``swift.conf-sample``)." msgstr "" #: ../../source/development_guidelines.rst:202 msgid "" "When using the 'in-process test' mode ``SWIFT_TEST_POLICY`` may be set to " "specify a particular storage policy *name* that will be used for testing as " "described above. When set, this policy must exist in the ``swift.conf`` file " "and its corresponding ring file must exist in ```` " "(if specified) or ``etc/``. The test setup will set the specified policy to " "be the default and use its ring file properties for constructing the test " "object ring. This allows in-process testing to be run against various policy " "types and ring files." msgstr "" #: ../../source/development_guidelines.rst:211 msgid "" "For example, this command would run the in-process mode functional tests " "using config files found in ``$HOME/my_tests`` and policy 'silver'::" msgstr "" #: ../../source/development_guidelines.rst:219 msgid "S3 API cross-compatibility tests" msgstr "" #: ../../source/development_guidelines.rst:221 msgid "" "The cross-compatibility tests in directory `test/s3api` are intended to " "verify that the Swift S3 API behaves in the same way as the AWS S3 API. They " "should pass when run against either a Swift endpoint (with S3 API enabled) " "or an AWS S3 endpoint." msgstr "" #: ../../source/development_guidelines.rst:226 msgid "" "To run against an AWS S3 endpoint, the `/etc/swift/test.conf` file must be " "edited to provide AWS key IDs and secrets. Alternatively, an AWS CLI style " "credentials file can be loaded by setting the ``SWIFT_TEST_AWS_CONFIG_FILE`` " "environment variable, e.g.::" msgstr "" #: ../../source/development_guidelines.rst:234 msgid "" "When using ``SWIFT_TEST_AWS_CONFIG_FILE``, the region defaults to ``us-" "east-1`` and only the default credentials are loaded." msgstr "" #: ../../source/development_guidelines.rst:240 msgid "Coding Style" msgstr "" #: ../../source/development_guidelines.rst:242 msgid "" "Swift uses flake8 with the OpenStack `hacking`_ module to enforce coding " "style." msgstr "" #: ../../source/development_guidelines.rst:245 msgid "" "Install flake8 and hacking with pip or by the packages of your Operating " "System." msgstr "" #: ../../source/development_guidelines.rst:248 msgid "" "It is advised to integrate flake8+hacking with your editor to get it " "automated and not get `caught` by Jenkins." msgstr "" #: ../../source/development_guidelines.rst:251 msgid "For example for Vim the `syntastic`_ plugin can do this for you." msgstr "" #: ../../source/development_guidelines.rst:258 msgid "Documentation Guidelines" msgstr "" #: ../../source/development_guidelines.rst:260 msgid "" "The documentation in docstrings should follow the PEP 257 conventions (as " "mentioned in the PEP 8 guidelines)." msgstr "" #: ../../source/development_guidelines.rst:263 msgid "More specifically:" msgstr "" #: ../../source/development_guidelines.rst:265 msgid "Triple quotes should be used for all docstrings." msgstr "" #: ../../source/development_guidelines.rst:266 msgid "" "If the docstring is simple and fits on one line, then just use one line." msgstr "" #: ../../source/development_guidelines.rst:268 msgid "" "For docstrings that take multiple lines, there should be a newline after the " "opening quotes, and before the closing quotes." msgstr "" #: ../../source/development_guidelines.rst:270 msgid "" "Sphinx is used to build documentation, so use the restructured text markup " "to designate parameters, return values, etc. Documentation on the sphinx " "specific markup can be found here: https://www.sphinx-doc.org/en/master/" msgstr "" #: ../../source/development_guidelines.rst:275 msgid "To build documentation run::" msgstr "" #: ../../source/development_guidelines.rst:280 msgid "" "and then browse to doc/build/html/index.html. These docs are auto-generated " "after every commit and available online at https://docs.openstack.org/swift/" "latest/." msgstr "" #: ../../source/development_guidelines.rst:286 msgid "Manpages" msgstr "" #: ../../source/development_guidelines.rst:288 msgid "" "For sanity check of your change in manpage, use this command in the root of " "your Swift repo::" msgstr "" #: ../../source/development_guidelines.rst:295 msgid "License and Copyright" msgstr "" #: ../../source/development_guidelines.rst:297 msgid "" "You can have the following copyright and license statement at the top of " "each source file. Copyright assignment is optional." msgstr "" #: ../../source/development_guidelines.rst:300 msgid "" "New files should contain the current year. Substantial updates can have " "another year added, and date ranges are not needed.::" msgstr "" #: ../../source/development_middleware.rst:3 msgid "Middleware and Metadata" msgstr "" #: ../../source/development_middleware.rst:7 msgid "Using Middleware" msgstr "" #: ../../source/development_middleware.rst:9 msgid "" "`Python WSGI Middleware`_ (or just \"middleware\") can be used to \"wrap\" " "the request and response of a Python WSGI application (i.e. a webapp, or " "REST/HTTP API), like Swift's WSGI servers (proxy-server, account-server, " "container-server, object-server). Swift uses middleware to add (sometimes " "optional) behaviors to the Swift WSGI servers." msgstr "" #: ../../source/development_middleware.rst:17 msgid "" "Middleware can be added to the Swift WSGI servers by modifying their " "`paste`_ configuration file. The majority of Swift middleware is applied to " "the :ref:`proxy-server`." msgstr "" #: ../../source/development_middleware.rst:23 msgid "Given the following basic configuration::" msgstr "" #: ../../source/development_middleware.rst:35 msgid "" "You could add the :ref:`healthcheck` middleware by adding a section for that " "filter and adding it to the pipeline::" msgstr "" #: ../../source/development_middleware.rst:52 msgid "" "Some middleware is required and will be inserted into your pipeline " "automatically by core swift code (e.g. the proxy-server will insert :ref:" "`catch_errors` and :ref:`gatekeeper` at the start of the pipeline if they " "are not already present). You can see which features are available on a " "given Swift endpoint (including middleware) using the :ref:`discoverability` " "interface." msgstr "" #: ../../source/development_middleware.rst:62 msgid "Creating Your Own Middleware" msgstr "" #: ../../source/development_middleware.rst:64 msgid "The best way to see how to write middleware is to look at examples." msgstr "" #: ../../source/development_middleware.rst:66 msgid "" "Many optional features in Swift are implemented as :ref:`common_middleware` " "and provided in ``swift.common.middleware``, but Swift middleware may be " "packaged and distributed as a separate project. Some examples are listed on " "the :ref:`associated_projects` page." msgstr "" #: ../../source/development_middleware.rst:71 msgid "" "A contrived middleware example that modifies request behavior by inspecting " "custom HTTP headers (e.g. X-Webhook) and uses :ref:`sysmeta` to persist data " "to backend storage as well as common patterns like a :func:`." "get_container_info` cache/query and :func:`.wsgify` decorator is presented " "below::" msgstr "" #: ../../source/development_middleware.rst:143 msgid "" "In practice this middleware will call the URL stored on the container as X-" "Webhook on all successful object uploads." msgstr "" #: ../../source/development_middleware.rst:146 msgid "" "If this example was at ``/swift/common/middleware/webhook.py`` - " "you could add it to your proxy by creating a new filter section and adding " "it to the pipeline::" msgstr "" #: ../../source/development_middleware.rst:166 msgid "" "Most python packages expose middleware as entrypoints. See `PasteDeploy`_ " "documentation for more information about the syntax of the ``use`` option. " "All middleware included with Swift is installed to support the ``egg:swift`` " "syntax." msgstr "" #: ../../source/development_middleware.rst:173 msgid "" "Middleware may advertize its availability and capabilities via Swift's :ref:" "`discoverability` support by using :func:`.register_swift_info`::" msgstr "" #: ../../source/development_middleware.rst:184 msgid "" "If a middleware handles sensitive information in headers or query parameters " "that may need redaction when logging, use the :func:`." "register_sensitive_header` and :func:`.register_sensitive_param` functions. " "This should be done in the filter factory::" msgstr "" #: ../../source/development_middleware.rst:197 msgid "" "Middlewares can override the status integer that is logged by proxy_logging " "middleware by setting ``swift.proxy_logging_status`` in the request WSGI " "environment. The value should be an integer. The value will replace the " "default status integer in the log message, unless the proxy_logging " "middleware detects a client disconnect or exception while handling the " "request, in which case ``swift.proxy_logging_status`` is overridden by a 499 " "or 500 respectively." msgstr "" #: ../../source/development_middleware.rst:206 msgid "Swift Metadata" msgstr "" #: ../../source/development_middleware.rst:208 msgid "" "Generally speaking metadata is information about a resource that is " "associated with the resource but is not the data contained in the resource " "itself - which is set and retrieved via HTTP headers. (e.g. the \"Content-" "Type\" of a Swift object that is returned in HTTP response headers)" msgstr "" #: ../../source/development_middleware.rst:214 msgid "" "All user resources in Swift (i.e. account, container, objects) can have user " "metadata associated with them. Middleware may also persist custom metadata " "to accounts and containers safely using System Metadata. Some core Swift " "features which predate sysmeta have added exceptions for custom non-user " "metadata headers (e.g. :ref:`acls`, :ref:`large-objects`)" msgstr "" #: ../../source/development_middleware.rst:225 msgid "User Metadata" msgstr "" #: ../../source/development_middleware.rst:227 msgid "" "User metadata takes the form of ``X--Meta-: ``, where " "```` depends on the resources type (i.e. Account, Container, Object) " "and ```` and ```` are set by the client." msgstr "" #: ../../source/development_middleware.rst:231 msgid "" "User metadata should generally be reserved for use by the client or client " "applications. A perfect example use-case for user metadata is `python-" "swiftclient`_'s ``X-Object-Meta-Mtime`` which it stores on object it uploads " "to implement its ``--changed`` option which will only upload files that have " "changed since the last upload." msgstr "" #: ../../source/development_middleware.rst:239 msgid "" "New middleware should avoid storing metadata within the User Metadata " "namespace to avoid potential conflict with existing user metadata when " "introducing new metadata keys. An example of legacy middleware that borrows " "the user metadata namespace is :ref:`tempurl`. An example of middleware " "which uses custom non-user metadata to avoid the user metadata namespace is :" "ref:`slo-doc`." msgstr "" #: ../../source/development_middleware.rst:246 msgid "" "User metadata that is stored by a PUT or POST request to a container or " "account resource persists until it is explicitly removed by a subsequent PUT " "or POST request that includes a header ``X--Meta-`` with no value " "or a header ``X-Remove--Meta-: ``. In the latter " "case the ```` is not stored. All user metadata stored with an " "account or container resource is deleted when the account or container is " "deleted." msgstr "" #: ../../source/development_middleware.rst:253 msgid "" "User metadata that is stored with an object resource has a different " "semantic; object user metadata persists until any subsequent PUT or POST " "request is made to the same object, at which point all user metadata stored " "with that object is deleted en-masse and replaced with any user metadata " "included with the PUT or POST request. As a result, it is not possible to " "update a subset of the user metadata items stored with an object while " "leaving some items unchanged." msgstr "" #: ../../source/development_middleware.rst:264 msgid "System Metadata (Sysmeta)" msgstr "" #: ../../source/development_middleware.rst:266 msgid "" "System metadata takes the form of ``X--Sysmeta-: ``, where " "```` depends on the resources type (i.e. Account, Container, Object) " "and ```` and ```` are set by trusted code running in a Swift " "WSGI Server." msgstr "" #: ../../source/development_middleware.rst:271 msgid "" "All headers on client requests in the form of ``X--Sysmeta-`` " "will be dropped from the request before being processed by any middleware. " "All headers on responses from back-end systems in the form of ``X--" "Sysmeta-`` will be removed after all middlewares have processed the " "response but before the response is sent to the client. See :ref:" "`gatekeeper` middleware for more information." msgstr "" #: ../../source/development_middleware.rst:278 msgid "" "System metadata provides a means to store potentially private custom " "metadata with associated Swift resources in a safe and secure fashion " "without actually having to plumb custom metadata through the core swift " "servers. The incoming filtering ensures that the namespace can not be " "modified directly by client requests, and the outgoing filter ensures that " "removing middleware that uses a specific system metadata key renders it " "benign. New middleware should take advantage of system metadata." msgstr "" #: ../../source/development_middleware.rst:287 msgid "" "System metadata may be set on accounts and containers by including headers " "with a PUT or POST request. Where a header name matches the name of an " "existing item of system metadata, the value of the existing item will be " "updated. Otherwise existing items are preserved. A system metadata header " "with an empty value will cause any existing item with the same name to be " "deleted." msgstr "" #: ../../source/development_middleware.rst:293 msgid "" "System metadata may be set on objects using only PUT requests. All items of " "existing system metadata will be deleted and replaced en-masse by any system " "metadata headers included with the PUT request. System metadata is neither " "updated nor deleted by a POST request: updating individual items of system " "metadata with a POST request is not yet supported in the same way that " "updating individual items of user metadata is not supported. In cases where " "middleware needs to store its own metadata with a POST request, it may use " "Object Transient Sysmeta." msgstr "" #: ../../source/development_middleware.rst:305 msgid "Object Metadata" msgstr "" #: ../../source/development_middleware.rst:307 msgid "" "Objects have other metadata in addition to the user metadata and system " "metadata described above." msgstr "" #: ../../source/development_middleware.rst:312 msgid "Immutable Metadata" msgstr "" #: ../../source/development_middleware.rst:314 msgid "" "Objects have several items of immutable metadata. Like system metadata, " "these may only be set using PUT requests. However, they do not follow the " "general ``X-Object-Sysmeta-`` naming scheme and they are not " "automatically removed from client responses." msgstr "" #: ../../source/development_middleware.rst:319 msgid "Object immutable metadata includes::" msgstr "" #: ../../source/development_middleware.rst:325 msgid "" "``X-Timestamp`` and ``Content-Length`` metadata MUST be included in PUT " "requests to object servers. ``Etag`` metadata is generated by object servers " "when they handle a PUT request, but checked against any ``Etag`` header sent " "with the PUT request." msgstr "" #: ../../source/development_middleware.rst:330 msgid "" "Object immutable metadata, along with ``Content-Type``, is the only object " "metadata that is stored by container servers and returned in object listings." msgstr "" #: ../../source/development_middleware.rst:335 msgid "Content-Type" msgstr "" #: ../../source/development_middleware.rst:337 msgid "" "Object ``Content-Type`` metadata is treated differently from immutable " "metadata, system metadata and user metadata." msgstr "" #: ../../source/development_middleware.rst:340 msgid "" "``Content-Type`` MUST be included in PUT requests to object servers. Unlike " "immutable metadata or system metadata, ``Content-Type`` is mutable and may " "be included in POST requests to object servers. However, unlike object user " "metadata, existing ``Content-Type`` metadata persists if a POST request does " "not include new ``Content-Type`` metadata. This is because an object must " "have ``Content-Type`` metadata, which is also stored by container servers " "and returned in object listings." msgstr "" #: ../../source/development_middleware.rst:348 msgid "" "``Content-Type`` is the only item of object metadata that is both mutable " "and yet also persists when not specified in a POST request." msgstr "" #: ../../source/development_middleware.rst:355 msgid "Object Transient-Sysmeta" msgstr "" #: ../../source/development_middleware.rst:357 msgid "" "If middleware needs to store object metadata with a POST request it may do " "so using headers of the form ``X-Object-Transient-Sysmeta-: ``." msgstr "" #: ../../source/development_middleware.rst:360 msgid "" "All headers on client requests in the form of ``X-Object-Transient-Sysmeta-" "`` will be dropped from the request before being processed by any " "middleware. All headers on responses from back-end systems in the form of " "``X-Object-Transient-Sysmeta-`` will be removed after all middlewares " "have processed the response but before the response is sent to the client. " "See :ref:`gatekeeper` middleware for more information." msgstr "" #: ../../source/development_middleware.rst:367 msgid "" "Transient-sysmeta updates on an object have the same semantic as user " "metadata updates on an object (see :ref:`usermeta`) i.e. whenever any PUT or " "POST request is made to an object, all existing items of transient-sysmeta " "are deleted en-masse and replaced with any transient-sysmeta included with " "the PUT or POST request. Transient-sysmeta set by a middleware is therefore " "prone to deletion by a subsequent client-generated POST request unless the " "middleware is careful to include its transient-sysmeta with every POST. " "Likewise, user metadata set by a client is prone to deletion by a subsequent " "middleware-generated POST request, and for that reason middleware should " "avoid generating POST requests that are independent of any client request." msgstr "" #: ../../source/development_middleware.rst:378 msgid "" "Transient-sysmeta deliberately uses a different header prefix to user " "metadata so that middlewares can avoid potential conflict with user metadata " "keys." msgstr "" #: ../../source/development_middleware.rst:381 msgid "" "Transient-sysmeta deliberately uses a different header prefix to system " "metadata to emphasize the fact that the data is only persisted until a " "subsequent POST." msgstr "" #: ../../source/development_ondisk_backends.rst:3 msgid "Pluggable On-Disk Back-end APIs" msgstr "" #: ../../source/development_ondisk_backends.rst:5 msgid "" "The internal REST API used between the proxy server and the account, " "container and object server is almost identical to public Swift REST API, " "but with a few internal extensions (for example, update an account with a " "new container)." msgstr "" #: ../../source/development_ondisk_backends.rst:9 msgid "" "The pluggable back-end APIs for the three REST API servers (account, " "container, object) abstracts the needs for servicing the various REST APIs " "from the details of how data is laid out and stored on-disk." msgstr "" #: ../../source/development_ondisk_backends.rst:13 msgid "" "The APIs are documented in the reference implementations for all three " "servers. For historical reasons, the object server backend reference " "implementation module is named ``diskfile``, while the account and container " "server backend reference implementation modules are named appropriately." msgstr "" #: ../../source/development_ondisk_backends.rst:18 msgid "This API is still under development and not yet finalized." msgstr "" #: ../../source/development_ondisk_backends.rst:22 msgid "Back-end API for Account Server REST APIs" msgstr "" #: ../../source/development_ondisk_backends.rst:29 msgid "Back-end API for Container Server REST APIs" msgstr "" #: ../../source/development_ondisk_backends.rst:36 msgid "Back-end API for Object Server REST APIs" msgstr "" #: ../../source/development_saio.rst:5 msgid "SAIO (Swift All In One)" msgstr "" #: ../../source/development_saio.rst:8 msgid "" "This guide assumes an existing Linux server. A physical machine or VM will " "work. We recommend configuring it with at least 2GB of memory and 40GB of " "storage space. We recommend using a VM in order to isolate Swift and its " "dependencies from other projects you may be working on." msgstr "" #: ../../source/development_saio.rst:15 msgid "Instructions for setting up a development VM" msgstr "" #: ../../source/development_saio.rst:17 msgid "" "This section documents setting up a virtual machine for doing Swift " "development. The virtual machine will emulate running a four node Swift " "cluster. To begin:" msgstr "" #: ../../source/development_saio.rst:21 msgid "Get a Linux system server image, this guide will cover:" msgstr "" #: ../../source/development_saio.rst:23 msgid "Ubuntu 24.04 LTS" msgstr "" #: ../../source/development_saio.rst:24 msgid "CentOS Stream 9" msgstr "" #: ../../source/development_saio.rst:25 msgid "Fedora" msgstr "" #: ../../source/development_saio.rst:26 msgid "OpenSuse" msgstr "" #: ../../source/development_saio.rst:28 msgid "Create guest virtual machine from the image." msgstr "" #: ../../source/development_saio.rst:32 msgid "What's in a " msgstr "" #: ../../source/development_saio.rst:34 msgid "" "Much of the configuration described in this guide requires escalated " "administrator (``root``) privileges; however, we assume that administrator " "logs in as an unprivileged user and can use ``sudo`` to run privileged " "commands." msgstr "" #: ../../source/development_saio.rst:38 msgid "" "Swift processes also run under a separate user and group, set by " "configuration option, and referenced as ``:``. The default user is ``swift``, which may not exist on your " "system. These instructions are intended to allow a developer to use his/her " "username for ``:``." msgstr "" #: ../../source/development_saio.rst:45 msgid "" "For OpenSuse users, a user's primary group is ``users``, so you have 2 " "options:" msgstr "" #: ../../source/development_saio.rst:47 msgid "" "Change ``${USER}:${USER}`` to ``${USER}:users`` in all references of this " "guide; or" msgstr "" #: ../../source/development_saio.rst:48 msgid "Create a group for your username and add yourself to it::" msgstr "" #: ../../source/development_saio.rst:54 msgid "Installing dependencies" msgstr "" #: ../../source/development_saio.rst:56 msgid "On ``apt`` based systems::" msgstr "" #: ../../source/development_saio.rst:67 msgid "On ``CentOS`` (requires additional repositories)::" msgstr "" #: ../../source/development_saio.rst:81 msgid "On ``Fedora``::" msgstr "" #: ../../source/development_saio.rst:92 msgid "On ``OpenSuse``::" msgstr "" #: ../../source/development_saio.rst:102 msgid "" "This installs necessary system dependencies and *most* of the python " "dependencies. Later in the process setuptools/distribute or pip will install " "and/or upgrade packages." msgstr "" #: ../../source/development_saio.rst:108 msgid "Configuring storage" msgstr "" #: ../../source/development_saio.rst:110 msgid "" "Swift requires some space on XFS filesystems to store data and run tests." msgstr "" #: ../../source/development_saio.rst:112 msgid "Choose either :ref:`partition-section` or :ref:`loopback-section`." msgstr "" #: ../../source/development_saio.rst:117 msgid "Using a partition for storage" msgstr "" #: ../../source/development_saio.rst:119 msgid "" "If you are going to use a separate partition for Swift data, be sure to add " "another device when creating the VM, and follow these instructions:" msgstr "" #: ../../source/development_saio.rst:123 msgid "" "The disk does not have to be ``/dev/sdb1`` (for example, it could be ``/dev/" "vdb1``) however the mount point should still be ``/mnt/sdb1``." msgstr "" #: ../../source/development_saio.rst:126 msgid "Set up a single partition on the device (this will wipe the drive)::" msgstr "" #: ../../source/development_saio.rst:130 msgid "Create an XFS file system on the partition::" msgstr "" #: ../../source/development_saio.rst:134 msgid "Find the UUID of the new partition::" msgstr "" #: ../../source/development_saio.rst:138 msgid "Edit ``/etc/fstab`` and add::" msgstr "" #: ../../source/development_saio.rst:142 ../../source/development_saio.rst:170 msgid "Create the Swift data mount point and test that mounting works::" msgstr "" #: ../../source/development_saio.rst:147 msgid "Next, skip to :ref:`common-dev-section`." msgstr "" #: ../../source/development_saio.rst:152 msgid "Using a loopback device for storage" msgstr "" #: ../../source/development_saio.rst:154 msgid "" "If you want to use a loopback device instead of another partition, follow " "these instructions:" msgstr "" #: ../../source/development_saio.rst:157 msgid "Create the file for the loopback device::" msgstr "" #: ../../source/development_saio.rst:163 msgid "" "Modify size specified in the ``truncate`` command to make a larger or " "smaller partition as needed." msgstr "" #: ../../source/development_saio.rst:166 msgid "Edit `/etc/fstab` and add::" msgstr "" #: ../../source/development_saio.rst:178 msgid "Common Post-Device Setup" msgstr "" #: ../../source/development_saio.rst:180 msgid "Create the individualized data links::" msgstr "" #: ../../source/development_saio.rst:198 msgid "" "We create the mount points and mount the loopback file under /mnt/sdb1. This " "file will contain one directory per simulated Swift node, each owned by the " "current Swift user." msgstr "" #: ../../source/development_saio.rst:202 msgid "" "We then create symlinks to these directories under /srv. If the disk sdb or " "loopback file is unmounted, files will not be written under /srv/\\*, " "because the symbolic link destination /mnt/sdb1/* will not exist. This " "prevents disk sync operations from writing to the root partition in the " "event a drive is unmounted." msgstr "" #: ../../source/development_saio.rst:208 msgid "Restore appropriate permissions on reboot." msgstr "" #: ../../source/development_saio.rst:210 msgid "" "On traditional Linux systems, add the following lines to ``/etc/rc.local`` " "(before the ``exit 0``)::" msgstr "" #: ../../source/development_saio.rst:217 msgid "On CentOS and Fedora we can use systemd (rc.local is deprecated)::" msgstr "" #: ../../source/development_saio.rst:227 msgid "On OpenSuse place the lines in ``/etc/init.d/boot.local``." msgstr "" #: ../../source/development_saio.rst:230 msgid "" "On some systems the rc file might need to be an executable shell script." msgstr "" #: ../../source/development_saio.rst:233 msgid "Creating an XFS tmp dir" msgstr "" #: ../../source/development_saio.rst:235 msgid "" "Tests require having a directory available on an XFS filesystem. By default " "the tests use ``/tmp``, however this can be pointed elsewhere with the " "``TMPDIR`` environment variable." msgstr "" #: ../../source/development_saio.rst:240 msgid "" "If your root filesystem is XFS, you can skip this section if ``/tmp`` is " "just a directory and not a mounted tmpfs. Or you could simply point to any " "existing directory owned by your user by specifying it with the ``TMPDIR`` " "environment variable." msgstr "" #: ../../source/development_saio.rst:245 msgid "" "If your root filesystem is not XFS, you should create a loopback device, " "format it with XFS and mount it. You can mount it over ``/tmp`` or to " "another location and specify it with the ``TMPDIR`` environment variable." msgstr "" #: ../../source/development_saio.rst:249 msgid "Create the file for the tmp loopback device::" msgstr "" #: ../../source/development_saio.rst:255 msgid "To mount the tmp loopback device at ``/tmp``, do the following::" msgstr "" #: ../../source/development_saio.rst:260 ../../source/development_saio.rst:271 msgid "To persist this, edit and add the following to ``/etc/fstab``::" msgstr "" #: ../../source/development_saio.rst:264 msgid "" "To mount the tmp loopback at an alternate location (for example, ``/mnt/" "tmp``), do the following::" msgstr "" #: ../../source/development_saio.rst:275 msgid "" "Set your ``TMPDIR`` environment dir so that Swift looks in the right " "location::" msgstr "" #: ../../source/development_saio.rst:282 msgid "Getting the code" msgstr "" #: ../../source/development_saio.rst:284 msgid "Check out the python-swiftclient repo::" msgstr "" #: ../../source/development_saio.rst:288 msgid "Build a development installation of python-swiftclient::" msgstr "" #: ../../source/development_saio.rst:292 msgid "Check out the Swift repo::" msgstr "" #: ../../source/development_saio.rst:296 msgid "Build a development installation of Swift::" msgstr "" #: ../../source/development_saio.rst:301 msgid "" "Due to a difference in how ``libssl.so`` is named in OpenSuse vs. other " "Linux distros the wheel/binary won't work; thus we use ``--no-binary " "cryptography`` to build ``cryptography`` locally." msgstr "" #: ../../source/development_saio.rst:305 msgid "" "Fedora users might have to perform the following if development installation " "of Swift fails::" msgstr "" #: ../../source/development_saio.rst:310 msgid "Install Swift's test dependencies::" msgstr "" #: ../../source/development_saio.rst:316 msgid "Setting up rsync" msgstr "" #: ../../source/development_saio.rst:318 msgid "Create ``/etc/rsyncd.conf``::" msgstr "" #: ../../source/development_saio.rst:323 msgid "" "Here is the default ``rsyncd.conf`` file contents maintained in the repo " "that is copied and fixed up above:" msgstr "" #: ../../source/development_saio.rst:329 msgid "Enable rsync daemon" msgstr "" #: ../../source/development_saio.rst:331 msgid "On Ubuntu, edit the following line in ``/etc/default/rsync``::" msgstr "" #: ../../source/development_saio.rst:336 msgid "You might have to create the file to perform the edits." msgstr "" #: ../../source/development_saio.rst:338 msgid "On CentOS and Fedora, enable the systemd service::" msgstr "" #: ../../source/development_saio.rst:342 msgid "On OpenSuse, nothing needs to happen here." msgstr "" #: ../../source/development_saio.rst:345 msgid "" "On platforms with SELinux in ``Enforcing`` mode, either set to " "``Permissive``::" msgstr "" #: ../../source/development_saio.rst:350 msgid "Or just allow rsync full access::" msgstr "" #: ../../source/development_saio.rst:354 msgid "Start the rsync daemon" msgstr "" #: ../../source/development_saio.rst:356 msgid "On Ubuntu 14.04, run::" msgstr "" #: ../../source/development_saio.rst:360 msgid "On Ubuntu 16.04, run::" msgstr "" #: ../../source/development_saio.rst:365 msgid "On CentOS, Fedora and OpenSuse, run::" msgstr "" #: ../../source/development_saio.rst:369 msgid "On other xinetd based systems simply run::" msgstr "" #: ../../source/development_saio.rst:373 msgid "Verify rsync is accepting connections for all servers::" msgstr "" #: ../../source/development_saio.rst:377 msgid "You should see the following output from the above command::" msgstr "" #: ../../source/development_saio.rst:394 msgid "Starting memcached" msgstr "" #: ../../source/development_saio.rst:396 msgid "On non-Ubuntu distros you need to ensure memcached is running::" msgstr "" #: ../../source/development_saio.rst:406 msgid "" "The tempauth middleware stores tokens in memcached. If memcached is not " "running, tokens cannot be validated, and accessing Swift becomes impossible." msgstr "" #: ../../source/development_saio.rst:411 msgid "Optional: Setting up rsyslog for individual logging" msgstr "" #: ../../source/development_saio.rst:413 msgid "" "Fedora and OpenSuse may not have rsyslog installed, in which case you will " "need to install it if you want to use individual logging." msgstr "" #: ../../source/development_saio.rst:416 msgid "Install rsyslogd" msgstr "" #: ../../source/development_saio.rst:419 msgid "On Fedora::" msgstr "" #: ../../source/development_saio.rst:423 msgid "On OpenSuse::" msgstr "" #: ../../source/development_saio.rst:427 msgid "Install the Swift rsyslogd configuration::" msgstr "" #: ../../source/development_saio.rst:431 msgid "" "Be sure to review that conf file to determine if you want all the logs in " "one file vs. all the logs separated out, and if you want hourly logs for " "stats processing. For convenience, we provide its default contents below:" msgstr "" #: ../../source/development_saio.rst:439 msgid "" "Edit ``/etc/rsyslog.conf`` and make the following change (usually in the " "\"GLOBAL DIRECTIVES\" section)::" msgstr "" #: ../../source/development_saio.rst:444 msgid "If using hourly logs (see above) perform::" msgstr "" #: ../../source/development_saio.rst:448 msgid "Otherwise perform::" msgstr "" #: ../../source/development_saio.rst:452 msgid "Setup the logging directory and start syslog:" msgstr "" #: ../../source/development_saio.rst:454 msgid "On Ubuntu::" msgstr "" #: ../../source/development_saio.rst:460 msgid "On CentOS, Fedora and OpenSuse::" msgstr "" #: ../../source/development_saio.rst:469 msgid "Configuring each node" msgstr "" #: ../../source/development_saio.rst:471 msgid "" "After performing the following steps, be sure to verify that Swift has " "access to resulting configuration files (sample configuration files are " "provided with all defaults in line-by-line comments)." msgstr "" #: ../../source/development_saio.rst:475 msgid "Optionally remove an existing swift directory::" msgstr "" #: ../../source/development_saio.rst:479 msgid "Populate the ``/etc/swift`` directory itself::" msgstr "" #: ../../source/development_saio.rst:484 msgid "Update ```` references in the Swift config files::" msgstr "" #: ../../source/development_saio.rst:488 msgid "" "The contents of the configuration files provided by executing the above " "commands are as follows:" msgstr "" #: ../../source/development_saio.rst:491 msgid "``/etc/swift/swift.conf``" msgstr "" #: ../../source/development_saio.rst:496 msgid "``/etc/swift/proxy-server.conf``" msgstr "" #: ../../source/development_saio.rst:501 msgid "``/etc/swift/object-expirer.conf``" msgstr "" #: ../../source/development_saio.rst:506 msgid "``/etc/swift/container-sync-realms.conf``" msgstr "" #: ../../source/development_saio.rst:511 msgid "``/etc/swift/account-server/1.conf``" msgstr "" #: ../../source/development_saio.rst:516 msgid "``/etc/swift/container-server/1.conf``" msgstr "" #: ../../source/development_saio.rst:521 msgid "``/etc/swift/container-reconciler/1.conf``" msgstr "" #: ../../source/development_saio.rst:526 msgid "``/etc/swift/object-server/1.conf``" msgstr "" #: ../../source/development_saio.rst:531 msgid "``/etc/swift/account-server/2.conf``" msgstr "" #: ../../source/development_saio.rst:536 msgid "``/etc/swift/container-server/2.conf``" msgstr "" #: ../../source/development_saio.rst:541 msgid "``/etc/swift/container-reconciler/2.conf``" msgstr "" #: ../../source/development_saio.rst:546 msgid "``/etc/swift/object-server/2.conf``" msgstr "" #: ../../source/development_saio.rst:551 msgid "``/etc/swift/account-server/3.conf``" msgstr "" #: ../../source/development_saio.rst:556 msgid "``/etc/swift/container-server/3.conf``" msgstr "" #: ../../source/development_saio.rst:561 msgid "``/etc/swift/container-reconciler/3.conf``" msgstr "" #: ../../source/development_saio.rst:566 msgid "``/etc/swift/object-server/3.conf``" msgstr "" #: ../../source/development_saio.rst:571 msgid "``/etc/swift/account-server/4.conf``" msgstr "" #: ../../source/development_saio.rst:576 msgid "``/etc/swift/container-server/4.conf``" msgstr "" #: ../../source/development_saio.rst:581 msgid "``/etc/swift/container-reconciler/4.conf``" msgstr "" #: ../../source/development_saio.rst:586 msgid "``/etc/swift/object-server/4.conf``" msgstr "" #: ../../source/development_saio.rst:595 msgid "Setting up scripts for running Swift" msgstr "" #: ../../source/development_saio.rst:597 msgid "Copy the SAIO scripts for resetting the environment::" msgstr "" #: ../../source/development_saio.rst:603 msgid "Edit the ``$HOME/bin/resetswift`` script" msgstr "" #: ../../source/development_saio.rst:605 msgid "The template ``resetswift`` script looks like the following:" msgstr "" #: ../../source/development_saio.rst:610 msgid "" "If you did not set up rsyslog for individual logging, remove the ``find /var/" "log/swift...`` line::" msgstr "" #: ../../source/development_saio.rst:616 msgid "Install the sample configuration file for running tests::" msgstr "" #: ../../source/development_saio.rst:620 msgid "The template ``test.conf`` looks like the following:" msgstr "" #: ../../source/development_saio.rst:627 msgid "Configure environment variables for Swift" msgstr "" #: ../../source/development_saio.rst:629 msgid "Add an environment variable for running tests below::" msgstr "" #: ../../source/development_saio.rst:633 msgid "Be sure that your ``PATH`` includes the ``bin`` directory::" msgstr "" #: ../../source/development_saio.rst:637 msgid "" "If you are using a loopback device for Swift Storage, add an environment var " "to substitute ``/dev/sdb1`` with ``/srv/swift-disk``::" msgstr "" #: ../../source/development_saio.rst:642 msgid "" "If you are using a device other than ``/dev/sdb1`` for Swift storage (for " "example, ``/dev/vdb1``), add an environment var to substitute it::" msgstr "" #: ../../source/development_saio.rst:647 msgid "" "If you are using a location other than ``/tmp`` for Swift tmp data (for " "example, ``/mnt/tmp``), add ``TMPDIR`` environment var to set it::" msgstr "" #: ../../source/development_saio.rst:653 msgid "Source the above environment variables into your current environment::" msgstr "" #: ../../source/development_saio.rst:659 msgid "Constructing initial rings" msgstr "" #: ../../source/development_saio.rst:661 msgid "Construct the initial rings using the provided script::" msgstr "" #: ../../source/development_saio.rst:665 msgid "The ``remakerings`` script looks like the following:" msgstr "" #: ../../source/development_saio.rst:670 msgid "" "You can expect the output from this command to produce the following. Note " "that 3 object rings are created in order to test storage policies and EC in " "the SAIO environment. The EC ring is the only one with all 8 devices. There " "are also two replication rings, one for 3x replication and another for 2x " "replication, but those rings only use 4 devices:" msgstr "" #: ../../source/development_saio.rst:710 msgid "Read more about Storage Policies and your SAIO :doc:`policies_saio`" msgstr "" #: ../../source/development_saio.rst:714 msgid "Testing Swift" msgstr "" #: ../../source/development_saio.rst:716 msgid "Verify the unit tests run::" msgstr "" #: ../../source/development_saio.rst:720 msgid "Note that the unit tests do not require any Swift daemons running." msgstr "" #: ../../source/development_saio.rst:722 msgid "" "Start the \"main\" Swift daemon processes (proxy, account, container, and " "object)::" msgstr "" #: ../../source/development_saio.rst:727 msgid "" "(The \"``Unable to increase file descriptor limit. Running as non-root?``\" " "warnings are expected and ok.)" msgstr "" #: ../../source/development_saio.rst:730 msgid "The ``startmain`` script looks like the following:" msgstr "" #: ../../source/development_saio.rst:735 msgid "Get an ``X-Storage-Url`` and ``X-Auth-Token``::" msgstr "" #: ../../source/development_saio.rst:739 msgid "Check that you can ``GET`` account::" msgstr "" #: ../../source/development_saio.rst:743 msgid "Check that the ``swift`` command provided by python-swiftclient works::" msgstr "" #: ../../source/development_saio.rst:747 msgid "Verify the functional tests run::" msgstr "" #: ../../source/development_saio.rst:751 msgid "" "(Note: functional tests will first delete everything in the configured " "accounts.)" msgstr "" #: ../../source/development_saio.rst:754 msgid "Verify the probe tests run::" msgstr "" #: ../../source/development_saio.rst:758 msgid "" "(Note: probe tests will reset your environment as they call ``resetswift`` " "for each test.)" msgstr "" #: ../../source/development_saio.rst:763 msgid "Debugging Issues" msgstr "" #: ../../source/development_saio.rst:765 msgid "" "If all doesn't go as planned, and tests fail, or you can't auth, or " "something doesn't work, here are some good starting places to look for " "issues:" msgstr "" #: ../../source/development_saio.rst:768 msgid "" "Everything is logged using system facilities -- usually in ``/var/log/" "syslog``, but possibly in ``/var/log/messages`` on e.g. Fedora -- so that is " "a good first place to look for errors (most likely python tracebacks)." msgstr "" #: ../../source/development_saio.rst:771 msgid "" "Make sure all of the server processes are running. For the base " "functionality, the Proxy, Account, Container, and Object servers should be " "running." msgstr "" #: ../../source/development_saio.rst:774 msgid "" "If one of the servers are not running, and no errors are logged to syslog, " "it may be useful to try to start the server manually, for example: ``swift-" "object-server /etc/swift/object-server/1.conf`` will start the object " "server. If there are problems not showing up in syslog, then you will " "likely see the traceback on startup." msgstr "" #: ../../source/development_saio.rst:779 msgid "" "If you need to, you can turn off syslog for unit tests. This can be useful " "for environments where ``/dev/log`` is unavailable, or which cannot rate " "limit (unit tests generate a lot of logs very quickly). Open the file " "``SWIFT_TEST_CONFIG_FILE`` points to, and change the value of " "``fake_syslog`` to ``True``." msgstr "" #: ../../source/development_saio.rst:784 msgid "" "If you encounter a ``401 Unauthorized`` when following Step 12 where you " "check that you can ``GET`` account, use ``sudo service memcached status`` " "and check if memcache is running. If memcache is not running, start it using " "``sudo service memcached start``. Once memcache is running, rerun ``GET`` " "account." msgstr "" #: ../../source/development_saio.rst:791 msgid "Known Issues" msgstr "" #: ../../source/development_saio.rst:793 msgid "" "Listed here are some \"gotcha's\" that you may run into when using or " "testing your SAIO:" msgstr "" #: ../../source/development_saio.rst:795 msgid "" "fallocate_reserve - in most cases a SAIO doesn't have a very large XFS " "partition so having fallocate enabled and fallocate_reserve set can cause " "issues, specifically when trying to run the functional tests. For this " "reason fallocate has been turned off on the object-servers in the SAIO. If " "you want to play with the fallocate_reserve settings then know that " "functional tests will fail unless you change the max_file_size constraint to " "something more reasonable then the default (5G). Ideally you'd make it 1/4 " "of your XFS file system size so the tests can pass." msgstr "" #: ../../source/development_watchers.rst:3 msgid "Auditor Watchers" msgstr "" #: ../../source/development_watchers.rst:7 msgid "Overview" msgstr "" #: ../../source/development_watchers.rst:9 msgid "" "The duty of auditors is to guard Swift against corruption in the storage " "media. But because auditors crawl all objects, they can be used to program " "Swift to operate on every object. It is done through an API known as " "\"watcher\"." msgstr "" #: ../../source/development_watchers.rst:14 msgid "" "Watchers do not have any private view into the cluster. An operator can " "write a standalone program that walks the directories and performs any " "desired inspection or maintenance. What watcher brings to the table is a " "framework to do the same job easily, under resource restrictions already in " "place for the auditor." msgstr "" #: ../../source/development_watchers.rst:21 msgid "" "Operations performed by watchers are often site-specific, or else they would " "be incorporated into Swift already. However, the code in the tree provides a " "reference implementation for convenience. It is located in swift/obj/" "watchers/dark_data.py and implements so-called \"Dark Data Watcher\"." msgstr "" #: ../../source/development_watchers.rst:27 msgid "Currently, only object auditor supports the watchers." msgstr "" #: ../../source/development_watchers.rst:31 msgid "The API class" msgstr "" #: ../../source/development_watchers.rst:33 msgid "" "The implementation of a watcher is a Python class that may look like this::" msgstr "" #: ../../source/development_watchers.rst:50 msgid "" "Arguments to watcher methods are passed as keyword arguments, and methods " "are expected to consume new, unknown arguments." msgstr "" #: ../../source/development_watchers.rst:53 msgid "" "The method __init__() is used to save configuration and logger at the start " "of the plug-in." msgstr "" #: ../../source/development_watchers.rst:56 msgid "" "The method start() is invoked when auditor starts a pass. It usually resets " "counters. The argument `auditor_type` is string of `\"ALL\"` or `\"ZBF\"`, " "according to the type of the auditor running the watcher. Watchers that talk " "to the network tend to hang off the ALL-type auditor, the lightweight ones " "are okay with the ZBF-type." msgstr "" #: ../../source/development_watchers.rst:62 msgid "" "The method end() is the closing bracket for start(). It is typically used to " "log something, or dump some statistics." msgstr "" #: ../../source/development_watchers.rst:65 msgid "" "The method see_object() is called when auditor completed an audit of an " "object. This is where most of the work is done." msgstr "" #: ../../source/development_watchers.rst:68 msgid "" "The protocol for see_object() allows it to raise a special exception, " "QuarantienRequested. Auditor catches it and quarantines the object. In " "general, it's okay for watcher methods to throw exceptions, so an author of " "a watcher plugin does not have to catch them explicitly with a try:; they " "can be just permitted to bubble up naturally." msgstr "" #: ../../source/development_watchers.rst:76 msgid "Loading the plugins" msgstr "" #: ../../source/development_watchers.rst:78 msgid "" "Swift auditor loads watcher classes from eggs, so it is necessary to wrap " "the class and provide it an entry point::" msgstr "" #: ../../source/development_watchers.rst:85 msgid "" "Operator tells Swift auditor what plugins to load by adding them to object-" "server.conf in the section [object-auditor]. It is also possible to pass " "parameters, arriving in the argument conf{} of method start()::" msgstr "" #: ../../source/development_watchers.rst:96 msgid "" "Do not forget to remove the watcher from auditors when done. Although the " "API itself is very lightweight, it is common for watchers to incur a " "significant performance penalty: they can talk to networked services or " "access additional objects." msgstr "" #: ../../source/development_watchers.rst:103 msgid "Dark Data Watcher" msgstr "" #: ../../source/development_watchers.rst:105 msgid "" "The watcher API is assumed to be under development. Operators who need " "extensions are welcome to report any needs for more arguments to " "see_object()." msgstr "" #: ../../source/development_watchers.rst:109 msgid "" "The :ref:`dark_data` watcher has been provided as an example. If an operator " "wants to create their own watcher, start by copying the provided example " "template ``swift/obj/watchers/dark_data.py`` and see if it is sufficient." msgstr "" #: ../../source/first_contribution_swift.rst:3 msgid "First Contribution to Swift" msgstr "" #: ../../source/first_contribution_swift.rst:7 msgid "Getting Swift" msgstr "" #: ../../source/first_contribution_swift.rst:11 msgid "" "Swift's source code is hosted on github and managed with git. The current " "trunk can be checked out like this::" msgstr "" #: ../../source/first_contribution_swift.rst:16 msgid "This will clone the Swift repository under your account." msgstr "" #: ../../source/first_contribution_swift.rst:18 msgid "" "A source tarball for the latest release of Swift is available on the " "`launchpad project page `_." msgstr "" #: ../../source/first_contribution_swift.rst:21 msgid "Prebuilt packages for Ubuntu and RHEL variants are available." msgstr "" #: ../../source/first_contribution_swift.rst:23 msgid "`Swift Ubuntu Packages `_" msgstr "" #: ../../source/first_contribution_swift.rst:24 msgid "" "`Swift RDO Packages `_" msgstr "" #: ../../source/first_contribution_swift.rst:28 msgid "Source Control Setup" msgstr "" #: ../../source/first_contribution_swift.rst:30 msgid "" "Swift uses ``git`` for source control. The OpenStack `Developer's Guide " "`_ describes the " "steps for setting up Git and all the necessary accounts for contributing " "code to Swift." msgstr "" #: ../../source/first_contribution_swift.rst:37 msgid "Changes to Swift" msgstr "" #: ../../source/first_contribution_swift.rst:39 msgid "" "Once you have the source code and source control set up, you can make your " "changes to Swift." msgstr "" #: ../../source/first_contribution_swift.rst:44 msgid "Testing" msgstr "" #: ../../source/first_contribution_swift.rst:46 msgid "" "The :doc:`Development Guidelines ` describe the " "testing requirements before submitting Swift code." msgstr "" #: ../../source/first_contribution_swift.rst:49 msgid "" "In summary, you can execute tox from the swift home directory (where you " "checked out the source code)::" msgstr "" #: ../../source/first_contribution_swift.rst:54 msgid "" "Tox will present tests results. Notice that in the beginning, it is very " "common to break many coding style guidelines." msgstr "" #: ../../source/first_contribution_swift.rst:59 msgid "Proposing changes to Swift" msgstr "" #: ../../source/first_contribution_swift.rst:61 msgid "" "The OpenStack `Developer's Guide `_ describes the most common ``git`` commands that you will " "need." msgstr "" #: ../../source/first_contribution_swift.rst:65 msgid "" "Following is a list of the commands that you need to know for your first " "contribution to Swift:" msgstr "" #: ../../source/first_contribution_swift.rst:68 msgid "To clone a copy of Swift::" msgstr "" #: ../../source/first_contribution_swift.rst:72 msgid "" "Under the swift directory, set up the Gerrit repository. The following " "command configures the repository to know about Gerrit and installs the " "``Change-Id`` commit hook. You only need to do this once::" msgstr "" #: ../../source/first_contribution_swift.rst:78 msgid "" "To create your development branch (substitute branch_name for a name of your " "choice::" msgstr "" #: ../../source/first_contribution_swift.rst:83 msgid "To check the files that have been updated in your branch::" msgstr "" #: ../../source/first_contribution_swift.rst:87 msgid "To check the differences between your branch and the repository::" msgstr "" #: ../../source/first_contribution_swift.rst:91 msgid "" "Assuming you have not added new files, you commit all your changes using::" msgstr "" #: ../../source/first_contribution_swift.rst:95 msgid "" "Read the `Summary of Git commit message structure `_ " "for best practices on writing the commit message. When you are ready to send " "your changes for review use::" msgstr "" #: ../../source/first_contribution_swift.rst:101 msgid "" "If successful, Git response message will contain a URL you can use to track " "your changes." msgstr "" #: ../../source/first_contribution_swift.rst:104 msgid "" "If you need to make further changes to the same review, you can commit them " "using::" msgstr "" #: ../../source/first_contribution_swift.rst:109 msgid "" "This will commit the changes under the same set of changes you issued " "earlier. Notice that in order to send your latest version for review, you " "will still need to call::" msgstr "" #: ../../source/first_contribution_swift.rst:117 msgid "Tracking your changes" msgstr "" #: ../../source/first_contribution_swift.rst:119 msgid "" "After proposing changes to Swift, you can track them at https://review." "opendev.org. After logging in, you will see a dashboard of \"Outgoing " "reviews\" for changes you have proposed, \"Incoming reviews\" for changes " "you are reviewing, and \"Recently closed\" changes for which you were either " "a reviewer or owner." msgstr "" #: ../../source/first_contribution_swift.rst:129 msgid "Post rebase instructions" msgstr "" #: ../../source/first_contribution_swift.rst:131 msgid "" "After rebasing, the following steps should be performed to rebuild the swift " "installation. Note that these commands should be performed from the root of " "the swift repo directory (e.g. ``$HOME/swift/``)::" msgstr "" #: ../../source/first_contribution_swift.rst:138 msgid "" "If using TOX, depending on the changes made during the rebase, you may need " "to rebuild the TOX environment (generally this will be the case if test-" "requirements.txt was updated such that a new version of a package is " "required), this can be accomplished using the ``-r`` argument to the TOX " "cli::" msgstr "" #: ../../source/first_contribution_swift.rst:145 msgid "" "You can include any of the other TOX arguments as well, for example, to run " "the pep8 suite and rebuild the TOX environment the following can be used::" msgstr "" #: ../../source/first_contribution_swift.rst:150 msgid "" "The rebuild option only needs to be specified once for a particular build (e." "g. pep8), that is further invocations of the same build will not require " "this until the next rebase." msgstr "" #: ../../source/first_contribution_swift.rst:156 msgid "Troubleshooting" msgstr "" #: ../../source/first_contribution_swift.rst:158 msgid "" "You may run into the following errors when starting Swift if you rebase your " "commit using::" msgstr "" #: ../../source/first_contribution_swift.rst:178 msgid "(where XXX represents a dev version of Swift)." msgstr "" #: ../../source/first_contribution_swift.rst:205 msgid "" "This happens because ``git rebase`` will retrieve code for a different " "version of Swift in the development stream, but the start scripts under ``/" "usr/local/bin`` have not been updated. The solution is to follow the steps " "described in the :ref:`post-rebase-instructions` section." msgstr "" #: ../../source/getting_started.rst:3 msgid "Getting Started" msgstr "" #: ../../source/getting_started.rst:7 msgid "System Requirements" msgstr "" #: ../../source/getting_started.rst:9 msgid "" "Swift development currently targets Ubuntu Server 22.04, but should work on " "most Linux platforms." msgstr "" #: ../../source/getting_started.rst:12 msgid "Swift is written in Python and has these dependencies:" msgstr "" #: ../../source/getting_started.rst:14 msgid "Python (3.6-3.12)" msgstr "" #: ../../source/getting_started.rst:15 msgid "rsync 3.x" msgstr "" #: ../../source/getting_started.rst:16 msgid "`liberasurecode `__" msgstr "" #: ../../source/getting_started.rst:17 msgid "" "The Python packages listed in `the requirements file `__" msgstr "" #: ../../source/getting_started.rst:18 msgid "" "Testing additionally requires `the test dependencies `__" msgstr "" #: ../../source/getting_started.rst:19 msgid "" "Testing requires `these distribution packages `__" msgstr "" #: ../../source/getting_started.rst:23 msgid "Development" msgstr "" #: ../../source/getting_started.rst:25 msgid "" "To get started with development with Swift, or to just play around, the " "following docs will be useful:" msgstr "" #: ../../source/getting_started.rst:28 msgid "" ":doc:`Swift All in One ` - Set up a VM with Swift installed" msgstr "" #: ../../source/getting_started.rst:29 msgid ":doc:`Development Guidelines `" msgstr "" #: ../../source/getting_started.rst:30 msgid ":doc:`First Contribution to Swift `" msgstr "" #: ../../source/getting_started.rst:31 msgid ":doc:`Associated Projects `" msgstr "" #: ../../source/getting_started.rst:35 msgid "CLI client and SDK library" msgstr "" #: ../../source/getting_started.rst:37 msgid "" "There are many clients in the :ref:`ecosystem `. The " "official CLI and SDK is python-swiftclient." msgstr "" #: ../../source/getting_started.rst:40 msgid "`Source code `__" msgstr "" #: ../../source/getting_started.rst:41 msgid "`Python Package Index `__" msgstr "" #: ../../source/getting_started.rst:45 msgid "Production" msgstr "" #: ../../source/getting_started.rst:47 msgid "" "If you want to set up and configure Swift for a production cluster, the " "following doc should be useful:" msgstr "" #: ../../source/getting_started.rst:50 msgid ":doc:`install/index`" msgstr "" #: ../../source/index.rst:19 msgid "Welcome to Swift's documentation!" msgstr "" #: ../../source/index.rst:21 msgid "" "Swift is a highly available, distributed, eventually consistent object/blob " "store. Organizations can use Swift to store lots of data efficiently, " "safely, and cheaply." msgstr "" #: ../../source/index.rst:24 msgid "" "This documentation is generated by the Sphinx toolkit and lives in the " "source tree. Additional documentation on Swift and other components of " "OpenStack can be found on the `OpenStack wiki`_ and at http://docs.openstack." "org." msgstr "" #: ../../source/index.rst:32 msgid "" "If you're looking for associated projects that enhance or use Swift, please " "see the :ref:`associated_projects` page." msgstr "" #: ../../source/index.rst:41 msgid "Overview and Concepts" msgstr "" #: ../../source/index.rst:71 msgid "Contributor Documentation" msgstr "" #: ../../source/index.rst:80 msgid "Developer Documentation" msgstr "" #: ../../source/index.rst:95 msgid "Administrator Documentation" msgstr "" #: ../../source/index.rst:112 msgid "Object Storage v1 REST API Documentation" msgstr "" #: ../../source/index.rst:114 msgid "" "See `Complete Reference for the Object Storage REST API `_" msgstr "" #: ../../source/index.rst:116 msgid "The following provides supporting information for the REST API:" msgstr "" #: ../../source/index.rst:139 msgid "S3 Compatibility Info" msgstr "" #: ../../source/index.rst:147 msgid "OpenStack End User Guide" msgstr "" #: ../../source/index.rst:149 msgid "" "The `OpenStack End User Guide `_ has " "additional information on using Swift. See the `Manage objects and " "containers `_ section." msgstr "" #: ../../source/index.rst:156 msgid "Source Documentation" msgstr "" #: ../../source/index.rst:173 msgid "Indices and tables" msgstr "" #: ../../source/index.rst:175 msgid ":ref:`genindex`" msgstr "" #: ../../source/index.rst:176 msgid ":ref:`modindex`" msgstr "" #: ../../source/index.rst:177 msgid ":ref:`search`" msgstr "" #: ../../source/logs.rst:3 msgid "Logs" msgstr "" #: ../../source/logs.rst:5 msgid "" "Swift has quite verbose logging, and the generated logs can be used for " "cluster monitoring, utilization calculations, audit records, and more. As an " "overview, Swift's logs are sent to syslog and organized by log level and " "syslog facility. All log lines related to the same request have the same " "transaction id. This page documents the log formats used in the system." msgstr "" #: ../../source/logs.rst:13 msgid "" "By default, Swift will log full log lines. However, with the " "``log_max_line_length`` setting and depending on your logging server " "software, lines may be truncated or shortened. With ``log_max_line_length < " "7``, the log line will be truncated. With ``log_max_line_length >= 7``, the " "log line will be \"shortened\": about half the max length followed by \" ... " "\" followed by the other half the max length. Unless you use exceptionally " "short values, you are unlikely to run across this with the following " "documented log lines, but you may see it with debugging and error log lines." msgstr "" #: ../../source/logs.rst:25 msgid "Proxy Logs" msgstr "" #: ../../source/logs.rst:27 msgid "" "The proxy logs contain the record of all external API requests made to the " "proxy server. Swift's proxy servers log requests using a custom format " "designed to provide robust information and simple processing. It is possible " "to change this format with the ``log_msg_template`` config parameter. The " "default log format is::" msgstr "" #: ../../source/logs.rst:38 msgid "" "Some keywords, signaled by the (anonymizable) flag, can be anonymized by " "using the transformer 'anonymized'. The data are applied the hashing method " "of ``log_anonymization_method`` and an optional salt " "``log_anonymization_salt``." msgstr "" #: ../../source/logs.rst:42 msgid "" "Some keywords, signaled by the (timestamp) flag, can be converted to " "standard dates formats using the matching transformers: 'datetime', " "'asctime' or 'iso8601'. Other transformers for timestamps are 's', 'ms', " "'us' and 'ns' for seconds, milliseconds, microseconds and nanoseconds. " "Python's strftime directives can also be used as tranformers (a, A, b, B, c, " "d, H, I, j, m, M, p, S, U, w, W, x, X, y, Y, Z)." msgstr "" #: ../../source/logs.rst:49 msgid "Example::" msgstr "" #: ../../source/logs.rst:56 ../../source/logs.rst:170 msgid "**Log Field**" msgstr "" #: ../../source/logs.rst:56 ../../source/logs.rst:170 msgid "**Value**" msgstr "" #: ../../source/logs.rst:58 msgid "" "Swift's guess at the end-client IP, taken from various headers in the " "request. (anonymizable)" msgstr "" #: ../../source/logs.rst:58 msgid "client_ip" msgstr "" #: ../../source/logs.rst:60 msgid "The IP address of the other end of the TCP connection. (anonymizable)" msgstr "" #: ../../source/logs.rst:60 ../../source/logs.rst:172 msgid "remote_addr" msgstr "" #: ../../source/logs.rst:62 msgid "Timestamp of the request. (timestamp)" msgstr "" #: ../../source/logs.rst:62 ../../source/logs.rst:88 msgid "end_time" msgstr "" #: ../../source/logs.rst:63 ../../source/logs.rst:175 msgid "The HTTP verb in the request." msgstr "" #: ../../source/logs.rst:63 msgid "method" msgstr "" #: ../../source/logs.rst:64 msgid "The domain in the request. (anonymizable)" msgstr "" #: ../../source/logs.rst:64 msgid "domain" msgstr "" #: ../../source/logs.rst:65 msgid "The path portion of the request. (anonymizable)" msgstr "" #: ../../source/logs.rst:65 msgid "path" msgstr "" #: ../../source/logs.rst:66 msgid "The transport protocol used (currently one of http or https)." msgstr "" #: ../../source/logs.rst:66 msgid "protocol" msgstr "" #: ../../source/logs.rst:68 ../../source/logs.rst:177 msgid "The response code for the request." msgstr "" #: ../../source/logs.rst:68 ../../source/logs.rst:177 msgid "status_int" msgstr "" #: ../../source/logs.rst:69 msgid "The value of the HTTP Referer header. (anonymizable)" msgstr "" #: ../../source/logs.rst:69 ../../source/logs.rst:179 msgid "referer" msgstr "" #: ../../source/logs.rst:70 msgid "The value of the HTTP User-Agent header. (anonymizable)" msgstr "" #: ../../source/logs.rst:70 ../../source/logs.rst:181 msgid "user_agent" msgstr "" #: ../../source/logs.rst:71 msgid "" "The value of the auth token. This may be truncated or otherwise obscured." msgstr "" #: ../../source/logs.rst:71 msgid "auth_token" msgstr "" #: ../../source/logs.rst:73 msgid "The number of bytes read from the client for this request." msgstr "" #: ../../source/logs.rst:73 msgid "bytes_recvd" msgstr "" #: ../../source/logs.rst:74 msgid "" "The number of bytes sent to the client in the body of the response. This is " "how many bytes were yielded to the WSGI server." msgstr "" #: ../../source/logs.rst:74 msgid "bytes_sent" msgstr "" #: ../../source/logs.rst:77 msgid "The etag header value given by the client. (anonymizable)" msgstr "" #: ../../source/logs.rst:77 msgid "client_etag" msgstr "" #: ../../source/logs.rst:78 ../../source/logs.rst:180 msgid "The transaction id of the request." msgstr "" #: ../../source/logs.rst:78 ../../source/logs.rst:180 msgid "transaction_id" msgstr "" #: ../../source/logs.rst:79 msgid "The headers given in the request. (anonymizable)" msgstr "" #: ../../source/logs.rst:79 msgid "headers" msgstr "" #: ../../source/logs.rst:80 msgid "The duration of the request." msgstr "" #: ../../source/logs.rst:80 ../../source/logs.rst:186 msgid "request_time" msgstr "" #: ../../source/logs.rst:81 msgid "" "The \"source\" of the request. This may be set for requests that are " "generated in order to fulfill client requests, e.g. bulk uploads." msgstr "" #: ../../source/logs.rst:81 msgid "source" msgstr "" #: ../../source/logs.rst:84 msgid "" "Various info that may be useful for diagnostics, e.g. the value of any x-" "delete-at header." msgstr "" #: ../../source/logs.rst:84 msgid "log_info" msgstr "" #: ../../source/logs.rst:86 msgid "High-resolution timestamp from the start of the request. (timestamp)" msgstr "" #: ../../source/logs.rst:86 msgid "start_time" msgstr "" #: ../../source/logs.rst:88 msgid "High-resolution timestamp from the end of the request. (timestamp)" msgstr "" #: ../../source/logs.rst:90 msgid "Duration between the request and the first bytes is sent." msgstr "" #: ../../source/logs.rst:90 msgid "ttfb" msgstr "" #: ../../source/logs.rst:91 ../../source/logs.rst:190 msgid "The value of the storage policy index." msgstr "" #: ../../source/logs.rst:91 ../../source/logs.rst:190 msgid "policy_index" msgstr "" #: ../../source/logs.rst:92 msgid "The account part extracted from the path of the request. (anonymizable)" msgstr "" #: ../../source/logs.rst:92 msgid "account" msgstr "" #: ../../source/logs.rst:94 msgid "" "The container part extracted from the path of the request. (anonymizable)" msgstr "" #: ../../source/logs.rst:94 msgid "container" msgstr "" #: ../../source/logs.rst:96 msgid "The object part extracted from the path of the request. (anonymizable)" msgstr "" #: ../../source/logs.rst:96 msgid "object" msgstr "" #: ../../source/logs.rst:98 msgid "PID of the process emitting the log line." msgstr "" #: ../../source/logs.rst:98 msgid "pid" msgstr "" #: ../../source/logs.rst:99 msgid "" "The status sent to the client, which may be different than the logged " "response code if there was an error during the body of the request or a " "disconnect." msgstr "" #: ../../source/logs.rst:99 msgid "wire_status_int" msgstr "" #: ../../source/logs.rst:104 msgid "" "In one log line, all of the above fields are space-separated and url-" "encoded. If any value is empty, it will be logged as a \"-\". This allows " "for simple parsing by splitting each line on whitespace. New values may be " "placed at the end of the log line from time to time, but the order of the " "existing values will not change. Swift log processing utilities should look " "for the first N fields they require (e.g. in Python using something like " "``log_line.split()[:14]`` to get up through the transaction id)." msgstr "" #: ../../source/logs.rst:114 msgid "" "Some log fields (like the request path) are already url quoted, so the " "logged value will be double-quoted. For example, if a client uploads an " "object name with a ``:`` in it, it will be url-quoted as ``%3A``. The log " "module will then quote this value as ``%253A``." msgstr "" #: ../../source/logs.rst:120 msgid "Swift Source" msgstr "" #: ../../source/logs.rst:122 msgid "" "The ``source`` value in the proxy logs is used to identify the originator of " "a request in the system. For example, if the client initiates a bulk upload, " "the proxy server may end up doing many requests. The initial bulk upload " "request will be logged as normal, but all of the internal \"child requests\" " "will have a source value indicating they came from the bulk functionality." msgstr "" #: ../../source/logs.rst:129 msgid "**Logged Source Value**" msgstr "" #: ../../source/logs.rst:129 msgid "**Originator of the Request**" msgstr "" #: ../../source/logs.rst:131 msgid ":ref:`formpost`" msgstr "" #: ../../source/logs.rst:131 msgid "FP" msgstr "" #: ../../source/logs.rst:132 msgid ":ref:`static-large-objects`" msgstr "" #: ../../source/logs.rst:132 msgid "SLO" msgstr "" #: ../../source/logs.rst:133 msgid ":ref:`staticweb`" msgstr "" #: ../../source/logs.rst:133 msgid "SW" msgstr "" #: ../../source/logs.rst:134 msgid ":ref:`tempurl`" msgstr "" #: ../../source/logs.rst:134 msgid "TU" msgstr "" #: ../../source/logs.rst:135 msgid ":ref:`bulk` (delete)" msgstr "" #: ../../source/logs.rst:135 msgid "BD" msgstr "" #: ../../source/logs.rst:136 msgid ":ref:`bulk` (extract)" msgstr "" #: ../../source/logs.rst:136 msgid "EA" msgstr "" #: ../../source/logs.rst:137 msgid ":ref:`account-quotas`" msgstr "" #: ../../source/logs.rst:137 msgid "AQ" msgstr "" #: ../../source/logs.rst:138 msgid ":ref:`container-quotas`" msgstr "" #: ../../source/logs.rst:138 msgid "CQ" msgstr "" #: ../../source/logs.rst:139 msgid ":ref:`container-sync`" msgstr "" #: ../../source/logs.rst:139 msgid "CS" msgstr "" #: ../../source/logs.rst:140 msgid ":ref:`common_tempauth`" msgstr "" #: ../../source/logs.rst:140 msgid "TA" msgstr "" #: ../../source/logs.rst:141 msgid ":ref:`dynamic-large-objects`" msgstr "" #: ../../source/logs.rst:141 msgid "DLO" msgstr "" #: ../../source/logs.rst:142 msgid ":ref:`list_endpoints`" msgstr "" #: ../../source/logs.rst:142 msgid "LE" msgstr "" #: ../../source/logs.rst:143 msgid ":ref:`keystoneauth`" msgstr "" #: ../../source/logs.rst:143 msgid "KS" msgstr "" #: ../../source/logs.rst:144 msgid ":ref:`ratelimit`" msgstr "" #: ../../source/logs.rst:144 msgid "RL" msgstr "" #: ../../source/logs.rst:145 msgid ":ref:`read_only`" msgstr "" #: ../../source/logs.rst:145 msgid "RO" msgstr "" #: ../../source/logs.rst:146 msgid ":ref:`versioned_writes`" msgstr "" #: ../../source/logs.rst:146 msgid "VW" msgstr "" #: ../../source/logs.rst:147 msgid ":ref:`copy`" msgstr "" #: ../../source/logs.rst:147 msgid "SSC" msgstr "" #: ../../source/logs.rst:148 msgid ":ref:`symlink`" msgstr "" #: ../../source/logs.rst:148 msgid "SYM" msgstr "" #: ../../source/logs.rst:149 msgid ":ref:`sharding_doc`" msgstr "" #: ../../source/logs.rst:149 msgid "SH" msgstr "" #: ../../source/logs.rst:150 msgid ":ref:`s3api`" msgstr "" #: ../../source/logs.rst:150 msgid "S3" msgstr "" #: ../../source/logs.rst:151 msgid ":ref:`object_versioning`" msgstr "" #: ../../source/logs.rst:151 msgid "OV" msgstr "" #: ../../source/logs.rst:152 msgid ":ref:`etag_quoter`" msgstr "" #: ../../source/logs.rst:152 msgid "EQ" msgstr "" #: ../../source/logs.rst:158 msgid "Storage Node Logs" msgstr "" #: ../../source/logs.rst:160 msgid "" "Swift's account, container, and object server processes each log requests " "that they receive, if they have been configured to do so with the " "``log_requests`` config parameter (which defaults to true). The format for " "these log lines is::" msgstr "" #: ../../source/logs.rst:172 msgid "The IP address of the other end of the TCP connection." msgstr "" #: ../../source/logs.rst:173 msgid "" "Timestamp of the request, in \"day/month/year:hour:minute:second +0000\" " "format." msgstr "" #: ../../source/logs.rst:173 msgid "datetime" msgstr "" #: ../../source/logs.rst:175 msgid "request_method" msgstr "" #: ../../source/logs.rst:176 msgid "The path portion of the request." msgstr "" #: ../../source/logs.rst:176 msgid "request_path" msgstr "" #: ../../source/logs.rst:178 msgid "The value of the Content-Length header in the response." msgstr "" #: ../../source/logs.rst:178 msgid "content_length" msgstr "" #: ../../source/logs.rst:179 msgid "The value of the HTTP Referer header." msgstr "" #: ../../source/logs.rst:181 msgid "" "The value of the HTTP User-Agent header. Swift services report a user-agent " "string of the service name followed by the process ID, such as ``\"proxy-" "server \"`` or ``\"object-updater \"``." msgstr "" #: ../../source/logs.rst:186 msgid "" "The time between request received and response started. **Note**: This " "includes transfer time on PUT, but not GET." msgstr "" #: ../../source/logs.rst:188 msgid "Additional useful information." msgstr "" #: ../../source/logs.rst:188 msgid "additional_info" msgstr "" #: ../../source/logs.rst:189 msgid "The process id of the server" msgstr "" #: ../../source/logs.rst:189 msgid "server_pid" msgstr "" #: ../../source/middleware.rst:5 msgid "Middleware" msgstr "" #: ../../source/middleware.rst:10 msgid "Account Quotas" msgstr "" #: ../../source/middleware.rst:19 msgid "AWS S3 Api" msgstr "" #: ../../source/middleware.rst:106 msgid "Backend Ratelimit" msgstr "" #: ../../source/middleware.rst:115 msgid "Bulk Operations (Delete and Archive Auto Extraction)" msgstr "" #: ../../source/middleware.rst:124 msgid "CatchErrors" msgstr "" #: ../../source/middleware.rst:131 msgid "CNAME Lookup" msgstr "" #: ../../source/middleware.rst:140 msgid "Container Quotas" msgstr "" #: ../../source/middleware.rst:149 msgid "Container Sync Middleware" msgstr "" #: ../../source/middleware.rst:156 msgid "Cross Domain Policies" msgstr "" #: ../../source/middleware.rst:165 msgid "Discoverability" msgstr "" #: ../../source/middleware.rst:167 msgid "" "Swift will by default provide clients with an interface providing details " "about the installation. Unless disabled (i.e ``expose_info=false`` in :ref:" "`proxy-server-config`), a GET request to ``/info`` will return configuration " "data in JSON format. An example response::" msgstr "" #: ../../source/middleware.rst:174 msgid "" "This would signify to the client that swift version 1.11.0 is running and " "that staticweb and tempurl are available in this installation." msgstr "" #: ../../source/middleware.rst:177 msgid "" "There may be administrator-only information available via ``/info``. To " "retrieve it, one must use an HMAC-signed request, similar to TempURL. The " "signature may be produced like so::" msgstr "" #: ../../source/middleware.rst:184 msgid "Domain Remap" msgstr "" #: ../../source/middleware.rst:191 msgid "Dynamic Large Objects" msgstr "" #: ../../source/middleware.rst:193 msgid "" "DLO support centers around a user specified filter that matches segments and " "concatenates them together in object listing order. Please see the DLO docs " "for :ref:`dlo-doc` further details." msgstr "" #: ../../source/middleware.rst:200 msgid "Encryption" msgstr "" #: ../../source/middleware.rst:202 msgid "" "Encryption middleware should be deployed in conjunction with the :ref:" "`keymaster` middleware." msgstr "" #: ../../source/middleware.rst:220 msgid "Etag Quoter" msgstr "" #: ../../source/middleware.rst:229 msgid "FormPost" msgstr "" #: ../../source/middleware.rst:238 msgid "GateKeeper" msgstr "" #: ../../source/middleware.rst:247 msgid "Healthcheck" msgstr "" #: ../../source/middleware.rst:256 msgid "Keymaster" msgstr "" #: ../../source/middleware.rst:258 msgid "" "Keymaster middleware should be deployed in conjunction with the :ref:" "`encryption` middleware." msgstr "" #: ../../source/middleware.rst:268 msgid "KeystoneAuth" msgstr "" #: ../../source/middleware.rst:277 msgid "List Endpoints" msgstr "" #: ../../source/middleware.rst:284 msgid "Memcache" msgstr "" #: ../../source/middleware.rst:291 msgid "Name Check (Forbidden Character Filter)" msgstr "" #: ../../source/middleware.rst:300 msgid "Object Versioning" msgstr "" #: ../../source/middleware.rst:307 msgid "Proxy Logging" msgstr "" #: ../../source/middleware.rst:314 msgid "Ratelimit" msgstr "" #: ../../source/middleware.rst:323 msgid "Read Only" msgstr "" #: ../../source/middleware.rst:332 msgid "Recon" msgstr "" #: ../../source/middleware.rst:341 msgid "Server Side Copy" msgstr "" #: ../../source/middleware.rst:348 msgid "Static Large Objects" msgstr "" #: ../../source/middleware.rst:350 msgid "Please see the SLO docs for :ref:`slo-doc` further details." msgstr "" #: ../../source/middleware.rst:357 msgid "StaticWeb" msgstr "" #: ../../source/middleware.rst:366 msgid "Symlink" msgstr "" #: ../../source/middleware.rst:375 msgid "TempAuth" msgstr "" #: ../../source/middleware.rst:384 msgid "TempURL" msgstr "" #: ../../source/middleware.rst:393 msgid "Versioned Writes" msgstr "" #: ../../source/middleware.rst:400 msgid "XProfile" msgstr "" #: ../../source/misc.rst:5 msgid "Misc" msgstr "" #: ../../source/misc.rst:10 msgid "ACLs" msgstr "" #: ../../source/misc.rst:19 msgid "Buffered HTTP" msgstr "" #: ../../source/misc.rst:29 msgid "Config" msgstr "" #: ../../source/misc.rst:38 msgid "Constraints" msgstr "" #: ../../source/misc.rst:46 msgid "Container Sync Realms" msgstr "" #: ../../source/misc.rst:56 msgid "Digest" msgstr "" #: ../../source/misc.rst:66 msgid "Direct Client" msgstr "" #: ../../source/misc.rst:76 msgid "Exceptions" msgstr "" #: ../../source/misc.rst:86 msgid "Internal Client" msgstr "" #: ../../source/misc.rst:96 msgid "IPAddrs" msgstr "" #: ../../source/misc.rst:105 msgid "Libc" msgstr "" #: ../../source/misc.rst:121 msgid "Manager" msgstr "" #: ../../source/misc.rst:128 msgid "MemCacheD" msgstr "" #: ../../source/misc.rst:137 msgid "Middleware Registry" msgstr "" #: ../../source/misc.rst:147 msgid "Request Helpers" msgstr "" #: ../../source/misc.rst:157 msgid "StatsdClient" msgstr "" #: ../../source/misc.rst:166 msgid "Storage Policy" msgstr "" #: ../../source/misc.rst:175 msgid "Swob" msgstr "" #: ../../source/misc.rst:185 msgid "Timestamp" msgstr "" #: ../../source/misc.rst:194 msgid "Utils Base" msgstr "" #: ../../source/misc.rst:203 msgid "Utils" msgstr "" #: ../../source/misc.rst:212 msgid "WSGI" msgstr "" #: ../../source/object.rst:5 msgid "Object" msgstr "" #: ../../source/object.rst:20 msgid "Object Backend" msgstr "" #: ../../source/object.rst:50 msgid "Object Reconstructor" msgstr "" #: ../../source/object.rst:60 msgid "Object Server" msgstr "" #: ../../source/object.rst:70 msgid "Object Updater" msgstr "" #: ../../source/overview_acl.rst:4 msgid "Access Control Lists (ACLs)" msgstr "" #: ../../source/overview_acl.rst:6 msgid "" "Normally to create, read and modify containers and objects, you must have " "the appropriate roles on the project associated with the account, i.e., you " "must be the owner of the account. However, an owner can grant access to " "other users by using an Access Control List (ACL)." msgstr "" #: ../../source/overview_acl.rst:11 msgid "There are two types of ACLs:" msgstr "" #: ../../source/overview_acl.rst:13 msgid "" ":ref:`container_acls`. These are specified on a container and apply to that " "container only and the objects in the container." msgstr "" #: ../../source/overview_acl.rst:15 msgid "" ":ref:`account_acls`. These are specified at the account level and apply to " "all containers and objects in the account." msgstr "" #: ../../source/overview_acl.rst:22 msgid "Container ACLs" msgstr "" #: ../../source/overview_acl.rst:24 msgid "" "Container ACLs are stored in the ``X-Container-Write`` and ``X-Container-" "Read`` metadata. The scope of the ACL is limited to the container where the " "metadata is set and the objects in the container. In addition:" msgstr "" #: ../../source/overview_acl.rst:28 msgid "" "``X-Container-Write`` grants the ability to perform PUT, POST and DELETE " "operations on objects within a container. It does not grant the ability to " "perform POST or DELETE operations on the container itself. Some ACL elements " "also grant the ability to perform HEAD or GET operations on the container." msgstr "" #: ../../source/overview_acl.rst:34 msgid "" "``X-Container-Read`` grants the ability to perform GET and HEAD operations " "on objects within a container. Some of the ACL elements also grant the " "ability to perform HEAD or GET operations on the container itself. However, " "a container ACL does not allow access to privileged metadata (such as ``X-" "Container-Sync-Key``)." msgstr "" #: ../../source/overview_acl.rst:40 msgid "" "Container ACLs use the \"V1\" ACL syntax which is a comma separated string " "of elements as shown in the following example::" msgstr "" #: ../../source/overview_acl.rst:45 msgid "Spaces may occur between elements as shown in the following example::" msgstr "" #: ../../source/overview_acl.rst:50 msgid "" "However, these spaces are removed from the value stored in the ``X-Container-" "Write`` and ``X-Container-Read`` metadata. In addition, the ``.r:`` string " "can be written as ``.referrer:``, but is stored as ``.r:``." msgstr "" #: ../../source/overview_acl.rst:54 msgid "" "While all auth systems use the same syntax, the meaning of some elements is " "different because of the different concepts used by different auth systems " "as explained in the following sections:" msgstr "" #: ../../source/overview_acl.rst:59 msgid ":ref:`acl_common_elements`" msgstr "" #: ../../source/overview_acl.rst:60 msgid ":ref:`acl_keystone_elements`" msgstr "" #: ../../source/overview_acl.rst:61 msgid ":ref:`acl_tempauth_elements`" msgstr "" #: ../../source/overview_acl.rst:67 msgid "Common ACL Elements" msgstr "" #: ../../source/overview_acl.rst:69 msgid "" "The following table describes elements of an ACL that are supported by both " "Keystone auth and TempAuth. These elements should only be used with ``X-" "Container-Read`` (with the exception of ``.rlistings``, an error will occur " "if used with ``X-Container-Write``):" msgstr "" #: ../../source/overview_acl.rst:76 ../../source/overview_acl.rst:107 #: ../../source/overview_acl.rst:168 msgid "Element" msgstr "" #: ../../source/overview_acl.rst:78 msgid ".r:*" msgstr "" #: ../../source/overview_acl.rst:78 msgid "Any user has access to objects. No token is required in the request." msgstr "" #: ../../source/overview_acl.rst:80 msgid ".r:" msgstr "" #: ../../source/overview_acl.rst:80 msgid "" "The referrer is granted access to objects. The referrer is identified by the " "``Referer`` request header in the request. No token is required." msgstr "" #: ../../source/overview_acl.rst:84 msgid ".r:-" msgstr "" #: ../../source/overview_acl.rst:84 msgid "" "This syntax (with \"-\" prepended to the referrer) is supported. However, it " "does not deny access if another element (e.g., ``.r:*``) grants access." msgstr "" #: ../../source/overview_acl.rst:88 msgid ".rlistings" msgstr "" #: ../../source/overview_acl.rst:88 msgid "" "Any user can perform a HEAD or GET operation on the container provided the " "user also has read access on objects (e.g., also has ``.r:*`` or ``.r:" "``. No token is required." msgstr "" #: ../../source/overview_acl.rst:97 msgid "Keystone Auth ACL Elements" msgstr "" #: ../../source/overview_acl.rst:99 msgid "" "The following table describes elements of an ACL that are supported only by " "Keystone auth. Keystone auth also supports the elements described in :ref:" "`acl_common_elements`." msgstr "" #: ../../source/overview_acl.rst:103 msgid "" "A token must be included in the request for any of these ACL elements to " "take effect." msgstr "" #: ../../source/overview_acl.rst:109 msgid ":" msgstr "" #: ../../source/overview_acl.rst:109 msgid "" "The specified user, provided a token scoped to the project is included in " "the request, is granted access. Access to the container is also granted when " "used in ``X-Container-Read``." msgstr "" #: ../../source/overview_acl.rst:114 msgid ":\\*" msgstr "" #: ../../source/overview_acl.rst:114 msgid "" "Any user with a role in the specified Keystone project has access. A token " "scoped to the project must be included in the request. Access to the " "container is also granted when used in ``X-Container-Read``." msgstr "" #: ../../source/overview_acl.rst:119 msgid "" "The specified user has access. A token for the user (scoped to any project) " "must be included in the request. Access to the container is also granted " "when used in ``X-Container-Read``." msgstr "" #: ../../source/overview_acl.rst:119 msgid "\\*:" msgstr "" #: ../../source/overview_acl.rst:124 msgid "" "Any user has access. Access to the container is also granted when used in " "``X-Container-Read``. The ``*:*`` element differs from the ``.r:*`` element " "because ``*:*`` requires that a valid token is included in the request " "whereas ``.r:*`` does not require a token. In addition, ``.r:*`` does not " "grant access to the container listing." msgstr "" #: ../../source/overview_acl.rst:124 msgid "\\*:\\*" msgstr "" #: ../../source/overview_acl.rst:134 msgid "" msgstr "" #: ../../source/overview_acl.rst:134 msgid "" "A user with the specified role *name* on the project within which the " "container is stored is granted access. A user token scoped to the project " "must be included in the request. Access to the container is also granted " "when used in ``X-Container-Read``." msgstr "" #: ../../source/overview_acl.rst:144 msgid "" "Keystone project (tenant) or user *names* (i.e., ``:``) must no longer be used because with the introduction of domains in " "Keystone, names are not globally unique. You should use user and project " "*ids* instead." msgstr "" #: ../../source/overview_acl.rst:150 msgid "" "For backwards compatibility, ACLs using names will be granted by " "keystoneauth when it can be established that the grantee project, the " "grantee user and the project being accessed are either not yet in a domain " "(e.g. the ``X-Auth-Token`` has been obtained via the Keystone V2 API) or are " "all in the default domain to which legacy accounts would have been migrated." msgstr "" #: ../../source/overview_acl.rst:161 msgid "TempAuth ACL Elements" msgstr "" #: ../../source/overview_acl.rst:163 msgid "" "The following table describes elements of an ACL that are supported only by " "TempAuth. TempAuth auth also supports the elements described in :ref:" "`acl_common_elements`." msgstr "" #: ../../source/overview_acl.rst:170 msgid "" msgstr "" #: ../../source/overview_acl.rst:170 msgid "" "The named user is granted access. The wildcard (\"*\") character is not " "supported. A token from the user must be included in the request." msgstr "" #: ../../source/overview_acl.rst:178 msgid "Container ACL Examples" msgstr "" #: ../../source/overview_acl.rst:180 msgid "" "Container ACLs may be set by including ``X-Container-Write`` and/or ``X-" "Container-Read`` headers with a PUT or a POST request to the container URL. " "The following examples use the ``swift`` command line client which support " "these headers being set via its ``--write-acl`` and ``--read-acl`` options." msgstr "" #: ../../source/overview_acl.rst:186 msgid "Example: Public Container" msgstr "" #: ../../source/overview_acl.rst:188 msgid "" "The following allows anybody to list objects in the ``www`` container and " "download objects. The users do not need to include a token in their request. " "This ACL is commonly referred to as making the container \"public\". It is " "useful when used with :ref:`staticweb`::" msgstr "" #: ../../source/overview_acl.rst:197 msgid "Example: Shared Writable Container" msgstr "" #: ../../source/overview_acl.rst:199 msgid "" "The following allows anybody to upload or download objects. However, to " "download an object, the exact name of the object must be known since users " "cannot list the objects in the container. The users must include a Keystone " "token in the upload request. However, it does not need to be scoped to the " "project associated with the container::" msgstr "" #: ../../source/overview_acl.rst:209 msgid "Example: Sharing a Container with Project Members" msgstr "" #: ../../source/overview_acl.rst:211 msgid "" "The following allows any member of the ``77b8f82565f14814bece56e50c4c240f`` " "project to upload and download objects or to list the contents of the " "``www`` container. A token scoped to the " "``77b8f82565f14814bece56e50c4c240f`` project must be included in the " "request::" msgstr "" #: ../../source/overview_acl.rst:221 msgid "Example: Sharing a Container with Users having a specified Role" msgstr "" #: ../../source/overview_acl.rst:223 msgid "" "The following allows any user that has been assigned the " "``my_read_access_role`` on the project within which the ``www`` container is " "stored to download objects or to list the contents of the ``www`` container. " "A user token scoped to the project must be included in the download or list " "request::" msgstr "" #: ../../source/overview_acl.rst:233 msgid "Example: Allowing a Referrer Domain to Download Objects" msgstr "" #: ../../source/overview_acl.rst:235 msgid "" "The following allows any request from the ``example.com`` domain to access " "an object in the container::" msgstr "" #: ../../source/overview_acl.rst:240 msgid "" "However, the request from the user **must** contain the appropriate " "`Referer` header as shown in this example request::" msgstr "" #: ../../source/overview_acl.rst:247 msgid "" "The `Referer` header is included in requests by many browsers. However, " "since it is easy to create a request with any desired value in the `Referer` " "header, the referrer ACL has very weak security." msgstr "" #: ../../source/overview_acl.rst:253 msgid "Example: Sharing a Container with Another User" msgstr "" #: ../../source/overview_acl.rst:255 msgid "" "Sharing a Container with another user requires the knowledge of few " "parameters regarding the users." msgstr "" #: ../../source/overview_acl.rst:258 msgid "The sharing user must know:" msgstr "" #: ../../source/overview_acl.rst:260 msgid "the ``OpenStack user id`` of the other user" msgstr "" #: ../../source/overview_acl.rst:262 msgid "The sharing user must communicate to the other user:" msgstr "" #: ../../source/overview_acl.rst:264 msgid "the name of the shared container" msgstr "" #: ../../source/overview_acl.rst:265 msgid "the ``OS_STORAGE_URL``" msgstr "" #: ../../source/overview_acl.rst:267 msgid "" "Usually the ``OS_STORAGE_URL`` is not exposed directly to the user because " "the ``swift client`` by default automatically construct the " "``OS_STORAGE_URL`` based on the User credential." msgstr "" #: ../../source/overview_acl.rst:271 msgid "" "We assume that in the current directory there are the two client environment " "script for the two users ``sharing.openrc`` and ``other.openrc``." msgstr "" #: ../../source/overview_acl.rst:275 msgid "The ``sharing.openrc`` should be similar to the following:" msgstr "" #: ../../source/overview_acl.rst:289 msgid "The ``other.openrc`` should be similar to the following:" msgstr "" #: ../../source/overview_acl.rst:303 msgid "" "For more information see `using the OpenStack RC file `_" msgstr "" #: ../../source/overview_acl.rst:306 msgid "First we figure out the other user id::" msgstr "" #: ../../source/overview_acl.rst:311 msgid "or alternatively::" msgstr "" #: ../../source/overview_acl.rst:316 msgid "Then we figure out the storage url of the sharing user::" msgstr "" #: ../../source/overview_acl.rst:321 msgid "" "Running as the sharing user create a shared container named ``shared`` in " "read-only mode with the other user using the proper acl::" msgstr "" #: ../../source/overview_acl.rst:327 msgid "Running as the sharing user create and upload a test file::" msgstr "" #: ../../source/overview_acl.rst:332 msgid "Running as the other user list the files in the ``shared`` container::" msgstr "" #: ../../source/overview_acl.rst:337 msgid "" "Running as the other user download the ``shared`` container in the ``/tmp`` " "directory::" msgstr "" #: ../../source/overview_acl.rst:348 msgid "Account ACLs" msgstr "" #: ../../source/overview_acl.rst:352 msgid "Account ACLs are not currently supported by Keystone auth" msgstr "" #: ../../source/overview_acl.rst:354 msgid "" "The ``X-Account-Access-Control`` header is used to specify account-level " "ACLs in a format specific to the auth system. These headers are visible and " "settable only by account owners (those for whom ``swift_owner`` is true). " "Behavior of account ACLs is auth-system-dependent. In the case of TempAuth, " "if an authenticated user has membership in a group which is listed in the " "ACL, then the user is allowed the access level of that ACL." msgstr "" #: ../../source/overview_acl.rst:362 msgid "" "Account ACLs use the \"V2\" ACL syntax, which is a JSON dictionary with keys " "named \"admin\", \"read-write\", and \"read-only\". (Note the case " "sensitivity.) An example value for the ``X-Account-Access-Control`` header " "looks like this, where ``a``, ``b`` and ``c`` are user names::" msgstr "" #: ../../source/overview_acl.rst:369 msgid "Keys may be absent (as shown in above example)." msgstr "" #: ../../source/overview_acl.rst:371 msgid "The recommended way to generate ACL strings is as follows::" msgstr "" #: ../../source/overview_acl.rst:377 msgid "" "Using the :func:`format_acl` method will ensure that JSON is encoded as " "ASCII (using e.g. '\\u1234' for Unicode). While it's permissible to " "manually send ``curl`` commands containing ``X-Account-Access-Control`` " "headers, you should exercise caution when doing so, due to the potential for " "human error." msgstr "" #: ../../source/overview_acl.rst:383 msgid "" "Within the JSON dictionary stored in ``X-Account-Access-Control``, the keys " "have the following meanings:" msgstr "" #: ../../source/overview_acl.rst:387 msgid "Access Level" msgstr "" #: ../../source/overview_acl.rst:389 msgid "" "These identities can read *everything* (except privileged headers) in the " "account. Specifically, a user with read-only account access can get a list " "of containers in the account, list the contents of any container, retrieve " "any object, and see the (non-privileged) headers of the account, any " "container, or any object." msgstr "" #: ../../source/overview_acl.rst:389 msgid "read-only" msgstr "" #: ../../source/overview_acl.rst:395 msgid "" "These identities can read or write (or create) any container. A user with " "read-write account access can create new containers, set any unprivileged " "container headers, overwrite objects, delete containers, etc. A read-write " "user can NOT set account headers (or perform any PUT/POST/DELETE requests on " "the account)." msgstr "" #: ../../source/overview_acl.rst:395 msgid "read-write" msgstr "" #: ../../source/overview_acl.rst:401 msgid "" "These identities have \"swift_owner\" privileges. A user with admin account " "access can do anything the account owner can, including setting account " "headers and any privileged headers -- and thus granting read-only, read-" "write, or admin access to other users." msgstr "" #: ../../source/overview_acl.rst:401 msgid "admin" msgstr "" #: ../../source/overview_acl.rst:409 msgid "" "For more details, see :mod:`swift.common.middleware.tempauth`. For details " "on the ACL format, see :mod:`swift.common.middleware.acl`." msgstr "" #: ../../source/overview_architecture.rst:3 msgid "Swift Architectural Overview" msgstr "" #: ../../source/overview_architecture.rst:7 msgid "Proxy Server" msgstr "" #: ../../source/overview_architecture.rst:9 msgid "" "The Proxy Server is responsible for tying together the rest of the Swift " "architecture. For each request, it will look up the location of the account, " "container, or object in the ring (see below) and route the request " "accordingly. For Erasure Code type policies, the Proxy Server is also " "responsible for encoding and decoding object data. See :doc:" "`overview_erasure_code` for complete information on Erasure Code support. " "The public API is also exposed through the Proxy Server." msgstr "" #: ../../source/overview_architecture.rst:17 msgid "" "A large number of failures are also handled in the Proxy Server. For " "example, if a server is unavailable for an object PUT, it will ask the ring " "for a handoff server and route there instead." msgstr "" #: ../../source/overview_architecture.rst:21 msgid "" "When objects are streamed to or from an object server, they are streamed " "directly through the proxy server to or from the user -- the proxy server " "does not spool them." msgstr "" #: ../../source/overview_architecture.rst:27 msgid "The Ring" msgstr "" #: ../../source/overview_architecture.rst:29 msgid "" "A ring represents a mapping between the names of entities stored on disk and " "their physical location. There are separate rings for accounts, containers, " "and one object ring per storage policy. When other components need to " "perform any operation on an object, container, or account, they need to " "interact with the appropriate ring to determine its location in the cluster." msgstr "" #: ../../source/overview_architecture.rst:35 msgid "" "The Ring maintains this mapping using zones, devices, partitions, and " "replicas. Each partition in the ring is replicated, by default, 3 times " "across the cluster, and the locations for a partition are stored in the " "mapping maintained by the ring. The ring is also responsible for determining " "which devices are used for handoff in failure scenarios." msgstr "" #: ../../source/overview_architecture.rst:41 msgid "" "The replicas of each partition will be isolated onto as many distinct " "regions, zones, servers and devices as the capacity of these failure domains " "allow. If there are less failure domains at a given tier than replicas of " "the partition assigned within a tier (e.g. a 3 replica cluster with 2 " "servers), or the available capacity across the failure domains within a tier " "are not well balanced it will not be possible to achieve both even capacity " "distribution (`balance`) as well as complete isolation of replicas across " "failure domains (`dispersion`). When this occurs the ring management tools " "will display a warning so that the operator can evaluate the cluster " "topology." msgstr "" #: ../../source/overview_architecture.rst:51 msgid "" "Data is evenly distributed across the capacity available in the cluster as " "described by the devices weight. Weights can be used to balance the " "distribution of partitions on drives across the cluster. This can be useful, " "for example, when different sized drives are used in a cluster. Device " "weights can also be used when adding or removing capacity or failure domains " "to control how many partitions are reassigned during a rebalance to be moved " "as soon as replication bandwidth allows." msgstr "" #: ../../source/overview_architecture.rst:60 msgid "" "Prior to Swift 2.1.0 it was not possible to restrict partition movement by " "device weight when adding new failure domains, and would allow extremely " "unbalanced rings. The greedy dispersion algorithm is now subject to the " "constraints of the physical capacity in the system, but can be adjusted with-" "in reason via the overload option. Artificially unbalancing the partition " "assignment without respect to capacity can introduce unexpected full devices " "when a given failure domain does not physically support its share of the " "used capacity in the tier." msgstr "" #: ../../source/overview_architecture.rst:69 msgid "" "When partitions need to be moved around (for example if a device is added to " "the cluster), the ring ensures that a minimum number of partitions are moved " "at a time, and only one replica of a partition is moved at a time." msgstr "" #: ../../source/overview_architecture.rst:73 msgid "" "The ring is used by the Proxy server and several background processes (like " "replication). See :doc:`overview_ring` for complete information on the ring." msgstr "" #: ../../source/overview_architecture.rst:79 msgid "Storage Policies" msgstr "" #: ../../source/overview_architecture.rst:81 msgid "" "Storage Policies provide a way for object storage providers to differentiate " "service levels, features and behaviors of a Swift deployment. Each Storage " "Policy configured in Swift is exposed to the client via an abstract name. " "Each device in the system is assigned to one or more Storage Policies. This " "is accomplished through the use of multiple object rings, where each Storage " "Policy has an independent object ring, which may include a subset of " "hardware implementing a particular differentiation." msgstr "" #: ../../source/overview_architecture.rst:89 msgid "" "For example, one might have the default policy with 3x replication, and " "create a second policy which, when applied to new containers only uses 2x " "replication. Another might add SSDs to a set of storage nodes and create a " "performance tier storage policy for certain containers to have their objects " "stored there. Yet another might be the use of Erasure Coding to define a " "cold-storage tier." msgstr "" #: ../../source/overview_architecture.rst:95 msgid "" "This mapping is then exposed on a per-container basis, where each container " "can be assigned a specific storage policy when it is created, which remains " "in effect for the lifetime of the container. Applications require minimal " "awareness of storage policies to use them; once a container has been created " "with a specific policy, all objects stored in it will be done so in " "accordance with that policy." msgstr "" #: ../../source/overview_architecture.rst:102 msgid "" "The Storage Policies feature is implemented throughout the entire code base " "so it is an important concept in understanding Swift architecture." msgstr "" #: ../../source/overview_architecture.rst:105 msgid "" "See :doc:`overview_policies` for complete information on storage policies." msgstr "" #: ../../source/overview_architecture.rst:111 msgid "" "The Object Server is a very simple blob storage server that can store, " "retrieve and delete objects stored on local devices. Objects are stored as " "binary files on the filesystem with metadata stored in the file's extended " "attributes (xattrs). This requires that the underlying filesystem choice for " "object servers support xattrs on files. Some filesystems, like ext3, have " "xattrs turned off by default." msgstr "" #: ../../source/overview_architecture.rst:118 msgid "" "Each object is stored using a path derived from the object name's hash and " "the operation's timestamp. Last write always wins, and ensures that the " "latest object version will be served. A deletion is also treated as a " "version of the file (a 0 byte file ending with \".ts\", which stands for " "tombstone). This ensures that deleted files are replicated correctly and " "older versions don't magically reappear due to failure scenarios." msgstr "" #: ../../source/overview_architecture.rst:129 msgid "" "The Container Server's primary job is to handle listings of objects. It " "doesn't know where those object's are, just what objects are in a specific " "container. The listings are stored as sqlite database files, and replicated " "across the cluster similar to how objects are. Statistics are also tracked " "that include the total number of objects, and total storage usage for that " "container." msgstr "" #: ../../source/overview_architecture.rst:140 msgid "" "The Account Server is very similar to the Container Server, excepting that " "it is responsible for listings of containers rather than objects." msgstr "" #: ../../source/overview_architecture.rst:145 msgid "Replication" msgstr "" #: ../../source/overview_architecture.rst:147 msgid "" "Replication is designed to keep the system in a consistent state in the face " "of temporary error conditions like network outages or drive failures." msgstr "" #: ../../source/overview_architecture.rst:150 msgid "" "The replication processes compare local data with each remote copy to ensure " "they all contain the latest version. Object replication uses a hash list to " "quickly compare subsections of each partition, and container and account " "replication use a combination of hashes and shared high water marks." msgstr "" #: ../../source/overview_architecture.rst:155 msgid "" "Replication updates are push based. For object replication, updating is just " "a matter of rsyncing files to the peer. Account and container replication " "push missing records over HTTP or rsync whole database files." msgstr "" #: ../../source/overview_architecture.rst:159 msgid "" "The replicator also ensures that data is removed from the system. When an " "item (object, container, or account) is deleted, a tombstone is set as the " "latest version of the item. The replicator will see the tombstone and ensure " "that the item is removed from the entire system." msgstr "" #: ../../source/overview_architecture.rst:164 msgid "" "See :doc:`overview_replication` for complete information on replication." msgstr "" #: ../../source/overview_architecture.rst:168 msgid "Reconstruction" msgstr "" #: ../../source/overview_architecture.rst:170 msgid "" "The reconstructor is used by Erasure Code policies and is analogous to the " "replicator for Replication type policies. See :doc:`overview_erasure_code` " "for complete information on both Erasure Code support as well as the " "reconstructor." msgstr "" #: ../../source/overview_architecture.rst:179 msgid "Updaters" msgstr "" #: ../../source/overview_architecture.rst:181 msgid "" "There are times when container or account data can not be immediately " "updated. This usually occurs during failure scenarios or periods of high " "load. If an update fails, the update is queued locally on the filesystem, " "and the updater will process the failed updates. This is where an eventual " "consistency window will most likely come in to play. For example, suppose a " "container server is under load and a new object is put in to the system. The " "object will be immediately available for reads as soon as the proxy server " "responds to the client with success. However, the container server did not " "update the object listing, and so the update would be queued for a later " "update. Container listings, therefore, may not immediately contain the " "object." msgstr "" #: ../../source/overview_architecture.rst:192 msgid "" "In practice, the consistency window is only as large as the frequency at " "which the updater runs and may not even be noticed as the proxy server will " "route listing requests to the first container server which responds. The " "server under load may not be the one that serves subsequent listing requests " "-- one of the other two replicas may handle the listing." msgstr "" #: ../../source/overview_architecture.rst:200 msgid "Auditors" msgstr "" #: ../../source/overview_architecture.rst:202 msgid "" "Auditors crawl the local server checking the integrity of the objects, " "containers, and accounts. If corruption is found (in the case of bit rot, " "for example), the file is quarantined, and replication will replace the bad " "file from another replica. If other errors are found they are logged (for " "example, an object's listing can't be found on any container server it " "should be)." msgstr "" #: ../../source/overview_auth.rst:3 msgid "The Auth System" msgstr "" #: ../../source/overview_auth.rst:9 msgid "" "Swift supports a number of auth systems that share the following common " "characteristics:" msgstr "" #: ../../source/overview_auth.rst:12 msgid "" "The authentication/authorization part can be an external system or a " "subsystem run within Swift as WSGI middleware" msgstr "" #: ../../source/overview_auth.rst:14 msgid "The user of Swift passes in an auth token with each request" msgstr "" #: ../../source/overview_auth.rst:15 msgid "" "Swift validates each token with the external auth system or auth subsystem " "and caches the result" msgstr "" #: ../../source/overview_auth.rst:17 msgid "The token does not change from request to request, but does expire" msgstr "" #: ../../source/overview_auth.rst:19 msgid "" "The token can be passed into Swift using the X-Auth-Token or the X-Storage-" "Token header. Both have the same format: just a simple string representing " "the token. Some auth systems use UUID tokens, some an MD5 hash of something " "unique, some use \"something else\" but the salient point is that the token " "is a string which can be sent as-is back to the auth system for validation." msgstr "" #: ../../source/overview_auth.rst:26 msgid "" "Swift will make calls to the auth system, giving the auth token to be " "validated. For a valid token, the auth system responds with an overall " "expiration time in seconds from now. To avoid the overhead in validating the " "same token over and over again, Swift will cache the token for a " "configurable time, but no longer than the expiration time." msgstr "" #: ../../source/overview_auth.rst:33 msgid "The Swift project includes two auth systems:" msgstr "" #: ../../source/overview_auth.rst:35 msgid ":ref:`temp_auth`" msgstr "" #: ../../source/overview_auth.rst:36 msgid ":ref:`keystone_auth`" msgstr "" #: ../../source/overview_auth.rst:38 msgid "" "It is also possible to write your own auth system as described in :ref:" "`extending_auth`." msgstr "" #: ../../source/overview_auth.rst:47 msgid "" "TempAuth is used primarily in Swift's functional test environment and can be " "used in other test environments (such as :doc:`development_saio`). It is not " "recommended to use TempAuth in a production system. However, TempAuth is " "fully functional and can be used as a model to develop your own auth system." msgstr "" #: ../../source/overview_auth.rst:52 msgid "" "TempAuth has the concept of admin and non-admin users within an account. " "Admin users can do anything within the account. Non-admin users can only " "perform read operations. However, some privileged metadata such as X-" "Container-Sync-Key is not accessible to non-admin users." msgstr "" #: ../../source/overview_auth.rst:58 msgid "" "Users with the special group ``.reseller_admin`` can operate on any account. " "For an example usage please see :mod:`swift.common.middleware.tempauth`. If " "a request is coming from a reseller the auth system sets the request environ " "reseller_request to True. This can be used by other middlewares." msgstr "" #: ../../source/overview_auth.rst:63 msgid "" "Other users may be granted the ability to perform operations on an account " "or container via ACLs. TempAuth supports two types of ACL:" msgstr "" #: ../../source/overview_auth.rst:66 msgid "" "Per container ACLs based on the container's ``X-Container-Read`` and ``X-" "Container-Write`` metadata. See :ref:`container_acls` for more information." msgstr "" #: ../../source/overview_auth.rst:70 msgid "" "Per account ACLs based on the account's ``X-Account-Access-Control`` " "metadata. For more information see :ref:`account_acls`." msgstr "" #: ../../source/overview_auth.rst:73 msgid "TempAuth will now allow OPTIONS requests to go through without a token." msgstr "" #: ../../source/overview_auth.rst:75 msgid "" "The TempAuth middleware is responsible for creating its own tokens. A user " "makes a request containing their username and password and TempAuth responds " "with a token. This token is then used to perform subsequent requests on the " "user's account, containers and objects." msgstr "" #: ../../source/overview_auth.rst:84 msgid "Keystone Auth" msgstr "" #: ../../source/overview_auth.rst:86 msgid "" "Swift is able to authenticate against OpenStack Keystone_. In this " "environment, Keystone is responsible for creating and validating tokens. " "The :ref:`keystoneauth` middleware is responsible for implementing the auth " "system within Swift as described here." msgstr "" #: ../../source/overview_auth.rst:91 msgid "" "The :ref:`keystoneauth` middleware supports per container based ACLs on the " "container's ``X-Container-Read`` and ``X-Container-Write`` metadata. For " "more information see :ref:`container_acls`." msgstr "" #: ../../source/overview_auth.rst:95 msgid "The account-level ACL is not supported by Keystone auth." msgstr "" #: ../../source/overview_auth.rst:97 msgid "" "In order to use the ``keystoneauth`` middleware the ``auth_token`` " "middleware from KeystoneMiddleware_ will need to be configured." msgstr "" #: ../../source/overview_auth.rst:100 msgid "" "The ``authtoken`` middleware performs the authentication token validation " "and retrieves actual user authentication information. It can be found in the " "KeystoneMiddleware_ distribution." msgstr "" #: ../../source/overview_auth.rst:104 msgid "" "The :ref:`keystoneauth` middleware performs authorization and mapping the " "Keystone roles to Swift's ACLs." msgstr "" #: ../../source/overview_auth.rst:113 msgid "Configuring Swift to use Keystone" msgstr "" #: ../../source/overview_auth.rst:115 msgid "" "Configuring Swift to use Keystone_ is relatively straightforward. The first " "step is to ensure that you have the ``auth_token`` middleware installed. It " "can either be dropped in your python path or installed via the " "KeystoneMiddleware_ package." msgstr "" #: ../../source/overview_auth.rst:121 msgid "" "You need at first make sure you have a service endpoint of type ``object-" "store`` in Keystone pointing to your Swift proxy. For example having this in " "your ``/etc/keystone/default_catalog.templates`` ::" msgstr "" #: ../../source/overview_auth.rst:130 msgid "" "On your Swift proxy server you will want to adjust your main pipeline and " "add auth_token and keystoneauth in your ``/etc/swift/proxy-server.conf`` " "like this ::" msgstr "" #: ../../source/overview_auth.rst:137 msgid "add the configuration for the authtoken middleware::" msgstr "" #: ../../source/overview_auth.rst:153 msgid "" "The actual values for these variables will need to be set depending on your " "situation, but in short:" msgstr "" #: ../../source/overview_auth.rst:156 msgid "" "``www_authenticate_uri`` should point to a Keystone service from which users " "may retrieve tokens. This value is used in the `WWW-Authenticate` header " "that auth_token sends with any denial response." msgstr "" #: ../../source/overview_auth.rst:159 msgid "" "``auth_url`` points to the Keystone Admin service. This information is used " "by the middleware to actually query Keystone about the validity of the " "authentication tokens. It is not necessary to append any Keystone API " "version number to this URI." msgstr "" #: ../../source/overview_auth.rst:163 msgid "" "The auth credentials (``project_domain_id``, ``user_domain_id``, " "``username``, ``project_name``, ``password``) will be used to retrieve an " "admin token. That token will be used to authorize user tokens behind the " "scenes. These credentials must match the Keystone credentials for the Swift " "service. The example values shown here assume a user named 'swift' with " "admin role on a project named 'service', both being in the Keystone domain " "with id 'default'. Refer to the `KeystoneMiddleware documentation `_ for other examples." msgstr "" #: ../../source/overview_auth.rst:173 msgid "" "``cache`` is set to ``swift.cache``. This means that the middleware will get " "the Swift memcache from the request environment." msgstr "" #: ../../source/overview_auth.rst:175 msgid "" "``include_service_catalog`` defaults to ``True`` if not set. This means that " "when validating a token, the service catalog is retrieved and stored in the " "``X-Service-Catalog`` header. This is required if you use access-rules in " "Application Credentials. You may also need to increase `max_header_size`." msgstr "" #: ../../source/overview_auth.rst:184 msgid "" "The authtoken config variable ``delay_auth_decision`` must be set to " "``True``. The default is ``False``, but that breaks public access, :ref:" "`staticweb`, :ref:`formpost`, :ref:`tempurl`, and authenticated capabilities " "requests (using :ref:`discoverability`)." msgstr "" #: ../../source/overview_auth.rst:189 msgid "" "and you can finally add the keystoneauth configuration. Here is a simple " "configuration::" msgstr "" #: ../../source/overview_auth.rst:196 msgid "" "Use an appropriate list of roles in operator_roles. For example, in some " "systems, the role ``_member_`` or ``Member`` is used to indicate that the " "user is allowed to operate on project resources." msgstr "" #: ../../source/overview_auth.rst:201 msgid "OpenStack Service Using Composite Tokens" msgstr "" #: ../../source/overview_auth.rst:203 msgid "" "Some OpenStack services such as Cinder and Glance may use a \"service " "account\". In this mode, you configure a separate account where the service " "stores project data that it manages. This account is not used directly by " "the end-user. Instead, all access is done through the service." msgstr "" #: ../../source/overview_auth.rst:208 msgid "" "To access the \"service\" account, the service must present two tokens: one " "from the end-user and another from its own service user. Only when both " "tokens are present can the account be accessed. This section describes how " "to set the configuration options to correctly control access to both the " "\"normal\" and \"service\" accounts." msgstr "" #: ../../source/overview_auth.rst:214 msgid "" "In this example, end users use the ``AUTH_`` prefix in account names, " "whereas services use the ``SERVICE_`` prefix::" msgstr "" #: ../../source/overview_auth.rst:223 msgid "" "The actual values for these variable will need to be set depending on your " "situation as follows:" msgstr "" #: ../../source/overview_auth.rst:226 msgid "" "The first item in the reseller_prefix list must match Keystone's endpoint " "(see ``/etc/keystone/default_catalog.templates`` above). Normally this is " "``AUTH``." msgstr "" #: ../../source/overview_auth.rst:229 msgid "" "The second item in the reseller_prefix list is the prefix used by the " "OpenStack services(s). You must configure this value (``SERVICE`` in the " "example) with whatever the other OpenStack service(s) use." msgstr "" #: ../../source/overview_auth.rst:232 msgid "" "Set the operator_roles option to contain a role or roles that end-user's " "have on project's they use." msgstr "" #: ../../source/overview_auth.rst:234 msgid "" "Set the SERVICE_service_roles value to a role or roles that only the " "OpenStack service user has. Do not use a role that is assigned to \"normal\" " "end users. In this example, the role ``service`` is used. The service user " "is granted this role to a *single* project only. You do not need to make the " "service user a member of every project." msgstr "" #: ../../source/overview_auth.rst:240 msgid "This configuration works as follows:" msgstr "" #: ../../source/overview_auth.rst:242 msgid "" "The end-user presents a user token to an OpenStack service. The service then " "makes a Swift request to the account with the ``SERVICE`` prefix." msgstr "" #: ../../source/overview_auth.rst:244 msgid "" "The service forwards the original user token with the request. It also adds " "it's own service token." msgstr "" #: ../../source/overview_auth.rst:246 msgid "" "Swift validates both tokens. When validated, the user token gives the " "``admin`` or ``swiftoperator`` role(s). When validated, the service token " "gives the ``service`` role." msgstr "" #: ../../source/overview_auth.rst:249 msgid "Swift interprets the above configuration as follows:" msgstr "" #: ../../source/overview_auth.rst:251 msgid "Did the user token provide one of the roles listed in operator_roles?" msgstr "" #: ../../source/overview_auth.rst:252 msgid "" "Did the service token have the ``service`` role as described by the " "``SERVICE_service_roles`` options." msgstr "" #: ../../source/overview_auth.rst:255 msgid "" "If both conditions are met, the request is granted. Otherwise, Swift rejects " "the request." msgstr "" #: ../../source/overview_auth.rst:258 msgid "" "In the above example, all services share the same account. You can separate " "each service into its own account. For example, the following provides a " "dedicated account for each of the Glance and Cinder services. In addition, " "you must assign the ``glance_service`` and ``cinder_service`` to the " "appropriate service users::" msgstr "" #: ../../source/overview_auth.rst:273 msgid "Access control using keystoneauth" msgstr "" #: ../../source/overview_auth.rst:275 msgid "" "By default the only users able to perform operations (e.g. create a " "container) on an account are those having a Keystone role for the " "corresponding Keystone project that matches one of the roles specified in " "the ``operator_roles`` option." msgstr "" #: ../../source/overview_auth.rst:280 msgid "" "Users who have one of the ``operator_roles`` will be able to set container " "ACLs to grant other users permission to read and/or write objects in " "specific containers, using ``X-Container-Read`` and ``X-Container-Write`` " "headers respectively. In addition to the ACL formats described :mod:`here " "`, keystoneauth supports ACLs using the format::" msgstr "" #: ../../source/overview_auth.rst:289 msgid "" "where ``other_project_id`` is the UUID of a Keystone project and " "``other_user_id`` is the UUID of a Keystone user. This will allow the other " "user to access a container provided their token is scoped on the other " "project. Both ``other_project_id`` and ``other_user_id`` may be replaced " "with the wildcard character ``*`` which will match any project or user " "respectively." msgstr "" #: ../../source/overview_auth.rst:295 msgid "Be sure to use Keystone UUIDs rather than names in container ACLs." msgstr "" #: ../../source/overview_auth.rst:299 msgid "" "For backwards compatibility, keystoneauth will by default grant container " "ACLs expressed as ``other_project_name:other_user_name`` (i.e. using " "Keystone names rather than UUIDs) in the special case when both the other " "project and the other user are in Keystone's default domain and the project " "being accessed is also in the default domain." msgstr "" #: ../../source/overview_auth.rst:305 msgid "For further information see :ref:`keystoneauth`" msgstr "" #: ../../source/overview_auth.rst:307 msgid "" "Users with the Keystone role defined in ``reseller_admin_role`` " "(``ResellerAdmin`` by default) can operate on any account. The auth system " "sets the request environ reseller_request to True if a request is coming " "from a user with this role. This can be used by other middlewares." msgstr "" #: ../../source/overview_auth.rst:313 msgid "Troubleshooting tips for keystoneauth deployment" msgstr "" #: ../../source/overview_auth.rst:315 msgid "" "Some common mistakes can result in API requests failing when first deploying " "keystone with Swift:" msgstr "" #: ../../source/overview_auth.rst:318 msgid "Incorrect configuration of the Swift endpoint in the Keystone service." msgstr "" #: ../../source/overview_auth.rst:320 msgid "" "By default, keystoneauth expects the account part of a URL to have the form " "``AUTH_``. Sometimes the ``AUTH_`` prefix is missed " "when configuring Swift endpoints in Keystone, as described in the `Install " "Guide `_. This is easily diagnosed by inspecting " "the proxy-server log file for a failed request URL and checking that the URL " "includes the ``AUTH_`` prefix (or whatever reseller prefix may have been " "configured for keystoneauth)::" msgstr "" #: ../../source/overview_auth.rst:335 msgid "" "Incorrect configuration of the ``authtoken`` middleware options in the Swift " "proxy server." msgstr "" #: ../../source/overview_auth.rst:338 msgid "" "The ``authtoken`` middleware communicates with the Keystone service to " "validate tokens that are presented with client requests. To do this " "``authtoken`` must authenticate itself with Keystone using the credentials " "configured in the ``[filter:authtoken]`` section of ``/etc/swift/proxy-" "server.conf``. Errors in these credentials can result in ``authtoken`` " "failing to validate tokens and may be revealed in the proxy server logs by a " "message such as::" msgstr "" #: ../../source/overview_auth.rst:350 msgid "" "More detailed log messaging may be seen by setting the ``authtoken`` option " "``log_level = debug``." msgstr "" #: ../../source/overview_auth.rst:353 msgid "" "The ``authtoken`` configuration options may be checked by attempting to use " "them to communicate directly with Keystone using an ``openstack`` command " "line. For example, given the ``authtoken`` configuration sample shown in :" "ref:`configuring_keystone_auth`, the following command should return a " "service catalog::" msgstr "" #: ../../source/overview_auth.rst:364 msgid "" "If this ``openstack`` command fails then it is likely that there is a " "problem with the ``authtoken`` configuration." msgstr "" #: ../../source/overview_auth.rst:371 msgid "Extending Auth" msgstr "" #: ../../source/overview_auth.rst:373 msgid "" "TempAuth is written as wsgi middleware, so implementing your own auth is as " "easy as writing new wsgi middleware, and plugging it in to the proxy server." msgstr "" #: ../../source/overview_auth.rst:376 msgid "" "See :doc:`development_auth` for detailed information on extending the auth " "system." msgstr "" #: ../../source/overview_backing_store.rst:4 msgid "Using Swift as Backing Store for Service Data" msgstr "" #: ../../source/overview_backing_store.rst:8 msgid "Background" msgstr "" #: ../../source/overview_backing_store.rst:10 msgid "" "This section provides guidance to OpenStack Service developers for how to " "store your users' data in Swift. An example of this is that a user requests " "that Nova save a snapshot of a VM. Nova passes the request to Glance, Glance " "writes the image to a Swift container as a set of objects." msgstr "" #: ../../source/overview_backing_store.rst:15 msgid "" "Throughout this section, the following terminology and concepts are used:" msgstr "" #: ../../source/overview_backing_store.rst:17 msgid "" "User or end-user. This is a person making a request that will result in an " "OpenStack Service making a request to Swift." msgstr "" #: ../../source/overview_backing_store.rst:20 msgid "" "Project (also known as Tenant). This is the unit of resource ownership. " "While data such as snapshot images or block volume backups may be stored as " "a result of an end-user's request, the reality is that these are project " "data." msgstr "" #: ../../source/overview_backing_store.rst:25 msgid "" "Service. This is a program or system used by end-users. Specifically, it is " "any program or system that is capable of receiving end-user's tokens and " "validating the token with the Keystone Service and has a need to store data " "in Swift. Glance and Cinder are examples of such Services." msgstr "" #: ../../source/overview_backing_store.rst:30 msgid "" "Service User. This is a Keystone user that has been assigned to a Service. " "This allows the Service to generate and use its own tokens so that it can " "interact with other Services as itself." msgstr "" #: ../../source/overview_backing_store.rst:34 msgid "" "Service Project. This is a project (tenant) that is associated with a " "Service. There may be a single project shared by many Services or there may " "be a project dedicated to each Service. In this document, the main purpose " "of the Service Project is to allow the system operator to configure specific " "roles for each Service User." msgstr "" #: ../../source/overview_backing_store.rst:42 msgid "Alternate Backing Store Schemes" msgstr "" #: ../../source/overview_backing_store.rst:44 msgid "There are three schemes described here:" msgstr "" #: ../../source/overview_backing_store.rst:46 msgid "Dedicated Service Account (Single Tenant)" msgstr "" #: ../../source/overview_backing_store.rst:48 msgid "" "Your Service has a dedicated Service Project (hence a single dedicated Swift " "account). Data for all users and projects are stored in this account. Your " "Service must have a user assigned to it (the Service User). When you have " "data to store on behalf of one of your users, you use the Service User " "credentials to get a token for the Service Project and request Swift to " "store the data in the Service Project." msgstr "" #: ../../source/overview_backing_store.rst:55 msgid "" "With this scheme, data for all users is stored in a single account. This is " "transparent to your users and since the credentials for the Service User are " "typically not shared with anyone, your users' cannot access their data by " "making a request directly to Swift. However, since data belonging to all " "users is stored in one account, it presents a single point of vulnerably to " "accidental deletion or a leak of the service-user credentials." msgstr "" #: ../../source/overview_backing_store.rst:63 msgid "Multi Project (Multi Tenant)" msgstr "" #: ../../source/overview_backing_store.rst:65 msgid "" "Data belonging to a project is stored in the Swift account associated with " "the project. Users make requests to your Service using a token scoped to a " "project in the normal way. You can then use this same token to store the " "user data in the project's Swift account." msgstr "" #: ../../source/overview_backing_store.rst:70 msgid "" "The effect is that data is stored in multiple projects (aka tenants). Hence " "this scheme has been known as the \"multi tenant\" scheme." msgstr "" #: ../../source/overview_backing_store.rst:73 msgid "" "With this scheme, access is controlled by Keystone. The users must have a " "role that allows them to perform the request to your Service. In addition, " "they must have a role that also allows them to store data in the Swift " "account. By default, the admin or swiftoperator roles are used for this " "purpose (specific systems may use other role names). If the user does not " "have the appropriate roles, when your Service attempts to access Swift, the " "operation will fail." msgstr "" #: ../../source/overview_backing_store.rst:81 msgid "" "Since you are using the user's token to access the data, it follows that the " "user can use the same token to access Swift directly -- bypassing your " "Service. When end-users are browsing containers, they will also see your " "Service's containers and objects -- and may potentially delete the data. " "Conversely, there is no single account where all data so leakage of " "credentials will only affect a single project/tenant." msgstr "" #: ../../source/overview_backing_store.rst:88 msgid "Service Prefix Account" msgstr "" #: ../../source/overview_backing_store.rst:90 msgid "" "Data belonging to a project is stored in a Swift account associated with the " "project. This is similar to the Multi Project scheme described above. " "However, the Swift account is different than the account that users access. " "Specifically, it has a different account prefix. For example, for the " "project 1234, the user account is named AUTH_1234. Your Service uses a " "different account, for example, SERVICE_1234." msgstr "" #: ../../source/overview_backing_store.rst:97 msgid "" "To access the SERVICE_1234 account, you must present two tokens: the user's " "token is put in the X-Auth-Token header. You present your Service's token in " "the X-Service-Token header. Swift is configured such that only when both " "tokens are presented will it allow access. Specifically, the user cannot " "bypass your Service because they only have their own token. Conversely, your " "Service can only access the data while it has a copy of the user's token -- " "the Service's token by itself will not grant access." msgstr "" #: ../../source/overview_backing_store.rst:105 msgid "" "The data stored in the Service Prefix Account cannot be seen by end-users. " "So they cannot delete this data -- they can only access the data if they " "make a request through your Service. The data is also more secure. To make " "an unauthorized access, someone would need to compromise both an end-user's " "and your Service User credentials. Even then, this would only expose one " "project -- not other projects." msgstr "" #: ../../source/overview_backing_store.rst:112 msgid "" "The Service Prefix Account scheme combines features of the Dedicated Service " "Account and Multi Project schemes. It has the private, dedicated, " "characteristics of the Dedicated Service Account scheme but does not present " "a single point of attack. Using the Service Prefix Account scheme is a " "little more involved than the other schemes, so the rest of this document " "describes it more detail." msgstr "" #: ../../source/overview_backing_store.rst:121 msgid "Service Prefix Account Overview" msgstr "" #: ../../source/overview_backing_store.rst:123 msgid "" "The following diagram shows the flow through the system from the end-user, " "to your Service and then onto Swift::" msgstr "" #: ../../source/overview_backing_store.rst:139 msgid "The sequence of events and actions are as follows:" msgstr "" #: ../../source/overview_backing_store.rst:141 msgid "Request arrives at your Service" msgstr "" #: ../../source/overview_backing_store.rst:143 msgid "" "The is validated by the keystonemiddleware.auth_token " "middleware. The user's role(s) are used to determine if the user can perform " "the request. See :doc:`overview_auth` for technical information on the " "authentication system." msgstr "" #: ../../source/overview_backing_store.rst:148 msgid "" "As part of this request, your Service needs to access Swift (either to write " "or read a container or object). In this example, you want to perform a PUT " "on /." msgstr "" #: ../../source/overview_backing_store.rst:152 msgid "" "In the wsgi environment, the auth_token module will have populated the " "HTTP_X_SERVICE_CATALOG item. This lists the Swift endpoint and account. This " "is something such as https:///v1/AUTH_1234 where ``AUTH_`` is a " "prefix and ``1234`` is the project id." msgstr "" #: ../../source/overview_backing_store.rst:157 msgid "" "The ``AUTH_`` prefix is the default value. However, your system may use a " "different prefix. To determine the actual prefix, search for the first " "underscore ('_') character in the account name. If there is no underscore " "character in the account name, this means there is no prefix." msgstr "" #: ../../source/overview_backing_store.rst:162 msgid "" "Your Service should have a configuration parameter that provides the " "appropriate prefix to use for storing data in Swift. There is more " "discussion of this below, but for now assume the prefix is ``SERVICE_``." msgstr "" #: ../../source/overview_backing_store.rst:166 msgid "" "Replace the prefix (``AUTH_`` in above examples) in the path with " "``SERVICE_``, so the full URL to access the object becomes https:///" "v1/SERVICE_1234//." msgstr "" #: ../../source/overview_backing_store.rst:170 msgid "" "Make the request to Swift, using this URL. In the X-Auth-Token header place " "a copy of the . In the X-Service-Token header, place your " "Service's token. If you use python-swiftclient you can achieve this by:" msgstr "" #: ../../source/overview_backing_store.rst:175 msgid "Putting the URL in the ``preauthurl`` parameter" msgstr "" #: ../../source/overview_backing_store.rst:176 msgid "Putting the in ``preauthtoken`` parameter" msgstr "" #: ../../source/overview_backing_store.rst:177 msgid "Adding the X-Service-Token to the ``headers`` parameter" msgstr "" #: ../../source/overview_backing_store.rst:181 msgid "Using the HTTP_X_SERVICE_CATALOG to get Swift Account Name" msgstr "" #: ../../source/overview_backing_store.rst:183 msgid "" "The auth_token middleware populates the wsgi environment with information " "when it validates the user's token. The HTTP_X_SERVICE_CATALOG item is a " "JSON string containing details of the OpenStack endpoints. For Swift, this " "also contains the project's Swift account name. Here is an example of a " "catalog entry for Swift::" msgstr "" #: ../../source/overview_backing_store.rst:207 msgid "To get the End-user's account:" msgstr "" #: ../../source/overview_backing_store.rst:209 msgid "Look for an entry with ``type`` of ``object-store``" msgstr "" #: ../../source/overview_backing_store.rst:211 msgid "" "If there are several regions, there will be several endpoints. Use the " "appropriate region name and select the ``publicURL`` item." msgstr "" #: ../../source/overview_backing_store.rst:214 msgid "" "The Swift account name is the final item in the path (\"AUTH_1234\" in this " "example)." msgstr "" #: ../../source/overview_backing_store.rst:218 msgid "Getting a Service Token" msgstr "" #: ../../source/overview_backing_store.rst:220 msgid "" "A Service Token is no different than any other token and is requested from " "Keystone using user credentials and project in the usual way. The core " "requirement is that your Service User has the appropriate role. In practice:" msgstr "" #: ../../source/overview_backing_store.rst:224 msgid "Your Service must have a user assigned to it (the Service User)." msgstr "" #: ../../source/overview_backing_store.rst:226 msgid "Your Service has a project assigned to it (the Service Project)." msgstr "" #: ../../source/overview_backing_store.rst:228 msgid "" "The Service User must have a role on the Service Project. This role is " "distinct from any of the normal end-user roles." msgstr "" #: ../../source/overview_backing_store.rst:231 msgid "" "The role used must the role configured in the /etc/swift/proxy-server.conf. " "This is the ``_service_roles`` option. In this example, the role is " "the ``service`` role::" msgstr "" #: ../../source/overview_backing_store.rst:239 msgid "" "The ``service`` role should only be granted to OpenStack Services. It should " "not be granted to users." msgstr "" #: ../../source/overview_backing_store.rst:243 msgid "Single or multiple Service Prefixes?" msgstr "" #: ../../source/overview_backing_store.rst:245 msgid "" "Most of the examples used in this document used a single prefix. The prefix, " "``SERVICE`` was used. By using a single prefix, an operator is allowing all " "OpenStack Services to share the same account for data associated with a " "given project. For test systems or deployments well protected on private " "firewalled networks, this is appropriate." msgstr "" #: ../../source/overview_backing_store.rst:251 msgid "" "However, if one Service is compromised, that Service can access data created " "by another Service. To prevent this, multiple Service Prefixes may be used. " "This also requires that the operator configure multiple service roles. For " "example, in a system that has Glance and Cinder, the following Swift " "configuration could be used::" msgstr "" #: ../../source/overview_backing_store.rst:262 msgid "" "The Service User for Glance would be granted the ``image_service`` role on " "its Service Project and the Cinder Service user is granted the " "``block_service`` role on its project. In this scheme, if the Cinder Service " "was compromised, it would not be able to access any Glance data." msgstr "" #: ../../source/overview_backing_store.rst:268 msgid "Container Naming" msgstr "" #: ../../source/overview_backing_store.rst:270 msgid "" "Since a single Service Prefix is possible, container names should be " "prefixed with a unique string to prevent name clashes. We suggest you use " "the service type field (as used in the service catalog). For example, The " "Glance Service would use \"image\" as a prefix." msgstr "" #: ../../source/overview_container_sharding.rst:5 msgid "Container Sharding" msgstr "" #: ../../source/overview_container_sharding.rst:7 msgid "" "Container sharding is an operator controlled feature that may be used to " "shard very large container databases into a number of smaller shard " "containers" msgstr "" #: ../../source/overview_container_sharding.rst:12 msgid "" "It is strongly recommended that operators gain experience of sharding " "containers in a non-production cluster before using in production." msgstr "" #: ../../source/overview_container_sharding.rst:15 msgid "" "The sharding process involves moving all sharding container database records " "via the container replication engine; the time taken to complete sharding is " "dependent upon the existing cluster load and the performance of the " "container database being sharded." msgstr "" #: ../../source/overview_container_sharding.rst:20 msgid "" "There is currently no documented process for reversing the sharding process " "once sharding has been enabled." msgstr "" #: ../../source/overview_container_sharding.rst:27 msgid "" "The metadata for each container in Swift is stored in an SQLite database. " "This metadata includes: information about the container such as its name, " "modification time and current object count; user metadata that may been " "written to the container by clients; a record of every object in the " "container. The container database object records are used to generate " "container listings in response to container GET requests; each object record " "stores the object's name, size, hash and content-type as well as associated " "timestamps." msgstr "" #: ../../source/overview_container_sharding.rst:35 msgid "" "As the number of objects in a container increases then the number of object " "records in the container database increases. Eventually the container " "database performance starts to degrade and the time taken to update an " "object record increases. This can result in object updates timing out, with " "a corresponding increase in the backlog of pending :ref:`asynchronous " "updates ` on object servers. Container databases are " "typically replicated on several nodes and any database performance " "degradation can also result in longer :doc:`container replication " "` times." msgstr "" #: ../../source/overview_container_sharding.rst:44 msgid "" "The point at which container database performance starts to degrade depends " "upon the choice of hardware in the container ring. Anecdotal evidence " "suggests that containers with tens of millions of object records have " "noticeably degraded performance." msgstr "" #: ../../source/overview_container_sharding.rst:49 msgid "" "This performance degradation can be avoided by ensuring that clients use an " "object naming scheme that disperses objects across a number of containers " "thereby distributing load across a number of container databases. However, " "that is not always desirable nor is it under the control of the cluster " "operator." msgstr "" #: ../../source/overview_container_sharding.rst:54 msgid "" "Swift's container sharding feature provides the operator with a mechanism to " "distribute the load on a single client-visible container across multiple, " "hidden, shard containers, each of which stores a subset of the container's " "object records. Clients are unaware of container sharding; clients continue " "to use the same API to access a container that, if sharded, maps to a number " "of shard containers within the Swift cluster." msgstr "" #: ../../source/overview_container_sharding.rst:63 msgid "Deployment and operation" msgstr "" #: ../../source/overview_container_sharding.rst:66 msgid "Upgrade Considerations" msgstr "" #: ../../source/overview_container_sharding.rst:68 msgid "" "It is essential that all servers in a Swift cluster have been upgraded to " "support the container sharding feature before attempting to shard a " "container." msgstr "" #: ../../source/overview_container_sharding.rst:72 msgid "Identifying containers in need of sharding" msgstr "" #: ../../source/overview_container_sharding.rst:74 msgid "" "Container sharding is currently initiated by the ``swift-manage-shard-" "ranges`` CLI tool :ref:`described below `. " "Operators must first identify containers that are candidates for sharding. " "To assist with this, the :ref:`sharder_daemon` inspects the size of " "containers that it visits and writes a list of sharding candidates to recon " "cache. For example::" msgstr "" #: ../../source/overview_container_sharding.rst:96 msgid "" "A container is considered to be a sharding candidate if its object count is " "greater than or equal to the ``shard_container_threshold`` option. The " "number of candidates reported is limited to a number configured by the " "``recon_candidates_limit`` option such that only the largest candidate " "containers are included in the ``sharding_candidates`` data." msgstr "" #: ../../source/overview_container_sharding.rst:106 msgid "``swift-manage-shard-ranges`` CLI tool" msgstr "" #: ../../source/overview_container_sharding.rst:116 msgid "``container-sharder`` daemon" msgstr "" #: ../../source/overview_container_sharding.rst:118 msgid "" "Once sharding has been enabled for a container, the act of sharding is " "performed by the :ref:`container-sharder`. The :ref:`container-sharder` " "daemon must be running on all container servers. The ``container-sharder`` " "daemon periodically visits each container database to perform any container " "sharding tasks that are required." msgstr "" #: ../../source/overview_container_sharding.rst:124 msgid "" "The ``container-sharder`` daemon requires a ``[container-sharder]`` config " "section to exist in the container server configuration file; a sample config " "section is shown in the `container-server.conf-sample` file." msgstr "" #: ../../source/overview_container_sharding.rst:130 msgid "" "The ``auto_shard`` option is currently **NOT** recommended for production " "systems and should be set to ``false`` (the default value)." msgstr "" #: ../../source/overview_container_sharding.rst:133 msgid "" "Several of the ``[container-sharder]`` config options are only significant " "when the ``auto_shard`` option is enabled. This option enables the " "``container-sharder`` daemon to automatically identify containers that are " "candidates for sharding and initiate the sharding process, instead of using " "the ``swift-manage-shard-ranges`` tool." msgstr "" #: ../../source/overview_container_sharding.rst:139 msgid "" "The container sharder uses an internal client and therefore requires an " "internal client configuration file to exist. By default the internal-client " "configuration file is expected to be found at `/etc/swift/internal-client." "conf`. An alternative location for the configuration file may be specified " "using the ``internal_client_conf_path`` option in the ``[container-" "sharder]`` config section." msgstr "" #: ../../source/overview_container_sharding.rst:146 msgid "" "The content of the internal-client configuration file should be the same as " "the `internal-client.conf-sample` file. In particular, the internal-client " "configuration should have::" msgstr "" #: ../../source/overview_container_sharding.rst:152 msgid "in the ``[proxy-server]`` section." msgstr "" #: ../../source/overview_container_sharding.rst:154 msgid "" "A container database may require several visits by the ``container-sharder`` " "daemon before it is fully sharded. On each visit the ``container-sharder`` " "daemon will move a subset of object records to new shard containers by " "cleaving new shard container databases from the original. By default, two " "shards are processed per visit; this number may be configured by the " "``cleave_batch_size`` option." msgstr "" #: ../../source/overview_container_sharding.rst:161 msgid "" "The ``container-sharder`` daemon periodically writes progress data for " "containers that are being sharded to recon cache. For example::" msgstr "" #: ../../source/overview_container_sharding.rst:186 msgid "" "This example indicates that from a total of 7 shard ranges, 2 have been " "cleaved whereas 5 remain in created state waiting to be cleaved." msgstr "" #: ../../source/overview_container_sharding.rst:189 msgid "" "Shard containers are created in an internal account and not visible to " "clients. By default, shard containers for an account ``AUTH_test`` are " "created in the internal account ``.shards_AUTH_test``." msgstr "" #: ../../source/overview_container_sharding.rst:193 msgid "" "Once a container has started sharding, object updates to that container may " "be redirected to the shard container. The ``container-sharder`` daemon is " "also responsible for sending updates of a shard's object count and " "bytes_used to the original container so that aggegrate object count and " "bytes used values can be returned in responses to client requests." msgstr "" #: ../../source/overview_container_sharding.rst:201 msgid "" "The ``container-sharder`` daemon must continue to run on all container " "servers in order for shards object stats updates to be generated." msgstr "" #: ../../source/overview_container_sharding.rst:207 msgid "Under the hood" msgstr "" #: ../../source/overview_container_sharding.rst:210 msgid "Terminology" msgstr "" #: ../../source/overview_container_sharding.rst:215 msgid "Root container" msgstr "" #: ../../source/overview_container_sharding.rst:215 msgid "" "The original container that lives in the user's account. It holds references " "to its shard containers." msgstr "" #: ../../source/overview_container_sharding.rst:218 msgid "Retiring DB" msgstr "" #: ../../source/overview_container_sharding.rst:218 msgid "The original database file that is to be sharded." msgstr "" #: ../../source/overview_container_sharding.rst:219 msgid "A database file that will replace the retiring database." msgstr "" #: ../../source/overview_container_sharding.rst:219 msgid "Fresh DB" msgstr "" #: ../../source/overview_container_sharding.rst:221 msgid "" "A timestamp at which the fresh DB is created; the epoch value is embedded in " "the fresh DB filename." msgstr "" #: ../../source/overview_container_sharding.rst:221 msgid "Epoch" msgstr "" #: ../../source/overview_container_sharding.rst:223 msgid "" "A range of the object namespace defined by a lower bound and upper bound." msgstr "" #: ../../source/overview_container_sharding.rst:223 msgid "Shard range" msgstr "" #: ../../source/overview_container_sharding.rst:225 msgid "" "A container that holds object records for a shard range. Shard containers " "exist in a hidden account mirroring the user's account." msgstr "" #: ../../source/overview_container_sharding.rst:225 msgid "Shard container" msgstr "" #: ../../source/overview_container_sharding.rst:228 msgid "Parent container" msgstr "" #: ../../source/overview_container_sharding.rst:228 msgid "" "The container from which a shard container has been cleaved. When first " "sharding a root container each shard's parent container will be the root " "container. When sharding a shard container each shard's parent container " "will be the sharding shard container." msgstr "" #: ../../source/overview_container_sharding.rst:233 msgid "" "Items that don't belong in a container's shard range. These will be moved to " "their correct location by the container-sharder." msgstr "" #: ../../source/overview_container_sharding.rst:233 msgid "Misplaced objects" msgstr "" #: ../../source/overview_container_sharding.rst:236 msgid "Cleaving" msgstr "" #: ../../source/overview_container_sharding.rst:236 msgid "" "The act of moving object records within a shard range to a shard container " "database." msgstr "" #: ../../source/overview_container_sharding.rst:238 msgid "Shrinking" msgstr "" #: ../../source/overview_container_sharding.rst:238 msgid "" "The act of merging a small shard container into another shard container in " "order to delete the small shard container." msgstr "" #: ../../source/overview_container_sharding.rst:241 msgid "Donor" msgstr "" #: ../../source/overview_container_sharding.rst:241 msgid "The shard range that is shrinking away." msgstr "" #: ../../source/overview_container_sharding.rst:242 msgid "Acceptor" msgstr "" #: ../../source/overview_container_sharding.rst:242 msgid "The shard range into which a donor is merged." msgstr "" #: ../../source/overview_container_sharding.rst:247 msgid "Finding shard ranges" msgstr "" #: ../../source/overview_container_sharding.rst:249 msgid "" "The end goal of sharding a container is to replace the original container " "database which has grown very large with a number of shard container " "databases, each of which is responsible for storing a range of the entire " "object namespace. The first step towards achieving this is to identify an " "appropriate set of contiguous object namespaces, known as shard ranges, each " "of which contains a similar sized portion of the container's current object " "content." msgstr "" #: ../../source/overview_container_sharding.rst:256 msgid "" "Shard ranges cannot simply be selected by sharding the namespace uniformly, " "because object names are not guaranteed to be distributed uniformly. If the " "container were naively sharded into two shard ranges, one containing all " "object names up to `m` and the other containing all object names beyond `m`, " "then if all object names actually start with `o` the outcome would be an " "extremely unbalanced pair of shard containers." msgstr "" #: ../../source/overview_container_sharding.rst:263 msgid "" "It is also too simplistic to assume that every container that requires " "sharding can be sharded into two. This might be the goal in the ideal world, " "but in practice there will be containers that have grown very large and " "should be sharded into many shards. Furthermore, the time required to find " "the exact mid-point of the existing object names in a large SQLite database " "would increase with container size." msgstr "" #: ../../source/overview_container_sharding.rst:270 msgid "" "For these reasons, shard ranges of size `N` are found by searching for the " "`Nth` object in the database table, sorted by object name, and then " "searching for the `(2 * N)th` object, and so on until all objects have been " "searched. For a container that has exactly `2N` objects, the end result is " "the same as sharding the container at the midpoint of its object names. In " "practice sharding would typically be enabled for containers with great than " "`2N` objects and more than two shard ranges will be found, the last one " "probably containing less than `N` objects. With containers having large " "multiples of `N` objects, shard ranges can be identified in batches which " "enables more scalable solution." msgstr "" #: ../../source/overview_container_sharding.rst:280 msgid "" "To illustrate this process, consider a very large container in a user " "account ``acct`` that is a candidate for sharding:" msgstr "" #: ../../source/overview_container_sharding.rst:285 msgid "" "The :ref:`swift-manage-shard-ranges` tool ``find`` sub-command searches the " "object table for the `Nth` object whose name will become the upper bound of " "the first shard range, and the lower bound of the second shard range. The " "lower bound of the first shard range is the empty string." msgstr "" #: ../../source/overview_container_sharding.rst:290 msgid "For the purposes of this example the first upper bound is `cat`:" msgstr "" #: ../../source/overview_container_sharding.rst:294 msgid "" ":ref:`swift-manage-shard-ranges` continues to search the container to find " "further shard ranges, with the final upper bound also being the empty string." msgstr "" #: ../../source/overview_container_sharding.rst:298 msgid "Enabling sharding" msgstr "" #: ../../source/overview_container_sharding.rst:300 msgid "" "Once shard ranges have been found the :ref:`swift-manage-shard-ranges` " "``replace`` sub-command is used to insert them into the `shard_ranges` table " "of the container database. In addition to its lower and upper bounds, each " "shard range is given a unique name." msgstr "" #: ../../source/overview_container_sharding.rst:305 msgid "" "The ``enable`` sub-command then creates some final state required to " "initiate sharding the container, including a special shard range record " "referred to as the container's `own_shard_range` whose name is equal to the " "container's path. This is used to keep a record of the object namespace that " "the container covers, which for user containers is always the entire " "namespace. Sharding of the container will only begin when its own shard " "range's state has been set to ``SHARDING``." msgstr "" #: ../../source/overview_container_sharding.rst:314 msgid "The :class:`~swift.common.utils.ShardRange` class" msgstr "" #: ../../source/overview_container_sharding.rst:316 msgid "" "The :class:`~swift.common.utils.ShardRange` class provides methods for " "interactng with the attributes and state of a shard range. The class " "encapsulates the following properties:" msgstr "" #: ../../source/overview_container_sharding.rst:320 msgid "" "The name of the shard range which is also the name of the shard container " "used to hold object records in its namespace." msgstr "" #: ../../source/overview_container_sharding.rst:322 msgid "" "Lower and upper bounds which define the object namespace of the shard range." msgstr "" #: ../../source/overview_container_sharding.rst:323 msgid "A deleted flag." msgstr "" #: ../../source/overview_container_sharding.rst:324 msgid "A timestamp at which the bounds and deleted flag were last modified." msgstr "" #: ../../source/overview_container_sharding.rst:325 msgid "The object stats for the shard range i.e. object count and bytes used." msgstr "" #: ../../source/overview_container_sharding.rst:326 msgid "A timestamp at which the object stats were last modified." msgstr "" #: ../../source/overview_container_sharding.rst:327 msgid "" "The state of the shard range, and an epoch, which is the timestamp used in " "the shard container's database file name." msgstr "" #: ../../source/overview_container_sharding.rst:329 msgid "A timestamp at which the state and epoch were last modified." msgstr "" #: ../../source/overview_container_sharding.rst:331 msgid "A shard range progresses through the following states:" msgstr "" #: ../../source/overview_container_sharding.rst:333 msgid "" "FOUND: the shard range has been identified in the container that is to be " "sharded but no resources have been created for it." msgstr "" #: ../../source/overview_container_sharding.rst:335 msgid "" "CREATED: a shard container has been created to store the contents of the " "shard range." msgstr "" #: ../../source/overview_container_sharding.rst:337 msgid "" "CLEAVED: the sharding container's contents for the shard range have been " "copied to the shard container from *at least one replica* of the sharding " "container." msgstr "" #: ../../source/overview_container_sharding.rst:340 msgid "" "ACTIVE: a sharding container's constituent shard ranges are moved to this " "state when all shard ranges in the sharding container have been cleaved." msgstr "" #: ../../source/overview_container_sharding.rst:342 msgid "SHRINKING: the shard range has been enabled for shrinking; or" msgstr "" #: ../../source/overview_container_sharding.rst:343 msgid "" "SHARDING: the shard range has been enabled for sharding into further sub-" "shards." msgstr "" #: ../../source/overview_container_sharding.rst:345 msgid "" "SHARDED: the shard range has completed sharding or shrinking; the container " "will typically now have a number of constituent ACTIVE shard ranges." msgstr "" #: ../../source/overview_container_sharding.rst:350 msgid "" "Shard range state represents the most advanced state of the shard range on " "any replica of the container. For example, a shard range in CLEAVED state " "may not have completed cleaving on all replicas but has cleaved on at least " "one replica." msgstr "" #: ../../source/overview_container_sharding.rst:356 msgid "Fresh and retiring database files" msgstr "" #: ../../source/overview_container_sharding.rst:358 msgid "" "As alluded to earlier, writing to a large container causes increased latency " "for the container servers. Once sharding has been initiated on a container " "it is desirable to stop writing to the large database; ultimately it will be " "unlinked. This is primarily achieved by redirecting object updates to new " "shard containers as they are created (see :ref:`redirecting_updates` below), " "but some object updates may still need to be accepted by the root container " "and other container metadata must still be modifiable." msgstr "" #: ../../source/overview_container_sharding.rst:366 msgid "" "To render the large `retiring` database effectively read-only, when the :ref:" "`sharder_daemon` finds a container with a set of shard range records, " "including an `own_shard_range`, it first creates a fresh database file which " "will ultimately replace the existing `retiring` database. For a retiring DB " "whose filename is::" msgstr "" #: ../../source/overview_container_sharding.rst:374 msgid "the fresh database file name is of the form::" msgstr "" #: ../../source/overview_container_sharding.rst:378 msgid "" "where `epoch` is a timestamp stored in the container's `own_shard_range`." msgstr "" #: ../../source/overview_container_sharding.rst:380 msgid "" "The fresh DB has a copy of the shard ranges table from the retiring DB and " "all other container metadata apart from the object records. Once a fresh DB " "file has been created it is used to store any new object updates and no more " "object records are written to the retiring DB file." msgstr "" #: ../../source/overview_container_sharding.rst:385 msgid "" "Once the sharding process has completed, the retiring DB file will be " "unlinked leaving only the fresh DB file in the container's directory. There " "are therefore three states that the container DB directory may be in during " "the sharding process: UNSHARDED, SHARDING and SHARDED." msgstr "" #: ../../source/overview_container_sharding.rst:392 msgid "" "If the container ever shrink to the point that is has no shards then the " "fresh DB starts to store object records, behaving the same as an unsharded " "container. This is known as the COLLAPSED state." msgstr "" #: ../../source/overview_container_sharding.rst:396 msgid "In summary, the DB states that any container replica may be in are:" msgstr "" #: ../../source/overview_container_sharding.rst:398 msgid "" "UNSHARDED - In this state there is just one standard container database. All " "containers are originally in this state." msgstr "" #: ../../source/overview_container_sharding.rst:400 msgid "" "SHARDING - There are now two databases, the retiring database and a fresh " "database. The fresh database stores any metadata, container level stats, an " "object holding table, and a table that stores shard ranges." msgstr "" #: ../../source/overview_container_sharding.rst:403 msgid "" "SHARDED - There is only one database, the fresh database, which has one or " "more shard ranges in addition to its own shard range. The retiring database " "has been unlinked." msgstr "" #: ../../source/overview_container_sharding.rst:406 msgid "" "COLLAPSED - There is only one database, the fresh database, which has only " "its own shard range and store object records." msgstr "" #: ../../source/overview_container_sharding.rst:411 msgid "" "DB state is unique to each replica of a container and is not necessarily " "synchronised with shard range state." msgstr "" #: ../../source/overview_container_sharding.rst:415 msgid "Creating shard containers" msgstr "" #: ../../source/overview_container_sharding.rst:417 msgid "" "The :ref:`sharder_daemon` next creates a shard container for each shard " "range using the shard range name as the name of the shard container:" msgstr "" #: ../../source/overview_container_sharding.rst:422 msgid "" "Each shard container has an `own_shard_range` record which has the lower and " "upper bounds of the object namespace for which it is responsible, and a " "reference to the sharding user container, which is referred to as the " "`root_container`. Unlike the `root_container`, the shard container's " "`own_shard_range` does not cover the entire namepsace." msgstr "" #: ../../source/overview_container_sharding.rst:428 msgid "" "A shard range name takes the form ``/`` where `` " "is a hidden account and `` is a container name that is derived from " "the root container." msgstr "" #: ../../source/overview_container_sharding.rst:432 msgid "" "The account name `` used for shard containers is formed by " "prefixing the user account with the string ``.shards_``. This avoids " "namespace collisions and also keeps all the shard containers out of view " "from users of the account." msgstr "" #: ../../source/overview_container_sharding.rst:436 msgid "The container name for each shard container has the form::" msgstr "" #: ../../source/overview_container_sharding.rst:440 msgid "" "where `root container name` is the name of the user container to which the " "contents of the shard container belong, `parent container` is the name of " "the container from which the shard is being cleaved, `timestamp` is the time " "at which the shard range was created and `shard index` is the position of " "the shard range in the name-ordered list of shard ranges for the `parent " "container`." msgstr "" #: ../../source/overview_container_sharding.rst:447 msgid "" "When sharding a user container the parent container name will be the same as " "the root container. However, if a *shard container* grows to a size that it " "requires sharding, then the parent container name for its shards will be the " "name of the sharding shard container." msgstr "" #: ../../source/overview_container_sharding.rst:452 msgid "" "For example, consider a user container with path ``AUTH_user/c`` which is " "sharded into two shard containers whose name will be::" msgstr "" #: ../../source/overview_container_sharding.rst:458 msgid "" "If the first shard container is subsequently sharded into a further two " "shard containers then they will be named::" msgstr "" #: ../../source/overview_container_sharding.rst:464 msgid "" "This naming scheme guarantees that shards, and shards of shards, each have a " "unique name of bounded length." msgstr "" #: ../../source/overview_container_sharding.rst:469 msgid "Cleaving shard containers" msgstr "" #: ../../source/overview_container_sharding.rst:471 msgid "" "Having created empty shard containers the sharder daemon will proceed to " "cleave objects from the retiring database to each shard range. Cleaving " "occurs in batches of two (by default) shard ranges, so if a container has " "more than two shard ranges then the daemon must visit it multiple times to " "complete cleaving." msgstr "" #: ../../source/overview_container_sharding.rst:476 msgid "" "To cleave a shard range the daemon creates a shard database for the shard " "container on a local device. This device may be one of the shard container's " "primary nodes but often it will not. Object records from the corresponding " "shard range namespace are then copied from the retiring DB to this shard DB." msgstr "" #: ../../source/overview_container_sharding.rst:481 msgid "" "Swift's container replication mechanism is then used to replicate the shard " "DB to its primary nodes. Checks are made to ensure that the new shard " "container DB has been replicated to a sufficient number of its primary nodes " "before it is considered to have been successfully cleaved. By default the " "daemon requires successful replication of a new shard broker to at least a " "quorum of the container rings replica count, but this requirement can be " "tuned using the ``shard_replication_quorum`` option." msgstr "" #: ../../source/overview_container_sharding.rst:489 msgid "" "Once a shard range has been successfully cleaved from a retiring database " "the daemon transitions its state to ``CLEAVED``. It should be noted that " "this state transition occurs as soon as any one of the retiring DB replicas " "has cleaved the shard range, and therefore does not imply that all retiring " "DB replicas have cleaved that range. The significance of the state " "transition is that the shard container is now considered suitable for " "contributing to object listings, since its contents are present on a quorum " "of its primary nodes and are the same as at least one of the retiring DBs " "for that namespace." msgstr "" #: ../../source/overview_container_sharding.rst:498 msgid "" "Once a shard range is in the ``CLEAVED`` state, the requirement for " "'successful' cleaving of other instances of the retirng DB may optionally be " "relaxed since it is not so imperative that their contents are replicated " "*immediately* to their primary nodes. The " "``existing_shard_replication_quorum`` option can be used to reduce the " "quorum required for a cleaved shard range to be considered successfully " "replicated by the sharder daemon." msgstr "" #: ../../source/overview_container_sharding.rst:507 msgid "" "Once cleaved, shard container DBs will continue to be replicated by the " "normal `container-replicator` daemon so that they will eventually be fully " "replicated to all primary nodes regardless of any replication quorum options " "used by the sharder daemon." msgstr "" #: ../../source/overview_container_sharding.rst:512 msgid "" "The cleaving progress of each replica of a retiring DB must be tracked " "independently of the shard range state. This is done using a per-DB " "CleavingContext object that maintains a cleaving cursor for the retiring DB " "that it is associated with. The cleaving cursor is simply the upper bound of " "the last shard range to have been cleaved *from that particular retiring DB*." msgstr "" #: ../../source/overview_container_sharding.rst:518 msgid "" "Each CleavingContext is stored in the sharding container's sysmeta under a " "key that is the ``id`` of the retiring DB. Since all container DB files have " "a unique ``id``, this guarantees that each retiring DB will have a unique " "CleavingContext. Furthermore, if the retiring DB file is changed, for " "example by an rsync_then_merge replication operation which might change the " "contents of the DB's object table, then it will get a new unique " "CleavingContext." msgstr "" #: ../../source/overview_container_sharding.rst:525 msgid "" "A CleavingContext maintains other state that is used to ensure that a " "retiring DB is only considered to be fully cleaved, and ready to be deleted, " "if *all* of its object rows have been cleaved to a shard range." msgstr "" #: ../../source/overview_container_sharding.rst:529 msgid "" "Once all shard ranges have been cleaved from the retiring DB it is deleted. " "The container is now represented by the fresh DB which has a table of shard " "range records that point to the shard containers that store the container's " "object records." msgstr "" #: ../../source/overview_container_sharding.rst:537 msgid "Redirecting object updates" msgstr "" #: ../../source/overview_container_sharding.rst:539 msgid "" "Once a shard container exists, object updates arising from new client " "requests and async pending files are directed to the shard container instead " "of the root container. This takes load off of the root container." msgstr "" #: ../../source/overview_container_sharding.rst:543 msgid "" "For a sharded (or partially sharded) container, when the proxy receives a " "new object request it issues a GET request to the container for data " "describing a shard container to which the object update should be sent. The " "proxy then annotates the object request with the shard container location so " "that the object server will forward object updates to the shard container. " "If those updates fail then the async pending file that is written on the " "object server contains the shard container location." msgstr "" #: ../../source/overview_container_sharding.rst:551 msgid "" "When the object updater processes async pending files for previously failed " "object updates, it may not find a shard container location. In this case the " "updater sends the update to the `root container`, which returns a " "redirection response with the shard container location." msgstr "" #: ../../source/overview_container_sharding.rst:558 msgid "" "Object updates are directed to shard containers as soon as they exist, even " "if the retiring DB object records have not yet been cleaved to the shard " "container. This prevents further writes to the retiring DB and also avoids " "the fresh DB being polluted by new object updates. The goal is to ultimately " "have all object records in the shard containers and none in the root " "container." msgstr "" #: ../../source/overview_container_sharding.rst:566 msgid "Building container listings" msgstr "" #: ../../source/overview_container_sharding.rst:568 msgid "" "Listing requests for a sharded container are handled by querying the shard " "containers for components of the listing. The proxy forwards the client " "listing request to the root container, as it would for an unsharded " "container, but the container server responds with a list of shard ranges " "rather than objects. The proxy then queries each shard container in " "namespace order for their listing, until either the listing length limit is " "reached or all shard ranges have been listed." msgstr "" #: ../../source/overview_container_sharding.rst:576 msgid "" "While a container is still in the process of sharding, only *cleaved* shard " "ranges are used when building a container listing. Shard ranges that have " "not yet cleaved will not have any object records from the root container. " "The root container continues to provide listings for the uncleaved part of " "its namespace." msgstr "" #: ../../source/overview_container_sharding.rst:584 msgid "" "New object updates are redirected to shard containers that have not yet been " "cleaved. These updates will not therefore be included in container listings " "until their shard range has been cleaved." msgstr "" #: ../../source/overview_container_sharding.rst:589 msgid "Example request redirection" msgstr "" #: ../../source/overview_container_sharding.rst:591 msgid "" "As an example, consider a sharding container in which 3 shard ranges have " "been found ending in cat, giraffe and igloo. Their respective shard " "containers have been created so update requests for objects up to \"igloo\" " "are redirected to the appropriate shard container. The root DB continues to " "handle listing requests and update requests for any object name beyond " "\"igloo\"." msgstr "" #: ../../source/overview_container_sharding.rst:599 msgid "" "The sharder daemon cleaves objects from the retiring DB to the shard range " "DBs; it also moves any misplaced objects from the root container's fresh DB " "to the shard DB. Cleaving progress is represented by the blue line. Once the " "first shard range has been cleaved listing requests for that namespace are " "directed to the shard container. The root container still provides listings " "for the remainder of the namespace." msgstr "" #: ../../source/overview_container_sharding.rst:608 msgid "" "The process continues: the sharder cleaves the next range and a new range is " "found with upper bound of \"linux\". Now the root container only needs to " "handle listing requests up to \"giraffe\" and update requests for objects " "whose name is greater than \"linux\". Load will continue to diminish on the " "root DB and be dispersed across the shard DBs." msgstr "" #: ../../source/overview_container_sharding.rst:618 msgid "Container replication" msgstr "" #: ../../source/overview_container_sharding.rst:620 msgid "" "Shard range records are replicated between container DB replicas in much the " "same way as object records are for unsharded containers. However, the usual " "replication of object records between replicas of a container is halted as " "soon as a container is capable of being sharded. Instead, object records are " "moved to their new locations in shard containers. This avoids unnecessary " "replication traffic between container replicas." msgstr "" #: ../../source/overview_container_sharding.rst:627 msgid "" "To facilitate this, shard ranges are both 'pushed' and 'pulled' during " "replication, prior to any attempt to replicate objects. This means that the " "node initiating replication learns about shard ranges from the destination " "node early during the replication process and is able to skip object " "replication if it discovers that it has shard ranges and is able to shard." msgstr "" #: ../../source/overview_container_sharding.rst:635 msgid "" "When the destination DB for container replication is missing then the " "'complete_rsync' replication mechanism is still used and in this case only " "both object records and shard range records are copied to the destination " "node." msgstr "" #: ../../source/overview_container_sharding.rst:641 msgid "Container deletion" msgstr "" #: ../../source/overview_container_sharding.rst:643 msgid "" "Sharded containers may be deleted by a ``DELETE`` request just like an " "unsharded container. A sharded container must be empty before it can be " "deleted which implies that all of its shard containers must have reported " "that they are empty." msgstr "" #: ../../source/overview_container_sharding.rst:648 msgid "" "Shard containers are *not* immediately deleted when their root container is " "deleted; the shard containers remain undeleted so that they are able to " "continue to receive object updates that might arrive after the root " "container has been deleted. Shard containers continue to update their " "deleted root container with their object stats. If a shard container does " "receive object updates that cause it to no longer be empty then the root " "container will no longer be considered deleted once that shard container " "sends an object stats update." msgstr "" #: ../../source/overview_container_sharding.rst:659 msgid "Sharding a shard container" msgstr "" #: ../../source/overview_container_sharding.rst:661 msgid "" "A shard container may grow to a size that requires it to be sharded. ``swift-" "manage-shard-ranges`` may be used to identify shard ranges within a shard " "container and enable sharding in the same way as for a root container. When " "a shard is sharding it notifies the root container of its shard ranges so " "that the root container can start to redirect object updates to the new 'sub-" "shards'. When the shard has completed sharding the root is aware of all the " "new sub-shards and the sharding shard deletes its shard range record in the " "root container shard ranges table. At this point the root container is aware " "of all the new sub-shards which collectively cover the namespace of the now-" "deleted shard." msgstr "" #: ../../source/overview_container_sharding.rst:672 msgid "" "There is no hierarchy of shards beyond the root container and its immediate " "shards. When a shard shards, its sub-shards are effectively re-parented with " "the root container." msgstr "" #: ../../source/overview_container_sharding.rst:678 msgid "Shrinking a shard container" msgstr "" #: ../../source/overview_container_sharding.rst:680 msgid "" "A shard container's contents may reduce to a point where the shard container " "is no longer required. If this happens then the shard container may be " "shrunk into another shard range. Shrinking is achieved in a similar way to " "sharding: an 'acceptor' shard range is written to the shrinking shard " "container's shard ranges table; unlike sharding, where shard ranges each " "cover a subset of the sharding container's namespace, the acceptor shard " "range is a superset of the shrinking shard range." msgstr "" #: ../../source/overview_container_sharding.rst:688 msgid "" "Once given an acceptor shard range the shrinking shard will cleave itself to " "its acceptor, and then delete itself from the root container shard ranges " "table." msgstr "" #: ../../source/overview_container_sync.rst:3 msgid "Container to Container Synchronization" msgstr "" #: ../../source/overview_container_sync.rst:9 msgid "" "Swift has a feature where all the contents of a container can be mirrored to " "another container through background synchronization. Swift cluster " "operators configure their cluster to allow/accept sync requests to/from " "other clusters, and the user specifies where to sync their container to " "along with a secret synchronization key." msgstr "" #: ../../source/overview_container_sync.rst:17 msgid "" "If you are using the :ref:`Large Objects ` feature and " "syncing to another cluster then you will need to ensure that manifest files " "and segment files are synced. If segment files are in a different container " "than their manifest then both the manifest's container and the segments' " "container must be synced. The target container for synced segment files must " "always have the same name as their source container in order for them to be " "resolved by synced manifests." msgstr "" #: ../../source/overview_container_sync.rst:25 msgid "" "Be aware that manifest files may be synced before segment files even if they " "are in the same container and were created after the segment files." msgstr "" #: ../../source/overview_container_sync.rst:28 msgid "" "In the case of :ref:`Static Large Objects `, a GET " "request for a manifest whose segments have yet to be completely synced will " "fail with none or only part of the large object content being returned." msgstr "" #: ../../source/overview_container_sync.rst:32 msgid "" "In the case of :ref:`Dynamic Large Objects `, a GET " "request for a manifest whose segments have yet to be completely synced will " "either fail or return unexpected (and most likely incorrect) content." msgstr "" #: ../../source/overview_container_sync.rst:38 msgid "" "If you are using encryption middleware in the cluster from which objects are " "being synced, then you should follow the instructions for :ref:" "`container_sync_client_config` to be compatible with encryption." msgstr "" #: ../../source/overview_container_sync.rst:44 msgid "" "If you are using symlink middleware in the cluster from which objects are " "being synced, then you should follow the instructions for :ref:" "`symlink_container_sync_client_config` to be compatible with symlinks." msgstr "" #: ../../source/overview_container_sync.rst:48 msgid "" "Be aware that symlinks may be synced before their targets even if they are " "in the same container and were created after the target objects. In such " "cases, a GET for the symlink will fail with a ``404 Not Found`` error. If " "the target has been overwritten, a GET may produce an older version (for " "dynamic links) or a ``409 Conflict`` error (for static links)." msgstr "" #: ../../source/overview_container_sync.rst:56 msgid "Configuring Container Sync" msgstr "" #: ../../source/overview_container_sync.rst:58 msgid "" "Create a ``container-sync-realms.conf`` file specifying the allowable " "clusters and their information::" msgstr "" #: ../../source/overview_container_sync.rst:74 msgid "" "Each section name is the name of a sync realm. A sync realm is a set of " "clusters that have agreed to allow container syncing with each other. Realm " "names will be considered case insensitive." msgstr "" #: ../../source/overview_container_sync.rst:78 msgid "" "``key`` is the overall cluster-to-cluster key used in combination with the " "external users' key that they set on their containers' ``X-Container-Sync-" "Key`` metadata header values. These keys will be used to sign each request " "the container sync daemon makes and used to validate each incoming container " "sync request." msgstr "" #: ../../source/overview_container_sync.rst:84 msgid "" "``key2`` is optional and is an additional key incoming requests will be " "checked against. This is so you can rotate keys if you wish; you move the " "existing ``key`` to ``key2`` and make a new ``key`` value." msgstr "" #: ../../source/overview_container_sync.rst:88 msgid "" "Any values in the realm section whose names begin with ``cluster_`` will " "indicate the name and endpoint of a cluster and will be used by external " "users in their containers' ``X-Container-Sync-To`` metadata header values " "with the format ``//realm_name/cluster_name/account_name/container_name``. " "Realm and cluster names are considered case insensitive." msgstr "" #: ../../source/overview_container_sync.rst:94 msgid "" "The endpoint is what the container sync daemon will use when sending out " "requests to that cluster. Keep in mind this endpoint must be reachable by " "all container servers, since that is where the container sync daemon runs. " "Note that the endpoint ends with ``/v1/`` and that the container sync daemon " "will then add the ``account/container/obj`` name after that." msgstr "" #: ../../source/overview_container_sync.rst:100 msgid "" "Distribute this ``container-sync-realms.conf`` file to all your proxy " "servers and container servers." msgstr "" #: ../../source/overview_container_sync.rst:103 msgid "" "You also need to add the container_sync middleware to your proxy pipeline. " "It needs to be after any memcache middleware and before any auth middleware. " "The ``[filter:container_sync]`` section only needs the ``use`` item. For " "example::" msgstr "" #: ../../source/overview_container_sync.rst:113 msgid "" "The container sync daemon will use an internal client to sync objects. Even " "if you don't configure the internal client, the container sync daemon will " "work with default configuration. The default configuration is the same as " "``internal-client.conf-sample``. If you want to configure the internal " "client, please update ``internal_client_conf_path`` in ``container-server." "conf``. The configuration file at the path will be used for the internal " "client." msgstr "" #: ../../source/overview_container_sync.rst:122 msgid "Old-Style: Configuring a Cluster's Allowable Sync Hosts" msgstr "" #: ../../source/overview_container_sync.rst:124 msgid "" "This section is for the old-style of using container sync. See the previous " "section, Configuring Container Sync, for the new-style." msgstr "" #: ../../source/overview_container_sync.rst:127 msgid "" "With the old-style, the Swift cluster operator must allow synchronization " "with a set of hosts before the user can enable container synchronization. " "First, the backend container server needs to be given this list of hosts in " "the ``container-server.conf`` file::" msgstr "" #: ../../source/overview_container_sync.rst:153 msgid "Logging Container Sync" msgstr "" #: ../../source/overview_container_sync.rst:155 msgid "" "Currently, log processing is the only way to track sync progress, problems, " "and even just general activity for container synchronization. In that light, " "you may wish to set the above ``log_`` options to direct the container-sync " "logs to a different file for easier monitoring. Additionally, it should be " "noted there is no way for an end user to monitor sync progress or detect " "problems other than HEADing both containers and comparing the overall " "information." msgstr "" #: ../../source/overview_container_sync.rst:167 msgid "Container Sync Statistics" msgstr "" #: ../../source/overview_container_sync.rst:169 msgid "" "Container Sync INFO level logs contain activity metrics and accounting " "information for insightful tracking. Currently two different statistics are " "collected:" msgstr "" #: ../../source/overview_container_sync.rst:173 msgid "" "About once an hour or so, accumulated statistics of all operations performed " "by Container Sync are reported to the log file with the following format::" msgstr "" #: ../../source/overview_container_sync.rst:178 msgid "time" msgstr "" #: ../../source/overview_container_sync.rst:179 msgid "last report time" msgstr "" #: ../../source/overview_container_sync.rst:180 msgid "sync" msgstr "" #: ../../source/overview_container_sync.rst:181 msgid "number of containers with sync turned on that were successfully synced" msgstr "" #: ../../source/overview_container_sync.rst:182 msgid "delete" msgstr "" #: ../../source/overview_container_sync.rst:183 msgid "number of successful DELETE object requests to the target cluster" msgstr "" #: ../../source/overview_container_sync.rst:184 msgid "put" msgstr "" #: ../../source/overview_container_sync.rst:185 msgid "number of successful PUT object request to the target cluster" msgstr "" #: ../../source/overview_container_sync.rst:187 msgid "" "number of containers whose sync has been turned off, but are not yet cleared " "from the sync store" msgstr "" #: ../../source/overview_container_sync.rst:187 msgid "skip" msgstr "" #: ../../source/overview_container_sync.rst:190 msgid "" "number of containers with failure (due to exception, timeout or other reason)" msgstr "" #: ../../source/overview_container_sync.rst:191 msgid "fail" msgstr "" #: ../../source/overview_container_sync.rst:193 msgid "" "For each container synced, per container statistics are reported with the " "following format::" msgstr "" #: ../../source/overview_container_sync.rst:199 msgid "account/container statistics are for" msgstr "" #: ../../source/overview_container_sync.rst:201 msgid "report start time" msgstr "" #: ../../source/overview_container_sync.rst:202 msgid "end" msgstr "" #: ../../source/overview_container_sync.rst:203 msgid "report end time" msgstr "" #: ../../source/overview_container_sync.rst:204 msgid "puts" msgstr "" #: ../../source/overview_container_sync.rst:205 msgid "number of successful PUT object requests to the target container" msgstr "" #: ../../source/overview_container_sync.rst:206 msgid "posts" msgstr "" #: ../../source/overview_container_sync.rst:207 msgid "N/A (0)" msgstr "" #: ../../source/overview_container_sync.rst:208 msgid "deletes" msgstr "" #: ../../source/overview_container_sync.rst:209 msgid "number of successful DELETE object requests to the target container" msgstr "" #: ../../source/overview_container_sync.rst:210 msgid "bytes" msgstr "" #: ../../source/overview_container_sync.rst:211 msgid "number of bytes sent over the network to the target container" msgstr "" #: ../../source/overview_container_sync.rst:212 msgid "point1" msgstr "" #: ../../source/overview_container_sync.rst:213 msgid "progress indication - the container's ``x_container_sync_point1``" msgstr "" #: ../../source/overview_container_sync.rst:214 msgid "point2" msgstr "" #: ../../source/overview_container_sync.rst:215 msgid "progress indication - the container's ``x_container_sync_point2``" msgstr "" #: ../../source/overview_container_sync.rst:217 msgid "number of objects processed at the container" msgstr "" #: ../../source/overview_container_sync.rst:217 msgid "total" msgstr "" #: ../../source/overview_container_sync.rst:219 msgid "" "It is possible that more than one server syncs a container, therefore log " "files from all servers need to be evaluated" msgstr "" #: ../../source/overview_container_sync.rst:226 msgid "Using the ``swift`` tool to set up synchronized containers" msgstr "" #: ../../source/overview_container_sync.rst:230 #: ../../source/overview_container_sync.rst:344 msgid "The ``swift`` tool is available from the `python-swiftclient`_ library." msgstr "" #: ../../source/overview_container_sync.rst:234 #: ../../source/overview_container_sync.rst:348 msgid "" "You must be the account admin on the account to set synchronization targets " "and keys." msgstr "" #: ../../source/overview_container_sync.rst:237 #: ../../source/overview_container_sync.rst:353 msgid "" "You simply tell each container where to sync to and give it a secret " "synchronization key. First, let's get the account details for our two " "cluster accounts::" msgstr "" #: ../../source/overview_container_sync.rst:257 #: ../../source/overview_container_sync.rst:373 msgid "" "Now, let's make our first container and tell it to synchronize to a second " "we'll make next::" msgstr "" #: ../../source/overview_container_sync.rst:264 msgid "" "The ``-t`` indicates the cluster to sync to, which is the realm name of the " "section from ``container-sync-realms.conf``, followed by the cluster name " "from that section (without the ``cluster_`` prefix), followed by the account " "and container names we want to sync to. The ``-k`` specifies the secret key " "the two containers will share for synchronization; this is the user key, the " "cluster key in ``container-sync-realms.conf`` will also be used behind the " "scenes." msgstr "" #: ../../source/overview_container_sync.rst:271 msgid "Now, we'll do something similar for the second cluster's container::" msgstr "" #: ../../source/overview_container_sync.rst:277 #: ../../source/overview_container_sync.rst:389 msgid "" "That's it. Now we can upload a bunch of stuff to the first container and " "watch as it gets synchronized over to the second::" msgstr "" #: ../../source/overview_container_sync.rst:294 msgid "" "If you're an operator running :ref:`saio` and just testing, each time you " "configure a container for synchronization and place objects in the source " "container you will need to ensure that container-sync runs before attempting " "to retrieve objects from the target container. That is, you need to run::" msgstr "" #: ../../source/overview_container_sync.rst:302 msgid "" "Now expect to see objects copied from the first container to the second::" msgstr "" #: ../../source/overview_container_sync.rst:311 #: ../../source/overview_container_sync.rst:413 msgid "" "You can also set up a chain of synced containers if you want more than two. " "You'd point 1 -> 2, then 2 -> 3, and finally 3 -> 1 for three containers. " "They'd all need to share the same secret synchronization key." msgstr "" #: ../../source/overview_container_sync.rst:319 msgid "Using curl (or other tools) instead" msgstr "" #: ../../source/overview_container_sync.rst:321 #: ../../source/overview_container_sync.rst:425 msgid "" "So what's ``swift`` doing behind the scenes? Nothing overly complicated. It " "translates the ``-t `` option into an ``X-Container-Sync-To: " "`` header and the ``-k `` option into an ``X-Container-Sync-" "Key: `` header." msgstr "" #: ../../source/overview_container_sync.rst:326 #: ../../source/overview_container_sync.rst:430 msgid "" "For instance, when we created the first container above and told it to " "synchronize to the second, we could have used this curl command::" msgstr "" #: ../../source/overview_container_sync.rst:340 msgid "Old-Style: Using the ``swift`` tool to set up synchronized containers" msgstr "" #: ../../source/overview_container_sync.rst:351 #: ../../source/overview_container_sync.rst:423 msgid "" "This is for the old-style of container syncing using ``allowed_sync_hosts``." msgstr "" #: ../../source/overview_container_sync.rst:380 msgid "" "The ``-t`` indicates the URL to sync to, which is the ``StorageURL`` from " "cluster2 we retrieved above plus the container name. The ``-k`` specifies " "the secret key the two containers will share for synchronization. Now, we'll " "do something similar for the second cluster's container::" msgstr "" #: ../../source/overview_container_sync.rst:421 msgid "Old-Style: Using curl (or other tools) instead" msgstr "" #: ../../source/overview_container_sync.rst:444 msgid "What's going on behind the scenes, in the cluster?" msgstr "" #: ../../source/overview_container_sync.rst:446 msgid "" "Container ring devices have a directory called ``containers``, where " "container databases reside. In addition to ``containers``, each container " "ring device also has a directory called ``sync-containers``. ``sync-" "containers`` holds symlinks to container databases that were configured for " "container sync using ``x-container-sync-to`` and ``x-container-sync-key`` " "metadata keys." msgstr "" #: ../../source/overview_container_sync.rst:452 msgid "" "The swift-container-sync process does the job of sending updates to the " "remote container. This is done by scanning ``sync-containers`` for container " "databases. For each container db found, newer rows since the last sync will " "trigger PUTs or DELETEs to the other container." msgstr "" #: ../../source/overview_container_sync.rst:457 msgid "" "``sync-containers`` is maintained as follows: Whenever the container-server " "processes a PUT or a POST request that carries ``x-container-sync-to`` and " "``x-container-sync-key`` metadata keys the server creates a symlink to the " "container database in ``sync-containers``. Whenever the container server " "deletes a synced container, the appropriate symlink is deleted from ``sync-" "containers``." msgstr "" #: ../../source/overview_container_sync.rst:464 msgid "" "In addition to the container-server, the container-replicator process does " "the job of identifying containers that should be synchronized. This is done " "by scanning the local devices for container databases and checking for ``x-" "container-sync-to`` and ``x-container-sync-key`` metadata values. If they " "exist then a symlink to the container database is created in a ``sync-" "containers`` sub-directory on the same device." msgstr "" #: ../../source/overview_container_sync.rst:471 msgid "" "Similarly, when the container sync metadata keys are deleted, the container " "server and container-replicator would take care of deleting the symlinks " "from ``sync-containers``." msgstr "" #: ../../source/overview_container_sync.rst:477 msgid "" "The swift-container-sync process runs on each container server in the " "cluster and talks to the proxy servers (or load balancers) in the remote " "cluster. Therefore, the container servers must be permitted to initiate " "outbound connections to the remote proxy servers (or load balancers)." msgstr "" #: ../../source/overview_container_sync.rst:482 msgid "" "The actual syncing is slightly more complicated to make use of the three (or " "number-of-replicas) main nodes for a container without each trying to do the " "exact same work but also without missing work if one node happens to be down." msgstr "" #: ../../source/overview_container_sync.rst:487 msgid "" "Two sync points are kept in each container database. When syncing a " "container, the container-sync process figures out which replica of the " "container it has. In a standard 3-replica scenario, the process will have " "either replica number 0, 1, or 2. This is used to figure out which rows " "belong to this sync process and which ones don't." msgstr "" #: ../../source/overview_container_sync.rst:493 msgid "" "An example may help. Assume a replica count of 3 and database row IDs are " "1..6. Also, assume that container-sync is running on this container for the " "first time, hence SP1 = SP2 = -1. ::" msgstr "" #: ../../source/overview_container_sync.rst:503 msgid "" "First, the container-sync process looks for rows with id between SP1 and " "SP2. Since this is the first run, SP1 = SP2 = -1, and there aren't any such " "rows. ::" msgstr "" #: ../../source/overview_container_sync.rst:513 msgid "" "Second, the container-sync process looks for rows with id greater than SP1, " "and syncs those rows which it owns. Ownership is based on the hash of the " "object name, so it's not always guaranteed to be exactly one out of every " "three rows, but it usually gets close. For the sake of example, let's say " "that this process ends up owning rows 2 and 5." msgstr "" #: ../../source/overview_container_sync.rst:519 msgid "" "Once it's finished trying to sync those rows, it updates SP1 to be the " "biggest row-id that it's seen, which is 6 in this example. ::" msgstr "" #: ../../source/overview_container_sync.rst:527 msgid "" "While all that was going on, clients uploaded new objects into the " "container, creating new rows in the database. ::" msgstr "" #: ../../source/overview_container_sync.rst:535 msgid "" "On the next run, the container-sync starts off looking at rows with ids " "between SP1 and SP2. This time, there are a bunch of them. The sync process " "try to sync all of them. If it succeeds, it will set SP2 to equal SP1. If it " "fails, it will set SP2 to the failed object and will continue to try all " "other objects till SP1, setting SP2 to the first object that failed." msgstr "" #: ../../source/overview_container_sync.rst:542 msgid "" "Under normal circumstances, the container-sync processes will have already " "taken care of synchronizing all rows, between SP1 and SP2, resulting in a " "set of quick checks. However, if one of the sync processes failed for some " "reason, then this is a vital fallback to make sure all the objects in the " "container get synchronized. Without this seemingly-redundant work, any " "container-sync failure results in unsynchronized objects. Note that the " "container sync will persistently retry to sync any faulty object until " "success, while logging each failure." msgstr "" #: ../../source/overview_container_sync.rst:552 msgid "" "Once it's done with the fallback rows, and assuming no faults occurred, SP2 " "is advanced to SP1. ::" msgstr "" #: ../../source/overview_container_sync.rst:561 msgid "" "Then, rows with row ID greater than SP1 are synchronized (provided this " "container-sync process is responsible for them), and SP1 is moved up to the " "greatest row ID seen. ::" msgstr "" #: ../../source/overview_encryption.rst:3 msgid "Object Encryption" msgstr "" #: ../../source/overview_encryption.rst:5 msgid "" "Swift supports the optional encryption of object data at rest on storage " "nodes. The encryption of object data is intended to mitigate the risk of " "users' data being read if an unauthorised party were to gain physical access " "to a disk." msgstr "" #: ../../source/overview_encryption.rst:11 msgid "" "Swift's data-at-rest encryption accepts plaintext object data from the " "client, encrypts it in the cluster, and stores the encrypted data. This " "protects object data from inadvertently being exposed if a data drive leaves " "the Swift cluster. If a user wishes to ensure that the plaintext data is " "always encrypted while in transit and in storage, it is strongly recommended " "that the data be encrypted before sending it to the Swift cluster. " "Encrypting on the client side is the only way to ensure that the data is " "fully encrypted for its entire lifecycle." msgstr "" #: ../../source/overview_encryption.rst:20 msgid "" "Encryption of data at rest is implemented by middleware that may be included " "in the proxy server WSGI pipeline. The feature is internal to a Swift " "cluster and not exposed through the API. Clients are unaware that data is " "encrypted by this feature internally to the Swift service; internally " "encrypted data should never be returned to clients via the Swift API." msgstr "" #: ../../source/overview_encryption.rst:26 msgid "The following data are encrypted while at rest in Swift:" msgstr "" #: ../../source/overview_encryption.rst:28 msgid "Object content i.e. the content of an object PUT request's body" msgstr "" #: ../../source/overview_encryption.rst:29 msgid "The entity tag (ETag) of objects that have non-zero content" msgstr "" #: ../../source/overview_encryption.rst:30 msgid "" "All custom user object metadata values i.e. metadata sent using X-Object-" "Meta- prefixed headers with PUT or POST requests" msgstr "" #: ../../source/overview_encryption.rst:33 msgid "" "Any data or metadata not included in the list above are not encrypted, " "including:" msgstr "" #: ../../source/overview_encryption.rst:36 msgid "Account, container and object names" msgstr "" #: ../../source/overview_encryption.rst:37 msgid "Account and container custom user metadata values" msgstr "" #: ../../source/overview_encryption.rst:38 msgid "All custom user metadata names" msgstr "" #: ../../source/overview_encryption.rst:39 msgid "Object Content-Type values" msgstr "" #: ../../source/overview_encryption.rst:40 msgid "Object size" msgstr "" #: ../../source/overview_encryption.rst:41 msgid "System metadata" msgstr "" #: ../../source/overview_encryption.rst:45 msgid "" "This feature is intended to provide `confidentiality` of data that is at " "rest i.e. to protect user data from being read by an attacker that gains " "access to disks on which object data is stored." msgstr "" #: ../../source/overview_encryption.rst:49 msgid "" "This feature is not intended to prevent undetectable `modification` of user " "data at rest." msgstr "" #: ../../source/overview_encryption.rst:52 msgid "" "This feature is not intended to protect against an attacker that gains " "access to Swift's internal network connections, or gains access to key " "material or is able to modify the Swift code running on Swift nodes." msgstr "" #: ../../source/overview_encryption.rst:62 msgid "" "Encryption is deployed by adding two middleware filters to the proxy server " "WSGI pipeline and including their respective filter configuration sections " "in the `proxy-server.conf` file. :ref:`Additional steps " "` are required if the container sync feature " "is being used." msgstr "" #: ../../source/overview_encryption.rst:68 msgid "" "The `keymaster` and `encryption` middleware filters must be to the right of " "all other middleware in the pipeline apart from the final proxy-logging " "middleware, and in the order shown in this example::" msgstr "" #: ../../source/overview_encryption.rst:82 msgid "" "See the `proxy-server.conf-sample` file for further details on the " "middleware configuration options." msgstr "" #: ../../source/overview_encryption.rst:86 msgid "Keymaster middleware" msgstr "" #: ../../source/overview_encryption.rst:88 msgid "" "The `keymaster` middleware must be configured with a root secret before it " "is used. By default the `keymaster` middleware will use the root secret " "configured using the ``encryption_root_secret`` option in the middleware " "filter section of the `proxy-server.conf` file, for example::" msgstr "" #: ../../source/overview_encryption.rst:97 msgid "" "Root secret values MUST be at least 44 valid base-64 characters and should " "be consistent across all proxy servers. The minimum length of 44 has been " "chosen because it is the length of a base-64 encoded 32 byte value." msgstr "" #: ../../source/overview_encryption.rst:103 msgid "" "The ``encryption_root_secret`` option holds the master secret key used for " "encryption. The security of all encrypted data critically depends on this " "key and it should therefore be set to a high-entropy value. For example, a " "suitable ``encryption_root_secret`` may be obtained by base-64 encoding a 32 " "byte (or longer) value generated by a cryptographically secure random number " "generator." msgstr "" #: ../../source/overview_encryption.rst:110 msgid "" "The ``encryption_root_secret`` value is necessary to recover any encrypted " "data from the storage system, and therefore, it must be guarded against " "accidental loss. Its value (and consequently, the proxy-server.conf file) " "should not be stored on any disk that is in any account, container or object " "ring." msgstr "" #: ../../source/overview_encryption.rst:116 msgid "" "The ``encryption_root_secret`` value should not be changed once deployed. " "Doing so would prevent Swift from properly decrypting data that was " "encrypted using the former value, and would therefore result in the loss of " "that data." msgstr "" #: ../../source/overview_encryption.rst:121 msgid "" "One method for generating a suitable value for ``encryption_root_secret`` is " "to use the ``openssl`` command line tool::" msgstr "" #: ../../source/overview_encryption.rst:128 msgid "Separate keymaster configuration file" msgstr "" #: ../../source/overview_encryption.rst:130 msgid "" "The ``encryption_root_secret`` option may alternatively be specified in a " "separate config file at a path specified by the ``keymaster_config_path`` " "option, for example::" msgstr "" #: ../../source/overview_encryption.rst:138 msgid "" "This has the advantage of allowing multiple processes which need to be " "encryption-aware (for example, proxy-server and container-sync) to share the " "same config file, ensuring that consistent encryption keys are used by those " "processes. It also allows the keymaster configuration file to have different " "permissions than the `proxy-server.conf` file." msgstr "" #: ../../source/overview_encryption.rst:144 msgid "" "A separate keymaster config file should have a ``[keymaster]`` section " "containing the ``encryption_root_secret`` option::" msgstr "" #: ../../source/overview_encryption.rst:153 msgid "" "Alternative keymaster middleware is available to retrieve encryption root " "secrets from an :ref:`external key management system " "` such as `Barbican `_ rather than storing root secrets in configuration " "files." msgstr "" #: ../../source/overview_encryption.rst:159 #: ../../source/overview_encryption.rst:228 msgid "" "Once deployed, the encryption filter will by default encrypt object data and " "metadata when handling PUT and POST requests and decrypt object data and " "metadata when handling GET and HEAD requests. COPY requests are transformed " "into GET and PUT requests by the :ref:`copy` middleware before reaching the " "encryption middleware and as a result object data and metadata is decrypted " "and re-encrypted when copied." msgstr "" #: ../../source/overview_encryption.rst:169 msgid "Changing the encryption root secret" msgstr "" #: ../../source/overview_encryption.rst:171 msgid "" "From time to time it may be desirable to change the root secret that is used " "to derive encryption keys for new data written to the cluster. The " "`keymaster` middleware allows alternative root secrets to be specified in " "its configuration using options of the form::" msgstr "" #: ../../source/overview_encryption.rst:178 msgid "" "where ``secret_id`` is a unique identifier for the root secret and ``secret " "value`` is a value that meets the requirements for a root secret described " "above." msgstr "" #: ../../source/overview_encryption.rst:182 msgid "" "Only one root secret is used to encrypt new data at any moment in time. This " "root secret is specified using the ``active_root_secret_id`` option. If " "specified, the value of this option should be one of the configured root " "secret ``secret_id`` values; otherwise the value of " "``encryption_root_secret`` will be taken as the default active root secret." msgstr "" #: ../../source/overview_encryption.rst:190 msgid "" "The active root secret is only used to derive keys for new data written to " "the cluster. Changing the active root secret does not cause any existing " "data to be re-encrypted." msgstr "" #: ../../source/overview_encryption.rst:194 msgid "" "Existing encrypted data will be decrypted using the root secret that was " "active when that data was written. All previous active root secrets must " "therefore remain in the middleware configuration in order for decryption of " "existing data to succeed. Existing encrypted data will reference previous " "root secret by the ``secret_id`` so it must be kept consistent in the " "configuration." msgstr "" #: ../../source/overview_encryption.rst:202 msgid "" "Do not remove or change any previously active ```` or " "````." msgstr "" #: ../../source/overview_encryption.rst:204 msgid "" "For example, the following keymaster configuration file specifies three root " "secrets, with the value of ``encryption_root_secret_2`` being the current " "active root secret::" msgstr "" #: ../../source/overview_encryption.rst:216 msgid "" "To ensure there is no loss of data availability, deploying a new key to your " "cluster requires a two-stage config change. First, add the new key to the " "``encryption_root_secret_`` option and restart the proxy-server. " "Do this for all proxies. Next, set the ``active_root_secret_id`` option to " "the new secret id and restart the proxy. Again, do this for all proxies. " "This process ensures that all proxies will have the new key available for " "*decryption* before any proxy uses it for *encryption*." msgstr "" #: ../../source/overview_encryption.rst:226 #: ../../source/overview_encryption.rst:610 msgid "Encryption middleware" msgstr "" #: ../../source/overview_encryption.rst:239 msgid "Encryption Root Secret in External Key Management System" msgstr "" #: ../../source/overview_encryption.rst:241 msgid "" "The benefits of using a dedicated system for storing the encryption root " "secret include the auditing and access control infrastructure that are " "already in place in such a system, and the fact that an encryption root " "secret stored in a key management system (KMS) may be backed by a hardware " "security module (HSM) for additional security. Another significant benefit " "of storing the root encryption secret in an external KMS is that it is in " "this case never stored on a disk in the Swift cluster." msgstr "" #: ../../source/overview_encryption.rst:249 msgid "" "Swift supports fetching encryption root secrets from a `Barbican `_ service or a KMIP_ service using the " "``kms_keymaster`` or ``kmip_keymaster`` middleware respectively." msgstr "" #: ../../source/overview_encryption.rst:256 msgid "Encryption Root Secret in a Barbican KMS" msgstr "" #: ../../source/overview_encryption.rst:258 msgid "" "Make sure the required dependencies are installed for retrieving an " "encryption root secret from an external KMS. This can be done when " "installing Swift (add the ``-e`` flag to install as a development version) " "by changing to the Swift directory and running the following command to " "install Swift together with the ``kms_keymaster`` extra dependencies::" msgstr "" #: ../../source/overview_encryption.rst:266 msgid "" "Another way to install the dependencies is by making sure the following " "lines exist in the requirements.txt file, and installing them using ``pip " "install -r requirements.txt``::" msgstr "" #: ../../source/overview_encryption.rst:275 msgid "" "If any of the required packages is already installed, the ``--upgrade`` flag " "may be required for the ``pip`` commands in order for the required minimum " "version to be installed." msgstr "" #: ../../source/overview_encryption.rst:279 msgid "" "To make use of an encryption root secret stored in an external KMS, replace " "the keymaster middleware with the kms_keymaster middleware in the proxy " "server WSGI pipeline in `proxy-server.conf`, in the order shown in this " "example::" msgstr "" #: ../../source/overview_encryption.rst:286 msgid "and add a section to the same file::" msgstr "" #: ../../source/overview_encryption.rst:292 msgid "" "Create or edit the file `file_with_kms_keymaster_config` referenced above. " "For further details on the middleware configuration options, see the " "`keymaster.conf-sample` file. An example of the content of this file, with " "optional parameters omitted, is below::" msgstr "" #: ../../source/overview_encryption.rst:304 msgid "" "The encryption root secret shall be created and stored in the external key " "management system before it can be used by the keymaster. It shall be stored " "as a symmetric key, with content type ``application/octet-stream``, " "``base64`` content encoding, ``AES`` algorithm, bit length ``256``, and " "secret type ``symmetric``. The mode ``ctr`` may also be stored for " "informational purposes - it is not currently checked by the keymaster." msgstr "" #: ../../source/overview_encryption.rst:311 msgid "" "The following command can be used to store the currently configured " "``encryption_root_secret`` value from the `proxy-server.conf` file in " "Barbican::" msgstr "" #: ../../source/overview_encryption.rst:320 msgid "" "Alternatively, the existing root secret can also be stored in Barbican using " "`curl `__." msgstr "" #: ../../source/overview_encryption.rst:325 msgid "" "The credentials used to store the secret in Barbican shall be the same ones " "that the proxy server uses to retrieve the secret, i.e., the ones configured " "in the `keymaster.conf` file. For clarity reasons the commands shown here " "omit the credentials - they may be specified explicitly, or in environment " "variables." msgstr "" #: ../../source/overview_encryption.rst:331 msgid "" "Instead of using an existing root secret, Barbican can also be asked to " "generate a new 256-bit root secret, with content type ``application/octet-" "stream`` and algorithm ``AES`` (the ``mode`` parameter is currently " "optional)::" msgstr "" #: ../../source/overview_encryption.rst:340 msgid "" "The ``order create`` creates an asynchronous request to create the actual " "secret. The order can be retrieved using ``openstack secret order get``, and " "once the order completes successfully, the output will show the key id of " "the generated root secret. Keys currently stored in Barbican can be listed " "using the ``openstack secret list`` command." msgstr "" #: ../../source/overview_encryption.rst:350 msgid "" "Both the order (the asynchronous request for creating or storing a secret), " "and the actual secret itself, have similar unique identifiers. Once the " "order has been completed, the key id is shown in the output of the ``order " "get`` command." msgstr "" #: ../../source/overview_encryption.rst:355 msgid "" "The keymaster uses the explicitly configured username and password (and " "project name etc.) from the `keymaster.conf` file for retrieving the " "encryption root secret from an external key management system. The " "`Castellan library `_ is used " "to communicate with Barbican." msgstr "" #: ../../source/overview_encryption.rst:361 msgid "" "For the proxy server, reading the encryption root secret directly from the " "`proxy-server.conf` file, from the `keymaster.conf` file pointed to from the " "`proxy-server.conf` file, or from an external key management system such as " "Barbican, are all functionally equivalent. In case reading the encryption " "root secret from the external key management system fails, the proxy server " "will not start up. If the encryption root secret is retrieved successfully, " "it is cached in memory in the proxy server." msgstr "" #: ../../source/overview_encryption.rst:369 msgid "" "For further details on the configuration options, see the `[filter:" "kms_keymaster]` section in the `proxy-server.conf-sample` file, and the " "`keymaster.conf-sample` file." msgstr "" #: ../../source/overview_encryption.rst:375 msgid "Encryption Root Secret in a KMIP service" msgstr "" #: ../../source/overview_encryption.rst:377 msgid "" "This middleware enables Swift to fetch a root secret from a KMIP_ service. " "The root secret is expected to have been previously created in the KMIP_ " "service and is referenced by its unique identifier. The secret should be an " "AES-256 symmetric key." msgstr "" #: ../../source/overview_encryption.rst:382 msgid "" "To use this middleware Swift must be installed with the extra required " "dependencies::" msgstr "" #: ../../source/overview_encryption.rst:387 msgid "Add the ``-e`` flag to install as a development version." msgstr "" #: ../../source/overview_encryption.rst:389 msgid "" "Edit the swift `proxy-server.conf` file to insert the middleware in the wsgi " "pipeline, replacing any other keymaster middleware::" msgstr "" #: ../../source/overview_encryption.rst:396 msgid "and add a new filter section::" msgstr "" #: ../../source/overview_encryption.rst:409 msgid "" "Apart from ``use`` and ``key_id`` the options are as defined for a PyKMIP " "client. The authoritative definition of these options can be found at " "``_." msgstr "" #: ../../source/overview_encryption.rst:413 msgid "" "The value of the ``key_id`` option should be the unique identifier for a " "secret that will be retrieved from the KMIP_ service." msgstr "" #: ../../source/overview_encryption.rst:416 msgid "" "The keymaster configuration can alternatively be defined in a separate " "config file by using the ``keymaster_config_path`` option::" msgstr "" #: ../../source/overview_encryption.rst:423 msgid "" "In this case, the ``filter:kmip_keymaster`` section should contain no other " "options than ``use`` and ``keymaster_config_path``. All other options should " "be defined in the separate config file in a section named " "``kmip_keymaster``. For example::" msgstr "" #: ../../source/overview_encryption.rst:439 msgid "Changing the encryption root secret of external KMS's" msgstr "" #: ../../source/overview_encryption.rst:441 msgid "" "Because the KMS and KMIP keymaster's derive from the default KeyMaster they " "also have to ability to define multiple keys. The only difference is the key " "option names. Instead of using the form `encryption_root_secret_` " "both external KMS's use `key_id_`, as it is an extension of their " "existing configuration. For example::" msgstr "" #: ../../source/overview_encryption.rst:454 msgid "" "Other then that, the process is the same as :ref:`changing_the_root_secret`." msgstr "" #: ../../source/overview_encryption.rst:459 msgid "" "When upgrading an existing cluster to deploy encryption, the following " "sequence of steps is recommended:" msgstr "" #: ../../source/overview_encryption.rst:462 msgid "Upgrade all object servers" msgstr "" #: ../../source/overview_encryption.rst:463 msgid "Upgrade all proxy servers" msgstr "" #: ../../source/overview_encryption.rst:464 msgid "" "Add keymaster and encryption middlewares to every proxy server's middleware " "pipeline with the encryption ``disable_encryption`` option set to ``True`` " "and the keymaster ``encryption_root_secret`` value set as described above." msgstr "" #: ../../source/overview_encryption.rst:467 msgid "If required, follow the steps for :ref:`container_sync_client_config`." msgstr "" #: ../../source/overview_encryption.rst:468 msgid "" "Finally, change the encryption ``disable_encryption`` option to ``False``" msgstr "" #: ../../source/overview_encryption.rst:470 msgid "" "Objects that existed in the cluster prior to the keymaster and encryption " "middlewares being deployed are still readable with GET and HEAD requests. " "The content of those objects will not be encrypted unless they are written " "again by a PUT or COPY request. Any user metadata of those objects will not " "be encrypted unless it is written again by a PUT, POST or COPY request." msgstr "" #: ../../source/overview_encryption.rst:477 msgid "Disabling Encryption" msgstr "" #: ../../source/overview_encryption.rst:479 msgid "" "Once deployed, the keymaster and encryption middlewares should not be " "removed from the pipeline. To do so will cause encrypted object data and/or " "metadata to be returned in response to GET or HEAD requests for objects that " "were previously encrypted." msgstr "" #: ../../source/overview_encryption.rst:484 msgid "" "Encryption of inbound object data may be disabled by setting the encryption " "``disable_encryption`` option to ``True``, in which case existing encrypted " "objects will remain encrypted but new data written with PUT, POST or COPY " "requests will not be encrypted. The keymaster and encryption middlewares " "should remain in the pipeline even when encryption of new objects is not " "required. The encryption middleware is needed to handle GET requests for " "objects that may have been previously encrypted. The keymaster is needed to " "provide keys for those requests." msgstr "" #: ../../source/overview_encryption.rst:496 msgid "Container sync configuration" msgstr "" #: ../../source/overview_encryption.rst:498 msgid "" "If container sync is being used then the keymaster and encryption " "middlewares must be added to the container sync internal client pipeline. " "The following configuration steps are required:" msgstr "" #: ../../source/overview_encryption.rst:502 msgid "" "Create a custom internal client configuration file for container sync (if " "one is not already in use) based on the sample file `internal-client.conf-" "sample`. For example, copy `internal-client.conf-sample` to `/etc/swift/" "container-sync-client.conf`." msgstr "" #: ../../source/overview_encryption.rst:506 msgid "" "Modify this file to include the middlewares in the pipeline in the same way " "as described above for the proxy server." msgstr "" #: ../../source/overview_encryption.rst:508 msgid "" "Modify the container-sync section of all container server config files to " "point to this internal client config file using the " "``internal_client_conf_path`` option. For example::" msgstr "" #: ../../source/overview_encryption.rst:516 msgid "" "The ``encryption_root_secret`` value is necessary to recover any encrypted " "data from the storage system, and therefore, it must be guarded against " "accidental loss. Its value (and consequently, the custom internal client " "configuration file) should not be stored on any disk that is in any account, " "container or object ring." msgstr "" #: ../../source/overview_encryption.rst:524 msgid "" "These container sync configuration steps will be necessary for container " "sync probe tests to pass if the encryption middlewares are included in the " "proxy pipeline of a test cluster." msgstr "" #: ../../source/overview_encryption.rst:530 msgid "Implementation" msgstr "" #: ../../source/overview_encryption.rst:533 msgid "Encryption scheme" msgstr "" #: ../../source/overview_encryption.rst:535 msgid "" "Plaintext data is encrypted to ciphertext using the AES cipher with 256-bit " "keys implemented by the python `cryptography package `_. The cipher is used in counter (CTR) mode so that " "any byte or range of bytes in the ciphertext may be decrypted independently " "of any other bytes in the ciphertext. This enables very simple handling of " "ranged GETs." msgstr "" #: ../../source/overview_encryption.rst:542 msgid "" "In general an item of unencrypted data, ``plaintext``, is transformed to an " "item of encrypted data, ``ciphertext``::" msgstr "" #: ../../source/overview_encryption.rst:547 msgid "" "where ``E`` is the encryption function, ``k`` is an encryption key and " "``iv`` is a unique initialization vector (IV) chosen for each encryption " "context. For example, the object body is one encryption context with a " "randomly chosen IV. The IV is stored as metadata of the encrypted item so " "that it is available for decryption::" msgstr "" #: ../../source/overview_encryption.rst:555 msgid "where ``D`` is the decryption function." msgstr "" #: ../../source/overview_encryption.rst:557 msgid "" "The implementation of CTR mode follows `NIST SP800-38A `_, and the full IV passed to " "the encryption or decryption function serves as the initial counter block." msgstr "" #: ../../source/overview_encryption.rst:562 msgid "" "In general any encrypted item has accompanying crypto-metadata that " "describes the IV and the cipher algorithm used for the encryption::" msgstr "" #: ../../source/overview_encryption.rst:568 msgid "" "This crypto-metadata is stored either with the ciphertext (for user metadata " "and etags) or as a separate header (for object bodies)." msgstr "" #: ../../source/overview_encryption.rst:572 msgid "Key management" msgstr "" #: ../../source/overview_encryption.rst:574 msgid "" "A keymaster middleware is responsible for providing the keys required for " "each encryption and decryption operation. Two keys are required when " "handling object requests: a `container key` that is uniquely associated with " "the container path and an `object key` that is uniquely associated with the " "object path. These keys are made available to the encryption middleware via " "a callback function that the keymaster installs in the WSGI request environ." msgstr "" #: ../../source/overview_encryption.rst:581 msgid "" "The current keymaster implementation derives container and object keys from " "the ``encryption_root_secret`` in a deterministic way by constructing a " "SHA256 HMAC using the ``encryption_root_secret`` as a key and the container " "or object path as a message, for example::" msgstr "" #: ../../source/overview_encryption.rst:588 msgid "" "Other strategies for providing object and container keys may be employed by " "future implementations of alternative keymaster middleware." msgstr "" #: ../../source/overview_encryption.rst:591 msgid "" "During each object PUT, a random key is generated to encrypt the object " "body. This random key is then encrypted using the object key provided by the " "keymaster. This makes it safe to store the encrypted random key alongside " "the encrypted object data and metadata." msgstr "" #: ../../source/overview_encryption.rst:596 msgid "" "This process of `key wrapping` enables more efficient re-keying events when " "the object key may need to be replaced and consequently any data encrypted " "using that key must be re-encrypted. Key wrapping minimizes the amount of " "data encrypted using those keys to just other randomly chosen keys which can " "be re-wrapped efficiently without needing to re-encrypt the larger amounts " "of data that were encrypted using the random keys." msgstr "" #: ../../source/overview_encryption.rst:605 msgid "" "Re-keying is not currently implemented. Key wrapping is implemented in " "anticipation of future re-keying operations." msgstr "" #: ../../source/overview_encryption.rst:612 msgid "" "The encryption middleware is composed of an `encrypter` component and a " "`decrypter` component." msgstr "" #: ../../source/overview_encryption.rst:616 msgid "Encrypter operation" msgstr "" #: ../../source/overview_encryption.rst:619 msgid "Custom user metadata" msgstr "" #: ../../source/overview_encryption.rst:621 msgid "" "The encrypter encrypts each item of custom user metadata using the object " "key provided by the keymaster and an IV that is randomly chosen for that " "metadata item. The encrypted values are stored as :ref:`transient_sysmeta` " "with associated crypto-metadata appended to the encrypted value. For " "example::" msgstr "" #: ../../source/overview_encryption.rst:629 msgid "are transformed to::" msgstr "" #: ../../source/overview_encryption.rst:638 msgid "The unencrypted custom user metadata headers are removed." msgstr "" #: ../../source/overview_encryption.rst:641 msgid "Object body" msgstr "" #: ../../source/overview_encryption.rst:643 msgid "" "Encryption of an object body is performed using a randomly chosen body key " "and a randomly chosen IV::" msgstr "" #: ../../source/overview_encryption.rst:648 msgid "" "The body_key is wrapped using the object key provided by the keymaster and a " "randomly chosen IV::" msgstr "" #: ../../source/overview_encryption.rst:653 msgid "" "The encrypter stores the associated crypto-metadata in a system metadata " "header::" msgstr "" #: ../../source/overview_encryption.rst:662 msgid "" "Note that in this case there is an extra item of crypto-metadata which " "stores the wrapped body key and its IV." msgstr "" #: ../../source/overview_encryption.rst:666 msgid "Entity tag" msgstr "" #: ../../source/overview_encryption.rst:668 msgid "" "While encrypting the object body the encrypter also calculates the ETag (md5 " "digest) of the plaintext body. This value is encrypted using the object key " "provided by the keymaster and a randomly chosen IV, and saved as an item of " "system metadata, with associated crypto-metadata appended to the encrypted " "value::" msgstr "" #: ../../source/overview_encryption.rst:678 msgid "" "The encrypter also forces an encrypted version of the plaintext ETag to be " "sent with container updates by adding an update override header to the PUT " "request. The associated crypto-metadata is appended to the encrypted ETag " "value of this update override header::" msgstr "" #: ../../source/overview_encryption.rst:687 msgid "" "The container key is used for this encryption so that the decrypter is able " "to decrypt the ETags in container listings when handling a container " "request, since object keys may not be available in that context." msgstr "" #: ../../source/overview_encryption.rst:691 msgid "" "Since the plaintext ETag value is only known once the encrypter has " "completed processing the entire object body, the ``X-Object-Sysmeta-Crypto-" "Etag`` and ``X-Object-Sysmeta-Container-Update-Override-Etag`` headers are " "sent after the encrypted object body using the proxy server's support for " "request footers." msgstr "" #: ../../source/overview_encryption.rst:699 msgid "Conditional Requests" msgstr "" #: ../../source/overview_encryption.rst:701 msgid "" "In general, an object server evaluates conditional requests with ``If[-None]-" "Match`` headers by comparing values listed in an ``If[-None]-Match`` header " "against the ETag that is stored in the object metadata. This is not possible " "when the ETag stored in object metadata has been encrypted. The encrypter " "therefore calculates an HMAC using the object key and the ETag while " "handling object PUT requests, and stores this under the metadata key ``X-" "Object-Sysmeta-Crypto-Etag-Mac``::" msgstr "" #: ../../source/overview_encryption.rst:711 msgid "" "Like other ETag-related metadata, this is sent after the encrypted object " "body using the proxy server's support for request footers." msgstr "" #: ../../source/overview_encryption.rst:714 msgid "" "The encrypter similarly calculates an HMAC for each ETag value included in " "``If[-None]-Match`` headers of conditional GET or HEAD requests, and appends " "these to the ``If[-None]-Match`` header. The encrypter also sets the ``X-" "Backend-Etag-Is-At`` header to point to the previously stored ``X-Object-" "Sysmeta-Crypto-Etag-Mac`` metadata so that the object server evaluates the " "conditional request by comparing the HMAC values included in the ``If[-None]-" "Match`` with the value stored under ``X-Object-Sysmeta-Crypto-Etag-Mac``. " "For example, given a conditional request with header::" msgstr "" #: ../../source/overview_encryption.rst:726 msgid "the encrypter would transform the request headers to include::" msgstr "" #: ../../source/overview_encryption.rst:731 msgid "" "This enables the object server to perform an encrypted comparison to check " "whether the ETags match, without leaking the ETag itself or leaking " "information about the object body." msgstr "" #: ../../source/overview_encryption.rst:736 msgid "Decrypter operation" msgstr "" #: ../../source/overview_encryption.rst:738 msgid "" "For each GET or HEAD request to an object, the decrypter inspects the " "response for encrypted items (revealed by crypto-metadata headers), and if " "any are discovered then it will:" msgstr "" #: ../../source/overview_encryption.rst:742 msgid "Fetch the object and container keys from the keymaster via its callback" msgstr "" #: ../../source/overview_encryption.rst:743 msgid "Decrypt the ``X-Object-Sysmeta-Crypto-Etag`` value" msgstr "" #: ../../source/overview_encryption.rst:744 msgid "Decrypt the ``X-Object-Sysmeta-Container-Update-Override-Etag`` value" msgstr "" #: ../../source/overview_encryption.rst:745 msgid "Decrypt metadata header values using the object key" msgstr "" #: ../../source/overview_encryption.rst:746 msgid "" "Decrypt the wrapped body key found in ``X-Object-Sysmeta-Crypto-Body-Meta``" msgstr "" #: ../../source/overview_encryption.rst:747 msgid "Decrypt the body using the body key" msgstr "" #: ../../source/overview_encryption.rst:749 msgid "" "For each GET request to a container that would include ETags in its response " "body, the decrypter will:" msgstr "" #: ../../source/overview_encryption.rst:752 msgid "GET the response body with the container listing" msgstr "" #: ../../source/overview_encryption.rst:753 msgid "Fetch the container key from the keymaster via its callback" msgstr "" #: ../../source/overview_encryption.rst:754 msgid "" "Decrypt any encrypted ETag entries in the container listing using the " "container key" msgstr "" #: ../../source/overview_encryption.rst:759 msgid "Impact on other Swift services and features" msgstr "" #: ../../source/overview_encryption.rst:761 msgid "" "Encryption has no impact on :ref:`versioned_writes` other than that any " "previously unencrypted objects will be encrypted as they are copied to or " "from the versions container. Keymaster and encryption middlewares should be " "placed after ``versioned_writes`` in the proxy server pipeline, as described " "in :ref:`encryption_deployment`." msgstr "" #: ../../source/overview_encryption.rst:767 msgid "" "`Container Sync` uses an internal client to GET objects that are to be " "sync'd. This internal client must be configured to use the keymaster and " "encryption middlewares as described :ref:`above " "`." msgstr "" #: ../../source/overview_encryption.rst:771 msgid "" "Encryption has no impact on the `object-auditor` service. Since the ETag " "header saved with the object at rest is the md5 sum of the encrypted object " "body then the auditor will verify that encrypted data is valid." msgstr "" #: ../../source/overview_encryption.rst:775 msgid "" "Encryption has no impact on the `object-expirer` service. ``X-Delete-At`` " "and ``X-Delete-After`` headers are not encrypted." msgstr "" #: ../../source/overview_encryption.rst:778 msgid "" "Encryption has no impact on the `object-replicator` and `object-" "reconstructor` services. These services are unaware of the object or EC " "fragment data being encrypted." msgstr "" #: ../../source/overview_encryption.rst:782 msgid "" "Encryption has no impact on the `container-reconciler` service. The " "`container-reconciler` uses an internal client to move objects between " "different policy rings. The reconciler's pipeline *MUST NOT* have encryption " "enabled. The destination object has the same URL as the source object and " "the object is moved without re-encryption." msgstr "" #: ../../source/overview_encryption.rst:790 msgid "Considerations for developers" msgstr "" #: ../../source/overview_encryption.rst:792 msgid "" "Developers should be aware that keymaster and encryption middlewares rely on " "the path of an object remaining unchanged. The included keymaster derives " "keys for containers and objects based on their paths and the " "``encryption_root_secret``. The keymaster does not rely on object metadata " "to inform its generation of keys for GET and HEAD requests because when " "handling :ref:`conditional_requests` it is required to provide the object " "key before any metadata has been read from the object." msgstr "" #: ../../source/overview_encryption.rst:800 msgid "" "Developers should therefore give careful consideration to any new features " "that would relocate object data and metadata within a Swift cluster by means " "that do not cause the object data and metadata to pass through the " "encryption middlewares in the proxy pipeline and be re-encrypted." msgstr "" #: ../../source/overview_encryption.rst:805 msgid "" "The crypto-metadata associated with each encrypted item does include some " "`key_id` metadata that is provided by the keymaster and contains the path " "used to derive keys. This `key_id` metadata is persisted in anticipation of " "future scenarios when it may be necessary to decrypt an object that has been " "relocated without re-encrypting, in which case the metadata could be used to " "derive the keys that were used for encryption. However, this alone is not " "sufficient to handle conditional requests and to decrypt container listings " "where objects have been relocated, and further work will be required to " "solve those issues." msgstr "" #: ../../source/overview_erasure_code.rst:3 msgid "Erasure Code Support" msgstr "" #: ../../source/overview_erasure_code.rst:7 msgid "History and Theory of Operation" msgstr "" #: ../../source/overview_erasure_code.rst:9 msgid "" "There's a lot of good material out there on Erasure Code (EC) theory, this " "short introduction is just meant to provide some basic context to help the " "reader better understand the implementation in Swift." msgstr "" #: ../../source/overview_erasure_code.rst:13 msgid "" "Erasure Coding for storage applications grew out of Coding Theory as far " "back as the 1960s with the Reed-Solomon codes. These codes have been used " "for years in applications ranging from CDs to DVDs to general communications " "and, yes, even in the space program starting with Voyager! The basic idea is " "that some amount of data is broken up into smaller pieces called fragments " "and coded in such a way that it can be transmitted with the ability to " "tolerate the loss of some number of the coded fragments. That's where the " "word \"erasure\" comes in, if you transmit 14 fragments and only 13 are " "received then one of them is said to be \"erased\". The word \"erasure\" " "provides an important distinction with EC; it isn't about detecting errors, " "it's about dealing with failures. Another important element of EC is that " "the number of erasures that can be tolerated can be adjusted to meet the " "needs of the application." msgstr "" #: ../../source/overview_erasure_code.rst:26 msgid "" "At a high level EC works by using a specific scheme to break up a single " "data buffer into several smaller data buffers then, depending on the scheme, " "performing some encoding operation on that data in order to generate " "additional information. So you end up with more data than you started with " "and that extra data is often called \"parity\". Note that there are many, " "many different encoding techniques that vary both in how they organize and " "manipulate the data as well by what means they use to calculate parity. For " "example, one scheme might rely on `Galois Field Arithmetic `_ while others may work with only XOR. " "The number of variations and details about their differences are well beyond " "the scope of this introduction, but we will talk more about a few of them " "when we get into the implementation of EC in Swift." msgstr "" #: ../../source/overview_erasure_code.rst:40 msgid "Overview of EC Support in Swift" msgstr "" #: ../../source/overview_erasure_code.rst:42 msgid "" "First and foremost, from an application perspective EC support is totally " "transparent. There are no EC related external API; a container is simply " "created using a Storage Policy defined to use EC and then interaction with " "the cluster is the same as any other durability policy." msgstr "" #: ../../source/overview_erasure_code.rst:47 msgid "" "EC is implemented in Swift as a Storage Policy, see :doc:`overview_policies` " "for complete details on Storage Policies. Because support is implemented as " "a Storage Policy, all of the storage devices associated with your cluster's " "EC capability can be isolated. It is entirely possible to share devices " "between storage policies, but for EC it may make more sense to not only use " "separate devices but possibly even entire nodes dedicated for EC." msgstr "" #: ../../source/overview_erasure_code.rst:54 msgid "" "Which direction one chooses depends on why the EC policy is being deployed. " "If, for example, there is a production replication policy in place already " "and the goal is to add a cold storage tier such that the existing nodes " "performing replication are impacted as little as possible, adding a new set " "of nodes dedicated to EC might make the most sense but also incurs the most " "cost. On the other hand, if EC is being added as a capability to provide " "additional durability for a specific set of applications and the existing " "infrastructure is well suited for EC (sufficient number of nodes, zones for " "the EC scheme that is chosen) then leveraging the existing infrastructure " "such that the EC ring shares nodes with the replication ring makes the most " "sense. These are some of the main considerations:" msgstr "" #: ../../source/overview_erasure_code.rst:66 msgid "Layout of existing infrastructure." msgstr "" #: ../../source/overview_erasure_code.rst:67 msgid "Cost of adding dedicated EC nodes (or just dedicated EC devices)." msgstr "" #: ../../source/overview_erasure_code.rst:68 msgid "Intended usage model(s)." msgstr "" #: ../../source/overview_erasure_code.rst:70 msgid "" "The Swift code base does not include any of the algorithms necessary to " "perform the actual encoding and decoding of data; that is left to external " "libraries. The Storage Policies architecture is leveraged to enable EC on a " "per container basis -- the object rings are still used to determine the " "placement of EC data fragments. Although there are several code paths that " "are unique to an operation associated with an EC policy, an external " "dependency to an Erasure Code library is what Swift counts on to perform the " "low level EC functions. The use of an external library allows for maximum " "flexibility as there are a significant number of options out there, each " "with its owns pros and cons that can vary greatly from one use case to " "another." msgstr "" #: ../../source/overview_erasure_code.rst:82 msgid "PyECLib: External Erasure Code Library" msgstr "" #: ../../source/overview_erasure_code.rst:84 msgid "" "PyECLib is a Python Erasure Coding Library originally designed and written " "as part of the effort to add EC support to the Swift project, however it is " "an independent project. The library provides a well-defined and simple " "Python interface and internally implements a plug-in architecture allowing " "it to take advantage of many well-known C libraries such as:" msgstr "" #: ../../source/overview_erasure_code.rst:90 msgid "Jerasure and GFComplete at http://jerasure.org." msgstr "" #: ../../source/overview_erasure_code.rst:91 msgid "" "Intel(R) ISA-L at http://01.org/intel%C2%AE-storage-acceleration-library-" "open-source-version." msgstr "" #: ../../source/overview_erasure_code.rst:92 msgid "Or write your own!" msgstr "" #: ../../source/overview_erasure_code.rst:94 msgid "" "PyECLib uses a C based library called liberasurecode to implement the plug " "in infrastructure; liberasurecode is available at:" msgstr "" #: ../../source/overview_erasure_code.rst:97 msgid "liberasurecode: https://github.com/openstack/liberasurecode" msgstr "" #: ../../source/overview_erasure_code.rst:99 msgid "" "PyECLib itself therefore allows for not only choice but further " "extensibility as well. PyECLib also comes with a handy utility to help " "determine the best algorithm to use based on the equipment that will be used " "(processors and server configurations may vary in performance per " "algorithm). More on this will be covered in the configuration section. " "PyECLib is included as a Swift requirement." msgstr "" #: ../../source/overview_erasure_code.rst:106 msgid "" "For complete details see `PyECLib `_" msgstr "" #: ../../source/overview_erasure_code.rst:109 msgid "Storing and Retrieving Objects" msgstr "" #: ../../source/overview_erasure_code.rst:111 msgid "" "We will discuss the details of how PUT and GET work in the \"Under the " "Hood\" section later on. The key point here is that all of the erasure code " "work goes on behind the scenes; this summary is a high level information " "overview only." msgstr "" #: ../../source/overview_erasure_code.rst:115 msgid "The PUT flow looks like this:" msgstr "" #: ../../source/overview_erasure_code.rst:117 msgid "" "The proxy server streams in an object and buffers up \"a segment\" of data " "(size is configurable)." msgstr "" #: ../../source/overview_erasure_code.rst:119 msgid "" "The proxy server calls on PyECLib to encode the data into smaller fragments." msgstr "" #: ../../source/overview_erasure_code.rst:120 msgid "" "The proxy streams the encoded fragments out to the storage nodes based on " "ring locations." msgstr "" #: ../../source/overview_erasure_code.rst:122 msgid "Repeat until the client is done sending data." msgstr "" #: ../../source/overview_erasure_code.rst:123 msgid "The client is notified of completion when a quorum is met." msgstr "" #: ../../source/overview_erasure_code.rst:125 msgid "The GET flow looks like this:" msgstr "" #: ../../source/overview_erasure_code.rst:127 msgid "The proxy server makes simultaneous requests to participating nodes." msgstr "" #: ../../source/overview_erasure_code.rst:128 msgid "" "As soon as the proxy has the fragments it needs, it calls on PyECLib to " "decode the data." msgstr "" #: ../../source/overview_erasure_code.rst:130 #: ../../source/overview_erasure_code.rst:740 msgid "The proxy streams the decoded data it has back to the client." msgstr "" #: ../../source/overview_erasure_code.rst:131 msgid "Repeat until the proxy is done sending data back to the client." msgstr "" #: ../../source/overview_erasure_code.rst:133 msgid "" "It may sound like, from this high level overview, that using EC is going to " "cause an explosion in the number of actual files stored in each node's local " "file system. Although it is true that more files will be stored (because an " "object is broken into pieces), the implementation works to minimize this " "where possible, more details are available in the Under the Hood section." msgstr "" #: ../../source/overview_erasure_code.rst:140 msgid "Handoff Nodes" msgstr "" #: ../../source/overview_erasure_code.rst:142 msgid "" "In EC policies, similarly to replication, handoff nodes are a set of storage " "nodes used to augment the list of primary nodes responsible for storing an " "erasure coded object. These handoff nodes are used in the event that one or " "more of the primaries are unavailable. Handoff nodes are still selected " "with an attempt to achieve maximum separation of the data being placed." msgstr "" #: ../../source/overview_erasure_code.rst:151 msgid "" "For an EC policy, reconstruction is analogous to the process of replication " "for a replication type policy -- essentially \"the reconstructor\" replaces " "\"the replicator\" for EC policy types. The basic framework of " "reconstruction is very similar to that of replication with a few notable " "exceptions:" msgstr "" #: ../../source/overview_erasure_code.rst:156 msgid "" "Because EC does not actually replicate partitions, it needs to operate at a " "finer granularity than what is provided with rsync, therefore EC leverages " "much of ssync behind the scenes (you do not need to manually configure " "ssync)." msgstr "" #: ../../source/overview_erasure_code.rst:159 msgid "" "Once a pair of nodes has determined the need to replace a missing object " "fragment, instead of pushing over a copy like replication would do, the " "reconstructor has to read in enough surviving fragments from other nodes and " "perform a local reconstruction before it has the correct data to push to the " "other node." msgstr "" #: ../../source/overview_erasure_code.rst:164 msgid "" "A reconstructor does not talk to all other reconstructors in the set of " "nodes responsible for an EC partition, this would be far too chatty, instead " "each reconstructor is responsible for sync'ing with the partition's closest " "two neighbors (closest meaning left and right on the ring)." msgstr "" #: ../../source/overview_erasure_code.rst:171 msgid "" "EC work (encode and decode) takes place both on the proxy nodes, for PUT/GET " "operations, as well as on the storage nodes for reconstruction. As with " "replication, reconstruction can be the result of rebalancing, bit-rot, drive " "failure or reverting data from a hand-off node back to its primary." msgstr "" #: ../../source/overview_erasure_code.rst:178 msgid "Performance Considerations" msgstr "" #: ../../source/overview_erasure_code.rst:180 msgid "" "In general, EC has different performance characteristics than replicated " "data. EC requires substantially more CPU to read and write data, and is more " "suited for larger objects that are not frequently accessed (e.g. backups)." msgstr "" #: ../../source/overview_erasure_code.rst:184 msgid "" "Operators are encouraged to characterize the performance of various EC " "schemes and share their observations with the developer community." msgstr "" #: ../../source/overview_erasure_code.rst:192 msgid "Using an Erasure Code Policy" msgstr "" #: ../../source/overview_erasure_code.rst:194 msgid "" "To use an EC policy, the administrator simply needs to define an EC policy " "in `swift.conf` and create/configure the associated object ring. An example " "of how an EC policy can be setup is shown below::" msgstr "" #: ../../source/overview_erasure_code.rst:206 msgid "Let's take a closer look at each configuration parameter:" msgstr "" #: ../../source/overview_erasure_code.rst:208 msgid "" "``name``: This is a standard storage policy parameter. See :doc:" "`overview_policies` for details." msgstr "" #: ../../source/overview_erasure_code.rst:210 msgid "" "``policy_type``: Set this to ``erasure_coding`` to indicate that this is an " "EC policy." msgstr "" #: ../../source/overview_erasure_code.rst:212 msgid "" "``ec_type``: Set this value according to the available options in the " "selected PyECLib back-end. This specifies the EC scheme that is to be used. " "For example the option shown here selects Vandermonde Reed-Solomon encoding " "while an option of ``flat_xor_hd_3`` would select Flat-XOR based HD " "combination codes. See the `PyECLib `_ " "page for full details." msgstr "" #: ../../source/overview_erasure_code.rst:218 msgid "" "``ec_num_data_fragments``: The total number of fragments that will be " "comprised of data." msgstr "" #: ../../source/overview_erasure_code.rst:220 msgid "" "``ec_num_parity_fragments``: The total number of fragments that will be " "comprised of parity." msgstr "" #: ../../source/overview_erasure_code.rst:222 msgid "" "``ec_object_segment_size``: The amount of data that will be buffered up " "before feeding a segment into the encoder/decoder. The default value is " "1048576." msgstr "" #: ../../source/overview_erasure_code.rst:225 msgid "" "When PyECLib encodes an object, it will break it into N fragments. However, " "what is important during configuration, is how many of those are data and " "how many are parity. So in the example above, PyECLib will actually break " "an object in 14 different fragments, 10 of them will be made up of actual " "object data and 4 of them will be made of parity data (calculations " "depending on ec_type)." msgstr "" #: ../../source/overview_erasure_code.rst:231 msgid "" "When deciding which devices to use in the EC policy's object ring, be sure " "to carefully consider the performance impacts. Running some performance " "benchmarking in a test environment for your configuration is highly " "recommended before deployment." msgstr "" #: ../../source/overview_erasure_code.rst:236 msgid "" "To create the EC policy's object ring, the only difference in the usage of " "the ``swift-ring-builder create`` command is the ``replicas`` parameter. " "The ``replicas`` value is the number of fragments spread across the object " "servers associated with the ring; ``replicas`` must be equal to the sum of " "``ec_num_data_fragments`` and ``ec_num_parity_fragments``. For example::" msgstr "" #: ../../source/overview_erasure_code.rst:244 msgid "" "Note that in this example the ``replicas`` value of ``14`` is based on the " "sum of ``10`` EC data fragments and ``4`` EC parity fragments." msgstr "" #: ../../source/overview_erasure_code.rst:247 msgid "" "Once you have configured your EC policy in `swift.conf` and created your " "object ring, your application is ready to start using EC simply by creating " "a container with the specified policy name and interacting as usual." msgstr "" #: ../../source/overview_erasure_code.rst:253 msgid "" "It's important to note that once you have deployed a policy and have created " "objects with that policy, these configurations options cannot be changed. In " "case a change in the configuration is desired, you must create a new policy " "and migrate the data to a new container." msgstr "" #: ../../source/overview_erasure_code.rst:260 msgid "" "Using ``isa_l_rs_vand`` with more than 4 parity fragments creates fragments " "which may in some circumstances fail to reconstruct properly or (with " "liberasurecode < 1.3.1) reconstruct corrupted data. New policies that need " "large numbers of parity fragments should consider using ``isa_l_rs_cauchy``. " "Any existing affected policies must be marked deprecated, and data in " "containers with that policy should be migrated to a new policy." msgstr "" #: ../../source/overview_erasure_code.rst:268 msgid "Migrating Between Policies" msgstr "" #: ../../source/overview_erasure_code.rst:270 msgid "" "A common usage of EC is to migrate less commonly accessed data from a more " "expensive but lower latency policy such as replication. When an application " "determines that it wants to move data from a replication policy to an EC " "policy, it simply needs to move the data from the replicated container to an " "EC container that was created with the target durability policy." msgstr "" #: ../../source/overview_erasure_code.rst:279 msgid "Global EC" msgstr "" #: ../../source/overview_erasure_code.rst:281 msgid "" "The following recommendations are made when deploying an EC policy that " "spans multiple regions in a :doc:`Global Cluster `:" msgstr "" #: ../../source/overview_erasure_code.rst:284 msgid "" "The global EC policy should use :ref:`ec_duplication` in conjunction with a :" "ref:`Composite Ring `, as described below." msgstr "" #: ../../source/overview_erasure_code.rst:286 msgid "" "Proxy servers should be :ref:`configured to use read affinity " "` to prefer reading from their local region for " "the global EC policy. :ref:`proxy_server_per_policy_config` allows this to " "be configured for individual policies." msgstr "" #: ../../source/overview_erasure_code.rst:293 msgid "" "Before deploying a Global EC policy, consideration should be given to the :" "ref:`global_ec_known_issues`, in particular the relatively poor performance " "anticipated from the object-reconstructor." msgstr "" #: ../../source/overview_erasure_code.rst:300 msgid "EC Duplication" msgstr "" #: ../../source/overview_erasure_code.rst:302 msgid "" "EC Duplication enables Swift to make duplicated copies of fragments of " "erasure coded objects. If an EC storage policy is configured with a non-" "default ``ec_duplication_factor`` of ``N > 1``, then the policy will create " "``N`` duplicates of each unique fragment that is returned from the " "configured EC engine." msgstr "" #: ../../source/overview_erasure_code.rst:308 msgid "" "Duplication of EC fragments is optimal for Global EC storage policies, which " "require dispersion of fragment data across failure domains. Without fragment " "duplication, common EC parameters will not distribute enough unique " "fragments between large failure domains to allow for a rebuild using " "fragments from any one domain. For example a uniformly distributed ``10+4`` " "EC policy schema would place 7 fragments in each of two failure domains, " "which is less in each failure domain than the 10 fragments needed to rebuild " "a missing fragment." msgstr "" #: ../../source/overview_erasure_code.rst:316 msgid "" "Without fragment duplication, an EC policy schema must be adjusted to " "include additional parity fragments in order to guarantee the number of " "fragments in each failure domain is greater than the number required to " "rebuild. For example, a uniformly distributed ``10+18`` EC policy schema " "would place 14 fragments in each of two failure domains, which is more than " "sufficient in each failure domain to rebuild a missing fragment. However, " "empirical testing has shown encoding a schema with ``num_parity > num_data`` " "(such as ``10+18``) is less efficient than using duplication of fragments. " "EC fragment duplication enables Swift's Global EC to maintain more " "independence between failure domains without sacrificing efficiency on read/" "write or rebuild!" msgstr "" #: ../../source/overview_erasure_code.rst:327 msgid "" "The ``ec_duplication_factor`` option may be configured in `swift.conf` in " "each ``storage-policy`` section. The option may be omitted - the default " "value is ``1`` (i.e. no duplication)::" msgstr "" #: ../../source/overview_erasure_code.rst:342 msgid "" "EC duplication is intended for use with Global EC policies. To ensure " "independent availability of data in all regions, the " "``ec_duplication_factor`` option should only be used in conjunction with :" "ref:`composite_rings`, as described in this document." msgstr "" #: ../../source/overview_erasure_code.rst:347 msgid "" "In this example, a ``10+4`` schema and a duplication factor of ``2`` will " "result in ``(10+4)x2 = 28`` fragments being stored (we will use the " "shorthand ``10+4x2`` to denote that policy configuration) . The ring for " "this policy should be configured with 28 replicas (i.e. " "``(ec_num_data_fragments + ec_num_parity_fragments) * " "ec_duplication_factor``). A ``10+4x2`` schema **can** allow a multi-region " "deployment to rebuild an object to full durability even when *more* than 14 " "fragments are unavailable. This is advantageous with respect to a ``10+18`` " "configuration not only because reads from data fragments will be more common " "and more efficient, but also because a ``10+4x2`` can grow into a ``10+4x3`` " "to expand into another region." msgstr "" #: ../../source/overview_erasure_code.rst:359 msgid "EC duplication with composite rings" msgstr "" #: ../../source/overview_erasure_code.rst:361 msgid "" "It is recommended that EC Duplication is used with :ref:`composite_rings` in " "order to disperse duplicate fragments across regions." msgstr "" #: ../../source/overview_erasure_code.rst:364 msgid "" "When EC duplication is used, it is highly desirable to have one duplicate of " "each fragment placed in each region. This ensures that a set of " "``ec_num_data_fragments`` unique fragments (the minimum needed to " "reconstruct an object) can always be assembled from a single region. This in " "turn means that objects are robust in the event of an entire region becoming " "unavailable." msgstr "" #: ../../source/overview_erasure_code.rst:370 msgid "" "This can be achieved by using a :ref:`composite ring ` with " "the following properties:" msgstr "" #: ../../source/overview_erasure_code.rst:373 msgid "" "The number of component rings in the composite ring is equal to the " "``ec_duplication_factor`` for the policy." msgstr "" #: ../../source/overview_erasure_code.rst:375 msgid "" "Each *component* ring has a number of ``replicas`` that is equal to the sum " "of ``ec_num_data_fragments`` and ``ec_num_parity_fragments``." msgstr "" #: ../../source/overview_erasure_code.rst:377 msgid "Each component ring is populated with devices in a unique region." msgstr "" #: ../../source/overview_erasure_code.rst:379 msgid "" "This arrangement results in each component ring in the composite ring, and " "therefore each region, having one copy of each fragment." msgstr "" #: ../../source/overview_erasure_code.rst:382 msgid "" "For example, consider a Swift cluster with two regions, ``region1`` and " "``region2`` and a ``4+2x2`` EC policy schema. This policy should use a " "composite ring with two component rings, ``ring1`` and ``ring2``, having " "devices exclusively in regions ``region1`` and ``region2`` respectively. " "Each component ring should have ``replicas = 6``. As a result, the first 6 " "fragments for an object will always be placed in ``ring1`` (i.e. in " "``region1``) and the second 6 duplicate fragments will always be placed in " "``ring2`` (i.e. in ``region2``)." msgstr "" #: ../../source/overview_erasure_code.rst:391 msgid "" "Conversely, a conventional ring spanning the two regions may give a " "suboptimal distribution of duplicates across the regions; it is possible for " "duplicates of the same fragment to be placed in the same region, and " "consequently for another region to have no copies of that fragment. This may " "make it impossible to assemble a set of ``ec_num_data_fragments`` unique " "fragments from a single region. For example, the conventional ring could " "have a pathologically sub-optimal placement such as::" msgstr "" #: ../../source/overview_erasure_code.rst:414 msgid "" "In this case, the object cannot be reconstructed from a single region; " "``region1`` has only the fragments with index ``0, 2, 4`` and ``region2`` " "has the other 3 indexes, but we need 4 unique indexes to be able to rebuild " "an object." msgstr "" #: ../../source/overview_erasure_code.rst:420 msgid "Node Selection Strategy for Reads" msgstr "" #: ../../source/overview_erasure_code.rst:422 msgid "" "Proxy servers require a set of *unique* fragment indexes to decode the " "original object when handling a GET request to an EC policy. With a " "conventional EC policy, this is very likely to be the outcome of reading " "fragments from a random selection of backend nodes. With an EC Duplication " "policy it is significantly more likely that responses from a *random* " "selection of backend nodes might include some duplicated fragments." msgstr "" #: ../../source/overview_erasure_code.rst:429 msgid "" "For this reason it is strongly recommended that EC Duplication always be " "deployed in combination with :ref:`composite_rings` and :ref:`proxy server " "read affinity `." msgstr "" #: ../../source/overview_erasure_code.rst:433 msgid "" "Under normal conditions with the recommended deployment, read affinity will " "cause a proxy server to first attempt to read fragments from nodes in its " "local region. These fragments are guaranteed to be unique with respect to " "each other. Even if there are a small number of local failures, unique local " "parity fragments will make up the difference. However, should enough local " "primary storage nodes fail, such that sufficient unique fragments are not " "available in the local region, a global EC cluster will proceed to read " "fragments from the other region(s). Random reads from the remote region are " "not guaranteed to return unique fragments; with EC Duplication there is a " "significantly high probability that the proxy server will encounter a " "fragment that is a duplicate of one it has already found in the local " "region. The proxy server will ignore these and make additional requests " "until it accumulates the required set of unique fragments, potentially " "searching all the primary and handoff locations in the local and remote " "regions before ultimately failing the read." msgstr "" #: ../../source/overview_erasure_code.rst:448 msgid "" "A global EC deployment configured as recommended is therefore extremely " "resilient. However, under extreme failure conditions read handling can be " "inefficient because nodes in other regions are guaranteed to have some " "fragments which are duplicates of those the proxy server has already " "received. Work is in progress to improve the proxy server node selection " "strategy such that when it is necessary to read from other regions, nodes " "that are likely to have useful fragments are preferred over those that are " "likely to return a duplicate." msgstr "" #: ../../source/overview_erasure_code.rst:463 msgid "Efficient Cross Region Rebuild" msgstr "" #: ../../source/overview_erasure_code.rst:465 msgid "" "Work is also in progress to improve the object-reconstructor efficiency for " "Global EC policies. Unlike the proxy server, the reconstructor does not " "apply any read affinity settings when gathering fragments. It is therefore " "likely to receive duplicated fragments (i.e. make wasted backend GET " "requests) while performing *every* fragment reconstruction." msgstr "" #: ../../source/overview_erasure_code.rst:471 msgid "" "Additionally, other reconstructor optimisations for Global EC are under " "investigation:" msgstr "" #: ../../source/overview_erasure_code.rst:474 msgid "" "Since fragments are duplicated between regions it may in some cases be more " "attractive to restore failed fragments from their duplicates in another " "region instead of rebuilding them from other fragments in the local region." msgstr "" #: ../../source/overview_erasure_code.rst:478 msgid "" "Conversely, to avoid WAN transfer it may be more attractive to rebuild " "fragments from local parity." msgstr "" #: ../../source/overview_erasure_code.rst:481 msgid "" "During rebalance it will always be more attractive to revert a fragment from " "it's old-primary to it's new primary rather than rebuilding or transferring " "a duplicate from the remote region." msgstr "" #: ../../source/overview_erasure_code.rst:488 msgid "Under the Hood" msgstr "" #: ../../source/overview_erasure_code.rst:490 msgid "" "Now that we've explained a little about EC support in Swift and how to " "configure and use it, let's explore how EC fits in at the nuts-n-bolts level." msgstr "" #: ../../source/overview_erasure_code.rst:496 msgid "" "The term 'fragment' has been used already to describe the output of the EC " "process (a series of fragments) however we need to define some other key " "terms here before going any deeper. Without paying special attention to " "using the correct terms consistently, it is very easy to get confused in a " "hurry!" msgstr "" #: ../../source/overview_erasure_code.rst:501 msgid "" "**chunk**: HTTP chunks received over wire (term not used to describe any EC " "specific operation)." msgstr "" #: ../../source/overview_erasure_code.rst:503 msgid "" "**segment**: Not to be confused with SLO/DLO use of the word, in EC we call " "a segment a series of consecutive HTTP chunks buffered up before performing " "an EC operation." msgstr "" #: ../../source/overview_erasure_code.rst:506 msgid "" "**fragment**: Data and parity 'fragments' are generated when erasure coding " "transformation is applied to a segment." msgstr "" #: ../../source/overview_erasure_code.rst:508 msgid "" "**EC archive**: A concatenation of EC fragments; to a storage node this " "looks like an object." msgstr "" #: ../../source/overview_erasure_code.rst:510 msgid "**ec_ndata**: Number of EC data fragments." msgstr "" #: ../../source/overview_erasure_code.rst:511 msgid "**ec_nparity**: Number of EC parity fragments." msgstr "" #: ../../source/overview_erasure_code.rst:516 msgid "" "Middleware remains unchanged. For most middleware (e.g., SLO/DLO) the fact " "that the proxy is fragmenting incoming objects is transparent. For list " "endpoints, however, it is a bit different. A caller of list endpoints will " "get back the locations of all of the fragments. The caller will be unable " "to re-assemble the original object with this information, however the node " "locations may still prove to be useful information for some applications." msgstr "" #: ../../source/overview_erasure_code.rst:524 msgid "On Disk Storage" msgstr "" #: ../../source/overview_erasure_code.rst:526 msgid "" "EC archives are stored on disk in their respective objects-N directory based " "on their policy index. See :doc:`overview_policies` for details on per " "policy directory information." msgstr "" #: ../../source/overview_erasure_code.rst:530 msgid "" "In addition to the object timestamp, the filenames of EC archives encode " "other information related to the archive:" msgstr "" #: ../../source/overview_erasure_code.rst:533 msgid "" "The fragment archive index. This is required for a few reasons. For one, it " "allows us to store fragment archives of different indexes on the same " "storage node which is not typical however it is possible in many " "circumstances. Without unique filenames for the different EC archive files " "in a set, we would be at risk of overwriting one archive of index `n` with " "another of index `m` in some scenarios." msgstr "" #: ../../source/overview_erasure_code.rst:540 msgid "" "The index is appended to the filename just before the ``.data`` extension. " "For example, the filename for a fragment archive storing the 5th fragment " "would be::" msgstr "" #: ../../source/overview_erasure_code.rst:546 msgid "" "The durable state of the archive. The meaning of this will be described in " "more detail later, but a fragment archive that is considered durable has an " "additional ``#d`` string included in its filename immediately before the ``." "data`` extension. For example::" msgstr "" #: ../../source/overview_erasure_code.rst:553 msgid "" "A policy-specific transformation function is therefore used to build the " "archive filename. These functions are implemented in the diskfile module as " "methods of policy specific sub classes of ``BaseDiskFileManager``." msgstr "" #: ../../source/overview_erasure_code.rst:557 msgid "The transformation function for the replication policy is simply a NOP." msgstr "" #: ../../source/overview_erasure_code.rst:561 msgid "" "In older versions the durable state of an archive was represented by an " "additional file called the ``.durable`` file instead of the ``#d`` substring " "in the ``.data`` filename. The ``.durable`` for the example above would be::" msgstr "" #: ../../source/overview_erasure_code.rst:573 msgid "High Level" msgstr "" #: ../../source/overview_erasure_code.rst:575 msgid "" "The Proxy Server handles Erasure Coding in a different manner than " "replication, therefore there are several code paths unique to EC policies " "either though sub classing or simple conditionals. Taking a closer look at " "the PUT and the GET paths will help make this clearer. But first, a high " "level overview of how an object flows through the system:" msgstr "" #: ../../source/overview_erasure_code.rst:583 msgid "Note how:" msgstr "" #: ../../source/overview_erasure_code.rst:585 msgid "Incoming objects are buffered into segments at the proxy." msgstr "" #: ../../source/overview_erasure_code.rst:586 msgid "Segments are erasure coded into fragments at the proxy." msgstr "" #: ../../source/overview_erasure_code.rst:587 msgid "" "The proxy stripes fragments across participating nodes such that the on-disk " "stored files that we call a fragment archive is appended with each new " "fragment." msgstr "" #: ../../source/overview_erasure_code.rst:591 msgid "" "This scheme makes it possible to minimize the number of on-disk files given " "our segmenting and fragmenting." msgstr "" #: ../../source/overview_erasure_code.rst:595 msgid "Multi_Phase Conversation" msgstr "" #: ../../source/overview_erasure_code.rst:597 msgid "" "Multi-part MIME document support is used to allow the proxy to engage in a " "handshake conversation with the storage node for processing PUT requests. " "This is required for a few different reasons." msgstr "" #: ../../source/overview_erasure_code.rst:601 msgid "" "From the perspective of the storage node, a fragment archive is really just " "another object, we need a mechanism to send down the original object etag " "after all fragment archives have landed." msgstr "" #: ../../source/overview_erasure_code.rst:604 msgid "" "Without introducing strong consistency semantics, the proxy needs a " "mechanism to know when a quorum of fragment archives have actually made it " "to disk before it can inform the client of a successful PUT." msgstr "" #: ../../source/overview_erasure_code.rst:608 msgid "" "MIME supports a conversation between the proxy and the storage nodes for " "every PUT. This provides us with the ability to handle a PUT in one " "connection and assure that we have the essence of a 2 phase commit, " "basically having the proxy communicate back to the storage nodes once it has " "confirmation that a quorum of fragment archives in the set have been written." msgstr "" #: ../../source/overview_erasure_code.rst:614 msgid "" "For the first phase of the conversation the proxy requires a quorum of " "`ec_ndata + 1` fragment archives to be successfully put to storage nodes. " "This ensures that the object could still be reconstructed even if one of the " "fragment archives becomes unavailable. As described above, each fragment " "archive file is named::" msgstr "" #: ../../source/overview_erasure_code.rst:622 msgid "" "where ``ts`` is the timestamp and ``frag_index`` is the fragment archive " "index." msgstr "" #: ../../source/overview_erasure_code.rst:624 msgid "" "During the second phase of the conversation the proxy communicates a " "confirmation to storage nodes that the fragment archive quorum has been " "achieved. This causes each storage node to rename the fragment archive " "written in the first phase of the conversation to include the substring " "``#d`` in its name::" msgstr "" #: ../../source/overview_erasure_code.rst:632 msgid "" "This indicates to the object server that this fragment archive is `durable` " "and that there is a set of data files that are durable at timestamp ``ts``." msgstr "" #: ../../source/overview_erasure_code.rst:635 msgid "" "For the second phase of the conversation the proxy requires a quorum of " "`ec_ndata + 1` successful commits on storage nodes. This ensures that there " "are sufficient committed fragment archives for the object to be " "reconstructed even if one becomes unavailable. The reconstructor ensures " "that the durable state is replicated on storage nodes where it may be " "missing." msgstr "" #: ../../source/overview_erasure_code.rst:641 msgid "" "Note that the completion of the commit phase of the conversation is also a " "signal for the object server to go ahead and immediately delete older " "timestamp files for this object. This is critical as we do not want to " "delete the older object until the storage node has confirmation from the " "proxy, via the multi-phase conversation, that the other nodes have landed " "enough for a quorum." msgstr "" #: ../../source/overview_erasure_code.rst:647 msgid "The basic flow looks like this:" msgstr "" #: ../../source/overview_erasure_code.rst:649 msgid "" "The Proxy Server erasure codes and streams the object fragments (ec_ndata + " "ec_nparity) to the storage nodes." msgstr "" #: ../../source/overview_erasure_code.rst:651 msgid "" "The storage nodes store objects as EC archives and upon finishing object " "data/metadata write, send a 1st-phase response to proxy." msgstr "" #: ../../source/overview_erasure_code.rst:653 msgid "" "Upon quorum of storage nodes responses, the proxy initiates 2nd-phase by " "sending commit confirmations to object servers." msgstr "" #: ../../source/overview_erasure_code.rst:655 msgid "" "Upon receipt of commit message, object servers rename ``.data`` files to " "include the ``#d`` substring, indicating successful PUT, and send a final " "response to the proxy server." msgstr "" #: ../../source/overview_erasure_code.rst:658 msgid "" "The proxy waits for `ec_ndata + 1` object servers to respond with a success " "(2xx) status before responding to the client with a successful status." msgstr "" #: ../../source/overview_erasure_code.rst:662 msgid "Here is a high level example of what the conversation looks like::" msgstr "" #: ../../source/overview_erasure_code.rst:689 msgid "A few key points on the durable state of a fragment archive:" msgstr "" #: ../../source/overview_erasure_code.rst:691 msgid "" "A durable fragment archive means that there exist sufficient other fragment " "archives elsewhere in the cluster (durable and/or non-durable) to " "reconstruct the object." msgstr "" #: ../../source/overview_erasure_code.rst:694 msgid "" "When a proxy does a GET, it will require at least one object server to " "respond with a fragment archive is durable before reconstructing and " "returning the object to the client." msgstr "" #: ../../source/overview_erasure_code.rst:699 msgid "Partial PUT Failures" msgstr "" #: ../../source/overview_erasure_code.rst:701 msgid "" "A partial PUT failure has a few different modes. In one scenario the Proxy " "Server is alive through the entire PUT conversation. This is a very " "straightforward case. The client will receive a good response if and only if " "a quorum of fragment archives were successfully landed on their storage " "nodes. In this case the Reconstructor will discover the missing fragment " "archives, perform a reconstruction and deliver those fragment archives to " "their nodes." msgstr "" #: ../../source/overview_erasure_code.rst:708 msgid "" "The more interesting case is what happens if the proxy dies in the middle of " "a conversation. If it turns out that a quorum had been met and the commit " "phase of the conversation finished, its as simple as the previous case in " "that the reconstructor will repair things. However, if the commit didn't " "get a chance to happen then some number of the storage nodes have .data " "files on them (fragment archives) but none of them knows whether there are " "enough elsewhere for the entire object to be reconstructed. In this case " "the client will not have received a 2xx response so there is no issue there, " "however, it is left to the storage nodes to clean up the stale fragment " "archives. Work is ongoing in this area to enable the proxy to play a role " "in reviving these fragment archives, however, for the current release, a " "proxy failure after the start of a conversation but before the commit " "message will simply result in a PUT failure." msgstr "" #: ../../source/overview_erasure_code.rst:722 msgid "GET" msgstr "" #: ../../source/overview_erasure_code.rst:724 msgid "" "The GET for EC is different enough from replication that subclassing the " "`BaseObjectController` to the `ECObjectController` enables an efficient way " "to implement the high level steps described earlier:" msgstr "" #: ../../source/overview_erasure_code.rst:728 msgid "" "The proxy server makes simultaneous requests to `ec_ndata` primary object " "server nodes with goal of finding a set of `ec_ndata` distinct EC archives " "at the same timestamp, and an indication from at least one object server " "that a durable fragment archive exists for that timestamp. If this goal is " "not achieved with the first `ec_ndata` requests then the proxy server " "continues to issue requests to the remaining primary nodes and then handoff " "nodes." msgstr "" #: ../../source/overview_erasure_code.rst:735 msgid "" "As soon as the proxy server has found a usable set of `ec_ndata` EC " "archives, it starts to call PyECLib to decode fragments as they are returned " "by the object server nodes." msgstr "" #: ../../source/overview_erasure_code.rst:738 msgid "" "The proxy server creates Etag and content length headers for the client " "response since each EC archive's metadata is valid only for that archive." msgstr "" #: ../../source/overview_erasure_code.rst:742 msgid "" "Note that the proxy does not require all objects servers to have a durable " "fragment archive to return in response to a GET. The proxy will be satisfied " "if just one object server has a durable fragment archive at the same " "timestamp as EC archives returned from other object servers. This means that " "the proxy can successfully GET an object that had missing durable state on " "some nodes when it was PUT (i.e. a partial PUT failure occurred)." msgstr "" #: ../../source/overview_erasure_code.rst:749 msgid "" "Note also that an object server may inform the proxy server that it has more " "than one EC archive for different timestamps and/or fragment indexes, which " "may cause the proxy server to issue multiple requests for distinct EC " "archives to that object server. (This situation can temporarily occur after " "a ring rebalance when a handoff node storing an archive has become a primary " "node and received its primary archive but not yet moved the handoff archive " "to its primary node.)" msgstr "" #: ../../source/overview_erasure_code.rst:757 msgid "" "The proxy may receive EC archives having different timestamps, and may " "receive several EC archives having the same index. The proxy therefore " "ensures that it has sufficient EC archives with the same timestamp and " "distinct fragment indexes before considering a GET to be successful." msgstr "" #: ../../source/overview_erasure_code.rst:765 msgid "" "The Object Server, like the Proxy Server, supports MIME conversations as " "described in the proxy section earlier. This includes processing of the " "commit message and decoding various sections of the MIME document to extract " "the footer which includes things like the entire object etag." msgstr "" #: ../../source/overview_erasure_code.rst:771 msgid "DiskFile" msgstr "" #: ../../source/overview_erasure_code.rst:773 msgid "" "Erasure code policies use subclassed ``ECDiskFile``, ``ECDiskFileWriter``, " "``ECDiskFileReader`` and ``ECDiskFileManager`` to implement EC specific " "handling of on disk files. This includes things like file name manipulation " "to include the fragment index and durable state in the filename, " "construction of EC specific ``hashes.pkl`` file to include fragment index " "information, etc." msgstr "" #: ../../source/overview_erasure_code.rst:782 msgid "" "There are few different categories of metadata that are associated with EC:" msgstr "" #: ../../source/overview_erasure_code.rst:784 msgid "" "System Metadata: EC has a set of object level system metadata that it " "attaches to each of the EC archives. The metadata is for internal use only:" msgstr "" #: ../../source/overview_erasure_code.rst:787 msgid "``X-Object-Sysmeta-EC-Etag``: The Etag of the original object." msgstr "" #: ../../source/overview_erasure_code.rst:788 msgid "" "``X-Object-Sysmeta-EC-Content-Length``: The content length of the original " "object." msgstr "" #: ../../source/overview_erasure_code.rst:790 msgid "``X-Object-Sysmeta-EC-Frag-Index``: The fragment index for the object." msgstr "" #: ../../source/overview_erasure_code.rst:791 msgid "" "``X-Object-Sysmeta-EC-Scheme``: Description of the EC policy used to encode " "the object." msgstr "" #: ../../source/overview_erasure_code.rst:793 msgid "" "``X-Object-Sysmeta-EC-Segment-Size``: The segment size used for the object." msgstr "" #: ../../source/overview_erasure_code.rst:795 msgid "" "User Metadata: User metadata is unaffected by EC, however, a full copy of " "the user metadata is stored with every EC archive. This is required as the " "reconstructor needs this information and each reconstructor only " "communicates with its closest neighbors on the ring." msgstr "" #: ../../source/overview_erasure_code.rst:800 msgid "" "PyECLib Metadata: PyECLib stores a small amount of metadata on a per " "fragment basis. This metadata is not documented here as it is opaque to " "Swift." msgstr "" #: ../../source/overview_erasure_code.rst:804 msgid "Database Updates" msgstr "" #: ../../source/overview_erasure_code.rst:806 msgid "" "As account and container rings are not associated with a Storage Policy, " "there is no change to how these database updates occur when using an EC " "policy." msgstr "" #: ../../source/overview_erasure_code.rst:810 msgid "The Reconstructor" msgstr "" #: ../../source/overview_erasure_code.rst:812 msgid "The Reconstructor performs analogous functions to the replicator:" msgstr "" #: ../../source/overview_erasure_code.rst:814 msgid "Recovering from disk drive failure." msgstr "" #: ../../source/overview_erasure_code.rst:815 msgid "Moving data around because of a rebalance." msgstr "" #: ../../source/overview_erasure_code.rst:816 msgid "Reverting data back to a primary from a handoff." msgstr "" #: ../../source/overview_erasure_code.rst:817 msgid "Recovering fragment archives from bit rot discovered by the auditor." msgstr "" #: ../../source/overview_erasure_code.rst:819 msgid "" "However, under the hood it operates quite differently. The following are " "some of the key elements in understanding how the reconstructor operates." msgstr "" #: ../../source/overview_erasure_code.rst:822 msgid "" "Unlike the replicator, the work that the reconstructor does is not always as " "easy to break down into the 2 basic tasks of synchronize or revert (move " "data from handoff back to primary) because of the fact that one storage node " "can house fragment archives of various indexes and each index really \\" "\"belongs\\\" to a different node. So, whereas when the replicator is " "reverting data from a handoff it has just one node to send its data to, the " "reconstructor can have several. Additionally, it is not always the case " "that the processing of a particular suffix directory means one or the other " "job type for the entire directory (as it does for replication). The " "scenarios that create these mixed situations can be pretty complex so we " "will just focus on what the reconstructor does here and not a detailed " "explanation of why." msgstr "" #: ../../source/overview_erasure_code.rst:835 msgid "Job Construction and Processing" msgstr "" #: ../../source/overview_erasure_code.rst:837 msgid "" "Because of the nature of the work it has to do as described above, the " "reconstructor builds jobs for a single job processor. The job itself " "contains all of the information needed for the processor to execute the job " "which may be a synchronization or a data reversion. There may be a mix of " "jobs that perform both of these operations on the same suffix directory." msgstr "" #: ../../source/overview_erasure_code.rst:843 msgid "" "Jobs are constructed on a per-partition basis and then per-fragment-index " "basis. That is, there will be one job for every fragment index in a " "partition. Performing this construction \\\"up front\\\" like this helps " "minimize the interaction between nodes collecting hashes.pkl information." msgstr "" #: ../../source/overview_erasure_code.rst:848 msgid "" "Once a set of jobs for a partition has been constructed, those jobs are sent " "off to threads for execution. The single job processor then performs the " "necessary actions, working closely with ssync to carry out its " "instructions. For data reversion, the actual objects themselves are cleaned " "up via the ssync module and once that partition's set of jobs is complete, " "the reconstructor will attempt to remove the relevant directory structures." msgstr "" #: ../../source/overview_erasure_code.rst:855 msgid "Job construction must account for a variety of scenarios, including:" msgstr "" #: ../../source/overview_erasure_code.rst:857 msgid "" "A partition directory with all fragment indexes matching the local node " "index. This is the case where everything is where it belongs and we just " "need to compare hashes and sync if needed. Here we simply sync with our " "partners." msgstr "" #: ../../source/overview_erasure_code.rst:861 msgid "" "A partition directory with at least one local fragment index and mix of " "others. Here we need to sync with our partners where fragment indexes " "matches the local_id, all others are sync'd with their home nodes and then " "deleted." msgstr "" #: ../../source/overview_erasure_code.rst:865 msgid "" "A partition directory with no local fragment index and just one or more of " "others. Here we sync with just the home nodes for the fragment indexes that " "we have and then all the local archives are deleted. This is the basic " "handoff reversion case." msgstr "" #: ../../source/overview_erasure_code.rst:871 msgid "" "A \\\"home node\\\" is the node where the fragment index encoded in the " "fragment archive's filename matches the node index of a node in the primary " "partition list." msgstr "" #: ../../source/overview_erasure_code.rst:876 msgid "Node Communication" msgstr "" #: ../../source/overview_erasure_code.rst:878 msgid "" "The replicators talk to all nodes who have a copy of their object, typically " "just 2 other nodes. For EC, having each reconstructor node talk to all " "nodes would incur a large amount of overhead as there will typically be a " "much larger number of nodes participating in the EC scheme. Therefore, the " "reconstructor is built to talk to its adjacent nodes on the ring only. " "These nodes are typically referred to as partners." msgstr "" #: ../../source/overview_erasure_code.rst:888 msgid "" "Reconstruction can be thought of sort of like replication but with an extra " "step in the middle. The reconstructor is hard-wired to use ssync to " "determine what is missing and desired by the other side. However, before an " "object is sent over the wire it needs to be reconstructed from the remaining " "fragments as the local fragment is just that - a different fragment index " "than what the other end is asking for." msgstr "" #: ../../source/overview_erasure_code.rst:895 msgid "" "Thus, there are hooks in ssync for EC based policies. One case would be for " "basic reconstruction which, at a high level, looks like this:" msgstr "" #: ../../source/overview_erasure_code.rst:898 msgid "" "Determine which nodes need to be contacted to collect other EC archives " "needed to perform reconstruction." msgstr "" #: ../../source/overview_erasure_code.rst:900 msgid "" "Update the etag and fragment index metadata elements of the newly " "constructed fragment archive." msgstr "" #: ../../source/overview_erasure_code.rst:902 msgid "" "Establish a connection to the target nodes and give ssync a DiskFileLike " "class from which it can stream data." msgstr "" #: ../../source/overview_erasure_code.rst:905 msgid "" "The reader in this class gathers fragments from the nodes and uses PyECLib " "to reconstruct each segment before yielding data back to ssync. Essentially " "what this means is that data is buffered, in memory, on a per segment basis " "at the node performing reconstruction and each segment is dynamically " "reconstructed and delivered to ``ssync_sender`` where the ``send_put()`` " "method will ship them on over. The sender is then responsible for deleting " "the objects as they are sent in the case of data reversion." msgstr "" #: ../../source/overview_erasure_code.rst:914 msgid "The Auditor" msgstr "" #: ../../source/overview_erasure_code.rst:916 msgid "" "Because the auditor already operates on a per storage policy basis, there " "are no specific auditor changes associated with EC. Each EC archive looks " "like, and is treated like, a regular object from the perspective of the " "auditor. Therefore, if the auditor finds bit-rot in an EC archive, it " "simply quarantines it and the reconstructor will take care of the rest just " "as the replicator does for replication policies." msgstr "" #: ../../source/overview_expiring_objects.rst:3 msgid "Expiring Object Support" msgstr "" #: ../../source/overview_expiring_objects.rst:5 msgid "" "The ``swift-object-expirer`` offers scheduled deletion of objects. The Swift " "client would use the ``X-Delete-At`` or ``X-Delete-After`` headers during an " "object ``PUT`` or ``POST`` and the cluster would automatically quit serving " "that object at the specified time and would shortly thereafter remove the " "object from the system." msgstr "" #: ../../source/overview_expiring_objects.rst:11 msgid "" "The ``X-Delete-At`` header takes a Unix Epoch timestamp, in integer form; " "for example: ``1317070737`` represents ``Mon Sep 26 20:58:57 2011 UTC``." msgstr "" #: ../../source/overview_expiring_objects.rst:14 msgid "" "The ``X-Delete-After`` header takes a positive integer number of seconds. " "The proxy server that receives the request will convert this header into an " "``X-Delete-At`` header using the request timestamp plus the value given." msgstr "" #: ../../source/overview_expiring_objects.rst:18 msgid "" "If both the ``X-Delete-At`` and ``X-Delete-After`` headers are sent with a " "request then the ``X-Delete-After`` header will take precedence." msgstr "" #: ../../source/overview_expiring_objects.rst:21 msgid "" "As expiring objects are added to the system, the object servers will record " "the expirations in a hidden ``.expiring_objects`` account for the ``swift-" "object-expirer`` to handle later." msgstr "" #: ../../source/overview_expiring_objects.rst:25 msgid "" "Usually, just one instance of the ``swift-object-expirer`` daemon needs to " "run for a cluster. This isn't exactly automatic failover high availability, " "but if this daemon doesn't run for a few hours it should not be any real " "issue. The expired-but-not-yet-deleted objects will still ``404 Not Found`` " "if someone tries to ``GET`` or ``HEAD`` them and they'll just be deleted a " "bit later when the daemon is restarted." msgstr "" #: ../../source/overview_expiring_objects.rst:32 msgid "" "By default, the ``swift-object-expirer`` daemon will run with a concurrency " "of 1. Increase this value to get more concurrency. A concurrency of 1 may " "not be enough to delete expiring objects in a timely fashion for a " "particular Swift cluster." msgstr "" #: ../../source/overview_expiring_objects.rst:37 msgid "" "It is possible to run multiple daemons to do different parts of the work if " "a single process with a concurrency of more than 1 is not enough (see the " "sample config file for details)." msgstr "" #: ../../source/overview_expiring_objects.rst:41 msgid "" "To run the ``swift-object-expirer`` as multiple processes, set ``processes`` " "to the number of processes (either in the config file or on the command " "line). Then run one process for each part. Use ``process`` to specify the " "part of the work to be done by a process using the command line or the " "config. So, for example, if you'd like to run three processes, set " "``processes`` to 3 and run three processes with ``process`` set to 0, 1, and " "2 for the three processes. If multiple processes are used, it's necessary to " "run one for each part of the work or that part of the work will not be done." msgstr "" #: ../../source/overview_expiring_objects.rst:50 msgid "" "By default the daemon looks for two different config files. When launching, " "the process searches for the ``[object-expirer]`` section in the" msgstr "" #: ../../source/overview_expiring_objects.rst:53 msgid "" "``/etc/swift/object-server.conf`` config. If the section or the config is " "missing it will then look for and use the ``/etc/swift/object-expirer.conf`` " "config. The latter config file is considered deprecated and is searched for " "to aid in cluster upgrades." msgstr "" #: ../../source/overview_expiring_objects.rst:59 msgid "Delay Reaping of Objects from Disk" msgstr "" #: ../../source/overview_expiring_objects.rst:61 msgid "" "Swift's expiring object ``x-delete-at`` feature can be used to have the " "cluster reap user's objects automatically from disk on their behalf when " "they no longer want them stored in their account. In some cases it may be " "necessary to \"intervene\" in the expected expiration process to prevent " "accidental or premature data loss if an object marked for expiration should " "NOT be deleted immediately when it expires for whatever reason. In these " "cases ``swift-object-expirer`` offers configuration of a ``delay_reaping`` " "value on accounts and containers, which provides a delay between when an " "object is marked for deletion, or expired, and when it is actually reaped " "from disk. When this is set in the object expirer config the object expirer " "leaves expired objects on disk (and in container listings) for the " "``delay_reaping`` time. After this delay has passed objects will be reaped " "as normal." msgstr "" #: ../../source/overview_expiring_objects.rst:74 msgid "" "The ``delay_reaping`` value can be set either at an account level or a " "container level. When set at an account level, the object expirer will only " "reap objects within the account after the delay. A container level " "``delay_reaping`` works similarly for containers and overrides an account " "level ``delay_reaping`` value." msgstr "" #: ../../source/overview_expiring_objects.rst:80 msgid "" "The ``delay_reaping`` values are set in the ``[object-expirer]`` section in " "either the object-server or object-expirer config files. They are configured " "with dynamic config option names prefixed with ``delay_reaping_`` at " "the account level and ``delay_reaping_/`` at the container " "level, with the ``delay_reaping`` value in seconds." msgstr "" #: ../../source/overview_expiring_objects.rst:86 msgid "" "Here is an example of ``delay_reaping`` configs in the``object-expirer`` " "section in the ``object-server.conf``::" msgstr "" #: ../../source/overview_expiring_objects.rst:96 msgid "" "A container level ``delay_reaping`` value does not require an account level " "``delay_reaping`` value but overrides the account level value for the same " "account if it exists. By default, no ``delay_reaping`` value is configured " "for any accounts or containers." msgstr "" #: ../../source/overview_expiring_objects.rst:102 msgid "Accessing Objects After Expiration" msgstr "" #: ../../source/overview_expiring_objects.rst:104 msgid "" "By default, objects that expire become inaccessible, even to the account " "owner. The object may not have been deleted, but any GET/HEAD/POST client " "request for the object will respond 404 Not Found after the ``x-delete-at`` " "timestamp has passed." msgstr "" #: ../../source/overview_expiring_objects.rst:109 msgid "" "The ``swift-proxy-server`` offers the ability to globally configure a flag " "to allow requests to access expired objects that have not yet been deleted. " "When this flag is enabled, a user can make a GET, HEAD, or POST request with " "the header ``x-open-expired`` set to true to access the expired object." msgstr "" #: ../../source/overview_expiring_objects.rst:114 msgid "" "The global configuration is an opt-in flag that can be set in the ``[proxy-" "server]`` section of the ``proxy-server.conf`` file. It is configured with a " "single flag ``allow_open_expired`` set to true or false. By default, this " "flag is set to false." msgstr "" #: ../../source/overview_expiring_objects.rst:119 msgid "" "Here is an example in the ``proxy-server`` section in ``proxy-server.conf``::" msgstr "" #: ../../source/overview_expiring_objects.rst:124 msgid "" "To discover whether this flag is set, you can send a **GET** request to the " "``/info`` :ref:`discoverability ` path. This will return " "configuration data in JSON format where the value of ``allow_open_expired`` " "is exposed." msgstr "" #: ../../source/overview_expiring_objects.rst:129 msgid "" "When using a temporary URL to access the object, this feature is not " "enabled. This means that adding the header will not allow requests to " "temporary URLs to access expired objects." msgstr "" #: ../../source/overview_expiring_objects.rst:134 msgid "Upgrading impact: General Task Queue vs Legacy Queue" msgstr "" #: ../../source/overview_expiring_objects.rst:136 msgid "" "The expirer daemon will be moving to a new general task-queue based design " "that will divide the work across all object servers, as such only expirers " "defined in the object-server config will be able to use the new system." msgstr "" #: ../../source/overview_expiring_objects.rst:140 msgid "" "The legacy object expirer config is documented in ``etc/object-expirer.conf-" "sample``. The alternative object-server config section is documented in " "``etc/object-server.conf-sample``." msgstr "" #: ../../source/overview_expiring_objects.rst:144 msgid "" "The parameters in both files are identical except for a new option in the " "object-server ``[object-expirer]`` section, ``dequeue_from_legacy`` which " "when set to ``True`` will tell the expirer that in addition to using the new " "task queueing system to also check the legacy (soon to be deprecated) queue." msgstr "" #: ../../source/overview_expiring_objects.rst:151 msgid "" "The new task-queue system has not been completed yet. So an expirer's with " "``dequeue_from_legacy`` set to ``False`` will currently do nothing." msgstr "" #: ../../source/overview_expiring_objects.rst:154 msgid "" "By default ``dequeue_from_legacy`` will be ``False``, it is necessary to be " "set to ``True`` explicitly while migrating from the old expiring queue." msgstr "" #: ../../source/overview_expiring_objects.rst:157 msgid "" "Any expirer using the old config ``/etc/swift/object-expirer.conf`` will not " "use the new general task queue. It'll ignore the ``dequeue_from_legacy`` and " "will only check the legacy queue. Meaning it'll run as a legacy expirer." msgstr "" #: ../../source/overview_expiring_objects.rst:161 msgid "" "Why is this important? If you are currently running object-expirers on nodes " "that are not object storage nodes, then for the time being they will still " "work but only by dequeuing from the old queue. When the new general task " "queue is introduced, expirers will be required to run on the object servers " "so that any new objects added can be removed. If you're in this situation, " "you can safely setup the new expirer section in the ``object-server.conf`` " "to deal with the new queue and leave the legacy expirers running elsewhere." msgstr "" #: ../../source/overview_expiring_objects.rst:170 msgid "" "However, if your old expirers are running on the object-servers, the most " "common topology, then you would add the new section to all object servers, " "to deal the new queue. In order to maintain the same number of expirers " "checking the legacy queue, pick the same number of nodes as you previously " "had and turn on ``dequeue_from_legacy`` on those nodes only. Also note on " "these nodes you'd need to keep the legacy ``process`` and ``processes`` " "options to maintain the concurrency level for the legacy queue." msgstr "" #: ../../source/overview_expiring_objects.rst:179 msgid "" "Be careful not to enable ``dequeue_from_legacy`` on too many expirers as all " "legacy tasks are stored in a single hidden account and the same hidden " "containers. On a large cluster one may inadvertently overload the acccount/" "container servers handling the legacy expirer queue." msgstr "" #: ../../source/overview_expiring_objects.rst:185 msgid "" "When running legacy expirers, the daemon needs to run on a machine with " "access to all the backend servers in the cluster, but does not need proxy " "server or public access. The daemon will use its own internal proxy code " "instance to access the backend servers." msgstr "" #: ../../source/overview_global_cluster.rst:3 msgid "Global Clusters" msgstr "" #: ../../source/overview_global_cluster.rst:9 msgid "" "Swift's default configuration is currently designed to work in a single " "region, where a region is defined as a group of machines with high-" "bandwidth, low-latency links between them. However, configuration options " "exist that make running a performant multi-region Swift cluster possible." msgstr "" #: ../../source/overview_global_cluster.rst:15 msgid "" "For the rest of this section, we will assume a two-region Swift cluster: " "region 1 in San Francisco (SF), and region 2 in New York (NY). Each region " "shall contain within it 3 zones, numbered 1, 2, and 3, for a total of 6 " "zones." msgstr "" #: ../../source/overview_global_cluster.rst:24 msgid "Configuring Global Clusters" msgstr "" #: ../../source/overview_global_cluster.rst:28 msgid "" "The proxy-server configuration options described below can be given generic " "settings in the ``[app:proxy-server]`` configuration section and/or given " "specific settings for individual policies using :ref:" "`proxy_server_per_policy_config`." msgstr "" #: ../../source/overview_global_cluster.rst:35 msgid "read_affinity" msgstr "" #: ../../source/overview_global_cluster.rst:37 msgid "" "This setting, combined with sorting_method setting, makes the proxy server " "prefer local backend servers for GET and HEAD requests over non-local ones. " "For example, it is preferable for an SF proxy server to service object GET " "requests by talking to SF object servers, as the client will receive lower " "latency and higher throughput." msgstr "" #: ../../source/overview_global_cluster.rst:43 msgid "" "By default, Swift randomly chooses one of the three replicas to give to the " "client, thereby spreading the load evenly. In the case of a geographically-" "distributed cluster, the administrator is likely to prioritize keeping " "traffic local over even distribution of results. This is where the " "read_affinity setting comes in." msgstr "" #: ../../source/overview_global_cluster.rst:55 msgid "" "This will make the proxy attempt to service GET and HEAD requests from " "backends in region 1 before contacting any backends in region 2. However, if " "no region 1 backends are available (due to replica placement, failed " "hardware, or other reasons), then the proxy will fall back to backend " "servers in other regions." msgstr "" #: ../../source/overview_global_cluster.rst:67 msgid "" "This will make the proxy attempt to service GET and HEAD requests from " "backends in region 1 zone 1, then backends in region 1, then any other " "backends. If a proxy is physically close to a particular zone or zones, this " "can provide bandwidth savings. For example, if a zone corresponds to servers " "in a particular rack, and the proxy server is in that same rack, then " "setting read_affinity to prefer reads from within the rack will result in " "less traffic between the top-of-rack switches." msgstr "" #: ../../source/overview_global_cluster.rst:76 msgid "" "The read_affinity setting may contain any number of region/zone specifiers; " "the priority number (after the equals sign) determines the ordering in which " "backend servers will be contacted. A lower number means higher priority." msgstr "" #: ../../source/overview_global_cluster.rst:81 msgid "" "Note that read_affinity only affects the ordering of primary nodes (see ring " "docs for definition of primary node), not the ordering of handoff nodes." msgstr "" #: ../../source/overview_global_cluster.rst:87 msgid "write_affinity" msgstr "" #: ../../source/overview_global_cluster.rst:89 msgid "" "This setting makes the proxy server prefer local backend servers for object " "PUT requests over non-local ones. For example, it may be preferable for an " "SF proxy server to service object PUT requests by talking to SF object " "servers, as the client will receive lower latency and higher throughput. " "However, if this setting is used, note that a NY proxy server handling a GET " "request for an object that was PUT using write affinity may have to fetch it " "across the WAN link, as the object won't immediately have any replicas in " "NY. However, replication will move the object's replicas to their proper " "homes in both SF and NY." msgstr "" #: ../../source/overview_global_cluster.rst:100 msgid "" "One potential issue with write_affinity is, end user may get 404 error when " "deleting objects before replication. The write_affinity_handoff_delete_count " "setting is used together with write_affinity in order to solve that issue. " "With its default configuration, Swift will calculate the proper number of " "handoff nodes to send requests to." msgstr "" #: ../../source/overview_global_cluster.rst:106 msgid "" "Note that only object PUT/DELETE requests are affected by the write_affinity " "setting; POST, GET, HEAD, OPTIONS, and account/container PUT requests are " "not affected." msgstr "" #: ../../source/overview_global_cluster.rst:110 msgid "" "This setting lets you trade data distribution for throughput. If " "write_affinity is enabled, then object replicas will initially be stored all " "within a particular region or zone, thereby decreasing the quality of the " "data distribution, but the replicas will be distributed over fast WAN links, " "giving higher throughput to clients. Note that the replicators will " "eventually move objects to their proper, well-distributed homes." msgstr "" #: ../../source/overview_global_cluster.rst:118 msgid "" "The write_affinity setting is useful only when you don't typically read " "objects immediately after writing them. For example, consider a workload of " "mainly backups: if you have a bunch of machines in NY that periodically " "write backups to Swift, then odds are that you don't then immediately read " "those backups in SF. If your workload doesn't look like that, then you " "probably shouldn't use write_affinity." msgstr "" #: ../../source/overview_global_cluster.rst:125 msgid "" "The write_affinity_node_count setting is only useful in conjunction with " "write_affinity; it governs how many local object servers will be tried " "before falling back to non-local ones." msgstr "" #: ../../source/overview_global_cluster.rst:135 msgid "" "Assuming 3 replicas, this configuration will make object PUTs try storing " "the object's replicas on up to 6 disks (\"2 * replicas\") in region 1 " "(\"r1\"). Proxy server tries to find 3 devices for storing the object. While " "a device is unavailable, it queries the ring for the 4th device and so on " "until 6th device. If the 6th disk is still unavailable, the last replica " "will be sent to other region. It doesn't mean there'll have 6 replicas in " "region 1." msgstr "" #: ../../source/overview_global_cluster.rst:144 msgid "" "You should be aware that, if you have data coming into SF faster than your " "replicators are transferring it to NY, then your cluster's data distribution " "will get worse and worse over time as objects pile up in SF. If this " "happens, it is recommended to disable write_affinity and simply let object " "PUTs traverse the WAN link, as that will naturally limit the object growth " "rate to what your WAN link can handle." msgstr "" #: ../../source/overview_large_objects.rst:5 msgid "Large Object Support" msgstr "" #: ../../source/overview_large_objects.rst:11 msgid "" "Swift has a limit on the size of a single uploaded object; by default this " "is 5GB. However, the download size of a single object is virtually unlimited " "with the concept of segmentation. Segments of the larger object are uploaded " "and a special manifest file is created that, when downloaded, sends all the " "segments concatenated as a single object. This also offers much greater " "upload speed with the possibility of parallel uploads of the segments." msgstr "" #: ../../source/overview_large_objects.rst:44 msgid "Direct API" msgstr "" #: ../../source/overview_large_objects.rst:46 msgid "" "SLO support centers around the user generated manifest file. After the user " "has uploaded the segments into their account a manifest file needs to be " "built and uploaded. All object segments, must be at least 1 byte in size. " "Please see the SLO docs for :ref:`slo-doc` further details." msgstr "" #: ../../source/overview_large_objects.rst:54 msgid "Additional Notes" msgstr "" #: ../../source/overview_large_objects.rst:56 msgid "" "With a ``GET`` or ``HEAD`` of a manifest file, the ``X-Object-Manifest: " "/`` header will be returned with the concatenated object " "so you can tell where it's getting its segments from." msgstr "" #: ../../source/overview_large_objects.rst:60 msgid "" "When updating a manifest object using a POST request, a ``X-Object-" "Manifest`` header must be included for the object to continue to behave as a " "manifest object." msgstr "" #: ../../source/overview_large_objects.rst:64 msgid "" "The response's ``Content-Length`` for a ``GET`` or ``HEAD`` on the manifest " "file will be the sum of all the segments in the ``/`` " "listing, dynamically. So, uploading additional segments after the manifest " "is created will cause the concatenated object to be that much larger; " "there's no need to recreate the manifest file." msgstr "" #: ../../source/overview_large_objects.rst:70 msgid "" "The response's ``Content-Type`` for a ``GET`` or ``HEAD`` on the manifest " "will be the same as the ``Content-Type`` set during the ``PUT`` request that " "created the manifest. You can easily change the ``Content-Type`` by " "reissuing the ``PUT``." msgstr "" #: ../../source/overview_large_objects.rst:75 msgid "" "The response's ``ETag`` for a ``GET`` or ``HEAD`` on the manifest file will " "be the MD5 sum of the concatenated string of ETags for each of the segments " "in the manifest (for DLO, from the listing ``/``). " "Usually in Swift the ETag is the MD5 sum of the contents of the object, and " "that holds true for each segment independently. But it's not meaningful to " "generate such an ETag for the manifest itself so this method was chosen to " "at least offer change detection." msgstr "" #: ../../source/overview_large_objects.rst:86 msgid "" "If you are using the container sync feature you will need to ensure both " "your manifest file and your segment files are synced if they happen to be in " "different containers." msgstr "" #: ../../source/overview_large_objects.rst:92 msgid "History" msgstr "" #: ../../source/overview_large_objects.rst:94 msgid "" "Dynamic large object support has gone through various iterations before " "settling on this implementation." msgstr "" #: ../../source/overview_large_objects.rst:97 msgid "" "The primary factor driving the limitation of object size in Swift is " "maintaining balance among the partitions of the ring. To maintain an even " "dispersion of disk usage throughout the cluster the obvious storage pattern " "was to simply split larger objects into smaller segments, which could then " "be glued together during a read." msgstr "" #: ../../source/overview_large_objects.rst:103 msgid "" "Before the introduction of large object support some applications were " "already splitting their uploads into segments and re-assembling them on the " "client side after retrieving the individual pieces. This design allowed the " "client to support backup and archiving of large data sets, but was also " "frequently employed to improve performance or reduce errors due to network " "interruption. The major disadvantage of this method is that knowledge of the " "original partitioning scheme is required to properly reassemble the object, " "which is not practical for some use cases, such as CDN origination." msgstr "" #: ../../source/overview_large_objects.rst:112 msgid "" "In order to eliminate any barrier to entry for clients wanting to store " "objects larger than 5GB, initially we also prototyped fully transparent " "support for large object uploads. A fully transparent implementation would " "support a larger max size by automatically splitting objects into segments " "during upload within the proxy without any changes to the client API. All " "segments were completely hidden from the client API." msgstr "" #: ../../source/overview_large_objects.rst:119 msgid "" "This solution introduced a number of challenging failure conditions into the " "cluster, wouldn't provide the client with any option to do parallel uploads, " "and had no basis for a resume feature. The transparent implementation was " "deemed just too complex for the benefit." msgstr "" #: ../../source/overview_large_objects.rst:124 msgid "" "The current \"user manifest\" design was chosen in order to provide a " "transparent download of large objects to the client and still provide the " "uploading client a clean API to support segmented uploads." msgstr "" #: ../../source/overview_large_objects.rst:128 msgid "" "To meet an many use cases as possible Swift supports two types of large " "object manifests. Dynamic and static large object manifests both support the " "same idea of allowing the user to upload many segments to be later " "downloaded as a single file." msgstr "" #: ../../source/overview_large_objects.rst:133 msgid "" "Dynamic large objects rely on a container listing to provide the manifest. " "This has the advantage of allowing the user to add/removes segments from the " "manifest at any time. It has the disadvantage of relying on eventually " "consistent container listings. All three copies of the container dbs must be " "updated for a complete list to be guaranteed. Also, all segments must be in " "a single container, which can limit concurrent upload speed." msgstr "" #: ../../source/overview_large_objects.rst:140 msgid "" "Static large objects rely on a user provided manifest file. A user can " "upload objects into multiple containers and then reference those objects " "(segments) in a self generated manifest file. Future GETs to that file will " "download the concatenation of the specified segments. This has the advantage " "of being able to immediately download the complete object once the manifest " "has been successfully PUT. Being able to upload segments into separate " "containers also improves concurrent upload speed. It has the disadvantage " "that the manifest is finalized once PUT. Any changes to it means it has to " "be replaced." msgstr "" #: ../../source/overview_large_objects.rst:149 msgid "" "Between these two methods the user has great flexibility in how (s)he " "chooses to upload and retrieve large objects to Swift. Swift does not, " "however, stop the user from harming themselves. In both cases the segments " "are deletable by the user at any time. If a segment was deleted by mistake, " "a dynamic large object, having no way of knowing it was ever there, would " "happily ignore the deleted file and the user will get an incomplete file. A " "static large object would, when failing to retrieve the object specified in " "the manifest, drop the connection and the user would receive partial results." msgstr "" #: ../../source/overview_policies.rst:5 msgid "" "Storage Policies allow for some level of segmenting the cluster for various " "purposes through the creation of multiple object rings. The Storage Policies " "feature is implemented throughout the entire code base so it is an important " "concept in understanding Swift architecture." msgstr "" #: ../../source/overview_policies.rst:10 msgid "" "As described in :doc:`overview_ring`, Swift uses modified hashing rings to " "determine where data should reside in the cluster. There is a separate ring " "for account databases, container databases, and there is also one object " "ring per storage policy. Each object ring behaves exactly the same way and " "is maintained in the same manner, but with policies, different devices can " "belong to different rings. By supporting multiple object rings, Swift allows " "the application and/or deployer to essentially segregate the object storage " "within a single cluster. There are many reasons why this might be desirable:" msgstr "" #: ../../source/overview_policies.rst:19 msgid "" "Different levels of durability: If a provider wants to offer, for example, " "2x replication and 3x replication but doesn't want to maintain 2 separate " "clusters, they would setup a 2x and a 3x replication policy and assign the " "nodes to their respective rings. Furthermore, if a provider wanted to offer " "a cold storage tier, they could create an erasure coded policy." msgstr "" #: ../../source/overview_policies.rst:25 msgid "" "Performance: Just as SSDs can be used as the exclusive members of an " "account or database ring, an SSD-only object ring can be created as well and " "used to implement a low-latency/high performance policy." msgstr "" #: ../../source/overview_policies.rst:29 msgid "" "Collecting nodes into group: Different object rings may have different " "physical servers so that objects in specific storage policies are always " "placed in a particular data center or geography." msgstr "" #: ../../source/overview_policies.rst:33 msgid "" "Different Storage implementations: Another example would be to collect " "together a set of nodes that use a different Diskfile (e.g., Kinetic, " "GlusterFS) and use a policy to direct traffic just to those nodes." msgstr "" #: ../../source/overview_policies.rst:37 msgid "" "Different read and write affinity settings: proxy-servers can be configured " "to use different read and write affinity options for each policy. See :ref:" "`proxy_server_per_policy_config` for more details." msgstr "" #: ../../source/overview_policies.rst:43 msgid "" "Today, Swift supports two different policy types: Replication and Erasure " "Code. See :doc:`overview_erasure_code` for details." msgstr "" #: ../../source/overview_policies.rst:46 msgid "" "Also note that Diskfile refers to backend object storage plug-in " "architecture. See :doc:`development_ondisk_backends` for details." msgstr "" #: ../../source/overview_policies.rst:51 msgid "Containers and Policies" msgstr "" #: ../../source/overview_policies.rst:53 msgid "" "Policies are implemented at the container level. There are many advantages " "to this approach, not the least of which is how easy it makes life on " "applications that want to take advantage of them. It also ensures that " "Storage Policies remain a core feature of Swift independent of the auth " "implementation. Policies were not implemented at the account/auth layer " "because it would require changes to all auth systems in use by Swift " "deployers. Each container has a new special immutable metadata element " "called the storage policy index. Note that internally, Swift relies on " "policy indexes and not policy names. Policy names exist for human " "readability and translation is managed in the proxy. When a container is " "created, one new optional header is supported to specify the policy name. If " "no name is specified, the default policy is used (and if no other policies " "defined, Policy-0 is considered the default). We will be covering the " "difference between default and Policy-0 in the next section." msgstr "" #: ../../source/overview_policies.rst:68 msgid "" "Policies are assigned when a container is created. Once a container has " "been assigned a policy, it cannot be changed (unless it is deleted/" "recreated). The implications on data placement/movement for large datasets " "would make this a task best left for applications to perform. Therefore, if " "a container has an existing policy of, for example 3x replication, and one " "wanted to migrate that data to an Erasure Code policy, the application would " "create another container specifying the other policy parameters and then " "simply move the data from one container to the other. Policies apply on a " "per container basis allowing for minimal application awareness; once a " "container has been created with a specific policy, all objects stored in it " "will be done so in accordance with that policy. If a container with a " "specific name is deleted (requires the container be empty) a new container " "may be created with the same name without any restriction on storage policy " "enforced by the deleted container which previously shared the same name." msgstr "" #: ../../source/overview_policies.rst:83 msgid "" "Containers have a many-to-one relationship with policies meaning that any " "number of containers can share one policy. There is no limit to how many " "containers can use a specific policy." msgstr "" #: ../../source/overview_policies.rst:87 msgid "" "The notion of associating a ring with a container introduces an interesting " "scenario: What would happen if 2 containers of the same name were created " "with different Storage Policies on either side of a network outage at the " "same time? Furthermore, what would happen if objects were placed in those " "containers, a whole bunch of them, and then later the network outage was " "restored? Well, without special care it would be a big problem as an " "application could end up using the wrong ring to try and find an object. " "Luckily there is a solution for this problem, a daemon known as the " "Container Reconciler works tirelessly to identify and rectify this potential " "scenario." msgstr "" #: ../../source/overview_policies.rst:101 msgid "" "Because atomicity of container creation cannot be enforced in a distributed " "eventually consistent system, object writes into the wrong storage policy " "must be eventually merged into the correct storage policy by an asynchronous " "daemon. Recovery from object writes during a network partition which " "resulted in a split brain container created with different storage policies " "are handled by the `swift-container-reconciler` daemon." msgstr "" #: ../../source/overview_policies.rst:109 msgid "" "The container reconciler works off a queue similar to the object-expirer. " "The queue is populated during container-replication. It is never considered " "incorrect to enqueue an object to be evaluated by the container-reconciler " "because if there is nothing wrong with the location of the object the " "reconciler will simply dequeue it. The container-reconciler queue is an " "indexed log for the real location of an object for which a discrepancy in " "the storage policy of the container was discovered." msgstr "" #: ../../source/overview_policies.rst:118 msgid "" "To determine the correct storage policy of a container, it is necessary to " "update the status_changed_at field in the container_stat table when a " "container changes status from deleted to re-created. This transaction log " "allows the container-replicator to update the correct storage policy both " "when replicating a container and handling REPLICATE requests." msgstr "" #: ../../source/overview_policies.rst:124 msgid "" "Because each object write is a separate distributed transaction it is not " "possible to determine the correctness of the storage policy for each object " "write with respect to the entire transaction log at a given container " "database. As such, container databases will always record the object write " "regardless of the storage policy on a per object row basis. Object byte and " "count stats are tracked per storage policy in each container and reconciled " "using normal object row merge semantics." msgstr "" #: ../../source/overview_policies.rst:132 msgid "" "The object rows are ensured to be fully durable during replication using the " "normal container replication. After the container replicator pushes its " "object rows to available primary nodes any misplaced object rows are bulk " "loaded into containers based off the object timestamp under the ``." "misplaced_objects`` system account. The rows are initially written to a " "handoff container on the local node, and at the end of the replication pass " "the ``.misplaced_objects`` containers are replicated to the correct primary " "nodes." msgstr "" #: ../../source/overview_policies.rst:141 msgid "" "The container-reconciler processes the ``.misplaced_objects`` containers in " "descending order and reaps its containers as the objects represented by the " "rows are successfully reconciled. The container-reconciler will always " "validate the correct storage policy for enqueued objects using direct " "container HEAD requests which are accelerated via caching." msgstr "" #: ../../source/overview_policies.rst:147 msgid "" "Because failure of individual storage nodes in aggregate is assumed to be " "common at scale, the container-reconciler will make forward progress with a " "simple quorum majority. During a combination of failures and rebalances it " "is possible that a quorum could provide an incomplete record of the correct " "storage policy - so an object write may have to be applied more than once. " "Because storage nodes and container databases will not process writes with " "an ``X-Timestamp`` less than or equal to their existing record when objects " "writes are re-applied their timestamp is slightly incremented. In order for " "this increment to be applied transparently to the client a second vector of " "time has been added to Swift for internal use. See :class:`~swift.common." "utils.Timestamp`." msgstr "" #: ../../source/overview_policies.rst:159 msgid "" "As the reconciler applies object writes to the correct storage policy it " "cleans up writes which no longer apply to the incorrect storage policy and " "removes the rows from the ``.misplaced_objects`` containers. After all rows " "have been successfully processed it sleeps and will periodically check for " "newly enqueued rows to be discovered during container replication." msgstr "" #: ../../source/overview_policies.rst:170 msgid "Default versus 'Policy-0'" msgstr "" #: ../../source/overview_policies.rst:172 msgid "" "Storage Policies is a versatile feature intended to support both new and pre-" "existing clusters with the same level of flexibility. For that reason, we " "introduce the ``Policy-0`` concept which is not the same as the \"default\" " "policy. As you will see when we begin to configure policies, each policy " "has a single name and an arbitrary number of aliases (human friendly, " "configurable) as well as an index (or simply policy number). Swift reserves " "index 0 to map to the object ring that's present in all installations (e.g., " "``/etc/swift/object.ring.gz``). You can name this policy anything you like, " "and if no policies are defined it will report itself as ``Policy-0``, " "however you cannot change the index as there must always be a policy with " "index 0." msgstr "" #: ../../source/overview_policies.rst:184 msgid "" "Another important concept is the default policy which can be any policy in " "the cluster. The default policy is the policy that is automatically chosen " "when a container creation request is sent without a storage policy being " "specified. :ref:`configure-policy` describes how to set the default policy. " "The difference from ``Policy-0`` is subtle but extremely important. " "``Policy-0`` is what is used by Swift when accessing pre-storage-policy " "containers which won't have a policy - in this case we would not use the " "default as it might not have the same policy as legacy containers. When no " "other policies are defined, Swift will always choose ``Policy-0`` as the " "default." msgstr "" #: ../../source/overview_policies.rst:195 msgid "" "In other words, default means \"create using this policy if nothing else is " "specified\" and ``Policy-0`` means \"use the legacy policy if a container " "doesn't have one\" which really means use ``object.ring.gz`` for lookups." msgstr "" #: ../../source/overview_policies.rst:201 msgid "" "With the Storage Policy based code, it's not possible to create a container " "that doesn't have a policy. If nothing is provided, Swift will still select " "the default and assign it to the container. For containers created before " "Storage Policies were introduced, the legacy Policy-0 will be used." msgstr "" #: ../../source/overview_policies.rst:211 msgid "Deprecating Policies" msgstr "" #: ../../source/overview_policies.rst:213 msgid "" "There will be times when a policy is no longer desired; however simply " "deleting the policy and associated rings would be problematic for existing " "data. In order to ensure that resources are not orphaned in the cluster " "(left on disk but no longer accessible) and to provide proper messaging to " "applications when a policy needs to be retired, the notion of deprecation is " "used. :ref:`configure-policy` describes how to deprecate a policy." msgstr "" #: ../../source/overview_policies.rst:220 msgid "Swift's behavior with deprecated policies is as follows:" msgstr "" #: ../../source/overview_policies.rst:222 msgid "The deprecated policy will not appear in /info" msgstr "" #: ../../source/overview_policies.rst:223 msgid "" "PUT/GET/DELETE/POST/HEAD are still allowed on the pre-existing containers " "created with a deprecated policy" msgstr "" #: ../../source/overview_policies.rst:225 msgid "" "Clients will get an ''400 Bad Request'' error when trying to create a new " "container using the deprecated policy" msgstr "" #: ../../source/overview_policies.rst:227 msgid "" "Clients still have access to policy statistics via HEAD on pre-existing " "containers" msgstr "" #: ../../source/overview_policies.rst:232 msgid "" "A policy cannot be both the default and deprecated. If you deprecate the " "default policy, you must specify a new default." msgstr "" #: ../../source/overview_policies.rst:235 msgid "" "You can also use the deprecated feature to rollout new policies. If you " "want to test a new storage policy before making it generally available you " "could deprecate the policy when you initially roll it the new configuration " "and rings to all nodes. Being deprecated will render it innate and unable " "to be used. To test it you will need to create a container with that " "storage policy; which will require a single proxy instance (or a set of " "proxy-servers which are only internally accessible) that has been one-off " "configured with the new policy NOT marked deprecated. Once the container " "has been created with the new storage policy any client authorized to use " "that container will be able to add and access data stored in that container " "in the new storage policy. When satisfied you can roll out a new ``swift." "conf`` which does not mark the policy as deprecated to all nodes." msgstr "" #: ../../source/overview_policies.rst:253 msgid "Configuring Policies" msgstr "" #: ../../source/overview_policies.rst:257 msgid "" "See :doc:`policies_saio` for a step by step guide on adding a policy to the " "SAIO setup." msgstr "" #: ../../source/overview_policies.rst:260 msgid "" "It is important that the deployer have a solid understanding of the " "semantics for configuring policies. Configuring a policy is a three-step " "process:" msgstr "" #: ../../source/overview_policies.rst:263 msgid "Edit your ``/etc/swift/swift.conf`` file to define your new policy." msgstr "" #: ../../source/overview_policies.rst:264 msgid "Create the corresponding policy object ring file." msgstr "" #: ../../source/overview_policies.rst:265 msgid "(Optional) Create policy-specific proxy-server configuration settings." msgstr "" #: ../../source/overview_policies.rst:268 msgid "Defining a policy" msgstr "" #: ../../source/overview_policies.rst:270 msgid "" "Each policy is defined by a section in the ``/etc/swift/swift.conf`` file. " "The section name must be of the form ``[storage-policy:]`` where ```` " "is the policy index. There's no reason other than readability that policy " "indexes be sequential but the following rules are enforced:" msgstr "" #: ../../source/overview_policies.rst:275 msgid "" "If a policy with index ``0`` is not declared and no other policies are " "defined, Swift will create a default policy with index ``0``." msgstr "" #: ../../source/overview_policies.rst:277 msgid "The policy index must be a non-negative integer." msgstr "" #: ../../source/overview_policies.rst:278 msgid "Policy indexes must be unique." msgstr "" #: ../../source/overview_policies.rst:282 msgid "" "The index of a policy should never be changed once a policy has been created " "and used. Changing a policy index may cause loss of access to data." msgstr "" #: ../../source/overview_policies.rst:285 msgid "Each policy section contains the following options:" msgstr "" #: ../../source/overview_policies.rst:288 msgid "The primary name of the policy." msgstr "" #: ../../source/overview_policies.rst:289 msgid "Policy names are case insensitive." msgstr "" #: ../../source/overview_policies.rst:290 msgid "Policy names must contain only letters, digits or a dash." msgstr "" #: ../../source/overview_policies.rst:291 msgid "Policy names must be unique." msgstr "" #: ../../source/overview_policies.rst:292 msgid "Policy names can be changed." msgstr "" #: ../../source/overview_policies.rst:293 msgid "The name ``Policy-0`` can only be used for the policy with index ``0``." msgstr "" #: ../../source/overview_policies.rst:295 msgid "" "To avoid confusion with policy indexes it is strongly recommended that " "policy names are not numbers (e.g. '1'). However, for backwards " "compatibility, names that are numbers are supported." msgstr "" #: ../../source/overview_policies.rst:296 msgid "``name = `` (required)" msgstr "" #: ../../source/overview_policies.rst:299 msgid "A comma-separated list of alternative names for the policy." msgstr "" #: ../../source/overview_policies.rst:300 msgid "The default value is an empty list (i.e. no aliases)." msgstr "" #: ../../source/overview_policies.rst:301 msgid "All alias names must follow the rules for the ``name`` option." msgstr "" #: ../../source/overview_policies.rst:302 msgid "Aliases can be added to and removed from the list." msgstr "" #: ../../source/overview_policies.rst:303 msgid "" "Aliases can be useful to retain support for old primary names if the primary " "name is changed." msgstr "" #: ../../source/overview_policies.rst:303 msgid "``aliases = [, , ...]`` (optional)" msgstr "" #: ../../source/overview_policies.rst:306 msgid "" "If ``true`` then this policy will be used when the client does not specify a " "policy." msgstr "" #: ../../source/overview_policies.rst:308 #: ../../source/overview_policies.rst:318 msgid "The default value is ``false``." msgstr "" #: ../../source/overview_policies.rst:309 msgid "" "The default policy can be changed at any time, by setting ``default = true`` " "in the desired policy section." msgstr "" #: ../../source/overview_policies.rst:311 msgid "" "If no policy is declared as the default and no other policies are defined, " "the policy with index ``0`` is set as the default;" msgstr "" #: ../../source/overview_policies.rst:313 msgid "Otherwise, exactly one policy must be declared default." msgstr "" #: ../../source/overview_policies.rst:314 msgid "Deprecated policies cannot be declared the default." msgstr "" #: ../../source/overview_policies.rst:314 msgid "``default = [true|false]`` (optional)" msgstr "" #: ../../source/overview_policies.rst:315 msgid "See :ref:`default-policy` for more information." msgstr "" #: ../../source/overview_policies.rst:317 msgid "If ``true`` then new containers cannot be created using this policy." msgstr "" #: ../../source/overview_policies.rst:319 msgid "" "Any policy may be deprecated by adding the ``deprecated`` option to the " "desired policy section. However, a deprecated policy may not also be " "declared the default. Therefore, since there must always be a default " "policy, there must also always be at least one policy which is not " "deprecated." msgstr "" #: ../../source/overview_policies.rst:323 msgid "``deprecated = [true|false]`` (optional)" msgstr "" #: ../../source/overview_policies.rst:324 msgid "See :ref:`deprecate-policy` for more information." msgstr "" #: ../../source/overview_policies.rst:326 msgid "" "The option ``policy_type`` is used to distinguish between different policy " "types." msgstr "" #: ../../source/overview_policies.rst:328 msgid "The default value is ``replication``." msgstr "" #: ../../source/overview_policies.rst:328 msgid "``policy_type = [replication|erasure_coding]`` (optional)" msgstr "" #: ../../source/overview_policies.rst:329 msgid "When defining an EC policy use the value ``erasure_coding``." msgstr "" #: ../../source/overview_policies.rst:331 msgid "" "The option ``diskfile_module`` is used to load an alternate backend object " "storage plug-in architecture." msgstr "" #: ../../source/overview_policies.rst:333 msgid "" "The default value is ``egg:swift#replication.fs`` or ``egg:" "swift#erasure_coding.fs`` depending on the policy type. The scheme and " "package name are optionals and default to ``egg`` and ``swift``." msgstr "" #: ../../source/overview_policies.rst:335 msgid "``diskfile_module = `` (optional)" msgstr "" #: ../../source/overview_policies.rst:337 msgid "" "The EC policy type has additional required options. See :ref:" "`using_ec_policy` for details." msgstr "" #: ../../source/overview_policies.rst:340 msgid "" "The following is an example of a properly configured ``swift.conf`` file. " "See :doc:`policies_saio` for full instructions on setting up an all-in-one " "with this example configuration.::" msgstr "" #: ../../source/overview_policies.rst:364 msgid "Creating a ring" msgstr "" #: ../../source/overview_policies.rst:366 msgid "" "Once ``swift.conf`` is configured for a new policy, a new ring must be " "created. The ring tools are not policy name aware so it's critical that the " "correct policy index be used when creating the new policy's ring file. " "Additional object rings are created using ``swift-ring-builder`` in the same " "manner as the legacy ring except that ``-N`` is appended after the word " "``object`` in the builder file name, where ``N`` matches the policy index " "used in ``swift.conf``. So, to create the ring for policy index ``1``::" msgstr "" #: ../../source/overview_policies.rst:376 msgid "" "Continue to use the same naming convention when using ``swift-ring-builder`` " "to add devices, rebalance etc. This naming convention is also used in the " "pattern for per-policy storage node data directories." msgstr "" #: ../../source/overview_policies.rst:382 msgid "" "The same drives can indeed be used for multiple policies and the details of " "how that's managed on disk will be covered in a later section, it's " "important to understand the implications of such a configuration before " "setting one up. Make sure it's really what you want to do, in many cases it " "will be, but in others maybe not." msgstr "" #: ../../source/overview_policies.rst:390 msgid "Proxy server configuration (optional)" msgstr "" #: ../../source/overview_policies.rst:392 msgid "" "The :ref:`proxy-server` configuration options related to read and write " "affinity may optionally be overridden for individual storage policies. See :" "ref:`proxy_server_per_policy_config` for more details." msgstr "" #: ../../source/overview_policies.rst:399 msgid "Using Policies" msgstr "" #: ../../source/overview_policies.rst:401 msgid "" "Using policies is very simple - a policy is only specified when a container " "is initially created. There are no other API changes. Creating a container " "can be done without any special policy information::" msgstr "" #: ../../source/overview_policies.rst:408 msgid "" "Which will result in a container created that is associated with the policy " "name 'gold' assuming we're using the swift.conf example from above. It " "would use 'gold' because it was specified as the default. Now, when we put " "an object into this container, it will get placed on nodes that are part of " "the ring we created for policy 'gold'." msgstr "" #: ../../source/overview_policies.rst:414 msgid "" "If we wanted to explicitly state that we wanted policy 'gold' the command " "would simply need to include a new header as shown below::" msgstr "" #: ../../source/overview_policies.rst:420 msgid "" "And that's it! The application does not need to specify the policy name " "ever again. There are some illegal operations however:" msgstr "" #: ../../source/overview_policies.rst:423 msgid "If an invalid (typo, non-existent) policy is specified: 400 Bad Request" msgstr "" #: ../../source/overview_policies.rst:424 msgid "if you try to change the policy either via PUT or POST: 409 Conflict" msgstr "" #: ../../source/overview_policies.rst:426 msgid "" "If you'd like to see how the storage in the cluster is being used, simply " "HEAD the account and you'll see not only the cumulative numbers, as before, " "but per policy statistics as well. In the example below there's 3 objects " "total with two of them in policy 'gold' and one in policy 'silver'::" msgstr "" #: ../../source/overview_policies.rst:434 msgid "and your results will include (some output removed for readability)::" msgstr "" #: ../../source/overview_policies.rst:448 msgid "" "Now that we've explained a little about what Policies are and how to " "configure/use them, let's explore how Storage Policies fit in at the nuts-n-" "bolts level." msgstr "" #: ../../source/overview_policies.rst:453 msgid "Parsing and Configuring" msgstr "" #: ../../source/overview_policies.rst:455 msgid "" "The module, :ref:`storage_policy`, is responsible for parsing the ``swift." "conf`` file, validating the input, and creating a global collection of " "configured policies via class :class:`.StoragePolicyCollection`. This " "collection is made up of policies of class :class:`.StoragePolicy`. The " "collection class includes handy functions for getting to a policy either by " "name or by index , getting info about the policies, etc. There's also one " "very important function, :meth:`~.StoragePolicyCollection.get_object_ring`. " "Object rings are members of the :class:`.StoragePolicy` class and are " "actually not instantiated until the :meth:`~.StoragePolicy.load_ring` method " "is called. Any caller anywhere in the code base that needs to access an " "object ring must use the :data:`.POLICIES` global singleton to access the :" "meth:`~.StoragePolicyCollection.get_object_ring` function and provide the " "policy index which will call :meth:`~.StoragePolicy.load_ring` if needed; " "however, when starting request handling services such as the :ref:`proxy-" "server` rings are proactively loaded to provide moderate protection against " "a mis-configuration resulting in a run time error. The global is " "instantiated when Swift starts and provides a mechanism to patch policies " "for the test code." msgstr "" #: ../../source/overview_policies.rst:477 msgid "" "Middleware can take advantage of policies through the :data:`.POLICIES` " "global and by importing :func:`.get_container_info` to gain access to the " "policy index associated with the container in question. From the index it " "can then use the :data:`.POLICIES` singleton to grab the right ring. For " "example, :ref:`list_endpoints` is policy aware using the means just " "described. Another example is :ref:`recon` which will report the md5 sums " "for all of the rings." msgstr "" #: ../../source/overview_policies.rst:487 msgid "" "The :ref:`proxy-server` module's role in Storage Policies is essentially to " "make sure the correct ring is used as its member element. Before policies, " "the one object ring would be instantiated when the :class:`.Application` " "class was instantiated and could be overridden by test code via init " "parameter. With policies, however, there is no init parameter and the :" "class:`.Application` class instead depends on the :data:`.POLICIES` global " "singleton to retrieve the ring which is instantiated the first time it's " "needed. So, instead of an object ring member of the :class:`.Application` " "class, there is an accessor function, :meth:`~.Application.get_object_ring`, " "that gets the ring from :data:`.POLICIES`." msgstr "" #: ../../source/overview_policies.rst:498 msgid "" "In general, when any module running on the proxy requires an object ring, it " "does so via first getting the policy index from the cached container info. " "The exception is during container creation where it uses the policy name " "from the request header to look up policy index from the :data:`.POLICIES` " "global. Once the proxy has determined the policy index, it can use the :" "meth:`~.Application.get_object_ring` method described earlier to gain access " "to the correct ring. It then has the responsibility of passing the index " "information, not the policy name, on to the back-end servers via the header " "``X -Backend-Storage-Policy-Index``. Going the other way, the proxy also " "strips the index out of headers that go back to clients, and makes sure they " "only see the friendly policy names." msgstr "" #: ../../source/overview_policies.rst:513 msgid "" "Policies each have their own directories on the back-end servers and are " "identified by their storage policy indexes. Organizing the back-end " "directory structures by policy index helps keep track of things and also " "allows for sharing of disks between policies which may or may not make sense " "depending on the needs of the provider. More on this later, but for now be " "aware of the following directory naming convention:" msgstr "" #: ../../source/overview_policies.rst:520 msgid "``/objects`` maps to objects associated with Policy-0" msgstr "" #: ../../source/overview_policies.rst:521 msgid "``/objects-N`` maps to storage policy index #N" msgstr "" #: ../../source/overview_policies.rst:522 msgid "``/async_pending`` maps to async pending update for Policy-0" msgstr "" #: ../../source/overview_policies.rst:523 msgid "" "``/async_pending-N`` maps to async pending update for storage policy index #N" msgstr "" #: ../../source/overview_policies.rst:524 msgid "``/tmp`` maps to the DiskFile temporary directory for Policy-0" msgstr "" #: ../../source/overview_policies.rst:525 msgid "``/tmp-N`` maps to the DiskFile temporary directory for policy index #N" msgstr "" #: ../../source/overview_policies.rst:526 msgid "``/quarantined/objects`` maps to the quarantine directory for Policy-0" msgstr "" #: ../../source/overview_policies.rst:527 msgid "" "``/quarantined/objects-N`` maps to the quarantine directory for policy index " "#N" msgstr "" #: ../../source/overview_policies.rst:529 msgid "" "Note that these directory names are actually owned by the specific Diskfile " "implementation, the names shown above are used by the default Diskfile." msgstr "" #: ../../source/overview_policies.rst:535 msgid "" "The :ref:`object-server` is not involved with selecting the storage policy " "placement directly. However, because of how back-end directory structures " "are setup for policies, as described earlier, the object server modules do " "play a role. When the object server gets a :class:`.Diskfile`, it passes in " "the policy index and leaves the actual directory naming/structure mechanisms " "to :class:`.Diskfile`. By passing in the index, the instance of :class:`." "Diskfile` being used will assure that data is properly located in the tree " "based on its policy." msgstr "" #: ../../source/overview_policies.rst:544 msgid "" "For the same reason, the :ref:`object-updater` also is policy aware. As " "previously described, different policies use different async pending " "directories so the updater needs to know how to scan them appropriately." msgstr "" #: ../../source/overview_policies.rst:548 msgid "" "The :ref:`object-replicator` is policy aware in that, depending on the " "policy, it may have to do drastically different things, or maybe not. For " "example, the difference in handling a replication job for 2x versus 3x is " "trivial; however, the difference in handling replication between 3x and " "erasure code is most definitely not. In fact, the term 'replication' really " "isn't appropriate for some policies like erasure code; however, the majority " "of the framework for collecting and processing jobs is common. Thus, those " "functions in the replicator are leveraged for all policies and then there is " "policy specific code required for each policy, added when the policy is " "defined if needed." msgstr "" #: ../../source/overview_policies.rst:558 msgid "" "The ssync functionality is policy aware for the same reason. Some of the " "other modules may not obviously be affected, but the back-end directory " "structure owned by :class:`.Diskfile` requires the policy index parameter. " "Therefore ssync being policy aware really means passing the policy index " "along. See :class:`~swift.obj.ssync_sender` and :class:`~swift.obj." "ssync_receiver` for more information on ssync." msgstr "" #: ../../source/overview_policies.rst:565 msgid "" "For :class:`.Diskfile` itself, being policy aware is all about managing the " "back-end structure using the provided policy index. In other words, callers " "who get a :class:`.Diskfile` instance provide a policy index and :class:`." "Diskfile`'s job is to keep data separated via this index (however it " "chooses) such that policies can share the same media/nodes if desired. The " "included implementation of :class:`.Diskfile` lays out the directory " "structure described earlier but that's owned within :class:`.Diskfile`; " "external modules have no visibility into that detail. A common function is " "provided to map various directory names and/or strings based on their policy " "index. For example :class:`.Diskfile` defines :func:`~swift.obj.diskfile." "get_data_dir` which builds off of a generic :func:`.get_policy_string` to " "consistently build policy aware strings for various usage." msgstr "" #: ../../source/overview_policies.rst:581 msgid "" "The :ref:`container-server` plays a very important role in Storage Policies, " "it is responsible for handling the assignment of a policy to a container and " "the prevention of bad things like changing policies or picking the wrong " "policy to use when nothing is specified (recall earlier discussion on " "Policy-0 versus default)." msgstr "" #: ../../source/overview_policies.rst:587 msgid "" "The :ref:`container-updater` is policy aware, however its job is very " "simple, to pass the policy index along to the :ref:`account-server` via a " "request header." msgstr "" #: ../../source/overview_policies.rst:590 msgid "" "The :ref:`container-backend` is responsible for both altering existing DB " "schema as well as assuring new DBs are created with a schema that supports " "storage policies. The \"on-demand\" migration of container schemas allows " "Swift to upgrade without downtime (sqlite's alter statements are fast " "regardless of row count). To support rolling upgrades (and downgrades) the " "incompatible schema changes to the ``container_stat`` table are made to a " "``container_info`` table, and the ``container_stat`` table is replaced with " "a view that includes an ``INSTEAD OF UPDATE`` trigger which makes it behave " "like the old table." msgstr "" #: ../../source/overview_policies.rst:600 msgid "" "The policy index is stored here for use in reporting information about the " "container as well as managing split-brain scenario induced discrepancies " "between containers and their storage policies. Furthermore, during split-" "brain, containers must be prepared to track object updates from multiple " "policies so the object table also includes a ``storage_policy_index`` " "column. Per-policy object counts and bytes are updated in the " "``policy_stat`` table using ``INSERT`` and ``DELETE`` triggers similar to " "the pre-policy triggers that updated ``container_stat`` directly." msgstr "" #: ../../source/overview_policies.rst:609 msgid "" "The :ref:`container-replicator` daemon will pro-actively migrate legacy " "schemas as part of its normal consistency checking process when it updates " "the ``reconciler_sync_point`` entry in the ``container_info`` table. This " "ensures that read heavy containers which do not encounter any writes will " "still get migrated to be fully compatible with the post-storage-policy " "queries without having to fall back and retry queries with the legacy schema " "to service container read requests." msgstr "" #: ../../source/overview_policies.rst:617 msgid "" "The :ref:`container-sync-daemon` functionality only needs to be policy aware " "in that it accesses the object rings. Therefore, it needs to pull the " "policy index out of the container information and use it to select the " "appropriate object ring from the :data:`.POLICIES` global." msgstr "" #: ../../source/overview_policies.rst:625 msgid "" "The :ref:`account-server`'s role in Storage Policies is really limited to " "reporting. When a HEAD request is made on an account (see example provided " "earlier), the account server is provided with the storage policy index and " "builds the ``object_count`` and ``byte_count`` information for the client on " "a per policy basis." msgstr "" #: ../../source/overview_policies.rst:631 msgid "" "The account servers are able to report per-storage-policy object and byte " "counts because of some policy specific DB schema changes. A policy specific " "table, ``policy_stat``, maintains information on a per policy basis (one row " "per policy) in the same manner in which the ``account_stat`` table does. " "The ``account_stat`` table still serves the same purpose and is not replaced " "by ``policy_stat``, it holds the total account stats whereas ``policy_stat`` " "just has the break downs. The backend is also responsible for migrating pre-" "storage-policy accounts by altering the DB schema and populating the " "``policy_stat`` table for Policy-0 with current ``account_stat`` data at " "that point in time." msgstr "" #: ../../source/overview_policies.rst:642 msgid "" "The per-storage-policy object and byte counts are not updated with each " "object PUT and DELETE request, instead container updates to the account " "server are performed asynchronously by the ``swift-container-updater``." msgstr "" #: ../../source/overview_policies.rst:649 msgid "Upgrading and Confirming Functionality" msgstr "" #: ../../source/overview_policies.rst:651 msgid "" "Upgrading to a version of Swift that has Storage Policy support is not " "difficult, in fact, the cluster administrator isn't required to make any " "special configuration changes to get going. Swift will automatically begin " "using the existing object ring as both the default ring and the Policy-0 " "ring. Adding the declaration of policy 0 is totally optional and in its " "absence, the name given to the implicit policy 0 will be 'Policy-0'. Let's " "say for testing purposes that you wanted to take an existing cluster that " "already has lots of data on it and upgrade to Swift with Storage Policies. " "From there you want to go ahead and create a policy and test a few things " "out. All you need to do is:" msgstr "" #: ../../source/overview_policies.rst:661 msgid "Upgrade all of your Swift nodes to a policy-aware version of Swift" msgstr "" #: ../../source/overview_policies.rst:664 msgid "" "Create containers and objects and confirm their placement is as expected" msgstr "" #: ../../source/overview_policies.rst:671 msgid "" "If you downgrade from a Storage Policy enabled version of Swift to an older " "version that doesn't support policies, you will not be able to access any " "data stored in policies other than the policy with index 0 but those objects " "WILL appear in container listings (possibly as duplicates if there was a " "network partition and un-reconciled objects). It is EXTREMELY important " "that you perform any necessary integration testing on the upgraded " "deployment before enabling an additional storage policy to ensure a " "consistent API experience for your clients. DO NOT downgrade to a version " "of Swift that does not support storage policies once you expose multiple " "storage policies." msgstr "" #: ../../source/overview_reaper.rst:3 msgid "The Account Reaper" msgstr "" #: ../../source/overview_reaper.rst:5 msgid "" "The Account Reaper removes data from deleted accounts in the background." msgstr "" #: ../../source/overview_reaper.rst:7 msgid "" "An account is marked for deletion by a reseller issuing a DELETE request on " "the account's storage URL. This simply puts the value DELETED into the " "status column of the account_stat table in the account database (and " "replicas), indicating the data for the account should be deleted later." msgstr "" #: ../../source/overview_reaper.rst:12 msgid "" "There is normally no set retention time and no undelete; it is assumed the " "reseller will implement such features and only call DELETE on the account " "once it is truly desired the account's data be removed. However, in order to " "protect the Swift cluster accounts from an improper or mistaken delete " "request, you can set a delay_reaping value in the [account-reaper] section " "of the account-server.conf to delay the actual deletion of data. At this " "time, there is no utility to undelete an account; one would have to update " "the account database replicas directly, setting the status column to an " "empty string and updating the put_timestamp to be greater than the " "delete_timestamp. (On the TODO list is writing a utility to perform this " "task, preferably through a REST call.)" msgstr "" #: ../../source/overview_reaper.rst:24 msgid "" "The account reaper runs on each account server and scans the server " "occasionally for account databases marked for deletion. It will only trigger " "on accounts that server is the primary node for, so that multiple account " "servers aren't all trying to do the same work at the same time. Using " "multiple servers to delete one account might improve deletion speed, but " "requires coordination so they aren't duplicating effort. Speed really isn't " "as much of a concern with data deletion and large accounts aren't deleted " "that often." msgstr "" #: ../../source/overview_reaper.rst:32 msgid "" "The deletion process for an account itself is pretty straightforward. For " "each container in the account, each object is deleted and then the container " "is deleted. Any deletion requests that fail won't stop the overall process, " "but will cause the overall process to fail eventually (for example, if an " "object delete times out, the container won't be able to be deleted later and " "therefore the account won't be deleted either). The overall process " "continues even on a failure so that it doesn't get hung up reclaiming " "cluster space because of one troublesome spot. The account reaper will keep " "trying to delete an account until it eventually becomes empty, at which " "point the database reclaim process within the db_replicator will eventually " "remove the database files." msgstr "" #: ../../source/overview_reaper.rst:43 msgid "" "Sometimes a persistent error state can prevent some object or container from " "being deleted. If this happens, you will see a message such as \"Account " " has not been reaped since \" in the log. You can control when " "this is logged with the reap_warn_after value in the [account-reaper] " "section of the account-server.conf file. By default this is 30 days." msgstr "" #: ../../source/overview_reaper.rst:53 msgid "" "At first, a simple approach of deleting an account through completely " "external calls was considered as it required no changes to the system. All " "data would simply be deleted in the same way the actual user would, through " "the public REST API. However, the downside was that it would use proxy " "resources and log everything when it didn't really need to. Also, it would " "likely need a dedicated server or two, just for issuing the delete requests." msgstr "" #: ../../source/overview_reaper.rst:60 msgid "" "A completely bottom-up approach was also considered, where the object and " "container servers would occasionally scan the data they held and check if " "the account was deleted, removing the data if so. The upside was the speed " "of reclamation with no impact on the proxies or logging, but the downside " "was that nearly 100% of the scanning would result in no action creating a " "lot of I/O load for no reason." msgstr "" #: ../../source/overview_reaper.rst:67 msgid "" "A more container server centric approach was also considered, where the " "account server would mark all the containers for deletion and the container " "servers would delete the objects in each container and then themselves. This " "has the benefit of still speedy reclamation for accounts with a lot of " "containers, but has the downside of a pretty big load spike. The process " "could be slowed down to alleviate the load spike possibility, but then the " "benefit of speedy reclamation is lost and what's left is just a more complex " "process. Also, scanning all the containers for those marked for deletion " "when the majority wouldn't be seemed wasteful. The db_replicator could do " "this work while performing its replication scan, but it would have to spawn " "and track deletion processes which seemed needlessly complex." msgstr "" #: ../../source/overview_reaper.rst:79 msgid "" "In the end, an account server centric approach seemed best, as described " "above." msgstr "" #: ../../source/overview_replication.rst:5 msgid "" "Because each replica in Swift functions independently, and clients generally " "require only a simple majority of nodes responding to consider an operation " "successful, transient failures like network partitions can quickly cause " "replicas to diverge. These differences are eventually reconciled by " "asynchronous, peer-to-peer replicator processes. The replicator processes " "traverse their local filesystems, concurrently performing operations in a " "manner that balances load across physical disks." msgstr "" #: ../../source/overview_replication.rst:13 msgid "" "Replication uses a push model, with records and files generally only being " "copied from local to remote replicas. This is important because data on the " "node may not belong there (as in the case of handoffs and ring changes), and " "a replicator can't know what data exists elsewhere in the cluster that it " "should pull in. It's the duty of any node that contains data to ensure that " "data gets to where it belongs. Replica placement is handled by the ring." msgstr "" #: ../../source/overview_replication.rst:20 msgid "" "Every deleted record or file in the system is marked by a tombstone, so that " "deletions can be replicated alongside creations. The replication process " "cleans up tombstones after a time period known as the consistency window. " "The consistency window encompasses replication duration and how long " "transient failure can remove a node from the cluster. Tombstone cleanup must " "be tied to replication to reach replica convergence." msgstr "" #: ../../source/overview_replication.rst:27 msgid "" "If a replicator detects that a remote drive has failed, the replicator uses " "the get_more_nodes interface for the ring to choose an alternate node with " "which to synchronize. The replicator can maintain desired levels of " "replication in the face of disk failures, though some replicas may not be in " "an immediately usable location. Note that the replicator doesn't maintain " "desired levels of replication when other failures, such as entire node " "failures, occur because most failure are transient." msgstr "" #: ../../source/overview_replication.rst:35 msgid "" "Replication is an area of active development, and likely rife with potential " "improvements to speed and correctness." msgstr "" #: ../../source/overview_replication.rst:38 msgid "" "There are two major classes of replicator - the db replicator, which " "replicates accounts and containers, and the object replicator, which " "replicates object data." msgstr "" #: ../../source/overview_replication.rst:44 msgid "DB Replication" msgstr "" #: ../../source/overview_replication.rst:46 msgid "" "The first step performed by db replication is a low-cost hash comparison to " "determine whether two replicas already match. Under normal operation, this " "check is able to verify that most databases in the system are already " "synchronized very quickly. If the hashes differ, the replicator brings the " "databases in sync by sharing records added since the last sync point." msgstr "" #: ../../source/overview_replication.rst:52 msgid "" "This sync point is a high water mark noting the last record at which two " "databases were known to be in sync, and is stored in each database as a " "tuple of the remote database id and record id. Database ids are unique " "amongst all replicas of the database, and record ids are monotonically " "increasing integers. After all new records have been pushed to the remote " "database, the entire sync table of the local database is pushed, so the " "remote database can guarantee that it is in sync with everything with which " "the local database has previously synchronized." msgstr "" #: ../../source/overview_replication.rst:61 msgid "" "If a replica is found to be missing entirely, the whole local database file " "is transmitted to the peer using rsync(1) and vested with a new unique id." msgstr "" #: ../../source/overview_replication.rst:64 msgid "" "In practice, DB replication can process hundreds of databases per " "concurrency setting per second (up to the number of available CPUs or disks) " "and is bound by the number of DB transactions that must be performed." msgstr "" #: ../../source/overview_replication.rst:71 msgid "Object Replication" msgstr "" #: ../../source/overview_replication.rst:73 msgid "" "The initial implementation of object replication simply performed an rsync " "to push data from a local partition to all remote servers it was expected to " "exist on. While this performed adequately at small scale, replication times " "skyrocketed once directory structures could no longer be held in RAM. We now " "use a modification of this scheme in which a hash of the contents for each " "suffix directory is saved to a per-partition hashes file. The hash for a " "suffix directory is invalidated when the contents of that suffix directory " "are modified." msgstr "" #: ../../source/overview_replication.rst:82 msgid "" "The object replication process reads in these hash files, calculating any " "invalidated hashes. It then transmits the hashes to each remote server that " "should hold the partition, and only suffix directories with differing hashes " "on the remote server are rsynced. After pushing files to the remote server, " "the replication process notifies it to recalculate hashes for the rsynced " "suffix directories." msgstr "" #: ../../source/overview_replication.rst:89 msgid "" "Performance of object replication is generally bound by the number of " "uncached directories it has to traverse, usually as a result of invalidated " "suffix directory hashes. Using write volume and partition counts from our " "running systems, it was designed so that around 2% of the hash space on a " "normal node will be invalidated per day, which has experimentally given us " "acceptable replication speeds." msgstr "" #: ../../source/overview_replication.rst:98 msgid "" "Work continues with a new ssync method where rsync is not used at all and " "instead all-Swift code is used to transfer the objects. At first, this ssync " "will just strive to emulate the rsync behavior. Once deemed stable it will " "open the way for future improvements in replication since we'll be able to " "easily add code in the replication path instead of trying to alter the rsync " "code base and distributing such modifications." msgstr "" #: ../../source/overview_replication.rst:105 msgid "" "One of the first improvements planned is an \"index.db\" that will replace " "the hashes.pkl. This will allow quicker updates to that data as well as more " "streamlined queries. Quite likely we'll implement a better scheme than the " "current one hashes.pkl uses (hash-trees, that sort of thing)." msgstr "" #: ../../source/overview_replication.rst:110 msgid "" "Another improvement planned all along the way is separating the local disk " "structure from the protocol path structure. This separation will allow ring " "resizing at some point, or at least ring-doubling." msgstr "" #: ../../source/overview_replication.rst:114 msgid "" "Note that for objects being stored with an Erasure Code policy, the " "replicator daemon is not involved. Instead, the reconstructor is used by " "Erasure Code policies and is analogous to the replicator for Replication " "type policies. See :doc:`overview_erasure_code` for complete information on " "both Erasure Code support as well as the reconstructor." msgstr "" #: ../../source/overview_replication.rst:122 msgid "Hashes.pkl" msgstr "" #: ../../source/overview_replication.rst:124 msgid "" "The hashes.pkl file is a key element for both replication and reconstruction " "(for Erasure Coding). Both daemons use this file to determine if any kind " "of action is required between nodes that are participating in the durability " "scheme. The file itself is a pickled dictionary with slightly different " "formats depending on whether the policy is Replication or Erasure Code. In " "either case, however, the same basic information is provided between the " "nodes. The dictionary contains a dictionary where the key is a suffix " "directory name and the value is the MD5 hash of the directory listing for " "that suffix. In this manner, the daemon can quickly identify differences " "between local and remote suffix directories on a per partition basis as the " "scope of any one hashes.pkl file is a partition directory." msgstr "" #: ../../source/overview_replication.rst:136 msgid "" "For Erasure Code policies, there is a little more information required. An " "object's hash directory may contain multiple fragments of a single object in " "the event that the node is acting as a handoff or perhaps if a rebalance is " "underway. Each fragment of an object is stored with a fragment index, so " "the hashes.pkl for an Erasure Code partition will still be a dictionary " "keyed on the suffix directory name, however, the value is another dictionary " "keyed on the fragment index with subsequent MD5 hashes for each one as " "values. Some files within an object hash directory don't require a fragment " "index so None is used to represent those. Below are examples of what these " "dictionaries might look like." msgstr "" #: ../../source/overview_replication.rst:147 msgid "Replication hashes.pkl::" msgstr "" #: ../../source/overview_replication.rst:152 msgid "Erasure Code hashes.pkl::" msgstr "" #: ../../source/overview_replication.rst:165 msgid "Dedicated replication network" msgstr "" #: ../../source/overview_replication.rst:167 msgid "" "Swift has support for using dedicated network for replication traffic. For " "more information see :ref:`Overview of dedicated replication network " "`." msgstr "" #: ../../source/overview_ring.rst:3 msgid "The Rings" msgstr "" #: ../../source/overview_ring.rst:5 msgid "" "The rings determine where data should reside in the cluster. There is a " "separate ring for account databases, container databases, and individual " "object storage policies but each ring works in the same way. These rings are " "externally managed. The server processes themselves do not modify the rings; " "they are instead given new rings modified by other tools." msgstr "" #: ../../source/overview_ring.rst:11 msgid "" "The ring uses a configurable number of bits from the MD5 hash of an item's " "path as a partition index that designates the device(s) on which that item " "should be stored. The number of bits kept from the hash is known as the " "partition power, and 2 to the partition power indicates the partition count. " "Partitioning the full MD5 hash ring allows the cluster components to process " "resources in batches. This ends up either more efficient or at least less " "complex than working with each item separately or the entire cluster all at " "once." msgstr "" #: ../../source/overview_ring.rst:19 msgid "" "Another configurable value is the replica count, which indicates how many " "devices to assign for each partition in the ring. By having multiple devices " "responsible for each partition, the cluster can recover from drive or " "network failures." msgstr "" #: ../../source/overview_ring.rst:24 msgid "" "Devices are added to the ring to describe the capacity available for " "partition replica assignments. Devices are placed into failure domains " "consisting of region, zone, and server. Regions can be used to describe " "geographical systems characterized by lower bandwidth or higher latency " "between machines in different regions. Many rings will consist of only a " "single region. Zones can be used to group devices based on physical " "locations, power separations, network separations, or any other attribute " "that would lessen multiple replicas being unavailable at the same time." msgstr "" #: ../../source/overview_ring.rst:33 msgid "" "Devices are given a weight which describes the relative storage capacity " "contributed by the device in comparison to other devices." msgstr "" #: ../../source/overview_ring.rst:36 msgid "" "When building a ring, replicas for each partition will be assigned to " "devices according to the devices' weights. Additionally, each replica of a " "partition will preferentially be assigned to a device whose failure domain " "does not already have a replica for that partition. Only a single replica " "of a partition may be assigned to each device - you must have at least as " "many devices as replicas." msgstr "" #: ../../source/overview_ring.rst:47 msgid "Ring Builder" msgstr "" #: ../../source/overview_ring.rst:49 msgid "" "The rings are built and managed manually by a utility called the ring-" "builder. The ring-builder assigns partitions to devices and writes an " "optimized structure to a gzipped, serialized file on disk for shipping out " "to the servers. The server processes check the modification time of the file " "occasionally and reload their in-memory copies of the ring structure as " "needed. Because of how the ring-builder manages changes to the ring, using a " "slightly older ring usually just means that for a subset of the partitions " "the device for one of the replicas will be incorrect, which can be easily " "worked around." msgstr "" #: ../../source/overview_ring.rst:58 msgid "" "The ring-builder also keeps a separate builder file which includes the ring " "information as well as additional data required to build future rings. It is " "very important to keep multiple backup copies of these builder files. One " "option is to copy the builder files out to every server while copying the " "ring files themselves. Another is to upload the builder files into the " "cluster itself. Complete loss of a builder file will mean creating a new " "ring from scratch, nearly all partitions will end up assigned to different " "devices, and therefore nearly all data stored will have to be replicated to " "new locations. So, recovery from a builder file loss is possible, but data " "will definitely be unreachable for an extended time." msgstr "" #: ../../source/overview_ring.rst:71 msgid "Ring Data Structure" msgstr "" #: ../../source/overview_ring.rst:73 msgid "" "The ring data structure consists of three top level fields: a list of " "devices in the cluster, a list of lists of device ids indicating partition " "to device assignments, and an integer indicating the number of bits to shift " "an MD5 hash to calculate the partition for the hash." msgstr "" #: ../../source/overview_ring.rst:80 msgid "List of Devices" msgstr "" #: ../../source/overview_ring.rst:82 msgid "" "The list of devices is known internally to the Ring class as ``devs``. Each " "item in the list of devices is a dictionary with the following keys:" msgstr "" #: ../../source/overview_ring.rst:89 msgid "The index into the list of devices." msgstr "" #: ../../source/overview_ring.rst:89 msgid "id" msgstr "" #: ../../source/overview_ring.rst:89 ../../source/overview_ring.rst:90 #: ../../source/overview_ring.rst:91 msgid "integer" msgstr "" #: ../../source/overview_ring.rst:90 msgid "The zone in which the device resides." msgstr "" #: ../../source/overview_ring.rst:90 msgid "zone" msgstr "" #: ../../source/overview_ring.rst:91 msgid "The region in which the zone resides." msgstr "" #: ../../source/overview_ring.rst:91 msgid "region" msgstr "" #: ../../source/overview_ring.rst:92 msgid "" "The relative weight of the device in comparison to other devices. This " "usually corresponds directly to the amount of disk space the device has " "compared to other devices. For instance a device with 1 terabyte of space " "might have a weight of 100.0 and another device with 2 terabytes of space " "might have a weight of 200.0. This weight can also be used to bring back " "into balance a device that has ended up with more or less data than desired " "over time. A good average weight of 100.0 allows flexibility in lowering the " "weight later if necessary." msgstr "" #: ../../source/overview_ring.rst:92 msgid "float" msgstr "" #: ../../source/overview_ring.rst:92 msgid "weight" msgstr "" #: ../../source/overview_ring.rst:101 msgid "The IP address or hostname of the server containing the device." msgstr "" #: ../../source/overview_ring.rst:101 msgid "ip" msgstr "" #: ../../source/overview_ring.rst:101 ../../source/overview_ring.rst:104 #: ../../source/overview_ring.rst:106 msgid "string" msgstr "" #: ../../source/overview_ring.rst:102 msgid "" "The TCP port on which the server process listens to serve requests for the " "device." msgstr "" #: ../../source/overview_ring.rst:102 msgid "int" msgstr "" #: ../../source/overview_ring.rst:102 msgid "port" msgstr "" #: ../../source/overview_ring.rst:104 msgid "The on-disk name of the device on the server. For example: ``sdb1``" msgstr "" #: ../../source/overview_ring.rst:104 msgid "device" msgstr "" #: ../../source/overview_ring.rst:106 msgid "" "A general-use field for storing additional information for the device. This " "information isn't used directly by the server processes, but can be useful " "in debugging. For example, the date and time of installation and hardware " "manufacturer could be stored here." msgstr "" #: ../../source/overview_ring.rst:106 msgid "meta" msgstr "" #: ../../source/overview_ring.rst:114 msgid "" "The list of devices may contain holes, or indexes set to ``None``, for " "devices that have been removed from the cluster. However, device ids are " "reused. Device ids are reused to avoid potentially running out of device id " "slots when there are available slots (from prior removal of devices). A " "consequence of this device id reuse is that the device id (integer value) " "does not necessarily correspond with the chronology of when the device was " "added to the ring. Also, some devices may be temporarily disabled by setting " "their weight to ``0.0``. To obtain a list of active devices (for uptime " "polling, for example) the Python code would look like::" msgstr "" #: ../../source/overview_ring.rst:128 msgid "Partition Assignment List" msgstr "" #: ../../source/overview_ring.rst:130 msgid "" "The partition assignment list is known internally to the Ring class as " "``_replica2part2dev_id``. This is a list of ``array('H')``\\s, one for each " "replica. Each ``array('H')`` has a length equal to the partition count for " "the ring. Each integer in the ``array('H')`` is an index into the above list " "of devices." msgstr "" #: ../../source/overview_ring.rst:136 msgid "" "So, to create a list of device dictionaries assigned to a partition, the " "Python code would look like::" msgstr "" #: ../../source/overview_ring.rst:142 msgid "" "``array('H')`` is used for memory conservation as there may be millions of " "partitions." msgstr "" #: ../../source/overview_ring.rst:147 msgid "Partition Shift Value" msgstr "" #: ../../source/overview_ring.rst:149 msgid "" "The partition shift value is known internally to the Ring class as " "``_part_shift``. This value is used to shift an MD5 hash of an item's path " "to calculate the partition on which the data for that item should reside. " "Only the top four bytes of the hash are used in this process. For example, " "to compute the partition for the path ``/account/container/object``, the " "Python code might look like::" msgstr "" #: ../../source/overview_ring.rst:159 msgid "" "For a ring generated with partition power ``P``, the partition shift value " "is ``32 - P``." msgstr "" #: ../../source/overview_ring.rst:164 msgid "Fractional Replicas" msgstr "" #: ../../source/overview_ring.rst:166 msgid "" "A ring is not restricted to having an integer number of replicas. In order " "to support the gradual changing of replica counts, the ring is able to have " "a real number of replicas." msgstr "" #: ../../source/overview_ring.rst:170 msgid "" "When the number of replicas is not an integer, the last element of " "``_replica2part2dev_id`` will have a length that is less than the partition " "count for the ring. This means that some partitions will have more replicas " "than others. For example, if a ring has ``3.25`` replicas, then 25% of its " "partitions will have four replicas, while the remaining 75% will have just " "three." msgstr "" #: ../../source/overview_ring.rst:181 msgid "Dispersion" msgstr "" #: ../../source/overview_ring.rst:183 msgid "" "With each rebalance, the ring builder calculates a dispersion metric. This " "is the percentage of partitions in the ring that have too many replicas " "within a particular failure domain." msgstr "" #: ../../source/overview_ring.rst:187 msgid "" "For example, if you have three servers in a cluster but two replicas for a " "partition get placed onto the same server, that partition will count towards " "the dispersion metric." msgstr "" #: ../../source/overview_ring.rst:191 msgid "" "A lower dispersion value is better, and the value can be used to find the " "proper value for \"overload\"." msgstr "" #: ../../source/overview_ring.rst:198 msgid "Overload" msgstr "" #: ../../source/overview_ring.rst:200 msgid "" "The ring builder tries to keep replicas as far apart as possible while still " "respecting device weights. When it can't do both, the overload factor " "determines what happens. Each device may take some extra fraction of its " "desired partitions to allow for replica dispersion; once that extra fraction " "is exhausted, replicas will be placed closer together than is optimal for " "durability." msgstr "" #: ../../source/overview_ring.rst:207 msgid "" "Essentially, the overload factor lets the operator trade off replica " "dispersion (durability) against device balance (uniform disk usage)." msgstr "" #: ../../source/overview_ring.rst:210 msgid "" "The default overload factor is ``0``, so device weights will be strictly " "followed." msgstr "" #: ../../source/overview_ring.rst:213 msgid "" "With an overload factor of ``0.1``, each device will accept 10% more " "partitions than it otherwise would, but only if needed to maintain " "dispersion." msgstr "" #: ../../source/overview_ring.rst:217 msgid "" "Example: Consider a 3-node cluster of machines with equal-size disks; let " "node A have 12 disks, node B have 12 disks, and node C have only 11 disks. " "Let the ring have an overload factor of ``0.1`` (10%)." msgstr "" #: ../../source/overview_ring.rst:221 msgid "" "Without the overload, some partitions would end up with replicas only on " "nodes A and B. However, with the overload, every device is willing to accept " "up to 10% more partitions for the sake of dispersion. The missing disk in C " "means there is one disk's worth of partitions that would like to spread " "across the remaining 11 disks, which gives each disk in C an extra 9.09% " "load. Since this is less than the 10% overload, there is one replica of each " "partition on each node." msgstr "" #: ../../source/overview_ring.rst:229 msgid "" "However, this does mean that the disks in node C will have more data on them " "than the disks in nodes A and B. If 80% full is the warning threshold for " "the cluster, node C's disks will reach 80% full while A and B's disks are " "only 72.7% full." msgstr "" #: ../../source/overview_ring.rst:236 msgid "Partition & Replica Terminology" msgstr "" #: ../../source/overview_ring.rst:238 msgid "" "All descriptions of consistent hashing describe the process of breaking the " "keyspace up into multiple ranges (vnodes, buckets, etc.) - many more than " "the number of \"nodes\" to which keys in the keyspace must be assigned. " "Swift calls these ranges `partitions` - they are partitions of the total " "keyspace." msgstr "" #: ../../source/overview_ring.rst:243 msgid "" "Each partition will have multiple replicas. Every replica of each partition " "must be assigned to a device in the ring. When describing a specific " "replica of a partition (like when it's assigned a device) it is described as " "a `part-replica` in that it is a specific `replica` of the specific " "`partition`. A single device will likely be assigned different replicas from " "many partitions, but it may not be assigned multiple replicas of a single " "partition." msgstr "" #: ../../source/overview_ring.rst:250 msgid "" "The total number of partitions in a ring is calculated as ``2 ** ``. The total number of part-replicas in a ring is calculated as " "`` * 2 ** ``." msgstr "" #: ../../source/overview_ring.rst:254 msgid "" "When considering a device's `weight` it is useful to describe the number of " "part-replicas it would like to be assigned. A single device, regardless of " "weight, will never hold more than ``2 ** `` part-replicas " "because it can not have more than one replica of any partition assigned. " "The number of part-replicas a device can take by weights is calculated as " "its `parts-wanted`. The true number of part-replicas assigned to a device " "can be compared to its parts-wanted similarly to a calculation of percentage " "error - this deviation in the observed result from the idealized target is " "called a device's `balance`." msgstr "" #: ../../source/overview_ring.rst:263 msgid "" "When considering a device's `failure domain` it is useful to describe the " "number of part-replicas it would like to be assigned. The number of part-" "replicas wanted in a failure domain of a tier is the sum of the part-" "replicas wanted in the failure domains of its sub-tier. However, " "collectively when the total number of part-replicas in a failure domain " "exceeds or is equal to ``2 ** `` it is most obvious that it's no " "longer sufficient to consider only the number of total part-replicas, but " "rather the fraction of each replica's partitions. Consider for example a " "ring with 3 replicas and 3 servers: while dispersion requires that each " "server hold only ⅓ of the total part-replicas, placement is additionally " "constrained to require ``1.0`` replica of *each* partition per server. It " "would not be sufficient to satisfy dispersion if two devices on one of the " "servers each held a replica of a single partition, while another server held " "none. By considering a decimal fraction of one replica's worth of " "partitions in a failure domain we can derive the total part-replicas wanted " "in a failure domain (``1.0 * 2 ** ``). Additionally we infer " "more about `which` part-replicas must go in the failure domain. Consider a " "ring with three replicas and two zones, each with two servers (four servers " "total). The three replicas worth of partitions will be assigned into two " "failure domains at the zone tier. Each zone must hold more than one replica " "of some partitions. We represent this improper fraction of a replica's " "worth of partitions in decimal form as ``1.5`` (``3.0 / 2``). This tells us " "not only the *number* of total partitions (``1.5 * 2 ** ``) but " "also that *each* partition must have `at least` one replica in this failure " "domain (in fact ``0.5`` of the partitions will have 2 replicas). Within " "each zone the two servers will hold ``0.75`` of a replica's worth of " "partitions - this is equal both to \"the fraction of a replica's worth of " "partitions assigned to each zone (``1.5``) divided evenly among the number " "of failure domains in its sub-tier (2 servers in each zone, i.e. ``1.5 / " "2``)\" but *also* \"the total number of replicas (``3.0``) divided evenly " "among the total number of failure domains in the server tier (2 servers × 2 " "zones = 4, i.e. ``3.0 / 4``)\". It is useful to consider that each server " "in this ring will hold only ``0.75`` of a replica's worth of partitions " "which tells that any server should have `at most` one replica of a given " "partition assigned. In the interests of brevity, some variable names will " "often refer to the concept representing the fraction of a replica's worth of " "partitions in decimal form as *replicanths* - this is meant to invoke " "connotations similar to ordinal numbers as applied to fractions, but " "generalized to a replica instead of a four\\*th* or a fif\\*th*. The \"n\" " "was probably thrown in because of Blade Runner." msgstr "" #: ../../source/overview_ring.rst:304 msgid "Building the Ring" msgstr "" #: ../../source/overview_ring.rst:306 msgid "" "First the ring builder calculates the replicanths wanted at each tier in the " "ring's topology based on weight." msgstr "" #: ../../source/overview_ring.rst:309 msgid "" "Then the ring builder calculates the replicanths wanted at each tier in the " "ring's topology based on dispersion." msgstr "" #: ../../source/overview_ring.rst:312 msgid "" "Then the ring builder calculates the maximum deviation on a single device " "between its weighted replicanths and wanted replicanths." msgstr "" #: ../../source/overview_ring.rst:315 msgid "" "Next we interpolate between the two replicanth values (weighted & wanted) at " "each tier using the specified overload (up to the maximum required " "overload). It's a linear interpolation, similar to solving for a point on a " "line between two points - we calculate the slope across the max required " "overload and then calculate the intersection of the line with the desired " "overload. This becomes the target." msgstr "" #: ../../source/overview_ring.rst:322 msgid "" "From the target we calculate the minimum and maximum number of replicas any " "partition may have in a tier. This becomes the `replica-plan`." msgstr "" #: ../../source/overview_ring.rst:325 msgid "" "Finally, we calculate the number of partitions that should ideally be " "assigned to each device based the replica-plan." msgstr "" #: ../../source/overview_ring.rst:328 msgid "" "On initial balance (i.e., the first time partitions are placed to generate a " "ring) we must assign each replica of each partition to the device that " "desires the most partitions excluding any devices that already have their " "maximum number of replicas of that partition assigned to some parent tier of " "that device's failure domain." msgstr "" #: ../../source/overview_ring.rst:334 msgid "" "When building a new ring based on an old ring, the desired number of " "partitions each device wants is recalculated from the current replica-plan. " "Next the partitions to be reassigned are gathered up. Any removed devices " "have all their assigned partitions unassigned and added to the gathered " "list. Any partition replicas that (due to the addition of new devices) can " "be spread out for better durability are unassigned and added to the gathered " "list. Any devices that have more partitions than they now desire have random " "partitions unassigned from them and added to the gathered list. Lastly, the " "gathered partitions are then reassigned to devices using a similar method as " "in the initial assignment described above." msgstr "" #: ../../source/overview_ring.rst:345 msgid "" "Whenever a partition has a replica reassigned, the time of the reassignment " "is recorded. This is taken into account when gathering partitions to " "reassign so that no partition is moved twice in a configurable amount of " "time. This configurable amount of time is known internally to the " "RingBuilder class as ``min_part_hours``. This restriction is ignored for " "replicas of partitions on devices that have been removed, as device removal " "should only happens on device failure and there's no choice but to make a " "reassignment." msgstr "" #: ../../source/overview_ring.rst:353 msgid "" "The above processes don't always perfectly rebalance a ring due to the " "random nature of gathering partitions for reassignment. To help reach a more " "balanced ring, the rebalance process is repeated a fixed number of times " "until the replica-plan is fulfilled or unable to be fulfilled (indicating we " "probably can't get perfect balance due to too many partitions recently " "moved)." msgstr "" #: ../../source/overview_ring.rst:364 msgid "Composite Rings" msgstr "" #: ../../source/overview_ring.rst:366 msgid "See :ref:`composite_builder`." msgstr "" #: ../../source/overview_ring.rst:370 msgid "swift-ring-composer (Experimental)" msgstr "" #: ../../source/overview_ring.rst:375 msgid "Ring Builder Analyzer" msgstr "" #: ../../source/overview_ring.rst:382 msgid "" "The ring code went through many iterations before arriving at what it is now " "and while it has largely been stable, the algorithm has seen a few tweaks or " "perhaps even fundamentally changed as new ideas emerge. This section will " "try to describe the previous ideas attempted and attempt to explain why they " "were discarded." msgstr "" #: ../../source/overview_ring.rst:388 msgid "" "A \"live ring\" option was considered where each server could maintain its " "own copy of the ring and the servers would use a gossip protocol to " "communicate the changes they made. This was discarded as too complex and " "error prone to code correctly in the project timespan available. One bug " "could easily gossip bad data out to the entire cluster and be difficult to " "recover from. Having an externally managed ring simplifies the process, " "allows full validation of data before it's shipped out to the servers, and " "guarantees each server is using a ring from the same timeline. It also means " "that the servers themselves aren't spending a lot of resources maintaining " "rings." msgstr "" #: ../../source/overview_ring.rst:398 msgid "" "A couple of \"ring server\" options were considered. One was where all ring " "lookups would be done by calling a service on a separate server or set of " "servers, but this was discarded due to the latency involved. Another was " "much like the current process but where servers could submit change requests " "to the ring server to have a new ring built and shipped back out to the " "servers. This was discarded due to project time constraints and because ring " "changes are currently infrequent enough that manual control was sufficient. " "However, lack of quick automatic ring changes did mean that other components " "of the system had to be coded to handle devices being unavailable for a " "period of hours until someone could manually update the ring." msgstr "" #: ../../source/overview_ring.rst:409 msgid "" "The current ring process has each replica of a partition independently " "assigned to a device. A version of the ring that used a third of the memory " "was tried, where the first replica of a partition was directly assigned and " "the other two were determined by \"walking\" the ring until finding " "additional devices in other zones. This was discarded due to the loss of " "control over how many replicas for a given partition moved at once. Keeping " "each replica independent allows for moving only one partition replica within " "a given time window (except due to device failures). Using the additional " "memory was deemed a good trade-off for moving data around the cluster much " "less often." msgstr "" #: ../../source/overview_ring.rst:419 msgid "" "Another ring design was tried where the partition to device assignments " "weren't stored in a big list in memory but instead each device was assigned " "a set of hashes, or anchors. The partition would be determined from the data " "item's hash and the nearest device anchors would determine where the " "replicas should be stored. However, to get reasonable distribution of data " "each device had to have a lot of anchors and walking through those anchors " "to find replicas started to add up. In the end, the memory savings wasn't " "that great and more processing power was used, so the idea was discarded." msgstr "" #: ../../source/overview_ring.rst:428 msgid "" "A completely non-partitioned ring was also tried but discarded as the " "partitioning helps many other components of the system, especially " "replication. Replication can be attempted and retried in a partition batch " "with the other replicas rather than each data item independently attempted " "and retried. Hashes of directory structures can be calculated and compared " "with other replicas to reduce directory walking and network traffic." msgstr "" #: ../../source/overview_ring.rst:435 msgid "" "Partitioning and independently assigning partition replicas also allowed for " "the best-balanced cluster. The best of the other strategies tended to give " "±10% variance on device balance with devices of equal weight and ±15% with " "devices of varying weights. The current strategy allows us to get ±3% and " "±8% respectively." msgstr "" #: ../../source/overview_ring.rst:441 msgid "" "Various hashing algorithms were tried. SHA offers better security, but the " "ring doesn't need to be cryptographically secure and SHA is slower. Murmur " "was much faster, but MD5 was built-in and hash computation is a small " "percentage of the overall request handling time. In all, once it was decided " "the servers wouldn't be maintaining the rings themselves anyway and only " "doing hash lookups, MD5 was chosen for its general availability, good " "distribution, and adequate speed." msgstr "" #: ../../source/overview_ring.rst:448 msgid "" "The placement algorithm has seen a number of behavioral changes for " "unbalanceable rings. The ring builder wants to keep replicas as far apart as " "possible while still respecting device weights. In most cases, the ring " "builder can achieve both, but sometimes they conflict. At first, the " "behavior was to keep the replicas far apart and ignore device weight, but " "that made it impossible to gradually go from one region to two, or from two " "to three. Then it was changed to favor device weight over dispersion, but " "that wasn't so good for rings that were close to balanceable, like 3 " "machines with 60TB, 60TB, and 57TB of disk space; operators were expecting " "one replica per machine, but didn't always get it. After that, overload was " "added to the ring builder so that operators could choose a balance between " "dispersion and device weights. In time the overload concept was improved and " "made more accurate." msgstr "" #: ../../source/overview_ring.rst:461 msgid "" "For more background on consistent hashing rings, please see :doc:" "`ring_background`." msgstr "" #: ../../source/overview_wsgi_management.rst:2 msgid "WSGI Server Process Management" msgstr "" #: ../../source/overview_wsgi_management.rst:5 msgid "Graceful Shutdowns with ``SIGHUP``" msgstr "" #: ../../source/overview_wsgi_management.rst:7 msgid "" "Swift has always supported graceful WSGI server shutdown via ``SIGHUP``. " "This causes the manager process to fall out of its ensure-all-workers-are-" "running loop, close all workers' listen sockets, and exit. Closing the " "listen sockets causes all new ``accept`` calls to fail, but does not impact " "any established connections." msgstr "" #: ../../source/overview_wsgi_management.rst:13 msgid "" "The workers are re-parented, likely to PID 1, and are discoverable with " "``swift-orphans``. When the ``accept`` call fails, it waits for the " "connection-handling ``GreenPool`` to complete, then exits. Each worker " "continues processing the current request, then closes the connection. Note " "that clients will get connection errors if they try to re-use a connection " "for further requests." msgstr "" #: ../../source/overview_wsgi_management.rst:20 msgid "" "Prior to the introduction of seamless reloads (see below), a common reload " "strategy was to perform a graceful shutdown followed by a fresh service " "start." msgstr "" #: ../../source/overview_wsgi_management.rst:25 msgid "Seamless Reloads with ``SIGUSR1``" msgstr "" #: ../../source/overview_wsgi_management.rst:27 msgid "" "Beginning with Swift 2.24.0, WSGI servers support seamless reloads via " "``SIGUSR1``. This allows servers to restart to pick up configuration or code " "changes while being minimally-disruptive to clients. The process is as " "follows:" msgstr "" #: ../../source/overview_wsgi_management.rst:34 msgid "" "Manager process receives ``USR1`` signal. This causes the process to fall " "out of its loop ensuring that all workers are running and instead begin " "reloading. The workers continue servicing client requests as long as their " "listen sockets remain open." msgstr "" #: ../../source/overview_wsgi_management.rst:41 msgid "" "Manager process forks. The new child knows about all the existing workers " "and their listen sockets; it will be responsible for closing the old worker " "listen sockets so they stop accepting new connections." msgstr "" #: ../../source/overview_wsgi_management.rst:47 msgid "" "Manager process re-exec's itself. It picks up new configuration and code " "while maintaining the same PID as the old manager process. At this point " "only the socket-closer is tracking the old workers, but everything " "(including old workers) remains a child of the new manager process. As a " "result, old workers are *not* discoverable with ``swift-orphans``; ``swift-" "oldies`` may be useful, but will also find the manager process." msgstr "" #: ../../source/overview_wsgi_management.rst:57 msgid "" "New manager process forks off new workers, each with its own listen socket. " "Once all workers have started and can accept new connections, the manager " "notifies the socket-closer via a pipe. The socket-closer closes the old " "worker listen sockets so they stop accepting new connections, passes the " "list of old workers to the new manager, then exits." msgstr "" #: ../../source/overview_wsgi_management.rst:66 msgid "" "Old workers continue servicing any in-progress connections, while new " "connections are picked up by new workers. Once an old worker completes all " "of its oustanding requests, it exits. Beginning with Swift 2.35.0, if any " "workers persist beyond ``stale_worker_timeout``, the new manager will clean " "them up with ``KILL`` signals." msgstr "" #: ../../source/overview_wsgi_management.rst:74 msgid "All old workers have now exited. Only new code and configs are in use." msgstr "" #: ../../source/overview_wsgi_management.rst:77 msgid "``swift-reload``" msgstr "" #: ../../source/overview_wsgi_management.rst:79 msgid "" "Beginning with Swift 2.33.0, a new ``swift-reload`` helper is included to " "help validate the reload process. Given a PID, it will" msgstr "" #: ../../source/overview_wsgi_management.rst:82 msgid "" "Validate that the PID seems to belong to a Swift WSGI server manager process," msgstr "" #: ../../source/overview_wsgi_management.rst:84 msgid "Check that the config file used by that PID is currently valid," msgstr "" #: ../../source/overview_wsgi_management.rst:85 msgid "Send the ``USR1`` signal to initiate a reload, and" msgstr "" #: ../../source/overview_wsgi_management.rst:86 msgid "" "Wait for the new workers to come up (indicating the reload is complete) " "before exiting." msgstr "" #: ../../source/policies_saio.rst:3 msgid "Adding Storage Policies to an Existing SAIO" msgstr "" #: ../../source/policies_saio.rst:5 msgid "" "Depending on when you downloaded your SAIO environment, it may already be " "prepared with two storage policies that enable some basic functional tests. " "In the event that you are adding a storage policy to an existing " "installation, however, the following section will walk you through the steps " "for setting up Storage Policies. Note that configuring more than one " "storage policy on your development environment is recommended but optional. " "Enabling multiple Storage Policies is very easy regardless of whether you " "are working with an existing installation or starting a brand new one." msgstr "" #: ../../source/policies_saio.rst:15 msgid "" "Now we will create two policies - the first one will be a standard triple " "replication policy that we will also explicitly set as the default and the " "second will be setup for reduced replication using a factor of 2x. We will " "call the first one 'gold' and the second one 'silver'. In this example both " "policies map to the same devices because it's also important for this sample " "implementation to be simple and easy to understand and adding a bunch of new " "devices isn't really required to implement a usable set of policies." msgstr "" #: ../../source/policies_saio.rst:24 msgid "" "To define your policies, add the following to your ``/etc/swift/swift.conf`` " "file:" msgstr "" #: ../../source/policies_saio.rst:37 msgid "" "See :doc:`overview_policies` for detailed information on ``swift.conf`` " "policy options." msgstr "" #: ../../source/policies_saio.rst:40 msgid "" "To create the object ring for the silver policy (index 1), add the following " "to your ``bin/remakerings`` script and re-run it (your script may already " "have these changes):" msgstr "" #: ../../source/policies_saio.rst:53 msgid "" "Note that the reduced replication of the silver policy is only a function of " "the replication parameter in the ``swift-ring-builder create`` command and " "is not specified in ``/etc/swift/swift.conf``." msgstr "" #: ../../source/policies_saio.rst:57 msgid "" "Copy ``etc/container-reconciler.conf-sample`` to ``/etc/swift/container-" "reconciler.conf`` and fix the user option:" msgstr "" #: ../../source/policies_saio.rst:69 msgid "" "Setting up Storage Policies was very simple, and using them is even " "simpler. In this section, we will run some commands to create a few " "containers with different policies and store objects in them and see how " "Storage Policies effect placement of data in Swift." msgstr "" #: ../../source/policies_saio.rst:74 msgid "" "We will be using the list_endpoints middleware to confirm object locations, " "so enable that now in your ``proxy-server.conf`` file by adding it to the " "pipeline and including the filter section as shown below (be sure to restart " "your proxy after making these changes):" msgstr "" #: ../../source/policies_saio.rst:88 msgid "Check to see that your policies are reported via /info:" msgstr "" #: ../../source/policies_saio.rst:94 msgid "You should see this: (only showing the policy output here):" msgstr "" #: ../../source/policies_saio.rst:101 msgid "" "Now create a container without specifying a policy, it will use the default, " "'gold' and then put a test object in it (create the file ``file0.txt`` with " "your favorite editor with some content):" msgstr "" #: ../../source/policies_saio.rst:112 msgid "" "Now confirm placement of the object with the :ref:`list_endpoints` " "middleware:" msgstr "" #: ../../source/policies_saio.rst:118 ../../source/policies_saio.rst:142 msgid "You should see this: (note placement on expected devices):" msgstr "" #: ../../source/policies_saio.rst:126 msgid "" "Create a container using policy 'silver' and put a different file in it:" msgstr "" #: ../../source/policies_saio.rst:136 msgid "Confirm placement of the object for policy 'silver':" msgstr "" #: ../../source/policies_saio.rst:149 msgid "" "Confirm account information with HEAD, make sure that your container-updater " "service is running and has executed once since you performed the PUTs or the " "account database won't be updated yet:" msgstr "" #: ../../source/policies_saio.rst:158 msgid "" "You should see something like this (note that total and per policy stats " "object sizes will vary):" msgstr "" #: ../../source/proxy.rst:5 msgid "Proxy" msgstr "" #: ../../source/proxy.rst:10 msgid "Proxy Controllers" msgstr "" #: ../../source/proxy.rst:13 msgid "Base" msgstr "" #: ../../source/ratelimit.rst:5 msgid "Rate Limiting" msgstr "" #: ../../source/ratelimit.rst:7 msgid "" "Rate limiting in Swift is implemented as a pluggable middleware. Rate " "limiting is performed on requests that result in database writes to the " "account and container sqlite dbs. It uses memcached and is dependent on the " "proxy servers having highly synchronized time. The rate limits are limited " "by the accuracy of the proxy server clocks." msgstr "" #: ../../source/ratelimit.rst:17 msgid "" "All configuration is optional. If no account or container limits are " "provided there will be no rate limiting. Configuration available:" msgstr "" #: ../../source/ratelimit.rst:23 ../../source/ratelimit.rst:77 msgid "1000" msgstr "" #: ../../source/ratelimit.rst:23 msgid "" "Represents how accurate the proxy servers' system clocks are with each " "other. 1000 means that all the proxies' clock are accurate to each other " "within 1 millisecond. No ratelimit should be higher than the clock accuracy." msgstr "" #: ../../source/ratelimit.rst:23 msgid "clock_accuracy" msgstr "" #: ../../source/ratelimit.rst:30 msgid "" "App will immediately return a 498 response if the necessary sleep time ever " "exceeds the given max_sleep_time_seconds." msgstr "" #: ../../source/ratelimit.rst:30 msgid "max_sleep_time_seconds" msgstr "" #: ../../source/ratelimit.rst:34 ../../source/ratelimit.rst:43 msgid "0" msgstr "" #: ../../source/ratelimit.rst:34 msgid "" "To allow visibility into rate limiting set this value > 0 and all sleeps " "greater than the number will be logged." msgstr "" #: ../../source/ratelimit.rst:34 msgid "log_sleep_time_seconds" msgstr "" #: ../../source/ratelimit.rst:38 msgid "5" msgstr "" #: ../../source/ratelimit.rst:38 msgid "" "Number of seconds the rate counter can drop and be allowed to catch up (at a " "faster than listed rate). A larger number will result in larger spikes in " "rate but better average accuracy." msgstr "" #: ../../source/ratelimit.rst:38 msgid "rate_buffer_seconds" msgstr "" #: ../../source/ratelimit.rst:43 msgid "" "If set, will limit PUT and DELETE requests to /account_name/container_name. " "Number is in requests per second." msgstr "" #: ../../source/ratelimit.rst:43 msgid "account_ratelimit" msgstr "" #: ../../source/ratelimit.rst:47 ../../source/ratelimit.rst:52 msgid "''" msgstr "" #: ../../source/ratelimit.rst:47 msgid "" "When set with container_ratelimit_x = r: for containers of size x, limit " "requests per second to r. Will limit PUT, DELETE, and POST requests to /a/c/" "o." msgstr "" #: ../../source/ratelimit.rst:47 msgid "container_ratelimit_size" msgstr "" #: ../../source/ratelimit.rst:52 msgid "" "When set with container_listing_ratelimit_x = r: for containers of size x, " "limit listing requests per second to r. Will limit GET requests to /a/c." msgstr "" #: ../../source/ratelimit.rst:52 msgid "container_listing_ratelimit_size" msgstr "" #: ../../source/ratelimit.rst:59 msgid "" "The container rate limits are linearly interpolated from the values given. " "A sample container rate limiting could be:" msgstr "" #: ../../source/ratelimit.rst:62 msgid "container_ratelimit_100 = 100" msgstr "" #: ../../source/ratelimit.rst:64 msgid "container_ratelimit_200 = 50" msgstr "" #: ../../source/ratelimit.rst:66 msgid "container_ratelimit_500 = 20" msgstr "" #: ../../source/ratelimit.rst:68 msgid "This would result in" msgstr "" #: ../../source/ratelimit.rst:71 msgid "Container Size" msgstr "" #: ../../source/ratelimit.rst:71 msgid "Rate Limit" msgstr "" #: ../../source/ratelimit.rst:73 msgid "0-99" msgstr "" #: ../../source/ratelimit.rst:73 msgid "No limiting" msgstr "" #: ../../source/ratelimit.rst:74 msgid "100" msgstr "" #: ../../source/ratelimit.rst:75 msgid "150" msgstr "" #: ../../source/ratelimit.rst:75 msgid "75" msgstr "" #: ../../source/ratelimit.rst:76 ../../source/ratelimit.rst:77 msgid "20" msgstr "" #: ../../source/ratelimit.rst:76 msgid "500" msgstr "" #: ../../source/ratelimit.rst:83 msgid "Account Specific Ratelimiting" msgstr "" #: ../../source/ratelimit.rst:86 msgid "" "The above ratelimiting is to prevent the \"many writes to a single " "container\" bottleneck from causing a problem. There could also be a problem " "where a single account is just using too much of the cluster's resources. " "In this case, the container ratelimits may not help because the customer " "could be doing thousands of reqs/sec to distributed containers each getting " "a small fraction of the total so those limits would never trigger. If a " "system administrator notices this, he/she can set the X-Account-Sysmeta-" "Global-Write-Ratelimit on an account and that will limit the total number of " "write requests (PUT, POST, DELETE, COPY) that account can do for the whole " "account. This limit will be in addition to the applicable account/container " "limits from above. This header will be hidden from the user, because of the " "gatekeeper middleware, and can only be set using a direct client to the " "account nodes. It accepts a float value and will only limit requests if the " "value is > 0." msgstr "" #: ../../source/ratelimit.rst:102 msgid "Black/White-listing" msgstr "" #: ../../source/ratelimit.rst:104 msgid "To blacklist or whitelist an account set:" msgstr "" #: ../../source/ratelimit.rst:106 msgid "X-Account-Sysmeta-Global-Write-Ratelimit: BLACKLIST" msgstr "" #: ../../source/ratelimit.rst:108 msgid "or" msgstr "" #: ../../source/ratelimit.rst:110 msgid "X-Account-Sysmeta-Global-Write-Ratelimit: WHITELIST" msgstr "" #: ../../source/ratelimit.rst:112 msgid "in the account headers." msgstr "" #: ../../source/replication_network.rst:9 msgid "Summary" msgstr "" #: ../../source/replication_network.rst:11 msgid "" "Swift's replication process is essential for consistency and availability of " "data. By default, replication activity will use the same network interface " "as other cluster operations. However, if a replication interface is set in " "the ring for a node, that node will send replication traffic on its " "designated separate replication network interface. Replication traffic " "includes REPLICATE requests and rsync traffic." msgstr "" #: ../../source/replication_network.rst:18 msgid "" "To separate the cluster-internal replication traffic from client traffic, " "separate replication servers can be used. These replication servers are " "based on the standard storage servers, but they listen on the replication IP " "and only respond to REPLICATE requests. Storage servers can serve REPLICATE " "requests, so an operator can transition to using a separate replication " "network with no cluster downtime." msgstr "" #: ../../source/replication_network.rst:25 msgid "" "Replication IP and port information is stored in the ring on a per-node " "basis. These parameters will be used if they are present, but they are not " "required. If this information does not exist or is empty for a particular " "node, the node's standard IP and port will be used for replication." msgstr "" #: ../../source/replication_network.rst:32 msgid "For SAIO replication" msgstr "" #: ../../source/replication_network.rst:34 msgid "Create new script in ``~/bin/`` (for example: ``remakerings_new``)::" msgstr "" #: ../../source/replication_network.rst:76 msgid "" "Syntax of adding device has been changed: ``R:" "`` was added between ``z-:`` and ``/" "_ ``. Added devices will use and " " for replication activities." msgstr "" #: ../../source/replication_network.rst:80 msgid "Add next rows in ``/etc/rsyncd.conf``::" msgstr "" #: ../../source/replication_network.rst:156 msgid "Restart rsync daemon::" msgstr "" #: ../../source/replication_network.rst:160 msgid "Update configuration files in directories:" msgstr "" #: ../../source/replication_network.rst:162 msgid "/etc/swift/object-server(files: 1.conf, 2.conf, 3.conf, 4.conf)" msgstr "" #: ../../source/replication_network.rst:163 msgid "/etc/swift/container-server(files: 1.conf, 2.conf, 3.conf, 4.conf)" msgstr "" #: ../../source/replication_network.rst:164 msgid "/etc/swift/account-server(files: 1.conf, 2.conf, 3.conf, 4.conf)" msgstr "" #: ../../source/replication_network.rst:166 msgid "delete all configuration options in section ``[<*>-replicator]``" msgstr "" #: ../../source/replication_network.rst:168 msgid "" "Add configuration files for object-server, in ``/etc/swift/object-server/``" msgstr "" #: ../../source/replication_network.rst:170 #: ../../source/replication_network.rst:268 #: ../../source/replication_network.rst:366 msgid "5.conf::" msgstr "" #: ../../source/replication_network.rst:194 #: ../../source/replication_network.rst:292 #: ../../source/replication_network.rst:390 msgid "6.conf::" msgstr "" #: ../../source/replication_network.rst:218 #: ../../source/replication_network.rst:316 #: ../../source/replication_network.rst:414 msgid "7.conf::" msgstr "" #: ../../source/replication_network.rst:242 #: ../../source/replication_network.rst:340 #: ../../source/replication_network.rst:438 msgid "8.conf::" msgstr "" #: ../../source/replication_network.rst:266 msgid "" "Add configuration files for container-server, in ``/etc/swift/container-" "server/``" msgstr "" #: ../../source/replication_network.rst:364 msgid "" "Add configuration files for account-server, in ``/etc/swift/account-server/``" msgstr "" #: ../../source/replication_network.rst:465 msgid "For a Multiple Server replication" msgstr "" #: ../../source/replication_network.rst:467 msgid "Move configuration file." msgstr "" #: ../../source/replication_network.rst:469 msgid "" "Configuration file for object-server from /etc/swift/object-server.conf to /" "etc/swift/object-server/1.conf" msgstr "" #: ../../source/replication_network.rst:471 msgid "" "Configuration file for container-server from /etc/swift/container-server." "conf to /etc/swift/container-server/1.conf" msgstr "" #: ../../source/replication_network.rst:473 msgid "" "Configuration file for account-server from /etc/swift/account-server.conf " "to /etc/swift/account-server/1.conf" msgstr "" #: ../../source/replication_network.rst:475 msgid "Add changes in configuration files in directories:" msgstr "" #: ../../source/replication_network.rst:477 msgid "/etc/swift/object-server(files: 1.conf)" msgstr "" #: ../../source/replication_network.rst:478 msgid "/etc/swift/container-server(files: 1.conf)" msgstr "" #: ../../source/replication_network.rst:479 msgid "/etc/swift/account-server(files: 1.conf)" msgstr "" #: ../../source/replication_network.rst:481 msgid "delete all configuration options in section [<*>-replicator]" msgstr "" #: ../../source/replication_network.rst:483 msgid "" "Add configuration files for object-server, in /etc/swift/object-server/2." "conf::" msgstr "" #: ../../source/replication_network.rst:498 msgid "" "Add configuration files for container-server, in /etc/swift/container-" "server/2.conf::" msgstr "" #: ../../source/replication_network.rst:513 msgid "" "Add configuration files for account-server, in /etc/swift/account-server/2." "conf::" msgstr "" #: ../../source/ring.rst:5 msgid "Partitioned Consistent Hash Ring" msgstr "" #: ../../source/ring.rst:10 msgid "Ring" msgstr "" #: ../../source/ring.rst:30 msgid "Composite Ring Builder" msgstr "" #: ../../source/ring_background.rst:3 msgid "Building a Consistent Hashing Ring" msgstr "" #: ../../source/ring_background.rst:7 msgid "Authored by Greg Holt, February 2011" msgstr "" #: ../../source/ring_background.rst:9 msgid "" "This is a compilation of five posts I made earlier discussing how to build a " "consistent hashing ring. The posts seemed to be accessed quite frequently, " "so I've gathered them all here on one page for easier reading." msgstr "" #: ../../source/ring_background.rst:14 msgid "" "This is an historical document; as such, all code examples are Python 2. If " "this makes you squirm, think of it as pseudo-code. Regardless of " "implementation language, the state of the art in consistent-hashing and " "distributed systems more generally has advanced. We hope that this " "introduction from first principles will still prove informative, " "particularly with regard to how data is distributed within a Swift cluster." msgstr "" #: ../../source/ring_background.rst:23 msgid "Part 1" msgstr "" #: ../../source/ring_background.rst:24 msgid "" "\"Consistent Hashing\" is a term used to describe a process where data is " "distributed using a hashing algorithm to determine its location. Using only " "the hash of the id of the data you can determine exactly where that data " "should be. This mapping of hashes to locations is usually termed a \"ring\"." msgstr "" #: ../../source/ring_background.rst:30 msgid "" "Probably the simplest hash is just a modulus of the id. For instance, if all " "ids are numbers and you have two machines you wish to distribute data to, " "you could just put all odd numbered ids on one machine and even numbered ids " "on the other. Assuming you have a balanced number of odd and even numbered " "ids, and a balanced data size per id, your data would be balanced between " "the two machines." msgstr "" #: ../../source/ring_background.rst:37 msgid "" "Since data ids are often textual names and not numbers, like paths for files " "or URLs, it makes sense to use a \"real\" hashing algorithm to convert the " "names to numbers first. Using MD5 for instance, the hash of the name 'mom." "png' is '4559a12e3e8da7c2186250c2f292e3af' and the hash of 'dad.png' is " "'096edcc4107e9e18d6a03a43b3853bea'. Now, using the modulus, we can place " "'mom.jpg' on the odd machine and 'dad.png' on the even one. Another benefit " "of using a hashing algorithm like MD5 is that the resulting hashes have a " "known even distribution, meaning your ids will be evenly distributed without " "worrying about keeping the id values themselves evenly distributed." msgstr "" #: ../../source/ring_background.rst:47 msgid "Here is a simple example of this in action:" msgstr "" #: ../../source/ring_background.rst:81 msgid "" "So that's not bad at all; less than a percent over/under for distribution " "per node. In the next part of this series we'll examine where modulus " "distribution causes problems and how to improve our ring to overcome them." msgstr "" #: ../../source/ring_background.rst:86 msgid "Part 2" msgstr "" #: ../../source/ring_background.rst:87 msgid "" "In Part 1 of this series, we did a simple test of using the modulus of a " "hash to locate data. We saw very good distribution, but that's only part of " "the story. Distributed systems not only need to distribute load, but they " "often also need to grow as more and more data is placed in it." msgstr "" #: ../../source/ring_background.rst:92 msgid "" "So let's imagine we have a 100 node system up and running using our previous " "algorithm, but it's starting to get full so we want to add another node. " "When we add that 101st node to our algorithm we notice that many ids now map " "to different nodes than they previously did. We're going to have to shuffle " "a ton of data around our system to get it all into place again." msgstr "" #: ../../source/ring_background.rst:99 msgid "" "Let's examine what's happened on a much smaller scale: just 2 nodes again, " "node 0 gets even ids and node 1 gets odd ids. So data id 100 would map to " "node 0, data id 101 to node 1, data id 102 to node 0, etc. This is simply " "node = id % 2. Now we add a third node (node 2) for more space, so we want " "node = id % 3. So now data id 100 maps to node id 1, data id 101 to node 2, " "and data id 102 to node 0. So we have to move data for 2 of our 3 ids so " "they can be found again." msgstr "" #: ../../source/ring_background.rst:107 ../../source/ring_background.rst:145 msgid "Let's examine this at a larger scale:" msgstr "" #: ../../source/ring_background.rst:133 msgid "" "Wow, that's severe. We'd have to shuffle around 99% of our data just to " "increase our capacity 1%! We need a new algorithm that combats this behavior." msgstr "" #: ../../source/ring_background.rst:137 msgid "" "This is where the \"ring\" really comes in. We can assign ranges of hashes " "directly to nodes and then use an algorithm that minimizes the changes to " "those ranges. Back to our small scale, let's say our ids range from 0 to " "999. We have two nodes and we'll assign data ids 0–499 to node 0 and 500–999 " "to node 1. Later, when we add node 2, we can take half the data ids from " "node 0 and half from node 1, minimizing the amount of data that needs to " "move." msgstr "" #: ../../source/ring_background.rst:182 msgid "" "Okay, that is better. But still, moving 50% of our data to add 1% capacity " "is not very good. If we examine what happened more closely we'll see what is " "an \"accordion effect\". We shrunk node 0's range a bit to give to the new " "node, but that shifted all the other node's ranges by the same amount." msgstr "" #: ../../source/ring_background.rst:187 msgid "" "We can minimize the change to a node's assigned range by assigning several " "smaller ranges instead of the single broad range we were before. This can be " "done by creating \"virtual nodes\" for each node. So 100 nodes might have " "1000 virtual nodes. Let's examine how that might work." msgstr "" #: ../../source/ring_background.rst:238 msgid "" "There we go, we added 1% capacity and only moved 0.9% of existing data. The " "vnode_range_starts list seems a bit out of place though. Its values are " "calculated and never change for the lifetime of the cluster, so let's " "optimize that out." msgstr "" #: ../../source/ring_background.rst:284 msgid "" "There we go. In the next part of this series, will further examine the " "algorithm's limitations and how to improve on it." msgstr "" #: ../../source/ring_background.rst:288 msgid "Part 3" msgstr "" #: ../../source/ring_background.rst:289 msgid "" "In Part 2 of this series, we reached an algorithm that performed well even " "when adding new nodes to the cluster. We used 1000 virtual nodes that could " "be independently assigned to nodes, allowing us to minimize the amount of " "data moved when a node was added." msgstr "" #: ../../source/ring_background.rst:294 msgid "" "The number of virtual nodes puts a cap on how many real nodes you can have. " "For example, if you have 1000 virtual nodes and you try to add a 1001st real " "node, you can't assign a virtual node to it without leaving another real " "node with no assignment, leaving you with just 1000 active real nodes still." msgstr "" #: ../../source/ring_background.rst:300 msgid "" "Unfortunately, the number of virtual nodes created at the beginning can " "never change for the life of the cluster without a lot of careful work. For " "example, you could double the virtual node count by splitting each existing " "virtual node in half and assigning both halves to the same real node. " "However, if the real node uses the virtual node's id to optimally store the " "data (for example, all data might be stored in /[virtual node id]/[data id]) " "it would have to move data around locally to reflect the change. And it " "would have to resolve data using both the new and old locations while the " "moves were taking place, making atomic operations difficult or impossible." msgstr "" #: ../../source/ring_background.rst:311 msgid "" "Let's continue with this assumption that changing the virtual node count is " "more work than it's worth, but keep in mind that some applications might be " "fine with this." msgstr "" #: ../../source/ring_background.rst:315 msgid "" "The easiest way to deal with this limitation is to make the limit high " "enough that it won't matter. For instance, if we decide our cluster will " "never exceed 60,000 real nodes, we can just make 60,000 virtual nodes." msgstr "" #: ../../source/ring_background.rst:319 msgid "" "Also, we should include in our calculations the relative size of our nodes. " "For instance, a year from now we might have real nodes that can handle twice " "the capacity of our current nodes. So we'd want to assign twice the virtual " "nodes to those future nodes, so maybe we should raise our virtual node " "estimate to 120,000." msgstr "" #: ../../source/ring_background.rst:325 msgid "" "A good rule to follow might be to calculate 100 virtual nodes to each real " "node at maximum capacity. This would allow you to alter the load on any " "given node by 1%, even at max capacity, which is pretty fine tuning. So now " "we're at 6,000,000 virtual nodes for a max capacity cluster of 60,000 real " "nodes." msgstr "" #: ../../source/ring_background.rst:331 msgid "" "6 million virtual nodes seems like a lot, and it might seem like we'd use up " "way too much memory. But the only structure this affects is the virtual node " "to real node mapping. The base amount of memory required would be 6 million " "times 2 bytes (to store a real node id from 0 to 65,535). 12 megabytes of " "memory just isn't that much to use these days." msgstr "" #: ../../source/ring_background.rst:337 msgid "" "Even with all the overhead of flexible data types, things aren't that bad. I " "changed the code from the previous part in this series to have 60,000 real " "and 6,000,000 virtual nodes, changed the list to an array('H'), and python " "topped out at 27m of resident memory – and that includes two rings." msgstr "" #: ../../source/ring_background.rst:343 msgid "" "To change terminology a bit, we're going to start calling these virtual " "nodes \"partitions\". This will make it a bit easier to discern between the " "two types of nodes we've been talking about so far. Also, it makes sense to " "talk about partitions as they are really just unchanging sections of the " "hash space." msgstr "" #: ../../source/ring_background.rst:349 msgid "" "We're also going to always keep the partition count a power of two. This " "makes it easy to just use bit manipulation on the hash to determine the " "partition rather than modulus. It isn't much faster, but it is a little. So, " "here's our updated ring code, using 8,388,608 (2 ** 23) partitions and " "65,536 nodes. We've upped the sample data id set and checked the " "distribution to make sure we haven't broken anything." msgstr "" #: ../../source/ring_background.rst:394 msgid "" "Hmm. +–10% seems a bit high, but I reran with 65,536 partitions and 256 " "nodes and got +–0.4% so it's just that our sample size (100m) is too small " "for our number of partitions (8m). It'll take way too long to run " "experiments with an even larger sample size, so let's reduce back down to " "these lesser numbers. (To be certain, I reran at the full version with a 10 " "billion data id sample set and got +–1%, but it took 6.5 hours to run.)" msgstr "" #: ../../source/ring_background.rst:402 msgid "" "In the next part of this series, we'll talk about how to increase the " "durability of our data in the cluster." msgstr "" #: ../../source/ring_background.rst:406 msgid "Part 4" msgstr "" #: ../../source/ring_background.rst:407 msgid "" "In Part 3 of this series, we just further discussed partitions (virtual " "nodes) and cleaned up our code a bit based on that. Now, let's talk about " "how to increase the durability and availability of our data in the cluster." msgstr "" #: ../../source/ring_background.rst:412 msgid "" "For many distributed data stores, durability is quite important. Either RAID " "arrays or individually distinct copies of data are required. While RAID will " "increase the durability, it does nothing to increase the availability – if " "the RAID machine crashes, the data may be safe but inaccessible until " "repairs are done. If we keep distinct copies of the data on different " "machines and a machine crashes, the other copies will still be available " "while we repair the broken machine." msgstr "" #: ../../source/ring_background.rst:420 msgid "" "An easy way to gain this multiple copy durability/availability is to just " "use multiple rings and groups of nodes. For instance, to achieve the " "industry standard of three copies, you'd split the nodes into three groups " "and each group would have its own ring and each would receive a copy of each " "data item. This can work well enough, but has the drawback that expanding " "capacity requires adding three nodes at a time and that losing one node " "essentially lowers capacity by three times that node's capacity." msgstr "" #: ../../source/ring_background.rst:429 msgid "" "Instead, let's use a different, but common, approach of meeting our " "requirements with a single ring. This can be done by walking the ring from " "the starting point and looking for additional distinct nodes. Here's code " "that supports a variable number of replicas (set to 3 for testing):" msgstr "" #: ../../source/ring_background.rst:482 msgid "" "That's pretty good; less than 1% over/under. While this works well, there " "are a couple of problems." msgstr "" #: ../../source/ring_background.rst:485 msgid "" "First, because of how we've initially assigned the partitions to nodes, all " "the partitions for a given node have their extra copies on the same other " "two nodes. The problem here is that when a machine fails, the load on these " "other nodes will jump by that amount. It'd be better if we initially " "shuffled the partition assignment to distribute the failover load better." msgstr "" #: ../../source/ring_background.rst:492 msgid "" "The other problem is a bit harder to explain, but deals with physical " "separation of machines. Imagine you can only put 16 machines in a rack in " "your datacenter. The 256 nodes we've been using would fill 16 racks. With " "our current code, if a rack goes out (power problem, network issue, etc.) " "there is a good chance some data will have all three copies in that rack, " "becoming inaccessible. We can fix this shortcoming by adding the concept of " "zones to our nodes, and then ensuring that replicas are stored in distinct " "zones." msgstr "" #: ../../source/ring_background.rst:576 msgid "" "So the shuffle and zone distinctions affected our distribution some, but " "still definitely good enough. This test took about 64 seconds to run on my " "machine." msgstr "" #: ../../source/ring_background.rst:580 msgid "" "There's a completely alternate, and quite common, way of accomplishing these " "same requirements. This alternate method doesn't use partitions at all, but " "instead just assigns anchors to the nodes within the hash space. Finding the " "first node for a given hash just involves walking this anchor ring for the " "next node, and finding additional nodes works similarly as before. To attain " "the equivalent of our virtual nodes, each real node is assigned multiple " "anchors." msgstr "" #: ../../source/ring_background.rst:668 msgid "" "This test took over 15 minutes to run! Unfortunately, this method also gives " "much less control over the distribution. To get better distribution, you " "have to add more virtual nodes, which eats up more memory and takes even " "more time to build the ring and perform distinct node lookups. The most " "common operation, data id lookup, can be improved (by predetermining each " "virtual node's failover nodes, for instance) but it starts off so far behind " "our first approach that we'll just stick with that." msgstr "" #: ../../source/ring_background.rst:676 msgid "" "In the next part of this series, we'll start to wrap all this up into a " "useful Python module." msgstr "" #: ../../source/ring_background.rst:680 msgid "Part 5" msgstr "" #: ../../source/ring_background.rst:681 msgid "" "In Part 4 of this series, we ended up with a multiple copy, distinctly zoned " "ring. Or at least the start of it. In this final part we'll package the code " "up into a useable Python module and then add one last feature. First, let's " "separate the ring itself from the building of the data for the ring and its " "testing." msgstr "" #: ../../source/ring_background.rst:802 msgid "" "It takes a bit longer to test our ring, but that's mostly because of the " "switch to dictionaries from arrays for various items. Having node " "dictionaries is nice because you can attach any node information you want " "directly there (ip addresses, tcp ports, drive paths, etc.). But we're still " "on track for further testing; our distribution is still good." msgstr "" #: ../../source/ring_background.rst:808 msgid "" "Now, let's add our one last feature to our ring: the concept of weights. " "Weights are useful because the nodes you add later in a ring's life are " "likely to have more capacity than those you have at the outset. For this " "test, we'll make half our nodes have twice the weight. We'll have to change " "build_ring to give more partitions to the nodes with more weight and we'll " "change test_ring to take into account these weights. Since we've changed so " "much I'll just post the entire module again:" msgstr "" #: ../../source/ring_background.rst:955 msgid "" "So things are still good, even though we have differently weighted nodes. I " "ran another test with this code using random weights from 1 to 100 and got " "over/under values for nodes of 7.35%/18.12% and zones of 0.24%/0.22%, still " "pretty good considering the crazy weight ranges." msgstr "" #: ../../source/ring_background.rst:962 msgid "" "Hopefully this series has been a good introduction to building a ring. This " "code is essentially how the OpenStack Swift ring works, except that Swift's " "ring has lots of additional optimizations, such as storing each replica " "assignment separately, and lots of extra features for building, validating, " "and otherwise working with rings." msgstr "" #: ../../source/ring_partpower.rst:3 msgid "Modifying Ring Partition Power" msgstr "" #: ../../source/ring_partpower.rst:5 msgid "" "The ring partition power determines the on-disk location of data files and " "is selected when creating a new ring. In normal operation, it is a fixed " "value. This is because a different partition power results in a different on-" "disk location for all data files." msgstr "" #: ../../source/ring_partpower.rst:10 msgid "" "However, increasing the partition power by 1 can be done by choosing " "locations that are on the same disk. As a result, we can create hard-links " "for both the new and old locations, avoiding data movement without impacting " "availability." msgstr "" #: ../../source/ring_partpower.rst:14 msgid "" "To enable a partition power change without interrupting user access, object " "servers need to be aware of it in advance. Therefore a partition power " "change needs to be done in multiple steps." msgstr "" #: ../../source/ring_partpower.rst:20 msgid "" "Do not increase the partition power on account and container rings. " "Increasing the partition power is *only* supported for object rings. Trying " "to increase the part_power for account and container rings *will* result in " "unavailability, maybe even data loss." msgstr "" #: ../../source/ring_partpower.rst:28 msgid "Caveats" msgstr "" #: ../../source/ring_partpower.rst:30 msgid "" "Before increasing the partition power, consider the possible drawbacks. " "There are a few caveats when increasing the partition power:" msgstr "" #: ../../source/ring_partpower.rst:33 msgid "" "Almost all diskfiles in the cluster need to be relinked then cleaned up, and " "all partition directories need to be rehashed. This imposes significant I/O " "load on object servers, which may impact client requests. Consider using " "cgroups, ``ionice``, or even just the built-in ``--files-per-second`` rate-" "limiting to reduce client impact." msgstr "" #: ../../source/ring_partpower.rst:38 msgid "" "Object replicators and reconstructors will skip affected policies during the " "partition power increase. Replicators are not aware of hard-links, and would " "simply copy the content; this would result in heavy data movement and the " "worst case would be that all data is stored twice." msgstr "" #: ../../source/ring_partpower.rst:42 msgid "" "Due to the fact that each object will now be hard linked from two locations, " "many more inodes will be used temporarily - expect around twice the amount. " "You need to check the free inode count *before* increasing the partition " "power. Even after the increase is complete and extra hardlinks are cleaned " "up, expect increased inode usage since there will be twice as many partition " "and suffix directories." msgstr "" #: ../../source/ring_partpower.rst:48 msgid "" "Also, object auditors might read each object twice before cleanup removes " "the second hard link." msgstr "" #: ../../source/ring_partpower.rst:50 msgid "" "Due to the new inodes more memory is needed to cache them, and your object " "servers should have plenty of available memory to avoid running out of inode " "cache. Setting ``vfs_cache_pressure`` to 1 might help with that." msgstr "" #: ../../source/ring_partpower.rst:53 msgid "" "All nodes in the cluster *must* run at least Swift version 2.13.0 or later." msgstr "" #: ../../source/ring_partpower.rst:55 msgid "" "Due to these caveats you should only increase the partition power if really " "needed, i.e. if the number of partitions per disk is extremely low and the " "data is distributed unevenly across disks." msgstr "" #: ../../source/ring_partpower.rst:61 msgid "1. Prepare partition power increase" msgstr "" #: ../../source/ring_partpower.rst:63 msgid "" "The swift-ring-builder is used to prepare the ring for an upcoming partition " "power increase. It will store a new variable ``next_part_power`` with the " "current partition power + 1. Object servers recognize this, and hard links " "to the new location will be created (or deleted) on every PUT or DELETE. " "This will make it possible to access newly written objects using the future " "partition power::" msgstr "" #: ../../source/ring_partpower.rst:72 msgid "" "Now you need to copy the updated .ring.gz to all nodes. Already existing " "data needs to be relinked too; therefore an operator has to run a relinker " "command on all object servers in this phase::" msgstr "" #: ../../source/ring_partpower.rst:80 msgid "" "Start relinking after *all* the servers re-read the modified ring files, " "which normally happens within 15 seconds after writing a modified ring. " "Also, make sure the modified rings are pushed to all nodes running object " "services (replicators, reconstructors and reconcilers)- they have to skip " "the policy during relinking." msgstr "" #: ../../source/ring_partpower.rst:88 msgid "" "The relinking command must run as the same user as the daemon processes " "(usually swift). It will create files and directories that must be " "manipulable by the daemon processes (server, auditor, replicator, ...). If " "necessary, the ``--user`` option may be used to drop privileges." msgstr "" #: ../../source/ring_partpower.rst:93 msgid "" "Relinking might take some time; while there is no data copied or actually " "moved, the tool still needs to walk the whole file system and create new " "hard links as required." msgstr "" #: ../../source/ring_partpower.rst:99 msgid "2. Increase partition power" msgstr "" #: ../../source/ring_partpower.rst:101 msgid "" "Now that all existing data can be found using the new location, it's time to " "actually increase the partition power itself::" msgstr "" #: ../../source/ring_partpower.rst:107 msgid "" "Now you need to copy the updated .ring.gz again to all nodes. Object servers " "are now using the new, increased partition power and no longer create " "additional hard links." msgstr "" #: ../../source/ring_partpower.rst:114 msgid "" "The object servers will create additional hard links for each modified or " "new object, and this requires more inodes." msgstr "" #: ../../source/ring_partpower.rst:119 msgid "" "If you decide you don't want to increase the partition power, you should " "instead cancel the increase. It is not possible to revert this operation " "once started. To abort the partition power increase, execute the following " "commands, copy the updated .ring.gz files to all nodes and continue with `3. " "Cleanup`_ afterwards::" msgstr "" #: ../../source/ring_partpower.rst:131 msgid "3. Cleanup" msgstr "" #: ../../source/ring_partpower.rst:133 msgid "" "Existing hard links in the old locations need to be removed, and a cleanup " "tool is provided to do this. Run the following command on each storage node::" msgstr "" #: ../../source/ring_partpower.rst:140 msgid "" "The cleanup must be finished within your object servers ``reclaim_age`` " "period (which is by default 1 week). Otherwise objects that have been " "overwritten between step #1 and step #2 and deleted afterwards can't be " "cleaned up anymore. You may want to increase your ``reclaim_age`` before or " "during relinking." msgstr "" #: ../../source/ring_partpower.rst:146 msgid "" "Afterwards it is required to update the rings one last time to inform " "servers that all steps to increase the partition power are done, and " "replicators should resume their job::" msgstr "" #: ../../source/ring_partpower.rst:153 msgid "Now you need to copy the updated .ring.gz again to all nodes." msgstr "" #: ../../source/ring_partpower.rst:159 msgid "" "An existing object that is currently located on partition X will be placed " "either on partition 2*X or 2*X+1 after the partition power is increased. The " "reason for this is the Ring.get_part() method, that does a bitwise shift to " "the right." msgstr "" #: ../../source/ring_partpower.rst:164 msgid "" "To avoid actual data movement to different disks or even nodes, the " "allocation of partitions to nodes needs to be changed. The allocation is " "pairwise due to the above mentioned new partition scheme. Therefore devices " "are allocated like this, with the partition being the index and the value " "being the device id::" msgstr "" #: ../../source/ring_partpower.rst:185 msgid "" "There is a helper method to compute the new path, and the following example " "shows the mapping between old and new location::" msgstr "" #: ../../source/ring_partpower.rst:195 msgid "" "Using the original partition power (14) it returned the same path; however " "after an increase to 15 it returns the new path, and the new partition is " "2*X+1 in this case." msgstr "" #: ../../source/s3_compat.rst:2 msgid "S3/Swift REST API Comparison Matrix" msgstr "" #: ../../source/s3_compat.rst:5 msgid "General compatibility statement" msgstr "" #: ../../source/s3_compat.rst:7 msgid "" "S3 is a product from Amazon, and as such, it includes \"features\" that are " "outside the scope of Swift itself. For example, Swift doesn't have anything " "to do with billing, whereas S3 buckets can be tied to Amazon's billing " "system. Similarly, log delivery is a service outside of Swift. It's entirely " "possible for a Swift deployment to provide that functionality, but it is not " "part of Swift itself. Likewise, a Swift deployment can provide similar " "geographic availability as S3, but this is tied to the deployer's " "willingness to build the infrastructure and support systems to do so." msgstr "" #: ../../source/s3_compat.rst:18 msgid "Amazon S3 operations" msgstr "" #: ../../source/s3_compat.rst:21 msgid "Category" msgstr "" #: ../../source/s3_compat.rst:21 msgid "S3 REST API method" msgstr "" #: ../../source/s3_compat.rst:21 msgid "Swift S3 API" msgstr "" #: ../../source/s3_compat.rst:23 ../../source/s3_compat.rst:25 #: ../../source/s3_compat.rst:27 ../../source/s3_compat.rst:29 #: ../../source/s3_compat.rst:31 ../../source/s3_compat.rst:33 #: ../../source/s3_compat.rst:35 ../../source/s3_compat.rst:37 #: ../../source/s3_compat.rst:39 ../../source/s3_compat.rst:41 #: ../../source/s3_compat.rst:43 ../../source/s3_compat.rst:45 #: ../../source/s3_compat.rst:47 ../../source/s3_compat.rst:49 #: ../../source/s3_compat.rst:51 ../../source/s3_compat.rst:53 #: ../../source/s3_compat.rst:55 ../../source/s3_compat.rst:57 #: ../../source/s3_compat.rst:59 ../../source/s3_compat.rst:61 msgid "Core-API" msgstr "" #: ../../source/s3_compat.rst:23 ../../source/s3_compat.rst:25 #: ../../source/s3_compat.rst:27 ../../source/s3_compat.rst:29 #: ../../source/s3_compat.rst:31 ../../source/s3_compat.rst:33 #: ../../source/s3_compat.rst:35 ../../source/s3_compat.rst:37 #: ../../source/s3_compat.rst:39 ../../source/s3_compat.rst:41 #: ../../source/s3_compat.rst:43 ../../source/s3_compat.rst:45 #: ../../source/s3_compat.rst:47 ../../source/s3_compat.rst:49 #: ../../source/s3_compat.rst:51 ../../source/s3_compat.rst:53 #: ../../source/s3_compat.rst:55 ../../source/s3_compat.rst:57 #: ../../source/s3_compat.rst:59 ../../source/s3_compat.rst:61 #: ../../source/s3_compat.rst:63 ../../source/s3_compat.rst:75 #: ../../source/s3_compat.rst:77 msgid "Yes" msgstr "" #: ../../source/s3_compat.rst:23 msgid "`GET Object`_" msgstr "" #: ../../source/s3_compat.rst:25 msgid "`HEAD Object`_" msgstr "" #: ../../source/s3_compat.rst:27 msgid "`PUT Object`_" msgstr "" #: ../../source/s3_compat.rst:29 msgid "`PUT Object Copy`_" msgstr "" #: ../../source/s3_compat.rst:31 msgid "`DELETE Object`_" msgstr "" #: ../../source/s3_compat.rst:33 msgid "`Initiate Multipart Upload`_" msgstr "" #: ../../source/s3_compat.rst:35 msgid "`Upload Part`_" msgstr "" #: ../../source/s3_compat.rst:37 msgid "`Upload Part Copy`_" msgstr "" #: ../../source/s3_compat.rst:39 msgid "`Complete Multipart Upload`_" msgstr "" #: ../../source/s3_compat.rst:41 msgid "`Abort Multipart Upload`_" msgstr "" #: ../../source/s3_compat.rst:43 msgid "`List Parts`_" msgstr "" #: ../../source/s3_compat.rst:45 msgid "`GET Object ACL`_" msgstr "" #: ../../source/s3_compat.rst:47 msgid "`PUT Object ACL`_" msgstr "" #: ../../source/s3_compat.rst:49 msgid "`PUT Bucket`_" msgstr "" #: ../../source/s3_compat.rst:51 msgid "`GET Bucket List Objects`_" msgstr "" #: ../../source/s3_compat.rst:53 msgid "`HEAD Bucket`_" msgstr "" #: ../../source/s3_compat.rst:55 msgid "`DELETE Bucket`_" msgstr "" #: ../../source/s3_compat.rst:57 msgid "`List Multipart Uploads`_" msgstr "" #: ../../source/s3_compat.rst:59 msgid "`GET Bucket acl`_" msgstr "" #: ../../source/s3_compat.rst:61 msgid "`PUT Bucket acl`_" msgstr "" #: ../../source/s3_compat.rst:63 msgid "Versioning" msgstr "" #: ../../source/s3_compat.rst:63 msgid "`Versioning`_" msgstr "" #: ../../source/s3_compat.rst:65 ../../source/s3_compat.rst:67 #: ../../source/s3_compat.rst:69 ../../source/s3_compat.rst:71 #: ../../source/s3_compat.rst:73 ../../source/s3_compat.rst:79 #: ../../source/s3_compat.rst:81 ../../source/s3_compat.rst:83 #: ../../source/s3_compat.rst:85 ../../source/s3_compat.rst:87 msgid "No" msgstr "" #: ../../source/s3_compat.rst:65 msgid "Notifications" msgstr "" #: ../../source/s3_compat.rst:65 msgid "`Bucket notification`_" msgstr "" #: ../../source/s3_compat.rst:67 msgid "Bucket Lifecycle" msgstr "" #: ../../source/s3_compat.rst:67 msgid "Bucket Lifecycle [1]_ [2]_ [3]_ [4]_ [5]_ [6]_" msgstr "" #: ../../source/s3_compat.rst:69 msgid "Advanced ACLs" msgstr "" #: ../../source/s3_compat.rst:69 msgid "`Bucket policy`_" msgstr "" #: ../../source/s3_compat.rst:71 msgid "Public Website" msgstr "" #: ../../source/s3_compat.rst:71 msgid "Public website [7]_ [8]_ [9]_ [10]_" msgstr "" #: ../../source/s3_compat.rst:73 msgid "Billing" msgstr "" #: ../../source/s3_compat.rst:73 msgid "Billing [11]_ [12]_" msgstr "" #: ../../source/s3_compat.rst:75 ../../source/s3_compat.rst:77 #: ../../source/s3_compat.rst:79 ../../source/s3_compat.rst:81 #: ../../source/s3_compat.rst:83 ../../source/s3_compat.rst:85 msgid "Advanced Feature" msgstr "" #: ../../source/s3_compat.rst:75 msgid "`GET Bucket location`_" msgstr "" #: ../../source/s3_compat.rst:77 msgid "`Delete Multiple Objects`_" msgstr "" #: ../../source/s3_compat.rst:79 msgid "`Object tagging`_" msgstr "" #: ../../source/s3_compat.rst:81 msgid "`GET Object torrent`_" msgstr "" #: ../../source/s3_compat.rst:83 msgid "`Bucket inventory`_" msgstr "" #: ../../source/s3_compat.rst:85 msgid "`GET Bucket service`_" msgstr "" #: ../../source/s3_compat.rst:87 msgid "CDN Integration" msgstr "" #: ../../source/s3_compat.rst:87 msgid "`Bucket accelerate`_" msgstr "" #: ../../source/s3_compat.rst:128 msgid "" "`POST restore `_" msgstr "" #: ../../source/s3_compat.rst:129 msgid "" "`Bucket lifecycle `_" msgstr "" #: ../../source/s3_compat.rst:130 msgid "" "`Bucket logging `_" msgstr "" #: ../../source/s3_compat.rst:131 msgid "" "`Bucket analytics `_" msgstr "" #: ../../source/s3_compat.rst:132 msgid "" "`Bucket metrics `_" msgstr "" #: ../../source/s3_compat.rst:133 msgid "" "`Bucket replication `_" msgstr "" #: ../../source/s3_compat.rst:137 msgid "" "`OPTIONS object `_" msgstr "" #: ../../source/s3_compat.rst:138 msgid "" "`Object POST from HTML form `_" msgstr "" #: ../../source/s3_compat.rst:139 msgid "" "`Bucket public website `_" msgstr "" #: ../../source/s3_compat.rst:140 msgid "" "`Bucket CORS `_" msgstr "" #: ../../source/s3_compat.rst:144 msgid "" "`Request payment `_" msgstr "" #: ../../source/s3_compat.rst:145 msgid "" "`Bucket tagging `_" msgstr ""