AWS CLI
The AWS CLI (AWS Command Line Interface) is a command-line tool for working with AWS services.
Set up AWS CLI
- Configure access to S3.
- Install the client.
- Create an AWS CLI configuration.
- Install the certificate.
1. Set up access to S3
Access can be configured by the Account Owner or by a user with the iam.admin role.
- Create a service user with a role that grants access to S3. If you created a service user with the object_storage_user or s3.bucket.user role, the bucket must have a configured access policy whose rules allow access to this user.
- Issue an S3 key to the user.
2. Install the client
Follow the Install or update to the latest version of the AWS CLI guide in Amazon's documentation.
3. Create an AWS CLI configuration
- Open the terminal.
- Start configuration mode:
  aws configure
- Enter the AWS Access Key ID: the value of the Access key field from the S3 key.
- Enter the AWS Secret Access Key: the value of the Secret key field from the S3 key.
- Enter the Default region name: the pool in which S3 is located (for example, ru-1).
- Enter the Default output format, or leave it blank.
The settings are saved in the configuration files:
- credentials in .aws/credentials:

  [default]
  aws_access_key_id = <access_key>
  aws_secret_access_key = <secret_key>

- the default pool in .aws/config:

  [default]
  region = <pool>
- In the .aws/config file, add the endpoint_url parameter:

  [default]
  region = <pool>
  endpoint_url = https://<s3_domain>

  Specify <s3_domain>: the S3 API domain for the desired pool.
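The configuration steps above can be sketched as a script that writes the same [default] profile. This is a sanity-check sketch only: the pool and domain values are illustrative placeholders, not real endpoints, and the file is written to a temporary path rather than ~/.aws/config.

```shell
# Write the [default] profile described above.
# POOL and S3_DOMAIN are illustrative values, not real endpoints.
POOL="ru-1"
S3_DOMAIN="s3.example.com"
CONFIG_FILE="$(mktemp)"   # in practice this is ~/.aws/config

cat > "$CONFIG_FILE" <<EOF
[default]
region = ${POOL}
endpoint_url = https://${S3_DOMAIN}
EOF

cat "$CONFIG_FILE"
```

In real use, run the aws configure dialog (or edit ~/.aws/config directly) with the actual access key, secret key, pool, and S3 API domain for your account.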
4. Install the certificate
Linux/macOS
Windows
- Create the ~/.servercores3/ folder:
  mkdir -p ~/.servercores3/
- Download the certificate into the ~/.servercores3/ folder, convert it from DER to PEM format, and restrict its permissions:
  wget https://secure.globalsign.net/cacert/root-r6.crt -O ~/.servercores3/root.crt
  openssl x509 -inform der -in ~/.servercores3/root.crt -out ~/.servercores3/root.crt
  chmod 600 ~/.servercores3/root.crt
- In the .aws/config configuration file, add the parameter:
  ca_bundle = ~/.servercores3/root.crt
- Create a text file, for example root.txt.
- Add the contents of the certificate in base64 format to the root.txt file.
- In the .aws/config configuration file, add the parameter:
  ca_bundle = <path>
  Specify <path>: the path to the root.txt file.
Working with AWS CLI
For command syntax, see the AWS CLI Command Reference in Amazon's documentation.
To work with S3 through the AWS CLI, use:
- s3api - commands corresponding to operations in the REST API;
- s3 - additional commands that simplify work with a large number of objects.
Output the list of buckets
- Open the CLI.
- List the buckets:
  aws s3 ls
Create a bucket
- Open the CLI.
- Create a bucket:
  aws s3 mb s3://<bucket_name>
  Specify <bucket_name>: the name of the new bucket.
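Bucket names must satisfy S3 naming rules: 3 to 63 characters; lowercase letters, digits, dots, and hyphens; starting and ending with a letter or digit. A rough client-side check, simplified from the full rule set (it does not reject, for example, IP-address-like names), might look like:

```shell
# Simplified S3 bucket-name check; covers length and character
# rules only, not every restriction (e.g. IP-address-like names).
is_valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

is_valid_bucket_name "my-new-bucket" && echo "valid"
is_valid_bucket_name "My_Bucket" || echo "invalid"
```

Validating the name locally avoids a round trip that would fail with an InvalidBucketName error from the API.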
View list of objects
- Open the CLI.
- List the objects:
  aws s3 ls s3://<bucket_name> --recursive
  Specify <bucket_name>: the name of the bucket whose objects you want to list.
Upload an object
Simple upload
Conditional upload
Upload with Object Lock
- Open the CLI.
- Upload the object to the bucket:
  aws s3 cp <path_to_file> s3://<bucket_name>/
  Specify:
  - <path_to_file>: the path to the file on the local device;
  - <bucket_name>: the name of the bucket where the object will be stored.
You can use conditional queries when loading objects through the AWS CLI.
Condition: the object exists
Condition: the object does not exist
The object will be uploaded only if the bucket contains an object whose ETag matches the value in the If-Match header. If no object with that ETag is found, a 412 Precondition Failed error is returned.
- Open the CLI.
- Look up the ETag of the object that must exist in the bucket for the new object to be uploaded:
  aws s3api head-object \
  --bucket <bucket_name_1> \
  --key <path_to_object_1>
  Specify:
  - <bucket_name_1>: the name of the bucket where the object is stored;
  - <path_to_object_1>: the path to the object in the bucket.
- Upload an object with the condition:
  aws s3api put-object \
  --bucket <bucket_name_2> \
  --key <path_to_object_2> \
  --body <path_to_file> \
  --if-match "<etag>"
  Specify:
  - <bucket_name_2>: the name of the bucket to which the object will be uploaded;
  - <path_to_object_2>: the path in the bucket where the object will be stored;
  - <path_to_file>: the path to the file on the local device;
  - <etag>: the ETag you looked up in step 2.
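For objects uploaded in a single part without SSE-KMS encryption, the ETag is normally the hex MD5 digest of the object body, so you can compute the expected value locally before issuing a conditional request. This is an informal check under those assumptions; it does not hold for multipart uploads.

```shell
# For a single-part, unencrypted upload the S3 ETag usually equals
# the MD5 of the body, so a local digest predicts head-object's ETag.
printf 'hello' > /tmp/object.bin
ETAG="$(md5sum /tmp/object.bin | awk '{print $1}')"
echo "$ETAG"   # prints 5d41402abc4b2a76b9719d911017c592
```

If the locally computed digest differs from the ETag that head-object reports, the object was likely uploaded in multiple parts and the shortcut does not apply.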
The object will be uploaded only if no object exists with a key matching the value in the If-None-Match header; this prevents overwriting an existing object. If an object with that key already exists in the bucket, a 412 Precondition Failed error is returned.
- Open the CLI.
- Upload an object with the condition:
  aws s3api put-object \
  --bucket <bucket_name> \
  --key <path_to_object> \
  --body <path_to_file> \
  --if-none-match "*"
  Specify:
  - <bucket_name>: the name of the bucket where the object will be stored;
  - <path_to_object>: the path in the bucket where the object will be stored;
  - <path_to_file>: the path to the file on the local device.
  Here --if-none-match "*" means the object is uploaded only if no object already exists at the specified path in the bucket.
If Object Lock is enabled on the bucket, you can upload an object with a temporary lock applied immediately.
- Open the CLI.
- Upload a temporarily locked object:
  aws s3api put-object \
  --bucket <bucket_name> \
  --key <path_to_object> \
  --body <path_to_file> \
  --object-lock-mode <lock_mode> \
  --object-lock-retain-until-date <date>
  Specify:
  - <bucket_name>: the bucket name;
  - <path_to_object>: the path in the bucket where the object will be stored;
  - <path_to_file>: the path to the file on the local device;
  - <lock_mode>: the lock mode, either GOVERNANCE or COMPLIANCE;
  - <date>: the date until which the object will be locked, in ISO 8601 format, e.g. 2025-09-06T00:00:00Z.
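The <date> value can be generated with the date utility rather than written by hand. The sketch below uses GNU date syntax, as found on most Linux systems; on macOS the equivalent is `date -u -v+30d +%Y-%m-%dT%H:%M:%SZ`.

```shell
# Lock for 30 days from now, in the ISO 8601 form the
# --object-lock-retain-until-date flag expects (GNU date syntax).
RETAIN_UNTIL="$(date -u -d '+30 days' +%Y-%m-%dT%H:%M:%SZ)"
echo "$RETAIN_UNTIL"
```

The generated value can then be passed directly as the --object-lock-retain-until-date argument.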
Get a link to an object
You can get a link to an object in a public or private bucket via a Presigned URL. For more information about Presigned URLs, see Sharing objects with presigned URLs in the AWS documentation.
- Open the CLI.
- Get the link:
  aws s3 presign s3://<bucket_name>/<path_to_object> --expires-in <time>
  Specify:
  - <bucket_name>: the name of the bucket where the object is stored;
  - <path_to_object>: the path to the object in the bucket;
  - optional: --expires-in <time>: the link lifetime, where <time> is the time in seconds after which the link stops working. If you omit --expires-in <time>, the link is valid for one hour.
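The <time> value is plain seconds, so longer lifetimes are easiest to write as arithmetic. Note that in AWS, Signature Version 4 presigned URLs are capped at 7 days (604800 seconds); whether your provider enforces the same cap is worth verifying.

```shell
# --expires-in takes seconds; compute a 7-day lifetime, which is
# also the maximum AWS allows for SigV4 presigned URLs.
EXPIRES=$(( 7 * 24 * 60 * 60 ))
echo "$EXPIRES"   # 604800
```

The computed value would then be passed as `--expires-in "$EXPIRES"`.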
Copy object
Simple copying
Conditional copying
- Open the CLI.
- Copy the object:
  aws s3 cp s3://<bucket_name_1>/<path_to_object_1> s3://<bucket_name_2>/<path_to_object_2>
  Specify:
  - <bucket_name_1>: the name of the bucket where the object to copy is stored;
  - <path_to_object_1>: the path to the object to copy in the bucket;
  - <bucket_name_2>: the name of the bucket to which the object will be copied;
  - <path_to_object_2>: the path in the bucket where the copy will be stored.
You can use conditional queries when copying objects through the AWS CLI.
Condition: the object is unchanged
Condition: the object has changed
The object will be copied only if its ETag matches the value you specify in the --copy-source-if-match parameter, that is, if the source object has not been modified. If the ETag does not match, a 412 Precondition Failed error is returned.
- Open the CLI.
- Copy the object with the condition:
  aws s3api copy-object \
  --copy-source <bucket_name_1>/<path_to_object_1> \
  --bucket <bucket_name_2> \
  --key <path_to_object_2> \
  --copy-source-if-match "<etag>"
  Specify:
  - <bucket_name_1>: the name of the bucket where the object to copy is stored;
  - <path_to_object_1>: the path to the object to copy in the bucket;
  - <bucket_name_2>: the name of the bucket to which the object will be copied;
  - <path_to_object_2>: the path in the bucket where the copy will be stored;
  - <etag>: the ETag that must match the source object's ETag.
The object will be copied only if its ETag does not match the value you specify in the --copy-source-if-none-match parameter, that is, if the source object has been modified. If the ETag matches, a 412 Precondition Failed error is returned.
- Open the CLI.
- Copy the object with the condition:
  aws s3api copy-object \
  --copy-source <bucket_name_1>/<path_to_object_1> \
  --bucket <bucket_name_2> \
  --key <path_to_object_2> \
  --copy-source-if-none-match "<etag>"
  Specify:
  - <bucket_name_1>: the name of the bucket where the object to copy is stored;
  - <path_to_object_1>: the path to the object to copy in the bucket;
  - <bucket_name_2>: the name of the bucket to which the object will be copied;
  - <path_to_object_2>: the path in the bucket where the copy will be stored;
  - <etag>: the ETag that must not match the source object's ETag.
Delete object
Simple deletion
Conditional deletion
- Open the CLI.
- Delete the object:
  aws s3 rm s3://<bucket_name>/<object_name>
  Specify:
  - <bucket_name>: the bucket name;
  - <object_name>: the object name.
You can use conditional queries when deleting objects through the AWS CLI - this can help reduce the risk of accidentally deleting a file.
The object will be deleted only if its ETag matches the value in the If-Match header. If no object with that ETag is found, a 412 Precondition Failed error is returned.
- Open the CLI.
- Look up the ETag of the object that must exist in the bucket for the delete to proceed:
  aws s3api head-object \
  --bucket <bucket_name_1> \
  --key <path_to_object_1>
  Specify:
  - <bucket_name_1>: the name of the bucket where the object is stored;
  - <path_to_object_1>: the path to the object in the bucket.
- Delete the object with the condition:
  aws s3api delete-object \
  --bucket <bucket_name_2> \
  --key <path_to_object_2> \
  --if-match "<etag>"
  Specify:
  - <bucket_name_2>: the name of the bucket where the object is stored;
  - <path_to_object_2>: the path to the object in the bucket;
  - <etag>: the ETag you looked up in step 2.
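head-object prints JSON, and the ETag value it returns includes the surrounding double quotes as part of the string. The sketch below shows extracting that value with jq from a canned sample (the digest is illustrative, and jq is assumed to be installed) before passing it to --if-match:

```shell
# Canned sample of head-object output; the ETag string itself
# contains the double-quote characters.
SAMPLE='{"ETag": "\"5d41402abc4b2a76b9719d911017c592\"", "ContentLength": 5}'
ETAG="$(printf '%s' "$SAMPLE" | jq -r .ETag)"
echo "$ETAG"   # "5d41402abc4b2a76b9719d911017c592" (quotes included)
```

In a real pipeline, the `printf` of the canned sample would be replaced by the actual `aws s3api head-object ...` call, and `$ETAG` passed as the --if-match argument.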