In the previous post we used the AWS SDK for Java to upload files and everything worked perfectly, but then I needed to upload a database backup in .gz format that was 18 GB, and the AWS SDK would not let me do it because a single PUT is limited to 5 GB.
AWS CLI, on the other hand, lets us upload up to 160 GB in a single put, so let's see how to set it up on CentOS 7 and use it:
Note: before you start, go to Identity and Access Management (IAM) and create a new user, or pick an existing one, that belongs to a group with the AmazonS3FullAccess policy. Once you have an AWS Access Key ID and its AWS Secret Access Key, continue with the process.
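If you already have the CLI configured elsewhere with an admin user, the same IAM console steps can be done from the command line. This is just a sketch: `s3-backup-user` is a hypothetical name, and the commands are echoed here rather than executed so you can review them first.

```shell
#!/bin/sh
# Hypothetical user name and the managed policy's real ARN:
USER_NAME="s3-backup-user"
POLICY_ARN="arn:aws:iam::aws:policy/AmazonS3FullAccess"

# Echo the three IAM calls (create user, attach policy, mint access keys)
# instead of running them, so this stays a dry run:
echo "aws iam create-user --user-name $USER_NAME"
echo "aws iam attach-user-policy --user-name $USER_NAME --policy-arn $POLICY_ARN"
echo "aws iam create-access-key --user-name $USER_NAME"
```

The last command returns the Access Key ID and Secret Access Key you will use below.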
There are a lot of ways to install it; let's do it like so:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
cd awscli-bundle/
sudo ./install -i /usr/local/aws -b /usr/local/bin/aws
Check it with:
aws --version
aws-cli/1.16.283 Python/2.7.5 Linux/3.10.0-957.1.3.el7.x86_64 botocore/1.13.19
Now let's set aws-cli up to use your AWS credentials. This is a list of some regions you can pick as the default:
| Region | Name |
|---|---|
| ap-northeast-1 | Asia Pacific (Tokyo) |
| ap-northeast-2 | Asia Pacific (Seoul) |
| ap-south-1 | Asia Pacific (Mumbai) |
| ap-southeast-1 | Asia Pacific (Singapore) |
| ap-southeast-2 | Asia Pacific (Sydney) |
| ca-central-1 | Canada (Central) |
| eu-central-1 | EU Central (Frankfurt) |
| eu-west-1 | EU West (Ireland) |
| eu-west-2 | EU West (London) |
| sa-east-1 | South America (Sao Paulo) |
| us-east-1 | US East (Virginia) |
| us-east-2 | US East (Ohio) |
| us-west-1 | US West (N. California) |
| us-west-2 | US West (Oregon) |
aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
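Under the hood, `aws configure` just writes those answers into two plain-text files under `~/.aws`, which you can also edit by hand (shown here with the same example values):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```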
Now let's check if it was set up properly with:
aws sts get-caller-identity
{
    "Account": "903503371367",
    "UserId": "AKIAIOSFODNN7EXAMPLE",
    "Arn": "arn:aws:iam::903503371367:user/bucketname"
}
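If you only need one field from that reply (say, in a script), the real `--query` and `--output text` flags trim it down, e.g. `aws sts get-caller-identity --query Account --output text`. As a local illustration of what that gives you, here is the Account field pulled out of the sample JSON above with plain `sed`:

```shell
#!/bin/sh
# The sample reply from the post, held in a variable so this runs offline:
REPLY='{ "Account": "903503371367", "UserId": "AKIAIOSFODNN7EXAMPLE", "Arn": "arn:aws:iam::903503371367:user/bucketname" }'

# Extract the 12-digit account id, as --query Account --output text would:
ACCOUNT_ID=$(echo "$REPLY" | sed -n 's/.*"Account": "\([0-9]*\)".*/\1/p')
echo "$ACCOUNT_ID"   # 903503371367
```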
So after you run the command aws sts get-caller-identity and you get back the details that belong to your account, you can use the AWS CLI like so.
To copy a file from your EC2 instance to the S3 bucket:
/usr/local/bin/aws s3 cp yourFilename s3://yourbucketname
To copy a file from your S3 bucket to your instance:
/usr/local/bin/aws s3 cp s3://yourbucketname/fileName.txt /path/tofile
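For a recurring backup you can wrap the upload in a small script. This is a hedged sketch: the bucket name and file are the post's placeholders, and with `DRY_RUN=1` (the default here) it only prints the command instead of calling aws, so you can check it before running for real. Note that `aws s3 cp` switches to multipart upload automatically for big files, which is why the same command handles an 18 GB dump.

```shell
#!/bin/sh
# Placeholder bucket and backup file from the post, not real resources:
BUCKET="yourbucketname"
FILE="backup.gz"
DRY_RUN="${DRY_RUN:-1}"

# Build the same cp command shown above:
UPLOAD="/usr/local/bin/aws s3 cp $FILE s3://$BUCKET/$FILE"

if [ "$DRY_RUN" = "1" ]; then
    # Just show what would run:
    echo "$UPLOAD"
else
    # Actually upload (requires the file and credentials to exist):
    $UPLOAD
fi
```

Run it as `DRY_RUN=0 sh upload.sh` once the printed command looks right.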