Using S3 with Laravel

Tutorials

February 7th, 2022


AWS S3 provides a place for us to store files off of our servers. There are some big benefits to this:

  1. Backup/redundancy - S3 and similar have built-in backups and redundancy
  2. Scaling - Saving files off-server becomes necessary in modern hosting, such as serverless or containerized environments, as well as in traditional load-balanced environments
  3. Disk usage - You won't need as much disk space when storing files in the cloud
  4. Features - S3 (and other clouds) have some great features, such as versioning support for files, lifecycle rules for deleting old files (or storing them in a cheaper way), deletion protection, and more

Using S3 now (even in single-server setups) can reduce headaches in the long run. Here's what you should know!

Configuration

There are two places to configure things for S3:

  1. Within Laravel - usually via .env but potentially also within config/filesystems.php
  2. Within your AWS account

Laravel Config

If you check your config/filesystems.php file, you'll see that s3 is an option already. It's set up to use environment variables from your .env file!

Unless you need to customize this, you can likely leave it alone and just set values in the .env file:

# Optionally set the default filesystem driver to S3
FILESYSTEM_DISK=s3
# Or if using Laravel < 9
FILESYSTEM_DRIVER=s3

# Add items needed for the S3-based filesystem to work
AWS_ACCESS_KEY_ID=xxxzzz
AWS_SECRET_ACCESS_KEY=xxxyyy
AWS_DEFAULT_REGION=us-east-2
AWS_BUCKET=my-awesome-bucket
AWS_USE_PATH_STYLE_ENDPOINT=false

The config/filesystems.php file contains options like the following:

return [
    'disks' => [
        // 'local' and 'public' omitted...

        's3' => [
            'driver' => 's3',
            'key' => env('AWS_ACCESS_KEY_ID'),
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
            'region' => env('AWS_DEFAULT_REGION'),
            'bucket' => env('AWS_BUCKET'),
            'url' => env('AWS_URL'),
            'endpoint' => env('AWS_ENDPOINT'),
            'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
        ],
    ],
];

There are a few options there we didn't use in the .env file. For example, AWS_URL and AWS_ENDPOINT can be set, which is useful for other file storage clouds that have an S3-compatible API, such as Cloudflare R2 or DigitalOcean Spaces.
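As a sketch, pointing the same s3 disk at an S3-compatible provider mostly means setting AWS_ENDPOINT. The values below are made-up placeholders for a hypothetical DigitalOcean Spaces setup, not values from this article:

```
# Hypothetical DigitalOcean Spaces credentials and endpoint
AWS_ACCESS_KEY_ID=spaces-key
AWS_SECRET_ACCESS_KEY=spaces-secret
AWS_DEFAULT_REGION=nyc3
AWS_BUCKET=my-space-name
AWS_ENDPOINT=https://nyc3.digitaloceanspaces.com
AWS_USE_PATH_STYLE_ENDPOINT=false
```

The Laravel-side code stays the same; only the endpoint (and credentials) change.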

AWS Configuration

Within AWS, you need to do two things:

  1. Create a bucket within the S3 service
  2. Create an IAM User to get a Key/Secret Key, and then attach a Policy to that user that allows access to the S3 API.

Like anything in AWS, creating a bucket in S3 involves looking at a ton of configuration options and wondering if you need any of them. For most use cases, you don't!

Head to the S3 console, create a bucket name (it has to be globally unique, not just unique to your AWS account), choose the region you operate in, and leave all the defaults (including the ones labeled "Block Public Access settings for this bucket").

Yes, some of these options are ones you may want to use, but you can choose them later.

After creating a bucket, we need permission to do things to it. Let's pretend we created a bucket named "my-awesome-bucket".

We can create an IAM User, select "programmatic access", but don't attach any policies or set up anything else. Make sure to record the Secret Access Key, as AWS will only show it once.

I've created a video showing the process of creating a bucket and setting up IAM permissions here: https://www.youtube.com/watch?v=FLIp6BLtwjk

The Access Key and Secret Access Key should be put into your .env file.

Next, click into the IAM User and add an Inline Policy. Edit it using the JSON editor, and add the following (straight from the Flysystem docs):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1420044805001",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:ReplicateObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-awesome-bucket",
                "arn:aws:s3:::my-awesome-bucket/*"
            ]
        }
    ]
}

This allows us to perform the needed S3 API actions on our new bucket.
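Before wiring things into Laravel, it can help to sanity-check the new credentials and policy with the AWS CLI. This sketch assumes the CLI is installed and configured with the new IAM user's key pair, and uses the example bucket name from above:

```shell
# Round-trip a small test file: upload, list, then delete it
echo "hello" > /tmp/s3-test.txt
aws s3 cp /tmp/s3-test.txt s3://my-awesome-bucket/s3-test.txt
aws s3 ls s3://my-awesome-bucket/
aws s3 rm s3://my-awesome-bucket/s3-test.txt
```

If any of these fail with an Access Denied error, re-check the inline policy and the bucket name in its Resource entries.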

Laravel Usage

Within Laravel, you can use the file storage like so:

use Illuminate\Support\Facades\Storage;

// If you set S3 as your default:
$contents = Storage::get('path/to/file.ext');
Storage::put('path/to/file.ext', 'some-content');

// If you do not have S3 as your default:
$contents = Storage::disk('s3')->get('path/to/file.ext');
Storage::disk('s3')->put('path/to/file.ext', 'some-content');

The path to the file (within S3) gets appended to the bucket name, so a file named path/to/file.ext will exist in s3://my-awesome-bucket/path/to/file.ext.

Directories technically do not exist within S3. Within S3, a file is called an "object" and the file path + name is the "object key". So, within bucket my-awesome-bucket, we just created an object with key path/to/file.ext.
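Even so, the Storage facade lets you browse keys as if directories existed, since Flysystem treats the / separators in object keys as prefixes. A quick sketch (the paths are the example ones from above):

```php
use Illuminate\Support\Facades\Storage;

// Object keys directly under the "path/to" prefix
$files = Storage::disk('s3')->files('path/to');

// "Directories" are just common key prefixes
$dirs = Storage::disk('s3')->directories('path');
```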

Be sure to check out the Storage area of the Laravel docs to find more useful ways to use Storage, including file streaming and temporary URLs.
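For example, temporary (signed) URLs and read streams look roughly like this — the path and expiry here are placeholders:

```php
use Illuminate\Support\Facades\Storage;

// Signed URL that expires — handy for letting browsers
// download private files without proxying through your server
$url = Storage::disk('s3')->temporaryUrl('path/to/file.ext', now()->addMinutes(5));

// Stream a large file instead of loading it all into memory
$stream = Storage::disk('s3')->readStream('path/to/file.ext');
```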

Pricing

S3 is fairly cheap - most of us will spend pennies to a few dollars a month. This is especially true if you delete files from S3 after you're done with them, or set up Lifecycle rules to delete files after a set period of time.
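As a sketch, a Lifecycle rule that expires objects under a given prefix after 30 days might look like this (the rule ID and prefix are made up; this JSON is the shape accepted by `aws s3api put-bucket-lifecycle-configuration`):

```json
{
    "Rules": [
        {
            "ID": "expire-old-files",
            "Status": "Enabled",
            "Filter": { "Prefix": "tmp/" },
            "Expiration": { "Days": 30 }
        }
    ]
}
```

You can also build the same rule in the S3 console under the bucket's Management tab.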

The pricing is (mostly) driven by three dimensions, and prices vary by region and usage. Here's an example based on a real month of usage for Chipper CI (my continuous integration application for Laravel), which stores a lot of data in S3:

  1. Storage: $0.023 per GB, ~992GB ~= $22.82
  2. Number of API Calls: ~7 million requests ~= $12
  3. Bandwidth usage: This is super imprecise. Data transfer for this was about $23, but this excludes EC2 based bandwidth charges.

Useful Bits about S3

  1. If your AWS setup has servers in a private network and uses NAT Gateways, be sure to create an S3 Endpoint (of the Gateway type). This is done within the Endpoints section of the VPC service. It allows calls to/from S3 to bypass the NAT Gateway, avoiding extra bandwidth charges, and it doesn't cost extra to use.
  2. Consider enabling Versioning in your S3 bucket if you're worried about files being overwritten or deleted
  3. Consider enabling Intelligent-Tiering in your S3 bucket to help save on storage costs for older files you likely won't access again
  4. Be aware that deleting large buckets (lots of files) can cost money! This is due to the number of API calls you'd have to make to delete files.
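Enabling Versioning can be done in the console, or as a one-liner with the AWS CLI — this assumes the CLI is configured with credentials allowed to change bucket settings, and uses the example bucket name:

```shell
# Turn on object versioning for the example bucket
aws s3api put-bucket-versioning \
  --bucket my-awesome-bucket \
  --versioning-configuration Status=Enabled
```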

Filed in:

Chris Fidao

Teaching coding and servers at CloudCasts and Servers for Hackers. Co-founder of Chipper CI.