Build your own multi-user photo album app with React, GraphQL, and AWS Amplify — Part 2 of 3

Add URL routing, photo uploads, and an album details view

Part 1 | Part 2 | Part 3
This is the second post in a three-part series that shows you how to build a scalable and highly available serverless web app on AWS that lets users upload photos to albums and share those albums privately with others.

In Part One we bootstrapped our app, added authentication, and integrated a GraphQL API for creating photo album records, along with a web frontend that let us list existing albums and create new ones.

Here in Part Two, we’ll make the following improvements:

  • Add URL routing and views to switch between a list of albums and viewing details of an album
  • Add photo uploads to an album with S3, automatically generate thumbnails in the cloud, and store the photo metadata in another DynamoDB table in a way that will let us link photos to albums
  • Fetch all photos for a selected album using a nested Photos field inside our Album type in our GraphQL schema

Prerequisites

We have one more requirement to get out of the way before we continue with the steps below.

Install the SAM CLI — In this post, we’ll use a tool from AWS called the AWS Serverless Application Model (SAM), along with its SAM CLI, to help with packaging and deploying AWS Lambda functions. Please follow the installation instructions for the SAM CLI, which will also require you to install Docker (relevant links are provided in the SAM installation instructions).

Adding routing and an album details view

First off, since we’ll want to have two modes — an albums list view and an album details view — let’s add some simple routing to our React app using react-router-dom and some new components to help with loading the details for an album and rendering an album.

Run npm install --save react-router-dom

Then, add some imports to the top of App.js, create some new components to load and render an album, and modify our App component to render different components depending on which route we're on. Make the following changes to src/App.js.

// src/App.js

// 1. NEW: Import the routing components
import { BrowserRouter as Router, Route, NavLink } from 'react-router-dom';

// 2. NEW: Add a new query we can use to render an album's details
const GetAlbum = `query GetAlbum($id: ID!) {
  getAlbum(id: $id) {
    id
    name
  }
}
`;

// 3. NEW: Create an AlbumDetailsLoader component
// to load the details for an album
class AlbumDetailsLoader extends React.Component {
  render() {
    return (
      <Connect query={graphqlOperation(GetAlbum, { id: this.props.id })}>
        {({ data, loading, errors }) => {
          if (loading) { return <div>Loading...</div>; }
          if (errors.length > 0) { return <div>{JSON.stringify(errors)}</div>; }
          if (!data.getAlbum) return null;
          return <AlbumDetails album={data.getAlbum} />;
        }}
      </Connect>
    );
  }
}

// 4. NEW: Create an AlbumDetails component
class AlbumDetails extends Component {
  render() {
    return (
      <Segment>
        <Header as='h3'>{this.props.album.name}</Header>
        <p>TODO: Allow photo uploads</p>
        <p>TODO: Show photos for this album</p>
      </Segment>
    );
  }
}

// 5. EDIT: Replace the App component's render() method
// with updated code to control which components
// render depending on what route we're on
class App extends Component {
  // ...
  // Leave other parts of the App component alone
  // ...

  // Replace the render() method with this version:
  render() {
    return (
      <Router>
        <Grid padded>
          <Grid.Column>
            <Route path="/" exact component={NewAlbum}/>
            <Route path="/" exact component={AlbumsListLoader}/>
            <Route
              path="/albums/:albumId"
              render={ () => <div><NavLink to='/'>Back to Albums list</NavLink></div> }
            />
            <Route
              path="/albums/:albumId"
              render={ props => <AlbumDetailsLoader id={props.match.params.albumId}/> }
            />
          </Grid.Column>
        </Grid>
      </Router>
    );
  }
}

Our app won’t look any different at this point, but if we modify our AlbumsList component to render the album names as links that follow the routing path we set out above, we should be able to click in and view an album. Of course, once we’re viewing an album’s details, we’ll also want a link to go back to the albums list. Let’s add all of this in. Make the following changes to src/App.js:

// src/App.js

// 1. EDIT: Replace the AlbumsList component's albumItems()
// with updated code to output the names as nav links
class AlbumsList extends React.Component {
  // Replace the existing albumItems() with this new one:
  albumItems() {
    return this.props.albums.sort(makeComparator('name')).map(album =>
      <List.Item key={album.id}>
        <NavLink to={`/albums/${album.id}`}>{album.name}</NavLink>
      </List.Item>
    );
  }

  // ... the rest of the AlbumsList component remains unchanged
}

At this point, if you check out the app you’ll see that we can click on an album’s name and we’ll change views and see the details for that album (plus we’ve got a link at the top to go back to our albums list). Of course, right now all our AlbumDetails component does is render the album’s name, plus some TODOS which we’ll get to later. We won’t have any other album information to fetch until we add in the ability to upload photos to our album, so let’s take care of that next.

Adding photo uploads to an album

We’ll need a place to store all of the photos that get uploaded to our albums, and Amazon Simple Storage Service (S3) is a great option. First up, we’ll use the Amplify CLI to enable storage for our app, which will create a bucket on Amazon S3 and set it up with appropriate permissions so that users who are logged in to our app can read from and write to it. You can read more about the Storage module here.

Run amplify add storage, select 'Content' at the prompt, optionally enter your own names for the resource category and bucket name, and configure it so that only authenticated users can access it, with full read/write permissions. Then run amplify push. Here is some sample output with responses:

$ amplify add storage
? Please select from one of the below mentioned services: Content (Images, audio, video, etc.)
? Please provide a friendly name for your resource that will be used to label this category in the project: photoalbumsstorage
? Please provide bucket name: <accept the default value>
? Who should have access: Auth users only
? What kind of access do you want for Authenticated users: read/write
$ amplify push

Now that we have an S3 bucket where our photos can get stored, we’ll want to create a UI that lets us upload photos to that bucket for storage. Then, we’ll need to track that the photo was intended to be part of the album it was uploaded to, so that we can eventually load all of the photos that belong to a specific album.

Let’s create a new S3ImageUpload component that will contain an HTML file input element which will fire off an event handler when a user selects a photo. Our upload event handler will need to upload the file to S3 with some metadata annotating which album it's destined for. Luckily, the Amplify JS Storage module makes uploading files to S3 very easy. Also, we'll need to introduce one new dependency to our app — a way to generate UUIDs — because we'll need to ensure that we're uploading files to S3 with unique names (if we used the filenames from users' devices, they could conflict).

Run npm install --save uuid and then update our src/App.js file, adding some imports, creating an S3ImageUpload component, and including the S3ImageUpload component in the AlbumDetails component. Make the following changes to src/App.js:

// src/App.js

// 1. NEW: Add imports from uuid and semantic-ui-react
import { v4 as uuid } from 'uuid';
import { Form, Grid, Header, Input, List, Segment } from 'semantic-ui-react';

// 2. EDIT: add an import of Storage from Amplify
import Amplify, { API, graphqlOperation, Storage } from 'aws-amplify';

// 3. NEW: Create an S3ImageUpload component
class S3ImageUpload extends React.Component {
  constructor(props) {
    super(props);
    this.state = { uploading: false };
  }

  onChange = async (e) => {
    const file = e.target.files[0];
    const fileName = uuid();
    this.setState({ uploading: true });
    const result = await Storage.put(
      fileName,
      file,
      {
        customPrefix: { public: 'uploads/' },
        metadata: { albumid: this.props.albumId }
      }
    );
    console.log('Uploaded file: ', result);
    this.setState({ uploading: false });
  }

  render() {
    return (
      <div>
        <Form.Button
          onClick={() => document.getElementById('add-image-file-input').click()}
          disabled={this.state.uploading}
          icon='file image outline'
          content={ this.state.uploading ? 'Uploading...' : 'Add Image' }
        />
        <input
          id='add-image-file-input'
          type="file"
          accept='image/*'
          onChange={this.onChange}
          style={{ display: 'none' }}
        />
      </div>
    );
  }
}

// 4. EDIT: Add the S3ImageUpload component
// to the AlbumDetails component
class AlbumDetails extends Component {
  render() {
    return (
      <Segment>
        <Header as='h3'>{this.props.album.name}</Header>
        <S3ImageUpload albumId={this.props.album.id}/>
        <p>TODO: Show photos for this album</p>
      </Segment>
    );
  }
}

At this point there’s not much to look at, but you should be able to click the button, select a file, and see it change to ‘Uploading…’ before switching back to an upload button again. You can also manually explore the S3 bucket in the AWS web console to see that the files are getting uploaded. The easiest way to find the bucket name is to look at src/aws-exports.js and find the value configured for aws_user_files_s3_bucket. Find your bucket in the S3 web console, then look in the bucket under the uploads/ prefix.

There are a few things worth calling out in our new S3ImageUpload component. It uses AWS Amplify's Storage.put method to upload a file into the S3 bucket we configured for our app. In this API call, we're passing in a few extra options.

We pass in customPrefix: { public: 'uploads/' } because we’ll want to automatically make thumbnails for each image. We'll accomplish this shortly by adding a trigger onto the S3 bucket that will fire off a thumbnail creation function for us each time any file is added to the uploads/ path of the bucket. New thumbnails will also get added to the bucket, so to avoid a recursive trigger loop (where each thumbnail creation causes the function to fire again), we'll scope our trigger to only execute for files that are added with a key prefix of uploads/. Amplify knows to use our prefix because we specified that it was for files that should be publicly accessible, which is the default permission level for Storage.put.
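The loop-avoidance logic boils down to a simple prefix check, sketched here in isolation:

```javascript
// Only objects whose keys start with 'uploads/' should be processed.
// Thumbnails get written under 'public/resized/', so they fail this check
// and never re-trigger the resizing function.
function shouldProcess(key) {
  return key.indexOf('uploads/') === 0;
}

console.log(shouldProcess('uploads/some-photo-uuid'));        // true
console.log(shouldProcess('public/resized/some-photo-uuid')); // false
console.log(shouldProcess('public/some-photo-uuid'));         // false
```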

Is it a problem that the default is for all files to be accessible (at the API level) to any of our users in the app? No. This is acceptable since we're using unguessable UUIDs for the photo keys, and users will only be able to retrieve a list of photos for an album if they know that album's UUID as well. If you go read all of the Amplify Storage module's API (or if you're familiar with the underlying S3 API), you might ask “but wait, users can just list all of the objects in the public path and see all of the photos!” For now, you're right, but we'll deal with that later, after our app is working and we take additional precautions to lock it down further (by restricting album listing to certain usernames and by preventing users from listing items in the bucket).

We pass in metadata: { albumid: this.props.albumId } because we're going to have our S3 thumbnail trigger function take care of adding the information about this photo to our data store after it finishes making the thumbnail, and that function will somehow need to know what album the photo was uploaded for. We could have put the album ID in the photo key as a prefix or suffix, for example, but I think the metadata approach is nicer. After all, this is metadata about the photo, right?
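For contrast, here’s a sketch of the key-encoding alternative we decided against (albumIdFromKey is a hypothetical helper, not part of our app):

```javascript
// If we had encoded the album ID into the key instead, e.g.
// 'uploads/<albumId>/<photoId>', the Lambda would have to parse it back out:
function albumIdFromKey(key) {
  return key.split('/')[1];
}

console.log(albumIdFromKey('uploads/album-123/photo-456')); // 'album-123'
// With the metadata approach, the album ID travels alongside the object
// instead of being baked into its key.
```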

Generating thumbnails

One of the great things about building on AWS is the integration between services. In our case, we’ll want to have some server-side code generate thumbnails for each of the images we upload. On AWS, we can do this in a serverless fashion by creating an AWS Lambda function and setting up our S3 bucket to trigger the function whenever new objects enter the bucket.

AWS Lambda lets you author functions in a number of programming languages, but since we’ve been working exclusively with JavaScript in this app so far, let’s stay in JS land and create a Lambda function that will run in the cloud on Node.js 8.10. There are a lot of options out there to help you author and deploy functions to AWS Lambda. For this tutorial we’ll use the AWS Serverless Application Model (SAM) and the SAM CLI.

SAM: The AWS Serverless Application Model

First, follow the installation instructions for the SAM CLI.

Then, inside our photo-albums project directory, use the SAM CLI to bootstrap a new Node.js 8.10 function.

Run sam init --runtime nodejs8.10 --name photo_processor

This creates an example function in photo_processor/hello_world. Let's rename photo_processor/hello_world to something more appropriate: photo_processor/src. Also, while I'm a fan of unit tests, we're not going to write any in this tutorial, so remove the photo_processor/src/tests directory, since the placeholder tests there will be irrelevant once we write our photo processing code.

Now it's time to get down to writing some code to take care of responding to events from an S3 bucket and resizing our uploads. At the time of this writing, a popular choice for performing photo resizing in Node.js is Sharp, so below is our AWS Lambda function, which we should put in photo_processor/src/app.js.

While we're working with the S3 API here, we'll also include the code we need to handle fetching the metadata off of the uploaded file, since we'll need that info when we get to storing the photo's info into DynamoDB later in this post.

Paste this content into photo_processor/src/app.js:

// photo_processor/src/app.js
const AWS = require('aws-sdk');
const S3 = new AWS.S3({ signatureVersion: 'v4' });
// Note: Sharp requires native extensions. To get Sharp to install from NPM in a
// way that's compatible with the Amazon Linux environment that AWS runs Node.js
// on, we can use this command:
// docker run -v "$PWD":/var/task lambci/lambda:build-nodejs8.10 npm install
const Sharp = require('sharp');

// We'll expect these environment variables to be defined when the Lambda
// function is deployed
const THUMBNAIL_WIDTH = parseInt(process.env.THUMBNAIL_WIDTH, 10);
const THUMBNAIL_HEIGHT = parseInt(process.env.THUMBNAIL_HEIGHT, 10);

function thumbnailKey(filename) {
  return `public/resized/${filename}`;
}

function fullsizeKey(filename) {
  return `public/${filename}`;
}

function makeThumbnail(photo) {
  return Sharp(photo).resize(THUMBNAIL_WIDTH, THUMBNAIL_HEIGHT).toBuffer();
}

async function resize(bucketName, key) {
  const originalPhoto = (await S3.getObject({ Bucket: bucketName, Key: key }).promise()).Body;
  const originalPhotoName = key.replace('uploads/', '');
  const originalPhotoDimensions = await Sharp(originalPhoto).metadata();

  const thumbnail = await makeThumbnail(originalPhoto);

  await Promise.all([
    S3.putObject({
      Body: thumbnail,
      Bucket: bucketName,
      Key: thumbnailKey(originalPhotoName),
    }).promise(),

    S3.copyObject({
      Bucket: bucketName,
      CopySource: bucketName + '/' + key,
      Key: fullsizeKey(originalPhotoName),
    }).promise(),
  ]);

  await S3.deleteObject({
    Bucket: bucketName,
    Key: key
  }).promise();

  return {
    photoId: originalPhotoName,

    thumbnail: {
      key: thumbnailKey(originalPhotoName),
      width: THUMBNAIL_WIDTH,
      height: THUMBNAIL_HEIGHT
    },

    fullsize: {
      key: fullsizeKey(originalPhotoName),
      width: originalPhotoDimensions.width,
      height: originalPhotoDimensions.height
    }
  };
}

async function processRecord(record) {
  const bucketName = record.s3.bucket.name;
  const key = record.s3.object.key;

  if (key.indexOf('uploads') != 0) return;

  return await resize(bucketName, key);
}

exports.lambda_handler = async (event, context, callback) => {
  try {
    // Wait for every record to finish processing before reporting success
    // (a bare forEach would return before the async work completed)
    await Promise.all(event.Records.map(processRecord));
    callback(null, { status: 'Photo Processed' });
  }
  catch (err) {
    console.error(err);
    callback(err);
  }
};
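As a quick sanity check, the two key helpers are meant to produce the public/ and public/resized/ paths we’ll later browse in the S3 console; restated standalone (with a made-up filename) they behave like this:

```javascript
// Standalone restatement of the key helpers so we can check the paths they
// produce: a thumbnail and a full-size copy that share the same filename.
function thumbnailKey(filename) {
  return `public/resized/${filename}`;
}
function fullsizeKey(filename) {
  return `public/${filename}`;
}

console.log(thumbnailKey('abc-123')); // 'public/resized/abc-123'
console.log(fullsizeKey('abc-123'));  // 'public/abc-123'
```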

Next, replace the auto-generated package.json file with the one below, so we can track our dependency on Sharp. Paste the following into photo_processor/src/package.json:

{
  "name": "photo_processor",
  "version": "1.0.0",
  "description": "Our Photo Album uploads processor",
  "main": "src/app.js",
  "dependencies": {
    "sharp": "^0.20.2"
  }
}

Finally, install our function’s dependencies. We’re using Sharp, which requires native extensions as part of its installation, so we need to get NPM to install Sharp in an Amazon Linux environment because that’s the OS that AWS Lambda will run our function on. Luckily, there’s a docker image to make this easy.

From inside the photo_processor/src directory, run docker run -v "$PWD":/var/task lambci/lambda:build-nodejs8.10 npm install

That takes care of everything we'll need to have in order to package up and deploy our function to AWS Lambda in the cloud.

Packaging and deploying the Lambda function

The SAM CLI helps with bootstrapping a Lambda function (which we did above), and it can also take care of packaging and deploying Lambda functions. When we bootstrapped our function, the SAM CLI also generated a SAM template file (in YAML format), which will get pre-processed into an AWS CloudFormation template file. The generated template.yml defines a Lambda function that gets triggered in response to an HTTP request.

Creating a SAM Template File

In our case, we want to define a Lambda function that has permissions to work on our storage bucket and to write logs to Amazon CloudWatch. We don’t need an HTTP endpoint trigger set up, but we do want to include the thumbnail width and height environment variables that the resizing function expects. Also, since our storage bucket was created by a different CloudFormation template (via the Amplify CLI), we’ll configure this template to expect the Amazon Resource Name (ARN) of the storage bucket as a parameter so that we can set up the appropriate permissions.

Below is a SAM template.yml file that takes care of all of this. Replace the contents of photo_processor/template.yml with this:

# photo_processor/template.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  photo_processor
  Sample SAM Template for photo_processor

Parameters:
  S3UserfilesBucketArn:
    Type: String

Globals:
  Function:
    Timeout: 10

Resources:
  PhotoProcessorFunctionIamRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: [lambda.amazonaws.com]
            Action: ["sts:AssumeRole"]
      ManagedPolicyArns: ["arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"]
      Path: "/"
      Policies:
        - PolicyName: "AllPrivsForPhotoAlbumUserfilesBucket"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action: "s3:*"
                Resource: !Join ["/", [!Ref S3UserfilesBucketArn, "*"]]

  PhotoProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.lambda_handler
      Role: !GetAtt PhotoProcessorFunctionIamRole.Arn
      Runtime: nodejs8.10
      Environment:
        Variables:
          THUMBNAIL_WIDTH: 80
          THUMBNAIL_HEIGHT: 80

  BucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref PhotoProcessorFunction
      Principal: s3.amazonaws.com
      SourceAccount: !Ref "AWS::AccountId"
      SourceArn: !Ref S3UserfilesBucketArn

Outputs:
  PhotoProcessorFunction:
    Description: "Photo Processor Lambda Function ARN"
    Value: !GetAtt PhotoProcessorFunction.Arn
  PhotoProcessorFunctionIamRole:
    Description: "IAM Role created for Photo Processor function"
    Value: !GetAtt PhotoProcessorFunctionIamRole.Arn

Packaging a SAM Template File

Once you have a SAM template ready, you need to have SAM package up your Lambda function. This zips up the function and its dependencies, uploads that zip to an S3 bucket (CloudFormation can only reference Lambda code that lives in S3), and generates the final CloudFormation template pointing at the uploaded artifact.

As a one-time operation, create an S3 bucket to host this (and any future CloudFormation) template(s), taking care to create the bucket with a unique name and to specify the same region that is referenced in our React app’s src/aws-exports.js file. Run this command (with appropriate substitutions):

export MY_UNIQUE_CLOUDFORMATION_TEMPLATES_BUCKET_NAME=PickAUniqueNameHere
aws s3 mb s3://$MY_UNIQUE_CLOUDFORMATION_TEMPLATES_BUCKET_NAME --region us-east-1

Deploying a SAM Template file

Now, we’ll use the SAM CLI to package up and deploy our Lambda function to the cloud. From within the photo_processor directory, run:

sam package \
    --template-file template.yml \
    --output-template-file packaged.yml \
    --s3-bucket $MY_UNIQUE_CLOUDFORMATION_TEMPLATES_BUCKET_NAME

Finally, it’s time to deploy the Lambda function. The previous command created a packaged.yml file alongside the template.yml file and uploaded a zip file of our Lambda function and its dependencies. We’ll reference that in our deploy command, but we also need to pass in a parameter to tell CloudFormation the ARN of the S3 bucket our app is using for file storage. Look in our React app’s src/aws-exports.js file for the aws_user_files_s3_bucket value and substitute it below.

From within the photo_processor directory, run (with an appropriate substitution for S3UserfilesBucketArn):

export MY_AWS_USERFILES_S3_BUCKET_ARN=arn:aws:s3:::my-aws-user-files-s3-bucket-name
sam deploy \
    --template-file packaged.yml \
    --stack-name PhotoAlbumsProcessorSAMStack \
    --capabilities CAPABILITY_IAM \
    --region us-east-1 \
    --parameter-overrides \
    S3UserfilesBucketArn=$MY_AWS_USERFILES_S3_BUCKET_ARN

After a short wait, our Lambda function should be deployed and ready for us to connect to S3 for resizing our photos! If you’d like, you can read more about packaging and deploying Lambda functions with AWS SAM in the Deploying Serverless Applications documentation.

Invoking our Lambda function when photos are uploaded to S3

Now that our photo resizing Lambda function is deployed, we need to add an event source to it so it will get invoked whenever a new photo is uploaded to our storage bucket.

Adding S3 uploads as a trigger for our photo_processor Lambda function

Here’s how to connect S3 bucket uploads to trigger our Lambda:

  1. Open the AWS web console, be sure you’re in the same region that our app is using, and load the Lambda console page
  2. Find the name of our Lambda function, which should have ‘PhotoProcessorFunction’ in it (you can use the search box to narrow down the list of functions if you have a lot), and click the function to view and manage its configuration
  3. In the Designer section at the top of the page, click S3 from the ‘Add Triggers’ list on the left
  4. In the ‘Configure triggers’ section that appears:
    a. Select the name of your storage bucket (you can look this up in the src/aws-exports.js file)
    b. Select the PUT event type
    c. Enter ‘uploads/’ for the prefix
    d. Click ‘Add’
  5. Click the orange ‘Save’ button in the top right

With that done, the photo resizing Lambda function should be invoked whenever new photos appear in the S3 bucket under the uploads/ prefix. You can check to see if things are working by using the Album details web interface to upload a new photo to an album, then use the S3 web console to browse the contents of the bucket; look for a photo in public/ and one in public/resized/ with the same name.
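If you want to sanity-check the wiring, the handler receives S3 events shaped roughly like this (a hand-written sketch with made-up bucket and key values, not output captured from a real invocation):

```javascript
// Minimal sketch of an S3 PUT event record, showing the two fields that
// our processRecord() function reads (bucket name and object key):
const sampleEvent = {
  Records: [
    {
      s3: {
        bucket: { name: 'my-user-files-bucket' },
        object: { key: 'uploads/3f2c1b0a-1111-2222-3333-444455556666' }
      }
    }
  ]
};

const record = sampleEvent.Records[0];
console.log(record.s3.bucket.name);                         // 'my-user-files-bucket'
console.log(record.s3.object.key.indexOf('uploads') === 0); // true
```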

Storing photo metadata in DynamoDB

Our last step, before we can show all of the photos in an album, is to add a new entry to a table in DynamoDB with the metadata about each photo. The GraphQL schema we defined in Part One describes a Photo type that is connected to an Album. If we put new items into the DynamoDB table backing our AppSync API’s Photo data source, the information will be available when we try to fetch the nested photos for an Album via a GraphQL query. For the sake of simplicity, we’ll add on to our existing photo_processor Lambda function, rather than creating another function.

We’ll need to generate a unique ID for each photo that we insert into Dynamo, so let’s bring in another package. From the photo_processor/src directory, run: npm install --save uuid

Working with DynamoDB from JavaScript and the AWS JS SDK is pretty easy thanks to the DynamoDB Document Client class. Make the following changes to photo_processor/src/app.js:

// photo_processor/src/app.js

// 1. NEW: Import the DynamoDB DocumentClient and the uuid module
const DynamoDBDocClient = new AWS.DynamoDB.DocumentClient({ apiVersion: '2012-08-10' });
const uuidv4 = require('uuid/v4');

// 2. NEW: Extract the name of the photos table
// from an environment variable (we'll set this value via
// our SAM template below...)
const DYNAMODB_PHOTOS_TABLE_NAME = process.env.DYNAMODB_PHOTOS_TABLE_ARN.split('/')[1];

// 3. NEW: Add a new function to handle putting
// our new Photo info into DynamoDB
function storePhotoInfo(item) {
  const params = {
    Item: item,
    TableName: DYNAMODB_PHOTOS_TABLE_NAME
  };
  return DynamoDBDocClient.put(params).promise();
}

// 4. NEW: Add a new function to get the metadata for a photo
async function getMetadata(bucketName, key) {
  const headResult = await S3.headObject({ Bucket: bucketName, Key: key }).promise();
  return headResult.Metadata;
}

// 5. EDIT: Replace processRecord() with this definition,
// which passes the metadata and the sizes info
// to storePhotoInfo().
//
// We'll also add a createdAt property to our photo items,
// which will be helpful when we get around to
// paginating photos in date order.
async function processRecord(record) {
  const bucketName = record.s3.bucket.name;
  const key = record.s3.object.key;

  if (key.indexOf('uploads') != 0) return;

  const metadata = await getMetadata(bucketName, key);
  const sizes = await resize(bucketName, key);
  const id = uuidv4();
  const item = {
    id: id,
    owner: metadata.owner,
    photoAlbumId: metadata.albumid,
    bucket: bucketName,
    thumbnail: sizes.thumbnail,
    fullsize: sizes.fullsize,
    createdAt: new Date().getTime()
  };
  await storePhotoInfo(item);
}
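The environment-variable parsing above (split('/')[1]) relies on the shape of DynamoDB table ARNs; here’s how it behaves against a hypothetical ARN:

```javascript
// DynamoDB table ARNs look like
// arn:aws:dynamodb:<region>:<account-id>:table/<table-name>,
// so everything after the first '/' is the table name:
const hypotheticalArn = 'arn:aws:dynamodb:us-east-1:123456789012:table/PhotoTable-abc123';
const tableName = hypotheticalArn.split('/')[1];

console.log(tableName); // 'PhotoTable-abc123'
```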

Update the SAM template.yml file to add in the new environment variable we introduced above and to add a new policy that allows our Lambda function to write to the photos table:

# photo_processor/template.yml
# ...
Parameters:
  # ...

  # 1. NEW: Add another parameter
  DynamoDBPhotosTableArn:
    Type: String

# ...
Resources:
  # ...
  PhotoProcessorFunctionIamRole:
    Properties:
      # ...
      Policies:
        # ...

        # 2. NEW: Add another policy
        - PolicyName: "AllPrivsForDynamo"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action: "dynamodb:*"
                Resource:
                  - !Ref DynamoDBPhotosTableArn

  # ...
  PhotoProcessorFunction:
    # ...
    Environment:
      Variables:
        # ...

        # 3. NEW: add a new environment
        # variable referencing our param
        DYNAMODB_PHOTOS_TABLE_ARN: !Ref DynamoDBPhotosTableArn

With these changes completed, our photo_processor Lambda should now create a thumbnail for each uploaded photo and also create a new record in our photos DynamoDB table with the data necessary to have our front end render the album nicely.

It’s now time to deploy an updated version of our Lambda function (repeating the same sam package and sam deploy commands we ran earlier). Assuming previously exported variables still exist (you're in the same terminal session), we just need to define one new environment variable for the DynamoDB photos table ARN.

Viewing the data sources for our AppSync API

To look up the correct photos table ARN:

  1. Go to our API in the AWS AppSync web console
  2. Click ‘Data Sources’
  3. Find the PhotoTable entry and click the link of the table name to go to the DynamoDB web console
  4. Copy the ARN at the bottom of the DynamoDB table Overview tab

Once you’ve found the ARN, substitute it in the export statement below and run the following commands in the same terminal window you’ve been using for SAM commands (so the other environment variables we’ve already set are still defined). Note: don’t change the stack-name parameter below; we want it to be a different name than the stack that Amplify created.

From the photo_processor directory, run:

# Fill in the value below with the ARN for your DynamoDB Photos table
export MY_DYNAMODB_PHOTOS_TABLE_ARN=my-dynamo-db-photos-table-arn
sam package \
    --template-file template.yml \
    --output-template-file packaged.yml \
    --s3-bucket $MY_UNIQUE_CLOUDFORMATION_TEMPLATES_BUCKET_NAME

sam deploy \
    --template-file packaged.yml \
    --stack-name PhotoAlbumsProcessorSAMStack \
    --capabilities CAPABILITY_IAM \
    --region us-east-1 \
    --parameter-overrides \
    S3UserfilesBucketArn=$MY_AWS_USERFILES_S3_BUCKET_ARN \
    DynamoDBPhotosTableArn=$MY_DYNAMODB_PHOTOS_TABLE_ARN

From this point forward, any new photos that we upload from our app should end up with a row in the photos DynamoDB table. Try another upload from the front end and in the next section we’ll see if we can fetch the photo info via GraphQL.

Fetching photos for an album

Our GraphQL schema already indicates that an Album has a Photos field (of type [Photo]) inside it. We've already taken care of writing rows to the DynamoDB table holding photo information, linking each entry to an album via the albumid metadata provided during each photo's upload. The Amplify CLI already took care of writing a resolver to correctly look up the related photos for a given album. So we should be able to ask GraphQL for photos that belong to a specific album. Let's try it.

Querying our AWS AppSync API in the web console

Back in the AWS AppSync web console, go to the Queries section and run this query:

query AllAlbums {
  listAlbums {
    items {
      id
      name
      photos {
        items {
          id
          bucket
          thumbnail {
            width
            height
            key
          }
        }
      }
    }
  }
}

If you added any photos to an album since our last Lambda deploy, you should see some albums return with photos included, too! All that’s left is to show these images in our UI.
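The response should have roughly this nested shape (all values here are invented for illustration; the 80×80 dimensions come from the THUMBNAIL_WIDTH and THUMBNAIL_HEIGHT settings in our SAM template):

```javascript
// Illustrative response shape only; ids, names, bucket, and keys are made up:
const exampleResponse = {
  data: {
    listAlbums: {
      items: [
        {
          id: 'album-uuid-1',
          name: 'First Album',
          photos: {
            items: [
              {
                id: 'photo-uuid-1',
                bucket: 'my-user-files-bucket',
                thumbnail: {
                  width: 80,
                  height: 80,
                  key: 'public/resized/photo-uuid-1'
                }
              }
            ]
          }
        }
      ]
    }
  }
};

const firstPhoto = exampleResponse.data.listAlbums.items[0].photos.items[0];
console.log(firstPhoto.thumbnail.key); // 'public/resized/photo-uuid-1'
```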

Rendering all of the photos in an album

To render each photo, we can take advantage of another React component provided by the AWS Amplify JS library: S3Image (you can read more about this component here). Let's update our GetAlbum query to fetch an album's photos, create a new PhotosList component, and use it inside our AlbumDetails component. Make the following changes to src/App.js:

// src/App.js

// 1. NEW: Add an import of S3Image
// and add Divider to imports from semantic-ui-react
import { S3Image } from 'aws-amplify-react';
import { Divider, Form, Grid, Header, Input, List, Segment } from 'semantic-ui-react';

// 2. EDIT: Update our GetAlbum query to include
// fetching thumbnail info for each photo
const GetAlbum = `query GetAlbum($id: ID!) {
  getAlbum(id: $id) {
    id
    name
    photos {
      items {
        thumbnail {
          width
          height
          key
        }
      }
      nextToken
    }
  }
}
`;

// 3. NEW: Create a new PhotosList component
class PhotosList extends React.Component {
  photoItems() {
    return this.props.photos.map(photo =>
      <S3Image
        key={photo.thumbnail.key}
        imgKey={photo.thumbnail.key.replace('public/', '')}
        style={{ display: 'inline-block', 'paddingRight': '5px' }}
      />
    );
  }

  render() {
    return (
      <div>
        <Divider hidden />
        {this.photoItems()}
      </div>
    );
  }
}

// 4. EDIT: Add PhotosList to the AlbumDetails component's render()
class AlbumDetails extends Component {
  render() {
    return (
      <Segment>
        <Header as='h3'>{this.props.album.name}</Header>
        <S3ImageUpload albumId={this.props.album.id}/>
        <PhotosList photos={this.props.album.photos.items} />
      </Segment>
    );
  }
}

If you refresh your app now, you should see photos loading for the album you’re viewing. Woo! If you add new photos, wait a moment for the Lambda function to get invoked by S3, then refresh, your new photo should become visible, too.

Viewing an album after uploading some photos

At this point, there are three things about our photo listing experience that are worth discussing:

  • Refreshing the album view in order to see new photos isn’t a great user experience, but this post has already run through quite a bit of material and there’s still more to cover in the next post, too. In short, the way to handle this would be to have our photo_processor Lambda function trigger a mutation on our API, and to have the AlbumDetailsLoader component subscribe to that mutation. However, because we’re using Amazon Cognito User Pool authentication, the only way to have our Lambda function trigger such a mutation would be to create a sort of ‘system’ user (through the normal user sign up and confirmation process), store that user’s credentials securely (perhaps in AWS Secrets Manager), and authenticate to our AppSync API as that user inside our Lambda in order to trigger the mutation.
  • If an album has many photos in it, our API won’t return all of them in our first getAlbum query. Instead, we'll need to enhance our AlbumDetails component to allow the user to paginate through older photos, loading more on-demand. We'll cover this in the next post.
  • We're currently only rendering the thumbnail for each photo. It might be nice to show the full size for a photo when you click on it. I'll leave this enhancement as an exercise for the reader. :-)

Coming up

We’ve covered a lot of ground in this post. We added routing to our React app, created components for loading and rendering an album’s details, uploading photos to an album, and displaying the photos in an album. We made an AWS Lambda function to automatically create thumbnails for our photos, and we learned how to package and deploy the Lambda using the SAM CLI.

In the next (and last) post in this series, we’ll improve the listing and pagination experience for photos, add in fine-grained security for our albums, and we’ll see how to deploy our app to a CDN for faster load times all around the world.

If you’d like to be notified when new posts come out, please follow me on Twitter: Gabe Hollombe. That’s also the best way to reach me if you have any questions or feedback about this post.


Bootstrapping what we’ve built so far

If you’d like to just check out a repo and launch the app we’ve built so far, check out this repo on GitHub and use the blog-post-part-two tag, linked here: https://github.com/gabehollombe-aws/react-graphql-amplify-blog-post/tree/blog-post-part-two. Follow the steps in the README to configure and launch the app.