In the previous post, the candidate uploaded the finished assignment to an S3 bucket. Now, we will execute the build, and if the build succeeds and meets certain criteria, we will move the candidate to the next round. In this post, I will only cover executing the build. You can extend the application to add custom checks, like code coverage or static analysis metrics, that the code should meet. You can look at the full application code to understand how to add more checks.
In this post, we will create our new function in Java. This will help you see how you can use Java for writing functions as well.
$ serverless create --template aws-java-gradle --path build-executor-service --name build-executor
This will create the build-executor-service directory on your filesystem. It generates a standard Java Gradle project with the following files and directories.
├── build.gradle
├── gradle
│   └── wrapper
│       └── gradle-wrapper.properties
├── gradlew
├── gradlew.bat
├── serverless.yml
└── src
    └── main
        ├── java
        │   └── com
        │       └── serverless
        │           ├── ApiGatewayResponse.java
        │           ├── Handler.java
        │           └── Response.java
        └── resources
            └── log4j.properties

8 directories, 9 files
As with all serverless projects, it contains serverless.yml that defines our new service. You can open this project in your favourite IDE.
Make sure you are inside the build-executor-service directory and run the Gradle wrapper task to download the Gradle wrapper jar.
$ gradle wrapper
To install the wrapper jar, you need Gradle installed on your machine. If you are on a Mac, you can install it using the brew package manager.
$ brew install gradle
In serverless.yml, we will add the following. Make sure the bucket cre-candidate-submissions-dev exists. If you are testing this service standalone, you can create this bucket using the AWS web console.
service: build-executor-service

provider:
  name: aws
  runtime: java8
  stage: dev
  region: us-east-1
  environment:
    CANDIDATE_SUBMISSIONS_S3_BUCKET: "cre-candidate-submissions-${opt:stage, self:provider.stage}"
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "*"
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "*"

package:
  artifact: build/distributions/build-executor-service.zip

functions:
  assignment-build-executor:
    handler: cre.build.executor.BuildHandler
    memorySize: 704
    timeout: 300
    events:
      - s3:
          bucket: ${self:provider.environment.CANDIDATE_SUBMISSIONS_S3_BUCKET}
          event: s3:ObjectCreated:*
          rules:
            - suffix: .zip
We will remove the code generated by the Serverless framework and create a new package cre.build.executor. Inside the package, create a new class BuildHandler. BuildHandler has to do the following:

- Listen to S3Event and, when an event is received, download the zip file to the /tmp directory of the container that is running the Lambda function.
- After downloading the zip file, extract the zip to the /tmp directory.
- Then run a series of commands to build the project:
  - Make gradlew executable.
  - Download JDK 8 on the container. Lambda containers only ship with a JRE, and to build the project you need a JDK.
  - Execute the build.
- Update the candidate status in the DynamoDB table.
Now that you understand what we need to do in BuildHandler, let's add the code piece by piece.
package cre.build.executor;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.event.S3EventNotification;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

import org.apache.commons.io.FileUtils;
import org.apache.log4j.Logger;

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.net.URLDecoder;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BuildHandler implements RequestHandler<S3Event, String> {

    private final Logger logger = Logger.getLogger(BuildHandler.class);
    private final AmazonS3Client s3Client = new AmazonS3Client();

    @Override
    public String handleRequest(S3Event s3Event, Context context) {
        logger.info(String.format("Received event %s", s3Event));
        S3EventNotificationRecord record = s3Event.getRecords().get(0);
        Path zipPath = downloadS3ObjectToTmp(record);
        logger.info("Downloaded zip to " + zipPath);
        return "OK";
    }

    private Path downloadS3ObjectToTmp(S3EventNotificationRecord record) {
        try {
            S3EventNotification.S3Entity entity = record.getS3();
            // Object keys in S3 event notifications are URL encoded, with spaces encoded as '+'
            String srcKey = URLDecoder.decode(entity.getObject().getKey().replace('+', ' '), "UTF-8");
            S3Object s3Object = s3Client.getObject(new GetObjectRequest(entity.getBucket().getName(), srcKey));
            String zipName = s3Object.getKey();
            return downloadZipTo(s3Object.getObjectContent(), zipName, "/tmp");
        } catch (IOException e) {
            throw new RuntimeException("Unable to download zip to /tmp directory", e);
        }
    }

    private Path downloadZipTo(InputStream in, String filename, String dest) throws IOException {
        File assignmentZipFile = Paths.get(dest, filename).toFile();
        FileUtils.copyInputStreamToFile(in, assignmentZipFile);
        return assignmentZipFile.toPath();
    }
}
We need to add the following dependencies to our Gradle project.
dependencies {
    compile(
        'com.amazonaws:aws-lambda-java-core:1.1.0',
        'com.amazonaws:aws-lambda-java-log4j:1.0.0',
        'com.fasterxml.jackson.core:jackson-core:2.8.5',
        'com.fasterxml.jackson.core:jackson-databind:2.8.5',
        'com.fasterxml.jackson.core:jackson-annotations:2.8.5'
    )
    compile('com.amazonaws:aws-lambda-java-events:1.3.0') {
        exclude module: 'aws-java-sdk-kinesis'
        exclude module: 'aws-java-sdk-kms'
        exclude module: 'aws-java-sdk-sqs'
        exclude module: 'aws-java-sdk-cognitoidentity'
        exclude module: 'aws-java-sdk-sns'
    }
    compile "commons-io:commons-io:2.5"
    compile('org.zeroturnaround:zt-zip:1.11')
}
Build the package and deploy it.
$ ./gradlew clean build
$ serverless deploy
Once it is successfully deployed, you can test the service by dropping a zip file into the bucket. You will see the following in the log output. You can view the logs using the sls logs -f assignment-build-executor -t command.
2017-02-27 10:58:02 <request_id> INFO cre.build.executor.BuildHandler:31 - Downloaded zip to /tmp/assignment1.zip
Now that we have downloaded the zip to the /tmp directory, let's extract it. Note that the code below assumes the uploaded zip is named assignment-<candidateId>.zip, since it extracts the candidate id from the file name.
@Override
public String handleRequest(S3Event s3Event, Context context) {
    // removed for brevity
    logger.info("Downloaded zip to " + zipPath);
    String candidateId = zipPath.getFileName().toString().split("-")[1].replace(".zip", "");
    File unzipDirectory = Paths.get("/tmp", "assignment-" + candidateId).toFile();
    ZipUtil.unpack(zipPath.toFile(), unzipDirectory);
    logger.info(String.format("%s extracted to %s", zipPath.getFileName(), unzipDirectory.getAbsolutePath()));
    return "OK";
}
In the code shown above, once we have downloaded the zip, we unpack it using ZeroTurnaround's zt-zip utility. Finally, we log the extracted path.
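If you are following along, you will also need to import the ZipUtil class from zt-zip into BuildHandler:

import org.zeroturnaround.zip.ZipUtil;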
Build the package and deploy it.
$ ./gradlew clean build
$ serverless deploy
Once it is successfully deployed, you can test the service by dropping a zip file into the bucket. You will see the following in the log output. You can view the logs using the sls logs -f assignment-build-executor -t command.
2017-02-27 11:13:44 <request_id> INFO cre.build.executor.BuildHandler:32 - Downloaded zip to /tmp/assignment-123.zip
2017-02-27 11:13:44 <request_id> INFO cre.build.executor.BuildHandler:36 - assignment-123.zip extracted to /tmp/assignment-123
Executing a build involves running multiple commands. Let's write a simple Cmd class that abstracts this concept.
package cre.build.executor;

import org.apache.commons.io.IOUtils;

import java.io.IOException;
import java.nio.file.Path;

public class Cmd {

    private final String cmd;
    private final Path directory;

    public Cmd(String cmd, Path directory) {
        this.cmd = cmd;
        this.directory = directory;
    }

    public final CmdResult run() {
        try {
            Process p = Runtime.getRuntime().exec(this.cmd, null, this.directory.toFile());
            int exitCode = p.waitFor();
            System.out.println("InputStream...");
            IOUtils.copy(p.getInputStream(), System.out);
            System.out.println("ErrorStream...");
            IOUtils.copy(p.getErrorStream(), System.err);
            return new CmdResult(this.cmd, exitCode);
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(
                    String.format("Unable to execute the cmd %s", this.cmd),
                    e);
        }
    }
}
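Cmd returns a CmdResult, which is not shown above. Here is a minimal sketch of it, with the fields inferred from how it is used later (getExitCode and logging the result); the original class may carry more information, such as the captured command output.

package cre.build.executor;

// Minimal sketch of the result object returned by Cmd.run(). The fields are
// inferred from how it is used later (getExitCode() and logging the result);
// the original class may differ.
public class CmdResult {

    private final String cmd;
    private final int exitCode;

    public CmdResult(String cmd, int exitCode) {
        this.cmd = cmd;
        this.exitCode = exitCode;
    }

    public int getExitCode() {
        return exitCode;
    }

    @Override
    public String toString() {
        return String.format("CmdResult{cmd='%s', exitCode=%d}", cmd, exitCode);
    }
}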
Next, we will write an executeBuild method in BuildHandler that runs all the commands.
private CmdResult executeBuild(Path codeDir) {
    List<Cmd> preBuildCmds = Arrays.asList(
            new Cmd("chmod u+x gradlew", codeDir),
            new Cmd("sh /var/task/scripts/java.sh 1.8", codeDir)
    );
    preBuildCmds.forEach(Cmd::run);
    Cmd buildCmd = new Cmd("./gradlew --gradle-user-home=/tmp/gradle -Dorg.gradle.java.home=/tmp/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.77-0.b03.9.amzn1.x86_64 build", codeDir);
    CmdResult buildResult = buildCmd.run();
    return buildResult;
}
Both the Gradle user home and the Java home point to locations under /tmp because /tmp is the only writable location inside a Lambda container; the /var/task/scripts/java.sh script used above downloads JDK 8 into /tmp, which is why the Java home path starts with /tmp. You will also need imports for java.util.Arrays and java.util.List in BuildHandler. We need to call this method from the handleRequest method as shown below.
@Override
public String handleRequest(S3Event s3Event, Context context) {
    // removed for brevity...
    logger.info(String.format("%s extracted to %s", zipPath.getFileName(), unzipDirectory.getAbsolutePath()));
    CmdResult buildResult = executeBuild(unzipDirectory.toPath());
    if (buildResult.getExitCode() != 0) {
        logger.info(String.format("Build failed with result %s", buildResult));
        // update status to fail in DynamoDB
        return "Fail";
    }
    logger.info(String.format("Build succeeded with result %s", buildResult));
    // update status to pass in DynamoDB
    return "Pass";
}
Once the zip is extracted, we call the executeBuild method, which runs the build. If the exit code is not 0, i.e. the command failed, we return "Fail" as the response; otherwise we return "Pass".
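This is also the natural place to plug in the extra checks mentioned at the beginning of the post. As a purely illustrative sketch (the helper name and the checked path are assumptions, not part of the original application), you could require that Gradle test results exist before treating the build as passed:

// Illustrative only: an extra check that could run after a successful build.
// The helper name and the checked path are assumptions, not part of the original code.
private boolean hasTestResults(Path codeDir) {
    // Gradle's java plugin writes JUnit results under build/test-results by default.
    return java.nio.file.Files.exists(codeDir.resolve("build").resolve("test-results"));
}

A real check would more likely parse a coverage or static analysis report, but the wiring is the same: run it after executeBuild and fold the outcome into the pass/fail decision.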
The last thing remaining is to update DynamoDB with the build status.
private UpdateItemRequest updateRequest(String candidateId, String buildStatus) {
    Map<String, AttributeValue> key = new HashMap<>();
    key.put("id", new AttributeValue().withS(candidateId));

    Map<String, AttributeValue> expressionAttributeValues = new HashMap<>();
    expressionAttributeValues.put(":build_status", new AttributeValue().withS(buildStatus));

    UpdateItemRequest updateRequest = new UpdateItemRequest();
    updateRequest.withTableName(System.getenv("CANDIDATE_TABLE"))
            .withKey(key)
            .withUpdateExpression("set build_status = :build_status")
            .withExpressionAttributeValues(expressionAttributeValues);
    return updateRequest;
}
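The method above needs a few more imports in BuildHandler, shown below. It also reads a CANDIDATE_TABLE environment variable, which is not part of the serverless.yml shown earlier; make sure you add it under provider.environment and point it at the DynamoDB table where the candidate records are stored.

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.UpdateItemRequest;

import java.util.HashMap;
import java.util.Map;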
Also, update the handleRequest method to call the updateRequest method as shown below.
private final AmazonDynamoDBClient dynamoDBClient = new AmazonDynamoDBClient();

@Override
public String handleRequest(S3Event s3Event, Context context) {
    // earlier part of the method removed for brevity...
    if (buildResult.getExitCode() != 0) {
        logger.info(String.format("Build failed with result %s", buildResult));
        // update status to fail in DynamoDB
        UpdateItemRequest updateRequest = updateRequest(candidateId, "FAILED");
        dynamoDBClient.updateItem(updateRequest);
        return "Fail";
    }
    logger.info(String.format("Build succeeded with result %s", buildResult));
    // update status to pass in DynamoDB
    UpdateItemRequest updateRequest = updateRequest(candidateId, "PASSED");
    dynamoDBClient.updateItem(updateRequest);
    return "Pass";
}
This finishes the whole application. We have built an end-to-end application using the serverless approach. In the last post of this series, we will look at how we can take the application to production.