- Create the top-level directory for your project.

```shell
mkdir example-lambdas
```
- Create a `config` directory and, inside it, a `config.yml` file.

```shell
cd example-lambdas/
mkdir config
cd config
touch config.yml
```
The `config.yml` file defines the essential configuration the lambda version manager needs to update each lambda with the correct S3 key location. In `config.yml` we need the following keys set to describe our project to the generator.
- The `environments` key. This key defines a map of the environments that we are setting properties for. Environment names must be unique.
- Under each environment, the following values must be defined:
  - `region`: The AWS region for the environment. This is used to make AWS SDK calls.
  - `account`: The AWS account that this environment lives in. This can be used to limit deployments, for example between staging and prod accounts.
  - `s3_bucket`: The S3 bucket that lambda artifacts are uploaded to.
  - `base_path`: The path in the S3 bucket above to the lambda artifacts.
Here is an example `config.yml`:

```yaml
environments:
  us-east-1: # prod
    region: us-east-1
    account: "123456789101"
    s3_bucket: "lambdastorage.us-east-1.prod.myspecials3domain.com"
    base_path: "lambdas"
  eu-central-1:
    region: eu-central-1
    account: "123456789101"
    s3_bucket: "lambdastorage.prod.eu-central-1.myspecials3domain.com"
    base_path: "lambdas"
  staging: # staging
    region: us-east-1
    account: "101987654321"
    s3_bucket: "lambdastorage.staging.us-east-1.mys3domain.com"
    base_path: "lambdas"
```
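Once parsed (for example with `yaml.safe_load`), this file becomes a nested mapping. As a quick sanity check, you can verify that every environment defines the four required keys. This is just an illustrative sketch, not part of the tool; the dict below mirrors a subset of the example config above as it would look after parsing:

```python
# Parsed form of (part of) the example config.yml above.
config = {
    "environments": {
        "us-east-1": {
            "region": "us-east-1",
            "account": "123456789101",
            "s3_bucket": "lambdastorage.us-east-1.prod.myspecials3domain.com",
            "base_path": "lambdas",
        },
        "staging": {
            "region": "us-east-1",
            "account": "101987654321",
            "s3_bucket": "lambdastorage.staging.us-east-1.mys3domain.com",
            "base_path": "lambdas",
        },
    }
}

REQUIRED_KEYS = {"region", "account", "s3_bucket", "base_path"}

def validate(config):
    """Return a list of (environment, missing_keys) problems; empty means valid."""
    problems = []
    for name, env in config["environments"].items():
        missing = REQUIRED_KEYS - env.keys()
        if missing:
            problems.append((name, sorted(missing)))
    return problems

print(validate(config))  # [] -> every environment has all required keys
```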
The environments folder will contain one file per environment; each file lists the lambdas managed in that environment.
- Create an `environments` folder.

```shell
cd example-lambdas/
mkdir environments
```
- Make a YAML file with the same name as each environment key under the `environments` block in `config.yml`.

```shell
cd example-lambdas/environments
touch us-east-1.yaml
touch eu-central-1.yaml
touch staging.yaml
```
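These `touch` commands can also be scripted so the file names always stay in sync with the keys in `config.yml`. A minimal sketch (not part of the tool), assuming the config has already been parsed into a dict:

```python
from pathlib import Path

def environment_files(config, project_path):
    """Return the path of one <environment>.yaml file per environment key."""
    env_dir = Path(project_path) / "environments"
    return [env_dir / f"{name}.yaml" for name in config["environments"]]

def create_environment_files(config, project_path):
    # Equivalent to the mkdir/touch commands above, driven by config.yml.
    for path in environment_files(config, project_path):
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch(exist_ok=True)

config = {"environments": {"us-east-1": {}, "eu-central-1": {}, "staging": {}}}
print([p.name for p in environment_files(config, "example-lambdas")])
# ['us-east-1.yaml', 'eu-central-1.yaml', 'staging.yaml']
```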
- To manage a lambda in a given environment, it must be defined in that environment's file by its function name.
- The top-level key must be the function (lambda) name.
- Below the function name, define the following keys:
  - `artifact_name`: The artifact name. This must match the name of the built artifact the lambda uses.
  - `version`: The version of the artifact.
  - `extension`: The file extension of the artifact.
- Optional keys to define:
  - `sha1`: The SHA-1 sum of the artifact. This is useful if you repeatedly build the same version and therefore need to generate a diff to trigger an update to the lambda.
  - `s3_bucket`: An alternate S3 bucket to use as the artifact source instead of the one specified in the config.
  - `s3_key`: Specifying this tells the version manager to use it as the exact path to the artifact in S3. This overrides the path derived from the `artifact_name`, `version`, and `extension` values.
- An example of an environment file is below:

```yaml
user_registration: # lambda name
  artifact_name: user-registration # name of the artifact produced by the build and uploaded to S3
  version: 1.0.41-SNAPSHOT
  extension: jar
expired_document_cleanup:
  artifact_name: expired-document-cleanup
  version: 1.0.7
  extension: jar
  sha1: klf90849u5hkwhfp9op2ojrfy3rubmwlehr
```
The `bin` directory contains the `lambda-version-manager` CLI. An example of running it is below. The `project_path` argument specifies the path to the lambda management project whose creation was described above. This command deploys all lambdas in the `us-east-1` environment, in the supplied account, that have changed since the last deploy.

```shell
./lambda-version-manager deploy --project_path ~/example-lambdas/ --environments us-east-1 --account 123456789101
```
The command below updates the YAML configuration block of every lambda that uses the `user-registration` artifact to the supplied version, in the given accounts.

```shell
./lambda-version-manager update_project --project_path ~/example-lambdas/ --artifact user-registration --version 1.0.42-SNAPSHOT --accounts 123456789101
```
When you use the lambda version manager to deploy a lambda, it updates the `.history` directory with a copy of the latest YAML file for the deployed environment. This lets the tool run a diff before a deployment and deploy only the updated lambdas, which enables the following workflow:
- Use the lambda version manager's `update_project` command in the lambda build job to update the configs to the new version or sha1.
- Use a separate job, triggered by SCM updates, that deploys only the newly updated lambdas. This decouples the deployment portion from the build job, and also lets you make manual changes to the lambda version project and have those deployed as well.
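The diff step can be pictured as comparing the current environment file with its `.history` copy: only lambdas whose configuration block changed, or which are new, get deployed. A sketch of that comparison (not the tool's actual code), assuming both files have already been parsed into dicts:

```python
def changed_lambdas(current, history):
    """Return the lambda names whose config differs from the last deploy."""
    return sorted(
        name for name, cfg in current.items()
        if history.get(name) != cfg  # new lambda, or any changed key
    )

# .history snapshot from the last deploy vs. the environment file now.
history = {"user_registration": {"artifact_name": "user-registration",
                                 "version": "1.0.41-SNAPSHOT", "extension": "jar"}}
current = {"user_registration": {"artifact_name": "user-registration",
                                 "version": "1.0.42-SNAPSHOT", "extension": "jar"},
           "expired_document_cleanup": {"artifact_name": "expired-document-cleanup",
                                        "version": "1.0.7", "extension": "jar"}}
print(changed_lambdas(current, history))
# ['expired_document_cleanup', 'user_registration']
```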
If you are not worried about deploying only the changed lambda code, this process can be simplified and the history tracking ignored: use the `deploy_all` flag to deploy everything that matches your filter, regardless of changes.