Deploying an AWS Lambda function using Bitbucket Pipelines

In the previous article, I wrote about how engineers can leverage Bitbucket Pipelines as a CI/CD tool to automate the integration and deployment process, specifically to deploy serverless cloud function code to GCP.
In this article, we will discuss how we can do the same to deploy serverless code to AWS Lambda.
Prerequisites:
- An AWS account with permissions to create new IAM users and roles
- A Bitbucket account with permission to create and manage repositories
- Application / function code
Steps
- Create a repository in Bitbucket and enable Pipelines
- Create a new IAM user account in AWS for Lambda deployment, and a new IAM role for Lambda execution
- Update repository variable settings
- Update repository with function code and bitbucket-pipelines.yml
- Commit changes and push to Bitbucket to see the pipelines happening in action
1. Create a repository in Bitbucket and enable Pipelines
After you have created a new empty repository, head to the Settings tab, find the Pipelines section, and tick the Enable Pipelines checkbox.

2. Create a new IAM user account in AWS for Lambda deployment, and a new IAM role for Lambda execution
Navigate to AWS console > IAM and create a new user. Make sure the Programmatic access option is selected.

For permissions, I selected the AWSLambda_FullAccess policy. A more advisable approach is to create a dedicated policy that restricts permissions to specific actions such as lambda:CreateFunction, or to specific function ARNs.
Lastly, you can also add a key-value pair tag for your own reference. After the user is created, take note of the access key ID and secret access key, which we will be adding to the Bitbucket repository variables.
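For those who prefer the command line, the console steps above correspond roughly to the following sketch (the user name is a placeholder of my own):

```bash
# Create a dedicated deployment user (name is hypothetical)
aws iam create-user --user-name bitbucket-lambda-deployer

# Attach the managed AWSLambda_FullAccess policy (or your own restricted policy)
aws iam attach-user-policy \
  --user-name bitbucket-lambda-deployer \
  --policy-arn arn:aws:iam::aws:policy/AWSLambda_FullAccess

# Generate the access key ID and secret access key for Bitbucket
aws iam create-access-key --user-name bitbucket-lambda-deployer
```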
Then, create a new role to be used by the Lambda function, known as the execution role.

The next permissions screen is where you add any other permissions required by the execution role, such as access to S3, Parameter Store, or EC2.

After the role has been created, take note of the Role ARN, which will be used in the pipeline configuration file later.
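The CLI equivalent looks roughly like this (the role name is a placeholder; the trust policy is what marks it as a Lambda execution role):

```bash
# Create the execution role, trusting the Lambda service to assume it
aws iam create-role \
  --role-name my-lambda-execution-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach basic permissions so the function can write CloudWatch logs;
# attach S3 / Parameter Store / EC2 policies here as needed
aws iam attach-role-policy \
  --role-name my-lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

# Print the Role ARN for the pipeline configuration
aws iam get-role --role-name my-lambda-execution-role --query Role.Arn --output text
```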

3. Update repository variable settings
Back in the Bitbucket settings, navigate to Repository variables and enter the values we captured during the IAM user creation. Take note that the AWS-related variables need to be named exactly AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, as the AWS CLI looks for these environment variables during the CI/CD process. We can also set the AWS default region here if we are not going to specify it in the pipeline CLI commands, and add the name of the function we want to deploy to AWS to the variables list as well.
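For reference, the variables list ends up looking roughly like this; only the three AWS_* names are mandated by the AWS CLI, while FUNCTION_NAME and LAMBDA_ROLE_ARN are naming choices of my own, used by the pipeline sketch in the next section:

```
AWS_ACCESS_KEY_ID      = <access key ID of the deployment user>   (secured)
AWS_SECRET_ACCESS_KEY  = <secret access key>                      (secured)
AWS_DEFAULT_REGION     = <e.g. ap-southeast-1>
FUNCTION_NAME          = <name of the Lambda function to deploy>
LAMBDA_ROLE_ARN        = <Role ARN of the execution role>
```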

4. Update repository with function code and bitbucket-pipelines.yml
The folder structure is similar to the one shown in the previous article; the main changes are in the bitbucket-pipelines.yml file, sketched below.
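Since the original file is not embedded in this extract, here is a minimal sketch of what it could look like, assuming a Python 3.9 function with its handler in lambda_function.py; the images, file names, and test tooling are assumptions to adapt to your project:

```yaml
image: python:3.9

pipelines:
  default:
    - step:
        name: Lint and test
        script:
          - pip install -r requirements-dev.txt  # hypothetical dev dependencies: flake8, pytest, coverage
          - flake8 .
          - pytest --cov=.
    - step:
        name: Build deployment package
        script:
          - apt-get update && apt-get install -y zip  # in case the image lacks zip
          - pip install -r requirements.txt -t package/
          - cp lambda_function.py package/
          - cd package && zip -r ../function.zip .
        artifacts:
          - function.zip
    - step:
        name: Deploy to AWS Lambda
        image: atlassian/pipelines-awscli
        script:
          # Create the function only if it does not exist yet
          - >
            aws lambda get-function --function-name $FUNCTION_NAME ||
            aws lambda create-function --function-name $FUNCTION_NAME
            --runtime python3.9 --handler lambda_function.lambda_handler
            --role $LAMBDA_ROLE_ARN --zip-file fileb://function.zip
          # Redundant on first creation; updates the code on every later push
          - aws lambda update-function-code --function-name $FUNCTION_NAME --zip-file fileb://function.zip
```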
The first step runs linting, unit, and coverage testing. As AWS Lambda requires the code to be packaged into a zip file, the next step installs the dependencies from requirements.txt and compiles the application code into a zip archive. The main magic is in the first command of the deploy step, which creates the function if the specified function name does not exist in AWS yet. The update-function-code call that follows is redundant the first time the Lambda is created, but every subsequent change pushed to the git repository will update the function code accordingly.
5. Commit changes and push to Bitbucket to see the pipelines in action
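With everything in place, commit and push (assuming the pipeline runs on your repository's default branch):

```bash
git add .
git commit -m "Add function code and bitbucket-pipelines.yml"
git push origin master  # or main, depending on your default branch
```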
Conclusion
By leveraging Bitbucket Pipelines, we can create a CI/CD workflow that deploys to AWS Lambda. Further optimization and automation can be implemented to smooth over the quirks around the first creation of the Lambda function. Alternatively, the Serverless Framework, which already has some of these features built in, may be a better option.
The sample repository with the basic template, including the function code, the linting, unit, and coverage testing, and the bitbucket-pipelines.yml, can be found here.