Continuous Integration and Continuous Deployment for a Slack app with Azure DevOps

How to deploy a Slack Bolt app from Azure DevOps to AWS Lambda with the Serverless Framework.


The story

We wanted to build a Slack bot to automate our weekly update flow. Our minimum viable product is straightforward: a Slack app is installed in our workspace, and our team can use the slash command /weekly-update {the project name} to send an update to a dedicated channel for all projects' weekly updates.

The building blocks

We chose Slack Bolt and Serverless Framework to keep everything simple.

  • Slack Bolt: Slack’s recommended framework for building Slack apps. It uses JavaScript by default, and with a small tweak we can use TypeScript.
  • Serverless Framework: a fast way to stand up a simple AWS Lambda function to host our Slack app. The local development experience is also quite nice.

Following Slack’s tutorial, we got our first version up and running in a few hours.
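
For context, the core of that first version is a single Bolt command handler, roughly along the lines of the sketch below (the channel name and message format are illustrative, not our exact implementation):

import { App } from '@slack/bolt'

// Minimal sketch of the /weekly-update slash command (illustrative only)
const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
})

app.command('/weekly-update', async ({ command, ack, client }) => {
  await ack()
  // command.text carries whatever follows the slash command, i.e. the project name
  await client.chat.postMessage({
    channel: '#weekly-updates', // illustrative channel name
    text: `Weekly update for *${command.text}* from <@${command.user_id}>`,
  })
})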

What is missing from the tutorial?

While the tutorial is very detailed on building and deploying step by step, the experience is very much focused on local machine development. In our case, we want a continuous integration (CI) / continuous deployment (CD) pipeline so that:

  • The build and deployment processes don’t rely on any developer’s machine.
  • Build output is immutable.
  • The deployment is automated and secure.
  • We follow best practices like building once and deploying everywhere.

In this article, we will share our experiences setting this up for Slack Bolt and Serverless Framework on Azure DevOps.

Continuous integration

In this section, we will discuss how we bundle the TypeScript Slack app into an immutable package.

The build script

The most important requirement of a CI pipeline is that the build output is immutable: it must contain all the dependencies needed to run the app. To achieve this, we create a build script in package.json:

"scripts": {
  "build": "npm run clean && tsc -p . && npm run copy-files && cd build && renamer --find serverless-aws --replace serverless serverless-aws.yml && npm ci --production && rimraf package.json package-lock.json",
  "clean": "rimraf build",
  "copy-files: "copyfiles package.json package-lock.json serverless-aws.yml build",
},

The steps are:

  • npm run clean is a custom script that cleans the build folder.
  • tsc -p . transpiles the *.ts files into *.js. We have "outDir": "build" in the tsconfig.json file, so the output ends up in the build folder.
  • npm run copy-files copies package.json, package-lock.json and serverless-aws.yml into the build folder. A note here: due to slight differences between running Serverless on AWS and locally, we keep two versions of the .yml file, serverless-aws.yml for AWS and serverless.yml for local development. We explain serverless-aws.yml in detail below.
  • Change directory into build.
  • Rename serverless-aws.yml to serverless.yml with renamer, so the packaged app uses the AWS-specific configuration.
  • Run npm ci with the --production flag. We only want production dependencies in the build output. Because the agent has already run npm ci for the full dependency tree, the packages to resolve are already cached, so this step is fast.
  • Remove package.json and package-lock.json to keep the deployed app clean.

The build folder has the final build output. It has all the required dependencies for the deployment steps.

For this to work, we needed the following dev dependencies: npm install --save-dev copyfiles renamer rimraf.

azure-pipeline.yml

With the build script doing the heavy lifting, our pipeline YAML is quite simple:

- task: NodeTool@0
  inputs:
    versionSpec: 16.x

- script: npm ci
  displayName: npm ci
  workingDirectory: $(sourceDirectory)

- script: npm run build
  displayName: npm run build
  workingDirectory: $(sourceDirectory)

- task: ArchiveFiles@2
  displayName: Zip app
  inputs:
    rootFolderOrFile: $(sourceDirectory)/build
    includeRootFolder: false
    archiveType: zip
    archiveFile: $(Build.ArtifactStagingDirectory)/app/App.zip
    replaceExistingArchive: true

- task: PublishPipelineArtifact@1
  displayName: Publish app artifacts
  inputs:
    artifactName: app
    targetPath: $(Build.ArtifactStagingDirectory)/app

Continuous deployment

In this section, we will discuss our approach to taking the build output and deploying it to AWS, including how we handle Slack API secrets. We will not discuss how the AWS service connection is set up in Azure DevOps, since that is covered by the official AWS guide.

Azure DevOps secret variables

Our app requires two Slack secrets, the signing secret and the bot token, which we store as Azure DevOps secret variables. They are passed into our pipeline as a variable group. More details can be found in the official documentation.

One “gotcha” we learned: Azure DevOps injects variables as environment variables into scripts by default, but not if they are secrets. We have to do that ourselves on the relevant task that needs the secrets:

# Our template YAML file (partial)
parameters:
  - name: secretEnvs
    type: object
    default: []
  # ...

# ...
  - task: AWSShellScript@1
    # ...
    env:
      ${{ each env in parameters.secretEnvs }}:
        '${{ env }}': '$(${{ env }})'
      SECRETS_ENV_KEY_NAMES: '${{ join('';'', parameters.secretEnvs) }}'

In the template above, each secret value is injected as an environment variable, and all the key names are joined into a ;-separated string in the environment variable SECRETS_ENV_KEY_NAMES so that they can be interrogated programmatically.
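
As a minimal sketch (assuming the same environment variable names), a Node script can interrogate those variables like this; the create-secrets.ts script below follows exactly this pattern:

// Read the ;-separated list of secret key names, then look up each secret's value
// from the environment variable of the same name.
const keyNames = (process.env.SECRETS_ENV_KEY_NAMES ?? '').split(';').filter(Boolean)
for (const keyName of keyNames) {
  const value = process.env[keyName]
  if (!value) throw new Error(`Secret value for ${keyName} was not provided`)
  // ... push the value to AWS Secrets Manager (see create-secrets.ts below)
}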

Create AWS secrets with AWS SDK

In another blog post, we covered how we handle AWS secrets with the TypeScript CDK. A similar approach was taken here: the Slack secrets are stored in AWS Secrets Manager and retrieved at runtime. To avoid the overhead of adding AWS CDK just for creating secrets, given we are using the Serverless Framework to create the rest of the infrastructure, the secrets are created/updated with the AWS JavaScript SDK directly via a script we created called create-secrets.ts. Note: the pipeline YAML below lists the secrets explicitly, rather than via the parameterised template shown above.

Azure DevOps YAML

- task: AWSShellScript@1
  displayName: 'create-secrets'
  inputs:
    awsCredentials: ${{ parameters.serviceConnection }}
    regionName: $(AWS_DEFAULT_REGION)
    scriptType: 'inline'
    inlineScript: 'node create-secrets.js --outputDir=$(workingDirectory)'
    workingDirectory: '$(infraDirectory)'
    disableAutoCwd: true
  env:
    # Manually insert the secrets as environment variables because DevOps doesn't do it for us
    # https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#secret-variables
    SLACK_SIGNING_SECRET: '$(SLACK_SIGNING_SECRET)'
    SLACK_BOT_TOKEN: '$(SLACK_BOT_TOKEN)'
    SECRETS_ENV_KEY_NAMES: 'SLACK_SIGNING_SECRET;SLACK_BOT_TOKEN'

create-secrets.ts

This is called from the above YAML to securely create/update AWS secrets with the environment variables.

import {
  AWSError,
  SecretsManager,
} from 'aws-sdk'
import { CreateSecretResponse, ListSecretsResponse, PutSecretValueResponse } from 'aws-sdk/clients/secretsmanager'
import { writeFile } from 'fs'
import minimist = require('minimist')
import { resolve } from 'path'

// Create/update AWS secrets with environment variables.
// This outputs the secrets' ARNs into a JSON file for serverless.yml
// Steps:
// - Read the secret names from `SECRETS_ENV_KEY_NAMES`
// - For each secret
//     - Check if it already exists on AWS Secret Manager, if not, create
//     - Update the secret value
//     - Take the ARN
// - Put all ARNs into a JSON file

// See .env.sample for required process.env config
const secretsManager = new SecretsManager({
  region: process.env.AWS_DEFAULT_REGION,
})

type CreateOrUpdateSecretResult = {
  secretEnvKey: string
  secretName: string
  secretArn: string
  secretVersionId: string
}
const createOrUpdateSecret = (secretEnvKey: string, value: string | undefined) => {
  return new Promise<CreateOrUpdateSecretResult>((resolve, reject) => {
    const secretName = `WeeklyUpdateSlackBot${process.env.DEPLOYMENT_ENVIRONMENT}/${secretEnvKey}`

    if (!value) {
      reject(`Secret value for ${secretName} was not provided`)
      return
    }

    secretsManager.listSecrets({
      Filters: [{
        Key: 'name',
        Values: [secretName],
      }]
    }, (err: AWSError, listSecretResponse: ListSecretsResponse) => {
      if (err) {
        reject(err)
        return
      }
      const awsSecret = listSecretResponse.SecretList?.find(s => s.Name === secretName)
      if (!awsSecret) {
        secretsManager.createSecret({
          Name: secretName,
          SecretString: value,
        }, (err: AWSError, data: CreateSecretResponse) => {
          if (err) {
            reject(err)
            return
          }
          resolve({
            secretEnvKey,
            secretName,
            secretArn: data.ARN!,
            secretVersionId: data.VersionId!,
          })
        })
      } else {
        secretsManager.putSecretValue({
          SecretId: awsSecret.ARN!,
          SecretString: value
        }, (err: AWSError, data: PutSecretValueResponse) => {
          if (err) {
            reject(err)
            return
          }
          resolve({
            secretEnvKey,
            secretName,
            secretArn: data.ARN!,
            secretVersionId: data.VersionId!,
          })
        })
      }
    })
  })
}

;(async () => {
  if (!process.env.SECRETS_ENV_KEY_NAMES) {
    console.log('There are no secrets to be set')
    return
  }

  const args = minimist(process.argv.slice(2))
  // sample: --outputDir="../weekly-update"
  const pathToJsonOutput = resolve(__dirname, args.outputDir ?? '.', 'outputs.json')

  const secretEnvKeys = process.env.SECRETS_ENV_KEY_NAMES?.split(';')
  const createOrUpdateSecretPromises = secretEnvKeys.map((secretEnvKey) => 
    createOrUpdateSecret(secretEnvKey, process.env[secretEnvKey]))

  const output = await Promise.all(createOrUpdateSecretPromises).then(
    successes => {
      successes
        .forEach((success: CreateOrUpdateSecretResult) => {
          console.log(`Secret set for ${success.secretName} with version ${success.secretVersionId}`)
        })

      return successes.reduce((previous, current) => ({
        ...previous,
        [`${current.secretEnvKey}_ARN`]: current.secretArn,
      }), {})
    },
    (failures: any) => {
      if (failures instanceof Array) {
        failures.forEach((failure: AWSError) => {
          console.error(`Secret set FAILED: [${failure.code}] ${failure.name}: ${failure.message}`)
        })
        throw 'Failed'
      } else {
        throw failures
      }
    },
  )

  writeFile(pathToJsonOutput, JSON.stringify(output), (error) => {
    if (error) {
      console.log(`Failed to write results JSON file: ${error?.message}`)
    }
  })
})()
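
The script writes one <secret name>_ARN entry per secret into outputs.json, which serverless-aws.yml later reads via ${file(./outputs.json):...}. The shape looks roughly like this (the ARN values are illustrative placeholders):

// Illustrative shape of the generated outputs.json (ARNs are placeholders, not real)
const exampleOutputs = {
  SLACK_SIGNING_SECRET_ARN: 'arn:aws:secretsmanager:ap-southeast-2:123456789012:secret:...',
  SLACK_BOT_TOKEN_ARN: 'arn:aws:secretsmanager:ap-southeast-2:123456789012:secret:...',
}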

azure-pipeline.yml

Once the secrets are set up, we deploy with Serverless, using the AWS credentials from the service connection:

- task: AWSShellScript@1
  displayName: Deploy with serverless
  inputs:
    awsCredentials: ${{ parameters.serviceConnection }}
    regionName: $(AWS_DEFAULT_REGION)
    scriptType: 'inline'
    inlineScript: 'npx serverless@2.72.1 deploy'
    workingDirectory: '$(workingDirectory)'
    disableAutoCwd: true

serverless-aws.yml

As discussed earlier, we have two versions of the serverless.yml to facilitate the deployment pipeline, because the secret ARNs and AWS IAM permissions need to be configured for AWS, but not locally.

service: weekly-update
frameworkVersion: '2'
provider:
  name: aws
  # grant the lambda read permission to the secrets
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action: 
            - 'secretsmanager:GetResourcePolicy'
            - 'secretsmanager:GetSecretValue'
            - 'secretsmanager:DescribeSecret'
            - 'secretsmanager:ListSecretVersionIds'
          Resource:
            - ${file(./outputs.json):SLACK_SIGNING_SECRET_ARN}
            - ${file(./outputs.json):SLACK_BOT_TOKEN_ARN}
  runtime: nodejs14.x
  # secret ARNs
  environment:
    SLACK_SIGNING_SECRET_ARN: ${file(./outputs.json):SLACK_SIGNING_SECRET_ARN}
    SLACK_BOT_TOKEN_ARN: ${file(./outputs.json):SLACK_BOT_TOKEN_ARN}
functions:
  slack:
    handler: "app.handler"
    events:
      - http:
          path: slack/events
          method: post

serverless.yml

Here’s the local (default) version. It just requires a .env file with SLACK_SIGNING_SECRET and SLACK_BOT_TOKEN; be careful to add .env to your .gitignore to avoid accidentally committing those secrets.

service: weekly-update
frameworkVersion: '2'
provider:
  name: aws
  runtime: nodejs14.x
  region: ap-southeast-2
  environment:
    SLACK_SIGNING_SECRET: ${env:SLACK_SIGNING_SECRET}
    SLACK_BOT_TOKEN: ${env:SLACK_BOT_TOKEN}
functions:
  slack:
    handler: "app.handler"
    events:
      - http:
          path: slack/events
          method: post
plugins:
  - serverless-dotenv-plugin
  - serverless-plugin-typescript
  - serverless-offline

The Slack handler

We modified Slack’s app handler slightly to read the secrets at start-up:

import { AwsLambdaReceiver } from '@slack/bolt'
// getSecret is shown further below

module.exports.handler = async (event: any, context: any, callback: any) => {
  // Resolve the Slack secrets from AWS Secrets Manager before creating the receiver
  process.env.SLACK_SIGNING_SECRET = await getSecret(process.env.SLACK_SIGNING_SECRET_ARN as string)
  process.env.SLACK_BOT_TOKEN = await getSecret(process.env.SLACK_BOT_TOKEN_ARN as string)

  const awsLambdaReceiver = await createReceiver()
  const handler = await awsLambdaReceiver.start()
  return handler(event, context, callback)
}

const createReceiver = async () => {
  const awsLambdaReceiver = new AwsLambdaReceiver({
    signingSecret: process.env.SLACK_SIGNING_SECRET!,
  })
  // The Bolt App (with the /weekly-update command) is attached to this receiver; omitted here for brevity
  return awsLambdaReceiver
}

The code for getSecret is:

import { SecretsManager } from 'aws-sdk'

export async function getSecret(secretArn: string): Promise<string> {
  const client = new SecretsManager({
    // AWS_REGION is set automatically in the Lambda runtime environment
    // https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime
    region: process.env.AWS_REGION,
  })

  return new Promise((resolve, reject) => {
    client.getSecretValue({ SecretId: secretArn }, (err, data) => {
      if (err) {
        console.log(JSON.stringify(err))
        reject(err)
        return
      }

      if ('SecretString' in data) {
        resolve(data.SecretString as string)
      } else {
        resolve(Buffer.from(data.SecretBinary as any, 'base64').toString('ascii'))
      }
    })
  })
}

Conclusion

In this blog post, we discussed our approach to deploying a Slack Bolt app from Azure DevOps to AWS Lambda. We shared a few lessons about packaging and deploying the app and, most importantly, keeping the Slack secrets secure.

Thank you for your time. Happy coding.