Forwarding the events using Python
This article takes you step-by-step through the configuration process using Python:
- Download the source file
- Edit the config.json file
- Prepare a ZIP file for upload
- Create a new Lambda function
Download the source file
Download this ZIP file, decompress it, and copy the following folder and two files into the folder where you saved the previously downloaded Devo domain certificates:
Edit the config.json file
Open the config.json file in an editor and edit the values for the following parameters:
- address: The host address of the Devo Cloud for the region you are using. It should be one of:
  - USA: us.elb.relay.logtrust.net
  - Europe: eu.elb.relay.logtrust.net
- port: The inbound port number of the Devo Platform host. This should always be 443.
- chain: The name of the Devo domain chain CA file. This is usually chain.crt.
- cert: The name of the Devo domain certificate file.
- key: The name of the Devo domain private key file.
- tag: The Devo tag that corresponds to the technology that generated the events you are sending to Devo. In this case, the tag is cloud.aws.cloudtrail.events, which is already specified.

Save the file in the folder where the domain certificates and Python script are saved.
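Once edited, config.json might look like the following sketch. The certificate and key file names (and their extensions) are illustrative; substitute the actual file names downloaded from your Devo domain:

```json
{
  "address": "us.elb.relay.logtrust.net",
  "port": 443,
  "chain": "chain.crt",
  "cert": "devo_domain.crt",
  "key": "devo_domain.key",
  "tag": "cloud.aws.cloudtrail.events"
}
```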
Prepare a ZIP file for upload
You should have a folder with the following five files plus the devo folder (and its contents): your updated and renamed configuration file, the Lambda Python script file, and the three certificate files you downloaded from your Devo domain. Note that two of the certificate files should have the name of your Devo domain (for example, devo_domain).
Create a ZIP file containing the folder plus the five files, and name it whatever you like. You will upload this ZIP to AWS when you create the Lambda function in the next procedure.
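If you prefer to script this step, the packaging can be sketched in Python with the standard-library zipfile module. The function name below is illustrative; the point is that file paths are stored relative to the folder root so the Lambda runtime can import them:

```python
# Sketch: package the Lambda sources (config file, script, certificates,
# and the devo/ folder) into a ZIP for upload to AWS.
import os
import zipfile

def build_lambda_zip(src_dir, zip_path):
    """Zip every file under src_dir, preserving paths relative to
    src_dir so imports resolve inside the Lambda runtime."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # arcname drops the src_dir prefix from the stored path
                zf.write(full, arcname=os.path.relpath(full, src_dir))
    return zip_path
```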
Create a new Lambda function
This procedure guides you through creating the new Lambda function that will monitor the S3 bucket for changes.
- Create a new AWS Lambda function in the same zone in which the S3 bucket resides.
- Click Blueprints, then click the s3-get-object-python blueprint tile.
- Click the Configure button. The next page contains three sections: Basic information, S3 trigger, and Lambda function code.
- In the Basic information section, enter a Name for the new function.
- If you are using an existing role, make sure that it has Lambda execution and S3 read permissions. Otherwise, create a new one: under Role, select Create new role from AWS Policy Templates, enter a role name, and select Amazon S3 object read-only permissions as the Policy Template.
- In the S3 trigger section, select the Bucket that contains the events, set the Event type to All object create events, then select Enable trigger.
- Click Create function. The next page contains several sections in which you configure the details of your new function.
- In the Function code section, for Function package, click Upload and select the ZIP file you created earlier. Then, click Save to upload the file.
- In the Execution role section, select the role you specified or created for the function. In the Basic settings section, set the Memory and Timeout to an interval that is close to, but less than, the event creation frequency. For example, if the log file creation frequency is 5 minutes, set the Timeout to 4 minutes and 30 seconds. In the Network section, select No VPC for the VPC value.
- Now, select the new function to view its details. In the Execution role area, click View the <function-name> role to edit the role permissions.
- On the Permissions tab, click Attach policy. Select AmazonS3ReadOnlyAccess, then click Attach policy.
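Teams that script their deployments can perform the same creation step with boto3 instead of the console. This is a sketch, not the documented procedure: the handler name and runtime are assumptions, so match them to the actual script inside your ZIP, and role_arn must point at a role with the permissions described above:

```python
# Sketch: create the Lambda function programmatically with boto3.
def lambda_params(name, role_arn, zip_bytes, timeout=270, memory=128):
    """Build the keyword arguments for create_function. Timeout
    defaults to 270 s (4 min 30 s), just under a 5-minute log
    creation frequency, matching the console guidance above."""
    return {
        "FunctionName": name,
        "Runtime": "python3.9",          # assumption: match your script
        "Role": role_arn,
        "Handler": "lambda_function.lambda_handler",  # assumption
        "Timeout": timeout,
        "MemorySize": memory,
        "Code": {"ZipFile": zip_bytes},
    }

def create_forwarder(name, role_arn, zip_path):
    # boto3 is imported lazily so the helper above stays usable
    # in environments without the AWS SDK installed.
    import boto3
    client = boto3.client("lambda")
    with open(zip_path, "rb") as f:
        return client.create_function(**lambda_params(name, role_arn, f.read()))
```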
Now you can confirm that the Lambda function has been correctly associated with the bucket. Go to S3 and open the bucket. On the bucket's Properties tab, make sure that there is an active notification associated with Events.
If there is no active notification, click the Events tile, then click Add notification. Set up a new event that triggers on all object create events and click Save.
Now, every time a new object is written to the S3 bucket, its events will be sent to your Devo domain with the tag specified in the config.json file.
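The forwarding performed by the provided script amounts to reading each new object and writing its events to the Devo relay over a mutually authenticated TLS connection. A minimal sketch of that flow, assuming config.json uses the keys address, port, chain, cert, key, and tag, and a syslog-style frame of the form `<prio>timestamp hostname tag: message` (the helper names here are illustrative, not the script's actual API):

```python
# Sketch: frame events and send them to the Devo relay over TLS.
import socket
import ssl
from datetime import datetime, timezone

def frame_event(tag, message, hostname="lambda"):
    """Build one syslog-style line: <prio>timestamp hostname tag: message."""
    ts = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    return f"<14>{ts} {hostname} {tag}: {message}\n"

def send_events(config, events):
    """Open a TLS connection authenticated with the domain certificates
    from config.json and send one framed line per event."""
    ctx = ssl.create_default_context()
    ctx.load_verify_locations(config["chain"])
    ctx.load_cert_chain(certfile=config["cert"], keyfile=config["key"])
    with socket.create_connection((config["address"], config["port"])) as raw:
        with ctx.wrap_socket(raw, server_hostname=config["address"]) as tls:
            for event in events:
                tls.sendall(frame_event(config["tag"], event).encode("utf-8"))
```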