Amazon DynamoDB is a NoSQL database service offered by AWS. It integrates seamlessly with all other connectors in Data Pipelines. In this demo we focus on moving data between DynamoDB and Google Sheets.
More advanced use cases, such as combining data from the two sources, are also supported. For example, you could join a DynamoDB table with a Google spreadsheet and write the result back to Google Sheets or to any of the other supported data connectors.
It is important to keep Google Sheets' limitations in mind. According to the official documentation, a worksheet can contain at most 5 million cells. A DynamoDB table can hold far more than this, so make sure the data you are writing to Google Sheets fits in a single worksheet.
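As a quick sanity check before exporting, you can estimate whether a table's contents will fit within the worksheet cell limit. A minimal sketch (the function name and the item/attribute counts are illustrative, not part of the product):

```python
# Estimate whether a DynamoDB export fits in a single Google Sheets worksheet.
# Google Sheets counts every cell in the grid, so cells = rows x columns
# (including the header row).

SHEETS_CELL_LIMIT = 5_000_000  # per-worksheet limit cited above


def fits_in_worksheet(item_count: int, attribute_count: int,
                      header_row: bool = True) -> bool:
    """Return True if item_count rows of attribute_count columns fit."""
    rows = item_count + (1 if header_row else 0)
    return rows * attribute_count <= SHEETS_CELL_LIMIT


# Example: 400,000 items with 12 attributes each -> 4,800,012 cells, fits.
print(fits_in_worksheet(400_000, 12))   # True
# 1,000,000 items with 6 attributes -> 6,000,006 cells, too large.
print(fits_in_worksheet(1_000_000, 6))  # False
```

If the estimate is over the limit, consider filtering columns or rows in your pipeline before writing to Google Sheets.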
One difference between writing data to DynamoDB and to other databases is that Data Pipelines will not create the table for you, so you will have to make sure the table already exists before running your pipeline.
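One way to create the destination table up front is with the AWS SDK. The sketch below builds a CreateTable request; the table name, key schema, and billing mode are assumptions for illustration, and the boto3 calls that would send it are shown in comments so the snippet stays self-contained:

```python
# Parameters for creating the destination table ahead of a pipeline run.
# Table name, key attribute, and billing mode are illustrative.
create_table_params = {
    "TableName": "pipeline_output",
    "AttributeDefinitions": [
        {"AttributeName": "id", "AttributeType": "S"},  # S = string
    ],
    "KeySchema": [
        {"AttributeName": "id", "KeyType": "HASH"},  # partition key
    ],
    "BillingMode": "PAY_PER_REQUEST",  # or PROVISIONED, see below
}

# With boto3 installed and AWS credentials configured, the table would be
# created and waited on like this:
#
#   import boto3
#   dynamodb = boto3.client("dynamodb")
#   dynamodb.create_table(**create_table_params)
#   dynamodb.get_waiter("table_exists").wait(TableName="pipeline_output")
```

Waiting for the `table_exists` state matters because a newly created table is not immediately ready to accept writes.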
Read and write performance depends heavily on your table's throughput configuration. For example, reads from a table that is configured to auto scale and has not been used for a while can take a few minutes, because it takes time for AWS to scale up the throughput according to the scaling policy. Such a delay may be acceptable for a scheduled run that happens once a day in the background, but not while you are inspecting data or building your pipelines interactively. For those scenarios you will want to provision some capacity.
You can read more about provisioning capacity for your tables in the official AWS DynamoDB documentation.
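If you need predictable latency while building pipelines, a table can be switched to provisioned capacity through the UpdateTable API. A sketch of the request parameters follows; the capacity units are assumptions and should be sized for your workload, with the boto3 call shown in a comment:

```python
# Switch a table to provisioned capacity so interactive reads do not wait
# for auto scaling. Capacity values below are illustrative.
update_table_params = {
    "TableName": "pipeline_output",
    "BillingMode": "PROVISIONED",
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 50,   # sustained reads/sec for pipeline scans
        "WriteCapacityUnits": 10,  # sustained writes/sec
    },
}

# With boto3:
#
#   import boto3
#   boto3.client("dynamodb").update_table(**update_table_params)
```

Remember that provisioned capacity is billed whether or not it is used, so you may want to switch back to on-demand once your pipeline is in production.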
In the following video we demonstrate how to move data between DynamoDB and Google Sheets in either direction. With just a few clicks you could set up a daily monitoring job that exports data from DynamoDB (to which non-developers do not have access) to a Google spreadsheet which can be shared with anyone. Alternatively, spreadsheet data from Google Sheets can be written to a DynamoDB table at the desired intervals. This could be a way to ingest data from users without having to write any code.
Note that your AWS IAM user needs the AmazonDynamoDBFullAccess permission to read from and write to DynamoDB.
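AmazonDynamoDBFullAccess is an AWS-managed policy, so granting it means attaching the policy to the IAM user. A sketch of the request (the user name is a hypothetical placeholder; the boto3 call is shown in a comment):

```python
# ARN of the AWS-managed policy that grants full DynamoDB access.
POLICY_ARN = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"

attach_params = {
    "UserName": "data-pipelines-user",  # illustrative IAM user name
    "PolicyArn": POLICY_ARN,
}

# With boto3 and sufficient IAM permissions:
#
#   import boto3
#   boto3.client("iam").attach_user_policy(**attach_params)
```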
If you have any questions about connecting your data or setting up your pipelines, contact us.