AWS S3

Write events to an S3 bucket as JSON objects, partitioned by date.

The aws_s3 provider writes each event to an S3 bucket as a separate JSON object, partitioned by date. Use it for webhook archiving, data lake ingestion, or triggering S3 event notifications into Lambda or Step Functions.

When to use this: you want durable long-term storage of raw events, or you're building a pipeline that reads from S3 downstream.

Requests are signed with SigV4 using crypto.subtle; no AWS SDK is involved. Object keys follow the pattern {prefix}{YYYY}/{MM}/{DD}/{event_id}.json, so date-based lifecycle policies work out of the box.
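The signing-key derivation at the heart of SigV4 is a short HMAC-SHA256 chain. Here is a Python sketch of that derivation — it mirrors the published SigV4 algorithm, not hookstream's actual crypto.subtle implementation, and the secret key shown is illustrative:

```python
import hashlib
import hmac

def signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    """Derive the SigV4 signing key: an HMAC-SHA256 chain over date, region, service."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# The derived key is then used to sign the request's string-to-sign;
# it is scoped to a single date, region, and service.
key = signing_key("wJal...", "20260301", "us-east-1")
```

Because the key is scoped by date and region, a leaked signature cannot be replayed against another region or on a later day.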

Configuration

bucket string required

S3 bucket name.

region string required

AWS region the bucket lives in, e.g. us-east-1.

prefix string

Optional key prefix prepended to every object, e.g. "webhooks/". A trailing / is added automatically if missing. When omitted, objects land at the bucket root under {YYYY}/{MM}/{DD}/{event_id}.json.

access_key_id string required

IAM access key ID with s3:PutObject permission on the bucket.

secret_access_key string required

IAM secret access key. Stored encrypted, masked in GET responses.

format string default: json

Object format. json (default) writes application/json. raw writes application/octet-stream with the raw event body.

Object Layout

Object keys are composed as:

```text
{prefix}{YYYY}/{MM}/{DD}/{event_id}.json
```
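The composition rule can be sketched in Python — a sketch of the documented behavior (including the automatic trailing `/` on the prefix), not hookstream's actual implementation:

```python
from datetime import datetime, timezone

def object_key(prefix: str, event_id: str, received_at: datetime) -> str:
    """Build an S3 key of the form {prefix}{YYYY}/{MM}/{DD}/{event_id}.json."""
    # A trailing "/" is appended to a non-empty prefix if missing.
    if prefix and not prefix.endswith("/"):
        prefix += "/"
    return f"{prefix}{received_at:%Y/%m/%d}/{event_id}.json"

print(object_key("webhooks/", "evt_abc123", datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)))
# webhooks/2026/03/01/evt_abc123.json
```

Note that an empty prefix yields keys at the bucket root, matching the behavior described under the prefix config field.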

For example, webhooks/2026/03/01/evt_abc123.json. Each object contains the full event:

```json
{
  "event_id": "evt_abc123",
  "source_id": "src_xyz",
  "method": "POST",
  "headers": { "...": "..." },
  "body": { "...": "..." },
  "received_at": "2026-03-01T12:00:00Z"
}
```

Authentication

Create an IAM user with an inline policy granting s3:PutObject on your bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-webhook-archive/*"
    }
  ]
}
```

Generate an access key and paste both values into the config.

Example

```bash
curl -X POST https://hookstream.io/v1/destinations \
  -H "X-API-Key: $HOOKSTREAM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Event Archive",
    "type": "aws_s3",
    "config": {
      "bucket": "my-webhook-archive",
      "region": "us-east-1",
      "prefix": "webhooks/",
      "access_key_id": "AKIA...",
      "secret_access_key": "wJal..."
    }
  }'
```

Gotchas

hookstream never deletes S3 objects. Set an S3 lifecycle rule on the bucket to expire archived events after N days, or transition them to Glacier.
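For example, a lifecycle rule that expires archived events after 90 days might look like this (bucket prefix and retention window are illustrative):

```json
{
  "Rules": [
    {
      "ID": "expire-webhook-archive",
      "Filter": { "Prefix": "webhooks/" },
      "Status": "Enabled",
      "Expiration": { "Days": 90 }
    }
  ]
}
```

Apply it with aws s3api put-bucket-lifecycle-configuration --bucket my-webhook-archive --lifecycle-configuration file://lifecycle.json. Because keys are date-partitioned, the rule's age-based expiry lines up exactly with the directory layout.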

Prefixes without a trailing / would produce keys like webhooks2026/... instead of webhooks/2026/.... hookstream appends the separator automatically, but include it explicitly so the configured prefix matches the keys you see in S3.

Next Steps

AWS SQS: Queue events for async processing instead of archiving.

AWS EventBridge: Route events through EventBridge rules.

Event Archival: Learn how hookstream archives old events to R2 automatically.

Destinations API: Full API reference.