BuildWithAI: Prompt Engineering 6 DR Tools with Amazon Bedrock

Source: DEV Community
Overview

Now that the architecture is in place (the serverless stack, models.config.json, the 5-layer guardrails), let's get into what happens inside each Lambda. This part covers the prompt engineering: the system prompt pattern, how each tool's instructions were tuned, and the patterns that are reusable in any Amazon Bedrock project.

Quick recap from the previous part: every tool runs as its own Lambda function behind API Gateway, reads its model and limits from a central config file, and passes through five layers of cost protection before touching Bedrock. If you haven't gone through that yet, it gives useful context for what follows here.

The handler pattern

Every Lambda follows the same skeleton. The handler reads its config from models.config.json via a shared module, then calls Bedrock with a tool-specific system prompt:

```python
import json
import logging
import sys

import boto3

sys.path.insert(0, "/opt/python")  # shared modules from the Lambda Layer
from guardrails import run_guardrails, DailyLimitExceeded, ToolsDisabled
```
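For context, a per-tool entry in a central config like models.config.json would map each tool to its model and limits. The shape below is an illustrative guess, not the article's actual schema (the tool name and field names are assumptions):

```json
{
  "summarize-runbook": {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "maxTokens": 1024,
    "temperature": 0.2,
    "dailyRequestLimit": 200
  }
}
```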
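To make the skeleton concrete, here is a minimal sketch of how the rest of a handler might assemble the Bedrock Converse API call from that config. The helper name `build_converse_request` and the config field names are assumptions for illustration, not the article's exact code:

```python
def build_converse_request(tool_config: dict, system_prompt: str, user_text: str) -> dict:
    """Assemble kwargs for the bedrock-runtime Converse API from per-tool config.

    tool_config mirrors one entry of models.config.json (illustrative fields).
    """
    return {
        "modelId": tool_config["modelId"],
        "system": [{"text": system_prompt}],
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "inferenceConfig": {
            # Fall back to conservative defaults when a tool omits a limit.
            "maxTokens": tool_config.get("maxTokens", 1024),
            "temperature": tool_config.get("temperature", 0.2),
        },
    }

# Inside the handler, after the guardrails pass (sketch only):
# bedrock = boto3.client("bedrock-runtime")
# response = bedrock.converse(**build_converse_request(cfg, SYSTEM_PROMPT, user_text))
# answer = response["output"]["message"]["content"][0]["text"]
```

Keeping the request assembly in a pure function like this makes it easy to unit-test the payload without touching AWS.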