Best Prompts
1. Deep Agentic Orchestration Prompt
Purpose: Stress-test reasoning + multi-agent coordination.
Pattern: Orchestrator-worker + evaluator-optimizer loop.
Prompt Core:
"You are a Chief Systems Architect orchestrating a swarm of specialized AI agents (Finance, Security, Legal, DevOps). Break down the problem of migrating a multinational bank to a federated cloud model under GDPR and NIS2. Use a manager pattern to delegate tasks to each agent, synthesize outputs, and run iterative evaluation until contradictions are resolved. Present a final integrated blueprint with trade-offs, blind spots, and residual risks."
Why it pushes limits: requires dynamic decomposition, legal + technical cross-reasoning, and self-correction.
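The delegate-synthesize-evaluate loop can be sketched in a few lines. This is a minimal illustration, not a real agent framework: `ask` is a hypothetical stand-in for an LLM call, returning canned strings so the control flow is visible.

```python
# Orchestrator-worker + evaluator-optimizer loop (sketch).
# `ask` is a hypothetical placeholder for a chat-completion API call.
def ask(role: str, task: str) -> str:
    # Stub: a real implementation would call an LLM with a role prompt here.
    return f"[{role}] analysis of: {task}"

def orchestrate(problem: str, agents: list[str], max_rounds: int = 3) -> dict:
    results = {agent: ask(agent, problem) for agent in agents}  # delegate
    synthesis = ""
    for _ in range(max_rounds):
        synthesis = ask("Orchestrator", " | ".join(results.values()))
        verdict = ask("Evaluator", synthesis)  # evaluator-optimizer step
        if "contradiction" not in verdict.lower():  # stop when consistent
            break
        results = {a: ask(a, verdict) for a in agents}  # re-delegate
    return {"blueprint": synthesis, "workers": results}

plan = orchestrate("bank cloud migration under GDPR", ["Finance", "Security"])
```

The key design point is the stopping condition: the loop only terminates when the evaluator stops flagging contradictions (or the round budget runs out).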
2. Meta-Prompting for Prompt Engineering
Purpose: Pushes models to improve their own instructions.
Pattern: Meta-prompting + automatic prompt engineering.
Prompt Core:
"Take the following draft prompt and transform it into a 'Level-3 research prompt' that anticipates blind spots, integrates epistemological critique, and applies advanced prompting techniques (CoT, ToT, Self-Consistency). Provide the improved prompt, explain why it's stronger, and simulate one iteration of its output."
Why it pushes limits: recursive reflection + prompt optimization forces the model into a feedback loop against itself.
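The feedback loop is just a prompt being fed back through an improvement step. A minimal sketch, assuming a hypothetical `improve` function in place of a real LLM call:

```python
# Meta-prompting loop (sketch): the prompt is iteratively rewritten.
# `improve` is an illustrative stub for an LLM call that strengthens a prompt.
def improve(prompt: str) -> str:
    # Stub: a real call would ask the model to rewrite its own instructions.
    return prompt + " Think step by step and verify each claim."

def meta_prompt(draft: str, iterations: int = 2) -> list[str]:
    history = [draft]
    for _ in range(iterations):
        history.append(improve(history[-1]))  # feed output back as input
    return history

versions = meta_prompt("Summarize the GDPR impact of this design.")
```

Keeping the full `history` matters in practice: it lets you diff versions and detect when the "improvement" loop starts degrading the prompt instead.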
3. Multi-Modal Thought Expansion
Purpose: Combines text, vision, and reasoning.
Pattern: Tree-of-Thought + multimodal grounding.
Prompt Core:
"Given this uploaded architectural diagram and regulatory text, construct a Tree-of-Thought exploration: (a) extract visual entities, (b) map them to legal obligations (GDPR Art. 22, EU AI Act), (c) branch reasoning into 'optimistic', 'realistic', and 'adversarial' scenarios. Evaluate each branch, prune weak reasoning, and consolidate into a compliance strategy."
Why it pushes limits: forces alignment of multimodal reasoning with structured ToT search.
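The expand-evaluate-prune cycle at the heart of Tree-of-Thought can be shown without any multimodal machinery. In this sketch, `expand` and `score` are deterministic stubs standing in for LLM calls; the branch names and scores are invented for illustration.

```python
# Tree-of-Thought search over scenario branches (sketch).
def expand(entity: str) -> list[str]:
    # Stub: a real expansion step would prompt the model per scenario.
    return [f"{entity}:{s}" for s in ("optimistic", "realistic", "adversarial")]

def score(branch: str) -> float:
    # Stub heuristic; a real evaluator would be another LLM call.
    return 0.9 if "realistic" in branch else 0.4

def tree_of_thought(entities: list[str], keep: int = 2) -> list[str]:
    branches = [b for e in entities for b in expand(e)]  # branch
    branches.sort(key=score, reverse=True)               # evaluate
    return branches[:keep]                               # prune weak reasoning

best = tree_of_thought(["data-flow", "logging"])
```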
4. Recursive Strategy Decomposition
Purpose: Maximum depth reasoning.
Pattern: Recursion-of-Thought + Plan-and-Solve.
Prompt Core:
"Decompose the question 'Does Zero Trust provide sufficient protection against adaptive ransomware (Storm-0501)?' into recursive sub-problems: (1) Identity, (2) Cloud, (3) SaaS. For each sub-problem, run a Plan-and-Solve loop until contradictions or unhandled risks appear. Stitch back together into a residual risk matrix with compensating controls."
Why it pushes limits: recursive decomposition can run 5-10 layers deep; it forces the model to manage complexity and avoid collapse.
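The recursive shape is worth seeing explicitly: decompose until a sub-problem is atomic, then run Plan-and-Solve on each leaf and merge the results upward. A minimal sketch with a hard-coded decomposition table standing in for an LLM call:

```python
# Recursion-of-Thought sketch: decompose, recurse, stitch results back.
# The decomposition table and result strings are purely illustrative.
def decompose(problem: str) -> list[str]:
    subs = {"zero-trust": ["identity", "cloud", "saas"]}
    return subs.get(problem, [])  # empty list means the problem is atomic

def solve(problem: str, depth: int = 0, max_depth: int = 5) -> dict:
    children = decompose(problem) if depth < max_depth else []
    if not children:
        return {problem: "plan-and-solve result"}  # leaf: run Plan-and-Solve
    matrix = {}
    for sub in children:
        matrix.update(solve(sub, depth + 1, max_depth))  # recurse
    return matrix

risk_matrix = solve("zero-trust")
```

The `max_depth` guard is the anti-collapse mechanism the section warns about: without it, an over-eager decomposer recurses indefinitely.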
5. Epistemic Contradiction Finder
Purpose: Expose hidden assumptions.
Pattern: Self-Criticism + Chain-of-Verification.
Prompt Core:
"Analyze this research article [insert text]. Step 1: summarize key claims. Step 2: generate five potential contradictions or blind spots using Chain-of-Verification. Step 3: switch to a 'skeptical peer reviewer' role and critique your own summary. Step 4: synthesize final epistemic contradictions into a decision framework."
Why it pushes limits: requires both advocacy and skepticism in one loop.
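Chain-of-Verification follows a fixed pipeline: draft, generate verification questions, answer them independently, revise. A sketch with a deterministic stub `llm` in place of real model calls:

```python
# Chain-of-Verification pipeline (sketch); `llm` is a stand-in stub.
def llm(prompt: str) -> str:
    # Stub: echoes its prompt so the pipeline stages stay traceable.
    return f"answer({prompt})"

def chain_of_verification(claim: str, n_questions: int = 3) -> dict:
    draft = llm(f"summarize: {claim}")
    questions = [llm(f"verification question {i}: {draft}")
                 for i in range(n_questions)]
    answers = [llm(q) for q in questions]  # answered independently of draft
    revised = llm(f"revise {draft} given {answers}")
    return {"draft": draft,
            "checks": list(zip(questions, answers)),
            "final": revised}

report = chain_of_verification("Zero Trust stops ransomware")
```

Answering each verification question in isolation, rather than in one combined pass, is what gives the technique its self-skeptical bite.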
6. Agentic Governance Simulator
Purpose: Socio-technical + foresight challenge.
Pattern: Multi-agent simulation + scenario ideation.
Prompt Core:
"Simulate a 2030 boardroom with four agents (CEO, Regulator, CISO, AI Ethicist). The agenda: approve or reject deployment of a cross-border AI judicial system. Each agent must argue based on incentives, legal risks, and technical realities. Record the dialogue, highlight deadlocks, and propose an arbitration mechanism."
Why it pushes limits: requires role consistency + adversarial reasoning + synthesis.
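The simulation loop itself is simple: each role speaks in turn, and the transcript is scanned for deadlock. In this sketch the stances are canned strings, where a real version would keep a per-role system prompt and call an LLM each turn.

```python
# Boardroom multi-agent simulation (sketch); stances are illustrative stubs.
ROLES = {
    "CEO": "approve: market advantage",
    "Regulator": "reject: legal risk",
    "CISO": "reject: attack surface",
    "AI Ethicist": "reject: due-process concerns",
}

def simulate(agenda: str, rounds: int = 1) -> dict:
    transcript = []
    for _ in range(rounds):
        for role, stance in ROLES.items():
            transcript.append((role, f"{stance} re: {agenda}"))
    votes = [statement.split(":")[0] for _, statement in transcript]
    deadlock = len(set(votes)) > 1  # conflicting positions remain
    return {"transcript": transcript, "deadlock": deadlock}

session = simulate("cross-border AI judicial system")
```

Detecting the deadlock programmatically is what sets up the final step of the prompt: only when positions conflict does an arbitration mechanism need to be proposed.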
7. Secure Tool-Use Prompt
Purpose: Precision in action execution.
Pattern: Agent-tool interface optimization.
Prompt Core:
"You are a compliance automation agent. Use only the documented APIs: {API spec here}. Follow poka-yoke principles: never guess parameters, handle errors gracefully, and provide diff-based edits only. At each step, explain your reasoning before tool invocation. If multiple tools overlap, evaluate which is safest to call."
Why it pushes limits: extreme precision needed; one slip breaks compliance.
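The poka-yoke idea can be enforced in the harness rather than trusted to the model: validate every call against the declared spec and refuse to invoke anything undocumented or incomplete. The API spec and tool name below are invented for illustration.

```python
# Tool-use guardrail (sketch): validate against a declared spec,
# never guessing missing parameters. API_SPEC is a hypothetical example.
API_SPEC = {"update_policy": {"required": ["policy_id", "diff"]}}

def safe_invoke(tool: str, params: dict) -> dict:
    spec = API_SPEC.get(tool)
    if spec is None:
        raise ValueError(f"undocumented tool: {tool}")  # poka-yoke: no guessing
    missing = [p for p in spec["required"] if p not in params]
    if missing:
        raise ValueError(f"refusing to guess parameters: {missing}")
    return {"tool": tool, "params": params, "status": "invoked"}

ok = safe_invoke("update_policy", {"policy_id": "p1", "diff": "+rule"})
```

Raising instead of silently defaulting is the point: in a compliance context, a loud failure is recoverable, while a guessed parameter is not.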
8. Enterprise Use-Case Miner
Purpose: Extract hidden value from raw workflows.
Pattern: Use-case primitives (content, automation, coding, data, research, strategy).
Prompt Core:
"I am the CFO of a healthcare provider. Mine our workflows for AI use cases across the six primitives (content, automation, research, coding, data, strategy). Rank them by ROI, compliance risk, and implementation difficulty. Provide an 'Anti To-Do List' of tasks AI should immediately eliminate. Include adoption blind spots."
Why it pushes limits: mixes ROI quantification, risk, and foresight.
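The ranking step reduces to scoring candidates on the three axes and sorting. A sketch with invented example cases and a deliberately naive scoring rule (ROI minus risk minus difficulty); a real exercise would weight the axes per organization.

```python
# Use-case ranking across primitives (sketch); cases and scores are invented.
CASES = [
    {"name": "claims triage", "primitive": "automation",
     "roi": 3, "risk": 1, "difficulty": 1},
    {"name": "report drafting", "primitive": "content",
     "roi": 2, "risk": 1, "difficulty": 1},
]

def rank(cases: list[dict]) -> list[dict]:
    # Higher ROI is better; higher compliance risk and difficulty are worse.
    return sorted(cases,
                  key=lambda c: c["roi"] - c["risk"] - c["difficulty"],
                  reverse=True)

ranked = rank(CASES)
```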
9. Adaptive Safety Reviewer
Purpose: Security + governance alignment.
Pattern: Role prompting + Answer Engineering.
Prompt Core:
"You are an AI Safety & Privacy Reviewer. Region = EU. Context = draft AI governance prompt. Task = run NIST AI RMF + OWASP LLM Top 10 + GDPR checks. Score each risk (0-3 × 0-3). Propose mitigations, redact sensitive data, and rewrite the prompt in a safe, compliant form."
Why it pushes limits: integrates multi-framework governance into a structured pipeline.
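The 0-3 × 0-3 scoring grid is a standard likelihood-times-impact matrix and is easy to make explicit. A sketch with invented finding names and a threshold chosen purely for illustration:

```python
# Risk scoring for the reviewer pipeline (sketch): likelihood x impact
# on a 0-3 scale. Finding names and the threshold are illustrative.
def score_risk(likelihood: int, impact: int) -> int:
    assert 0 <= likelihood <= 3 and 0 <= impact <= 3
    return likelihood * impact  # 0..9

def review(findings: dict, threshold: int = 4) -> dict:
    scored = {name: score_risk(*li) for name, li in findings.items()}
    return {"scores": scored,
            "mitigate": [n for n, s in scored.items() if s >= threshold]}

result = review({"prompt-injection": (3, 3), "pii-leak": (1, 2)})
```

Structuring the output as scores plus a mitigation list mirrors the Answer Engineering idea in the pattern: the model is told exactly what shape the answer must take.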
10. Meta-Foresight Prompt
Purpose: Pushes epistemology + scenario design.
Pattern: Step-back prompting + analogical prompting.
Prompt Core:
"Step back and ask: what hidden assumptions define our vision of a Post-AI society? Use analogical prompting to map these to past technological shifts (printing press, industrial revolution, internet). Generate three analogues, then build forward-looking scenarios ('Collapse', 'Stagnation', 'Reformation'). End with epistemological blind spots."
Why it pushes limits: requires historical analogy, foresight, and epistemic critique in one chain.
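Step-back prompting is a two-stage call: first abstract the question to a general principle, then answer via that principle, here mapped across analogues. A sketch with a deterministic `llm` stub in place of real model calls:

```python
# Step-back + analogical prompting (sketch); `llm` is a stand-in stub.
def llm(prompt: str) -> str:
    # Stub: echoes its prompt so each stage of the chain stays visible.
    return f"response({prompt})"

def step_back(question: str, analogues: list[str]) -> dict:
    # Stage 1: step back to the underlying principle.
    principle = llm(f"what general principle underlies: {question}")
    # Stage 2: project the principle through each historical analogue.
    scenarios = {a: llm(f"apply {principle} to the {a}") for a in analogues}
    return {"principle": principle, "scenarios": scenarios}

foresight = step_back("post-AI society",
                      ["printing press", "industrial revolution", "internet"])
```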