Eduardo Aguilar Pelaez
CTO & Co-founder at Legal Engine
Building AI systems that transform how legal professionals work
PhD & MBA, Imperial College London
About
I'm a technical founder who bridges deep engineering research with business strategy and product thinking in AI for professional services.
As CTO and Co-founder of Legal Engine, I lead the development of AI-powered voice agents that help law firms handle client intake, matter routing, and legal directory submissions. We're building technology that lets lawyers focus on legal work instead of administrative overhead.
My career spans research to production: a PhD in wearable medical devices at Imperial College London, where my work on acoustic breathing monitors was commercialised and is now CE-marked for clinical use. I've held executive roles at companies reaching millions of users, reported directly to the CEO at Canonical (Ubuntu), and served as a voting member of the Cloud Native Computing Foundation.
I've published 11 peer-reviewed papers with 186+ citations, hold 2 patents, and have been featured in Forbes and WIRED. I've spent 18+ years as a reviewer for IEEE conferences and served on grant selection committees for Cancer Research UK.
I believe in building products that actually ship and solve real problems. No buzzwords, no vaporware - just working software that makes a difference.
Work
Legal Engine
Co-founder & CTO, 2023 - Present
Building AI voice agents for UK law firms. Our technology handles inbound calls, intelligently routes matters to the right lawyers, and automates legal directory submissions.
- Launched AI voice agents processing calls for Top 100 UK law firms
- Built ElevenLabs voice AI integration with enterprise-grade security
- Developed proprietary Composable Skill Compiler for deterministic AI workflows
- Scaled from concept to live production with paying enterprise customers
Converge
Head of Product & Reality Capture Engineering, 2021 - 2023
Led product strategy and engineering for a construction technology startup. Developed the PrecastDNA tracking system using radio signal trilateration.
- Generated £500k+ revenue from material tracking technology
- Secured major distribution partnership with Fortune 500 company
- Filed a patent application for the novel tracking methodology
Canonical (Ubuntu)
Direct report to the founder/CEO. Led strategy for the AI, cloud applications, and high-performance computing pillars serving 40M+ users globally.
- Primary voting member at Cloud Native Computing Foundation (CNCF)
- Managed partnerships with NVIDIA, AMD, Intel, and major automotive OEMs
- Contributor to Kubeflow ML workflow platform
Patient Access
Executive leadership at the UK's largest patient engagement platform, integrated with over 50% of GP health records.
- Platform serving 12M+ active users
- Managed product, data, design, and operations teams
- Critical national healthcare infrastructure
Led technology development for healthcare innovation centre. Built medical devices and ran clinical trials.
Three-year progression from IoT Engineer to General Manager, leading the Sherlock keyless entry system for luxury hospitality.
- Designed and built Sherlock keyless entry system (later spun out as Klevio)
- Led mobile and eCommerce products, lifting checkout conversion by 87%
- Delivered 4x faster mobile single-page application
- Managed global IT infrastructure through Accor acquisition
Ervitech (Imperial College Spin-out)
Co-founder & Principal Engineer, 2010 - 2011
Commercialised PhD research into wearable breathing monitors. The technology was later incorporated into CE-marked medical devices for sleep apnea diagnosis.
Selected for the competitive CERN Summer Student Programme (24% acceptance rate). Worked on electronics and FPGA programming for the ATLAS Beam Condition Monitor, contributing to the infrastructure used in the 2012 Higgs boson discovery.
Mars Rover research placement at the Laboratory of Robotics and Planetary Exploration (CAB-CSIC-INTA) in association with NASA. Designed FPGA-based control systems for ultrasonic instrumentation.
Recognition & Expertise
Awards
- IET Innovation Award Winner (2009) - Information Technology category for wearable breathing monitor. Also finalist in three additional categories.
- Royal Academy of Engineering Travel Grant (2007) - Competitive research collaboration grant.
Patents
- Wearable Breathing Monitor (EP2593007B1) - Novel acoustic sensor and signal processing for respiratory monitoring. 16 citations.
- Material Tracking System (WO2024152019A1) - Radio signal trilateration for construction logistics.
Industry Leadership
- CNCF Voting Member (2019-2020) - Strategic decision-making for Kubernetes ecosystem, representing Canonical.
- IEEE Conference Reviewer - 18+ years reviewing for EMBC and ISCAS conferences.
- Cancer Research UK - Technical evaluator for Pioneer Award grants (£200k breakthrough research funding).
- IET Innovation Awards - Judge on national engineering award panel.
Education
- MBA - Imperial College Business School (2017-2019)
- PhD, Electrical & Electronic Engineering - Imperial College London (2006-2010). Wearable medical devices and signal processing.
- MEng, Electrical & Electronic Engineering - Imperial College London (2002-2006)
AI & Voice Technology
Building production AI systems with ElevenLabs, deploying voice agents at enterprise scale, and implementing responsible AI in regulated industries.
Cloud-Native Infrastructure
Kubernetes, GCP, workflow automation. CNCF voting member with deep expertise in scalable distributed systems.
Medical Devices
From PhD research to CE-marked products. Signal processing, wearable sensors, and clinical trials.
Product Leadership
Executive roles at scale: 12M+ users at Patient Access, Fortune 500 partnerships, direct report to founder/CEOs.
Writing & Media
I write about building AI products, legal technology, and lessons from the startup journey. Follow me on LinkedIn for updates, or see my Google Scholar profile for academic work.
Blog
Longer-form thinking on AI, technology, and building products.
The Agent Tending Problem: Why Your Coding Agents Need an Orchestrator, Not More Agents
Claude Code, Codex, and Gemini all block recursive sub-agent spawning by design. This is correct, but it creates an orchestration gap. The number of agents you can manage is governed by Little's Law applied to human attention. Manufacturing solved this decades ago with machine-tending systems. The AI agent equivalent is an orchestration layer, and the BEAM virtual machine was designed for exactly this class of problem.
26 March 2026
Agentic AI and the Verifiable Trace: An Engineer's Response to the Data Protection Question
Simmons and Simmons identified eight data protection risks that agentic AI creates for businesses. This essay asks the engineering question that follows: what would an agentic system need to demonstrate, technically, to satisfy each of those concerns? The answer is a single architectural principle applied eight ways. Autonomous systems must produce machine-checkable evidence of their compliance, not merely assert it.
25 March 2026
The Trust Stack: Why AI Concentrates Taste Instead of Distributing It
Every vibe-coder in the world is building on Supabase, not because they chose it, but because the language model did. Andrew Trask argues taste, trust, and rare data survive automation. He is right about the categories but incomplete about the mechanism. AI does not distribute taste to eight billion individuals. It concentrates it into opinionated defaults. The question is not who becomes a tastemaker, but who controls the defaults.
24 March 2026
Why Compliance Needs an Open Foundation
Compliance does not only have a workflow problem. It has a semantics problem. This essay argues for OpenCompliance as a public proof layer that separates proof from attestation from judgment, makes trust artifacts more replayable and honest, and helps vendors, customers, and end users alike.
19 March 2026
Holding Water With Our Hands: Why Law Needs a Chaos Monkey
Law is like holding water with cupped hands. The intention is good, but cases leak between the fingers. A chaos monkey for statute law, bombarding formalised legislation with synthetic fact-patterns, reveals where the law decides clearly, where it defers to human judgment, and where it says nothing at all. The third essay in the LegalLean series.
18 March 2026
Legal Reasoning Needs a Wind Tunnel
A verified theorem is only as trustworthy as the translation layer that produced it. How factorial design, canonical semantic IRs, and Lean 4 equivalence proofs can harden LegalLean against real-world legal language variation. The question is no longer whether a machine can formalise one legal sentence, but whether it can preserve meaning when the legal world says the same thing six different ways.
17 March 2026
From Roles to Morphisms: How AI Rewrites the Category of Work
A product manager is not a person. It is a bundle of typed transformations. Category theory gives us the precise language to say what AI changed and why: it applied a functor to the entire category of professional work. The consequences run from role collapse to hom-set universalisation to the emergence of solo founders and agent-entrepreneurs. At every step, the same question recurs: what morphisms can be delegated, and what invariants must humans set?
16 March 2026
Elan: Why AI Agents Need an Operating System, Not a Framework
Most AI agent frameworks treat crashes as bugs. Elan, built on the BEAM virtual machine, treats them as facts of life. Introducing a BEAM-native multi-agent runtime with durable state, git-native provenance, and policy-governed tool orchestration, designed for long-running autonomous systems that keep their promises even when machines do not.
16 March 2026
LegalLean: When a Computer Says 'You Owe Tax', Can You Check Its Reasoning?
Introducing LegalLean, a framework for verified legal reasoning built on Lean 4 dependent types. 44 machine-verified theorems across US tax law, immigration visa eligibility, and Australian telecommunications regulation. The key insight: encoding where law is deliberately vague as a first-class type, not hiding it.
15 February 2026
Type-Checking the Constitution: Formalising Anthropic's Alignment Specification in Lean 4
Anthropic published its constitution for Claude under CC0. This essay takes them up on it: formalising the priority ordering, honesty properties, and principal hierarchy in Lean 4 to identify 19 free variables, structural impossibilities, and the trade-off surface Claude must navigate.
14 February 2026
Impossibility Results Are the Thermodynamics Before the Physics
The fifth and synthesis essay in a series on formal methods, legal reasoning, and AI alignment. Tying together formalisation, impossibility results, process specification, and alignment margin into a methodology for reasoning about systems whose desired properties cannot all be satisfied simultaneously.
12 February 2026
Alignment Margin: What Control Theory Offers AI Safety
The alignment community is operating without a concept that every control engineer takes for granted: a continuous measure of how far a system is from failure. Alignment margin, borrowed from control theory's phase margin, converts "is this system aligned?" into "how much perturbation can it absorb?"
11 February 2026
From Fixed Functions to Negotiation Protocols
If no fixed decision function can satisfy all the fairness properties we want, can we specify the right process for reaching an answer? The shift from outcome specification to process specification dissolves impossibility results, but introduces new questions from mechanism design.
10 February 2026
The Judge's Impossible Function: Why Section 25 Cannot Be Formalised
Attempting to formalise English divorce law in Lean 4 reveals an impossibility result: no fixed function can simultaneously guarantee that contributions always matter, that needs are always met, and that identical cases produce identical outcomes. Judicial discretion is the escape valve.
9 February 2026
When the Law Is a Type Checker: Formalising O-1 Visa Criteria in Lean 4
The O-1A visa criteria formalise cleanly in Lean 4 because they have a specific structure: discrete categories, explicit threshold, binary predicates, independence from ordering. The question "does this applicant qualify?" reduces to type checking.
6 February 2026
Power Without Promiscuity: Why Contained AI Agents Beat Unbounded Ones
OpenClaw proved that unlimited agency is a security nightmare. This essay argues that the distinction that matters for AI agents is not power but containment, drawing on the OpenClaw crisis, Google DeepMind's CaMeL research, and formal verification technologies to make the case for bounded, provably safe AI agents.
1 February 2026
When the Hands Run Wild: OpenClaw and the Case for Formal Capability Verification
The OpenClaw security crisis validates the central thesis of "The Soul and the Hands": we've been obsessing over AI values while ignoring AI capabilities. A case study in why alignment alone cannot secure AI agents, with concrete proposals for formal capability verification.
1 February 2026
The Soul and the Hands, Part III: Proof of Concept
Empirical evidence from Harmonic's Aristotle theorem prover. I submitted capability theorems to an AI system that generates machine-verified proofs in Lean 4. The results demonstrate that AI can already generate the formal proofs needed for capability verification.
1 February 2026
The Soul and the Hands, Part II: From Intuition to Proof
A technical companion exploring how formal methods from operating systems (seL4) and hardware (CHERI) can provide provable capability bounds for AI agents. Includes a Lean 4 sketch and research directions for the intersection of theorem proving and AI safety.
31 January 2026
CaMeL and the Future of Prompt Injection Defense
Prompt injection is the SQL injection of the AI era. Google DeepMind's CaMeL architecture represents the most serious attempt yet to address this problem, not through better detection, but through architectural constraints that make certain attacks structurally impossible.
31 January 2026
The Energy Pinch Point: Calories, Joules, and the Coming Equilibrium Between Human and Artificial Intelligence
As AI systems consume an ever-larger share of global electricity, the allocation of finite energy resources between sustaining human life and powering computational intelligence becomes a defining challenge. I introduce the concept of the "Energy Pinch Point" and explore its economic, ethical, and societal implications.
30 January 2026
Your Security Questionnaire Wasn't Built for AI
The last security questionnaire we received had 247 questions. Question 83 asked about our clean desk policy. Question 84 asked whether we train AI on customer data. Both got equal weight. This is insane. Here are the five questions that actually matter when evaluating an AI vendor's security posture.
27 January 2026
The Soul and the Hands: A Third Path for AI Alignment
Dario Amodei and Emmett Shear represent two of the most thoughtful voices in AI safety, and they disagree on almost everything except the stakes. I propose a complementary approach: Formal Capability Verification.
26 January 2026
Building Formal Reasoning into Legal Automation: The Case for Convergence
Exploring how formal methods, composable AI skills, and systematic requirements engineering create defensible legal systems. The convergence of these three concepts enables legal teams to build systems that are not just capable, but verifiable.