iron_sql is a typed SQL code generator and async runtime for PostgreSQL. Write SQL where you use it, run `generate_sql_package`, and get a module with typed dataclasses, query helpers, and pooled connections without hand-written boilerplate.
```shell
pip install iron-sql            # runtime only (psycopg + psycopg-pool + pydantic)
pip install "iron-sql[codegen]" # + inflection for code generation
```

You also need sqlc v2 available in your `PATH` (or pass `sqlc_command`/`sqlc_path` to the generator).
- Query discovery. `generate_sql_package` scans your codebase for calls like `<package>_sql("SELECT ...")`, runs `sqlc` for type analysis, and emits a typed module.
- Strong typing. Generated dataclasses and method signatures flow through your IDE and type checker.
- Async runtime. Built on `psycopg` v3 with pooled connections, context-based connection reuse, and transaction helpers.
- Safe by default. Helper methods enforce expected row counts instead of silently returning `None`.
- `runtime.py` -- async `ConnectionPool`, row helpers (`get_one_row`, `typed_scalar_row`), JSON validation decorators.
- `codegen/generator.py` -- query discovery, type resolution, module rendering.
- `codegen/sqlc.py` -- wraps the `sqlc` CLI and models its JSON output.
- `codegen/util.py` -- shared codegen utilities (`indent_block`, `write_if_changed`).
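To make the layout concrete, here is a minimal sketch of what a helper like `write_if_changed` typically does — only its name and purpose come from the list above; the body is an illustrative assumption, not the library's actual implementation:

```python
from pathlib import Path


def write_if_changed(path: Path, content: str) -> bool:
    """Write `content` to `path` only when it differs from what is on disk.

    Skipping identical writes keeps mtimes stable, so editors and build
    tools watching the generated module are not spuriously triggered.
    Returns True if a write actually happened.
    """
    if path.exists() and path.read_text() == content:
        return False
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content)
    return True
```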
- Add a schema file. A Postgres DDL dump, e.g. `db/schema.sql`.
- Write queries where they live. Import the future helper and use SQL literals inline:

  ```python
  from myapp.db.mydb import mydb_sql

  user = await mydb_sql(
      "SELECT id, username, email, created_at FROM users WHERE id = @user_id"
  ).query_single_row(user_id=uid)
  ```

  Named parameters use `@param` (required) or `@param?` (optional, expands to `sqlc.narg`). Positional `$1` works too.
- Generate the client module:

  ```python
  from pathlib import Path

  from iron_sql.codegen import generate_sql_package

  generate_sql_package(
      schema_path=Path("schema.sql"),
      package_full_name="myapp.db.mydb",
      dsn_import="myapp.config:DSN",
      src_path=Path("."),
  )
  ```

  This writes `myapp/db/mydb.py` containing:

  - a connection pool singleton,
  - `*_connection()` and `*_transaction()` context managers,
  - dataclasses for multi-column results (deduplicated by table),
  - `StrEnum` classes for PostgreSQL enums,
  - a query class per statement with typed methods,
  - overloads for the `*_sql()` helper so editors infer return types.
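The generated `*_transaction()` helper cannot be shown running here without a database, but its shape can be sketched with a stand-in connection — everything below is an illustrative assumption (`FakeConnection` and the bodies are invented; only the `mydb_connection`/`mydb_transaction` naming pattern comes from the list above):

```python
import contextlib


class FakeConnection:
    """Stand-in for a pooled psycopg connection; records executed SQL."""

    def __init__(self) -> None:
        self.statements: list[str] = []

    async def execute(self, sql: str) -> None:
        self.statements.append(sql)


@contextlib.asynccontextmanager
async def mydb_connection():
    conn = FakeConnection()  # real code: acquire a connection from the pool
    try:
        yield conn
    finally:
        pass                 # real code: release the connection to the pool


@contextlib.asynccontextmanager
async def mydb_transaction():
    # Wrap a pooled connection in BEGIN/COMMIT, rolling back on error.
    async with mydb_connection() as conn:
        await conn.execute("BEGIN")
        try:
            yield conn
        except BaseException:
            await conn.execute("ROLLBACK")
            raise
        await conn.execute("COMMIT")
```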
- Type overrides. `type_overrides={"custom_type": "int"}` maps database type names to Python type strings.
- JSON model overrides. `json_model_overrides={"users.metadata": "myapp.models:UserMeta"}` adds Pydantic validation for JSON/JSONB columns.
- Naming conventions. Supply `to_pascal_fn` and `to_snake_fn` callables to control generated names.
- DSN configuration. `dsn_import` is written verbatim into the generated module; point it at a config variable, an environment-variable lookup, or a function call.
- Debug artifacts. Pass `debug_path` to save sqlc inputs and outputs for inspection.
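The naming hooks take and return plain identifier strings. A minimal pair of callables might look like this — a sketch assuming simple underscore- and CamelCase-delimited names (the library's defaults use the `inflection` package, which handles more cases):

```python
import re


def to_pascal(name: str) -> str:
    # "user_accounts" -> "UserAccounts"
    return "".join(part.capitalize() for part in name.split("_") if part)


def to_snake(name: str) -> str:
    # "UserAccounts" -> "user_accounts"
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()
```

These would then be passed as `to_pascal_fn=to_pascal, to_snake_fn=to_snake` in the `generate_sql_package` call.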
- `ConnectionPool` opens lazily and reopens after `close()`, with `ContextVar`-based connection reuse for nested contexts.
- `query_single_row()` raises `NoRowsError`; `query_optional_row()` returns `None`. Both raise `TooManyRowsError` on 2+ rows.
- JSONB parameters are sent with `pgjson.Jsonb`; JSON with `pgjson.Json`. Scalar row factories validate types at runtime.
- The `json_validated` decorator applies Pydantic model validation to dataclass fields on construction.
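The row-count contract above can be sketched in a few lines. The exception names come from the list; the helper functions themselves are illustrative stand-ins, not the library's internals:

```python
from typing import Optional, Sequence, TypeVar

T = TypeVar("T")


class NoRowsError(Exception):
    """A query expected exactly one row but got none."""


class TooManyRowsError(Exception):
    """A query expected at most one row but got several."""


def single_row(rows: Sequence[T]) -> T:
    # query_single_row() semantics: exactly one row, never a silent None.
    if not rows:
        raise NoRowsError("expected exactly one row, got 0")
    if len(rows) > 1:
        raise TooManyRowsError(f"expected exactly one row, got {len(rows)}")
    return rows[0]


def optional_row(rows: Sequence[T]) -> Optional[T]:
    # query_optional_row() semantics: zero rows is fine, 2+ is an error.
    if len(rows) > 1:
        raise TooManyRowsError(f"expected at most one row, got {len(rows)}")
    return rows[0] if rows else None
```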
The `example/` directory contains a complete working setup: a PostgreSQL schema, a generation script using testcontainers, and sample query definitions. See `example/generate.py` for the codegen call and `example/myapp/main.py` for query usage.
- Errors identify the file and line where the problematic statement lives.
- Unknown SQL types map to `object` and emit `UnknownSQLTypeWarning` (promotable to an error with `warnings.filterwarnings`).
- Statements with the same SQL but conflicting `row_type` values are rejected at generation time.