Jordan Dziat
Senior Staff Engineer. Builder of systems that scale.
15 years in infrastructure and engineering leadership. Currently at The Lifetime Value Co., building platforms and tooling for engineering teams. I write about infrastructure, AI systems, and the craft of building software.
Notes
go test -race ./... doesn’t catch data races in code your tests never exercise. If you have concurrent code paths that only trigger under load, you need tests that actually run them concurrently. Sounds obvious in hindsight.
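A minimal sketch of the idea, using a hypothetical mutex-guarded Counter: a test that calls Inc once from one goroutine gives the race detector nothing to observe, while one that fans out across goroutines does.

```go
package main

import (
	"fmt"
	"sync"
)

// Counter is a hypothetical type, not from any real library.
// Its Inc path is only safe because of the mutex, and
// `go test -race` can only tell you that if the test actually
// calls Inc from multiple goroutines at once.
type Counter struct {
	mu sync.Mutex
	n  int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

// run exercises Inc concurrently — the shape a test body needs
// so the race detector sees overlapping access. Delete the
// mutex and -race will flag this; call Inc serially and it won't.
func run() int {
	c := &Counter{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	return c.n
}

func main() {
	fmt.Println(run())
}
```

The same structure drops straight into a `TestXxx(t *testing.T)` function; the point is only that the concurrency lives in the test, not just in production traffic.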
Astro 5 shipped with the content layer, server islands, and simplified prerendering. The content layer API is a massive improvement for managing blog content — glob loaders replace the old file-based content collections, and the Zod schemas catch frontmatter errors at build time.
The functional options pattern in Go is one of those rare cases where the community has converged on a genuinely good API design. It’s more boilerplate upfront, but the resulting APIs are hard to misuse. Worth the investment for any public-facing library.
Latest Writing
Hello World
First post on the new site. A fresh start with Astro, Tailwind, and a focus on content over flash.
1 min read

Building Simple Durable Jobs: A Go Library for Resilient Workflows
Why I built a lightweight Go library for durable job queues with checkpointed workflows, crash recovery, and an embedded monitoring dashboard.
4 min read

Building a Go SDK for Langfuse
How I built a type-safe Go SDK for Langfuse's LLM observability platform, covering the batch processor, hierarchical tracing, and API design decisions.
4 min read

Projects
Stout
A universal package registry that builds from source with integrated supply chain security. Supports Go, npm, Helm, Docker/OCI, Ruby, Python, and AI agent packages. Sandboxed builds, vulnerability scanning, SBOM generation, Sigstore signing, and a community trust system with reviewer verification.
Nocturnium
An autonomous AI development platform. Import a repo, describe what you want, and wake up to production-ready code with tests passing and a PR ready for review. Features quality-gated iteration, self-healing deployments, digital persona learning, and automated infrastructure provisioning.
Salamander
A high-performance C/C++ LLM inference engine forked from llama.cpp. Supports 90+ model architectures, GPU acceleration across CUDA, Metal, Vulkan, and more, with an OpenAI-compatible server, vision/diffusion models, and quantization from 1.5-bit to 8-bit.