Nate Rudolph

Making things that make things easier


Generative AI

Since 2022, I've managed generative AI infrastructure for the Communications department at Johns Hopkins Applied Physics Laboratory, maintaining GPU-accelerated installations of tools like Invoke AI and Kokoro for our group of 50+ designers and 100+ communication specialists. This has cut project timelines from weeks to days, producing hundreds of images in a fraction of the time traditional workflows required.

I work with these tools daily while developing comprehensive training programs, creating hours of instructional materials and speaking at panels and training events across the laboratory. My work spans technical implementation (GPU management, ControlNet-based workflows, API integration) and practical application on on-prem hardware. Currently, I'm also serving as the lead frontend engineer on an LLM-powered wargaming simulation system. The projects below showcase the publicly releasable portion of these efforts.

Generative AI Suite of Tools for OV-1 Creation

Role: Project Manager, Lead Frontend Engineer, Backend Developer

Published research demonstrating a suite of generative AI tools for creating military operational graphics in unclassified or classified environments. The system combines Stable Diffusion, 3D asset databases, and LLM interfaces to enable rapid production of professional-quality visuals while maintaining data sovereignty on secure government networks.

Generative AI Image Tools

Role: Project Manager, Lead Instructor

These concept images demonstrate a small sample of our on-premises generative AI image creation capability, built on Stable Diffusion, Flux, and many other models. Running everything locally enables rapid iteration and exploration of visual concepts while maintaining complete data security. I manage the technical infrastructure and provide hands-on training to dozens of artists, allowing our team to generate hundreds of concept variations in hours rather than the weeks that traditional illustration work requires.