
Experimental workflows playing with poster design

Working with layouts and typography-based designs is quite challenging with the current tools available. Typography comes with very strict visual rules for the letterforms themselves, for visual consistency, and for spacing – and current diffusion models like Stable Diffusion or FLUX struggle with exactly that. With additional control tools we can guide the diffusion process to some degree, but mostly at the cost of image quality.

Method I: splitting the image layer from the text layer

By splitting the poster into an image layer and a text layer, each part can be generated and refined independently before being composed back into a single output.

randomly generated poster background with random shapes and slight grainy surface
img2img remixed image
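The background step can be sketched in code. This is a minimal sketch, not the exact workflow from this post: `random_shape_background` approximates the "random shapes and slight grainy surface" starting image with Pillow, and `remix_background` is an assumed img2img pass via the diffusers library (the model ID and strength value are assumptions).

```python
import random
from PIL import Image, ImageDraw


def random_shape_background(size=(768, 1024), n_shapes=12, seed=None):
    """Plain starting image: random colored shapes plus slight grain."""
    rng = random.Random(seed)
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    for _ in range(n_shapes):
        x0, y0 = rng.randrange(size[0]), rng.randrange(size[1])
        x1, y1 = x0 + rng.randrange(40, 300), y0 + rng.randrange(40, 300)
        color = tuple(rng.randrange(256) for _ in range(3))
        if rng.random() < 0.5:
            draw.ellipse((x0, y0, x1, y1), fill=color)
        else:
            draw.rectangle((x0, y0, x1, y1), fill=color)
    # Slight grainy surface: blend in a little Gaussian noise.
    noise = Image.effect_noise(size, 24).convert("RGB")
    return Image.blend(img, noise, 0.08)


def remix_background(src: Image.Image, prompt: str, strength: float = 0.6) -> Image.Image:
    """Assumed img2img remix step; heavy imports kept local so the
    helper above works without torch/diffusers installed."""
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # model ID is an assumption
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(prompt, image=src, strength=strength).images[0]
```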

vivid colored high contrast vintage analog silkscreen grainy reflective glossy techno 90s poster with bended surreal shapes lines and bit of noise in vivid warm iridescent colors and high contrast . reflections, translucent glassy materials, glitch and deformations,( white plain background:1.2), dirt, scraps, print errors. high contrast. vivid complementary colors, grainy surface, iridescent gradients, constructional lines

The typographic layer

By using Stable Diffusion 1.5 with ControlNet Depth and/or Canny, we can force the diffusion towards fairly tangible results. Starting from a simple black-and-white typographic offset image, the diffusion process reworks this plain digital preset – guided by a proper prompt – into a more stylized result.
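Scripted with the diffusers library, this step could look roughly like the sketch below. The edge helper uses PIL's `FIND_EDGES` as a lightweight stand-in for a real Canny preprocessor, and the model IDs are commonly published ones, not necessarily the checkpoints used here.

```python
from PIL import Image, ImageFilter, ImageOps


def make_canny_map(img: Image.Image) -> Image.Image:
    """Approximate edge map from the black-and-white typo layout.

    PIL's FIND_EDGES is a crude stand-in for an actual Canny
    preprocessor (the ControlNet tooling ships its own)."""
    edges = img.convert("L").filter(ImageFilter.FIND_EDGES)
    return ImageOps.autocontrast(edges).convert("RGB")


def render_typo(control: Image.Image, prompt: str) -> Image.Image:
    """SD 1.5 + Canny ControlNet render; heavy imports kept local so
    make_canny_map works without torch/diffusers installed."""
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(
        prompt,
        image=control,
        num_inference_steps=30,
        controlnet_conditioning_scale=0.9,  # guidance weight is an assumption
    ).images[0]
```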

randomly generated typo only guide map
rendered typo guided by DepthNet, CannyNet and custom typo LoRA
scan of (halftone flat silkscreen print of gradient colored printed zhuj style typography letters and minimalistic black typo on plain white paper in background:1.2), single letter, type design, typography, (baroque, curvy, bold, liquid:1.2),

In addition to the ControlNet guidance instruments, it is helpful to inject a specific type-related LoRA into the render process. Felix Dölker did a great job training a LoRA for exactly that purpose. Visit the project or get the LoRA directly from civitAI.
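In a diffusers-based script, attaching such a LoRA is a one-liner via `load_lora_weights`; the file name and the scale value below are placeholders, not the actual release.

```python
def attach_typo_lora(pipe, lora_path: str, scale: float = 0.8) -> dict:
    """Load a typography LoRA (e.g. a .safetensors file downloaded
    from civitAI) into a diffusers pipeline.

    Returns the extra call kwargs that apply the LoRA at the given
    scale; the scale value is an assumption, tune it per render."""
    pipe.load_lora_weights(lora_path)
    return {"cross_attention_kwargs": {"scale": scale}}
```

Usage: `image = pipe(prompt, image=control, **attach_typo_lora(pipe, "typo.safetensors")).images[0]`.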


To bring this all together in a quick workflow, you can use Forge/A1111 with its built-in ControlNet modules.

using two ControlNet modules + LoRA to guide the typo generation
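If you drive Forge/A1111 headlessly instead of through the UI, the same setup can be posted to its txt2img API. This is a sketch of the request payload only: the field names follow the ControlNet extension's API and may differ between extension versions, and the module/model names are placeholders.

```python
import base64


def controlnet_unit(image_path: str, module: str, model: str,
                    weight: float = 1.0) -> dict:
    """One ControlNet unit for the A1111 ControlNet extension.
    Field names may vary between extension versions."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {"input_image": b64, "module": module, "model": model,
            "weight": weight}


def build_txt2img_payload(prompt: str, units: list) -> dict:
    """Payload for POST /sdapi/v1/txt2img; ControlNet units ride
    along in alwayson_scripts."""
    return {
        "prompt": prompt,
        "steps": 30,
        "alwayson_scripts": {"controlnet": {"args": units}},
    }
```

Two units – one Depth, one Canny, both fed the same typo guide map – reproduce the double-guidance setup from the screenshot.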

The result is a readable output, but it is still quite far from actively designing with generative diffusion models. :/

simple two-layered, difference-composed output
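The final composite of the two layers can be sketched with Pillow's `ImageChops.difference` – assuming a plain difference blend is what "difference composed" refers to here.

```python
from PIL import Image, ImageChops


def difference_composite(background: Image.Image,
                         typo: Image.Image) -> Image.Image:
    """Compose the typographic layer onto the background in
    'difference' blend mode (per-channel abs(a - b))."""
    typo = typo.convert(background.mode).resize(background.size)
    return ImageChops.difference(background, typo)
```

On the white-background typo layer this inverts the poster colors under the letterforms, which is what makes the type pop against the busy background.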