From 7307f7a3a19f91632ed98d7d99140f9453c3c299 Mon Sep 17 00:00:00 2001
From: Gabriel Dunne
Date: Sat, 21 Oct 2023 05:04:38 -0700
Subject: [PATCH] es

---
 _posts/performance/2023-10-15-ethereal-signal.md | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/_posts/performance/2023-10-15-ethereal-signal.md b/_posts/performance/2023-10-15-ethereal-signal.md
index d95afab..1e349f1 100755
--- a/_posts/performance/2023-10-15-ethereal-signal.md
+++ b/_posts/performance/2023-10-15-ethereal-signal.md
@@ -22,7 +22,7 @@ A live performance iteration of [Subspectral](/subspectral).
 
 ![]({{site.url}}/m/ethereal-signal/gear.png)
 
-Gearlist for this performance. Clockwise from the laptop.
+Gearlist — clockwise from the laptop.
 
 - Intel-based 2015 Macbook Pro
 - Motu Ultralite mk-5
@@ -38,15 +38,7 @@ Gearlist for this performance. Clockwise from the laptop.
 
 ![]({{site.url}}/m/ethereal-signal/ethereal-signal.jpg)
 
-Pre-Rendered visuals were created using a custom generative model based on the Stable Diffusion XL model and several accomanying LoRAs.
-
-To develop the imagery, I define a sampler, number of steps, and the guidance scale that I aesthetically like. Most of this is trial an error, especially when modifying the prompt tokens, and relying heavitly on negative prompt tokens. During this process of sifting through gens for every good gen, there's a thousand that are trash. I usually use the `DPM++ 2M Karras` or `LMS` samplers, around 20 to 50 steps, and a CFG scale of 7. I also set the aspect of 16:9 using a `1365 x 768` resolution for the output.
-
-When I get my prompt and settings dialed, I produce a seed travel.
-
-To create a sense of animation, I generate a series of random seeds, and interpolate between them with a random range of steps -- ranging from as low as 10 steps between seeds, or as high as 280 -- and a random interpolation curves which produces a range of various bursts of motion, or smooth blends.
-
-After I render this series of frames, I bring the sequences into Davinci Resolve and apply a frame-blending and time dialiation to my taste, and any other color correction, editing, or filtering, before rendering the final output.
+Pre-rendered visuals were created using a custom generative model based on Stable Diffusion and several accompanying LoRAs. Post-production editing in DaVinci Resolve.
 
 ![]({{site.url}}/m/ethereal-signal/20231015_212755.jpg)
-- 
2.34.1