Weird Wonderful AI Art https://weirdwonderfulai.art ART of the future - now! Sun, 22 Sep 2024 05:45:22 +0000

Flux Dev Artist Study https://weirdwonderfulai.art/resources/flux-dev-artist-study/ Fri, 13 Sep 2024 10:28:06 +0000

Flux Dev is an amazing model and produces beautiful images, so, as we have done in the past, we took it upon ourselves to run the same 4,100+ artist names through the model to see where a resemblance to each artist's work is reproduced.

The results suggest whether a given artist's images were used in the training dataset or not, since there is no information about this released by the Black Forest Labs team.

I built a ComfyUI workflow that reads my formatted prompts from a txt file and runs through them one after another. Building this workflow was a fun experiment in itself, as nothing like it existed, and running a batch that cycles through prompts is not one of ComfyUI's strengths.

  • prompt: art by <artist_name>
  • resolution: 1024x1024 px
  • seed: 88888888
  • image batch: 4
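A minimal sketch of the prompt preparation step (the file name and template string are my assumptions, not the exact files used for the study):

```python
# Hypothetical sketch: expand a text file of artist names into the
# fixed prompt format used for the study, one prompt per artist.

PROMPT_TEMPLATE = "art by {artist}"  # assumed template

def build_prompts(artist_names):
    """Format one prompt per artist, skipping blank lines."""
    return [PROMPT_TEMPLATE.format(artist=name.strip())
            for name in artist_names if name.strip()]

if __name__ == "__main__":
    # "artists.txt" is a hypothetical input file, one name per line
    with open("artists.txt", encoding="utf-8") as f:
        for prompt in build_prompts(f):
            print(prompt)
```

A batch-prompt node in ComfyUI would then consume this list one line at a time.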

Below is a preview of the study (50 images), because, as before with the SDXL and Midjourney studies, a 4,000+ image gallery really strains my web servers. If you are interested in grabbing the full study, you can get the offline copy.

What you get in the Offline version

  • 2x ZIP files containing all the files
  • 1x PDF overview document with observations

I hope you find our Flux Dev Artist Study beneficial in your experimentation and that you will share this resource with others. Let me know if you have any comments or questions below.

Ultimate Collection of Flux Dev LoRAs https://weirdwonderfulai.art/resources/ultimate-collection-of-flux-dev-loras/ Thu, 12 Sep 2024 12:28:59 +0000

Flux Dev has certainly grabbed a lot of attention over the past month and a half; most of my posts and YouTube content have been focused on it. Since its release, a number of new LoRAs have been published by various community members. One of our featured artists, Araminta, has been at it again, producing some of the nicest Flux Dev LoRAs out there.

Let’s explore the list of LoRAs available, their samples and of course the download link.

In order to install these LoRA files (.safetensors), simply copy them into the models\loras folder of ComfyUI or the models\lora folder of Automatic1111.
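That copy step can be scripted if you download a lot of LoRAs; here is a small sketch (the paths are assumptions — point `ui_root` at your own ComfyUI or Automatic1111 install):

```python
# Minimal sketch: copy downloaded .safetensors LoRA files into the
# folder the UI expects.
import shutil
from pathlib import Path

def install_loras(download_dir, ui_root, subfolder="models/loras"):
    """Copy every downloaded .safetensors file into the UI's LoRA folder."""
    dest = Path(ui_root) / subfolder
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for lora in sorted(Path(download_dir).glob("*.safetensors")):
        shutil.copy2(lora, dest / lora.name)
        copied.append(lora.name)
    return copied
```

For Automatic1111, pass `subfolder="models/lora"` instead of the ComfyUI default.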

Make sure you review the usage rights for each of the LoRAs, as provided by the author.

Realism Flux LoRA by XLabs-AI

Adds more realism to your final image, with more detail than the default output. Finer details like hair, eyes and skin textures are greatly improved with this LoRA by XLabs. It was converted for use with ComfyUI by comfyanonymous. There are no trigger words for this LoRA.

Download Realism Flux Dev LoRA

Antiblur LoRA by Shakker Labs

This LoRA can increase the depth of field, controlled by its strength parameter. By default, photographic results straight out of Flux Dev have a very shallow depth of field; this LoRA increases the depth so you get more detail in the background. There is no trigger word, and the samples below were generated at strength 1.0.

Download Antiblur LoRA

Half Illustration by Davisbro

Really cool combined photo-and-illustration generations. If you have clicked on this post's thumbnail or some of my YT videos lately, then you have seen what this Flux Dev LoRA can do. Trigger word: in the style of TOK

Download Half Illustration

Frosting Lane by Araminta

A soft pastel illustration style LoRA for Flux Dev by our featured artist Araminta. She has been putting out some really cool LoRAs for Flux — keep exploring this post. Trigger word: frstingln illustration

Download Frosting Lane

Flux Film Foto by Araminta

35mm film-style images that take the overcooked photographic outputs of Flux Dev to a whole new level of realism. Trigger word: flmft photo style

Download Flux Film Foto

Soft Serve Anime by Araminta

Soft-colour anime style images are produced when you use this LoRA. It combines the power of Flux and hones in on a beautiful anime style. Trigger word: sftsrv style illustration

Download Soft Serve Anime

Linnea Beta by Araminta

Trained on her own hand-drawn sketches, this cute little character emerges with simple hand-drawn lines and a simple colour scheme. Trigger word: linnea teal hair

Download Linnea Beta

Mooniverse by Araminta

Another photographic LoRA, but with very nice soft aesthetics (soft focus) and muted colours, which produce some very nice results out of Flux Dev.

Download Mooniverse

Sonny Anime Flex by Araminta

Creates soft gel-line anime style images with a sort of cute lofi aesthetic. This is the FLEX version, which has more flexibility and can be pushed fairly successfully away from true anime towards cartoon art and more complex scenes. However, it is recommended to include the style direction in your trigger. Trigger word: nm22 [style direction] style

Example prompt: [subject], [colors/aesthetic description], nm22 [style direction] style

Download Sonny Anime Flex

Koda Diffusion by Araminta

Koda captures the nostalgic essence of early 1990s photography, evoking memories of disposable cameras and carefree travels. It specializes in creating images with a distinct vintage quality, characterized by slightly washed-out colors, soft focus, and the occasional light leak or film grain. Trigger word: flmft style

Download Koda Diffusion 

Midsummer Blues by Jake Dahn

A minimalistic yet colourful illustration style LoRA for Flux Dev that gives you very beautiful, vibrant illustrated results; unlike other models, with Flux Dev you get the correct number of figures and better compositions. The trigger word is MSMRB, but per the download page it works better when used as illustrated MSMRB style.

Download Midsummer Blues

These are some of the coolest-looking Flux Dev LoRAs I have found on the internet, so I hope you enjoy using them. They are small enough that you can download them all and achieve wonderful results with Flux Dev.

If you have others to suggest, leave the details in the comments.

LoRA Caption Workflow for ComfyUI https://weirdwonderfulai.art/resources/lora-caption-workflow-for-comfyui/ Sun, 25 Aug 2024 11:52:24 +0000
ComfyUI LoRA Caption Workflow

I’m very happy to share my LoRA Caption workflow for ComfyUI, which lets you run a batch of images through two different captioning methods: the Florence 2 model or CLIP Interrogator. Both are paired with the WD14 Tagger node, which generates additional tags/keywords for the caption.

Images must be in PNG format; JPEGs are not currently supported in the workflow due to the custom node.

You can enter the LoRA training “trigger” word, which is added to each caption.

Preview of the workflow – download below
LoRA Caption Workflow (1099 downloads )

Useful tips

  • Generated caption files cannot be overwritten – this is a limitation of the custom node; if you want to re-run, delete the original TXT file
  • Once all images have run through, you need to reset the counter. Use the Reset counter (use once) switch – set it to true. Remember to turn it off afterwards – set it to false.
  • List index out of range error – this means the caption TXT file already exists, or the counter has reached its limit and you need to reset it
  • I always recommend reviewing the captions and fine-tuning them to ensure you get the best result out of your LoRA
Flux Dev Image to Prompt Workflow with Upscaler https://weirdwonderfulai.art/resources/flux-dev-image-to-prompt-workflow-with-upscaler/ Sun, 18 Aug 2024 08:21:07 +0000

Flux Dev has been the talk of the month, and many amazing developments are already showing up in the open-source community. I’ve been experimenting with and documenting many of the latest findings here on the blog, so do check out the “flux” search to find all the posts.

In this post I will share my Flux Dev Image-to-Prompt with Upscaler workflow. Over the past years I’ve created so many images, and with the advancement of tools like IPAdapters we hardly need to write a prompt to generate similar images. Since we don’t have anything like IPAdapters for Flux at the moment, I created this workflow, which can generate the prompt to feed to the Flux model.

The workflow runs fine with 16GB and uses Flux Dev FP8, so it will work with 12GB as well.

Try playing with the Task setting in the Florence2Run node. I’ve set it to more_detailed_caption, but you can scale it back for a shorter prompt; other settings to try are caption and detailed_caption.
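For reference, the node-style task names correspond to Florence-2's prompt tokens roughly as follows — the mapping below is assumed from the model's published task list, so verify it against the node version you have installed:

```python
# Assumed mapping from Florence2Run-style task names to Florence-2
# prompt tokens.
FLORENCE_TASKS = {
    "caption": "<CAPTION>",
    "detailed_caption": "<DETAILED_CAPTION>",
    "more_detailed_caption": "<MORE_DETAILED_CAPTION>",
}

def task_token(task_name):
    """Map a node-style task name to a Florence-2 prompt token."""
    try:
        return FLORENCE_TASKS[task_name]
    except KeyError:
        raise ValueError(f"unknown task: {task_name!r}") from None
```

Longer task tokens produce longer captions, which is why more_detailed_caption gives the richest prompt for Flux.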

Workflow preview only – download via the link below

Must Do

In order to make sure these new workflows work for you, you must do the following:

  • Update ComfyUI so you have the latest version
  • Install any missing nodes using ComfyUI Manager

Workflow

The workflow is provided as PNG and JSON files, so you can use either to import it into ComfyUI. Simply drag and drop either file onto your ComfyUI and you will be up and running in no time; make sure you have completed the Must Do steps above.

Flux Dev Image to Prompt with Upscaler (2868 downloads )

The workflow is easy to follow and comes with notes to help you understand how to control some settings. The upscaler is set to 2x but works at 4x as well; it just takes longer. I’m also embedding the Flux Dev realism LoRA so you can generate realistic details in the final image.

Of course, if you have any questions or comments, don’t hesitate to share them below.

Flux LoRA Training Tutorial by Araminta https://weirdwonderfulai.art/tutorial/flux-lora-training-tutorial-by-araminta/ Tue, 13 Aug 2024 04:01:38 +0000
Video Tutorial by Araminta

Araminta, who has been featured on this blog before, has just released a tutorial video where she trains a new Flux LoRA on her own computer with an RTX 3090. She walks through the entire process using the AI Toolkit by @ostrisai.

The toolkit seems straightforward to use; it requires your Hugging Face token to download the models and other components. I expect this first step will take time. I have not explored it myself yet, but there may be a way to re-use a model you have already downloaded so you won’t have to download it again; as far as I can tell, though, the download is necessary for the initial run.

Another observation from the video is that the toolkit does not generate captions for your dataset of images, so you need to do that using other tools, such as Kohya SS, to caption your images.

Once you have all the images and the dataset ready, you can generate the YAML configuration file. A sample, train_lora_flux_24gb.yaml, is provided in the config/samples folder of AI Toolkit, and you can create your own with the settings you want. It is nicely commented and therefore easy to follow. Araminta explains and walks through this in the video.
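A small sketch of patching a copy of the sample config per training run — the key names below are assumptions based on typical AI Toolkit configs, so check the real key names inside train_lora_flux_24gb.yaml before using anything like this:

```python
# Sketch: take the parsed sample YAML and apply per-run overrides.

def patch_config(cfg, run_name, dataset_path, steps=2000):
    """Return a copy of the parsed YAML dict with per-run overrides."""
    out = dict(cfg)
    out["config"] = dict(cfg.get("config", {}))
    out["config"]["name"] = run_name              # assumed key
    out["config"]["dataset_path"] = dataset_path  # assumed key
    out["config"]["steps"] = steps                # assumed key
    return out

if __name__ == "__main__":
    import yaml  # pip install pyyaml
    with open("config/samples/train_lora_flux_24gb.yaml") as f:
        cfg = yaml.safe_load(f)
    with open("my_flux_lora.yaml", "w") as f:
        yaml.safe_dump(patch_config(cfg, "my_flux_lora", "/data/images"), f)
```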

It is great to see Araminta expanding and sharing her knowledge via YouTube.

Cheap Solution for running Flux Dev using Runpod https://weirdwonderfulai.art/resources/cheap-solution-for-running-flux-dev-using-runpod/ Sun, 11 Aug 2024 05:48:09 +0000

Flux has two models, Dev and Schnell, both of which are rated to require upwards of 32GB of VRAM. There are other solutions that let you run Flux under 12GB, but if you truly want to explore the power of this model and see how good it can be at higher resolutions, you need more VRAM.

My solution to this VRAM limitation (I run an RTX 4080 16GB in my local PC) is to use Runpod’s A40 GPU, which costs only $0.35 per hour – about half a cent per minute. I don’t even know whether the electricity used by the GPU costs them more than that. It is ridiculously cheap and readily available on their platform.

I am using the ComfyUI image for Runpod, which provides the latest ComfyUI version with the standard SD and SDXL models.

Flux on Runpod.io (1065 downloads )

You can open the various applications via the links provided.

To download the models, I created a JupyterLab notebook with all the links. For Flux Dev you do need a Hugging Face access token, which is free; just enter it in the notebook before you run it. The notebook downloads the models, CLIP and the Realism LoRA into the required ComfyUI directories.

Simply upload the notebook I provided (IPYNB) to the Workspace folder on Runpod and double-click to open it. Once launched, click the play button for each of the cells containing the links (after updating the access token).
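A rough equivalent of what the notebook cells do — the repo and file names here are illustrative assumptions, not the exact entries in my notebook:

```python
# Sketch: map each download to its destination ComfyUI subfolder,
# then fetch with huggingface_hub using your access token.

DOWNLOADS = [
    # (repo_id, filename, ComfyUI subfolder) - illustrative entry
    ("black-forest-labs/FLUX.1-dev", "flux1-dev.safetensors", "models/unet"),
]

def plan(comfy_root):
    """Pair each download with its destination folder under ComfyUI."""
    return [(repo, fname, f"{comfy_root}/{sub}")
            for repo, fname, sub in DOWNLOADS]

if __name__ == "__main__":
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub
    token = "hf_..."  # your free Hugging Face access token
    for repo, fname, dest in plan("/workspace/ComfyUI"):
        hf_hub_download(repo_id=repo, filename=fname,
                        token=token, local_dir=dest)
```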

Once this is done, simply import the provided workflow into ComfyUI and you are good to go. By default my workflow uses the higher settings of Flux Dev, because I built it for Runpod, where you will be using a higher-VRAM GPU. You can modify them as you like.

The workflow I provided uses a few custom nodes, so if you end up with red boxes upon loading it, just use ComfyUI Manager to install the missing nodes.

Flux on Runpod.io (1065 downloads )

I also have a video walkthrough of my Runpod setup on my YouTube channel, which is below, so feel free to watch it if you prefer video format.

I hope you find this information useful and helpful. If you have any queries or suggestions leave them in the comment section.

Flux Schnell and Dev Workflow with Upscaling https://weirdwonderfulai.art/comfyui/flux-schnell-and-dev-workflow-with-upscaling/ Mon, 05 Aug 2024 13:11:50 +0000

Flux has been out for under a week and we are already seeing some great innovation in the open source community. I was exploring and putting together my guide on running Flux on Runpod ($0.34 per hour) when I discovered this workflow by @plasm0, which runs locally and supports upscaling as well.

So I spun up my local ComfyUI instance with Flux and started to see some incredible results.

The workflow is available here for download.

What you will need to run

In order to run this, you need ComfyUI (updated to the latest version) and the following files.

Place the model in the models\unet folder, the VAE in models\vae and the CLIP files in the models\clip folder of your ComfyUI directory. Make sure you restart ComfyUI and refresh your browser.

The workflow uses Flux Schnell to generate the initial image and then Flux Dev to generate a more detailed image. The final upscale is done using an upscale model.

This is just a workflow preview; download it via the link above.

I made a couple of changes to improve it for my use, but nothing major, so you can explore and edit it as you like.

The results and image quality are absolutely stunning!!

Flux – A New Open Source Model to Compete with Midjourney https://weirdwonderfulai.art/general/flux-a-new-open-source-model-to-compete-with-midjourney/ Sun, 04 Aug 2024 11:27:41 +0000

On August 1, 2024, Black Forest Labs released three new models: Flux.1 Pro, Dev and Schnell. The Pro version is not open source and is available through their API, but Dev and Schnell are both open source and available to download via their Hugging Face page.

Dev is a higher-quality model than Schnell, but Schnell is much faster (4 steps). These are big models, though: both weigh a whopping 23.8GB each, and they require a high amount of VRAM to run. It is recommended that you have 32GB of RAM.

However, don’t be sad, because there is a way to run them on lower-VRAM GPUs. I have an RTX 4080 with 16GB and I can run both Dev and Schnell; the only difference is that Dev takes about 3 minutes to generate a 1024px by 1536px image, while Schnell takes only 30-40 seconds to generate the same.

The buzz at the moment is that these models are on par with Midjourney, and in my testing I have to agree – in fact, I think they are much better. Flux is better in many aspects:

  • Resolution – the model can handle any image size you want, from extremely wide to extremely tall; there is no set resolution you have to adhere to
  • Prompts – it is much better at handling the prompt and adhering to its various nuances
  • Quality – quality is much higher in this initial release: hands are better formed, composition is almost always spot on, and facial features are well defined
  • Text – it renders text better than any model out there, even SD3

Most importantly, it doesn’t apply its own recipe or sauce to “improve” your image, so it stays as close to your prompt as possible. With Midjourney, there is always the influence their model tries to add to the image to make it better, which can often make it hard to control the image with just a text prompt.

Download

In order to run this, you need ComfyUI (updated to the latest version) and the following files.

Place the model in the models\unet folder, the VAE in models\vae and the CLIP files in the models\clip folder of your ComfyUI directory. Make sure you restart ComfyUI and refresh your browser.
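Before restarting ComfyUI, a quick sanity check like the sketch below can confirm each file landed in the right subfolder (the lowercase folder names are an assumption — match the casing of your actual install):

```python
# Sketch: verify the Flux model, VAE and CLIP files are where
# ComfyUI expects them.
from pathlib import Path

EXPECTED = {
    "models/unet": "*.safetensors",  # the Flux model itself
    "models/vae": "*.safetensors",   # the VAE
    "models/clip": "*.safetensors",  # the text encoders
}

def check_placement(comfy_root):
    """Return the subfolders that still contain no matching files."""
    missing = []
    for sub, pattern in EXPECTED.items():
        folder = Path(comfy_root) / sub
        if not folder.is_dir() or not any(folder.glob(pattern)):
            missing.append(sub)
    return missing
```

An empty return value means everything is in place and a restart plus a browser refresh should pick the files up.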

The default workflows are provided by ComfyAnonymous on the ComfyUI GitHub page.

My adapted workflows are also available for download. I provide two workflows, Text 2 Image and Image 2 Image; just drag the PNG files in the zip into ComfyUI. Install any missing nodes using ComfyUI Manager.

Flux.1 Txt2Img and Img2Img Workflows (745 downloads )
Preview of Text to Image workflow in ComfyUI (download the zip above)
Preview of Image to Image workflow in ComfyUI (download the zip above)

My Image to Image workflow uses the Florence 2 LLM and CLIP Interrogator (I got the original version online somewhere I can’t recall) to generate an accompanying prompt to help guide Flux. So you have the image influencing the generation plus the text prompt, which makes the result super!!

Sample Results

It’s been a wonderful breath of fresh air to get a model that can produce such high-quality, coherent results, which has kicked off the month of August with a bang. I wonder what other excitement is awaiting us next. For me, I keep exploring Flux; I had already downscaled my Midjourney subscription, but is it time to ditch Midjourney entirely? We will see.

Basic Camera Motion LoRA for AnimateDiff https://weirdwonderfulai.art/resources/basic-camera-motion-lora-for-animatediff/ Sun, 28 Jul 2024 04:58:40 +0000

In this post I will share some Motion LoRAs designed to be used in conjunction with AnimateDiff, which emulate basic camera movements like zoom in/out, pan left/right and crane up/down. These Motion LoRAs have been trained by Tamás Cseh and you can download them here.

A Motion LoRA hooks into the AnimateDiff Loader via the motion_lora input and is loaded using the Load AnimateDiff LoRA node.

The Motion LoRAs provide very nice movement in your animation and generally work really well when a strength of 0.7-1.0 is used. They are designed for the v2 version of AnimateDiff, so they won’t work with v3, and they were trained with a context window of 16, so they are ideally suited to Stable Diffusion 1.5 models.

Download

You can grab the various Motion LoRAs via the links below. These are hosted by the creator on their Hugging Face page; links are provided below for ease of download.

Download the basic workflow below to test these Motion LoRAs in AnimateDiff. You will need to download some custom nodes, which you can easily do via ComfyUI Manager.

Motion LoRA AnimateDiff (474 downloads )

I also put together a video preview and walkthrough of my workflows showcasing these basic camera Motion LoRAs for AnimateDiff. In the video I show a much larger workflow, which can be built easily by expanding the above workflow and copying the necessary nodes to review multiple Motion LoRAs together.

I hope you found this post useful; thanks for stopping by. Check out some of the related posts below, and support our blog if you can.

LivePortrait in ComfyUI https://weirdwonderfulai.art/comfyui/liveportrait-in-comfyui/ Fri, 12 Jul 2024 11:04:53 +0000

One of the most exciting releases this week has been the LivePortrait model, released by KwaiVGI and soon after incorporated into a custom node for ComfyUI by Kijai.

The custom node comes with a sample workflow that can be imported into ComfyUI so you can get started generating your own animated live character from an image. The workflow is quite simple, and all you need is:

  • A portrait image of who you want to animate – it could be your own photo or an AI-generated image
  • A guiding video – a source video from which the model takes the facial cues to animate the portrait image. There are a bunch of samples provided in the repo you can download, but don’t stop there; record your own and try it as well.

Ideally the two components should have the same aspect ratio; that is, if the video is 1:1, use an image in the same ratio. It does work without that, but the results may be a bit skewed.

Based on the guiding video and the number of frames it has, the workflow generates the same number of output frames. The Video Combine node then compiles the final video at the frame rate you set.
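The arithmetic behind that is simple — the frame count comes from the guide video and the duration comes from the Video Combine frame rate:

```python
# Sketch: final video length for a given guide frame count and fps.

def output_duration_seconds(guide_frames, frame_rate):
    """Length in seconds of the final video."""
    return guide_frames / frame_rate
```

So a 120-frame guide clip combined at 24 fps yields a 5-second result.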

A few guiding videos are available in the original repo; you can download and use them to get started. However, the most fun part is recording yourself and creating some cool, unique results.

The default workflow creates a merged video of the guiding video and the resulting Live Portrait. However, it’s easy to change that and create only the Live Portrait: simply drag the full_image output from LivePortraitProcess into the images input of Video Combine. This bypasses the merged video and creates the final result on its own.

I also shared a quick video on how to setup and use LivePortrait over on YouTube.

YouTube video about LivePortrait Node and ComfyUI Workflow

Playing around with all this, I had so much fun with different characters and expressions; it’s quite addictive.

Let me know if you have any questions or issues, but installing and using LivePortrait is pretty easy.
