Recommended Prompts
1girl,long hair,solo,looking at viewer,best quality,masterpiece
Recommended Negative Prompts
worst quality,lowres,low quality,signature,watermark,username
Recommended Parameters
| Parameter | Recommendation |
| --- | --- |
| Samplers | Euler, DPM++ 2M, Euler a |
| Steps | 25–30 |
| CFG scale | 7–7.5 |
| Clip skip | 2 |
| VAE | noob_vae_trainer_step_7.safetensors |
| Resolution | 862×1216, 832×1216, 1024×1536, 1024×1216 |
Recommended Hires (High-Resolution) Parameters
| Parameter | Recommendation |
| --- | --- |
| Upscaler | R-ESRGAN 4x+ Anime6B |
| Upscale factor | 1.8 |
| Steps | 15 |
| Denoising strength | 0.5 |
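For reference, the recommended settings above might look like the following WebUI-style generation-parameters block (the layout follows the AUTOMATIC1111 infotext format; the 28-step count and 832×1216 size are just one point within the recommended ranges):

```text
1girl, long hair, solo, looking at viewer, best quality, masterpiece
Negative prompt: worst quality, lowres, low quality, signature, watermark, username
Steps: 28, Sampler: DPM++ 2M, CFG scale: 7, Size: 832x1216, Clip skip: 2,
Denoising strength: 0.5, Hires upscale: 1.8, Hires steps: 15, Hires upscaler: R-ESRGAN 4x+ Anime6B
```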
Creator Sponsors
The sponsors listed here are not affiliated with Diffus. Diffus provides an alternative online Stable Diffusion WebUI experience.
We extend our sincere gratitude to our esteemed GPU sponsors for their generous support: https://cloud.lanyun.net
This is an image-generation model based on Illustrious-xl and further trained by Laxhar Lab.
https://civitai.com/models/795765/illustrious-xl
It is trained on the latest full Danbooru and e621 datasets, captioned with native tags.
The version uploaded on 8 October was trained for 5 epochs on 8×H100 GPUs as an Early Access version.
A new version is currently training on 32×H100 GPUs.
Laxhar Lab on Hugging Face:
https://huggingface.co/Laxhar/sdxl_noob
Follow-up models and technical reports will be posted on Hugging Face.
QQ groups for communication: ①875042008 ②914818692 ③635772191
DISCORD: Laxhar Dream Lab SDXL NOOB
This version improves on the character and style fit of Illustrious-xl v0.1, so the specific characteristics of characters are represented more faithfully. Building on this beta version, Laxhar Lab is continuing to train a new open-source SDXL model, aiming to minimize the need for LoRAs and to release a more Noob-friendly, one-click SDXL anime model!
Note: The model name and other details are subject to change.
This model is still undergoing training!!!
Current Status
This is a 50%-progress version intended for internal use. However, we are considering allowing limited external testing.
Datasets
– Danbooru (Pid: 1~7,600,039):
https://huggingface.co/datasets/KBlueLeaf/danbooru2023-webp-4Mpixel
– Danbooru (Pid > 7,600,039):
https://huggingface.co/datasets/deepghs/danbooru_newest-webp-4Mpixel
– E621 data as of 2024-04-07:
https://huggingface.co/datasets/NebulaeWis/e621-2024-webp-4Mpixel
Caption
<1girl/1boy/1other/...>, <character>, <series>, <artists>, <special tags>, <general tags>
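The caption order above can be sketched as a small helper that assembles a prompt from its components (the function name and the example character/series tags are illustrative, not part of the model's tooling):

```python
def build_prompt(subject, characters=(), series=(), artists=(),
                 special=(), general=()):
    """Assemble a prompt in the documented caption order:
    subject count -> character -> series -> artists -> special -> general."""
    parts = [subject, *characters, *series, *artists, *special, *general]
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    "1girl",
    characters=["hatsune miku"],
    series=["vocaloid"],
    special=["masterpiece", "best quality"],
    general=["long hair", "solo", "looking at viewer"],
)
```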
Quality Tags
For quality tags, we evaluated image popularity through the following process:
- Data normalization across the various sources and rating systems.
- Application of a time-based decay coefficient according to date recency.
- Ranking of images within the entire dataset based on this processing.
Our ultimate goal is to ensure that quality tags effectively track user preferences in recent years.
| Percentile range | Quality tag |
| --- | --- |
| > 95th | masterpiece |
| > 85th, ≤ 95th | best quality |
| > 60th, ≤ 85th | good quality |
| > 30th, ≤ 60th | normal quality |
| ≤ 30th | worst quality |
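The percentile cutoffs map directly onto a lookup function (a sketch; the function name is ours, not part of any released tooling):

```python
def quality_tag(percentile: float) -> str:
    """Map a popularity percentile (0-100) to its quality tag,
    following the cutoff table above."""
    if percentile > 95:
        return "masterpiece"
    if percentile > 85:
        return "best quality"
    if percentile > 60:
        return "good quality"
    if percentile > 30:
        return "normal quality"
    return "worst quality"
```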
In the CCIP test, NoobAI-XL improved by approximately 2% over its base model. Across data from more than 3,500 characters, 89.2% achieved a CCIP score above 0.9. Given the current model's performance, the dataset behind the CCIP test needs to be expanded further.
NoobAI-XL Short Test Report V0.1
https://nx9nemngdhk.feishu.cn/docx/XcAddUhDOo57U7x7MbXcE6VNnYc
Monetization Prohibition:
● You are prohibited from monetizing any closed-source fine-tuned or merged model, i.e., any derivative whose source code / weights and usage are withheld from the public.
● As per the license, you must openly publish any derivative models and variants. This model is intended for open-source use, and all derivatives must follow the same principles.
License
This model is released under the Fair-AI-Public-License-1.0-SD.
Please see this website for more information:
Freedom of Development (freedevproject.org)
Many thanks to those who came before us for sharing their experience and training insights, and we welcome other labs to pick up the torch and make community anime models better and better!
The participants, contributors, and testers of the model are acknowledged below
(listed in no particular order)
Participants
L_A_X https://civitai.com/user/L_A_X
https://www.liblib.art/userpage/9e1b16538b9657f2a737e9c2c6ebfa69
li_li https://civitai.com/user/li_li
nebulae https://civitai.com/user/kitarz
Chenkin https://civitai.com/user/Chenkin
Euge https://civitai.com/user/Euge_
Contributors
Narugo1992:
Thanks to narugo1992 and the deepghs team he leads for open-sourcing a range of training sets, image-processing tools, and models.
https://huggingface.co/deepghs
Naifu:
Training scripts
https://github.com/Mikubill/naifu
Onommai:
Thanks to onommai for open-sourcing such a powerful base model.
aria1th261 https://civitai.com/user/aria1th261
neggles https://github.com/neggles/neurosis
parsee-mizuhashi https://huggingface.co/parsee-mizuhashi
bluvoll https://civitai.com/user/bluvoll
sdtana https://huggingface.co/sdtana
chewing https://huggingface.co/chewing
irldoggo https://github.com/irldoggo
reoe https://huggingface.co/reoe
kblueleaf https://civitai.com/user/kblueleaf
Yidhar https://github.com/Yidhar
ageless 白玲可 Creeper KaerMorh 吟游诗人 SeASnAkE zwh20081 Wenaka~喵 稀里哗啦 幸运二副 昨日の約. 445 EBIX Sopp Y_X adsfssdf Minthybasis Rakosz