It’s really hard to get dark skin sometimes. A lot of the time it’s not even just the model: LoRAs and Textual Inversions make the skin lighter again, so you have to try even harder. It’s going to take conscious effort from people to tune models that are inclusive. With the way media is biased right now, I feel like it’s going to take a lot of effort.
“Inclusive models” would need to be larger.
Right now people seem to prefer smaller quantized models, with whatever set of even smaller LoRAs on top that make them output what they want… and to only include the more generic elements in the base model.
[citation needed]
To my understanding the problem is that the models reproduce biases in the training material; it is not a question of model size. Alignment is currently a manual process after the initial unsupervised learning phase, often done by click-workers (Reinforcement Learning from Human Feedback, RLHF), and aimed at coaxing the model towards more “politically correct” outputs. But by that point the damage is already done: the bias is encoded in the model weights and will resurface in the outputs, either at random or if you “jailbreak” hard enough.
In the context of the OP, if your training material has a high volume of sexualised depictions of Asian women, the model will reproduce that in its outputs, which is also the argument the article makes. So what you need for more inclusive models is essentially a de-biased training set designed with that specific purpose in mind.
I’m happy to be corrected here, especially if you have any sources to look at.
You can cite me on this:
First, there is no such thing as a “de-biased” training set, only sets with whatever target set of biases you define for them to reflect.
Then, there are only two ways to change the biases of a training set:
either you replace data until your desired objective, which will reduce the model’s quality for any of the alternatives
or you add data until your desired objective, which will require an increased size to encode the increased amount of data, or the model’s quality will go down for all cases (you’d be diluting every other case)
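A toy illustration of those two options, with made-up numbers and labels, just to make the trade-off concrete:

```python
# Toy "training set": 90 samples of one kind, 10 of another.
dataset = ["group_a"] * 90 + ["group_b"] * 10

# Option 1: replace data until a 50/50 target mix is reached.
# The set stays at 100 samples, but 40 group_a samples are gone,
# so quality on group_a prompts drops.
replaced = ["group_a"] * 50 + ["group_b"] * 50

# Option 2: add data until the same mix is reached.
# Nothing is removed, but the set grows from 100 to 180 samples,
# which needs a bigger model to encode at the same quality.
added = dataset + ["group_b"] * 80

print(len(replaced), len(added))  # 100 180
```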
For reference, LoRAs are a sledgehammer approach to apply the first way.
As for the article, it’s talking about the output of some app, with unknown extra prompting and LoRAs getting applied in the back, so it’s worthless as a discussion of the underlying model, much less as a discussion of all models.
Yes, I obviously meant “de-biased” as defined by whoever makes the set. Didn’t think it worth mentioning, as it seems self-evident. But again, in concrete terms regarding the OP this just means not having your dataset skewed towards sexualised depictions of certain groups.
either you replace data until your desired objective, which will reduce the model’s quality for any of the alternatives
[…]
For reference, LoRAs are a sledgehammer approach to apply the first way.
The paper introducing LoRA seems to disagree (emphasis mine):
We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks.
There is no data replaced; the model is not changed at all. In fact, if I’m not misunderstanding it, it adds an additional neural network on top of the pre-trained one, i.e. it’s adding data instead of replacing any. Fighting bias with bias, if you will.
And I think this is relevant to a discussion of all models, as reproduction of training set biases is something common to all neural networks.
That paper is correct (emphasis mine):
We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks.
You can see how it works in the “Introduction” section, particularly figure 1, or in this nice writeup:
https://dataman-ai.medium.com/fine-tune-a-gpt-lora-e9b72ad4ad3
LoRA is a “space and time efficient” technique to produce a modification matrix for each layer. It doesn’t introduce new layers, or add data to any layer. On the contrary, it’s bludgeoning all the separate values in each layer, modifying each whole column and whole row by the same delta (or only a few deltas; in any case A is r×k and B is d×r, with r ≪ min(d, k) for a d×k weight matrix W).
Turns out… that’s enough to apply some broad-strokes changes to a model, which still limps along thanks to the remaining value variation. But don’t be mistaken: with each additional LoRA applied, a model loses some of its finer details, until at some point it descends into total nonsense.
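To make those shapes concrete, here’s a minimal PyTorch sketch of the idea (not the paper’s reference implementation; the class name, rank and init scale are just illustrative, though the zero-init of B and the alpha/r scaling do follow the paper):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer W plus a trainable low-rank delta B @ A."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():                  # W (and bias) stay frozen
            p.requires_grad = False
        d, k = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)   # r x k, small random init
        self.B = nn.Parameter(torch.zeros(d, r))          # d x r, zero init -> delta starts at 0
        self.scale = alpha / r

    def forward(self, x):
        # h = W x + (alpha / r) * B A x  -- only A and B ever receive gradients
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```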
Yeah but that’s my point, right? That:
you do not “replace data until your desired objective”.
the original model stays intact (the W in the picture you embedded).
Meaning that when you change or remove the LoRA (A and B), the same types of biases will just resurface from the original model (W). Hence “less biased” W being the preferable solution, where possible.
Don’t get me wrong, LoRAs seem quite interesting; they just don’t seem like a good general approach to fighting model bias.
“less biased” W being the preferable solution, where possible.
Not necessarily. There are two parts to a diffusion model: a tokenizer, and a neural network with a series of layers (W in this case would be a single layer) that react in some way to some tokens. What you really want is a W “with more information”, no matter whether some tokens refer to a more or less “fair” (less biased) portion of it.
It doesn’t really matter if “girl = 99% chance of white girl + 1% of [other skin tone] girl”, and “asian girl = sexualized asian girl”… as long as the “biased” token associations don’t reduce the amount of “[skin tone] girl” variants you can extract with specific prompts, and the model still reacts correctly to negative prompts like “asian girl -sexualized”.
LoRAs are a way to bludgeon a whole model into a strong bias, like “everything is a manga”, or “everything is birds”, or “all skin is frogs”, and so on. The interesting thing about LoRAs is that, if you get a base model where “girl = sexualized white girl”, and add an “all faces are asian” LoRA, and a “no sexualized parts” LoRA… then well, you’ve beaten the model into submission without having to use prompts (kind of a Pyrrhic victory).
That is, unless you want something like a “multiracial female basketball team”.
That would require the model to encode the “race” as multiple sets of features, then pick one at random for every player in whatever proportion you find acceptable… but for that, you’re likely better off adding an LLM preprocessor stage to pick a random set of races in your desired proportions, then having it instruct a bounding-box diffusion model to draw each player with a specific prompt, so the bias of the model’s tokens would again become irrelevant.
Forcing the model to encode more variants per token is where you start needing a larger model, or start losing quality.
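For what it’s worth, this is how negative prompts are exposed in e.g. the diffusers library (model id and prompt strings are just placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait photo of an asian woman in office clothes",
    negative_prompt="sexualized, nsfw, revealing clothes",
    num_inference_steps=30,
).images[0]
image.save("portrait.png")
```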
a neural network with a series of layers (W in this case would be a single layer)
I understood this differently: W is a layer of the Transformer architecture, not of the model as a whole. So it is a single feed-forward or attention module, which is a layer in the Transformer. As the paper says, a LoRA:
injects trainable rank decomposition matrices into each layer of the Transformer architecture
It basically learns to shift the output of each Transformer layer. But the original Transformer stays intact, which is the whole point, as it lets you quickly train a LoRA when you need this extra bias, and switch to another one for a different task easily, without re-training your Transformer. So if the source of the bias you want to get rid of is already in the original Transformer weights, you are just fighting fire with fire.
Which is a good approach for specific situations, but not for general ones. In the context of the OP you would need one LoRA to stop it sexualising Asian women, then another one for the next bias you find, and before you know it you have hundreds and your output quality has degraded irrecoverably.
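A quick sketch of that swapping with the diffusers LoRA loader, as I understand its current API (the LoRA repo names are made-up placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# The base weights (all the W's) are downloaded once and never retrained.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("someone/watercolor-style-lora")   # placeholder repo id
watercolor = pipe("a multiracial female basketball team").images[0]

# Unload it and the base model's behaviour -- biases included -- comes straight back.
pipe.unload_lora_weights()
pipe.load_lora_weights("someone/pixel-art-lora")          # another placeholder
pixel_art = pipe("a multiracial female basketball team").images[0]
```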
I wouldn’t mind. I’m here for it.
Are you ready to run a 100B FP64 parameter model? Or even a 10B FP32 one?
Over time, I wouldn’t be surprised if 500B INT8 models became commonplace with neuromorphic RAM, but there’s still some time for that to happen.
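Rough weight-only memory math behind those numbers (ignoring activations and runtime overhead):

```python
def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return params_billions * bytes_per_param  # 1e9 params * N bytes = N GB

print(weight_gb(100, 8))  # 100B parameters at FP64 -> ~800 GB
print(weight_gb(10, 4))   # 10B parameters at FP32  -> ~40 GB
print(weight_gb(500, 1))  # 500B parameters at INT8 -> ~500 GB
```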
You don’t need that many parameters; 4 GB checkpoints work just fine.
For more inclusive models, or for current ones? In order to add something, either the size has to grow, or something would need to get pushed out (content, or quality). 4 GB models are already at the limit of usefulness, and both DALLE3 and SDXL run at about 12B parameters, so to make them “more inclusive” they’d have to grow.
I’m saying SD 1.5 and SDXL capture the concepts just fine; it’s just that during fine-tuning people train away some of the diversity.
Wait, by “fine-tuning”… do you mean LoRAs? Because those are more like brain surgery with a sledgehammer, pretty much the opposite of “fine”. I don’t think it’s possible for LoRAs to avoid having undesirable side effects… and I don’t think people even want that.
Actual “fine” tuning would be adding the LoRA’s training data to the original set, then training the whole model from scratch… and that would require increasing the model’s size to encode the increased amount of data for the same output quality.
I mean like this. This paper just dropped the other day.