Closed Helmets / Robots
Here are a few example images generated with the LORA to produce fully covered helmets and robots / cyborgs, along with some reasons why this is a bit of a challenge and tips that may help. (This post is in response to a question asked on the older full model I have uploaded.)
There are far more human faces in the training dataset than closed helmets, so in general the LORA will tend to push in that direction. Even in the image examples on this post you can see that human faces are sometimes generated, even though that's not the desired outcome and doesn't make sense given the prompt. In many cases you will find that certain seed values are more likely to produce human faces, while other seed values will very reliably produce helmets.
Large sample step values can also push generations towards human faces: the more time the sampler has to process the image, the more chances it has to drive the result towards a human face. 30-40 steps can often yield a decent fully closed helmet, while that same seed at 60-90 steps may produce an open helmet with a person's face. The seed value is again going to play the biggest role in whether or not that happens.
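If you want to test this systematically, a sweep over seed and step-count combinations makes the effect easy to spot. This is only a minimal sketch assuming the `diffusers` library; the LORA path, prompt and seed values are placeholders, not my actual files or settings.

```python
from itertools import product

def make_jobs(seeds, step_counts):
    # Every (seed, steps) pair to try: low step counts often keep the helmet
    # closed, while the same seed at high step counts may drift toward a face.
    return list(product(seeds, step_counts))

def run_sweep(lora_path, prompt="closed welding helmet, robot",
              seeds=(1234, 5678), step_counts=(30, 40, 60, 90)):
    # Needs a CUDA GPU and the torch + diffusers packages installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # standard SD 1.5 base, as in the post
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.load_lora_weights(lora_path)  # placeholder path to the LORA file

    for seed, steps in make_jobs(seeds, step_counts):
        # Re-seeding the generator per image makes the step count the only variable.
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, num_inference_steps=steps,
                     generator=generator).images[0]
        image.save(f"helmet_seed{seed}_steps{steps}.png")
```

Comparing the saved images row by row (one seed per row, steps increasing across) shows at which step count a given seed starts drifting towards a face.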
The base model used with the LORA is also quite important, as many models have had substantial amounts of additional training on images of people, which makes those models much more prone to producing human faces. All the sample images in this post use the standard Stable Diffusion 1.5 base model.
Extra components like "better image" negative embeddings, character embeddings or LORAs, and style adjustments also matter. These are most often quite heavily biased towards improving the generation of human likeness (or a specific likeness), or have additional training on images that contain mostly person or human elements, and so can drive generations towards human faces rather than closed helmets or robotics. (LORAs/embeddings trained to make robots can, in the same way, improve the likelihood of getting mechanical or robotic images.) Reducing the strength of these kinds of extra components, or removing them entirely, may help decrease the number of faces generated when prompting for other things.
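In `diffusers`, reducing a LORA's strength can be done by passing a scale through `cross_attention_kwargs` at inference time; a sketch of comparing a style LORA at full, half, and zero strength follows. The paths and prompt are placeholders, and this assumes the extra component is a LORA (embeddings would be adjusted by their own prompt weighting instead).

```python
def lora_kwargs(scale):
    # scale < 1.0 weakens any LORAs currently loaded on the pipeline;
    # 0.0 effectively disables them for this generation.
    return {"cross_attention_kwargs": {"scale": scale}}

def compare_scales(style_lora_path, prompt="closed helmet, android",
                   scales=(1.0, 0.5, 0.0)):
    # Needs a CUDA GPU and the torch + diffusers packages installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights(style_lora_path)  # placeholder path

    for scale in scales:
        # Same seed each time, so the LORA strength is the only variable.
        generator = torch.Generator("cuda").manual_seed(1234)
        image = pipe(prompt, num_inference_steps=30, generator=generator,
                     **lora_kwargs(scale)).images[0]
        image.save(f"android_scale_{scale}.png")
```

If the 0.5 or 0.0 images keep the helmet closed where the 1.0 image shows a face, the extra component is the likely culprit.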
Prompting for types of helmets that exist in the real world, like welding helmets or diving masks, can help as it gives the base model a stronger reference. The same goes for robots: types of mechanical things, or different variations on the word such as robot, cyborg, automaton or android, will all tend towards different kinds of results. You can see in the grid sample that while the prompt does have an effect, the seed value tends to matter more since those terms are all fairly similar. You can also notice how the addition of the term agriculture has a much more profound effect on the images generated, so trying different types of industry may also push generations towards different styles of robots.
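A grid like the one in the samples can be scripted by crossing subject words with industry words while keeping the seed fixed, so the prompt is the only thing that changes. Again a hedged sketch assuming `diffusers`; the industries beyond agriculture are my own guesses, not terms from the post.

```python
def prompt_grid(subjects, industries):
    # One prompt per (industry, subject) combination, e.g. "agriculture robot".
    return [f"{industry} {subject}"
            for subject in subjects for industry in industries]

def run_grid(lora_path, seed=1234):
    # Needs a CUDA GPU and the torch + diffusers packages installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights(lora_path)  # placeholder path

    subjects = ["robot", "cyborg", "automaton", "android"]
    industries = ["agriculture", "mining", "medical"]  # only agriculture is from the post
    for prompt in prompt_grid(subjects, industries):
        # Fixed seed: any visual difference comes from the prompt wording alone.
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, num_inference_steps=30,
                     generator=generator).images[0]
        image.save(prompt.replace(" ", "_") + ".png")
```

Running the same grid across a handful of seeds then shows which differences come from wording and which are just seed luck.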