Weighting Logic is Wrong? #19

Open
igaziev13 opened this issue Oct 27, 2022 · 2 comments

Comments

@igaziev13

Hi, thanks for such a great project! Cool!

I have a question regarding Weighting. Let's take your example:

  • hat_red+10.png
  • hat_orange+20.png
  • hat_yellow+20.png
  • hat_green+40.png
  • hat_blue.png

Logically, the sum of all weights should be 100%. But in your example, it seems the blue hat will appear 100% of the time, so all the other hats have no chance of being chosen. Am I right?
@senqi77

senqi77 commented Nov 24, 2022

In my testing, the other layers were still generated, not only the one that seemed to be at 100%. The weight appears to act as a probability applied each time an image is generated, so a trait whose filename has no "+" simply has a high chance of appearing rather than a guaranteed one. Another possibility is that once a part reaches its maximum generation limit, other parts are selected instead, and generation then continues according to the "+" weights.
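A minimal sketch of that per-generation reading, assuming each layer's trait is drawn independently with probability proportional to its weight every time an image is generated. The weights come from the example filenames in this thread; `hat_weights` and `pick_hat` are illustrative names, not the project's code, and `hat_blue.png` (no "+" suffix) is handled in the next comment.

```python
import random

# Weights taken from the "+N" suffixes in the example filenames above.
hat_weights = {
    "hat_red+10.png": 10,
    "hat_orange+20.png": 20,
    "hat_yellow+20.png": 20,
    "hat_green+40.png": 40,
}

def pick_hat() -> str:
    """One independent weighted draw per generated image."""
    names = list(hat_weights)
    return random.choices(names, weights=[hat_weights[n] for n in names], k=1)[0]

# Rough check: over many draws, green should appear about four times as often as red.
counts = {name: 0 for name in hat_weights}
for _ in range(10_000):
    counts[pick_hat()] += 1
print(counts)
```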

@CryptoWalkArt

Every trait needs a number; otherwise they auto-split. So if you have two that are blank, they will share 50% of the mint. That hat_blue needs +10.
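A minimal sketch of that auto-split rule, assuming files without a "+N" suffix split whatever remains of 100 equally. The regex, the `resolve_weights` name, and the split rule itself are illustrative assumptions, not the project's actual implementation.

```python
import re

def resolve_weights(filenames):
    """Give each file its "+N" weight; files without one split the remainder of 100 equally."""
    explicit, blanks = {}, []
    for name in filenames:
        match = re.search(r"\+(\d+(?:\.\d+)?)\.png$", name)
        if match:
            explicit[name] = float(match.group(1))
        else:
            blanks.append(name)
    remainder = max(0.0, 100.0 - sum(explicit.values()))
    share = remainder / len(blanks) if blanks else 0.0
    return {**explicit, **{name: share for name in blanks}}

hats = [
    "hat_red+10.png",
    "hat_orange+20.png",
    "hat_yellow+20.png",
    "hat_green+40.png",
    "hat_blue.png",
]
print(resolve_weights(hats))
# The explicit weights sum to 90, so under this reading hat_blue.png
# picks up the remaining 10 rather than appearing 100% of the time.
```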
