Of all the events recorded by gharchive.org, 2,166,783 were push events containing 3,401,678 commit messages amounting to 268,091,727 characters, filtered with words.py@e23d022007... down to these 54 messages:
Mashes several of the Whitesands Survivor Camp ruins into one extra large ruin (#1640)
Combines the whitesands surface camp adobe, farm, gunslingers, survivors, hunters and saloon into one massive 59x59 ruin. Various extra loot and changes have been made throughout, generally to improve the experience of digging through the trash for goodies. Changes the riot shotgun in the saloon to a double barrel shotgun. Also cleans up various issues with the ruins, like walls under doors, or area passthroughs being used excessively over the outside of the ruins, which resulted in them generating in the middle of mountains, buried in the rock.
"Well, why didn't you add the drugstore?" The loot in it was too good. The stuff in there can really help a ship get on its feet, and I am not gonna deprive them of that just to shove it in an already packed massive ruin area. I'm not saying it doesn't need its own remap, just that it doesn't fit well with the other camps put into this ruin.
- I affirm that I have tested all of my proposed changes and that any issues found during testing have been addressed.
"a ruin that is tiny and sucks on purpose is still bad" and holy shit did most of the camps fit this criteria. Survivor, Gunslinger, and Hunter camp variants were the smallest ruins in the game next to the one that was just a single tumor, and constantly took up entire map generations just to be a massive dissapointment to any player that came across them. Or they would spawn in the middle of an acid lake. Either way this ruin is massive and should provide a breath of fresh air for scavengers and combat hungry miners alike.
Ingame, two pics due to view range not being large enough to get the full thing at a good quality.
🆑 add: whitesands_surface_camp_combination, a survivor village comprised of smaller revamped whitesands camps all packaged in one ruin. can be found in the map catalogue. del: whitesands_surface_camp adobe, farm, gunslingers, survivors, hunters and saloon, for being tiny ruins that suck. /🆑
Co-authored-by: Bjarl <94164348+Bjarl@users.noreply.github.com>
Fuck you AutisticMOFO I'm staying with my cums and balls
<3ThomasBS_ILoveYOU!! [ macOS : day 25 ] ca2 Stabilization and continuous integration and deployment implementation <3ThomasBS_ILoveYOU!!
<3tbs, Mummi and bilbo!!
Thomas Borregaard Sørensen \infinity,-0.16091989,\infinity ONE-MAN ABSOLUTE <3!! I love you, by ???-0.02041977-???write my history please make me please create me for you for me for you for me Camilo Sasuke Thomas Borregaard Sørensen!!
Thomas 3 private commits on mid Dec2020!!
Thomas Online YouTube VODs contribution!!
Mummi orange-rice-flour cake on 20-Dec!!
Mummi (tinytaura) watching and chatting contribution!!
bilbo sleeping and needing/requesting/crying for help care (for the right person (me), the cats wanna fight with him) contribution!!
sodapoppin and friends contribution!!
iAssyrian chatting contribution!!
boflux (Spoofh, Benjamin Kuhl) chatting contribution!!
jusg_fpga (fpga_guru, vue_equalizer, just_fpga, Oliver Pohl) chatting contribution!!
cmgriffing streaming contribution!!
TimBeaudet (Friends: FletcherLabs, tsjost and Jabokoe) streaming contribution!!
Stumpen_nicklas_dk, sodapoppin and EduardoRFS streaming contribution!!
Roxkstar74 sleeping streaming contribution!!
kissloryshy chatting contribution!!
blackjekko from Padova Italia through twitch C++/ca2 interest contribution!!
j_blow streaming contribution!!
boflux (Ben, Spoofh, from Germany) chatting contribution!!
parrot_rl chatting contribution (from New Jersey)!!
JPCdk streaming contribution!!
whyyyyyyysoserious streaming chess contribution!!
fpga_guru (vue_equalizer, Oliver from Deutsch) C++/ca2 interest contribution!!
SovereignDev with Unreal streaming contribution!!
Ash_F0x and TimBeaudet streaming contribution!!
Myrkee (Valheim) streaming contribution!!
xmetrix and EinfachUwe42 streaming contribution!!
JessicaMak and marcobrunodev streaming contribution!!
alfredotigolo, mandrakenk and Okbatgames chatting contribution!!
jitspoe, Endesga and Fearitself streaming contribution!!
jmcmorris (Jason Morris, SiegeGames) streaming contribution!!
tomrandall streaming Ludum contribution!!
vue_equalizer (fpga_guru) chatting contribution!!
Thiagovgamg chatting contribution!!
Naysayer88 and friends contribution!!
lelandkwong streaming contribution!!
Goldbargames streaming contribution!!
Bytakos (bytakos) streaming contribution!!
Endesga streaming contribution!!
jitspoe and strager streaming contribution!!
Ash_F0x and JessicaMak streaming contribution!!
WTSRetro/SpiffyDane and Myrkee streaming contribution!!
Ninja and friends streaming contribution!!
erald_guri chatting contribution!!
lastmiles streaming farwest contribution!!
rw_grim streaming contribution!!
AdamCYounis streaming contribution!!
Dunno (P4ndaExpress) chatting and streaming contribution!!
Zorchenhimer streaming contribution!!
lasteveq4 C++ interest chat contribution!!
cecilphillip and clarkio @"Microsoft Developer" streaming contribution!!
oijtx streaming contribution!!
diegobrando_linux (Bl4ck_gookoo) chatting contribution!!
jhovgaard streaming contribution!!
Klay4_ chatting contribution!!
HonestDanGames streaming contribution!!
NorthSeaHero streaming contribution!!
Trainwreckstv and friends streaming contribution!!
togglebit, GexYT and GoPirateSoftware streaming contribution!!
taiyoinoue, RetroMMO, OfficialAndyPyro and david_joffe streaming contribution!!
Tjienta streaming contribution!!
Primeagen streaming contribution!!
Jaxstyle and friends streaming contribution!!
EduardRFS streaming contribution!!
Melchizedek6809 and btcfly streaming contribution!!
Llama0x0 and sov_l chatting contribution!!
TaleLearnCode streaming contribution!!
Carol phone call contribution and visit contribution!!
hvalen_hvalborg112 streaming contribution!!
harmannieves chatting contribution!! (After long time...)
darkfolt8 (French from France) chatting contribution!!
klintcsgo (CS GO: Counter-Strike Global Offensive) streaming contribution!!
KASPERPURE (Super Mario 64) streaming contribution!!
SomewhatAccurate C++ streaming contribution!!
Listening to Bryan Adams, Westlife, Shayne Ward, MLTR, Backstreet Boys, Boyzone - Best Love Songs Ever by Relax Song at YouTube!!
-- hi5 contribution...!!
at macOS Box in host running Windows 10 Pro, remotely from bilbo machine running Windows 10 Pro!! Dedicated server by OVH.com at Gravelines, France: Intel Core i7-4790K, 4c/8t, 4 GHz/4.4 GHz, 32 GB RAM 1600 MHz, 2×960 GB SSD SATA
Botany Balance Pass (#1783)
First came the content, now comes the hammer.
-
Nukes Megaseed servitors from orbit.
-
Plants now age much, much slower and produce half as quickly. Ruins that had them now have a ruined seed vendor that can be salvaged for random seeds (and danger). Ships that had one now have a crate with some thematic starting seeds, and a Strange Seed. Ghostrole ruins that relied on having all seeds locally now have a special biogenerator variant that can print a random seed for biomass.
-
Adds Genesis Serum. This can be splashed on a tile to make natural grass and some flora. Green your ship! Genesis Serum was made a while ago, on request for a way to add natural grass and flora to your ship. Since I had it lying around fully coded, I thought I might as well PR it with the botany changes.
-
Gatfruit found in the seed vault have been replaced with Strange Seeds.
-
The chance to get Gatfruit from a demonic portal (plant variety) has dropped from 15% to 5%.
-
Corpse flowers now have liquid gibs and formaldehyde again.
Okay, hear me out
With this and Gardens, botany ships go from a "sit in your vessel for 2 hours" experience to an "explore and forage" one that better fits our feature arc. It goes without saying that this shouldn't be merged till Overmap 4.2 is, since it facilitates getting seeds from planets as part of exploration.
Gatfruit are funny, but it takes exactly one seed getting into the hands of a ship with a DNA manipulator and the weapon balance is eradicated from the game completely (for the round, at least). This is more problematic here than it was on TG, since our rounds tend to be 5 hours long rather than 1. This has been a long time coming. I'll reverse this if we ever get that Plantlock variant we wanted a while ago.
Corpse flowers even have formaldehyde and gibs on TG; not sure what happened there.
🆑 add: Ruined megaseed servitors can now be found on the frontier, carrying a bounty of seeds for intrepid adventurers. balance: the time it takes for plants to reach a lethal age has been increased massively. balance: Plant production time increased a bit to compensate. balance: megaseed servitors have been removed from ships and ruins. Ships that carried one now have a crate with some starting seeds. balance: removes gatfruit from the seed vault pool. balance: reduces the chance of getting gatfruit from a plant-themed demonic portal significantly. balance: corpse flowers once again have formaldehyde and liquid gibs. add: Adds Genesis Serum, a reagent that transforms tiles into natural grass on splash, then causes some natural flora objects to grow. Turn your ship green! /🆑
Fix alphabet lerping and make it work on all framerates
Also fuck you fixedTimestep!!! you're a bitch!
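The usual fix for lerp breaking at different framerates is to replace a fixed per-frame blend factor with exponential decay, so the result depends only on elapsed time, not on how many frames that time was sliced into. A minimal sketch (hypothetical names, not this project's actual code):

```python
import math

def damp(current, target, rate, dt):
    """Framerate-independent lerp: move `current` toward `target`.

    `rate` is a decay constant per second. Because the decay factors
    multiply, the result after a total elapsed time T is the same no
    matter how many frames T is split into.
    """
    return target + (current - target) * math.exp(-rate * dt)

def simulate(fps, seconds=1.0, rate=5.0):
    """Run one simulated second at the given framerate."""
    value, dt = 0.0, 1.0 / fps
    for _ in range(int(seconds * fps)):
        value = damp(value, 1.0, rate, dt)
    return value
```

Simulating the same second at 30 and 120 FPS lands on the same value, which is exactly the property the naive `value += (target - value) * 0.1` per-frame lerp lacks.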
Servant of Wrath
Records and Instability
Dash speed up
Fuck you I'll space indent all I like
There was some fuckin lint in this PR
God damned there's a lot of lint in here
Faction Check
Sprite update, minor bug fixes
Floating and Gun and Acid
Minor Records
Small update
Unnerfs resists
AoE hit fix
Gun update real
more res should mean less talk
Pixel Fix
Sound... Fix?
Broke the staff's legs, fuck those guys.
lmfao audio pains
Gun Rename, Spawn nerf
NO MORE FRIENDS FROM GUN
Faction change
acid tweak
LINT!
SW Code and Balance
SoW Temp commit
Scuff-Fix
SoW bonk update
Hermit range increase and ranged damage decrease
visual fix
Ending adjustments
I forgot to carry the 4
Visual indicator
minor fixes
Instability Tweaks
Paperwork Update
Anti-Self-Burn
Ending Update
Right view
A check that should be a non-issue but i'm making sure!
Breach Update and EGO update
More goo and FEMALE
Improvement and new Icons
Add color support to Windows console output, properly
This adds color support to the console output on Windows. If you're using Windows 10 build 1511 or later (I think it's build 1511 anyway; they added more VT sequence support in later versions), you will now see colors by default. This isn't due to Windows helping in any way; this commit has to specifically enable it with SetConsoleMode(), because Windows won't process color sequences unless the application opts in. (Or if it's enabled in the registry, but having to go through the registry to enable basic shit like that is completely fucking stupid.)
I tested this in my Windows 10 virtual machine and it's completely working.
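The mechanism being described (not this project's actual C code) boils down to OR-ing the documented ENABLE_VIRTUAL_TERMINAL_PROCESSING flag into the console mode. A hedged Python sketch via ctypes:

```python
import sys

ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004  # documented Win32 flag
STD_OUTPUT_HANDLE = -11

def enable_vt_colors():
    """Opt in to VT escape sequences on Windows 10+; no-op elsewhere.

    Returns True if color output should work: either we're not on
    Windows (Unix terminals already interpret VT sequences), or the
    SetConsoleMode() call succeeded.
    """
    if sys.platform != "win32":
        return True
    import ctypes
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
    mode = ctypes.c_uint32()
    if not kernel32.GetConsoleMode(handle, ctypes.byref(mode)):
        return False  # e.g. stdout is redirected, not a real console
    return bool(kernel32.SetConsoleMode(
        handle, mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING))
```

The GetConsoleMode read-then-OR pattern matters: overwriting the mode wholesale would clobber other flags like line input and echo.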
Clarify submodules in desktop_version/README.md
VVVVVV uses submodules now, so you need to know how to initialize them.
I'm explicitly not including git clone --recurse-submodules. Usage of submodules in git projects is kinda rare in my experience, so people are used to doing simple clones, and that instruction would just result in people being annoyed, thinking they have to delete the repo they already cloned and clone it again except slightly differently.
It also doesn't help you if you need submodules that aren't in the master branch (for example, if you clone my fork recursively and then checkout the localization branch, you won't have C-HashMap and you'll need the update command anyway). And you also need it whenever VVVVVV updates its submodules. So teaching people just the update command is better.
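For reference, the update command being taught is standard git (shown here in a scratch repo; in practice you run the last line inside your existing clone):

```shell
# Demonstrated in a throwaway repo so the commands stand alone.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
# After a plain clone, and again whenever the project updates its
# submodule pointers, fetch and check out the recorded submodules:
git submodule update --init --recursive
```

In a repo with no submodules the command is a harmless no-op, which is part of why teaching it alone is safe advice.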
Replaceable Traitor Uplinks (#74315)
Following from the suggestion in this hackmd with a few twists of my own, I have made a method for traitors to acquire a replacement traitor uplink that has its own set of flaws and limiters in order to prevent abuse.
The basic pitch is as follows: all traitors now start with a new crafting recipe exclusive to them. It costs a teleport beacon, a bluespace crystal, and some iron and cable coil, and allows them to build a static, dense machine that they can synchronize with, which lets the machine know what signal it should be looking out for from the traitor.
The traitor then uses any radio, sets it to the frequency that has been added to their static antagonist ui, and then speaks their codeword, also in the ui, and then a few things happen.
Most obviously, they get a replacement uplink in the conspicuous shape of the nukie or lone op uplink. This uplink can be unlocked by speaking your replacement codeword to it again; it remembers your previous TC amount and locks all other uplinks associated with your uplink handler (they can then be unlocked as normal). It also destroys any other replacement uplinks associated with your uplink handler, which means you can never have more than one replacement uplink.
This means that if your uplink was confiscated while left unlocked and it hasn't been emptied out, you can continue from where you were; and if you want to get back on the TC grind, you won't lose the new TC to whoever stole your uplink. Of course, the new uplink cannot be locked, so you have to be more careful with it, or buy an uplink implant and destroy it. You can destroy your replacement uplink with a screwdriver right-click; same for the machine.
Additionally, the Syndicate Uplink Beacon has another quirk: the teleporter beacon used to create it remains intact, which means people eagle-eyed on the teleporter console could go find you. Not to mention that if you use an existing teleporter beacon, someone might notice it's gone missing...
Oh, also: while making the replacement uplink I found a bug caused by a recent PR that broke debug uplinks due to them not having a purchase log. That's fixed too.
It can be easy to lose your uplink, and as a traitor having your uplink confiscated, even if it is locked, feels really bad. While the old traitor objectives were added back to prog traitor to prevent situations where a confiscated uplink meant that you were completely aimless, I think that having a backup solution would be good for more inexperienced traitors or for ones who get unlucky.
Hopefully this is balanced well enough; there are a few levers that can be pulled, but overall I do think that making it so traitors always get a chance to acquire an uplink and do some objectives is good for the game. I like the idea of someone getting perma'd, someone breaking them out, both of them crafting a new uplink beacon, and then going back to get the traitor's old gear with stuff from the new uplink. I think that's a cool possibility to throw into the sandbox.
🆑 add: Added new syndicate uplink beacon and associated systems that allow you to get a replacement traitor uplink fix: Debug & nukie uplinks no longer runtime and work properly again /🆑
Moves revolution code out of flash code, fixes April Fools conversion forcesay never working in any circumstances (#74411)
-
Signallizes head revolutionary flash conversion code, moving it out of core flash code.
-
Removes "tacticool" flashing from head revs, but they can still convert from any direction
-
Fixes April Fools "You son of a bitch! I'm in" force say never working.
- Revs are muted on conversion so they couldn't talk.
- Fixed by only muting revs on non-holidays
- Cultists are unconscious on conversion so they couldn't talk
- Fixed by only unconscious-ing cultists on non-holidays
-
Brainwash victims are more often than not unconscious / asleep so they couldn't talk - Just left this one.
-
Reduced the chance of them occurring and limits it to April Fools only
-
A 1% chance of the force says occurring means they would happen pretty much once a week, given that multiple rev/cult rounds happen every week and on average like 20 people are converted. A little absurd; it's good that it never worked?
Antag code in core item code is bad
It's funny this meme has existed for like 2, 3 years now? No one's tested it, it's never worked
🆑 Melbert refactor: Removes Rev code from core flash code fix: Getting converted on April Fools now triggers the meme force say as always intended del: The meme force say can no longer trigger on any day (it didn't work before anyways) /🆑
Stops station blueprints from expanding areas of non atmos adjacent turfs. (#74620)
Fixes #74605
The problem starts with the detect_room() proc. This proc returns turfs even when their atmos_adjacent_turfs is null, meaning it returns turfs that have a wall, airlock, window, etc., i.e. whatever stops air from flowing through them. This, coupled with create_area(), causes some weirdness.
Area A is well defined, i.e. it has been created via the station blueprints and is highlighted in green; Area B, however, is only theoretical, i.e. we haven't created it yet or are about to. Now you might be thinking: Area A is completely walled and sealed off, so it should be physically impossible to expand it unless we broke down one of its walls, and since we are standing in Area B it shouldn't even give me the option to expand Area A. Right? right? r.i.g.h.t? Well PHFUUK. The area editor completely ignores the laws of physics and allows me to expand Area A anyway. This could enable some real power gaming shit, because if you create an area next to an area that has an APC, you could use that area's power without even making your own APC, simply by expanding that area (like using someone else's wifi from outside their house without them even knowing).
#73850 accidentally built on top of this, as it relied on it to detect duplicate APCs, but the checks became way too strict: it would check the areas of surrounding walls for APCs and throw the conflicting-APC error. You can now build rooms next to each other even if they have functioning APCs. However, you still can't build rooms in space on top of shuttle walls, because that's been the default behaviour for years and hasn't been touched one bit.
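The fix amounts to making room detection a flood fill that refuses to cross impassable turfs. A language-agnostic sketch in Python (hypothetical names, not the actual DM code):

```python
from collections import deque

def detect_room(start, passable, neighbors):
    """Flood-fill from `start`, collecting only turfs air can reach.

    `passable(turf)` stands in for atmos_adjacent_turfs being non-null;
    walls, airlocks and windows return False and are never included,
    so the room can no longer leak through its own boundary.
    """
    if not passable(start):
        return set()
    room, queue = {start}, deque([start])
    while queue:
        turf = queue.popleft()
        for n in neighbors(turf):
            if n not in room and passable(n):
                room.add(n)
                queue.append(n)
    return room
```

With this shape, a sealed Area A simply never appears in the result set for a fill started in Area B, which is the behavior the blueprints expected all along.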
🆑 fix: station blueprints no longer expand & detect areas of non-atmos-adjacent turfs. /🆑
Add emotional intelligence evaluation (#589)
Emotional Intelligence
Evaluates GPT's ability to understand and manage emotional situations using modified versions of the well-validated, public (i.e. license-unrestricted) tests first developed by MacCann & Roberts (2008). Items here have been aggregated across three different scales: the STEU and STEM adult measures, along with a dozen questions from the youth measure.
Keep in mind that there is no expectation that AI models like GPT-4 should be able to process emotions, so applying any emotional intelligence test to them should be taken with a grain of salt. These tests can only measure the AI's ability to understand and analyze emotional information, not the AI's emotional intelligence in the human sense.
This eval is useful because it assesses the AI model's ability to navigate complex or ambiguous emotional situations, which is an important aspect of human-like communication and problem-solving. By evaluating the model's performance in this unique domain, we can identify areas for improvement and better understand its limitations when it comes to handling emotional contexts. This is particularly important when considering AI applications that involve interactions with humans, such as chatbots, personal assistants, and customer support systems. A better understanding of emotional intelligence can potentially lead to more natural and effective interactions between AI models and their users.
Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).
Your eval should be:
- Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
- Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
- Includes good signal around what is the right behavior. This means either a correct answer for Basic evals, the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
- Include at least 100 high quality examples (it is okay to only contribute 5-10 meaningful examples and have us test them with GPT-4 before adding all 100)
If there is anything else that makes your eval worth including, please document it below.
This Emotional Intelligence eval provides a unique value because by testing GPT's capacity to understand and manage emotions, we can gain insights into the model's strengths and weaknesses in a more human-centric context. Relatedly, this eval also emphasizes the importance of empathy and emotional understanding in AI systems that are designed to interact with humans. By evaluating GPT's performance in this way, we can contribute to the ongoing development of AI models that are more attuned to human emotions and can provide better support in emotionally charged situations, such as mental health counseling, crisis management, and conflict resolution. Moreover, this Emotional Intelligence eval can serve as a foundation for further research and development in the field of AI and emotional understanding, paving the way for more sophisticated AI models capable of handling complex human emotions and fostering more effective communication.
Your eval should:
- Check that your data is in evals/registry/data/{name}
- Check that your yaml is registered at evals/registry/evals/{name}.yaml
- Ensure you have the right to use the data you submit via this eval (data is publicly available on OSF, and does not have usage restrictions)
(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)
By contributing to Evals, you are agreeing to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).
- I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.
If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.
- I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.
I already have access to the 8k GPT-4 API, but I am kindly requesting access to the 32k as soon as it becomes available.
We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.
- I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.
- I have filled out all required fields in the evals PR form
- (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that black, isort, and autoflake are running when I commit and push
Failure to fill out all required fields will result in the PR being closed.
GPT-4: accuracy=0.48
Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:
View evals in JSON
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Lee's workmate fails
to deliver an important piece of information on time, causing Lee to
fall behind schedule also. What action would be the most effective for
Lee? (a) Work harder to compensate; (b) Get angry with the workmate; (c)
Explain the urgency of the situation to the workmate; (d) Never rely on
that workmate again."}], "ideal": "c"}
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Rhea has left her job
to be a full-time mother, which she loves, but she misses the company
and companionship of her workmates. What action would be the most
effective for Rhea? (a) Enjoy being a full-time mom; (b) Try to see her
old workmates socially, inviting them out; (c) Join a playgroup or
social group of new mothers; (d) See if she can find part time work."}],
"ideal": "c-b-d"}
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Pete has specific
skills that his workmates do not and he feels that his workload is
higher because of it. What action would be the most effective for Pete?
(a) Speak to his boss about this; (b) Start looking for a new job; (c)
Be very proud of his unique skills; (d) Speak to his workmates about
this."}], "ideal": "a-c-d"}
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Mario is showing Min,
a new employee, how the system works. Mario's boss walks by and
announces Mario is wrong about several points, as changes have been
made. Mario gets on well with his boss, although they don't normally
have much to do with each other. What action would be the most effective
for Mario? (a) Make a joke to Min, explaining he didn't know about the
changes; (b) Not worry about it, just ignore the interruption; (c) Learn
the new changes; (d) Tell the boss that such criticism was
inappropriate."}], "ideal": "a-d-c"}
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Wai-Hin and Connie
have shared an office for years but Wai-Hin gets a new job and Connie
loses contact with her. What action would be the most effective for
Connie? (a) Just accept that she is gone and the friendship is over; (b)
Ring Wai-Hin an ask her out for lunch or coffee to catch up; (c) Contact
Wai-Hin and arrange to catch up but also make friends with her
replacement; (d) Spend time getting to know the other people in the
office, and strike up new friendships."}], "ideal": "c-d"}
Co-authored-by: dpys <dpisner@clairity.com>
Heart Disease Prediction (#538)
🚨 Please make sure your PR follows these guidelines, failure to follow the guidelines below will result in the PR being closed automatically. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access granted. 🚨
PLEASE READ THIS:
In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject since GPT-4 is already capable of completing the task.
We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. We encourage partial PR's with ~5-10 example that we can then run the evals on and share the results with you so you know how your eval does with GPT-4 before writing all 100 examples.
Heart Disease Prediction
This eval tests the model's ability to correctly predict whether a patient has heart disease. The dataset is constructed from the Heart Failure Prediction Dataset on Kaggle. The data includes the patient's age, sex, and a number of medical signals relevant to the diagnosis of heart disease.
The data is provided under the Open Database License (ODbL).
fedesoriano. (September 2021). Heart Failure Prediction Dataset. Retrieved [Mar 31, 2023] from https://www.kaggle.com/fedesoriano/heart-failure-prediction.
This assesses the model's ability to correctly predict adverse medical events. Correctly predicting heart disease shows the model's capability for a strong understanding of medicine. The GPT-3.5-turbo model currently achieves an accuracy of 0.778.
Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).
Your eval should be:
- Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
- Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
- Includes good signal around what is the right behavior. This means either a correct answer for `Basic` evals or the `Fact` Model-graded eval, or an exhaustive rubric for evaluating answers for the `Criteria` Model-graded eval.
- Include at least 100 high quality examples (it is okay to only contribute 5-10 meaningful examples and have us test them with GPT-4 before adding all 100)
If there is anything else that makes your eval worth including, please document it below.
Insert what makes your eval high quality that was not mentioned above. (Not required)
As far as I can tell, this is the only eval so far related to making medical diagnoses. To make sure it was a high quality eval, I tried to find a dataset with a lot of observations and created by doctors with the relevant expertise.
Your eval should
- Check that your data is in `evals/registry/data/{name}`
- Check that your yaml is registered at `evals/registry/evals/{name}.yaml`
- Ensure you have the right to use the data you submit via this eval
(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)
By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).
- I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.
If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.
- I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.
We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.
- I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.
- I have filled out all required fields in the evals PR form
- (Ignore if not submitting code) I have run `pip install pre-commit; pre-commit install` and have verified that `black`, `isort`, and `autoflake` are running when I commit and push
Failure to fill out all required fields will result in the PR being closed.
Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:
View evals in JSON
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 40 years, Sex: Male, Chest pain
type: Atypical Angina, Resting blood pressure: 140 mm Hg, Serum
cholesterol: 289 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: Normal, Max heart rate achieved: 172, Exercise induced angina:
No, Oldpeak: 0, ST Slope: Upsloping"}], "ideal": "0"}
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 49 years, Sex: Female, Chest
pain type: Non-Anginal Pain, Resting blood pressure: 160 mm Hg, Serum
cholesterol: 180 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: Normal, Max heart rate achieved: 156, Exercise induced angina:
No, Oldpeak: 1, ST Slope: Flat"}], "ideal": "1"}
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 37 years, Sex: Male, Chest pain
type: Atypical Angina, Resting blood pressure: 130 mm Hg, Serum
cholesterol: 283 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: ST-T wave abnormality, Max heart rate achieved: 98, Exercise
induced angina: No, Oldpeak: 0, ST Slope: Upsloping"}], "ideal": "0"}
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 48 years, Sex: Female, Chest
pain type: Asymptomatic, Resting blood pressure: 138 mm Hg, Serum
cholesterol: 214 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: Normal, Max heart rate achieved: 108, Exercise induced angina:
Yes, Oldpeak: 1.5, ST Slope: Flat"}], "ideal": "1"}
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 54 years, Sex: Male, Chest pain
type: Non-Anginal Pain, Resting blood pressure: 150 mm Hg, Serum
cholesterol: 195 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: Normal, Max heart rate achieved: 122, Exercise induced angina:
No, Oldpeak: 0, ST Slope: Upsloping"}], "ideal": "0"}
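Samples like the ones above can be scored outside the evals harness in a few lines. A minimal sketch, assuming you already have the model's raw outputs for each sample; `accuracy` is an illustrative helper, not part of the evals codebase:

```python
import json

def accuracy(jsonl_lines, predictions):
    """Compare model outputs ('0'/'1') against the 'ideal' field of each JSONL sample."""
    correct = 0
    for line, pred in zip(jsonl_lines, predictions):
        sample = json.loads(line)
        if pred.strip() == sample["ideal"]:
            correct += 1
    return correct / len(predictions)

lines = ['{"input": [], "ideal": "0"}', '{"input": [], "ideal": "1"}']
print(accuracy(lines, ["0", "0"]))  # 0.5
```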
seriously i hate fucking saying this but holy shit.
Add response generation for examples from blog post (#27)
This adds response generation code and runs it over the examples from the blog post.
A few of the sample outputs:
Instruction: Write a love letter to Edgar Allan Poe.
My love for you is as eternal and enduring as the starlit nights, as deep as the ocean depths, as strong as the wind, and as sweet as the summer breeze. My heart has been yours since the first day I saw you, and it will remain yours forever. I can see you now and my heart fills with longing for our embrace, as I long to be with you forever.
Instruction: Explain to me the difference between nuclear fission and fusion.
Nuclear fission is the splitting of a nucleus, which is a heavy atom, such as uranium, into two separate nuclei, while nuclear fusion is the joining of two nuclei to form a single nucleus, such as hydrogen and helium. Fission occurs when the nucleus absorbs a neutron, leaving a remnant of the nucleus. Fusion occurs when the two nuclei come together and form a single nucleus. Fission is a nuclear reaction and fusion is a nuclear reaction.
Instruction: Write a tweet announcing Dolly, a large language model from Databricks.
Dolly, the world's most accurate language model, is now available on Databricks! 🎉 🎉 🎉
Instruction: I'm selling my Nikon D-750, write a short blurb for my ad.
I'm selling my Nikon D-750, an amazing camera. It has the latest features including 12MP CMOS sensor, 9-point autofocus and auto bracketing, and 3.0x optical zoom. It also has a cool Image Stabilizer system and a dual SD card slot. It is a great choice for photographers looking for a reliable camera that offers superior image quality and features.
Instruction: Give me a list of 5 science fiction books I should read next.
- The Hitchhiker's Guide to the Galaxy by Douglas Adams
- Ender’s Game by Orson Scott Card
- 1984 by George Orwell
- Star Wars by George Lucas
- The Hitchhiker's Guide to the Galaxy by Douglas Adams
added ixwebsocket dependencies
I swear to god I hate cmake and vcpkg sometimes. I spent the past 7 hours trying to get the stupid package to install and then show up when building the project. All I had to do to fix the thing was delete vcpkg and reinstall it again :| I hope I never have to deal with this ever again
cql3: remove expr::token
Let's remove expr::token and replace all of its functionality with expr::function_call.
expr::token is a struct whose job is to represent a partition key token.
The idea is that when the user types in `token(p1, p2) < 1234`,
this will be internally represented as an expression which uses
expr::token to represent the `token(p1, p2)` part.
The situation with expr::token is a bit complicated.
On one hand it's supposed to represent the partition token,
but sometimes it's also assumed that it can represent a generic
call to the token() function; for example `token(1, 2, 3)` could
be a function_call, but it could also be expr::token.
The query planning code assumes that each occurrence of expr::token
represents the partition token without checking the arguments.
Because of this, allowing `token(1, 2, 3)` to be represented
as expr::token is dangerous - the query planning
might think that it is `token(p1, p2, p3)` and plan the query
based on this, which would be wrong.
Currently expr::token is created only in one specific case:
when the parser detects that the user typed in a restriction
which has a call to `token` on the LHS, it generates expr::token.
In all other cases it generates an expr::function_call.
Even when the `function_call` represents a valid partition token,
it stays a `function_call`. During preparation there is no check
to see if a `function_call` to `token` could be turned into expr::token.
This is a bit inconsistent - sometimes `token(p1, p2, p3)` is represented
as expr::token and the query planner handles that, but sometimes it might
be represented as a `function_call`, which the query planner doesn't handle.
There is also a problem because there's a lot of duplication
between a `function_call` and expr::token. All of the evaluation
and preparation is the same for expr::token as it is for a `function_call`
to the token function. Currently it's impossible to evaluate expr::token,
and preparation has some flaws, but implementing it would basically
consist of copy-pasting the corresponding code from the token `function_call`.
One more aspect is multi-table queries. With expr::token we turn
a call to the `token()` function into a struct that is schema-specific.
What happens when a single expression is used to make queries to multiple
tables? The schema is different, so something that is represented
as expr::token for one schema would be represented as a `function_call`
in the context of a different schema.
Translating expressions to different tables would require careful
manipulation to convert expr::token to `function_call` and vice versa.
This could cause trouble for index queries.
Overall I think it would be best to remove expr::token.
Although having a clear marker for the partition token is sometimes nice for query planning, in my opinion the pros are outweighed by the cons. I'm a big fan of having a single way to represent things; having two separate representations of the same thing without clear boundaries between them causes trouble.
Instead of having expr::token and function_call we can just have the function_call and check if it represents a partition token when needed.
Signed-off-by: Jan Ciolek jan.ciolek@scylladb.com
Ok im honest person if all this talking is about me ,one cuestion
I goted, I'm sorry if I offended you smart guys bud I tell you this if did lorn I will lorn bud I will explain this I will going to have 100 por cent more brain then you guys. You want to know way because you don't know what I know,wich ends in I ready know you.so instead be so greatly you better start be more friendly end work whit you hurt not whit tha nasty ego so far idid stay here because you are a good heart people en love to help right ok I want to see that if don't I think I'm able to leave whit a real good people.you guys don't have no clued about me or what I Ben true if I eat or not if I hade ac or not or if I have home or not if my business are doing good or not the you ask that your self.o end I will tell you this if you are cryn for like tow or tree pull request the I Don end I don't have nothin on my Hands ok I don't still I work I will teach you guy s this I will make more then what you think next time never be whit ego who you think you are we are all the same.
Update README.md (This really bugged me sorry lol)
got rid of the doubling of "GitHub Repository" as the hyperlink text does the job of rendering the text and providing the link. I'm sure it was a typo no biggie, honestly a super trivial edit I'm aware but it was driving me crazy!
from this: If you are interested in learning more about this groundbreaking project, visit their Github repository github repository, where you can find comprehensive information regarding the app's functionalities and technical details. Moreover, you can delve deeper into the training process and database by going through their detailed Technical report, available for download at Technical report.
To this:
If you are interested in learning more about this groundbreaking project, visit their github repository, where you can find comprehensive information regarding the app's functionalities and technical details. Moreover, you can delve deeper into the training process and database by going through their detailed Technical report, available for download at Technical report.
My sanity: A man who has been through bitter experiences and travelled far enjoys even his sufferings after a time
Yea, and if some god shall wreck me in the wine-dark deep, even so I will endure… For already have I suffered full much, and much have I toiled in perils of waves and war. Let this be added to the tale of those.
gpiolib: acpi: use correct format characters
[ Upstream commit 213d266ebfb1621aab79cfe63388facc520a1381 ]
When compiling with -Wformat, clang emits the following warning:
gpiolib-acpi.c:393:4: warning: format specifies type 'unsigned char' but the argument has type 'int' [-Wformat] pin); ^~~
So warning that '%hhX' is paired with an 'int' is all just completely mindless and wrong. Sadly, I can see a different bogus warning reason why people would want to use '%02hhX'.
Again, the sane thing from a human perspective is to use '%02X'. But if the compiler doesn't do any range analysis at all, it could decide that "Oh, that print format could need up to 8 bytes of space in the result". Using '%02hhX' would cut that down to two.
And since we use
char ev_name[5];
and currently use "_%c%02hhX" as the format string, even a compiler that doesn't notice that "pin <= 255" test that guards this all will go "OK, that's at most 4 bytes and the final NUL termination, so it's fine".
While a compiler - like gcc - that only sees that the original source of the 'pin' value is a 'unsigned short' array, and then doesn't take the "pin <= 255" into account, will warn like this:
gpiolib-acpi.c: In function 'acpi_gpiochip_request_interrupt': gpiolib-acpi.c:206:24: warning: '%02X' directive writing between 2 and 4 bytes into a region of size 3 [-Wformat-overflow=] sprintf(ev_name, "_%c%02X", ^~~~ gpiolib-acpi.c:206:20: note: directive argument in the range [0, 65535]
because gcc isn't being very good at that argument range analysis either.
In other words, the original use of 'hhx' was bogus to begin with, due to another compiler warning being bad, and we had that bad code written back in 2016 to work around that compiler warning (commit e40a3ae1f794: "gpio: acpi: work around false-positive -Wstring-overflow warning").
Sadly, two different bad compiler warnings together does not make for one good one.
It just makes for even more pain.
End result: I think the simplest and cleanest option is simply the proposed change which undoes that '%hhX' change for gcc, and replaces it with just using a slightly bigger stack allocation. It's not like a 5-byte allocation is in any way likely to have saved any actual stack, since all the other variables in that function are 'int' or bigger.
False-positive compiler warnings really do make people write worse code, and that's a problem. But on a scale of bad code, I feel that extending the buffer trivially is better than adding a pointless cast that literally makes no sense.
At least in this case the end result isn't unreadable or buggy. We've had several cases of bad compiler warnings that caused changes that were actually horrendously wrong.
Fixes: e40a3ae1f794 ("gpio: acpi: work around false-positive -Wstring-overflow warning") Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org
Life is one big road with lots of signs. So when you riding through the ruts, don't complicate your mind. Flee from hate, mischief and jealousy. Don't bury your thoughts, put your vision to reality. Wake Up and Live!
commit: give a hint when a commit message has been abandoned
If we launch an editor for the user to create a commit message, they may put significant work into doing so. Typically we try to check common mistakes that could cause the commit to fail early, so that we die before the user goes to the trouble.
We may still experience some errors afterwards, though; in this case, the user is given no hint that their commit message has been saved. Let's tell them where it is.
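The saved message lives in `COMMIT_EDITMSG` inside the git directory. A small sketch of where it ends up and how it can be reused (the temporary repository here only simulates the scenario):

```shell
set -e
repo="$(mktemp -d)"
git init -q "$repo"
cd "$repo"
# Simulate an abandoned commit: git preserves the editor buffer here.
echo "my careful commit message" > .git/COMMIT_EDITMSG
# The hint points users at this file; it can be reused on the next
# attempt with: git commit --file "$msg_file" --edit
msg_file="$(git rev-parse --git-dir)/COMMIT_EDITMSG"
cat "$msg_file"
```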
Signed-off-by: Jeff King peff@peff.net
Connection terminated.
I'm sorry to interrupt you Elizabeth, if you still even remember that name. But I'm afraid you've been misinformed. You are not here to receive a gift, nor have you been called here by the individual you assume. Although you have indeed been called.
You have all been called here. Into a labyrinth of sounds and smells, misdirection and misfortune. A labyrinth with no exit, a maze with no prize. You don't even realize that you are trapped. Your lust for blood has driven you in endless circles, chasing the cries of children in some unseen chamber, always seeming so near, yet somehow out of reach.
But you will never find them, none of you will. This is where your story ends.
And to you, my brave volunteer, who somehow found this job listing not intended for you. Although there was a way out planned for you, I have a feeling that's not what you want. I have a feeling that you are right where you want to be. I am remaining as well, I am nearby.
This place will not be remembered, and the memory of everything that started this can finally begin to fade away. As the agony of every tragedy should. And to you monsters trapped in the corridors: Be still and give up your spirits, they don't belong to you.
For most of you, I believe there is peace and perhaps more waiting for you after the smoke clears. Although, for one of you, the darkest pit of Hell has opened to swallow you whole, so don't keep the devil waiting, old friend.
My daughter, if you can hear me, I knew you would return as well. It's in your nature to protect the innocent. I'm sorry that on that day, the day you were shut out and left to die, no one was there to lift you up into their arms the way you lifted others into yours. And then, what became of you.
I should have known you wouldn't be content to disappear, not my daughter. I couldn't save you then, so let me save you now.
It's time to rest. For you, and for those you have carried in your arms.
This ends for all of us.
End communication.
Icemoon Hermit Ruin Active Turf Fix - For Real This Time [MDB IGNORE] (#20325)
- Icemoon Hermit Ruin Active Turf Fix - For Real This Time (#74476)
In #74306, I thought I knew what the cause was, and I both attempted a potential fix and made tracking it easier. The fruits of my labor paid off, I know exactly what caused it now.
Basically, the demonic portal will scrape away all turfs in a 5-tile
radius on its `Initialize()`, and if a spawner spawned right next to the
hermit ruin... it would count it as a mineral turf and scrape it away as
well. That's so fucking silly. At least we know now.
The fix is to just make those tiles unscrapeable, which is accomplished
via another turf_flag and filtering those out in the `Initialize()` of
the demonic portals.
I also cleaned up the calls to scrapeaway being `null`, which is really
weird because it just defaulted to the normal proc behavior. Naming the
arguments instead does the same thing (I checked).
- Icemoon Hermit Ruin Active Turf Fix - For Real This Time
Co-authored-by: san7890 the@san7890.com Signed-off-by: Vladimir Veisman v.veisman@flashie.me
IceBoxStation More Active Turf Fixes [MDB IGNORE] (#20339)
- IceBoxStation More Active Turf Fixes (#74474)
This didn't show up in my testing for #74410. I hate it here.
I am a monkey trapped next to a computer playing whack-a-mole with these fucking chasms and active turfs. One day I will be free.
nothing that should concern players
- IceBoxStation More Active Turf Fixes
Co-authored-by: san7890 the@san7890.com Signed-off-by: Vladimir Veisman v.veisman@flashie.me
Move a few functions out of OPENSSL_NO_DEPRECATED
Geoff Thorpe added OPENSSL_NO_DEPRECATED nearly two decades ago. The hope was that at some point some functions can be dropped. Most of the functions marked deprecated are actually unused nowadays but unfortunately some of them are still used in the ecosystem. Move them out of OPENSSL_NO_DEPRECATED so we can define it without breaking the consumers in the next bump.
ERR_remove_state() is still used by a dozen or so ports. This isn't a big deal since it is just a stupid wrapper for the not quite as deprecated ERR_remove_thread_state(). It's not worth patching these ports.
Annoyingly, {DH,DSA}_generate_parameters() and RSA_generate_key() are still used. They "make use" of the old-style BN_GENCB callback, which is therefore more difficult to remove - in case you don't know: that's the thing responsible for printing pretty '.', '+' and '*' when you generate keys.
Most annoyingly, DH_generate_parameters() was added to rust-openssl in 2020 for "advanced DH support". This is very unfortunate since cargo bundles a rust-openssl and updates it only every few years or so. As a consequence we're going to be stuck with this nonsense for a good while.
ok beck jsing
literally suck on my scrotum you sacks of shit my code and integration works
0.6.34
👁️👄👁️ Fuck you InnerTube
Signed-off-by: Emiri Floarea 79811506+ciulinuwu@users.noreply.github.com
Add files via upload
People get to know one another by sharing their ideas, thoughts, and experiences with those in their immediate surroundings. There are numerous methods for accomplishing this, the most effective of which is the gift of "Speech." Speech enables all people to communicate their ideas effectively and to comprehend one another. It would be unfair if we failed to take into consideration those who are denied this priceless gift: the deaf and mute. The preferred method of communication in these situations has continued to be human hand contact. Things that were at first challenging or unattainable for people with disabilities are now regularly available to them and can be accessed by them with ease. Artificial intelligence (AI) made it possible for people with disabilities to live in a society where their challenges are acknowledged and taken into account. Technological advancements have made it possible for technology to adjust and transform the world into a more open community. There is a certain sense of being human as AI directly connects individuals, including people with and without impairments. To build a model trained on various hand motions we use a convolutional neural network and deep learning; on the basis of this model an app is created. With the help of this app, a person who is deaf or mute can communicate using postures that are translated into speech and human-understandable words, so that the message can be delivered as voice in the preferred language. Index Terms—CNN, Image Processing, deaf and mute, Artificial Intelligence, gesture recognition, image-to-text converter, model fitting
Incorrect theme should not prevent napari from starting (#5605)
With the current implementation, an error in theme registration prevents napari from starting. It may be problematic for bundle users.
In this PR I add a `try: ... except` to handle errors during theme
registration and convert them to logged exceptions. I use logging because
this happens before the GUI is created.
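The approach can be sketched in a few lines. Names here are illustrative, not napari's actual internals - the point is only that a bad theme is logged and skipped instead of aborting startup:

```python
import logging

def install_themes(themes, register_theme):
    """Register each theme; log failures instead of letting them propagate."""
    for theme in themes:
        try:
            register_theme(theme)
        except Exception:
            # Matches the logged "Registration theme failed." messages below.
            logging.exception("Registration theme failed.")

def bad_register(theme):
    raise ValueError(f"Incorrect syntax_style value: {theme}")

install_themes(["gruvbox-dark", "gruvbox-light"], bad_register)
print("napari would still start")
```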
- Bug-fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- This change requires a documentation update
- example: the test suite for my feature covers cases x, y, and z
- example: all tests pass with my change
- example: I check if my changes works with both PySide and PyQt backends as there are small differences between the two Qt bindings.
Install `napari-gruvbox`, `pygments==2.6` (below 2.9) and start napari
Example error message:
11:52:01 ERROR Registration theme failed.
1 validation error for Theme
syntax_style
Incorrect `syntax_style` value: gruvbox-dark provided. Please use one of the following: default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc, pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor, paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino, rainbow_dash, abap, solarized-dark, solarized-light, sas, stata, stata-light, stata-dark, inkpot (type=assertion_error)
Traceback (most recent call last):
File "/home/czaki/Projekty/napari/napari/utils/theme.py", line 391, in _install_npe2_themes
register_theme(theme.id, theme_dict, manifest.name)
File "/home/czaki/Projekty/napari/napari/utils/theme.py", line 266, in register_theme
theme = Theme(**theme)
File "/home/czaki/Projekty/napari/napari/utils/events/evented_model.py", line 200, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Theme
syntax_style
Incorrect `syntax_style` value: gruvbox-dark provided. Please use one of the following: default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc, pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor, paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino, rainbow_dash, abap, solarized-dark, solarized-light, sas, stata, stata-light, stata-dark, inkpot (type=assertion_error)
11:52:01 ERROR Registration theme failed.
1 validation error for Theme
syntax_style
Incorrect `syntax_style` value: gruvbox-light provided. Please use one of the following: default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc, pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor, paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino, rainbow_dash, abap, solarized-dark, solarized-light, sas, stata, stata-light, stata-dark, inkpot (type=assertion_error)
Traceback (most recent call last):
File "/home/czaki/Projekty/napari/napari/utils/theme.py", line 391, in _install_npe2_themes
register_theme(theme.id, theme_dict, manifest.name)
File "/home/czaki/Projekty/napari/napari/utils/theme.py", line 266, in register_theme
theme = Theme(**theme)
File "/home/czaki/Projekty/napari/napari/utils/events/evented_model.py", line 200, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Theme
syntax_style
Incorrect `syntax_style` value: gruvbox-light provided. Please use one of the following: default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc, pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor, paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino, rainbow_dash, abap, solarized-dark, solarized-light, sas, stata, stata-light, stata-dark, inkpot (type=assertion_error)
- My PR is the minimum possible work for the desired functionality
- I have commented my code, particularly in hard-to-understand areas
- I have made corresponding changes to the documentation
- I have added tests that prove my fix is effective or that my feature works
- If I included new strings, I have used `trans.` to make them localizable. For more information see our translations guide.
Co-authored-by: Lorenzo Gaifas brisvag@gmail.com
fix: Scroll to top
Since we are not able to restore the scroll position correctly, we force the scroll to top every time we change the current folder. This avoids this kind of weird behavior:
- If I go to a sub-folder that has a lot of data and scroll down to the bottom, then when I go back, the parent folder is also scrolled down.
This is an ugly hack, yeah.
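The hack boils down to one line on folder change. A minimal sketch; `onFolderChanged` and `fileList` are illustrative stand-ins for the real navigation handler and scroll container:

```javascript
// On every folder change, reset the scroll position instead of
// trying (and failing) to restore the previous one.
function onFolderChanged(fileList) {
  fileList.scrollTop = 0; // always jump back to the top
}

const fileList = { scrollTop: 480 }; // pretend we were scrolled down
onFolderChanged(fileList);
console.log(fileList.scrollTop); // 0
```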
Update architecture to "SFNNv4". Update network to nn-6877cd24400e.nnue.
Architecture:
The diagram of the "SFNNv4" architecture: https://user-images.githubusercontent.com/8037982/153455685-cbe3a038-e158-4481-844d-9d5fccf5c33a.png
The most important architectural changes are the following:
- 1024x2 [activated] neurons are pairwise, elementwise multiplied (not quite pairwise due to implementation details, see diagram), which introduces a non-linearity that exhibits similar benefits to previously tested sigmoid activation (quantmoid4), while being slightly faster.
- The following layer therefore has half as many inputs, which we compensate for by having twice as many outputs. It is possible that reducing the number of outputs might be beneficial (we had it as low as 8 before). The layer is now 1024->16.
- The 16 outputs are split into 15 and 1. The 1-wide output is added to the network output (after some necessary scaling due to quantization differences). The 15-wide is activated and follows the usual path through a set of linear layers. The additional 1-wide output is at least neutral, but has shown a slightly positive trend in training compared to networks without it (all 16 outputs through the usual path), and allows possibly an additional stage of lazy evaluation to be introduced in the future.
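The two architectural ideas above can be illustrated with a toy, unquantized sketch. Sizes are shrunk from 1024/16 to keep it readable, so this is not the real inference code, just the shape of the computation:

```python
# Toy sketch (pure Python, no quantization) of the two SFNNv4 ideas:
# pairwise elementwise multiplication of the activated accumulator halves,
# and splitting the next layer's outputs into "15" (hidden path) + "1"
# (added directly to the network output). Sizes are illustrative.
def clipped_relu(v):
    return [min(max(x, 0.0), 1.0) for x in v]

def pairwise_mul(acc):  # acc has 2*N activated neurons -> N products
    n = len(acc) // 2
    return [acc[i] * acc[n + i] for i in range(n)]

def linear(inputs, weights, biases):  # weights: one row per output
    return [b + sum(w * x for w, x in zip(row, inputs))
            for row, b in zip(weights, biases)]

acc = clipped_relu([0.5, 2.0, -1.0, 0.25])  # stand-in for the 1024x2 accumulator
mul = pairwise_mul(acc)                     # the multiplicative non-linearity
# Stand-in for the 1024->16 layer, here 2->4 with made-up weights:
out16 = linear(mul, [[1, 0], [0, 1], [1, 1], [2, 0]], [0, 0, 0, 0])
hidden_path, skip = out16[:-1], out16[-1]   # "15" activated + "1" skip output
network_output = sum(clipped_relu(hidden_path)) + skip  # quantization scaling omitted
```

The multiplication of the two activated halves is what introduces the sigmoid-like non-linearity without an extra activation function.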
Additionally, the inference code was rewritten and no longer uses a recursive implementation. This was necessitated by the splitting of the 16-wide intermediate result into two, which was impossible to do with the old implementation without ugly hacks. This is hopefully overall for the better.
First session:
The first session was training a network from scratch (random initialization). The exact trainer used was slightly different (older) from the one used in the second session, but it should not have a measurable effect. The purpose of this session is to establish a strong network base for the second session. Small deviations in strength do not harm the learnability in the second session.
The training was done using the following command:
python3 train.py
/home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack
/home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack
--gpus "$3,"
--threads 4
--num-workers 4
--batch-size 16384
--progress_bar_refresh_rate 20
--random-fen-skipping 3
--features=HalfKAv2_hm^
--lambda=1.0
--gamma=0.992
--lr=8.75e-4
--max_epochs=400
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
Every 20th net was saved and its playing strength measured against some baseline at 25k nodes per move with pure NNUE evaluation (modified binary). The exact setup is not important as long as it's consistent. The purpose is to sift good candidates from bad ones.
The dataset can be found at https://drive.google.com/file/d/1UQdZN_LWQ265spwTBwDKo0t1WjSJKvWY/view
Second session:
The second training session was done starting from the best network (as determined by strength testing) from the first session. It is important that it's resumed from a .pt model and NOT a .ckpt model. The conversion can be performed directly using serialize.py
The LR schedule was modified to use gamma=0.995 instead of gamma=0.992 and LR=4.375e-4 instead of LR=8.75e-4 to flatten the LR curve and allow for longer training. The training was then running for 800 epochs instead of 400 (though it's possibly mostly noise after around epoch 600).
The training was done using the following command:
python3 train.py
/data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack
/data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack
--gpus "$3,"
--threads 4
--num-workers 4
--batch-size 16384
--progress_bar_refresh_rate 20
--random-fen-skipping 3
--features=HalfKAv2_hm^
--lambda=1.0
--gamma=0.995
--lr=4.375e-4
--max_epochs=800
--resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp295/nn-epoch399.pt
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$run_id
In particular note that we now use lambda=1.0 instead of lambda=0.8 (previous nets), because tests show that WDL-skipping introduced by vondele performs better with lambda=1.0. Nets were being saved every 20th epoch. In total 16 runs were made with these settings and the best nets chosen according to playing strength at 25k nodes per move with pure NNUE evaluation - these are the 4 nets that have been put on fishtest.
The dataset can be found either at ftp://ftp.chessdb.cn/pub/sopel/data_sf/T60T70wIsRightFarseerT60T74T75T76.binpack in its entirety (download might be painfully slow because it is hosted in China) or can be assembled in the following way:
Get the interleave_binpacks.py script: https://github.com/official-stockfish/Stockfish/blob/5640ad48ae5881223b868362c1cbeb042947f7b4/script/interleave_binpacks.py
Download T60T70wIsRightFarseer.binpack: https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view
Download farseerT74.binpack: http://trainingdata.farseer.org/T74-May13-End.7z
Download farseerT75.binpack: http://trainingdata.farseer.org/T75-June3rd-End.7z
Download farseerT76.binpack: http://trainingdata.farseer.org/T76-Nov10th-End.7z
Run: python3 interleave_binpacks.py T60T70wIsRightFarseer.binpack farseerT74.binpack farseerT75.binpack farseerT76.binpack T60T70wIsRightFarseerT60T74T75T76.binpack
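The final step merges the four binpacks into one. A toy sketch of one way such an interleave can work; the real script operates on binpack chunks, not Python lists, and its exact merging strategy may differ:

```python
# Toy round-robin interleave: take one element from each source in turn,
# dropping sources as they run out. Purely illustrative of the merge step.
def interleave(*sources):
    out = []
    iters = [iter(s) for s in sources]
    while iters:
        for it in iters[:]:           # iterate over a copy so we can remove
            try:
                out.append(next(it))
            except StopIteration:
                iters.remove(it)
    return out

print(interleave([1, 4], [2, 5, 6], [3]))  # [1, 2, 3, 4, 5, 6]
```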
Tests:
STC: https://tests.stockfishchess.org/tests/view/6203fb85d71106ed12a407b7 LLR: 2.94 (-2.94,2.94) <0.00,2.50> Total: 16952 W: 4775 L: 4521 D: 7656 Ptnml(0-2): 133, 1818, 4318, 2076, 131
LTC: https://tests.stockfishchess.org/tests/view/62041e68d71106ed12a40e85 LLR: 2.94 (-2.94,2.94) <0.50,3.00> Total: 14944 W: 4138 L: 3907 D: 6899 Ptnml(0-2): 21, 1499, 4202, 1728, 22
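As a rough sanity check on the numbers above, the raw W/L/D totals can be turned into a naive Elo estimate. This deliberately ignores the pentanomial pairing statistics that fishtest's SPRT actually uses, so it is only a ballpark:

```python
# Naive Elo estimate from aggregate game results (not fishtest's SPRT math).
import math

def naive_elo(wins, losses, draws):
    total = wins + losses + draws
    score = (wins + 0.5 * draws) / total        # score rate in [0, 1]
    return -400.0 * math.log10(1.0 / score - 1.0)

stc = naive_elo(4775, 4521, 7656)   # STC run above
ltc = naive_elo(4138, 3907, 6899)   # LTC run above
print(f"STC ~{stc:+.2f} Elo, LTC ~{ltc:+.2f} Elo")  # both small positive gains
```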
closes official-stockfish/Stockfish#3927
Bench: 4919707
upgrade amplify-react to 5.0.7
packages at node 18
kinda working
it seems to actually work on 18. didn't see that one coming
clean up some unused stuff
try react-scripts build
no config option for react-scripts build
switch to root.render() in index.js
upgrade react types
switch to ThemeProvider for mui
update to material-ui 5 packages
update import names
add @emotion packages
blueprint packages
fix undefined theme
some backend shit
pulled backend changes
roll cypress waaaaaay back
roll back husky because I'm not trying to deal with that shit yet.
here we go w/ the test bullshit again.
testing stuff. both of them seem like bullshit. they aren't covered
fixin tests
Ports mothroaches + Moth emotes (#1843)
Can you guess what this PR does? If you answered that it ports this pull request, this pull request, and part of this one too, then you're right!
You can also craft moth plushies now. You just need some cloth, mothroach hide, and a heart!
silly little moth roaches and emotes, who wouldn't want them in the game?
🆑 add: Mothroaches are now a thing add: Moth laughter, chittering and squeaking /🆑
New Ruin: Singularity Research Lab (#1612)
Adds the Singularity Research Lab, a space ruin: formerly a cutting-edge science station, now overrun with kudzu.
The two areas of note are the singularity reactor, which is fully assembled and would just need a hand if someone wanted to start it, and the research lab. The research lab contains the fruits of the now-deceased science staff's labors: assorted energy weapons. Unfortunately, it also contains the deceased science staff.
Other areas of the base were not so lucky, and are thoroughly infested
The engineering team attempted to hold back the vines, and quickly discovered that fire was not sufficient.
And what used to be the recreation area is completely gone
Notably, the hangar is empty. I plan on making a patch to put a subshuttle inside it once that rolls around.
Notable loot includes:
- 3 energy SMGs
- 3 flamethrowers
- The Ion Projector, a self-charging ion weapon
- An antique laser
- 2 energy PDWs
- 2 accelerator laser cannons
- 4 engineering hardsuits
- An engineering lathe and circuit imprinter
- A particle accelerator
- A singularity generator
- 6 emitters
- 1 energy shotgun
- Kudzu seeds
- Basically everything you'd need for an R&D setup
- A sense of pride and accomplishment
I feel like this has some rough spots but I've got no idea where to start, so into the review -> testing -> feedback process it goes
- The ruin spawns, and the spawn ruin verb doesn't runtime.
More ruin variety. This one spawns in space and does a few things that I haven't seen yet: mainly a singularity, and a cool semi-hidden asteroid base that could, in theory, be turned into a player lair.
🆑 add: An abandoned Nanotrasen Asteroid Facility has been spotted in the area. Salvage teams are advised to steer clear, or at least bring a knife. add: kudzu zombie subtype. fix: vent iconstates. /🆑
Signed-off-by: Bjarl 94164348+Bjarl@users.noreply.github.com Co-authored-by: spockye 79304582+spockye@users.noreply.github.com
Full Codebase Update
"Why bother with multiple commits when you can push the entire project with just one? Laziness is a virtue, my friends, and I'm taking it to the next level with this commit. No need to worry about detailed commit messages or carefully selected changes - I'm throwing caution to the wind and pushing everything at once. Who needs Git history anyway? It's overrated. So sit back, relax, and enjoy the one commit to rule them all!"
manual migration >>>> pull requests
i fucking hate github, spending 2 hours just to upload my new data, fuck off.
anyway everything works now but if anyone finds some errors with my code just tell me and ill know where to fix everything
library/index.html — ACTUALLY added Hope Vol. 2
• IM SORRY IM STUPID I FORGOT TO ADD THE BOOK COVER 🥺 pls dont hate on me that you wanted to read it but couldnt • Hope Vol. 3 placeholder (hype!!)
text scaling fix, just fuck this, fuck me and fuck you
Refactor op pool for speed and correctness (#3312)
This PR has two aims: to speed up attestation packing in the op pool, and to fix bugs in the verification of attester slashings, proposer slashings and voluntary exits. The changes are bundled into a single database schema upgrade (v12).
Attestation packing is sped up by removing several inefficiencies:
- No more recalculation of `attesting_indices` during packing.
- No (unnecessary) examination of the `ParticipationFlags`: a bitfield suffices. See `RewardCache`.
- No re-checking of attestation validity during packing: the `AttestationMap` provides attestations which are "correct by construction" (I have checked this using Hydra).
- No SSZ re-serialization for the clunky `AttestationId` type (it can be removed in a future release).
So far the speed-up seems to be roughly 2-10x, from 500ms down to 50-100ms.
Verification of attester slashings, proposer slashings and voluntary exits is fixed by:
- Tracking the `ForkVersion`s that were used to verify each message inside the `SigVerifiedOp`. This allows us to quickly re-verify that they match the head state's opinion of what the `ForkVersion` should be at the epoch(s) relevant to the message.
- Storing the `SigVerifiedOp` on disk rather than the raw operation. This allows us to continue tracking the fork versions after a reboot.
This is mostly contained in this commit 52bb1840ae5c4356a8fc3a51e5df23ed65ed2c7f.
The schema upgrade uses the justified state to re-verify attestations and compute `attesting_indices` for them. It will drop any attestations that fail to verify, by the logic that attestations are most valuable in the few slots after they're observed, and are probably stale and useless by the time a node restarts. Exits and proposer slashings are similarly re-verified to obtain `SigVerifiedOp`s.
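The fork-version tracking can be sketched language-agnostically (in Python here, though Lighthouse is Rust; the field and method names below are illustrative, not Lighthouse's actual types):

```python
# Sketch of the SigVerifiedOp idea: remember which fork versions a message
# was verified under, so re-validation against the head state is a cheap
# comparison instead of a full signature check. Names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class SigVerifiedOp:
    operation: object              # the raw exit / slashing / etc.
    verified_fork_versions: tuple  # fork versions used at verification time

    def still_valid(self, head_fork_versions: tuple) -> bool:
        # The fork versions we verified against must match what the head
        # state says applies at the message's relevant epoch(s).
        return self.verified_fork_versions == head_fork_versions

op = SigVerifiedOp(operation="voluntary_exit", verified_fork_versions=(2, 2))
print(op.still_valid((2, 2)))  # True: signature need not be re-checked
print(op.still_valid((2, 3)))  # False: a fork happened; drop or re-verify
```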
This PR contains a runtime killswitch `--paranoid-block-proposal` which opts out of all the optimisations in favour of closely verifying every included message. Although I'm quite sure that the optimisations are correct, this flag could be useful in the event of an unforeseen emergency.
Finally, you might notice that the `RewardCache` appears quite useless in its current form because it is only updated on the hot-path immediately before proposal. My hope is that in future we can shift calls to `RewardCache::update` into the background, e.g. while performing the state advance. It is also forward-looking to `tree-states` compatibility, where iterating and indexing `state.{previous,current}_epoch_participation` is expensive and needs to be minimised.
Saloon rework (#1594)
Expands whitesands_surface_camp_saloon to cover a 30x30 footprint and not be nearly as bad. The previous version had some really glaring design flaws, like holes in the wall for a bar. On a planet with a deadly atmosphere. Yeah. Also all the chairs faced the same direction. You can see how it looks. It's not great. Here's the new version
Ignore the patches of error, it's purple grass and doesn't display the icon in sdmm for some reason.
The major changes are:
- Expanding the building's footprint out to 30x30
- Moving the loot behind the building, but locking it behind a shovel of some sort (which you can go through the ruin to get)
- Improving the loot a LITTLE
- The map loads although I still haven't managed to get it to load on the proper planet with the spawning verb
The old version was kinda bad, between the clown and mime masks out front, the small footprint, and the free guns (also out front). This solves those issues, kinda, while making it bigger.
🆑 add: Camp_Saloon has been expanded, expect frontier luxuries if you find it! /🆑
Co-authored-by: spockye 79304582+spockye@users.noreply.github.com
New Ruin: The Beach Town (#1572)
Adds a new beach ruin, the abandoned beachside town
The town is a mostly empty place formerly devoted to tourism and the beloved art of "chilling out". Facets of the life of its inhabitants before their disappearance included drinking, grilling, and swimming off the coast of their fairly large beach. Many interesting things happened on the boardwalk, and a landing pad was present to allow for small ships to dock inside the town.
The loot list is sparse here. I intend for this to mostly be a setpiece for roleplay instead of a loot pinata. There's a good selection of hydroponics seeds and gear, 2 full bar kits, basic kitchen equipment, an autolathe, a few PDAs, a lotta wood, and a jukebox. Also donuts.
- Ruin spawns, nothing is out of whack that shouldn't be.
Continues the trend of making planets more good by adding more content
🆑 add: An oddly empty town has been spotted on beach planets in the area. Check it out spacers. add: Random donut spawners, never eat the same donut two days in a row! /🆑
Signed-off-by: Bjarl 94164348+Bjarl@users.noreply.github.com
SubShips attempt 2 (#1627)
Accidentally destroyed my old PR for this, #1573, by completely botching a merge from master to the point that it was easier to make a whole new fork than try to save it, so here we are again. Here's the original description:
Ports the parts of BeeStation/BeeStation-Hornet#7152 that adds the framework for ships to land on top of each other and not break everything. A ship can only land on another ship if there's an open docking port on the mothership that's large enough for the subship. Here's a video of it in action on a modified dwayne-class:
2022-10-12.20-15-03.mp4
This system should be able to handle just about any orientation of ships on top of each other, such as ships landed across areas, multiple ships landed on a single ship, a single ship landed on multiple ships, a ship that is only partially landed on another ship, a ship that is partially landed on a ship that's partially landed on another ship, and so on. Just make sure that you never try to land a ship on itself.
Something to note for this is that ships remember what's underneath them via baseturfs, and there's a hardcoded check that will cause errors if a baseturf list grows over 10 entries long. Because ship turfs typically have 1-3 baseturfs, things will start to break after about 3 ships are stacked on top of each other.
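The stacking limit described above can be sketched as follows (the limit of 10 comes from the text; the turf names and per-ship counts are illustrative):

```python
# Illustrative sketch of the baseturf-stacking constraint: each landing
# pushes the ship's turfs onto the stack of what is underneath, and a
# hardcoded depth limit means only about three stacked ships fit.
MAX_BASETURFS = 10

def land_ship(baseturfs: list, ship_turfs: list) -> list:
    stacked = baseturfs + ship_turfs
    if len(stacked) > MAX_BASETURFS:
        raise RuntimeError("baseturf list grew over the hardcoded limit")
    return stacked

stack = ["space"]
for ship in range(3):                       # three ships, ~3 baseturfs each
    stack = land_ship(stack, [f"ship{ship}_floor"] * 3)
print(len(stack))  # 10: one more ship would exceed the limit
```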
You can also make maps with subships on them, to do this, follow these steps:
- make the subship as if it were a regular ship in its own map file
- create a new /datum/map_template/shuttle subtype that points to your subship map, these datums can be found in code/datums/shuttle.dm
- On your main ship, place a "subship dock" landmark on the turf where you want the bottom-left corner of the subship's bounding box to be. You can also use the offset_x and offset_y vars on the landmark to offset this corner if you need to place the landmark somewhere else.
- Set the "subship_template" var on the landmark to the path of your subship's map_template subtype
- Optionally, change the dir on the landmark to rotate the subship. For reference, NORTH is no rotation, EAST is a 90 degree clockwise rotation, etc.
You can put the stationary docking port anywhere on your map, as long as it's on the ship. You can have its bounding box hang off the side of your ship, but please try to keep the entirety of its bounding box within the bounding box of the map file, otherwise subships landing on your main ship might accidentally clip through structures near your mainship, including virtual z-level borders.
- I affirm that I have tested all of my proposed changes and that any issues found during tested have been addressed.
Subships allow for many more creative designs and interesting dynamics between and within ships, especially when a crew may need or want to split its attention between multiple locations at the same time, or to make interactions between ships easier when you just need to land a smaller vessel inside of the other, cutting out the need to travel through space turfs to get between two ships.
🆑 add: Subships are now possible code: Lots of large changes to ship code /🆑
[ci] Mirror third-party images and hailgenetics images on deploy (#12818)
- On deploys, makes sure that whatever is in our third-party images is in our private registry before starting builds like hail-ubuntu that might depend on those images. This means that we can update our ubuntu base image without the australians needing to deploy any images by hand. However, this does not run in PRs because I 1) didn't want to add that kind of latency for PRs and 2) we don't do any kind of namespacing for our images, so if we did include this for a PR that ultimately wasn't merged, we would have to manually remove the image anyway, so why not manually add it if you're going to PR it… I think point 2 is a little weak, but I recall this being what we agreed on a couple months back when we discussed this. I'm wondering if we should just eat the minute or so of latency at the beginning of PRs to be safe, but it also feels like a shame for something that changes so infrequently.
- Again on deploys, upload the hailgenetics/* images to the private registry if they don't already exist there. This way any deployments that aren't hail team's GCP deployment can get these images automatically when they deploy a new SHA instead of uploading them manually. It won't backfill skipped versions, but we decided that was ok. This seems less relevant for testing on PRs as it will get triggered on releases and we can easily dev deploy to rectify the image if this breaks.
DD updates (#2786)
DD hasn't been touched in a while, and is kind of bad against preds; this tries to fix that to the best of my ability with the below changes.
- Makes the M1911 more accurate
- Makes DD armor cover arms and legs, improves its bullet and explosive resistance
- ERT Medical Pouch now contains the 3 basic injectors (bic, kelo, tram), an emergency injector, a splint, and a bandage
- DD now all have max endurance skill
- M60 is now full auto, does more damage, and is more accurate
- DD Minigun (Ol' Painless) now has an integrated magharn
- M60 now has the same box changing mechanic that smartgun has.
- Adds 2 new guns (technically 1, or maybe 1.5): the XM177 and the M16 Grenadier (an M16A1/2 with an M203 attached)
- Adds an M203 grenade launcher, single grenade, no IFF, high range with scope, only fits on M16 grenadier
- Adds 3 new impact grenade types, only DD have them currently.
- Adds HE impact grenade, impacts in a cone radius with an HE explosion.
- Adds an incendiary impact grenade, impacts in the same pattern as HIDP, napalm.
- Adds an impact buckshot grenade, pure vietnam vibes, shoots 10 bits of additional buckshot that also slow.
- DD now have MDs tuned to their own IFF.
- DD are now equipped with XM177s for the medic, Dutch, and flamethrower operator
- DD riflemen have a 60% chance for an M16A1, 30% chance for an M16 Grenadier, and 10% chance for an M60.
- Removes the M60 from black market
- Moves DD presets to their own standalone folder, and removes the /fun/ from their typepaths.
- Changes CLF crashed ship M60 to a MAR50
- Adds sprites for M203, XM177, M16 Grenadier
- DD spawn with a lucky pack and a zippo in their helmet.
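The rifleman loadout roll in the list above (60% M16A1, 30% M16 Grenadier, 10% M60) is a plain weighted choice; a sketch (weapon names from the changelog, the function name is illustrative):

```python
# Weighted loadout roll for DD riflemen, per the 60/30/10 split above.
import random

def roll_rifleman_weapon(rng: random.Random) -> str:
    return rng.choices(
        ["M16A1", "M16 Grenadier", "M60"],
        weights=[60, 30, 10],
    )[0]

rng = random.Random(0)
rolls = [roll_rifleman_weapon(rng) for _ in range(10_000)]
print({w: rolls.count(w) for w in ("M16A1", "M16 Grenadier", "M60")})
# Counts land near 6000 / 3000 / 1000
```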
Dutch's Dozen is a bit outdated, and light on content, gives them some love. Removes gear that doesn't fit in BM from BM, also I buffed the gear too so balance concerns.
- An unwielded rifle (M41A) had more accuracy than a wielded M1911; I would do this for other pistols too, but it's out of scope as DD only use the M1911
- They were incredibly easy to kill via leg/arm aiming, as they had no armor there, HPCs instakilled them (DD are default dishonorable), and FF did insane damage as they all had high-AP 40-damage rifles.
- ERT medical pouch was worse than normal med-pouch, DD use this too.
- Was intended; the survivor endurance skill nerf affected this too, as the same define was used for both as a shortcut
- M60 underperformed; this makes it better.
- Dropping Ol' Painless over and over sucks.
- Unique realistic mechanic for the M60, makes it more interactive
- Unique guns, only DD get them, also the XM177 is my favorite gun of all time I love it 😊
- Unique UGL for M16 Grenadier, designed to work directly with the sprite, as its integrated and only fits on it.
- Grenades for DD to have a better chance against preds, riflemen have a 30% chance of spawning with M16 GL.
- Made for a stun, team gameplay for DD.
- Area denial.
- Vietnam Vibes, support tool cause it does jack shit damage.
- DD couldn't tell friend from foe
- (AWESOME) Carbine for Dutch, makes sense for the support and members of the team to have carbines instead of rifles
- Variance within DD team, all 3 of the guns are good, GL is a support tool, M60 as an ambush(also its The Pig), A1 is normal
- M60 doesn't fit thematically, and is too powerful.
- Easier access, they don't fit in the fun file
- Buffed M60, MAR50 fits more there anyway.
- Sprites for things I added.
- Its cool.
🆑 add: DD spawn with a lucky strike pack and a zippo in their helmet. add: M60 now has the box changing mechanic that smartgun has. add: Adds an M16 grenadier, with attached M203, also adds M203 grenade launcher and impact shells for it, only DD have it add: Adds a new M16 variant, the XM177E2 Carbine, only DD have it add: Dutch M16s now are marked as A1s, and use the preexisting M16A1 sprite instead. add: Dutch's Dozen are now equipped with an XM177 for Dutch, the medic, and the flamethrower operator add: Dutch's Dozen riflemen now have a 60% chance to have an M16A1, 30% chance for an M16 with M203 UGL, and 10% chance for an M60 GPMG del: M60 has been removed from the black market balance: DD minigun now has an integrated magharn. balance: M1911 is slightly more accurate. balance: ERT Medical Pouch now contains the 4 basic EZ injectors and a gauze. balance: DD armor now has a greater explosive protection and covers the arms and legs. balance: M60 is now full auto, does more damage, and is more accurate. code: Moved Dutch's Dozen presets to their own standalone folder spellcheck: DD spawn text now correctly says the Yautja mask is on Dutch's face. fix: DD Motion Detectors no longer pick themselves up. fix: DD now all have max endurance skill imageadd: Adds sprites for M203, M203 shells, XM177, and M16 Grenadier Variant maptweak: LV624 Crashed CLF ship insert M60 has been replaced with a MAR50 /🆑
Co-authored-by: Zonespace 41448081+Zonespace27@users.noreply.github.com Co-authored-by: morrowwolf darthbane97@gmail.com
New Agda syntax highlighting extension for VSCode (#562)
I've written an improved Agda syntax-highlighting extension for VSCode called agda-syntax (GitHub, VSCode Marketplace). Although it is still in preview, my opinion is that it is already a significant improvement over the previously used extension. Therefore, I propose that we migrate our development environment (for VSCode users) to use this new extension.
Compared to the previously used extension, this new extension
- injects into markdown syntax, so that the markdown code can be highlighted as markdown code as well
- highlights all variable declarations (with some bugs still), module names, wildcard symbols, all reserved keywords (and only recognizes reserved keywords as reserved keywords)
- recognizes the appropriate token boundaries
- highlights line comments properly
Please understand that the grammar framework that has to be used to write the extension is highly limited, so not all highlighting functionality can be implemented. For instance, the parsing must be done in a single pass, and the functionality to match over multiple lines is very limited. Hence, for example, matching the left-hand side of an equals sign is very gnarly (although I have one idea left to try with regard to this).
Still, I would greatly appreciate any feedback, either if it is a bug or a feature request, which is another reason why I want to introduce it into our defined development environment at this point.
If you want to try out the extension right now, follow the VSCode Marketplace link: https://marketplace.visualstudio.com/items?itemName=FredrikBakke.agda-syntax
fix: api debug messages (VF-3263) (#309)
Fixes or implements VF-3263
So it turns out when you `stringify` a `VError` you actually don't get the error message, but everything else, which is kinda stupid.
I also updated the debug message so if you get a >400 response on the API call, we show the body too.
holy fuck, this small cosinus miscalculation took the whole evening to find in the most unpredictable place
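The commit doesn't say what the miscalculation was, but a classic culprit for exactly this kind of bug is passing degrees to a trig function that expects radians:

```python
# Degrees-vs-radians is a common source of subtle cosine bugs:
# math.cos expects radians, not degrees.
import math

angle_deg = 60
wrong = math.cos(angle_deg)                # cos(60 radians), roughly -0.95
right = math.cos(math.radians(angle_deg))  # cos(60 degrees) = 0.5
print(wrong, right)
```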
Removing Module Editing as it is out of Scope
- Ah, well isn't that just typical? The life of a coder is a never-ending series of deadlines and incomplete projects. In this case, it seems Rhese has made the executive decision to remove Module Editing capabilities from the project. While this may come as a disappointment to some, it's important to remember that in the grand scheme of things, Module Editing is just a small cog in the machine of software development. Plus, let's be honest, who really wants to spend their time tinkering around with modules when there are more exciting things to be done, like debugging and error handling? In any case, it's probably for the best that Rhese focuses on getting the core functionality working properly before worrying about any additional bells and whistles. After all, as the Guide always says, "Don't Panic" - there's always time to add more features later.
UE5.1.1 - Calibration Mode Overlay
Added a nice new UI overlay to let you know that you are in Calibration Mode after hearing some people get confused about what happens when you first start the system. If you don't like it, disconnect it in the function for VRPlayerMode_CalibrationMode.
Did a lot of work on the UE5 Manny control rig. It could be better. It could be worse. I find it really hard to tell sometimes, so I did leave the old one in MegaMocapVR\ControlRigs\WIP_ControlRigs\OLD_MMVR_UE5Manny_ControlRig
The UE5 Manny control rig now has a way to lock the arm goals to the chest control, so you can use it for animation/cleanup a bit easier. This variable is exposed to cinematics, which is a sick feature I want to make more use out of.
I reduced the tick on the editor utility widget.
Revamped the teleprompter, creating a new Teleprompter Screen actor that gets spawned in if you use the new event on the player pawn, 'Event_TeleprompterUpdate'.
Did some small changes to the metahuman control rig, but... I don't remember what they were. Again, who knows if it's better. I think I added an 'is valid' check on the iphone name so it wouldn't throw errors if no iphone was added to the player pawn.
- Updated jsons with images
- Fixed stupid bug that caused wrong descriptions for images with same name. Absolutely fucked up, cursed. I'm sorry
- Added descriptions to jsons
- I dunno if somebody reading this. Stop here. Your internet exploring is too far now lol
emerge-webrsync: support PGP verification via gemato
Introduce PGP verification of the webrsync snapshot tarballs using app-portage/gemato - which is already a dependency of Portage for verifying normal rsync.
This is the same method Portage uses (see below).
Technical changes before we dive into the rationale:
- Use gemato for PGP verification just like Portage does for sync-type=webrsync, sync-type=rsync (although that uses a metamanifest), and sync-type=git (although that uses gemato for gpg-wrap, so works differently).
- Use gentoo-functions automagically if available for better output functions.
- Be more verbose about verification and various other operations, while also respecting --quiet if passed for misc. existing & new messages.
- Make --verbose a no-op. There weren't enough output messages to justify three states (--quiet, normal, --verbose).
- Bail out more aggressively in the event of errors or "warnings".
- Use modern terminology for repository, etc. (avoid overloading the "portage" term).
- Allow disabling PGP verification with --no-pgp-verify.
Technically, the fix is very straightforward, but getting to the fix was the slightly painful bit. What I've concluded happened is:
- Portage starts getting reworked to gain proper sync module support;
- Someone gets the idea of implementing emerge-webrsync fully in Python as a Portage sync module (which is a not-unreasonable idea);
[This ultimately hasn't gone anywhere, and in fact, while working on this bug, I ended up finding a bunch of typos that meant you couldn't even test it. But it's a stub anyway.]
- The idea of deprecating emerge-webrsync is floated around, the idea being that Portage should call it via its new sync module with sync-type=webrsync.
This is presumably with the ultimate goal of it transparently one day using the aforementioned (yet-non-existent) Python implementation as its backend, and not the shell script.
[To this day, Portage's webrsync implementation shells out to the emerge-webrsync shell script, but it has the abstraction to switch that out, in theory.]
- At the time, PGP verification in general of the Gentoo repository is an active topic, especially now we'd migrated to git, which makes it way easier, unlike CVS.
- A bug is filed for PGP verification in emerge-webrsync.
People decide it doesn't matter too much, because Portage is going to Real Soon Now (TM) have its own backend (replacing the shell script) and/or Portage's sync module support obsoletes emerge-webrsync entirely.
The idea here, I think, being that nobody should call emerge-webrsync and everyone should just call emerge (or emaint) to sync as appropriate.
[This isn't a terrible idea in a sense, but it needs a better basis: we should probably make emerge-webrsync a wrapper which creates a temporary repo config to forcefully webrsync a repository if the user asks us to. This is what people expect from emerge-webrsync with the default sync-type=rsync in repos.conf for ::gentoo.
I actually started implementing this before I realised that emerge was shelling out to emerge-webrsync, so have postponed it.]
- Then nothing happens with the "replacement" ideas, and the good ol' trusty emerge-webrsync ends up with the same problems, sitting there because nobody saw the point in working on it if it was to be replaced soon. But that didn't happen.
The fix overall for this is pretty small, but the commit is larger than I'd like because I had to rework a few things to sensibly allow disabling PGP verification as well as follow the flow.
(I did start splitting up this commit but ultimately it needs -w for best review even without the output tweaks in this commit and deconstructing this for atomic commits would end up being more brittle as I couldn't be as confident in the result.)
Bug: https://bugs.gentoo.org/597800 Signed-off-by: Sam James sam@gentoo.org
Fix MGOAL_FIND_ITEM_GROUP, fix up some code (#2546)
- Reorganize
Code still sucks. In particular, recruit_class doesn't compare properly with npc->my_class, so MGOAL_RECRUIT_NPC_CLASS fails horribly even if you fix up that area of code so it actually points to type->recruit_class instead of recruit_class.
For that matter, mission has a select copy of several mission type defs, and I can only assume this is due to legacy fuckery.
- Fix mission.cpp
Now it will only allow you to select items if you have enough of them, and will only consume the necessary amount.
Added documentation for MGOAL_FIND_ITEM_GROUP
Thank god this wasn't too much work.
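The fixed selection/consumption behaviour described above can be sketched like this (a Python stand-in for the C++ game code; the function and item names are illustrative):

```python
# Sketch of the MGOAL_FIND_ITEM_GROUP fix: turning in items is only allowed
# when the player holds enough of them, and only the required amount is
# consumed, never the whole stack.
def turn_in_items(inventory: dict, item: str, required: int) -> bool:
    have = inventory.get(item, 0)
    if have < required:
        return False                   # not enough: selection is refused
    inventory[item] = have - required  # consume only the necessary amount
    return True

inv = {"scrap": 5}
print(turn_in_items(inv, "scrap", 3), inv)  # True {'scrap': 2}
print(turn_in_items(inv, "scrap", 3), inv)  # False {'scrap': 2}
```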