2,943,060 events, 1,460,673 push events, 2,357,070 commit messages, 167,831,758 characters
Removes xenohide from runner.
Because fuck you, salt PR.
test/doom.d/init.el: Updating from hlissner/doom-emacs - ef7113d6
---
+++
@@ -111,7 +111,8 @@
:lang
;;agda ; types of types of types of types...
- ;;cc ; C/C++/Obj-C madness
+ ;;beancount ; mind the GAAP
+ ;;cc ; C > C++ == 1
;;clojure ; java with a lisp
;;common-lisp ; if you've seen one lisp, you've seen them all
;;coq ; proofs-as-programs
@@ -124,6 +125,7 @@
emacs-lisp ; drown in parentheses
;;erlang ; an elegant language for a more civilized age
;;ess ; emacs speaks statistics
+ ;;factor
;;faust ; dsp, but you get to keep your soul
;;fsharp ; ML stands for Microsoft's Language
;;fstar ; (dependent) types and (monadic) effects and Z3
@@ -138,9 +140,8 @@
;;julia ; a better, faster MATLAB
;;kotlin ; a better, slicker Java(Script)
;;latex ; writing papers in Emacs has never been so fun
- ;;lean
- ;;factor
- ;;ledger ; an accounting system in Emacs
+ ;;lean ; for folks with too much to prove
+ ;;ledger ; be audit you can be
;;lua ; one-based indices? one-based indices
markdown ; writing docs for people to ignore
;;nim ; python + lisp at the speed of c
@@ -159,7 +160,7 @@
;;(ruby +rails) ; 1.step {|i| p "Ruby is #{i.even? ? 'love' : 'life'}"}
;;rust ; Fe2O3.unwrap().unwrap().unwrap().unwrap()
;;scala ; java, but good
- ;;scheme ; a fully conniving family of lisps
+ ;;(scheme +guile) ; a fully conniving family of lisps
sh ; she sells {ba,z,fi}sh shells on the C xor
;;sml
;;solidity ; do you need a blockchain? No.
@@ -167,6 +168,7 @@
;;terra ; Earth and Moon in alignment for performance.
;;web ; the tubes
;;yaml ; JSON, but readable
+ ;;zig ; C, but simpler
:email
;;(mu4e +gmail)
fix: fixed font data corruption
This was the most annoying bug to find in my life. I was using the texture() function in my shaders incorrectly. I was supposed to pass in the Sampler2D (texture slot) but I was just passing in 0 every time. For some reason all of my tests functioned correctly regardless of this. Fuck you, OpenGL.
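For context, the fix boils down to setting the sampler2D uniform to the texture unit index rather than hard-coding 0. A minimal sketch in PyOpenGL terms (the names u_font and bind_font_texture are illustrative, not the project's actual code):
from OpenGL.GL import (GL_TEXTURE0, GL_TEXTURE_2D, glActiveTexture,
                       glBindTexture, glGetUniformLocation, glUniform1i)

def bind_font_texture(program, texture_id, unit):
    # Bind the font atlas to the chosen texture unit.
    glActiveTexture(GL_TEXTURE0 + unit)
    glBindTexture(GL_TEXTURE_2D, texture_id)
    # The sampler2D uniform holds the unit *index*; texture() in the shader
    # then samples whatever is bound to that unit. Passing 0 everywhere only
    # works while everything happens to live on unit 0.
    glUniform1i(glGetUniformLocation(program, b"u_font"), unit)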
Many fixes & changes mostly related to Cleric abilities. Other general refactoring & updates. (#662)
- Small update to Vigor advancement description to avoid confusion with custom physical damage
- Updated the Life Drain & Arcane Thrust advancements regarding terminology used to be more accurate, and added more detail
- Life Drain's internal logic has been simplified. It should still behave the same
- Divine Justice now has sound effects for its healed players. It is now a bit easier for future content to make specific chords or short tunes with note block sounds
- Updated Crusade's description for consistency & detail. It now defines humanoids. Improved the way it applies itself to ability damage
- Crusade now uses the partial particles system, with effects updated for consistency & accounting for different enemy sizes
- Magma Shield & Choir Bells' angles for what enemies they consider in front of you are now slightly more precise
- Crusade's damage multiplier is now more precise
- Updated Luminous Infusion's description for consistency & detail. Tidied the numbers it references & its melee damage logic
- Updated Meteor Slam's description for clarity, as well as that of a few other recently edited abilities
- Updated Divine Justice's description for consistency & detail, including its terminology used. Tidied its damage/healing calculations especially when interacting with other abilities
- Divine Justice now deals custom holy damage which bypasses iframes, instead of modifying melee damage. Luminous Infusion's passive now also applies to Divine Justice's total damage to keep their synergy. Crusade is now able to apply to the combined holy damage of such abilities on its own
- Made Choir Bells' swap hand interactions consistent with other abilities. Updated its skill description accordingly & for consistency
- Updated the way recently edited abilities create percentages & durations for display
- Divine Justice's particles have been tweaked for consistency and now use the partial particles system
- The way custom damage is linked to abilities internally has been tidied, with unnecessary or deleted abilities removed
- Updated Holy Javelin's description for consistency, detail & clarity, including its terminology used. Its damage calculations have been improved, and now take shared passive damage straight from Divine Justice & Luminous Infusion
- Luminous Infusion's passive now deals custom holy damage which bypasses iframes, instead of modifying melee damage, running after its own explosion if applicable
- Abilities that referenced crits have had their descriptions updated for consistency/detail, including their terminology used. Brute Force now mentions its damage size, differentiates between the damage types it deals to enemies, and states its level 2 damage multiplier in the right order of application. Cursed Wound's description is now more accurate. Soul Rend now mentions its level 2 healing's range
- Abilities that were recently edited now also mention the kind of melee or custom damage they deal
- Similar to this week's crit detection fixes and falling attack terminology updates, Arcane Thrust's trigger has been made more in line with sweeping attack requirements
Auto merge of #3639 - rust-lang:renovate/postcss-8.x, r=Turbo87
Update dependency postcss to v8.3.0
This PR contains the following updates:
Package | Change
---|---
postcss (source) | 8.2.15 -> 8.3.0
postcss/postcss
PostCSS 8.3 improved source map parsing performance, added a Node#assign() shortcut, and added an experimental Document node to the AST.
This release was possible thanks to our community.
If your company wants to support the sustainability of front-end infrastructure or wants to give some love to PostCSS, you can join our supporters by:
- Tidelift with a Spotify-like subscription model supporting all projects from your lock file.
- Direct donations in PostCSS & Autoprefixer Open Collective.
Because PostCSS needs a synchronous API, we can't move from the old `source-map` 0.6 to 0.7 (many other open-source projects can't either). [@7rulnik](https://togithub.com/7rulnik) forked `source-map` 0.6 to `source-map-js` and back-ported the performance improvements from 0.7. In 8.3 we switched from `source-map` to this `source-map-js` fork.
You may see 4x performance improvements in parsing maps coming from a processing step before PostCSS (for instance, Sass).
Thanks to [@gucong3000](https://togithub.com/gucong3000), PostCSS already parses CSS from HTML and JS files (CSS-in-JS templates and objects). But his plugin needs big updates. [@hudochenkov](https://togithub.com/hudochenkov) from the stylelint team decided to create new parsers for styles inside CSS-in-JS, HTML, and Markdown.
He suggested adding a new Document node type to the PostCSS AST, to keep multiple Root nodes plus the JS/HTML/Markdown code blocks between these style blocks.
const document = htmlParser(
'<html><style>a{color:black}</style><style>b{z-index:2}</style>'
)
document.type //=> 'document'
document.nodes.length //=> 2
document.nodes[0].type //=> 'root'
This is an experimental feature. Some aspects of this node could change within minor or patch version releases.
The creator of the famous postcss-preset-env and many other PostCSS tools, [@jonathantneal](https://togithub.com/jonathantneal), suggested a nice shortcut for changing multiple properties of a node:
decl.assign({ prop: 'word-wrap', value: 'break-word' })
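Without the shortcut, this is equivalent to setting decl.prop = 'word-wrap' and decl.value = 'break-word' separately on the node (standard PostCSS node properties).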
📅 Schedule: At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻️ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- If you want to rebase/retry this PR, check this box.
This PR has been generated by WhiteSource Renovate. View repository job log here.
vendor_spark: notch-city: Add 3 mode display cutout handler [2/3]
[@AgentFabulous - POSP]
- Introduces the HideCutout and StatusBarStock overlays used in the 3 mode display cutout handler. The HideCutout overlay is necessary since we can't register a content observer in the display manager code; we only have access to resources during boot. Thus, leave this as an overlay and let the config and overlay change methods handle it. Though we could probably do the status bar stock height toggling in the SystemUI code without overlays, I kinda got lazy by the end, just live with it god damn it xD
Signed-off-by: Kshitij Gupta kshitijgm@gmail.com Change-Id: I62f63f39bcb410cfbc68e0028b9cef3d748d7eb6 Signed-off-by: Arghya Chanda arghyac35@gmail.com
notch-city: Refactor package name
Signed-off-by: ShubhamB shubhamprince111@gmail.com Change-Id: Ieb8b35a3062c9334e82153a1dd26df3853db4f1f
Enhanced Styling
- "Hello" in index.html bigger
- Color of technologies in work.html less obtrusive
- Rearrange skills => most important on top, non programming on bottom
- More personal things on Hello! screen ("I love music")
- Apply bootstrap on contact form
- Change mail in contact
- Change "Where do you live?" to "Country" in contact form
- Add margin at the bottom of social buttons
"10:20am. I got up 10m ago. Let me chill a bit and then I will start.
There is no need to do research anymore. I am not going to beat the transformers anytime soon. I did the equivalent of my PhD in the last six years. Now I just have to make those agents and hope it does not take too long for better chips to get here.
10:55am. Ah, I wish I could see into the future and understand what the algorithms will be like. Imagination will only get you so far in this.
I absolutely loathe leaving important work to others. The smart thing to do would be to kick back while others do the hard work, but I resent it. I keep desiring that I could grasp a bit further and come to an understanding of how network distillation could be done. To go beyond a fixed number of steps backwards in time to an infinite amount. It won't be that difficult of a process either.
11am. Now that the secret of how Hopfield nets should be done on a linear layer is out, that will serve as an anchor for how unsupervised learning should be done. I did think of an update such as it, but I could not reason out that it would do what it does. It is really weird that getting rid of that sign and using the softmax has not been figured out earlier. It just goes to show how few eyes are looking into things. Maybe had I been looking into Hopfield nets myself, I could have reached the conclusion.
11:05am. I need the superhuman abilities of self improving AIs if I want to go further. Tech is not like cultivation, it is too impersonal. Power that you can control is always better than the one you can't.
11:10am. Now I need to hype myself up into getting started.
"terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
"terminal.integrated.shellArgs.windows": [
"-ExecutionPolicy", "ByPass", "-NoExit", "-Command", "& 'C:/Users/Marko/anaconda3/shell/condabin/conda-hook.ps1' ; conda activate 'C:/Users/Marko/anaconda3'"
],
This is deprecated, the new recommended way to configure your default shell is by creating a terminal profile in `#terminal.integrated.profiles.windows#` and setting its profile name as the default in `#terminal.integrated.defaultProfile.windows#`. This will currently take priority over the new profiles settings but that will change in the future.(2)
First let me get rid of this. The linter is complaining that these are deprecated.
"terminal.integrated.automationShell.windows": "-ExecutionPolicy ByPass -NoExit -Command & 'C:/Users/Marko/anaconda3/shell/condabin/conda-hook.ps1' ; conda activate 'C:/Users/Marko/anaconda3'",
Is this how it should go?
> Executing task: 'c:\Users\Marko\Source\Repos\The Spiral Language\VS Code Plugin\node_modules\.bin\tsc.cmd' -p 'c:\Users\Marko\Source\Repos\The Spiral Language\VS Code Plugin\tsconfig.json' --watch <
The terminal process failed to launch: Path to shell executable "-ExecutionPolicy ByPass -NoExit -Command & 'C:\Users\Marko\anaconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\Users\Marko\anaconda3'" does not exist.
command 'python.execInTerminal-icon' not found
Now I cannot execute in terminal anymore.
11:15am.
"terminal.integrated.defaultProfile.windows": "-ExecutionPolicy ByPass -NoExit -Command & 'C:/Users/Marko/anaconda3/shell/condabin/conda-hook.ps1' ; conda activate 'C:/Users/Marko/anaconda3'"
Let me try this.
> Executing task: 'c:\Users\Marko\Source\Repos\The Spiral Language\VS Code Plugin\node_modules\.bin\tsc.cmd' -p 'c:\Users\Marko\Source\Repos\The Spiral Language\VS Code Plugin\tsconfig.json' --watch <
The terminal process failed to launch: Path to shell executable "-ExecutionPolicy ByPass -NoExit -Command & 'C:\Users\Marko\anaconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\Users\Marko\anaconda3'" does not exist.
Again I get this. Ah no wait, I forgot to remove the other setting.
11:20am. The internet connection is being dogshit again.
https://code.visualstudio.com/docs/python/environments
Oh, once I select an interpreter I can almost run the scripts.
Note: conda environments cannot be automatically activated in the integrated terminal if PowerShell is set as the integrated shell. See Integrated terminal - Configuration for how to change the shell.
"terminal.integrated.profiles.windows": {
"PowerShell -NoProfile": {
"source": "PowerShell",
"args": ["-NoProfile"]
}
},
Hmmm, let me try this.
"terminal.integrated.profiles.windows": {
"PowerShell -conda": {
"source": "PowerShell",
"args": ["-ExecutionPolicy", "ByPass", "-NoExit", "-Command", "& 'C:/Users/Marko/anaconda3/shell/condabin/conda-hook.ps1' ; conda activate 'C:/Users/Marko/anaconda3'"]
}
}
Let me try it like this.
Oh, this is good. Now I have this profile. But it is not using it by default.
11:30am. Oh it works. Great. Now it uses the above as default, and I do not have to play that dumb game of temporarily commenting out the args before launching the Typescript build watcher when I start the plugin.
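For reference, the working combination is just the profile above plus a default pointing at its name, i.e. the same settings gathered in one place:
"terminal.integrated.profiles.windows": {
    "PowerShell -conda": {
        "source": "PowerShell",
        "args": ["-ExecutionPolicy", "ByPass", "-NoExit", "-Command", "& 'C:/Users/Marko/anaconda3/shell/condabin/conda-hook.ps1' ; conda activate 'C:/Users/Marko/anaconda3'"]
    }
},
"terminal.integrated.defaultProfile.windows": "PowerShell -conda"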
Nice. It is good that I resolved this.
11:35am. Now let me think, what is next?
11:40am. Feeding the net.
11:45am. I need some time to think about it. I'll shut out everything else from my mind and focus on this.
For the past 4 weeks, my concern has been mostly on NNs themselves. But whether it is the weak level of today, or the superhuman level of tomorrow, certain things will be the same. The way these nets are fed and sustained will remain constant.
Right now, if I want to be successful, I need to purposely cultivate a peasant mentality of tending to my field. The peasant is not doing gene editing, he is just maintaining the field.
I've gone through these motions a few times before, but I got distracted by research every time and so that will has been frittered away.
11:55am. I need to remember that ultimately the architecture being used and the training regime are just technicalities. They are details that will change over time. The GPU will make way for neurochips, and neurochips will have their own iterations and differences.
Let me stop here for breakfast and chores. If needed I will step away from the screen for the rest of the day until I get my drive."
"1:25pm. Done with chores and breakfast, and the Mahoako chapter. I do not feel like programming right now as expected. It is really hard for me to get into the mindset.
Let me turn off the computer and step away from the screen for a while. I need to find the feeling again. Once I do, I will start this and continue going forward without stopping.
I am going to get the agent done, set it loose on online gambling dens and get my success and money. I will integrate new findings as they come along. I will follow the proper steps to get me to the main stage. But I need to get rid of the inertia and overcome the research mindset of the past month. It served me well for a while, but now it is time to go beyond.
Though it did not give me much in terms of algorithms, the Hopfield net paper and the research that I did improved my understanding of transformers significantly. For short term memory, attention is really all that I need.
There are some papers recently showcasing the benefits of feedforward nets, but transformers should have better generalization without the need for regularization. They will make better use of the GPU as well.
Eventually, the way to do Long Term Credit Assignment and proper modules will be discovered. My ideas are interesting pieces that I expect to play a role, but I can't see beyond them to the actual solution. I do not think the solution will be hard, I'd expect it to be simple, but regardless, I can't grasp it.
Once the hardware is there, and the understanding of the LTCA is complete, the Singularity itself will be close at hand.
The things I can't do right now do not matter. Ultimately, neuroscience will get its shit together. Or the researchers at large will stumble upon the solution.
I am envious and jealous, so much that I want to cry blood over not being able to go just a step further and deal with this myself. But in this competition, it is not like their position is that dominant. While they hide their thoughts they are superior, but given their incentives they have no choice but to release their insights, and once they are out, their work will be internalized by me.
They cannot hide their thoughts either because of the publish or perish culture. The only time hiding insight would make sense is if they stumbled on something crucial which would enable them to dominate the whole world in a short amount of time. That kind of fantasy is not going to happen. The improvements are always incremental.
1:40pm. The last month was pretty fun. I learned quite a lot of ML that I did not know a few years ago. It has been a great refresher. But I need to put my own life in order.
I need to internalize the pursuit of the elite agent. The net architectures and the hardware will change, but the pipeline will stay the same. Refining the learning pipeline is something I can get better at, and it will give me returns long into the future. The ML algorithms themselves are ultimately a small part of the overall scheme.
I will grow along with the technology itself.
Let me take a nap for a while here."
Lots of stuff added ... oh fuck, in English, sorry. Well, lots of things added but not yet fully implemented, you know. Ping pong is more important than this shitty mathematical project, fuck it
"4pm. How did I derive the EqProp update again? I am starting to undertand how long term credit assignment could work.
import torch

s1 = torch.tensor([1, 1], dtype=torch.float32)
s2 = torch.tensor([2, 2], dtype=torch.float32)
y = torch.scalar_tensor(0, dtype=torch.float32)
w = torch.scalar_tensor(2, dtype=torch.float32)
def F(s): return w * s[0] * s[1]
def E(s): return -F(s)
def C(s): return torch.abs(F(s) - y)
r = E(s1) + C(s2)
This is how I wrote it out, but this is wrong. The forward pass is definitely not supposed to be w * s[0] * s[1], this is just the energy.
I am confused. How are you supposed to take the gradient through the forward pass? I mean, the forward pass does not exist in these kinds of models.
42/119.
A conceptual difference between s(θ, x) and f(θ, x) is that, in conventional deep learning, f(θ, x) is usually thought of as the output layer of the model (i.e. the last layer of the neural network), whereas here s(θ, x) represents the entire state of the system. Another difference is that f(θ, x) is usually explicitly determined by θ and x through an analytical formula, whereas here s(θ, x) is implicitly specified through the variational equation of Eq. (2.3) and may not be expressible by an analytical formula in terms of θ and x. In particular, there exists in general several such states s(θ, x) that satisfy Eq. (2.3). We further point out that s(θ, x) need not be a minimum of the energy function E ; it may be a maximum or more generally any saddle point of E.
In the end what I did was lower the energy at the present point and increase it at the target point. But the derivative of the energy should not necessarily be related to the derivative of the state of the system.
43/119.
Here in eq 2.9 the gradient is outright derived as the difference of the gradients of the energy function: dE(s1)/dW - dE(s2)/dW. Yes, that makes sense. That is in fact how the update is derived.
def F(s): return w * s[0] * s[1]
def E(s): return -F(s)
def C(s): return torch.abs(F(s) - y)
r = E(s1) + C(s2)
What does not make sense is that C here. The actual cost does not have a damn thing to do with anything. F is not the forward pass.
4:20pm. Ah, that cost function is supposed to correspond to clamping. That might be the intent of that.
It is a more elegant way of saying - set the value to this. Energy based models are declarative, so it makes sense to express the need through a cost function.
Hmmmm, but that means not just the output, but the input could be set to a particular value as well through the cost function. Right.
4:25pm.
import torch

s1 = torch.tensor([1, 1], dtype=torch.float32)
s2 = torch.tensor([2, 2], dtype=torch.float32)
w = torch.scalar_tensor(2, dtype=torch.float32, requires_grad=True)
def E(s): return -w * s[0] * s[1]
r = E(s1) - E(s2)
r.backward()
This would give the update in the paper.
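As a quick sanity check on the toy numbers (my own arithmetic, not from the paper): E(s1) = -w·1·1 and E(s2) = -w·2·2, so r = 3w and the gradient on w is 3.
print(w.grad) # tensor(3.)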
Well, either way, you are just pushing the energy around.
4:25pm. Anyway, let me talk a bit. I figured out the essence of long term credit assignment.
It has been giving me a headache to try and wrap my head around this, but now I finally see where I was going wrong. Just like when I was bashing my head against synthetic gradients and then had the great inspiration that instead of predicting the gradients I should reconstruct the inputs, right now I see where I've been going wrong in my assumptions.
I've been like a fish in water not realizing it is wet.
I've been wracking my brain to try and figure out what kind of timescale the higher level modules should have. The naive version of having them be 2x slower than the lower one would not work. Not only because after a certain point no learning would get done, but also because after a certain point they would not be useful to the lower layer modules. With a 2x slowdown the higher level modules would give an improvement for a bit and then become dead weight as they get stacked.
So this is wrong.
Then I started thinking about NN distillation and data compression. For this line of thought, I changed my assumption: every module would have the same timescale, but employ data compression in order to have an effectively infinite horizon.
But this does not strike me as being right either. I don't really see how with a fixed timescale, planning and reasoning could be done.
4:40pm. Then I let my thoughts wander to poker and realized that by the time you've separated the different phases of the game into discrete steps, the job of intelligence would be 99% done.
It seems really simple, but the way the game is presented to the brain would be duplicate after duplicate data point for the vast majority of its time. By the time you get to actual game steps, the state has already been filtered.
So I thought, from that perspective: what if for every data point in a sequence I duplicated it an arbitrary number of times before feeding it to the net? The current algorithms would not be able to deal with it, but the brain could.
...
So here is the proper assumption - each module in the network should have an arbitrary timescale. Not short term, not long term, but arbitrary.
Lower layers should not be fast, and the higher layers should not be slow. They should be arbitrary.
The notion of time is so ubiquitous in the current day algorithms and my own thinking that it just never occurred to me to wonder and see it as just another feature the brain presents to my consciousness. I took the feature, and ended up assuming it is fundamental to the brain's processing.
This is despite already having an inkling that to get good compression the brain should be aggressively factoring out time. I already got a clue when I realized that predicting the gradients cannot possibly work for long term credit assignment. But I was still thinking about time in discrete steps.
4:50pm. But this notion of arbitrary timescale in modules completely breaks the backpropagation rules. It is impossible, not because of symmetric weights, but because credit assignment cannot possibly work like that in an arbitrary regime. Instead it would be necessary to reconstruct the inputs all the way based on the outputs.
The current notion of replay buffers does not make sense in such a regime, instead the credit assignment would completely be done through associative memories. The memories themselves could store transitions. And those transitions could be arbitrary in time.
4:55pm. This is only something that could be researched on neurochips. GPUs are a dead end as far as this is concerned. I can only discretize the inputs when it comes to them. I can even barely think about this subject when the GPUs are in the picture.
Hmmm...
I think ultimately, the energy based model formulation is correct, as opposed to a probabilistic one.
5pm. ANNs are ultimately abstractions of the real thing, but they ended up abstracting too much in fact.
At their level, I can't easily recover timescale arbitrariness between modules.
5:05pm. Ok, to make my vision work, I am missing two things - the way to do memory distillation, and the way to backprop through it.
I am not sure if GANs can be used for reconstruction, but I'll assume there is an unsupervised method for that. Maybe it would work with GANs as well.
5:10pm. I think I see it. There are ways of achieving what I want if I am willing to think outside the box. I'll fold these insights into my prediction of the future. It does not matter if I cannot reach all the way there. If I can be a quarter of a step ahead of the rest, that is fine.
And as a ML practitioner, I should understand all the important parts of intelligence.
Being able to reconstruct is really important when doing credit assignment - this capability is primary to the brain and energy based models, but in contrast I have absolutely no idea how to do a hierarchical associative memory in ANNs. Apart from the modern Hopfield layer, but that thing needs all the past examples in the replay buffer.
GANs can generate great samples, and with the duality gap method could be used to train stably, but can they be used as an associative memory? I have no idea.
Let me do a bit more research.
https://arxiv.org/abs/2011.13553 Association - Remind Your GAN not to Forget
https://arxiv.org/abs/1611.06953 Associative Adversarial Networks
5:25pm. Got a wholesome award for one of my old posts on the RL sub. Let me read these papers quickly.
5:50pm. No forget it. The first paper might have something in it, but it is too complicated for me to dig it out. I don't feel like paying attention to it. The second one is outright not what I am looking for.
None of the current GAN research matters squat.
I think my current ideas are pretty good, but I do not have the architectural pieces to even start thinking about it. I'll put them on the backburner. EqProp makes sense, and I've managed to reason out how credit assignment would work in models that have modules with arbitrary timescales.
It is one thing to say that the inputs should be reconstructed; it is easy enough if you assume a magic energy-based model that could do that, but I have no idea how it could be done with current DL architectures. Having a large Hopfield layer with all the patterns stored in its buffer would be too inefficient. I don't know if GANs, even with their training stabilized via the duality gap method, would be good enough for this. I have no idea; the pieces are missing and I won't be able to reason them out. The research community will simply have to get there.
6pm. Let me take a bath and I'll close for the day. If necessary I'll spend the day in bed tomorrow as well. I need to get these thoughts out of my system."
I'm sorry for the commit spam but I have to do this to find out why the fuck this shit is having conflicts
10 Hours of Fuck this shit I'm out.
Ah yes, now you're talking.
oh man, this is hard to code
We have progressed!
the GF home theater has been further scanned. now we've got all the frames that refer to the same spot, I think
we've started the Rule The World normal JSON chart. I'm sorry, I am too lazy right now, so this will likely be the same across all difficulties, with maybe a different scroll speed per chart. why not have it so that difficulties have layers: the base is normal, hard layers add more arrows, easy layers remove arrows, stuff like that.
further expanded the sound for the MIDI version of SeaNothing. well, the game-over one rendered is the pixel one, not the regular one, but we duplicated it and renamed it accordingly.
clean the space up. PAIN IS TEMPORARY GLORY IS FOREVER lol wintergatan!
change the directive for Discord rich presence to check the cpp define instead, because when it complains about the incompatibility, it says it can't access cpp packages for a non-cpp build
like neko
unfortunately folks, HaxeFlixel did not list Neko, HashLink, Java, and the other weird platforms as render targets. but it does show Flash there. so um... maybe... idk... why bother with swf anyway? it's already not always a good idea these days.
oh man. Lime can only render desktop builds for the same OS as the host. and my Linux space is tight right now, I can't install Haxelib, HaxeFlixel, and Lime on it. damn! we need to do something!
don't forget the lime test windows -32bit for the Windows render! somehow there still exist people who are stuck on 32-bit Windows, due to unforeseen financial issues or lack of knowledge. Guys, it's 2021! tbh, it's shameful to make excuses for that these days. are you guys okay? do you need help? c'mon, we gotta go 64-bit.
building a PC without a graphics card still works! I've seen Linus do it! https://youtu.be/J1z4XqEkSEU
add another health icon for future use. there are Surprise and Warning. Yeah, in the future we will build a surprise character song. we may yoink characters from someone's mods, so stay tuned for that. and for those whose mods got yoinked here, congratulations. don't worry, I'll always make sure your gamebanana or whatever link is in CREDIT.md under the 'mods that got into this mod' section thingy. also your name will be listed next to it. Basically covering all credits, here and there, everywhere.
let's call this Last Moment. let me be honest here. I believe we, gamers, will protest once the FULL ASS GAME is released. why? because it's not $0. and many of us will ignore everything and focus on the price like the media these days, even though they explicitly said, mm let's see ...
https://www.kickstarter.com/projects/funkin/friday-night-funkin-the-full-ass-game
mm, Eventual Rollout from the Full Ass to the web version, they said... Yeah, check out the section WHAT ABOUT WEB GAME, Will it still get updates. They said yes, they'll keep it updated as time rolls by.
and oh! I believe we gamers focused on this: Ultimately though you'll only be able to play the FULL ASS GAME if you buy the FULL ASS GAME. um yeah, tbh, I also disagree with that. HOWEVER, we haven't seen the actual outcome yet, so gamers, stay tuned there, in about a year or so.
And so it is. As we protest, we'll begin to leave, and the fandom would die in a matter of months, so I want you gamers to enjoy the Last Moment right before this game is considered dead because of the lingering disagreement.
I know I am angry, but let's not take it negatively this time. well, you don't want to, so I'll do it so you'll be motivated as well.
psst, btw, check out his https://ninja-muffin24.itch.io/ here. he has done something... dirty... 😏👉 hehe yeah boi. don't tell your parents. oh, you can send me a screenshot showing you bought it from the n word gaming site. that sure motivates me lmao!
okay sleep
Pin Android NDK version to 21.4.7075529
GitHub Actions seems to experience some issues with detecting the right Android NDK to use. There are all sorts of hacks with "side-by-side" installation and "ndk-bundle" which you can read about elsewhere.
Resolve this predicament by pinning the NDK version we are using for building our native code, both BoringSSL and Themis JNI stuff.
This means developers will have to install this specific version of the NDK instead of using whatever they have. But that's not a particular issue. The NDK does not affect compatibility of apps either, unless Google really screws up and breaks something there. Not in our case anyway, since we're not really using anything Android-specific in our code.
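If it helps with reproducing this: with the Gradle Android plugin the pin itself is usually a one-liner like the sketch below; whether this repo pins it in Gradle or in the CI workflow is not shown here.
android {
    ndkVersion "21.4.7075529"
}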
[debug] run on CI
[debug] disable stuff
Revert "Pin Android NDK version to 21.4.7075529"
This reverts commit 6767e614a9d7dabf74fa23aaf7b81a1a7af82eab.
ANDROID_NDK_HOME
ANDROID_NDK_HOME
no
3.6.4
Revert "Revert "Pin Android NDK version to 21.4.7075529""
This reverts commit ae48ecd55b3045264ae8e223e579bca60675e5d6.
what if?
oh fuck you
Revert "3.6.4"
This reverts commit d369d2e259d29b48904df06f7a60a303413e5c07.
Revert "no"
This reverts commit 9d4774176b01b5308d246e6b0974f8643f0072c7.
Revert "ANDROID_NDK_HOME"
This reverts commit 36857d6c987fdef40e07e60c9ae86eb73225110e.
Revert "ANDROID_NDK_HOME"
This reverts commit f9b7ecb7d9f4d8ac3bf6d96eb5f8814d4c1d481d.
Revert "Revert "Revert "Pin Android NDK version to 21.4.7075529"""
This reverts commit 3122e246e9ae6423766a260bca1a718cebb473a0.
asdasdas
asdsd
asdas
check it out
asd
do eet
try to unfuck
and build again
oh fuck you too, actions
comments
Pin to macos-10.15 because fuck surprises
do list first
try to make it faster
what if I just remove these instead
then just do this
and disable this for a moment
yep, seems to be working
rename
Merge pull request #15 from SD11B-Group-3/Kean---Don't-open-if-you're-not-Kean-for-the-love-of-God
Add Charts, Region, City