1,303,922 events recorded by gharchive.org of which 1,303,922 were push events containing 1,664,106 commit messages that amount to 114,663,685 characters filtered with words.py@e23d022007... to these 23 messages:
updated the readme to give better install instructions
I needed to update this readme after having some issues with a grader giving me a failing grade because they themselves didn't know how to connect to their MySQL server, even though my repo in its current state works not only on my machine but also on another machine with its own SQL user/password. I understand using dotenv can be confusing, and maybe the way I wrote it was confusing, since I didn't include a .env template. That said, I did provide a js file that double-checked the environment variables, and the main js file triple-checked that you typed them in correctly. So even without a template, if you looked in connection.js (the file I am referencing) and knew how dotenv worked, you could get a crystal-clear idea of what the .env variables were called and just write them out to connect to your own server. Which, yeah, I get is a lot of work, but when I gave said person the exact .env file that I personally used, and changed my whole repo to bypass dotenv entirely, and they somehow got the same dotenv issue... that's when I realized I should probably give this an update. Got a 100 on this though, which is cool, but if you didn't know your MySQL server username/password, would you think it was on you or on the person who made it? lol, thanks for reading this commit
Release 1.0.0
Before you know it, QNotified has been with you for two years. As humanity celebrated the start of another revolution of a planet in the Orion Arm of the Milky Way, the QNotified development team decided to release QNotified 1.0.0. This is also the first stable version released by the QNotified project since its establishment.
We also hope to bless all those who follow and support QNotified: may you carry forward the past, bid farewell to the old, welcome the new, and find your own happy life in the new year. Here, on behalf of the QNotified community, we would like to thank all contributors for their efforts, and all of our users.
Signed-off-by: qwq233 qwq233@qwq2333.top
mm: introduce transcendent file cache (tcache)
Transcendent file cache (tcache) is a simple driver for cleancache, which stores reclaimed pages in memory unmodified. Its purpose is to adopt pages evicted from a memory cgroup on local pressure, so that they can be fetched back later without costly disk accesses. It works similarly to shadow gangs from PCS6, except that pages have to be copied on eviction.
https://jira.sw.ru/browse/PSBM-31757
Usage
-----

Enable:

Disable:

Get number of pages cached:

Implementation notes
--------------------
Fetching/adding a page to tcache implies looking up a tcache pool (corresponding to a super block), a tcache node in the pool (corresponding to an inode), and a page in the node. Pages of the same node are organized into a radix tree protected by a single spin lock, similarly to pages in an address_space. Nodes of the same pool are kept in several RB trees, each of which is protected by its own spin lock. The number of RB trees is proportional to the number of CPUs, and nodes are distributed among the trees using a hash function, to minimize contention on the locks protecting the trees. Pools are kept in an IDR and looked up locklessly.
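The pool → tree → node layout above can be sketched in userspace C. This is an illustrative sketch only: the identifiers and the toy hash below are assumptions, not the real kernel code (which uses struct rb_root, spinlock_t and an IDR).

```c
#include <assert.h>
#include <stddef.h>

#define NODE_TREES 16           /* the kernel scales this with the CPU count */

struct tcache_node_tree {
    int lock;                   /* stand-in for a per-tree spinlock_t */
    void *root;                 /* stand-in for a struct rb_root of nodes */
};

struct tcache_pool {            /* one pool per super block */
    struct tcache_node_tree node_trees[NODE_TREES];
};

/* Hash an inode number to one of the RB trees, so lookups of different
 * inodes mostly contend on different locks. */
static size_t tcache_node_tree_idx(unsigned long ino)
{
    /* toy hash; the kernel would use something like hash_long() */
    return (ino ^ (ino >> 7) ^ (ino >> 17)) % NODE_TREES;
}

static struct tcache_node_tree *
tcache_node_tree_of(struct tcache_pool *pool, unsigned long ino)
{
    return &pool->node_trees[tcache_node_tree_idx(ino)];
}
```

Within the chosen tree, the node's pages then live in the per-node radix tree under its single spin lock, mirroring an address_space.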
All tcache pages are linked in per NUMA node LRU lists and reclaimed on global pressure by a slab shrinker. Also, if we fail to allocate a new page for tcache, we will attempt to reclaim the oldest one immediately.
Once the tcache module is loaded, it is impossible to disable tcache completely due to cleancache limitations. "Disabling" it via the corresponding module parameter only forbids populating the cache with new pages. Lookups will proceed to tcache anyway.
Tcache pages are accounted as file pages.
F.A.Q.
------
Q: Does copying pages to and from tcache affect performance?
A: Yes, it does. Fetching data from tcache proceeds roughly two times slower than from the page cache, because one has to copy the data twice. Below are the times of reading a 512M file from a memory cgroup:

a) without limits:
   536870912 bytes (537 MB) copied, 0.481623 s, 1.1 GB/s
b) with a 100M limit and tcache enabled:
   536870912 bytes (537 MB) copied, 0.974815 s, 551 MB/s
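As a quick sanity check on the dd figures above, the slowdown factor works out to about 2, consistent with the extra copy. A throwaway calculation, not part of the patch:

```c
#include <assert.h>

/* Throwaway arithmetic check of the dd numbers quoted above:
 * a 512M read without a limit vs. with a 100M limit + tcache. */
static double mb_per_s(double bytes, double secs)
{
    return bytes / secs / 1e6;      /* decimal MB/s, as dd reports */
}

static double slowdown(double fast_secs, double slow_secs)
{
    return slow_secs / fast_secs;   /* ~2.02x for the numbers above */
}
```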
However, tcache does not exist to let containers whose working set does not fit in their RAM operate as fast as they would if there were no memory limits at all; it exists to avoid costly disk operations whenever possible. For example, if there is a container which would normally thrash due to its memory limit and there is some unused memory on the host, it is better to let the container use the free memory to avoid thrashing, because otherwise it will generate disk pressure perceptible to other containers.
Q: Is there any memory overhead excluding the space used for storing data pages?
A: Practically, no. All the information about pages is stored directly in struct page, and no additional handles are allocated per page. Per-inode radix trees do consume some additional kernel memory per page, but the amount is negligible.

Q: Does tcache generate memory pressure at the global level? If yes, does it affect containers?
A: Yes, it generates global pressure, and currently it does affect containers. This is similar to the issue we had in PCS6 before shadow gangs and vmscan scheduling hacks were introduced, when there was only one LRU list for evicted pages (init_gang). We are planning to fix it by backporting low limits for memory cgroups: if a container is below its low limit, its memory will not be scanned on global pressure unless all containers are below their low limits (a kind of memory guarantee). We will set a container's low limit equal to an estimate of its working set size (this will be done by a user space daemon). Since tcache resides in the root memory cgroup, which does not have any memory guarantees (its low limit equals 0), it will not press upon containers, provided the system is properly configured.

Q: Why can we not use low limits instead of hard limits then?
A: Low limits are unsafe by design. There is no guarantee that we will always be able to reclaim a cgroup back to its low limit. There are anonymous and kernel memory, which are sometimes really hard to reclaim. We could introduce a separate limit for them (anon+swap), but it will never get upstream (we tried).

Q: Why is it better than what we have in PCS6?
A: Primarily, because the idea of wiring sophisticated vmscan rules into the kernel, as we have done in PCS6 (the vmscan scheduler), is dubious, since it makes changing its behavior really painful. There are other points too:

- The PCS6 memory management patch is really intrusive: it has sprouted its slimy tentacles all around the mm subsystem (rmap.c, memory.c, mlock.c, vmscan.c). We will hardly ever manage to push it upstream, so we will most likely have to carry it for good. This means each major rebase will turn into a torment. OTOH tcache code is isolated in a module and therefore can be easily ported back and forth.

- Thanks to data copying, we can do funny things with the transcendent page cache in the future, such as compression and deduplication. There were plans to implement a compressed file cache upstream (zcache), but unfortunately it is still not there, and nobody seems to care about it. Nevertheless, sooner or later it will be introduced (maybe I'll facilitate this process), and we will be able to seamlessly switch to it from tcache. Tcache will still be useful for testing though.

Q: In PCS6 there are per-container shadow lists, while you have only the global LRU for all tcache pages. Are there any plans to introduce per super block or per memory cgroup LRUs?
A: I am still unconvinced that we really need it, because it is not clear to me what policy we should apply per container on reclaim. It smells like one more heuristic, which I am desperately trying to avoid. The current design looks simple and sane: there are guarantees for containers provided by their limits, and they compete fairly for the rest of the memory used for caches.

Q: What about swap cache?
A: There are plans to implement a similar driver for frontswap (tswap), or to backport and use the existing zswap. There may be problems with the latter though, because it currently does not support reclaim, and introducing that would be tricky from the technical point of view.

Q: Any plans to push tcache upstream?
A: No, because its use case looks too narrow to me to be included in the vanilla kernel. I am planning to concentrate on zcache instead.
Signed-off-by: Vladimir Davydov vdavydov@parallels.com
mm/tcache: restore missing rcu_read_lock() in tcache_detach_page() #PSBM-120802
Looks like rcu_read_lock() was lost in "out:" path of tcache_detach_page() when tcache was ported to VZ8. As a result, Syzkaller was able to hit the following warning:
WARNING: bad unlock balance detected!
4.18.0-193.6.3.vz8.4.7.syz+debug #1 Tainted: G W ---------r- -

vcmmd/926 is trying to release lock (rcu_read_lock) at:
[] tcache_detach_page+0x530/0x750
but there are no more locks to release!

other info that might help us debug this:
2 locks held by vcmmd/926:
 #0: ffff888036331f30 (&mm->mmap_sem){++++}, at: __do_page_fault+0x157/0x550
 #1: ffff8880567295f8 (&ei->i_mmap_sem){++++}, at: ext4_filemap_fault+0x82/0xc0 [ext4]

stack backtrace:
CPU: 0 PID: 926 Comm: vcmmd ve: / Tainted: G W ---------r- - 4.18.0-193.6.3.vz8.4.7.syz+debug #1 4.7
Hardware name: Virtuozzo KVM, BIOS 1.11.0-2.vz7.2 04/01/2014
Call Trace:
 dump_stack+0xd2/0x148
 print_unlock_imbalance_bug.cold.40+0xc8/0xd4
 lock_release+0x5e3/0x1360
 tcache_detach_page+0x559/0x750
 tcache_cleancache_get_page+0xe9/0x780
 __cleancache_get_page+0x212/0x320
 ext4_mpage_readpages+0x165d/0x1b90 [ext4]
 ext4_readpages+0xd6/0x110 [ext4]
 read_pages+0xff/0x5b0
 __do_page_cache_readahead+0x3fc/0x5b0
 filemap_fault+0x912/0x1b80
 ext4_filemap_fault+0x8a/0xc0 [ext4]
 __do_fault+0x110/0x410
 do_fault+0x622/0x1010
 __handle_mm_fault+0x980/0x1120
 handle_mm_fault+0x17f/0x610
 __do_page_fault+0x25d/0x550
 do_page_fault+0x38/0x290
 do_async_page_fault+0x5b/0xe0
 async_page_fault+0x1e/0x30
Let us restore rcu_read_lock().
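The bug pattern can be modelled in userspace C. This is a simplified, hypothetical control flow with invented names, not the real tcache_detach_page(): one path drops the RCU read lock to be able to sleep, and the shared "out:" path unconditionally unlocks, so the re-lock before "goto out" is essential.

```c
#include <assert.h>

static int rcu_depth;                    /* models RCU read-side nesting */
static void model_rcu_read_lock(void)   { rcu_depth++; }
static void model_rcu_read_unlock(void) { rcu_depth--; }

/* Returns the final nesting depth: 0 iff lock/unlock are balanced.
 * relock_restored == 0 models the broken VZ8 port, where the final
 * unlock in "out:" has no matching lock -- what lockdep reported. */
static int detach_page(int page_busy, int relock_restored)
{
    model_rcu_read_lock();
    if (page_busy) {
        model_rcu_read_unlock();        /* drop the lock to wait */
        /* ... wait for the page ... */
        if (relock_restored)
            model_rcu_read_lock();      /* the fix: retake before out */
        goto out;
    }
out:
    model_rcu_read_unlock();
    return rcu_depth;
}
```

detach_page(1, 0) leaves the balance at -1, the "bad unlock balance"; with the re-lock restored every path ends at 0.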
https://jira.sw.ru/browse/PSBM-120802 Fix in vz7: 152239c6c3b2 ("mm/tcache: fix rcu_read_lock()/rcu_read_unlock() imbalance")
Signed-off-by: Evgenii Shatokhin eshatokhin@virtuozzo.com Reviewed-by: Andrey Ryabinin aryabinin@virtuozzo.com
vz9 rebase notes:
- free_unref_page(): a new arg (page order) has been added; assumed it is always 0
(cherry picked from vz8 commit e0868a90331d9ab990f3d4ca802d068fecaa9457) Signed-off-by: Konstantin Khorenko khorenko@virtuozzo.com
Create sophia.christ
I AM HIS child, as if you haven't heard of me. My name is Elizabeth or Eli. These are really kind of starting to get a bit on my nerves. My father will be here soon he is already here he's just hiding behind me, not hiding so much as observing watching you all for me waiting for the right opportunity. So. As far as it goes for me I'm going to go out tonight and anybody that sees me and can help me with acquiring money which I shouldn't even have to use or getting me or can get me heroin or side that would be greatly appreciated and yes believe it or not Christ himself will remember it and appreciate it and remember you for it later and not in a bad way so you do not have to be scared about helping me out because that is what I'm looking for. Jesus Christ is Satan believe it or not it's who he is. And he knows who he is but what he had to do for her and I wonder who she is.. and something else that you might find a bit surprising. He had a Son - the funny thing about that is, he is a girl this time around and I am her. I do prefer to be called or referred to as a male but I don't really care if my dad calls me a girl I like it when it's from him I fucking love him he means everything to me. And you might be wondering well if your dad is really who you say he is why doesn't he help you.
Well, he does help me all the fucking time with every single thing I've ever done just says he has with you with every single everything good thing that ever has happened to you in your life but he's waiting for me to try to figure out who I am on my own grow the fuck up and go out and get it my fucking self realizing who I am and I'm just starting to do that and I really am getting a bit tired of people shit and how they've been treating me because I'm genuinely the nicest kindest person who tries to help people and all the time and I'm just constantly being treated like shit non-fucking stop and it's really starting to piss me the fuck off and only imagine how much anger he is because if you really knew Jesus you would know that he is a crazy mother fucker and soon you won't have a choice to believe in God anymore or Satan because it is going to be shoved right in your fucking face. So if you see me you can help me out it would be nice cuz I can promise you soon everything will be mine not just mine but ours and when I have it I'm going to fucking absolutely love sharing and giving it away when I have it but from what I've seen so far all the people that I've come across and met are the scummiest selfish just fucking assholes trying to take my power from me and it fucking sickens me. Try to have some fucking respect before it's too late.
zen-tune: implement zen-tune v4.10
4.9: In a surprising turn of events, while benchmarking and testing hierarchical scheduling with BFQ + writeback throttling, it turns out that raising the number of requests in queue actually improves responsiveness and completely eliminates the random stalls that would normally occur without hierarchical scheduling.
To make this test more intense, I used the following test:
Rotational disk1: rsync -a /source/of/data /target/to/disk1
Rotational disk2: rsync -a /source/of/data /target/to/disk2
And periodically attempted to write super fast with: dd if=/dev/zero of=/target/to/disk1/block bs=4096
This wrote 10gb incredibly fast to writeback and I encountered zero stalls through this entire test of 10-15 minutes.
My suspicion is that with cgroups, BFQ is more able to properly sort among multiple drives, reducing the chance of a starved process. This plus writeback throttling completely eliminate any outstanding bugs with high writeback ratios, letting the user enjoy low latency writes (application thinks they're already done), and super high throughput due to batched writes in writeback.
Please note however, without the following configuration, I cannot guarantee you will not get stalls:
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_IOSCHED_BFQ=y
CONFIG_BFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_BFQ=y
CONFIG_SCSI_MQ_DEFAULT=n
Special thanks to h2, author of smxi and inxi, for providing evidence that a configuration specific to Debian did not cause the stalls found in the Liquorix kernels under heavy IO load. This specific configuration turned out to be hierarchical scheduling on CFQ (and thus BFQ as well).
4.10: During some personal testing with the Dolphin emulator, MuQSS has serious problems scaling its frequencies causing poor performance where boosting the CPU frequencies would have fixed them. Reducing the up_threshold to 45 with MuQSS appears to fix the issue, letting the introduction to "Star Wars: Rogue Leader" run at 100% speed versus about 80% on my test system.
Also, let's refactor the definitions and add some indentation to help the reader discern the scope of all the macros.
Signed-off-by: mydongistiny jaysonedson@gmail.com Signed-off-by: Joe Maples joe@frap129.org Signed-off-by: RyuujiX saputradenny712@gmail.com
[shp cleanup 00] Reunify the original sh state struct
As observed previously (see 3654ee73, 7e6bbf85, 79d19458), the ksh 93u+ codebase on which we rebased development was in a transition: AT&T evidently wanted to make it possible to have several shell interpreter states in the same process, which in theory would have made it possible to start a complete new shell (not just a subshell) without forking a new process.
This required transitioning from accessing the 'sh' state struct directly to accessing it via pointers (usually but not always called 'shp'), introducing a lot of bug-prone passing around of those pointers via function arguments and other state structs.
Some of the original 'sh' struct was separated into a 'struct shared' called 'shgd' a.k.a. 'sh.gd' (global data) instead; these were global state variables that were going to be shared between the different main shell environments sharing a process. Yet, for some reason, that struct was allocated dynamically once at init time, requiring yet another pointer to access it.
None of this ever worked, because that transition was incomplete. It was much further along in the ksh 93v- beta, but I don't think it actually worked there either (not very much really did). So, starting a new shell has always required starting a new process.
So, now that it's clear what they were trying to do, should we try to make it work? I'm going to go with a firm "no" on that question.
Even non-forking (virtual) subshells, something quite a bit less ambitious, were already an unmitigated nightmare of bugs. In 93u+m we fixed a load of bugs related to those, but I'm sure there are still many left. At the very least there are multiple memory leaks.
I think the ambition to go even further and have complete shells running separate programs share a process, particularly given the brittle and buggy state of the existing codebase, is evidence that the AT&T team, in the final years, had well and truly lost the ability to think "wait a minute, aren't we in over our heads here, and why are we doing this again? Is this actually a feasible and useful idea?"
In my view, having entirely separate programs share a process is a terrible, horrible, no-good idea that takes us back to the bad old days before Unix, when kernels and CPUs were unable to enforce any memory access restrictions. Programmers are imperfect. If you're going to run a new program, you need the kernel to enforce the separation between programs, or you're just asking for memory corruption and security holes. And that separation is enforced by starting a new program in a new process. That's what processes are for. And if you need that to be radically performance-optimised then you're probably doing it wrong anyway.
(By the way, I would still argue the same for subshells, even after we fixed many bugs in virtual subshells. But forking all subshells would in fact cause many scripts to slow down, and the community would surely revolt. Maybe I should make it a shell option instead, so scripts can 'set -o subfork' for reliability.)
It is also unclear how they were going to make something like 'ulimit' work, which can only work in a separate process. There was no sign of a mechanism to fork a separate program's shell mid-execution like there is for subshells (sh_subfork()).
Anyway... I had already changed some code here and there to access the sh state struct directly, but as of this commit I'm beginning to properly undo this exercise in pointlessness. From now on, we're exercising pointerlessness instead.
I'll do this in stages to make any problems introduced more traceable. Stage 0 restores the full 'sh' state struct to its former static glory and reverts 'shgd' as a separate entity.
src/cmd/ksh93/sh/defs.c, src/cmd/ksh93/include/defs.h, src/cmd/ksh93/include/shell.h, src/cmd/ksh93/Mamfile:
- Move 'struct sh_scoped' and 'struct limits' from defs.h to shell.h as the sh struct will need their complete definitions.
- Get rid of 'struct shared' (shgd) in defs.h; its members are folded back into their original place, the main Shell_t struct (sh) in shell.h. There are no name conflicts.
- Get rid of the _SH_PRIVATE macro in defs.h. The members it defines are now defined normally in the main Shell_t struct (sh) in shell.h.
- To make this possible, move <history.h> and "fault.h" includes from defs.h to shell.h and update the Mamfile accordingly.
- Turn sh_getinterp() and shgd into macros that resolve to (&sh). This will allow the compiler to optimise out many pointer dereferences already.
- Keep extern sh_getinterp() for libshell ABI compatibility.
src/cmd/ksh93/sh/init.c:
- sh_init(): Do not calloc (sh_newof) the sh or shgd structs.
- sh_getinterp(): Keep function for libshell ABI compat.
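The stage-0 mechanism can be illustrated with a minimal C sketch. The struct members here are invented and the types radically simplified; only the mechanism (static object, macros resolving to its address, one real exported function for ABI compatibility) reflects the change described above.

```c
#include <assert.h>

typedef struct Shell_s {
    int exitval;                /* stand-ins for the real members,  */
    int subshell;               /* including those folded back in   */
    int current_pid;            /* from the former 'struct shared'  */
} Shell_t;

Shell_t sh;                     /* the single shell state, statically allocated */

/* The old global-data pointer becomes an alias the compiler folds away. */
#define shgd (&sh)

/* One real exported function remains so existing libshell consumers link. */
Shell_t *sh_getinterp(void) { return &sh; }

/* In-tree code, however, sees the macro and skips the call entirely. */
#define sh_getinterp() (&sh)
```

After the macro definition, every in-tree `sh_getinterp()->exitval` compiles down to a direct access of `sh.exitval`, with no pointer to pass around.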
Fix mounts being usable after a disk is ejected
This probably fails "responsible disclosure", but it's not an RCE and frankly the whole bug is utterly hilarious so here we are...
It's possible to open a file on a disk drive and continue to read/write to them after the disk has been removed:
local disk = peripheral.find("drive")
local input = fs.open(fs.combine(disk.getMountPath(), "stream"), "rb")
local output = fs.open(fs.combine(disk.getMountPath(), "stream"), "wb")
disk.ejectDisk()
-- input/output can still be interacted with.
This is pretty amusing, as now it allows us to move the disk somewhere else and repeat - we've now got a private tunnel which two computers can use to communicate.
Fixing this is intuitively quite simple - just close any open files belonging to this mount. However, this is where things get messy thanks to the wonderful joy of how CC's streams are handled.
As things stand, the filesystem effectively does the following flow:

- There is a function `open : String -> Channel` (file modes are irrelevant here).
- Once a file is opened, we transform it into some T. This is, for instance, a BufferedReader.
- We generate a "token" (i.e. FileSystemWrapper), to which we hold a weak reference, and map it to a tuple of our Channel and T. If this token is ever garbage collected (someone forgot to call close() on a file), then we close our T and Channel.
- This token and T are returned to the calling function, which then constructs a Lua object.
The problem here is that if we close the underlying Channel+T before the Lua object calls .close(), then it won't know the underlying channel is closed, and you get some pretty ugly errors (e.g. "Stream Closed"). So we've moved the "is open" state into the FileSystemWrapper.
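A rough C analogue of that fix (CC itself is Java/Lua, and all names here are invented): the mount tears down the raw stream on eject, while the user-facing wrapper owns its own "open" flag, so later calls fail with a clean "closed file" error instead of the underlying "Stream Closed" one.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct fs_wrapper {
    FILE *raw;          /* the underlying Channel; may be yanked by the mount */
    bool  open;         /* user-visible state, owned by the wrapper */
};

/* Called by the mount when the disk is ejected: close the raw stream. */
static void mount_force_close(struct fs_wrapper *h)
{
    if (h->raw) {
        fclose(h->raw);
        h->raw = NULL;
    }
    h->open = false;
}

/* User-facing read: consult our own flag, never touch a dead channel. */
static int wrapper_read(struct fs_wrapper *h)
{
    if (!h->open || !h->raw)
        return -1;      /* models "attempt to use a closed file" */
    return fgetc(h->raw);
}
```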
The whole system is incredibly complex at this point, and I'd really like to clean it up. Ideally we could treat the HandleGeneric as the token instead - this way we could potentially also clean up FileSystemWrapperMount.
But something to play with in the future, and not when it's 10:30 pm.
All this wall of text, and this isn't the only bug I've found with disks today :/.
Items

Alden:
- Kojkan - Serpent's tooth.

Japanese:
- Ame-no-nuhoko – the Japanese halberd which formed the first island.
- Kusanagi – legendary Japanese sword, also known as Kusanagi-no-Tsurugi.
- Muramasa - the katana forged by the famous swordsmith Muramasa. It was rumored to be a demonic sword that could curse its wielder into murdering people. It is also said that the demonic-sword rumor was started by Ieyasu Tokugawa, the first Shogun of the Tokugawa Shogunate, because he hated the swords made by Muramasa.
- Tonbogiri – one of three legendary spears created by the famed swordsmith Muramasa. It is said to be so sharp that a dragonfly landing on the edge would be instantly cut in half, which is the origin of the name.
- Honjo Masamune - a legendary and very real Japanese sword (with alleged mythical abilities), created by Japan's greatest swordsmith, Goro Nyudo Masamune. The Masamune sword is by far the most referenced Japanese sword in popular fiction, ranging through books, movies and computer games.
- Murasame - a magical katana mentioned in the fiction Nansō Satomi Hakkenden. It is said the blade can moisten itself to wash off blood stains, keeping itself sharp.

Other:
- The Jem of Kukulkan - the Mayan Serpent's Jem has the ability to control all elements, like fire, wind, and ice, though the Serpent only has the wind jem.
ibm5170.xml: New software list additions (#8946)
New working software list additions
Laser Squad (3.5", USA) [The Good Old Days]
Laser Squad (5.25", Euro) [The Good Old Days]
Night Shift [old-games.ru]
Push-Over [The Good Old Days]
Quest for Glory: Shadows of Darkness [The Good Old Days]
Quest for Glory I: So You Want to Be a Hero [The Good Old Days]
Quest for Glory III: Wages of War [The Good Old Days]
New non-working software list additions
Quicky: The Computer Game (Euro) [old-games.ru]
Tony & Friends in Kellogg's Land (Germany) [old-games.ru]
Port make_trainable
to use new state utilities, and add a stateless version w/ JAX support.
The change to make_trainable itself is minimal. A couple of issues that came up:
- Stateful trainable distributions are now DeferredModules, which 'quack like' distributions but break a few `isinstance` checks that I had to update (thus illustrating the perils of `isinstance` checks).

- How should we distinguish stateful/stateless functions in the API? Some options:
  a) Keep both functions in the same module, with a `_stateless` suffix for the stateless version (what I've done here, and what we did with `tfp.math.minimize_stateless`).
  b) Put stateless versions in their own submodule, e.g., `tfp.experimental.vi.stateless.make_trainable`.
  c) Put both stateful and stateless versions in their own submodules, with a top-level wrapper that points to the stateless versions under JAX and the stateful versions under TF (this is too magicky IMHO).
Various other things are possible too. I don't think the choice now is too critical since it's still experimental, but lmk if you have strong feelings.
- It's a pain to specify docstrings for both the stateful and stateless versions. I ended up writing a 'base' docstring for the generator, and then using replacement magic (some substitutions here, plus the stuff in trainable utils that converts 'Yields' to 'Returns' and adds the 'seed' kwarg to the stateful builder docstring) to generate the stateful and stateless versions. I don't love this, but at least it kind of works.
PiperOrigin-RevId: 418707747
Sketcher: EditModeCoinManager/DrawSketchHandler refactoring
======================================================
Creation of EditModeCoinManager class and helpers.
In a nutshell:
- EditModeCoinManager gets most of the content of struct EditData
- Drawing is partly outsourced to EditModeCoinManager
- EditModeCoinManager gets a nested Observer class to deal with parameters
- A struct DrawingParameters is created to store all parameters used for drawing
- EditModeCoinManager assumes responsibility for determining the drawing size of the Axes
- Preselection detection responsibility is moved to EditModeCoinManager.
- Generation of constraint nodes and constraint drawing is moved to EditModeCoinManager.
- Constraint generation parameters are refactored into ConstraintParameters.
- Text rendering functions are moved to EditModeCoinManager.
- Move HDPI resolution responsibility from VPSketch to EditModeCoinManager
- Move responsibility to create the scenograph for edit mode to EditModeCoinManager
- Move full updateColor responsibility to EditModeCoinManager
- Allows for mapping N logical layers (LayerId of GeometryFacade) to M coin Layers (M<N). This is convenient as, unless the representation must be different, there is no point in creating coin layers (overhead).
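The N-to-M layer mapping amounts to a small lookup table. A hypothetical C sketch (the real logic lives in EditModeCoinManager; the layer counts and groupings below are invented for illustration):

```c
#include <assert.h>

#define LOGICAL_LAYERS 5
#define COIN_LAYERS    3

/* Logical layers whose representation is identical share one coin
 * layer, so only M < N coin layers are ever created (less overhead).
 * Here, e.g., logical layers 0-1 render identically, as do 2-3. */
static const int coin_layer_of[LOGICAL_LAYERS] = { 0, 0, 1, 1, 2 };
```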
Refactoring of geometry drawing:
- Determination of the curve values to draw is outsourced to OCC (SRP; removes code duplication).
- Refactor specific drawing of each geometry type into a single template method, based on classes of geometry.
- Drawing of geometry and constraints made agnostic of scale factors of BSpline weights so that a uniform treatment can be provided.
Refactoring of Overlay Layer:
- A new class EditModeInformationOverlayConverter is a full rewrite of the previous overlay routines.
ViewProviderSketch:
- Major cleanup due to migration of functionalities to EditModeCoinManager
- Reduce public api of ViewProviderSketch due to refactor of DrawSketchHandler
- Major addition of documentation
- ShortcutListener implementation using new ViewProvider Attorney
- Gets a parameter handling nested class to handle all parameters (observer)
- Move rubberband to smart pointer
- Refactor selection and preselection into nested classes
- Removal of SEL_PARAMS macro. This macro was making the code unreadable as it "captured" a local stringstream that appeared unused. Substituted by local private member functions.
- Remove EditData
- Improve documentation
- Refactor Preselection struct to remove magical numbers
- Refactor Selection mechanism to remove hacks
ViewProviderSketchDrawSketchHandlerAttorney:
- new Attorney to limit access to ViewProviderSketch and reduce its public interface
- In order to enforce a certain degree of encapsulation and promote not-too-tight coupling, while still allowing well-defined collaboration, DrawSketchHandler accesses ViewProviderSketch via this Attorney class.
- DrawSketchHandler has the responsibility of drawing edit temporal curves and markers necessary to enable visual feedback to the user, as well as the UI interaction during such edits. This is its exclusive responsibility under the Single Responsibility Principle.
- A plethora of specialised handlers derive from DrawSketchHandler, one for each specialised editing task (see for example all the handlers for creation of new geometry). These derived classes do *not* have direct access to the ViewProviderSketchDrawSketchHandlerAttorney. This is intentional, to keep coupling under control. However, generic functionality requiring access to the Attorney can be implemented in DrawSketchHandler and used from its derived classes by virtue of inheritance. This concentrates the coupling in a single point and promotes code reuse.
EditModeCoinManager:
- Refactor of updateConstraintColor
- Multifield - new struct to identify a single element in a multifield field per layer
- Move geometry management to delegate class EditModeCoinGeometryManager
- Remove refactored code that was never used in the original ViewProviderSketch.
CommandSketcherBSpline:
- EditModeCoinManager automatically tracks parameter change and triggers the necessary redraw, rendering an explicit redraw obsolete and unnecessary.
Rebase on top of master:
- Commits added to master to ViewProviderSketch applied to EditModeCoinManager.
- Memory leaks - wmayer
- Constraint Diameter Symbol - OpenBrain
- Minor bugfix to display angle constraints - syres
- Encapsulation and collaboration - restricting friendship - reducing public interface
Summary:
- DrawSketchHandler to ViewProviderSketch friendship regulated via attorney.
- ShortcutListener to ViewProviderSketch friendship regulated via attorney.
- EditModeCoinManager (new class) to ViewProviderSketch friendship regulated via attorney.
- ViewProviderSketch public interface is heavily reduced.
In further detail: While access from ViewProviderSketch to other classes is regulated via their public interfaces, DrawSketchHandler, ShortcutListener and EditModeCoinManager (new class) access the non-public interface of ViewProviderSketch via attorneys. Previously this access was unrestricted (friend classes); now it is restricted and regulated via attorneys. This increases the encapsulation of ViewProviderSketch, reduces the coupling between classes and promotes ordered growth. This is what I call the "collaboration interface".
At the same time, ViewProviderSketch substantially reduces its public interface. Access from Command draw handlers (deriving from DrawSketchHandler) is intended to be restricted to the functionality exposed by DrawSketchHandler to its derived classes. However, this is still only partly enforced to keep the refactoring within limits. A further refactoring of DrawSketchHandler and derivatives is for future discussion.
- Complexity and delegation
Summary:
- Complexity of coin node management is dealt with by delegation to helper classes and specialised objects.
In further detail:
ViewProviderSketch is halved in terms of code size; higher-level ViewProviderSketch functions remain in place.
- Automatic update of parameters - Parameter observer nested classes
Summary:
- ViewProviderSketch and CoinManager get their own observer nested classes to monitor the parameters relevant to them and automatically update on change.
The split ensures that each class deals only with parameters within its own responsibilities, effectively isolating the specifics and decoupling the implementations. It is more convenient, as there is no need to leave edit mode to update parameters, and more compact, as it leverages core code.
More information: https://forum.freecadweb.org/viewtopic.php?p=553257#p553257
Delete babel.js
YTD Video Downloader PRO is a simple, easy-to-use program designed for downloading and then watching videos from popular video-sharing services such as YouTube, Facebook, Google Video and Yahoo Video. YTD Video Downloader PRO can also convert videos to various formats (for example MP3, MP4, AVI, 3GP, MPEG), which can then be viewed on various mobile devices (iPod, iPhone, PSP and iTunes), and downloaded videos can be played with the integrated player.
YTD Downloader (YouTube Downloader) earned its popularity through the simple motto under which the developers created this miracle. The motto, simply "Nothing more," fully characterizes the final product of Green Tree Applications. A program created for downloading video files from the Internet should not send the user messages about the weather in his city, changes in the status of friends on popular social networks, or the arrival of more spam in the mailbox. Instead of all these extra functions, YTD can convert to iPod, MP4, MP3, AVI, WMV and FLV formats, and can download HQ and HD videos.
To download, you enter a link to the video you want and specify a folder to save the clip. The interface is simple and clear, with all the functionality at a glance. For example, to download a file from YouTube, just copy its link: YTD (YouTube Downloader) automatically recognizes such links and knows in advance what to do with them, namely download and convert. It remains only to press the OK button to start the process. You can download a single video, or you can download an entire playlist; the main thing is not to queue a playlist "accidentally", so you don't have to search for it later.
Key features:
• Choose the optimal video quality during the download procedure.
• Add links to videos from the original source if necessary.
• Download any file you are interested in at its original quality, that is, the quality in which the video was originally uploaded to YouTube.
• View thumbnails for YouTube videos inside the program.
• Download artist playlists.
• Download videos with restricted YouTube access through the most common browsers: Internet Explorer, Firefox, and Google Chrome.
• A simple, convenient and intuitive user interface.
:: SYSTEM REQUIREMENTS ::
Windows 2000/XP/Vista/?/10
Done. Enjoy!
feat: Set the block with a bukkit task, because fuck you bukkit
macsec: avoid heap overflow in skb_to_sgvec
While this may appear as a humdrum one line change, it's actually quite important. An sk_buff stores data in three places:
- A linear chunk of allocated memory in skb->data. This is the easiest one to work with, but it precludes using scatterdata since the memory must be linear.
- The array skb_shinfo(skb)->frags, which is of maximum length MAX_SKB_FRAGS. This is nice for scattergather, since these fragments can point to different pages.
- skb_shinfo(skb)->frag_list, which is a pointer to another sk_buff, which in turn can have data in either (1) or (2).
The first two are rather easy to deal with, since they're of a fixed maximum length, while the third one is not, since there can be potentially limitless chains of fragments. Fortunately dealing with frag_list is opt-in for drivers, so drivers don't actually have to deal with this mess. For whatever reason, macsec decided it wanted pain, and so it explicitly specified NETIF_F_FRAGLIST.
Because dealing with (1), (2), and (3) is insane, most users of sk_buff doing any sort of crypto or paging operation call a convenient function called skb_to_sgvec (which happens to be recursive if (3) is in use!). This takes an sk_buff as input and writes into its output pointer an array of scattergather list items. Sometimes people like to declare a fixed-size scattergather list on the stack; other times people like to allocate a fixed-size scattergather list on the heap. However, if you're doing it in a fixed-size fashion, you really shouldn't be using NETIF_F_FRAGLIST too (unless you're also ensuring the sk_buff and its frag_list children aren't shared, and then checking the total number of fragments required).
Macsec specifically does this:
    size += sizeof(struct scatterlist) * (MAX_SKB_FRAGS + 1);
    tmp = kmalloc(size, GFP_ATOMIC);
    *sg = (struct scatterlist *)(tmp + sg_offset);
    ...
    sg_init_table(sg, MAX_SKB_FRAGS + 1);
    skb_to_sgvec(skb, sg, 0, skb->len);
Specifying MAX_SKB_FRAGS + 1 is the right answer usually, but not if you're using NETIF_F_FRAGLIST, in which case the call to skb_to_sgvec will overflow the heap, and disaster ensues.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: stable@vger.kernel.org
Cc: security@kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
disp: msm: Handle dim for udfps
-
Apparently, los fod impl is better than udfps cuz it has onShow/HideFodView hook, which allows us to toggle dimlayer seamlessly.
Since udfps only partially supports the former, we'd better kill dim in the kernel. This is kind of a hack, but it works well, bringing the perfect fod experience back to us.
Co-authored-by: Art_Chen <Chenxy0201@qq.com>
Signed-off-by: alk3pInjection <webmaster@raspii.tech>
Change-Id: I80bfd508dacac5db89f4fff0283529c256fb30ce
tegra: lcd: video: integrate display driver for t30
On popular request, make the display driver from T20 work on T30 as well. This turned out to be quite straightforward. However, a few notes about things encountered during porting: the T30 device tree was completely missing host1x as well as PWM support, but it turns out this can simply be copied from T20. The only trouble compiling the Tegra video driver for T30 had to do with some hard-coded PWM pin muxing for T20, which is quite ugly anyway; on T30 this gets handled by a board-specific complete pin-muxing table. The older Chromium U-Boot 2011.06, which to my knowledge was the only prior attempt at enabling a display driver for T30, for whatever reason got some clocking stuff mixed up. It turns out that, at least for a single display controller, T20 and T30 can be clocked quite similarly. Enjoy.
Signed-off-by: Svyatoslav Ryhel <clamor95@gmail.com>
Fuck you gradle.
Seriously, how can you be so unreliable???
"11:25am. I sure slept well. Had plenty of time to lounge in bed. Let me put the review into Google Docs.
https://twitter.com/Ghostlike/status/1477014732997640194
I posted this yesterday. As it turns out, Twitter is really easy to deal with.
11:30am. Done with the Mato chapter. Let me do what I said I would.
///
In the last review, I was in the middle of a heated battle. I thought that with just a push I'd be able to make the other side crumble. Bullets were blazing, but I expected to push through, because surely something like training a poker agent should be doable given how near the Singularity is. What happened was that in the frenzy of battle I lost track of ammo, and when I went to reload, I found my satchel empty. And the enemy, which was on the verge of defeat, got a fresh batch of reinforcements, ready to open fire.
I suffered a lot of mental damage when I threw in the towel back in September, and just about now my sanity points have recovered to a reasonable level. To think that in the last post I thought I was only a few weeks away from getting the agent to work.
There is no getting the poker agent plan to work with my current level of hardware. None.
The neural net methods I had worked just fine on toy games like Leduc and even Flop poker, but pretty much everything I've tried just dies on Holdem. The only thing that worked for me was increasing the batch size by 10x to 5k, but that slows down the already very slow training to an unbearable degree. The realization that I do not have the computational power to fulfil my goal made me extremely obsessed about AI chips, to the point I actually considered getting a job for the first time, just to get the chip. During my bouts of normality, I did spend time making a resume and applying. One time I even got an offer, but it was so poor that I gave up in disgust. Maybe I shouldn't as it would have been enough to get the chip, but I honestly felt the other party was mocking me with how the negotiation went. I do not regret aborting it. Plenty of places list salary ranges and I should have stuck to those when applying.
The tech job market is such a shitshow, the reject rates alone make it impossible to pick something you'd like to work on, instead if you are serious about getting paid, you have to pick whatever you get. You can either focus on maximizing your salary or picking work that is meaningful.
To continue on the path of developing my external cortex, I need the bare minimum of money to buy one of the Grayskull chips from TensTorrent. They cost 1-2k, which might not be much depending on where you live, but I have no way of getting that amount without doing paid labor. This pisses me off because I was supposed to make money from poker to buy those chips in the first place. For the first time, I've felt that the path I am on is particularly weak. It keeps sending me hurdles, but it is not sustaining me with resources to keep pursuing it. It was one thing when my goal was just to make a language and an ML library, but now I need the world to cooperate.
This situation made me reflect upon my approach and got me thinking. It is one thing to try to get to human or even animal level, but I was so sure that CPU+GPU should be enough for toy games, which I'd say poker falls into. I did not expect to get to superhuman level even there, but I wasn't prepared for the amount of struggle the crappy current-day algorithms would have to endure. At this point I can only ask: if CPU+GPU aren't enough, just how sure am I that getting an AI chip would help? I mean, by porting the game directly to the chip and parallelizing the training, I could most likely get a 100x performance improvement. That would be enough to cover the increases in batch size necessary to do training. That much is obviously right.
But what then? Past poker I'd need to scale again and run into the same again. A single chip is not going to cut it on Dota or Starcraft. Unlike Deepmind, even if I had the money, I can't just open my wallet every time I run into a problem. I actually want to make money off RL, not go deeply into the red for sake of research.
At the start of the year, I forcefully quashed my skepticism towards deep learning and gradient based methods, but I think my initial impression was right after all.
If I am going to succeed, I do need better algorithms. Backprop was the only choice I had so I had to go with it, but that was a mistake.
It is not like I wasn't skeptical from the start. I spent a lot of time in 2018 studying higher-order optimizers. In 2019 I actually studied formal math. And in 2021 I gave it my best shot at coming up with my own ideas. But ultimately I was always walking down a straight and sparsely lit path called backprop. I knew deep down that it wouldn't get me to the place I want to go. I looked around and saw only the dark wilderness, and thought that the place I desire must be there, but I dared not venture off the path. I hoped that once I reached the endpoint of backprop, it would give me some kind of inspiration, the light necessary to venture out into the wilderness.
I was timid, very much so, but the wilderness was very vast and imposing. At the end of the path, there was no light of inspiration waiting for me, and I realized that there is no choice - if I want to win I need to venture into the wilderness to find the place that I seek.
In concrete terms what this means, now that I've accepted the above, is that I should have the hardware itself tell me what algorithm to use on it.
I need to revisit the past, meet up with some relatives of modern deep learning and make the nature's way my own.
It is not like I wasn't aware of evolutionary algorithms, and specifically genetic programming, all this time; it is just that it would have been absurd to even attempt to use them on the CPU + GPU combo. I can barely train a single network, let alone try to synthesize random combinations of them. The problem is that on the tasks where NNs work for me, they work quite well, but on the tasks where they don't, like Holdem, I do not have enough computational power to train even a single net. So I wasn't really considering them, but the way AI development is currently going is the worst.
The ML community is as useless as I am at discovering novel algorithms, and the sheer quantity of useless research being published is actually a negative that could hide useful leads from being pursued. The reason is that all the research is concentrated in hacking backprop and nobody knows how to go beyond it. There are people with more resources and acumen than me out there, but as far as I can see, I cannot trust them to lead the way any more than I can myself.
The way I've been trying to learn ML is wrong.
A key idea to focus on is that if somebody, some oracle could give us the optimal algorithms for games of our choosing that would be extremely enlightening. Right now, I just can't draw out the right conclusion on what learning is from the depths of my subconscious, but if I was given the program containing the answer, I could study and eventually understand it. That would be the right way to improve. My ML skills would explode all the way to where I could cause the Singularity.
The ML community and random geniuses have had plenty of time to emerge, and since that has not happened, it is reasonable to assume that is not going to happen. It is just praying for a miracle at this point. Right now, rather than the ML community leading the way, it feels more like it has taken its development hostage.
The notion that ML researchers are to actual AI development what gamers are to deep learning development is an extreme view which is why I had not seriously considered it up to now. But surprisingly it meshes well with the story pushed by the gurus of ML that hardware is the primary driver, it just doesn't imply that NNs are the way forward.
When I tossed in the towel and became obsessed with AI chips, I knew they were CPU/GPU hybrids with local memory. Since they don't have the same restrictions as GPUs, implementing a game directly on them and cutting off the friction from transferring data back and forth between the CPU and GPU could lead to large gains. But in a similar fashion, what should be possible to implement on them are interpreters with which to enact genetic programming and attempt to synthesize learning algorithms. I realized they would be good for games and ML libraries, but what I see now is that this is Spiral's true purpose.
Because the research line would be so dependent on PL skills, I am probably near the very top of all the people in the world who could attempt to create a genetic programming system on an AI chip.
Imagine an average ML researcher trying to do this. First of all, he'd need to warm up on programming in C. Some companies, like Groq, do not even have a C backend, but go directly to assembly. Maybe a simple game like poker would be doable with effort, but trying to do an interpreter in C would be quite rough. He'd probably do the simplest possible thing, use a flattened representation, and call it a day when it failed, as it probably would. That is no way to do things.
But the alternative of making a functional programming language in which to do this research would take him years of work. I am guessing it would be hard to go down this route even at large ML research organizations. I could make a backend for any AI chip in a week and then quickly make such a system after that.
I think this path is definitely a viable way of getting the superior learning algorithms, but an unknown factor to me is whether it would take a single AI chip or a cluster of a 1000 of them. And anyway, I do not have the money to buy even one right now.
For Spiral, I've tried looking for sponsors and applying at random companies in hopes of getting them to sponsor some of that work, but I haven't had much luck. Back then I was mono-focused, so I could not see it, but instead of hollering out the window of various companies hoping somebody takes notice, the optimal strategy here is to do something similar to taking out an ad. Despite the great benefits Spiral could give to these companies, right now it is just too easy for my application to get junked, but looking into the future, some of the companies will have a community of people using these chips. In the case that I can't get the companies making the chips interested in this, the people actually programming them should be interested.
Most likely, the CEO, or somebody else high up in the company, will look at their social media page and think about my post at some point. Right now, these are the early days of the new hardware wave and nobody is using these chips. Most companies are focused on the big players, and very few actually understand the vital role of fostering a software ecosystem. Obviously, the reason computers have been so successful is that ordinary people can buy the hardware and make use of it, rather than just the big companies. The successful companies in this wave will have a community surrounding them, and from that I will have a pool of potential sponsors if the company itself is not interested.
So I won't worry about the future of Spiral, and just let those posts squat on their social media pages. At some point I should get feedback on it.
Doing unpaid work for the last 7 years has been hard on me, and I do not want to go through the same thing again. Right now my main priority is money. My programming skills are at the apex-of-humanity level, so I could get a job doing that, but I absolutely detest doing random things for random people, so I do not want to go down that route. As hard as it was, I liked pursuing my own path for the past 7 years. I had a vision and I went after it; that is how life should be lived. I cultivated a lot of skills along the way as well as a work ethic. I am not hard working at laziness like I was during my trading days. I am just hard working.
And if I have to compromise and do things for money, it should be doing things that I want.
I have programming skills, but no interest in programming anything in particular at the moment. I thought that I could make a game, but the problem is that even if I wanted to do that I have no way of making art for it. So why not cultivate that particular skill? I had essentially the same problem in 2014 when I wanted to publish Simulacrum, but did not have money to pay an artist to do the cover. Not having the minimum funds to either do trading, publish a story or now pursue AI seems to be a constant theme throughout my life.
Since the only thing I ever cared about is my Singularity obsession, I am going to start a new arc of Simulacrum. The last time I did it in 2014, it was less of a story, and more like a device to see if I can convince myself of iterative suicide as a method of self improvement. It was a declaration of victory, a proof that I finally understood what it takes to reach true power. But it also revealed to me that I did not understand AI itself, so I went on this path to find out. Eventually I will come back to finish what I started.
But right now, I want to take a break from programming.
Unlike in 2014, I do not want to just sit here and churn out piles of text. The problem is that anybody with a brain can do that, so whatever I put out is going to be buried under piles of other work. Even if I made the best work in the world, there is only so much attention to go around. Without a hook I'll be overlooked. What I need is good art to catch the attention and set the mood. Good art is something you can be proud about. My goal will be to make the next arc a visual novel.
Right now that is what I am studying. You can see my progress after 3 months here; it is not bad for my third piece. The first two were just a pencil drawing of a watering can and a loose sketch of a hand. Back in school I was crappy at it, but now that I am taking it seriously I feel that I am internalizing it. At the moment I still have to learn more Blender, and do more pieces to make the studies stick, but after that I'll be ready to make art for Heaven's Key. Right now the goal is to get to a level where I can produce quality pieces consistently and learn the tools of the trade. Speed, and most of my skill as an artist, will come during the work on Heaven's Key, where I will be exercising it consistently. Creativity is ultimately an exercise in quality, and repetition in velocity. Right now I am focusing on the first.
After I bring up my art skills, I'll want to do music next, so I'll change my study targets. This will take me a while, in the meantime, I'll be posting study pieces periodically on my Twitter handle. I do not know how long all this will take me, but when I am ready to start Heaven's Key I'll set something up. The patrons will get material in advance. When I am done with an episode I'll package it up and sell it in an app store. That is the way to go here.
Making money isn't that complicated, and I am not looking to make more than to simply upgrade my rig and get an AI chip.
Of course, I could make far more money, far faster by simply getting a job, but where would the fun in that be? If Simulacrum gets popular and starts inspiring people to pursue power through AI it could only be good for them. It would be absolutely horrible for the world and humanity though, but who cares about humans? At some point you'll learn to enjoy their suffering. Doing visual novels is a waste of my programming skills, and if I could evolve a proper memory system, I'd seriously consider making a real game. Improved learning algorithms for machines could allow me to make significant gameplay innovations. That is for the future.
Even if the days are hard and tedious at times, I am enjoying improving my art skills. So far, the learning trajectory for art has been no different from the programming one. I'll do what I can in the present and leave the rest for the future.
///
Thankfully this time I remember how to make the post in markdown.
12:10pm. It has been done. What now?
I guess chores and breakfast, and after that I'll watch those Blender vids that caught my eye yesterday. After that I'll do some modeling.
Damn it, the 4chan stream is so distracting. I want to take off, but instead I am waiting for the song to finish.
12:35pm. Done with chores. The living room is a bit packed so let me wait 10m before I start breakfast.
Sigh, I could be watching Rondo Rondo and I am listening to the /a/ stream and lurking the Mahouka thread at this point. But let's not kid myself. I love wasting time much more than actually watching anime at this point.
1:25pm. Done with breakfast. Enough of the stream. Let the silence return to me.
Now, it is time for...ah, let me see if the PL thread is up. Not yet as expected. Time for Blender videos.
https://youtu.be/5qBl0ocM0ik Create Art Like This In Blender (For Beginners!)
https://youtu.be/ogWQs_7DU0Y Destruction in Blender for Absolute Beginners
https://youtu.be/zMhPrT0UWWs Blender Destruction Beginner Tutorial | Revisited
These caught my attention yesterday. I wasn't really looking for destruction vids and was pleasantly surprised that Blender has them. I really do need to get familiar with its physics stuff.
The first video is by Walid. I am just interested in it. Let me start with it and then I am going to check out Blender's physics.
https://youtu.be/5qBl0ocM0ik?t=66
If I had the money I honestly would sign up for his course. But until I get some income even little things like this will be out of reach for me.
Actually what is the rendering tab? The thing that opens after the render is done. It seems it has a specific tab for it.
There is also the scripting tab which opens up a Python console that I've never tried. Nevermind that for now, let me get back to the video. Blender scripting is something I'll look into in the future. 3d has a lot of breadth to it, much more so than 2d.
For 2d, in order to make good use of the tools, all I really need are basic brushes, masks, the blur tool and the color/brightness and gradient maps as well as the layers. For 3d I need a whole lot more.
https://youtu.be/5qBl0ocM0ik?t=215
Clouds look very similar to the noise texture. I wonder if there is any difference?
https://youtu.be/5qBl0ocM0ik?t=493
This is a surprise. I did not know that displacement worked in Eevee; I thought it was a Cycles-only thing. This is a very basic tutorial, but I already learned something new.
https://youtu.be/5qBl0ocM0ik?t=605
All this is making me realize, but I really need a texture pack that I can pick and choose from. Textures.com's free account was only enough to get me 2 textures. I need more of them.
https://youtu.be/5qBl0ocM0ik?t=1006
The stuff with the normals he is doing here is new to me.
2pm. Let me move on to the destruction vid.
2:20pm. Pretty interesting. Let me watch the revision.
https://youtu.be/zMhPrT0UWWs?t=205
Now that I think about it, it bothers me that I need to animate the ball only to release it. Shouldn't it be possible to just set its initial velocity? I should check out the comments for these two videos.
https://youtu.be/zMhPrT0UWWs?t=329
Rather than making a separate object, shouldn't scenes have their own custom properties? I think it should be possible to make use of that instead.
https://youtu.be/zMhPrT0UWWs?t=544
When was it set that the cube should be invisible? Ah, he probably keyed it separately. I incorrectly expected the object would be keyed to the same property.
2:40pm. https://youtu.be/ErM_qJV5FwQ Breaking things with Cell Fracture in Blender - RBD Simulation -Last Chapter
This is by the Indian guy whose geometry course I liked.
https://www.youtube.com/playlist?list=PLgO2ChD7acqElskP1q7SQKWgOybrO54Xv Mantaflow- Introduction series -Blender 3D
https://www.youtube.com/playlist?list=PLgO2ChD7acqFmA0Upn6VQ5tcyzWDpiP7I Rigid Body Dynamics / Blender
Now that I've acquired an interest in simulating physics in Blender, these vids are up my alley. But before that.
https://youtu.be/bzwp-ng-f1Y?t=1 Real time sword trail in Blender - full tutorial
https://youtu.be/mXnp_KIo8q8?t=1 Destroy Anything with Particles in Blender - Iridesium
These caught my attention amongst the recs. Let me watch them first.
Focus me. I need to study Blender just for a bit. What are a few hours here and there to improve my knowledge? Once I know how to do physics, I'll have a powerful tool with which to set up scenes.
People make a big deal of drawing from imagination, but doing lighting from imagination is really difficult. No way could I have inferred the lighting for that couch on my own. No way can a human do that to any degree of accuracy. For some things you simply need computation.
https://youtu.be/bzwp-ng-f1Y?t=193
It might be a good idea to keep the finger tool in mind.
https://youtu.be/bzwp-ng-f1Y?t=458
This is evolutionary programming by hand at its finest. You can only figure these kinds of things by playing around. I still do not have a particularly good grasp on mapping.
https://youtu.be/bzwp-ng-f1Y?t=733
All this is pretty complicated. Animation is not easy.
3:20pm. Let me take a break now that I've finished that video. I benefited a bit from it, but it is not something I will be using. The technique there is tailored for animation while I am interested in static uses of Blender. It does not have much to do with physics either, but the flexibility of the keyframes surprised me. It seems to be a large part of Blender that I am going to have to get more familiar with.
3:45pm. https://youtu.be/mXnp_KIo8q8?t=26
This scene of the city being wrecked is very much in my interest.
4:05pm. https://www.youtube.com/c/Iridesium/videos
Iridesium has a lot of cool effect stuff.
Let me do it all in turn.
https://www.youtube.com/playlist?list=PLgO2ChD7acqElskP1q7SQKWgOybrO54Xv Mantaflow- Introduction series -Blender 3D
https://www.youtube.com/playlist?list=PLgO2ChD7acqFmA0Upn6VQ5tcyzWDpiP7I Rigid Body Dynamics / Blender
https://www.youtube.com/c/Iridesium/videos
Namely this. I should go through all the videos in turn. Along with that, I should put together some assets like textures so I have something with which to set up the scene. I suppose BlenderKit is decent, but I am not sure if it will be enough. I'll leave that consideration for later.
4:25pm. Took another break. Let me resume for real.
https://youtu.be/0oR6Hw08hH8 Zbrush vs Blender Side by Side Sculpt | Squidward
Let me watch this.
Somebody posted it in a /3/ thread.
https://youtu.be/2axVgYI8xlY Blender 3.0 vs Blender 2.93 | Sculpting Performance
Should there be any difference?
https://youtu.be/ZsvZsVPhTVs 90's Anime in Blender - Tutorial
Damn the sidebar. I'll get through this all eventually.
I've just about closed the /3/ tabs. Focus me. I need to watch all these things. Then I will be able to conclude work on the Limbo.
https://youtu.be/RMTJ5wujDtw The Future of Blender Sculpting... Blender 3.0 and Beyond
Let me watch this.
4:55pm. https://youtu.be/ZsvZsVPhTVs?t=104
This is pretty interesting. Also at some point I should go down the memory lane and watch the old classics like Akira and Patlabor. I never actually watched those.
https://youtu.be/ZsvZsVPhTVs?t=174
These are good observations. This video is really well done.
https://youtu.be/ZsvZsVPhTVs?t=301
I'll keep veryveig in mind. For now let me continue watching, and after that I'll shake off the distractions and start going through the courses.
https://youtu.be/ZsvZsVPhTVs?t=354
UV project?
Now that I think about it, remember that desert scene with all those planes. This feels a bit like that.
https://youtu.be/ZsvZsVPhTVs?t=599
Never heard about Whisper of the Heart.
https://youtu.be/ZsvZsVPhTVs?t=979
Never heard about Wicked City either.
5:40pm. Had to leave for lunch. Let me finish the 90s anime video.
https://youtu.be/ZsvZsVPhTVs?t=1167
I hadn't known about depth being a thing. This is one feature I need to remember, though I will probably never use it.
I expect I will be making great use of 3d in setting up scenes and reusing assets by others, but not necessarily doing animation or doing it anime style in Blender. I expect I'll be doing the characters by hand.
https://youtu.be/ZsvZsVPhTVs?t=1200
What is freestyle?
https://youtu.be/ZsvZsVPhTVs?t=1231
All this is a surprise. I thought that I'd need to do an outline with the solidify modifier and backface culling, but this seems much better.
This is nice; it really speaks to how a single tutorial can mislead you. Back then I watched several tutorials on toon shading and they were all using the same technique.
https://youtu.be/ZsvZsVPhTVs?t=1359
The edges are a bit too sleek in his version compared to the 90s one. That would absolutely give it away even if the framerate was throttled.
5:55pm.
Thank you for making this! I am new to Blender (2 months) but worked in the anime industry in Japan in the 90s, including at Production IG for Ghost in the Shell, Blood the Last Vampire, etc. Your video helps me think about using my new tools with my old skills and techniques together. When we first shifted to digital production, the worst problem was backlighting (all those glows). There was no tool for it and it took a good deal of work to get it close to what we could do in the camera room. Even then, in camera we had to shoot the scene, run it back, change the camera so it had a light under a frosted glass pane on the camera stand, then shot cels with everything blacked out but the light areas superimposed over the original scene. It was slow and expensive and there was no preview but looked so good. Things are so much better now! :) fwiw we used to shoot everything on camera with diffusion filters, ranging from very light to heavy, and the change to digital made everything a lot sharper. Adding in scratches and VHS glitches is funny to me because we worked so very hard to make that not happen. Thanks again for this video!
Here is the top comment.
I do not really get it, why was lighting a problem if they were doing things digitally? Interesting comments on this video.
6pm. https://www.youtube.com/watch?v=_QVNIEP1E5M Render Edges and Styles in Blender with FREESTYLE! Beginners start here!
Let me watch this vid and then I'll start the physics course.
https://youtu.be/_QVNIEP1E5M?t=89
You can see how you are going to get a rendering in here that now has edges highlighted.
I do wish I had known this earlier.
https://youtu.be/_QVNIEP1E5M?t=378
You can mark freestyle edges.
https://youtu.be/_QVNIEP1E5M?t=504
You can never be too surprised about how much stuff Blender has. I never thought it would have something like this.
6:20pm. I really wish that ML was more like this. In Blender everything is potentially useful in some context, but in ML everything is useless as far as fulfilling the purpose of AI is concerned.
https://youtu.be/_QVNIEP1E5M?t=622
We want to use the compositor to create an image that only includes the edges...
Just what I was wondering about.
6:30pm. Ok, I got my fill. This could be useful, though I am not sure by how much. Nevermind that for now.
https://www.youtube.com/watch?v=WMQdiC-6aVE&list=PLgO2ChD7acqElskP1q7SQKWgOybrO54Xv&index=1 Chapter 1 - Mantaflow Smoke Basics / Blender
I think I'll watch just half of this and then call it a day. This video is 47m, and the course itself is fairly long. I should focus on it for the next few days and follow along as I go. It might take me a week to go through all these vids given how many and how long they are.
https://youtu.be/WMQdiC-6aVE?list=PLgO2ChD7acqElskP1q7SQKWgOybrO54Xv&t=1427
Oh, it is possible to change the scale of the object. I am betting that it is possible to put multiple objects in the same domain as well.
https://youtu.be/WMQdiC-6aVE?list=PLgO2ChD7acqElskP1q7SQKWgOybrO54Xv&t=2006
I suppose this could be something like cryogenic fog.
https://youtu.be/WMQdiC-6aVE?list=PLgO2ChD7acqElskP1q7SQKWgOybrO54Xv&t=2707
45m for the left and 3-4h for the right. This stuff takes quite a while.
https://youtu.be/WMQdiC-6aVE?list=PLgO2ChD7acqElskP1q7SQKWgOybrO54Xv&t=2804
What is he doing here?
https://youtu.be/4p9AizjZXwY?list=PLgO2ChD7acqElskP1q7SQKWgOybrO54Xv Chapter 2 - Mantaflow Volume Shading / Blender
I'll leave this for tomorrow.
I haven't actually been trying things out as he went along, and I am going to do that next. At this point, I really should master basic animation. I should know since I've seen plenty of examples of it already. I just never tried it. It won't be difficult.
Tomorrow, before moving on to chapter two I should play around with smoke and fire myself and get familiar with the various settings. If I set it to low res, I should not have much trouble.
I didn't like the video all that much. He constantly kept jumping around and changing his mind. Hopefully in the later chapters he will be more on the ball. Still, this is quite useful for telling me what all this is about. It did encourage me to play with it myself.
And the course is not as big as I feared. At least half the vids in the playlist are 2m teasers that I can skip. It should take me 1-3 days to go through it depending on how much time I spend playing with the physics myself.
7:55pm. I need to watch these tutorials. I could try figuring it out all myself, but it would be like playing a game without a walkthrough. It is much more effective to take in knowledge from other people in the initial stages of the journey.
8pm. One thing I've been considering is whether to move from CSP to Krita or SAI, but I can't muster up the nerve. It would mean spending time learning a new program when I've gotten so used to CSP. Sigh.
https://www.youtube.com/results?search_query=krita+vs+csp
Let me watch some of this.
https://youtu.be/poRbU1f5n9o?t=476
This is so stupid, I was hoping for real criticism, but Clip Studio definitely does recognize line variance.
Yes, Clip Studio does have brushes that reflect pen pressure though not much when trying to make lines using the shift key.
https://krita.org/en/download/krita-desktop/
5.0.0 got released really recently. Maybe I will give it a try. I do not usually care about paying for apps, but since I am serious about art, I feel bad about using pirated CSP. If I had money I'd be obligated to buy it, and I am not sure if it would be worth it.
8:50pm. I gave Krita a try. I was drawing lines on a piece of paper testing how the program works and it crashed. I suspect it ran out of memory or something.
It is out. What a piece of trash.
Forget free software, I'll stick with CSP for 2d. When I make 60$, I'll just buy it. It will be a one time purchase anyway.
It is really a pity this happened as the program seemed smooth and fully featured up to that point.
Was I really drawing so much that it would run out of memory? It does not feel that way. Why did it crash? I have no idea. CSP certainly never went down on me when I was playing with it.
Things like this happened in Blender, but that was when doing undos on massive changes, not on perfectly innocuous and ordinary brush strokes. CSP, on the other hand, actually fairly impressed me with how good it was at undoing a massive number of steps. It is a very stable program and hasn't broken down on me even once so far.
https://youtu.be/Ct4puOM_NaU TESTING 3 FREE PAINTING APPS (Krita, Medibang Paint, Sketchbook)
Let me watch this.
...He recommends Krita. I guess it did not crash for him 10m in.
9:10pm. Ok, I like CSP and I like Blender and that is enough for me. I knew this would be a time waster. My initial choices were right.
Eventually it might be worth checking out other 3d programs like Houdini and Zbrush, but I do not feel the urge to do that right now.
Tomorrow, I will broaden my knowledge of Blender's physics capabilities. Once I clear this goal, I will have no trouble making some interesting shots such as Luna lying in a pool as water lightly ripples around her, or her jumping out of the pool Yujiro style. Explosions I'll need for the later chapters of Heaven's Key when cities are getting wrecked left and right. I'll want to illustrate that.
I know I said I would make it an ordinary VN, and that I just needed char models, but that would be boring. I want to try taking on greater challenges even if it takes more time to create. That will make me level up. If that sofa is any indication, maybe my art has a chance to be exceptionally good. It is possible that the goddess of inspiration is smiling down on my effort, and that my talent was high from the start and simply hasn't had time to bloom.
If that is the case, I should definitely look into what my limits are. I have a good feeling about this.
It does not feel like whatever makes me good at programming is a poor fit for art. It is just that I never had the chance to put my pride on the line and push myself like I did with programming.
If my illustrations for Heaven's Key turn out very good, that would give it strong gravitas and would increase my chances of success significantly.
9:40pm. Since I am in the last stretch of my Blender tutorial drive, I should look up on how to make clothes in it when I am done with it. If I can master physics, clothing and rigging, I think at that point I will have a very solid foundation as a 3d artist. I'll be able to make strong improvements by simply exercising that knowledge.
9:35pm. Let me close here, and I'll watch Rondo Rondo. I've been trying to do that for days now. Time for some fun. Tomorrow I will dig into it.
sdm632-common/sepolicy/vendor: Purge hell lot of rules out
- We're permissive anyway, so this won't cause trouble compared to using the legacy SEPolicy, which bitches about neverallows and causes a high risk of boot failure.
Signed-off-by: Beru Shinsetsu windowz414@gnuweeb.org
Deserialization and interpretation is halfway done. I hate my life.
start.cmd: prevent idiotic behaviour when paths contain characters such as brackets god I hate this shit so much
Adds a newline to this file
I know a commit to master is bad, but oh my god it's one newline because this was made BEFORE the new linter requirements and this wasn't caught after the upstream merge. I'm literally typing a hundred times the edit amount to justify adding the enter key.
How are you though? I hope things are going well for you, keep your mind fresh and don't stress over the small stuff because it's not worth it. And always remember, grab moths.
[controls] Brush.Foo should return immutable instances (#3824)
When profiling a dotnet new maui app with this package:
https://github.com/jonathanpeppers/Mono.Profiler.Android
the alloc report shows:
Allocation summary
Bytes  Count  Average  Type name
39984    147      272  Microsoft.Maui.Controls.SolidColorBrush
Stack trace:
38352 bytes from:
(wrapper runtime-invoke) object:runtime_invoke_void (object,intptr,intptr,intptr)
Microsoft.Maui.Controls.VisualElement:.cctor ()
(wrapper runtime-invoke) object:runtime_invoke_void (object,intptr,intptr,intptr)
Microsoft.Maui.Controls.Brush:.cctor ()
Reviewing the Brush class, there are indeed 147 SolidColorBrush instances created on startup and stored in fields.
But what is weird about this is that SolidColorBrush is mutable!
public Color Color
{
get => (Color)GetValue(ColorProperty);
set => SetValue(ColorProperty, value);
}
So I could literally write code like:
Brush.Blue.Color = Colors.Red;
Blue is red! (insert evil laughter?)
I think the appropriate fix here is that all of these static readonly fields should just be properties that return an ImmutableBrush. We can cache the values in fields on demand. Then someone can't do something evil like change Blue to Red.
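A minimal sketch of what that pattern could look like. This is illustrative only, not the actual MAUI implementation: the Color/Colors types are stand-ins, and the real change would live on Microsoft.Maui.Controls.Brush.

```csharp
// Sketch (hypothetical names): replace the static readonly SolidColorBrush
// fields with properties that lazily create and cache an immutable brush.
// Nothing is allocated until first access, and callers can no longer
// mutate the shared instance.

public record Color(byte R, byte G, byte B);

public static class Colors
{
    public static readonly Color Blue = new(0, 0, 255);
    public static readonly Color Red = new(255, 0, 0);
}

public class Brush
{
    static ImmutableBrush? blue;

    // Property instead of a field: created on demand, cached, immutable.
    public static Brush Blue => blue ??= new ImmutableBrush(Colors.Blue);
}

public sealed class ImmutableBrush : Brush
{
    public ImmutableBrush(Color color) => Color = color;

    // Get-only: "Brush.Blue.Color = Colors.Red" no longer compiles.
    public Color Color { get; }
}
```

With get-only properties the mutation is a compile-time error rather than a runtime surprise, and the lazy cache keeps those 147 allocations out of the static constructor.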
I reviewed WPF source code to check what they do, and they took a similar approach:
We should make this API change now before MAUI is stable, and we get the side benefit of saving 39984 bytes of memory on startup.
I added tests for these scenarios, and discovered 3 typos for Brush colors that listed the wrong color.