# [<](2021-03-06.md) 2021-03-07 [>](2021-03-08.md)

2,185,573 events, 1,249,742 push events, 1,747,207 commit messages, 105,057,362 characters

---

#### Sunday 2021-03-07 00:39:32 by ctnp-e

colors. ive been doing this for millions of years. fuck you. suck my nuts


---

#### Sunday 2021-03-07 01:15:10 by Jean-Paul R. Soucy

New data: 2021-03-06: See data notes.

Revise historical data: cases (MB, SK).

Note regarding deaths added in QC today: “10 new deaths, for a total of 10,465 deaths: 4 deaths in the last 24 hours, 4 deaths between February 27 and March 4, 2 deaths before February 27.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.

Recent changes:

2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:

  1. Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory. (A short sketch of recombining the year-split files follows below.)
  2. Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
  3. The file codebook.csv has been moved to the directory “other”.

We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.
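
For anyone consuming the files that are now split by calendar year, a minimal sketch of recombining them (assumes pandas; this is not part of the CCODWG tooling itself):

```python
# Minimal sketch: stitch the year-split individual-level files back
# into single dataframes (paths as described in change 1 above).
import pandas as pd

cases = pd.concat(
    [pd.read_csv(f"individual_level/cases_{year}.csv") for year in (2020, 2021)],
    ignore_index=True,
)
mortality = pd.concat(
    [pd.read_csv(f"individual_level/mortality_{year}.csv") for year in (2020, 2021)],
    ignore_index=True,
)
```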

  • 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See the notes in README.md from 2020-11-27 and 2021-01-08.

Vaccine datasets:

  • 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
  • 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
  • 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
  • 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.

https://www.quebec.ca/en/health/health-issues/a-z/2019-coronavirus/situation-coronavirus-in-quebec/#c47900

Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.

SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, no case time series using the new boundaries exists for dates earlier than August 4, making it impossible to provide a complete time series using the new boundaries.

For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.


---

#### Sunday 2021-03-07 03:23:10 by NewsTools

Created Text For URL [sundiatapost.com/lady-laments-as-her-boyfriend-impregnates-girl-who-joined-them-for-threesome/]


---

#### Sunday 2021-03-07 03:49:56 by Magical Marvelous MADMADMAD Mister Mim !

Re-added goddamn loaders that work because well god forbid they not drag me back and forth between whats left of the real world and this fucking insane mess THEY created.


---

#### Sunday 2021-03-07 04:25:27 by Josh McSavaney

Make nix-build args copy-pastable via set -x

A reproduce script includes a logline that may resemble:

```
using these flags: --arg nixpkgs { outPath = /tmp/build-137689173/nixpkgs/source; rev = "fdc872fa200a32456f12cc849d33b1fdbd6a933c"; shortRev = "fdc872f"; revCount = 273100; } -I nixpkgs=/tmp/build-137689173/nixpkgs/source --arg officialRelease false --option extra-binary-caches https://hydra.nixos.org/ --option system x86_64-linux /tmp/build-137689173/nixpkgs/source/pkgs/top-level/release.nix -A
```

These are passed along to nix-build and that's fine and dandy, but you can't just copy-paste this as is, as the {} introduces a syntax error and the value accompanying -A is ''.

A very naive approach is to just printf "%q" the individual args, which makes them safe to copy-paste. Unfortunately, this looks awful due to the liberal usage of backslashes:

```
$ printf "%q" '{ outPath = /tmp/build-137689173/nixpkgs/source; rev = "fdc872fa200a32456f12cc849d33b1fdbd6a933c"; shortRev = "fdc872f"; revCount = 273100; }'
\{\ outPath\ =\ /tmp/build-137689173/nixpkgs/source\;\ rev\ =\ \"fdc872fa200a32456f12cc849d33b1fdbd6a933c\"\;\ shortRev\ =\ \"fdc872f\"\;\ revCount\ =\ 273100\;\ \}
```

Alternatively, if we just use set -x before we execute nix-build, we'll get the whole invocation in a friendly, copy-pastable format that nicely displays {}-enclosed content and preserves the empty arg following -A:

```
running nix-build...
using this invocation: 
+ nix-build --arg nixpkgs '{ outPath = /tmp/build-138165173/nixpkgs/source; rev = "e0e4484f2c028d2269f5ebad0660a51bbe46caa4"; shortRev = "e0e4484"; revCount = 274008; }' -I nixpkgs=/tmp/build-138165173/nixpkgs/source --arg officialRelease false --option extra-binary-caches https://hydra.nixos.org/ --option system x86_64-linux /tmp/build-138165173/nixpkgs/source/pkgs/top-level/release.nix -A ''
```
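
Outside of bash, the same minimal-quoting behaviour that makes the xtrace output copy-pastable is available elsewhere; for instance, Python's shlex quotes each argument only when necessary. A small sketch (not part of the reproduce script):

```python
# Sketch: shlex.join quotes arguments only when needed, much like the
# `set -x` xtrace output above, so the result is copy-pastable as-is.
import shlex

args = [
    "nix-build",
    "--arg", "nixpkgs",
    '{ outPath = /tmp/build-137689173/nixpkgs/source; shortRev = "fdc872f"; }',
    "--option", "system", "x86_64-linux",
    "-A", "",
]
print(shlex.join(args))
# nix-build --arg nixpkgs '{ outPath = ...; shortRev = "fdc872f"; }' ... -A ''
```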

---

#### Sunday 2021-03-07 05:34:16 by Yaroslav Furman

power: supply: force disable frame pointers and optimize for size

Holy fucking shit this is so retarded, it doesn't boot with frame pointers.

Signed-off-by: Yaroslav Furman <yaro330@gmail.com>


---

#### Sunday 2021-03-07 05:41:51 by Terra

I hate my life

Signed-off-by: Terra <terra@mcterra.id.au>


---

#### Sunday 2021-03-07 05:58:44 by gndedesmu

Update README.md

MSDS 6306: Doing Data Science - Case Study 01 - Spring 2021

Group Members

  • Sowmya Mani
  • Ndede, Migot

Introduction

Using the Beer and Breweries data files provided by Budweiser, our group was able to uncover information that could be beneficial for Budweiser if they are considering entering the craft beer market in the United States.

The information we focused on was breweries by State, beer styles, container sizes, Alcohol by Volume (ABV), and International Bitterness Units (IBU).

The data provided for this analysis consists of 2,410 US craft beers from 558 US breweries. The analysis that follows describes the current craft beer market.

The purpose of this project is to provide descriptive information about the current craft beer market in the United States.

The Data

The Beers dataset contains a list of 2,410 US craft beers and the Breweries dataset contains 558 US breweries. The dataset descriptions are as follows.

Beers.csv:

  • Name: Name of the beer.
  • Beer ID: Unique identifier of the beer.
  • ABV: Alcohol by volume of the beer.
  • IBU: International Bitterness Units of the beer.
  • Brewery ID: Brewery ID associated with the beer.
  • Style: Style of the beer.
  • Ounces: Ounces of beer.

Breweries.csv:

  • Brew ID: Unique identifier of the brewery.
  • Name: Name of the brewery.
  • City: City where the brewery is located.
  • State: State where the brewery is located.

File Information

  • -files contains the reports generated when conducting the EDA.

    • report.html: report generated by the DataExplorer library. The report contains Basic Statistics, Data Structure, Missing Data Profile, Univariate Distribution plots, Correlation Analysis, and Principal Component Analysis.
    • report_files: supporting files for the report.html report.
  • The files below contain the data used in this study.

    • Beers.csv: list of 2,410 craft beers and their attributes.
    • Breweries.csv: list of 558 breweries within the United States and their attributes.
    • Beer_Merg.csv: the final merged dataset combining the Brewery and Beer datasets.
  • files contains all the background information important for understanding the purpose of the study. This folder contains two files.

    • CaseStudy01Sp2020_2.docx: the case study document that outlines the questions to be answered by this study.
    • CaseStudy1 Rubric.docx: the requirements document describing what the study must contain upon submission.
  • `` this folder contains the files required for the group project final submission.

    • US craft beers and Breweries - Case Study 01.pptx.pptx: the final presentation used to share the findings of this study.
  • Beers_Project_Case_Study.rmd: the R Markdown file with the code and analysis for this study.

  • Beers_Project_Case_Study.html: the knitted HTML version of the R Markdown file.

  • DDS6306-Group-Project-1.Rproj: the R project file for the Case Study 01 project.

  • README.md: the repo file and folder definitions, and the answers to the analysis questions.

Analysis Questions

  1. How many breweries are present in each state? The top 5 states account for 31.3% of the breweries in the US.

Colorado has 47 breweries, 8 more than second-place California with 39. Within Colorado, most of the breweries cluster around two major cities, Denver and Boulder. In Boulder, one company, Avery Brewing, alone holds nearly three times the average number of Colorado breweries, and many of these breweries offer free samples just for stepping inside. What I found most interesting is why CO has the largest number of breweries: Colorado's water is extremely well suited for brewing great-tasting beer. With just slight mineral adjustment, simple filtration, and very minimal treatment, it produces some of the best beer in the whole country, which means that with very little investment you can produce best-in-class beers in wide variety. The regular water alone has 80-90% of the water chemistry required to make beer, which is fascinating. So what is in CO water that makes it the best for brewing? It has the perfect mix of magnesium, sodium, sulfates, bicarbonates, and calcium, which determines the water's hardness and ultimately its suitability for brewing and great taste. Water chemistry is hence a major reason why Colorado has so many breweries excelling in different styles of beer from around the world; Lone Tree won a gold medal at the 2017 Great American Beer Festival.

  2. Merge beer data with the breweries data. Print the first six observations and the last six observations to check the merged file.

We merged the two datasets into one named 'mergeDF'; the primary key used is 'Brewery_id' from the Beers dataset and 'Brew_ID' from the Breweries dataset. We also renamed the two columns for clarity. The first and last six observations were shown with the head/tail commands.

  3. Address the missing values in each column.

There are 62 observations where both ABV and IBU have NA values, and 943 observations where only IBU is NA. We replaced the 62 NAs in ABV with the state-level median value. Replacing all 1,005 NAs in IBU with the state-level median led to an 18% reduction in the accuracy of the correlation model, so predicted values from a simple linear regression model were used instead to replace the missing IBU values and improve the accuracy of the model.
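
A minimal sketch of this imputation scheme (the study itself was done in R; the dataframe and column names below are assumptions):

```python
# Sketch of the imputation described above: state-level medians for ABV,
# regression-predicted values for IBU. Column names are assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("Beer_Merg.csv")  # the merged Beers + Breweries dataset

# Replace missing ABV with each state's median ABV.
df["ABV"] = df.groupby("State")["ABV"].transform(lambda s: s.fillna(s.median()))

# Replace missing IBU with predictions from a simple ABV -> IBU regression
# fitted on the complete cases.
known = df.dropna(subset=["IBU"])
model = LinearRegression().fit(known[["ABV"]], known["IBU"])
missing = df["IBU"].isna()
df.loc[missing, "IBU"] = model.predict(df.loc[missing, ["ABV"]])
```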

  4. Compute the median alcohol content and international bitterness unit for each state. Plot a bar chart to compare.

Median ABV and IBU data have been sorted and plotted. The median ABV per state appears fairly consistent, with an overall ABV median of 0.056; Nevada falls in the middle with a median ABV of 0.062 and Utah is at the bottom with 0.04. The median IBU per state changes considerably between states, with an overall IBU median of 37; West Virginia falls in the middle with a median value of 57.5 and Kansas is at the bottom with a value of 22. Arkansas and Utah are tied for the lowest ABV at 4.0%, and Maine has the highest ABV at 6.7%. Wisconsin has the lowest IBU at 19; Maine has the highest IBU at 61.

  5. Which state has the maximum alcoholic (ABV) beer? Which state has the most bitter (IBU) beer?

Maximum ABV and IBU data have been sorted and plotted. The maximum ABV by state shows only a small variance: Colorado has the maximum ABV at 0.128 and Delaware has the lowest at 0.055. The maximum IBU per state varies between states, with an overall median of all max values at 92.82. Oregon has the highest max IBU at 138 and Arkansas has the lowest at 44.11.

  6. Comment on the summary statistics and distribution of the ABV variable.

The ABV distribution clearly illustrates that the IPA variety has more alcohol by volume than other varieties. This holds across all can sizes with one exception (can size = 19.2) in the “other” type. IPA is the predominant style when ABV exceeds 0.06, which again indicates that IPA beers tend to have higher alcohol content than other varieties.

  7. Is there an apparent relationship between the bitterness of the beer and its alcoholic content? Draw a scatter plot. Make your best judgment of a relationship and EXPLAIN your answer.

The point plot below illustrates what appears to be a positive linear relationship between ABV and IBU. Adding a trend line to the point plot confirms the presence of a positive linear relationship. The correlation between IBU and ABV is 0.757878, with a p-value < 2e-16; multiple R-squared is 0.5744, meaning that about 57% of the variation in the response variable IBU is explained by the explanatory variable ABV. The GGpairs plot below shows a strong correlation between ABV and IBU across all styles of beers. The strongest correlation is in the “other” beer style category at 0.787, followed by IPA at 0.689.
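
These numbers are internally consistent: in a simple linear regression, R-squared is the square of the correlation, and 0.757878² ≈ 0.5744. A quick check, continuing the sketch above:

```python
# Verify the reported statistics on the imputed dataframe from the
# earlier sketch (the values in comments are the reported ones).
from scipy import stats

r, p = stats.pearsonr(df["ABV"], df["IBU"])
print(f"correlation = {r:.6f} (reported: 0.757878), p = {p:.1g}")
print(f"R-squared   = {r ** 2:.4f} (reported: 0.5744)")
```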

  8. Budweiser would also like to investigate the difference with respect to IBU and ABV between IPAs (India Pale Ales) and other types of Ale (any beer with “Ale” in its name other than IPA). KNN classification is used to investigate this relationship. Provide statistical evidence one way or the other. The Budweiser audience is comfortable with percentages, and KNN is very easy to understand conceptually.

In addition, while you have decided to use KNN to investigate this relationship (KNN is required), you may also feel free to supplement your response to this question with any other methods or techniques. Creativity and alternative solutions are always encouraged.

Our KNN model predicts the outcome with an accuracy of 83%, which is pretty good. The K value was optimized to 3 based on 100 iterative runs.
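
A minimal sketch of that tuning loop (the study itself was in R; the features, labels, split size, and k range below are assumptions):

```python
# Sketch: average test accuracy over 100 train/test splits for each k,
# then pick the best k. The study reports k = 3 at ~83% accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X = df[["ABV", "IBU"]].to_numpy()
y = df["Style"].str.contains("IPA", na=False).to_numpy()  # IPA vs other Ales

ks = range(1, 31)
acc = np.zeros(len(ks))
for seed in range(100):  # 100 iterative runs
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    for i, k in enumerate(ks):
        acc[i] += KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).score(X_te, y_te)

best = int(np.argmax(acc))
print(f"best k = {ks[best]}, mean accuracy = {acc[best] / 100:.2f}")
```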

  9. Knock their socks off! Find one other useful inference from the data that you feel Budweiser may be able to find value in. You must convince them why it is important and back up your conviction with appropriate statistical evidence.

This information is useful in that it can be used to target beer consumers based on their consumption habits (ABV and size of the beer) and location. By visualizing the ounces of beer containers and cross-referencing this data with ABV, we are able to see a pattern (most beers are 12 and 16 ounces). The pattern can then be laid over states to show which states may act as the largest consumers of beer products. Lighter blue circles that sit higher on the graph show high alcohol content consumption. We can see the trend that the most consumed beer sizes are 12 and 16 ounces.

Conclusions

The primary objective of our project is to take two different datasets, a Beers dataset containing a list of 2,410 US craft beers and a Breweries dataset containing 558 breweries, and perform analysis. We transformed the data from CSV format into data frames using R, inspected and analyzed the structure of the data, merged the two data frames, and performed analysis on the merged final dataset.

As data scientists, it is very rare that we get to work on a single, perfect dataset. A large percentage of the work is to accept different datasets and merge them into one final data frame before processing, as illustrated in our project. After analyzing the data, statistical inferences are then made.

The objective of our project is to provide some valuable information, such as a summary of the two datasets, the relationship between International Bitterness Units (IBU) and Alcohol by Volume (ABV), the number of breweries in each state and why it can be important to Budweiser, and the median and max IBU and ABV.

Based on our team’s analysis, we found that the majority of breweries appear to be clustered west of the Mississippi river, with California and Colorado being the top two states. Why does CO in particular have the highest number of breweries? Water is the secret ingredient making CO the top state for breweries and a 2017 gold-medal winner. The water alone contains about 80-90% of the water chemistry needed to make the best beers. CO water has the perfect mix of magnesium, sodium, sulfates, bicarbonates, and calcium, which determines the hardness of water and ultimately its suitability for brewing great-tasting beers. Hence, we believe that Budweiser should consider CO in their next project.

Another state of interest is New Hampshire, which we believe has high potential for growth: it has the highest beer consumption due to having no beer sales tax, but does not have many breweries. Utah, on the other hand, has the lowest beer consumption and only allows ABV < 4%, which could be due to it being a Mormon state. We were surprised that Nevada, home of Las Vegas, has only two breweries. North and South Dakota are among the lowest in number of breweries but among the highest in per-capita beer consumption.

Our computation of the median IBU and ABV for each state shows that median IBU varies considerably between the states; West Virginia falls in the middle with a median IBU of 37 and Arkansas at the bottom with 7.8. On the other hand, median ABV per state appears somewhat consistent, with Nevada in the middle with a median ABV of 0.0669, which makes some sense, because people who visit Las Vegas usually like to get drunk, gamble, and enjoy their time. Utah, again being the lowest beer consumer, also has the lowest ABV at 0.051. In terms of max ABV, all states show only a small variance: Colorado has the max ABV of 0.128 and Delaware the lowest at 0.055.

IBU stands for International Bitterness Units, a measure of the bitterness in beer. There is a saying "the spicier the better," and the same applies to beers: the more bitter the beer, the better the taste. Bitterness in beer terms means better flavor, produced by adding aroma hop flowers. According to CNBC news, the reason for the rise in craft beer sales is the high IBU among home-brewing companies growing from garages into thriving commercial companies. Our analysis shows that max IBU varies between the states, ranging from 138 to 44.11, with Oregon being the highest (138) and Arkansas the lowest (44.11). The reason we believe Oregon has the highest IBU is that Oregon is known for homemade craft beer, and brewers there have also added high-grade marijuana to their craft beers. However, under a new law that took effect in January 2020, they are avoiding THC and CBD in their beers due to health concerns. Arkansas stands at the bottom for IBU.

In a nutshell, our analysis of the relationship between ABV and IBU shows a somewhat positive linear relationship: in general, as ABV rises, so does the bitterness. The IPA type has the highest ABV compared to other types, and the higher the ounces, the higher the ABV.

Across the states of the US:

  • Ale is brewed more than IPA in major breweries; volume brewed: Ale > IPA.
  • 12-ounce beer is brewed more than 24- and 32-ounce.
  • Ale is sold more than IPA in 12-ounce containers, and IPA constitutes more than Ale in 16-ounce containers.
  • Colorado and California are the major brewing states for Ale and IPA across the US.
  • DC and Delaware are the states with the fewest breweries for Ale and IPA.
  • Mean IBU: IPA = 70 and Ale = 34. Mean ABV: IPA = 6.8 and Ale = 5.6.

Inference Analysis

  • As this is an observational study, there is no cause and effect, only association, in the conclusions.
  • We cannot conclude that IPA is a more sought-out beer than Ale from the mean IBU and ABV values (p-value < 0.0005 from the KNN and NB tests).
  • In conclusion, the association between IBU and ABV can be generalized to identify the IPA and Ale beer styles in the selected area of study in the US, but cannot be generalized to other countries across the world.
  • Though more Ale is produced than IPA, we have enough evidence to suggest that the market for IPA is increasing and there is potential for business across the states of the US.
  • Based on the 5-year trend, we have enough evidence to suggest that though the count of breweries is low in Montana, North Dakota, and South Dakota, the consumption rate there is among the highest, especially for Ale.

Recommendations

  • Beer is the most popular alcoholic beverage in the US, though demand is not spread equally.
  • Utah has a low level of beer consumption for a couple of reasons. Despite these barriers, Utah’s beer consumption has grown by 2.8%, which makes it a great potential new market.
  • New Hampshire is another outlier. The state has no sales tax, and it is estimated that over 50% of the state’s alcohol sales are to out-of-state visitors; it holds great potential for a new Ale business.
  • Oregon has been at the forefront of the craft beer revolution sweeping the country. Portland alone has over 100 craft brewers and nearly double-digit growth in the past five years. In states like Oregon and Washington, demand shows no sign of slowing down, making them great markets for introducing new flavors, especially Ale.
  • Montana, South Dakota, and North Dakota are states with few breweries but among the highest consumption, and they are a great potential new market.


---

#### Sunday 2021-03-07 06:56:59 by Andrew Clark

[Experiment] Lazily propagate context changes (#20890)

  • Move context comparison to consumer

In the lazy context implementation, not all context changes are propagated from the provider, so we can't rely on the propagation alone to mark the consumer as dirty. The consumer needs to compare to the previous value, like we do for props and state.

I added a memoizedValue field to the context dependency type. Then in the consumer, we iterate over the current dependencies to see if something changed. We only do this iteration after props and state have already bailed out, so it's a relatively uncommon path, except at the root of a changed subtree. Alternatively, we could move these comparisons into readContext, but that's a much hotter path, so I think this is an appropriate trade off.
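
A toy model of that consumer-side check, in Python purely for illustration (this is not React's source, and all names are invented): each context dependency remembers the value it last rendered with, and a consumer that is about to bail out compares each dependency's current value against that memoized one.

```python
# Toy model, not React source: a fiber's context dependencies each carry
# the memoized value from the previous render.
from dataclasses import dataclass, field

@dataclass
class Context:
    current_value: object = None

@dataclass
class ContextDependency:
    context: Context
    memoized_value: object  # value observed at the previous render

@dataclass
class Fiber:
    dependencies: list = field(default_factory=list)

def context_changed(fiber: Fiber) -> bool:
    """Run only after the props/state comparisons have already bailed out."""
    return any(
        dep.context.current_value != dep.memoized_value
        for dep in fiber.dependencies
    )
```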

  • [Experiment] Lazily propagate context changes

When a context provider changes, we scan the tree for matching consumers and mark them as dirty so that we know they have pending work. This prevents us from bailing out if, say, an intermediate wrapper is memoized.

Currently, we propagate these changes eagerly, at the provider.

However, in many cases, we would have ended up visiting the consumer nodes anyway, as part of the normal render traversal, because there's no memoized node in between that bails out.

We can save CPU cycles by propagating changes only when we hit a memoized component — so, instead of propagating eagerly at the provider, we propagate lazily if or when something bails out.

Most of our bailout logic is centralized in bailoutOnAlreadyFinishedWork, so this ended up being not that difficult to implement correctly.

There are some exceptions: Suspense and Offscreen. Those are special because they sometimes defer the rendering of their children to a completely separate render cycle. In those cases, we must take extra care to propagate all the context changes, not just the first one.

I'm pleasantly surprised at how little I needed to change in this initial implementation. I was worried I'd have to use the reconciler fork, but I ended up being able to wrap all my changes in a regular feature flag. So, we could run an experiment in parallel to our other ones.

I do consider this a risky rollout overall because of the potential for subtle semantic deviations. However, the model is simple enough that I don't expect us to have trouble fixing regressions if or when they arise during internal dogfooding.


This is largely based on RFC#118, by @gnoff. I did deviate in some of the implementation details, though.

The main one is how I chose to track context changes. Instead of storing a dirty flag on the stack, I added a memoizedValue field to the context dependency object. Then, to check if something has changed, the consumer compares the new context value to the old (memoized) one.

This is necessary because of Suspense and Offscreen — those components defer work from one render into a later one. When the subtree continues rendering, the stack from the previous render is no longer available. But the memoized values on the dependencies list are. This requires a bit more work when a consumer bails out, but nothing considerable, and there are ways we could optimize it even further. Conceptually, this model is really appealing, since it matches how our other features "reactively" detect changes — useMemo, useEffect, getDerivedStateFromProps, the built-in cache, and so on.

I also intentionally dropped support for unstable_calculateChangedBits. We're planning to remove this API anyway before the next major release, in favor of context selectors. It's an unstable feature that we never advertised; I don't think it's seen much adoption.

Co-Authored-By: Josh Story <jcs.gnoff@gmail.com>

  • Propagate all contexts in single pass

Instead of propagating the tree once per changed context, we can check all the contexts in a single propagation. This inverts the two loops so that the faster loop (O(numberOfContexts)) is inside the more expensive loop (O(numberOfFibers * avgContextDepsPerFiber)).

This adds a bit of overhead to the case where only a single context changes because you have to unwrap the context from the array. I'm also unsure if this will hurt cache locality.

Co-Authored-By: Josh Story <jcs.gnoff@gmail.com>
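
The inverted loops described above can be sketched as follows (again a toy model using the classes from the earlier sketch, not React's implementation):

```python
# Toy model of single-pass propagation: the cheap O(numberOfContexts)
# scan is the innermost loop; the expensive
# O(numberOfFibers * avgContextDepsPerFiber) walk stays on the outside.
def propagate_changes(fibers, changed_contexts):
    """changed_contexts: list of (context, new_value) pairs."""
    dirty = []
    for fiber in fibers:                  # one traversal of the whole tree
        for dep in fiber.dependencies:
            if any(dep.context is ctx and dep.memoized_value != value
                   for ctx, value in changed_contexts):
                dirty.append(fiber)
                break                     # fiber already marked; next fiber
    return dirty
```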

  • Stop propagating at nearest dependency match

Because we now propagate all context providers in a single traversal, we can defer context propagation to a subtree without losing information about which context providers we're deferring — it's all of them.

Theoretically, this is a big optimization because it means we'll never propagate to any tree that has work scheduled on it, nor will we ever propagate the same tree twice.

There's an awkward case related to bailing out of the siblings of a context consumer. Because those siblings don't bail out until after they've already entered the begin phase, we have to do extra work to make sure they don't unnecessarily propagate context again. We could avoid this by adding an earlier bailout for sibling nodes, something we've discussed in the past. We should consider this during the next refactor of the fiber tree structure.

Co-Authored-By: Josh Story <jcs.gnoff@gmail.com>

  • Mark trees that need propagation in readContext

Instead of storing matched context consumers in a Set, we can mark when a consumer receives an update inside readContext.

I hesitated to put anything in this function because it's such a hot path, but so are bailouts. Fortunately, we only need to set this flag once, the first time a context is read. So I think it's a reasonable trade off.

In exchange, propagation is faster because we no longer need to accumulate a Set of matched consumers, and fiber bailouts are faster because we don't need to consult that Set. And the code is simpler.

Co-authored-by: Josh Story <jcs.gnoff@gmail.com>


---

#### Sunday 2021-03-07 07:07:38 by CAT

embed kick and ban cmds

made the ugly ass kick and ban success messages embeded so they dont look as shit


---

#### Sunday 2021-03-07 09:11:37 by Marco Pivetta

#213 implemented Assert::positiveInteger() to have psalm positive-int assertions (#214)

  • Implemented #213: Assert::positiveInteger() to have psalm positive-int assertions

  • #213 use static::valueToString() instead of static::typeToString() for Assert::positiveInteger() failures

Ref: webmozarts/assert#214 (comment)

  • Added Assert::naturalNumber to verify that a number is positive-int|0

  • Upgraded to latest vimeo/psalm

  • As suggested by @muglug, dropping @mixin and importing a trait as Assert type inference mechanism

Ref: https://symfony-devs.slack.com/archives/C8SFXTD2M/p1611064766060600

Quoting:

ocramius  2:59 PM
can anyone help me tackle down https://github.com/vimeo/psalm/issues/4381 ?
Mostly trying to figure out whether the assertions are being read differently for @mixin and concrete instances :neutral_face:

muglug  3:18 PM
mixins are a massive hack, both whenever they're used but also in Psalm
anything that relies on magic methods to work is going to be more flaky, so I'd encourage you to think of a non-mixin approach if at all possible

ocramius  3:19 PM
yeah, I'm thinking of doing that indeed, just unsure about approach yet :neutral_face:
can't they somehow be unified with traits though?
they're the same thing, no?

muglug  3:22 PM
if they were, you'd just use a trait!

ocramius  3:22 PM
ack, gonna actually try that approach for webmozart/assert then :slightly_smiling_face:

muglug  3:24 PM
with mixins the method that's actually being called is either __call or __callStatic, whereas with traits it's still that exact method

ocramius  3:24 PM
yes, I was just wondering if it could import the method as if it was a trait (at type level)
that would squash out *a lot* of code, but it's also true that a mixin does not have a trait definition
I think it makes sense to use the language feature for this :slightly_smiling_face:

The @mixin support in vimeo/psalm is a massive hack, which also requires other tooling to take on increased complexity in its static analysis parsing capabilities. In order to reduce that complexity, we instead rely on Assert importing Mixin as a trait, which is much simpler and much more stable long-term.

While this indeed leads to Mixin being changed from an interface to a trait, that is not really a BC break, as Mixin was explicitly documented to be used only as a type system aid, and not as a usable symbol.

  • Removed Assert::naturalNumber(), since we already have Assert::natural() doing the same thing

Note: we refined the assertion on Assert::natural() to correctly restrict to positive-int|0 (a minimal sketch of the runtime check follows at the end of this entry), but we had to remove a minor test on Assert::allNaturalNumber() due to @psalm-assert iterable<positive-int|0> not yet working upstream.

See vimeo/psalm#5052

  • Adjusted mixin generator to comply with current coding standards

  • Upgraded vimeo/psalm to 4.6.1 to get a green build (note: 4.6.2 broken as per vimeo/psalm#5310)

While there is no guarantee that vimeo/psalm:4.6.3 will fix the issue, we know for sure that vimeo/psalm:4.6.2 is broken, as per vimeo/psalm#5310, and we therefore cannot rely on its static analysis until the upstream issue is fixed.

Meanwhile, this allows for installation of webmozart/assert with vimeo/psalm:4.6.1 and vimeo/psalm:>4.6.2, hoping for a brighter future.
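
As referenced in the note above, here is a minimal analogue of the natural-number runtime check, in Python purely for illustration (webmozart/assert itself is PHP, and the positive-int|0 refinement lives in the Psalm annotations, not in this logic):

```python
# Illustrative analogue of Assert::natural(): accept only integers >= 0
# (Psalm's positive-int|0). Not the actual PHP implementation.
def assert_natural(value: object) -> int:
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(value, int) or isinstance(value, bool) or value < 0:
        raise ValueError(f"Expected a natural number (int >= 0). Got: {value!r}")
    return value

assert_natural(0)    # ok
assert_natural(42)   # ok
# assert_natural(-1) # raises ValueError
```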


---

#### Sunday 2021-03-07 09:18:57 by ATTALA Kheireddine

Update Unity.gitignore

This is just adding the /Temp folder

It is very annoying to work without this addition, as you always need to close Unity before making commits. I was unaware of why that was a thing for the longest time, and it made working with git and Unity quite the pain. Please accept this change to make the life of some fellow git ignorants (or should I say gitignore ignorants) easier.

KH!


---

#### Sunday 2021-03-07 13:59:24 by Gusted

Fix compiling API (#5190)

  • Since TypeScript 4.2, some helpers used for transpiling are deprecated/removed. This includes __spread and __spreadArrays (microsoft/TypeScript#41523).
  • We heavily use the spread syntax in Dark Reader and thus need those helper functions.
  • Due to the upgrade to 4.2, tslib was outdated and didn't have the new helper function __spreadArray.
  • After good hours of skimming through the compiler and using the wrong commands to update tslib (which default to 1.x), then manually updating it to 2.1, which includes these new functions (microsoft/tslib#133, https://github.com/microsoft/tslib/releases/tag/2.1.0), the API can be properly compiled again.
  • Resolves #My personal issues with the API.

Note to myself: now there are 2 versions installed, 2.1.0 and 1.14.1; properly configured setups should default to 2.1.0. But just in case I get into problems with this, I hope I remember this note and don't waste some hours. Why 1.14.1? I don't know. npm --save-dev -E tslib defaults to 1.14.1. It seems like an NPM bug; yarn add --dev -E tslib gives the correct 2.1.0. Damn dependency hell =).


---

#### Sunday 2021-03-07 16:13:42 by John Swinbank

Don't list contributors by name.

Not that we don't love them, but they should be listed on GitHub, and this avoids me having to remember who to list.


---

#### Sunday 2021-03-07 16:22:41 by Tahir Anjum

Time.java

Brother, I removed the package that was pasted alongside your code at the top! Accept the pull request and merge! Use proper wording for variables: "hr" should be "hours", and to handle the duplicate use of hours, use "this" to adjust; it manages the scope. You can consult Sir Mukhtar, or else you can search it on YouTube etc. Don't think that I stole your code for the assignment, because I am a pro programmer and I know how to do things. I just corrected one thing in your code that I mentioned above. And one more thing: stay cool! Stay blessed! May God Almighty strengthen you in your skills and guide you to the right path. THE most important thing is that I want to see you as CEO of your own company that would compete with the world's best companies.
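
For illustration, the suggested rename sketched in Python (the original file is Java, where this.hours = hours disambiguates the field from the constructor parameter the same way self.hours does below; the class shape is assumed):

```python
# Sketch of the suggested fix: "hr" renamed to "hours", with the
# duplicate name handled through self (Java's `this`).
class Time:
    def __init__(self, hours: int, minutes: int):
        self.hours = hours      # field and parameter share a name; self manages the scope
        self.minutes = minutes
```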


---

#### Sunday 2021-03-07 16:42:04 by Aeden

JPN - Ore no Nounai Sentakushi ga Gakuen Love Comedy wo Zenryoku de Jama Shiteiru - ED - Taiyou to Tsuki no Cross


---

#### Sunday 2021-03-07 16:51:19 by Marko Grdinić

"2pm. Done with breakfast. Let me chill a bit and then I'll do the chores.

3:10pm. Done with chores. Let me rest a bit and then I will resume. The morning session was too intense so I need more time off.

3:40pm. Let me resume.

I did want to resume earlier, but given how early I got up, and how intensely I am focused on this task, my mind is in a daze. It feels like I have a fever from all this thinking.

3:45pm. I have a strong feeling of inspiration.

4:15pm.

```fs
type [<ReferenceEquality>] TypecheckerState = {
    package_id : PackageId
    module_id : ModuleId
    top_env : TopEnv Promise
    results : (Bundle * InferResult * TopEnv) Stream
    bundle : BlockBundleState
    }
```

Hmmm, to get the actual top env, what I have to do is use the top_env as the neutral element and fold over the results, getting the third item.

```fs
let top_env_last (x : TypecheckerState) = x.top_env >>=* fun s -> Stream.foldFun (fun _ (_,_,x) -> x) s x.results
```

Also to make compilation less annoying, I should in fact...

No nevermind. I'll have to go over all the packages anyway during prepass compilation.

This caught my attention for the reason that making the BuildFile command work took a hack.

4:45pm. I should be writing code, but I am not. It is not like I am blocked. It is not that I do not know what I need to do. It is that the insights and the inspiration are coming in at too rapid of a clip and I am trying to digest them properly.

5:25pm. Done with lunch.

I am actually quite tired after thinking at full tilt for so long. I got up significantly earlier than usual too. I don't think I'll be doing any more programming for the day.

At any rate, I think I understand it all. The entire editor support segment. Last time I could only hack it piece by piece, but now I have the entire structure internalized as one clean mental model.

This time I am absolutely sure that I will be able to design it so it is robust.

Let me do just a little in anticipation of what is to come.

```fs
type PackageFiles =
    | File of module_id: int * path : string * name: string option
    | Directory of dir_id: int * path : string * name: string * PackageFiles list
```

This. Paths will definitely make the reverse updates easier.

```fs
let proj_files_diff (uids_file : ('a * 'b) [], uids_directory : 'b [], files) (num_dirs, uids, files') =
    let uids_file' = Array.zeroCreate (Array.length uids)
    let uids_directory' = Array.zeroCreate num_dirs
    let rec loop = function
        | File(uid,path,name), File(uid',path',name') when uid = uid' && name = name' && path = path' && uids.[uid] = fst uids_file.[uid] ->
            uids_file'.[uid] <- uids_file.[uid]; true
        | Directory(uid,path,name,l), Directory(uid',path',name',l') when uid = uid' && name = name' && path = path' && list (l,l') ->
            uids_directory'.[uid] <- uids_directory.[uid]; true
```

It did not require much modification.

5:30pm. I definitely made the wrong choice with streams.

Instead the right answer was to make the editor support one huge immutable data structure. This is quite amazing. I finally have the feeling for how this kind of machinery works.

Last time I toughed it out, but now I can finally visualize the whole thing as one big whole. I can zoom in and inspect what every piece does in my mind. And according to that, I can verify the correctness of the system.

I am not saying that once I code it all up, that it will work and be without bugs. But if a bug happens, you can bet that I will be able to track it down and fix it right away.

5:35pm. I am really tempted to do a bit more right now and code up at least the inter package updates, but I should leave that for later and take it easy for the rest of the day. A leisurely pace is the best. One day you think hard, the next day you code hard. That is what programming is. It is hard to switch gears to code writing mode.

Instead of doing it forcefully, by tomorrow this surge of thoughts will peter out and solidify as it cools down.

I know the whole system and I am already thinking ahead, but tomorrow I will just play it safe and finish the clean stuff. I only have two functions left before I am done with typechecking. After that comes the prepass. After that I'll move to the .spiproj parser.

There is no need to rush this. Now that I know how to do things, why not give it a few weeks? Forget the time.

After I implement the new design, the editor support won't have any new features, but it should be much more robust. I'll never have trouble extending it again or fixing what minor bugs are left in it.

Not only will Spiral be very much complete, in the sense of now having all the features that I wanted and of being thoroughly battle tested, but I'll also gain an understanding of this technique. I get the sense that it will be pretty useful in my future endeavors. It meshes particularly well with concurrency, but it would have value without it.

Keeping track of state and managing complexity will be my main challenge as I start taking ML seriously again. I'll probably find an opportunity to make use of this design in my ML work.

I'll fuse my understanding of compiler engineering and ML. That is my main goal for 2021.

I'll refine Spiral and then consolidate all of my experiences.

I can do this. My 2021 ML attempt won't go the same way as the 2018 one. Just because one blow did not topple my opponent does not mean that a much stronger strike wouldn't."


---

#### Sunday 2021-03-07 18:52:24 by via8

ultradeathhardmetalcoremothufuckingcocksuckingannual update presenting absolutely new pytux version newer seen such programming skill (shit) before, you got no chance to beat me, this is my code, this is my home, these are my homies, this is my masterpiece, prepare your anuses, get ready to see this: add label start (AUTOMATICALLY YOLO)


---

#### Sunday 2021-03-07 19:34:30 by Marko Grdinić

"```fs type PackageFilesTree = | File of module_id: int * path : string * name: string option | Directory of dir_id: int * path : string * name: string * PackageFilesTree list

type PackageFiles = { tree : PackageFilesTree list; num_dirs : int; num_modules : int }


Did a few edits in preparation for tomorrow. Having the relevant info be included up front will be of help down the road.

Knowing the number of directories and modules up front will make it a lot easier to make the input array for updating the project files. I am going to give this all the care it deserves. This is why the slow and steady approach of building up the motivation and letting the inspiration come to you works.

I will savor this experience of March. When it is over, I am going to get back to work on ML. I won't be impatient to get back to ML or work on UIs.

After I deal with editor support and Spiral is back in an operational state, I am going to take some time to collect the contact info of various AI companies and fire off messages to a dozen of them. If nothing comes of that, then I will wash my hands of that affair. It is not core to my plan anyway."

---
## [rileyjshaw/rileyjshaw.github.io](https://github.com/rileyjshaw/rileyjshaw.github.io)@[1c9ceef36f...](https://github.com/rileyjshaw/rileyjshaw.github.io/commit/1c9ceef36f4fef69fd3a20b65414de774f3bf3e1)
#### Sunday 2021-03-07 19:58:13 by Riley Shaw

Feature: add a dark theme

Dark themes are something I’ve sought since my early days on a computer. I remember finding a way to apply a black background to Microsoft Word 2003, then being horrified when I accidentally printed the document. In uni I scripted my desktop to boot into [f.lux](https://justgetflux.com/)’s darkroom mode. One of my only lasting contributions to [Signal](https://rileyjshaw.com/blog/the-pool-on-the-roof-must-have-a-leak) was loudly advocating for dark mode on iOS and Desktop, not just Android. I even [made a browser extension in 2015](https://chrome.google.com/webstore/detail/dark-theme-everywhere/nibjplbnmmklkfnkpecgbffkifmdbjed) to apply a dark theme to the entire Internet.

It’s not the most pressing or exciting work, but I decided my site should have a dark theme of its own. I took a lot of inspiration from my former Khan Academy teammate [Josh W. Comeau’s dark mode post](https://www.joshwcomeau.com/react/dark-mode/), with a few differences.

UI-wise, I decided to go a different route than a binary toggle between dark and light. There were three motivations behind this. From most to least important:

- I wanted to include a “system” option. Theming is still a fresh concept for most web users, so I think it’s worthwhile to expose a way to control theme, motion, brightness etc from within the webpage. But [there should _always_ be a simple way to revert back to your system’s display preferences](https://kilianvalkhof.com/2020/design/your-dark-mode-toggle-is-broken).
- I may add more options later (greyscale? low-res? melty??) and this leaves my options open and unlimited.
- I liked the animated eyeball idea.

Speaking of the animated eyeball… I’m happy with how it turned out! It was quite a deep dive, since I didn’t want to include a heavy animation library like [react-spring](https://www.react-spring.io/) (>25kb) or [Framer Motion](https://www.framer.com/motion/) (>90kb). **Thankfully, the coolest thing I learned while working on this feature is that [SVG has a declarative animation specification called SMIL](https://css-tricks.com/guide-svg-animations-smil/)!** It didn’t take long to pick up, and it’s very easy to work with. I’m really happy to add it to my toolkit.

Animations were the _coolest_ deep dive, but they weren’t the deepest. Back when I started building this website, I split the header into Cyan, Magenta, and Yellow (CMY) channels. They [combined subtractively](https://rileyjshaw.com/blog/your-art-teacher-lied) to make a black-ish color, which I used across the site for text. I removed that feature in late 2019, but recently added it back.
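
Subtractive combination of this kind can be approximated with a per-channel multiply blend; a minimal sketch (an assumption for illustration, since the post doesn't name the exact mix-blend-mode used):

```python
# Approximate subtractive (CMY-style) mixing with a per-channel multiply
# blend. Cyan x magenta gives blue; adding yellow drives it to black.
def multiply_blend(a, b):
    """Blend two RGB colors (0-255 per channel) multiplicatively."""
    return tuple(round(x * y / 255) for x, y in zip(a, b))

cyan, magenta, yellow = (0, 255, 255), (255, 0, 255), (255, 255, 0)
blue = multiply_blend(cyan, magenta)       # (0, 0, 255)
blackish = multiply_blend(blue, yellow)    # (0, 0, 0) where all three overlap
print(blue, blackish)
```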

With a single theme, that’s three combinations for a total of 7 colors (three primaries, their three pairwise blends, and one blend of all three). With two themes… the combinations explode a bit.

I use [mix-blend-modes](https://developer.mozilla.org/en-US/docs/Web/CSS/mix-blend-mode) to combine colors, of which there are limited options. For my initial color palettes, I wanted to pick three pretty primary colors (light: CMY, dark: RGB) that would blend into three pretty secondary colors (light: RGB, dark: CMY), and all blend together into something with enough contrast from the background to use as body text. Tall order! [So I built a tool for that](https://github.com/rileyjshaw/palette-test-tool), and spent a few idle hours tweaking settings until I was happy.

And the initial palettes are… pretty good? I wish I was more thrilled with the result, but there are so many conflicting requirements that it’s impossible to get a result as good as if I’d hand-picked the colors independently. I’ve done an initial a11y pass of the colors, and there are some areas of the site where I want to improve the contrast. So I’ll probably end up with a palette split between the “true” colors, and their slightly modified, more attractive cousins.

I also needed to add a global settings provider in this commit. Now that I’ve done the work for that, look forward to more settings (reduce motion, contrast preference, block embeds… cheat codes??) coming soon!

I made a few minor changes while working on this commit:

- Update a bunch of the site doodles to fit either theme.
- Share theme colors between CSS-land and JS-land. No more mismatches!
- Keep the `<Layout>` component mounted between pages. Transitions and performance will be just _sliiightly_ smoother.
- Clarify the color variables a bit; `--cb` is now `--color-blue`.
- Add theme-dynamic color variables. For instance, `--color-p1` (“primary one”) maps to `--color-red` in dark mode and `--color-cyan` in light mode.
- Update some non-standard colors (eg. scaling faders demo) to fit the new palettes.
- Super simplify state-handling in the site nav (fixes a persistent console error).
- Standardize the handling of `localStorage` and `sessionStorage`.

Hope you enjoy!

---
## [kees-/ads](https://github.com/kees-/ads)@[a77f03e9eb...](https://github.com/kees-/ads/commit/a77f03e9eb5978eaf651ec7cb86f7c0bcd6a7aaa)
#### Sunday 2021-03-07 23:18:05 by kees

On this page we hate 🙅‍♀️ realpath and 💘 love {_:a}

---
## [Zombaxx/MiddleMars](https://github.com/Zombaxx/MiddleMars)@[fe053fdb1e...](https://github.com/Zombaxx/MiddleMars/commit/fe053fdb1e2f60c64d0fc7ede10cbc21d3e25e32)
#### Sunday 2021-03-07 23:50:18 by Zombaxx

Added more Wonders around the map

-Added ability to form the Holy Terran Empire and related bloodlines
-Added ability to mend the Schism as any non-heretic Singularity religion

---

# [<](2021-03-06.md) 2021-03-07 [>](2021-03-08.md)